url
stringlengths 14
2.42k
| text
stringlengths 100
1.02M
| date
stringlengths 19
19
| metadata
stringlengths 1.06k
1.1k
|
---|---|---|---|
http://math.stackexchange.com/questions/80541/finite-flat-algebras-over-noetherian-domains | # Finite flat algebras over Noetherian domains
Let $A$ be a Noetherian domain and $B$ a finite $A$-algebra containing $A$ as a subring. Suppose there is a number $n$ such that for every maximal ideal $\mathfrak{m}$ of $A$, $$\dim_{k(\mathfrak{m})} B \mathbin{\otimes_A} k(\mathfrak{m}) = n$$ where $k(\mathfrak{m}) = A / \mathfrak{m}$. Why is $B$ flat in this case?
It's generally true that a finitely-generated module over a Noetherian ring is flat if and only if it is locally free (in the sense of the stalks being free), and if a finitely-generated module $M$ over a Noetherian domain $A$ is locally free, then its has constant local rank, in the sense that there is a number $n$ such that for all $\mathfrak{p} \in \operatorname{Spec} A$, $$\dim_{k(\mathfrak{p})} M \mathbin{\otimes_A} k(\mathfrak{p}) = n$$ where $k(\mathfrak{p}) = A_\mathfrak{p} / \mathfrak{p} A_\mathfrak{p}$. The converse is true if $A$ is reduced. (See, for example, Hartshorne [Algebraic Geometry, Ch. II, Exercise 5.8]) If we strengthen our hypotheses and demand that $A$ be a Noetherian Jacobson domain (or, at least, a Noetherian domain with trivial Jacobson radical), it is sufficient to assume that only the stalks over maximal ideals have dimension $n$ to prove that $M$ is locally free. But as far as I know Noetherian domains may have non-trivial Jacobson radical, e.g. Noetherian local rings. Am I supposed to use the hypothesis that $A$ is a subring of $B$ to prove the claim?
-
This is not true. Consider the case when $A$ is a local domain (localization at a non-normal point of an algebraic variety) and $B$ its normalization. – user18119 Nov 9 '11 at 17:57
@QiL: Thanks. I was wondering if the question was even correct. If you post more details about your counterexample I will be glad to accept it as an answer. – Zhen Lin Nov 9 '11 at 20:51
Consider the singular curve associated to $k[x,y]$, $y^2=x^3$ ($k$ is any field). Let $\mathfrak m$ be the maximal ideal of $k[x,y]$ generated by $x,y$ and let $A$ be the localisation $k[x,y]_{\mathfrak m}$. The normalization of $k[x,y]$ is $k[t]$ where $t=y/x$ and $x=t^2, y=t^3$. Then $k[t]$ is finite over $k[x,y]$ (generated by $1, t$). So the localization $B:=k[t]\otimes_{k[x,y]} A$ is finite over $A$. Now $B\otimes_A k(\mathfrak m)=k[t]/(t^2, t^3)$ has dimension $2$ over $k=k(\mathfrak m)$, while $B\otimes_A k(\mathfrak p)$, where $\mathfrak p=\{0\}$, has dimension $1$ over $k(\mathfrak p)$. So $B$ is not flat over $A$. | 2016-05-06 07:36:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8944934606552124, "perplexity": 82.8975306379843}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861735203.5/warc/CC-MAIN-20160428164215-00205-ip-10-239-7-51.ec2.internal.warc.gz"} |
http://tungsteno.io/post/def-identity_matrix/ | The identity matrix of order $m$ is defined as the matrix $I_m\in\mathcal{M}_{m\times m}(\mathbb{K})$ having ones in the main diagonal and zeros elsewhere
It may also be written as $I_m=(\delta_{ij})_{ij}$, where
is known as Kronecker's delta | 2019-01-16 15:21:20 | {"extraction_info": {"found_math": true, "script_math_tex": 3, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9431722164154053, "perplexity": 159.21541778230804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583657510.42/warc/CC-MAIN-20190116134421-20190116160421-00079.warc.gz"} |
http://openstudy.com/updates/50d08a72e4b038af01957d3b | ## anonymous 3 years ago Choose the best descriptor for the graph of f(x) = ln(x - 5). a. It increases rapidly, goes through the point (5,1), then increases gradually. b. It increases slowly, goes through the point (5,1), then increases rapidly. c. It increases slowly, goes through the point (6,0), then increases gradually. d. It increases rapidly, goes through the point (6,0), then increases gradually.
1. anonymous
D. To get an intuition: http://fooplot.com/plot/1vfpzakasr Also, at $$x=6$$, we have $$\ln(6-5)=\ln(1)=0$$ since: $$e^0=1$$, and the graph tends down to $$-\infty$$ as $$x\to5$$, since $$\ln(0^+)$$ (Very near zero, on the positive end) tends to negative infinity.
2. anonymous
thank you so much!! | 2016-05-30 05:05:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.851460337638855, "perplexity": 1301.2590420804356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049288709.66/warc/CC-MAIN-20160524002128-00040-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=17&t=33972 | ## when to use which formula
H-Atom ($E_{n}=-\frac{hR}{n^{2}}$)
Samantha Hoegl Roy 2C
Posts: 81
Joined: Fri Sep 28, 2018 12:15 am
### when to use which formula
when would you use En = - hR / n^2 or En = h^2n^2/8mL^2?
ChathuriGunasekera1D
Posts: 78
Joined: Fri Sep 28, 2018 12:17 am
### Re: when to use which formula
I think you use the first equation when finding the energy change from one energy level to another, and the second equation when the question specifically mentions a side length, or something like "box". | 2020-05-29 08:25:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35111257433891296, "perplexity": 2723.7609205057015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347402457.55/warc/CC-MAIN-20200529054758-20200529084758-00059.warc.gz"} |
http://mathhelpforum.com/pre-calculus/135875-help-how-simply-using-laws-exponents-print.html | # Help with how to simply using laws of exponents
• March 26th 2010, 08:56 PM
softwareguy
Help with how to simply using laws of exponents
I don't understand how to simply this:
$\sqrt[8]{256^2}$
The answer is 4 but I don't know how to work out the solution to get to 4.
• March 26th 2010, 09:02 PM
dwsmith
Calculator!
When you have a nth root of a term to the xth power, you can rewrite it in the form of 256^(x/n)=(nth root of 256)^x Now you can simplify the radical and then raise it to the x power.
Your n=8 and x=2 so the notation isn't confusing.
• March 26th 2010, 09:03 PM
Debsta
Quote:
Originally Posted by softwareguy
I don't understand how to simply this:
$\sqrt[8]{256^2}$
The answer is 4 but I don't know how to work out the solution to get to 4.
First of all you need to recognise that 256 = 2^8. (It is a really good idea to know all the powers of 2 up to 2^10 as they always crop up in problems. Also learn all the powers of 3 up to 3^5. It makes life with indices so much easier.
Replace 256 with 2^8. Replace the 8th root symbol with ^(1/8) so you have ((2^8)^2)(1/8).
• March 26th 2010, 09:04 PM
umamaheswari
(256^2)^1/8
= 256^2*1/8
= 256^1/4
= 4^4*1/4
= 4.
note: ^ means 256 power 2
8th root is to the power 1/8
• March 27th 2010, 03:12 AM
HallsofIvy
Quote:
Originally Posted by umamaheswari
(256^2)^1/8
= 256^2*1/8
Caution! This is 256^(2*1/8) not (256^2)*(1/8).
Quote:
= 256^1/4
= 4^4*1/4
= 4.
note: ^ means 256 power 2
8th root is to the power 1/8 | 2015-10-04 21:27:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8697007298469543, "perplexity": 1123.8551441388508}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736676092.10/warc/CC-MAIN-20151001215756-00204-ip-10-137-6-227.ec2.internal.warc.gz"} |
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/597 | ## Orthogonal and non-orthogonal multiresolution analysis, scale discrete and exact fully discrete wavelet transform on the sphere
• Based on a new definition of delation a scale discrete version of spherical multiresolution is described, starting from a scale discrete wavelet transform on the sphere. Depending on the type of application, different families of wavelets are chosen. In particular, spherical Shannon wavelets are constructed that form an orthogonal multiresolution analysis. Finally fully discrete wavelet approximation is discussed in case of band-limited wavelets.
$Rev: 13581$ | 2017-05-22 21:25:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7074788212776184, "perplexity": 799.3079952297545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607120.76/warc/CC-MAIN-20170522211031-20170522231031-00198.warc.gz"} |
https://math.stackexchange.com/questions/453784/a-sequence-of-rationals-whose-continued-square-roots-are-also-rational | # A sequence of rationals whose continued square roots are also rational
For a given sequence $A_n=\{a_1,a_2,\cdots\}$, let $$b_1=\sqrt{a_1},b_2=\sqrt{a_1+\sqrt{a_2}},\cdots,b_k=\sqrt{a_1+\sqrt{a_2+\sqrt{\cdots+\sqrt{a_k}}}},\cdots,b_0=\lim_{n\to \infty}b_n.$$ Do there exist a sequence $A_n$ such that $a_i\in \mathbb Q^{+},(i=1,2,\cdots)$ and $b_i\in \mathbb Q,(i=0,1,2,\cdots)$?
If we define first the sequence $b_n$ this determines the sequence $a_n$. If $b_n$ is rational then $a_n$ will be rational and $\geq0$. What we need is to ensure that $a_n\neq0$. We can ensure this by asking that the denominator in $b_n$, written as a reduced fraction is divisible by a prime that doesn't divide any of the denominators of the previous $b_k$, $k<n$. So, we just need a convergent sequence with this property. For example $b_n=1-1/p_n$ where $p_n$ is the $n$-th prime.
• What is $b_0$? $b_0=1$? Jul 28 '13 at 3:49
• Yes, I guess it is going to be $1$.
– OR.
Jul 28 '13 at 3:51
• What is $a_1$? $a_1$ is the initial term of $A_n,$ it must be a fixed number. Jul 28 '13 at 3:56
• The sequence $b_n$ determines it. In this case $a_1=b_1^2=(1-1/2)^2=1/4$.
– OR.
Jul 28 '13 at 3:59
• $a_1$ is only determined by $b_1$. From the formula for $b_n$, we have $b_n,a_1,\ldots,a_{n-1}$ already defined so we solve for $a_n$.
– OR.
Jul 28 '13 at 4:18 | 2022-01-22 15:49:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9597256183624268, "perplexity": 105.04391805151141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303864.86/warc/CC-MAIN-20220122134127-20220122164127-00463.warc.gz"} |
https://zbmath.org/?q=an%3A1122.68382 | # zbMATH — the first resource for mathematics
Constructive intensional negation. (English) Zbl 1122.68382
Kameyama, Yukiyoshi (ed.) et al., Functional and logic programming. 7th international symposium, FLOPS 2004, Nara, Japan, April 7–9, 2004. Proceedings. Berlin: Springer (ISBN 3-540-21402-X/pbk). Lecture Notes in Computer Science 2998, 39-54 (2004).
Summary: Although negation is an active area of research in Logic Programming, sound and complete implementations are still absent from actual Prolog systems. One of the most promising techniques in the literature is Intensional Negation (IN), which follows a transformational approach: for each predicate $$p$$ in a program its negative counterpart intneg$$(p)$$ is generated. However, implementations of IN have not been included in Prolog environments due, in part, to the lack of details and explicit techniques, such as the treatment of universally quantified goals. In this paper, we describe a variant of IN, which we have called Constructive Intensional Negation (CIN). Unlike earlier proposals, CIN does not resort to a special resolution strategy when dealing with universally quantified formulae, which has been instrumental in having an effective implementation. Among the contributions of this work we can mention a full implementation being tested for its integration in the Ciao Prolog system and some formal results with their associated proofs.
For the entire collection see [Zbl 1048.68005].
##### MSC:
68N17 Logic programming
Full Text: | 2021-06-24 04:12:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6642065048217773, "perplexity": 1623.0860440312479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488550571.96/warc/CC-MAIN-20210624015641-20210624045641-00337.warc.gz"} |
https://www.physicsforums.com/threads/compression-of-an-ideal-gas.577847/ | # Compression of an ideal gas
aaaa202
I really need help on this question, I've tried asking several people but I still don't quite get it.
The formula for the work done on compressing/expanding an ideal gas is $\int$-pdV.
Now first of all - p denotes the internal pressure of the gas right?
If so, good so far.
Let us now assume the situation on the picture, where a gas is confined in a cylinder with a movable piston. The gas is not in equilibrium since the external pressure from the atmosphere exceeds the internal (in my example external pressure is twice the internal but it could be anything). Of course the gas will now start to compress.
My question is what the work done on the gas is and why the external pressure for some reason has no influence on the work done.
Let me try to do a force analysis so that you can see what I'm thinking and explain, where I go wrong.
We start noting that the gas will exceed a force on the piston equal to F = Ap, where A is the area of the piston. By newtons third law the piston then exerts a force of -F on the gas.
Furthermore we see that the external pressure on the piston must exert a force of 2pA and the piston must then exert a force on the atmosphere given by -2pA.
Let's say that the gas piston moves a distance x and for simplicity assume that the pressure stays the same (NOT realistic, I know)
Then it is clear that the work done on the gas in the cylinder is indeed -F*x = -pdeltaV.
*********** BUT SOMETHING IS CLEARLY WRONG. THE PISTON WILL ALSO HAVE GAINED KINETIC ENERGY EQUAL TO THE WORK DONE ON THE GAS! WHERE DOES THIS GO? IT MUST GO TO THE GAS TOO! ***********
I think fundamentally there is something completely wrong with this way of thinking, but I can't understand it in other ways. Please help me understand it such that it makes sense to think of the work done on the gas as just W = -pdeltaV and not twice that work.
#### Attachments
• gas.png
1.1 KB · Views: 373
meldraft
In general, the energy you put in you piston manifests itself in the gas as increased pressure (pressure is simply a metric for the kinetic energy of the molecules), or increased temperature.
If your external pressure is the atmospheric pressure, then it will remain constant throughout the process. If you have 2 pressure vessels, the high pressure will decrease until you reach equilibrium.
nasu
If you consider the piston, there is the work done by the atmospheric pressure L1 (positive), and the work done by the gas L2 (negative). The work done by the gas is less (in absolute value) than that done by the atmospheric pressure. The difference is equal to the kinetic energy of the piston.
If you consider the gas, there is the work done by the piston on it and that's all. Calculating the actual value may be a little more difficult as the pressure in the gas is not constant and this is not an equilibrium process.
Homework Helper
I really need help on this question, I've tried asking several people but I still don't quite get it.
The formula for the work done on compressing/expanding an ideal gas is $\int$-pdV.
Now first of all - p denotes the internal pressure of the gas right?
Not necessarily. If it is quasi-static then W = ∫PdV where P is the internal pressure of the gas and W is the work done BY the gas (- if work is done on the gas).
If it is not quasistatic, W will be something other than ∫PdV where P is the internal gas pressure. This is because dynamic energy has to be taken into account, as Nasu points out.
AM
aaaa202
oops double post. see next post
Last edited:
aaaa202
I still don't quite get it. Let's take it in steps please :) :
1) Does the atmospheric pressure do a work on the piston or does it not?
If you consider the piston, there is the work done by the atmospheric pressure L1 (positive), and the work done by the gas L2 (negative). The work done by the gas is less (in absolute value) than that done by the atmospheric pressure. The difference is equal to the kinetic energy of the piston.
If you consider the gas, there is the work done by the piston on it and that's all. Calculating the actual value may be a little more difficult as the pressure in the gas is not constant and this is not an equilibrium process.
Yes exactly - that's also how I see it. Yet, if this is true the piston does actually receive some kinetic energy. But where does this then go, if it does NOT
add to the work done on the gas? :(
Not necessarily. If it is quasi-static then W = ∫PdV where P is the internal pressure of the gas and W is the work done BY the gas (- if work is done on the gas).
If it is not quasistatic, W will be something other than ∫PdV where P is the internal gas pressure. This is because dynamic energy has to be taken into account, as Nasu points out.
AM
Sorry. I forgot to write that we assume the proces to be quasistatic. That still doesn't account for the extra energy as far as I can see, though.
Last edited:
Homework Helper
I still don't quite get it. Let's take it in steps please :) :
1) Does the atmospheric pressure do a work on the piston or does it not?
If the gas in the cylinder is being compressed quasistatically, the work is done on the gas, not the piston. The piston acquires no kinetic energy (v is arbitrarily close to 0). The atmosphere does not do work on the piston in such a case. The work is done on the gas and is equal to the work done by the force on the piston (gravity + applied mechanical force) and by the force provided by the atmosphere.
AM
aaaa202
hmm okay, I just wish I could see why that is - it would certainly make a lot of things understandable.
BUT I don't see it. The piston moves down right? Agreed so far. The force from the atmospheric pressure is 2pA right? SO WHY DONT YOU INCLUDE THIS FORCE :((
nasu
If the gas in the cylinder is being compressed quasistatically, the work is done on the gas, not the piston. The piston acquires no kinetic energy (v is arbitrarily close to 0). The atmosphere does not do work on the piston in such a case. The work is done on the gas and is equal to the work done by the force on the piston (gravity + applied mechanical force) and by the force provided by the atmosphere.
AM
The original conditions were that the pressures on the two faces of the piston are different so there is a net force on the piston. This may be the point of confusion.
The process is not quasistatic. The piston is not in equlibrium but it moves accelerated until the pressures equalize.
nasu
Sorry. I forgot to write that we assume the proces to be quasistatic. That still doesn't account for the extra energy as far as I can see, though.
How can be quasistaic in the conditions you described initially?
Maybe if you have some extra force (friction).
aaaa202
hmm maybe the whole problem lies in the fact that I'm not certain what quasistatic means.
1) First I thought it meant that the gas is at rest the whole time so that Pinternal is practically equal to Pexternal throughout the whole proces.
2)But then my teacher gave me the impression that quasi-static means, that the piston just moves slow enough such that the gas has time to keep a uniform pressure at all times.
Which of these is the correct. If 2) is the right one, why can't the proces i described be quasistatic?
aaaa202
The original conditions were that the pressures on the two faces of the piston are different so there is a net force on the piston. This may be the point of confusion.
The process is not quasistatic. The piston is not in equlibrium but it moves accelerated until the pressures equalize.
Exactly! That's what I've been thinking too. But when I do exercises in thermodynamics and want to find the work done in for instance an adiabatic process, then I just use the formulas. But conditions in those exercises are like the one in my example.
For example consider a water rocket going off. How is this different from the situation I sketched? Because my teacher just worked through that example today using the equations of adiabatic expansion.
Im still left with the question of whether quasistatic means that the gas itself has a uniform pressure at all times or also that the force on the piston is infinitely close to equilibrium at all times.
Homework Helper
Exactly! That's what I've been thinking too. But when I do exercises in thermodynamics and want to find the work done in for instance an adiabatic process, then I just use the formulas. But conditions in those exercises are like the one in my example.
In an adiabatic process that is quasistatic, the adiabatic condition applies ($PV^\gamma = K$). If it is not quasistatic, the condition does not strictly apply although it can be close even for some relatively fast processes (like the compression and expansion of air due to sound).
For example consider a water rocket going off. How is this different from the situation I sketched? Because my teacher just worked through that example today using the equations of adiabatic expansion.
The gas expands quickly, and therefore practically adiabatically. In expanding, it does work on the water causing the water to gain kinetic energy as it escapes through the rocket nozzle. The ejection of water through the nozzle imparts momentum (hence kinetic energy) to the rocket (including the water and gas that remains in the rocket). So the gas does work on itself, as well as on the water.
The adiabatic condition $PV^\gamma = K$ does not strictly apply although it probably not far out.
Im still left with the question of whether quasistatic means that the gas itself has a uniform pressure at all times or also that the force on the piston is infinitely close to equilibrium at all times.
Quasistatic processes occur under conditions that are arbitrarily close to equilibrium so it takes an arbitrarily long time to complete. A gas may have virtually uniform pressure during a non-quasistatic process - the gas in a car engine cylinder during the downstroke, for example, will have fairly uniform temperature and pressure, but it is not quasistatic.
AM
aaaa202
Okay but consider that rocket again. Just before the cork pops out and the rocket takes of there is an internal pressure much bigger than the external.
How is this situation then fundamentally different from mine, where the atmosphere does a work on the piston?
zezima1
What I'm trying to say is that the rocket just as the cork flies out will have an internal pressure far bigger than the atmospheric. We could for instance say it was 2p, where p is the atmospheric.
What is then the fundamental difference between the rocket and then situation I sketched - because as far as I understand it, you can treat the rocket propulsion as a quasistatic proces and not the compression of the gas in my cylinder.
Of course there is a difference in the fact that in my example we have a compression whilst the rocket propulsion is an expansion but of course that wouldn't change anything.. | 2022-11-27 08:28:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6732935309410095, "perplexity": 266.3374401103752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00571.warc.gz"} |
https://paperswithcode.com/paper/semi-supervised-sequence-modeling-with-cross | # Semi-Supervised Sequence Modeling with Cross-View Training
Kevin ClarkMinh-Thang LuongChristopher D. ManningQuoc V. Le
Unsupervised representation learning algorithms such as word2vec and ELMo improve the accuracy of many supervised NLP models, mainly because they can take advantage of large amounts of unlabeled text. However, the supervised models only learn from task-specific labeled data during the main training phase... (read more)
PDF Abstract
51,794 | 2019-04-25 02:35:46 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8738396167755127, "perplexity": 10819.227162253706}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578678807.71/warc/CC-MAIN-20190425014322-20190425040322-00208.warc.gz"} |
http://mathematica.stackexchange.com/questions/27505/when-i-can-assume-that-all-decimal-digits-returned-by-mathematica-are-provably-c | When I can assume that all decimal digits returned by Mathematica are provably correct?
Mathematica works with exact numbers and with two different types of approximate numbers: machine-precision numbers that take advantage of specialized hardware for fast arithmetic on your computer, and arbitrary-precision numbers that are correct to a specified number of digits.
To be sure of n correct digits, use N[expr, n].
When you do a computation, Mathematica keeps track of which digits in your result could be affected by unknown digits in your input. It sets the precision of your result so that no affected digits are ever included. This procedure ensures that all digits returned by Mathematica are correct, whatever the values of the unknown digits may be.
Mathematica automatically increases the precision that it uses internally in order to get the correct answer
Of course, this sounds very reassuring, but I still have some doubts that all decimal digits ever returned by Mathematica when working with arbitrary-precision numbers are always provably correct, no matter what functions I invoked.
What are those cases when I can be certainly sure all displayed digits are correct?
Update:
Here is an example when some incorrect decimal digits are returned when working with arbitrary-precision arithmetic:
a = 17
(* 1.000000 *)
a // Precision
(* 7. *)
d = Derivative[0, 1][StieltjesGamma][0, a]
(* -1.6450 *)
MachineNumberQ[d]
(* False *)
d // FullForm
(* 1.645015523910436949472512823780090832695.155856939311388 *)
d // Precision
(* 5.15586 *)
So, Mathematica claims that at least 5 (hence, all) decimal digits of the result -1.6450 are correct. But in fact, the exact result is -Pi^2 / 6 that is -1.644934..., so only 3 digit are correct.
I am also concerned that Precision[...] itself returns a machine-precision number, that is subject to uncontrolled error-accumulation that possibly can result in claiming more digits of precision in a number than there actually are. Can I assume that Mathematica always errs on the safe side when computing a precision?
Update 2:
Another (gross) example:
a = 26
(* 2.00000 *)
Derivative[0, 1][StieltjesGamma][0, a]
(* 0.324 *)
d // FullForm
(* 0.323995226098963375800273854568809784893.339102855094484 *)
Precision[d]
(* 3.3391 *)
Here, one would expect that at least 0.32 are correct digits. But in fact, the exact result is 1 - Pi^2/6 that is -0.644934.... No correct digits, even the sign is wrong.
-
The authors of the book The SIAM 100-Digit Challenge gave Mathematica (v5) solutions to the 10 problems as well as other implementations. Code here. – Michael E2 Jun 24 '13 at 16:09
You'll probably find Oleksandr Pavlyks screencast on Mathematical Numerics and Special Functions interesting. – ssch Jun 24 '13 at 16:31
+1 toward your first "Good question" badge. – Mr.Wizard Jun 24 '13 at 20:14
BTW, I just realized I could write StieltjesGamma[0, #]&'[a] instead of a more verbose Derivative[0, 1][StieltjesGamma][0, a]. – Vladimir Reshetnikov Jun 24 '13 at 23:36
Probably, you should never expect that NIntegrate returns only provably correct digits. – TauMu Jun 27 '13 at 7:09
show 1 more comment
Control the Precision and Accuracy of Numerical Results
This is an excellent question.
Of course everyone could claim highest accuracy for her product.
To deal with this situation there exist benchmarks to test for accuracy.
One such benchmark is from NIST. This specific benchmark deals with the accuracy of statistical software for instance.
The NIST StRD benchmark provides reference datasets with certified computional results that enable the objective evaluation of statistical software.
In an old issue of The Mathematica Journal, Marc Nerlove writes elaborately about performing the linear and nonlinear regressions using the NIST StRD benchmark (and Kernel developer Darren Glosemeyer from WRI discussing results using Mathematica version 5.1).
Numerically unstable functions:
But this is only one part of story. Ok. There exist benchmark for statistical software etc., but what happens if we take some functions that are numerically unstable?
Stan Wagon has several examples of inaccuracies and how to deal with them in his book Mathematica in Action, which I can only warmly suggest. I have it now for (the latest edition) several years and everytime there is something new to discover with Mr. Wagon.
Let's take, for instance a numerical unstable Maclaurin polynomial of $sin x$:
poly = Normal[Series[Sin[x], {x, 0, 200}]];
Plot[poly, {x, 0, 100}, PlotRange -> {-2, 2},
PlotStyle -> {Thickness[0.0010], Black}]
The result this we can see that the result breaks down at ~40:
If we take one value x = 60 and perform a division we get a result back:
N[poly /. x -> 60] ==> -0.304811
Inserting the approximate real number 60.; there occurs a roundoff error:
poly /. x -> 60. ==> -4.01357*10^9
But inserting the number 60 (without the period); there is no problem at all:
ply /. x -> 60 ==> -((3529536438455<<209>>9107277890060)/(1157944045943<<210>>4588491415899))
The use of machine precision (caused by the decimal point) leads to an error:
10^17 + 1./100 - 10^17 ==> 0.
Machine precision is $53 log_{10}(2)$ = 15.9546.
This is the exact moment where N comes into play. We have to increase the precision:
poly /. x -> N[60,20] ==> 0. x 10^7
Still not good enough, because this number has no precision at all. So, let's increase the precision again:
poly /. x -> N[60,200] ==> -0.9524129804151562926894023114775409691611879636573830381666715331536022870514582375567159979758451142049758239018693823215314740415313661058559273332324475257579234995809519
This looks much better. If we impose the precision in our prior plot:
Plot[poly, {x, 0, 100}, PlotRange -> {-2, 2},
PlotStyle -> {Thickness[0.0010], Black}, WorkingPrecision -> 200]
Not ideal, since in order to get an accurate result, we need to know what precision we need. There are numerical results which tend to loose precision during several iterations. Luckily there is some salvation in form of the Lyapunov exponent (denoted $\lambda$), which can quantify the loss of precision.
Conclusion:
What I've learned from here is, that it is a bad idea to mix small numbers with big ones in a machine precision environment. This is where Mathematica's adaptive precision comes into play.
Mathematica precision handling
Let's investigate further about precision handling inside Mathematica.
If we want to calculate $sin(10^{30})$ in Mathematica we get:
N[Sin[10^30]] ==> 0.00933147
Using WolframAlpha we get:
WolframAlpha["Sine(10^30", {{"DecimalApproximation", 1}, "Content"}] ==> - 0.09011690191213805803038642895298733027439633299304...
The result we get from our numerical workhorse is simply the wrong answer and this is getting worse if we increase the exponent.
(The guys at WolframAlpha seem to do it somewhat differently...but what?)
If we take $10^{30}$ and put turn this into a software real with $MachinePrecision as the actual precision we get 0 as the result, with the precision 0. This result is useless. Luckily we do know that it is indeed. Here the adaptive precision comes into play. The adaptive precision is controlled through the system variable$MaxExtraPrecision (default value is 50).
Let's say we want to compute $sin(10^{30})$ but with a precision of 20 digits:
N[Sin[10^30], 20] ==> -0.090116901912138058030
Ah! We're getting close to the WolframAlpha engine!
If we ask for $sin(10^{60})$ the result is:
N[Sin[10^60], 20] ==> N::meprec: Internal precision limit
$MaxExtraPrecision = 50. reached while evaluation Sin[1000000000000000000000000000000000000000000000000000000000000]. >> Out[105]= 0.8303897652 We run into problems, since the adaptive algorithm only adds 50 digits for extra precsion. But, luckily, the extra precision is controlled through$MaxExtraPrecision, which we're allowed to change:
$MaxExtraPrecision = 200; N[Sin[10^60], 20] ==> 0.83038976521934266466 Addendum (Michael E2): Note that N[Sin[10^30]] does all the computation in MachinePrecision without keeping track of precision; however N[Sin[10^30], n] does keep track and will give an accurate answer to precision n. (WolframAlpha probably uses something like n = 50.) Also specifying the precision of the input to be, say, 100 digits,N[Sin[10^60100], 20] will use 100-digit precision calculations internally and return the same answer as above to 20 digits of precision, provided as in this case 100 digits is enough to give 20. (Added at the request of @stefan.) Conclusion Equipped with that knowledge we could define functions that use adaptive precision to get an accurate result. Precision and accuracy It is not that Mathematica looses precision, but in your defintion of a you'll loose precision in the first place. Let's first talk about precision and accuracy. Basically the mathematical definition of precision and accuracy is as follows: Suppose representation of a number$x$has an error of size$\epsilon$. Then the accuracy of$x \pm \epsilon/2$is defined to be$-log_{10}|\epsilon|$and its precision$-log_{10}|\epsilon/x|$. With these definitions we can say that a number$z$with accuracy$a$and precision$p$will lie with certainty in the interval:$(x-\frac{10^{-a}}{2},\frac{10^{-a}}{2})=(x-\frac{10^{-p} x}{2},\frac{10^{-p} x}{2}+x)$According to these definitions the following relation holds between precision and accuracy:$precision(x)=accuracy(x)+log_{10}(|x|)$Where the latter is called the scale of the number$x$. We can check if this identity holds: Function[x, {Precision[x], Accuracy[x] + Log[10, Abs[x]]}] /@ {N[1, 100], N[10^100, 30]} ==> {{100.,100.},{30.,30.}} (* qed *) Let's define a function for both precision and accuracy: PA[x_] := {Precision[x], Accuracy[x]} Now let's look at your definition of a: a = 17 PA[a] ==> {7., 7.} d = Derivative[0, 1][StieltjesGamma][0, a] ==> -1.6450 PA[d] ==> {5.15586, 4.93969} You've lost precision! You defined a to have a precision and an accuracy of 7. But what is the precision and accuracy if you turn a into a symbol using machine precision: a = 1. PA[a] ==> {MachinePrecision, 15.9546} This is a gain in precision obviously. Now let's call your canonical examples: d = Derivative[0, 1][StieltjesGamma][0, a] ==> -1.64493 Which is the exact result of$-\frac{\pi ^2}{6}$. The precision and accuracy of d is: PA[d] ==> {MachinePrecision, 15.7384} Perfect. Now let's redefine your a to be 2. instead of 26: a = 2. PA[a] ==> {MachinePrecision, 15.6536} d = Derivative[0, 1][StieltjesGamma][0, a] ==> -0.644934 Which is the exact result of$1 - \frac{\pi ^2}{6}$PA[d] ==> {MachinePrecision, 16.1451} Conclusion Dealing with numerical computing is dealing with loss of precision. It seems that Mathematica varies the Precision depending on the numerical operation being performed and the Precisions are more pessimistic than optimistic, which is actually quite good. In most calculations, one typically looses precision, but with an appropriate starting value you can gain precision as well. The general rule for the usage of high-precision numbers is: If you want to gain high-precision you need to use high-precision numbers in your expression to be calculated. Consequently, every time you need a high-precision result you must take care that the starting expression has sufficient precision. There exists an exception to the above rule. If you use high-precision arithmetic in expressions and the numbers are getting bigger than$MaxMachineNumber, Mathematica will switch automatically to high-precision numbers. If this is the case the rules apply as described in my Edit 2.
P.S.:
This was one of the questions I really like, since I know now more about that topic than before. Maybe one of the WRI/SE yedi's join the party to give even more insights on that matter, than I would ever been able to.
-
@MichaelE2 Thank you for your helpful input. I'd like to invite you, if you can afford the time, to update the post and enter your findings with your attribution. If this is ok for you. Or post it in a new post, if this is to circuitous for you. But thanks again for your input. – Stefan Jun 24 '13 at 15:14
Thanks. I added it, with correct formatting :), as an add-on to "Edit 2". You may wish to roll back, or re-edit it yourself, if you would like to incorporate it more effectively. That would be fine with me. (I'll delete my initial comment soon, I think.) – Michael E2 Jun 24 '13 at 15:54
@MichaelE2 nice job :) i appreciate your input indeed. – Stefan Jun 24 '13 at 19:42
@Stefan Thank you for your deep analysis of the subject! The bounty is yours. – Vladimir Reshetnikov Jul 2 '13 at 18:32
@VladimirReshetnikov thank you for accepting it :) I was about to write an addendum on NumericalMath$NumberBits function. This function shows how Mathematica simulates interval arithmetic by constantly maintaining a few more digits than needed... – Stefan Jul 2 '13 at 18:36 show 1 more comment This isn't an answer (yet) but it was too long for a comment. Here is an extended example where the quoted precision does not appear to be true: f1 = Derivative[0, 6][StieltjesGamma][0, #] &; f2 = {Accuracy@#, Precision@#, InputForm@#} &; f1 @ 115 f2 @ % N[f1 @ 1, 10] f2 @ % 725.59 {2.29961, 5.1603, 725.592784177191488027966503357541438517565.1603018938964} 726.0114797 {7.13906, 10., 726.0114797147721516339424224809075397033310.} - add comment In your example when you enter a = 17 Derivative[0, 1][StieltjesGamma][0, a] the derivative is being computed numerically. If you compute the derivative analytically Clear[a]; D[StieltjesGamma[0, a], a] and then substitute in the value of$a\$
% /. a -> 17
no incorrect decimal digits are returned.
-
The whole question is about arbitrary-precision numeric computations. – Vladimir Reshetnikov Jun 26 '13 at 3:00
@TheDoctor It appears your answer is not for the question being asked. Please consider changing it or removing it. – R Hall Jun 30 '13 at 16:07 | 2014-07-22 21:37:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23358146846294403, "perplexity": 2219.7757380295816}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997865523.12/warc/CC-MAIN-20140722025745-00029-ip-10-33-131-23.ec2.internal.warc.gz"} |
https://www.semanticscholar.org/paper/Transfinite-inductions-producing-coanalytic-sets-Vidny%C3%A1nszky/bdab4ae6c5e8486b18ff8414db842ff5c7453df4 | Transfinite inductions producing coanalytic sets
@article{Vidnynszky2014TransfiniteIP,
title={Transfinite inductions producing coanalytic sets},
author={Zolt{\'a}n Vidny{\'a}nszky},
journal={Fundamenta Mathematicae},
year={2014},
volume={224},
pages={155-174}
}
A. Miller proved the consistent existence of a coanalytic two-point set, Hamel basis and MAD family. In these cases the classical transfinite induction can be modified to produce a coanalytic set. We generalize his result formulating a condition which can be easily applied in such situations. We reprove the classical results and as a new application we show that in $V=L$ there exists an uncountable coanalytic subset of the plane that intersects every $C^1$ curve in a countable set.
11 Citations
On the scope of the Effros theorem
All spaces (and groups) are assumed to be separable and metrizable. Jan van Mill showed that every analytic group G is Effros (that is, every continuous transitive action of G on a non-meager spaceExpand
Beyond Erdős-Kunen-Mauldin: Shift-compactness properties and singular sets
• Mathematics
• Topology and its Applications
• 2021
Abstract The Kestelman-Borwein-Ditor Theorem asserts that a non-negligible subset of R which is Baire (= has the Baire property, BP) or measurable is shift-compact: it contains some subsequence ofExpand
Maximal discrete sets
We survey results regarding the definability and size of maximal discrete sets in analytic hypergraphs. Our main examples include maximal almost disjoint (or mad) families, I-mad families, maximalExpand
Tree forcing and definable maximal independent sets in hypergraphs
We show that after forcing with a countable support iteration or a finite product of Sacks or splitting forcing over $L$, every analytic hypergraph on a Polish space admits a $\mathbf{\Delta}^1_2$Expand
Definable MAD families and forcing axioms
• Computer Science, Mathematics
• Ann. Pure Appl. Log.
• 2021
We show that under the Bounded Proper Forcing Axiom and an anti-large cardinal assumption, there is a $\mathbf{\Pi}^1_2$ MAD family.
ICLE Set theory and the analyst
• 2018
This survey is motivated by specific questions arising in the similarities and contrasts between (Baire) category and (Lebesgue) measure—category-measure duality and non-duality, as it were. The bulkExpand
Every zero-dimensional homogeneous space is strongly homogeneous under determinacy
• Computer Science, Mathematics
• J. Math. Log.
• 2020
It is shown that, assuming the Axiom of Determinacy, every zero-dimensional homogeneous space is strongly homogeneous (that is, all its non-empty clopen subspaces are homeomorphic), with the trivial exception of locally compact spaces. Expand
Set theory and the analyst
• Mathematics
• 2018
This survey is motivated by specific questions arising in the similarities and contrasts between (Baire) category and (Lebesgue) measure—category-measure duality and non-duality, as it were. The bulkExpand
Ju l 2 02 1 ZERO-DIMENSIONAL σ-HOMOGENEOUS SPACES
All spaces are assumed to be separable and metrizable. Ostrovsky showed that every zero-dimensional Borel space is σ-homogeneous. Inspired by this theorem, we obtain the following results: ‚ AssumingExpand
References
SHOWING 1-10 OF 15 REFERENCES
Analytic and coanalytic families of almost disjoint functions
• Computer Science, Mathematics
• Journal of Symbolic Logic
• 2008
Abstract If is an analytic family of pairwise eventually different functions then the following strong maximality condition fails: For any countable , no member of which is covered by finitely manyExpand
Infinite Combinatorics and Definability
It is shown that there cannot be a Borel subset of [ω] ω which is a maximal independent family, and it is consistent that any ω 2 cover of reals by Borel sets has an ω 1 subcover. Expand
Definable sets of generators in maximal cofinitary groups
• Mathematics
• 2008
Abstract A group G ⩽ Sym ( N ) is cofinitary if g has finitely many fixed points for every g ∈ G except the identity element. In this paper, we discuss the definability of maximal cofinitary groupsExpand
A Pi ^1_1-uniformization principle for reals
• Mathematics
• 2009
We introduce a Π 1 1 -uniformization principle and establish its equivalence with the set-theoretic hypothesis (ω 1 ) L = ω 1 . This principle is then applied to derive the equivalence, to suitableExpand
A co-analytic maximal set of orthogonal measures
• Mathematics, Computer Science
• The Journal of Symbolic Logic
• 2010
Abstract We prove that if V = L then there is a maximal orthogonal (i.e., mutually singular) set of measures on Cantor space. This provides a natural counterpoint to the well-known theorem of PreissExpand
Higher recursion theory
Hyperarithmetic theory is the first step beyond classical recursion theory. It is the primary source of ideas and examples in higher recursion theory. It is also a crossroad for several areas ofExpand
Arcs in the plane
• Mathematics
• 2009
Abstract Assuming PFA, every uncountable subset E of the plane meets some C 1 arc in an uncountable set. This is not provable from MA ( ℵ 1 ) , although in the case that E is analytic, this is a ZFCExpand
Some additive properties of sets of real numbers
• Mathematics
• 1981
Some problems concerning the additive properties of subsets of R are investigated. From a result of G. G . Lorentz in additive number theory, we show that if P is a nonempty perfect subset of R, thenExpand
Descriptive Set Theory
Descriptive Set Theory is the study of sets in separable, complete metric spaces that can be defined (or constructed), and so can be expected to have special properties not enjoyed by arbitraryExpand
The theory of countable analytical sets
The purpose of this paper is the study of the structure of countable sets in the various levels of the analytical hierarchy of sets of reals. It is first shown that, assuming projective determinacy,Expand | 2021-10-28 06:06:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9303238391876221, "perplexity": 1300.0998738287817}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00107.warc.gz"} |
https://www.nature.com/articles/s41598-020-77691-x?error=cookies_not_supported&code=65f603b1-ee36-4ea2-8965-7e474950ef27 | # Relationship between tendon structure, stiffness, gait patterns and patient reported outcomes during the early stages of recovery after an Achilles tendon rupture
## Abstract
After an Achilles tendon (AT) injury, the decision to return to full weightbearing for the practice of sports or strenuous activities is based on clinical features only. In this study, tendon stiffness and foot plantar pressure, as objective quantitative measures that could potentially inform clinical decision making, were repeatedly measured in 15 patients until 3 months after the AT rupture by using shear wave elastography (SWE) and wearable insoles, respectively. Meanwhile, patient reported outcomes assessing the impact on physical activity were evaluated using the Achilles Tendon Total Rupture Score (ATRS). At week-2 post-injury, stiffness of the injured tendon varied from 6.00 ± 1.62 m/s (mean ± SD) close to the rupture to 8.91 ± 2.29 m/s when measured more distally. While near complete recovery was observed in distal and middle regions at week-8, the shear wave velocity in the proximal region recovered to only 65% of the contralateral value at week-12. In a parallel pre-clinical study, the tendon stiffness measured in vivo by SWE in a rat model was found to be strongly correlated with ex vivo values of the Young’s modulus, which attests to the adequacy of SWE for these measures. The insole derived assessment of the plantar pressure distribution during walking showed slight sub-optimal function of the affected foot at week-12, while the ATRS score recovered to a level of 59 ± 16. Significant correlations found between tendon stiffness, insole variables and distinct ATRS activities, suggest clinical relevance of tendon stiffness and foot plantar pressure measurements. These results illustrate how an alteration of the AT structure can impact daily activities of affected patients and show how digital biomarkers can track recovery in function over time.
## Introduction
Achilles tendon (AT) tears happen mostly in recreational athletes (75%)1 and over 80 percent of AT ruptures also occur during sport or recreational activities2, causing substantial morbidity, impairment of mobility, and loss of work time1,3. With the aging population becoming more active, the incidence of AT ruptures has been increasing over the past decade4,5,6, for examples in middle-age patients from 1.8/10,000 to 2.9/10,000 between 2003 and 2013 in the province of Ontario, Canada5 and reaching in Sweden 5.5/10,000 and 1.47/10,000 for men and women, respectively4. Tendon injuries are often treated with a variety of non-surgical (e.g., immobilisation, ice, physical therapy) and surgical interventions, along with reduction of physical activities to allow for healing of the tendon7,8.
Tendons are mostly made of highly structured collagen fibers (60% of dry mass) that connect and transmit forces from muscle to bone9. They are able to store elastic energy and withstand the high tensile forces upon which locomotion is entirely dependent. Optimal tendon stiffness is critical for an effective muscle–tendon interaction. The course of recovery of normal tendon stiffness after injury is poorly understood, making it difficult to objectively determine when the tendon has healed sufficiently and has the functional capacity to allow the patient to return to normal activities or sports.
Conventional imaging modalities such as ultrasound (US) and magnetic resonance imaging (MRI) can monitor changes in tendon morphology over time. Yet, these imaging modalities do not provide an objective assessment of tendon healing based on recovery of its biomechanical characteristics. A technique that allows for reliable non-invasive assessment of tendon mechanical properties in vivo may have significant clinical impact. This tool should enable for the monitoring of tendon changes as a result of injury, pathology, and/or treatment and support decision making to return to sports activities.
US-based shear wave elastography (SWE) is an innovative technique that quantitatively assesses tissue stiffness10. Local tissue strain is produced using acoustic radiation force by focused US impulses that induce the formation of shear waves. The shear wave velocity is measured using high-frequency US imaging from which tissue stiffness is then inferred11. Under well controlled conditions12, this technology appears sensitive enough to detect significant differences in the AT stiffness during extension, in neutral position and during maximum dorsiflexion13. Greater levels of stiffness have been shown in those physically active compared to non-active subjects14.
Achilles tendons exhibit strong anisotropy, which translates to a greater stiffness when placing the US transducer parallel, as opposed to perpendicular, to the tendon fiber orientation15. Overall, stiffness along the Achilles tendon may exceed 800 kPa, the upper limit of measurement for the most advanced SWE device available on the market. However, stiffness values in tendons with full-thickness rupture can vary from ~ 3 to ~ 220 kPa, depending on the healing stage16. Due to the rupture, it is possible that the tendon stiffness is heterogeneous, reflecting a loss of parallel arrangement of collagen fibers in the injured area. Collagen abnormalities in healing tendons may result in less resistance to tensile forces, and it is therefore important to carefully monitor stiffness changes along the tendon length during recovery.
Achilles tendon injuries impair the ability to walk normally, and surgical or therapeutic interventions aim to restore a healthy gait cycle. Recent advances in wearable technologies for gait analysis include the development of inertial wireless sensors, optical motion trackers, portable force plates, insole pressure sensors and wireless electromyography17. Sensorised wearable plantar pressure insoles have been used to capture dynamic stability during single- and dual-walking tests in clinical settings18 and were proposed to capture real-world data over multiple days19. Wearable insoles offer an objective insight into weight-bearing through each foot during walking and offers a means to capture foot kinematics, thus providing the potential to measure functional changes during rehabilitation.
The Achilles Tendon Total Rupture Score (ATRS) is an established and validated patient-reported outcome score used for clinical assessment20, however it only captures patient self-reported function. The ATRS seems reliable for the comparison of groups of patients but, with a minimal detectable change of 18.5 in a scale of 100, may have only limited use for the repeated assessment of individual patients in the clinic21.
As a premise to preclinical/clinical translation and future research, a validation study aimed at comparing tendon stiffness measurements by SWE with ex vivo measurements of the Young’s modulus was performed in a rat model of tenotomy. Once the validity of this measure was demonstrated, the main purpose of the present human proof-of-concept study was to explore, from a comprehensive approach relying on the use of relatively new technologies, the relationship between tendon stiffness measured by SWE, load distribution in the foot sole during walking measured by sensorised insoles and patient-reported outcomes.
## Results
Of the eighteen patients who participated in the study, fifteen patients completed the study up to 12 weeks and three patients discontinued the study prior to week-2 measurements, two of them due to patient decision and one due to a post-procedural complication. For all patients, the rupture occurred in the middle portion of the tendon. In addition, 9 out of 18 patients were surgically treated while the remaining ones received a conservative treatment for the AT rupture, keeping the ankle from moving by using a walking boot with heel wedges. During the treatment period, the walking boot was removed to allow for all the evaluations reported here; however, it was used again readily after the assessments.
### Tendon structure
Unlike in the contralateral tendon, tendon thickening, high echogenicity and Power Doppler US derived patterns of vascularity were observed in the injured tendon throughout the 12-week observation period (Fig. 1). Interestingly, while the increased vascularity was already present at 2 weeks post injury, it kept increasing until week 12 along with thickening of the tendon. Ultimately, the injured tendon became ~ threefold thicker than the contralateral healthy tendon at the end of the observation period.
### Tendon stiffness
Pre-clinical rat model: A strong correlation was found between tendon stiffness measured in the pre-clinical model of tendon injury by SWE and ex vivo measures of the Young’s modulus (Fig. 2), which provides a valid basis for clinical assessments.
Clinical assessments: SWE maps recorded from the damaged tendon of a patient at different time-points during the healing period are shown in Fig. 3A,B and corresponding changes in tendon stiffness measured from the entire group of patients are displayed in Fig. 3C. Tendon stiffness in contralateral healthy tendons appeared homogeneous and stable over the 12-week observation period (shear wave velocity mean ± SD at baseline in distal: 11.43 ± 2.48 m/s, middle: 11.90 ± 1.35 m/s and proximal: 12.09 ± 0.95 m/s regions of the tendon). It should be noted that the evaluation of SWE measurement variability from test/retest of healthy tendons at week 2 and week 4, a sufficiently short period of time for no change to occur in the contralateral tendon, showed a CV% in distal, middle and proximal regions of 11.6%, 11.2% and 11.0%, respectively, which attests to the good reproducibility of SWE measurements. When measured close to the zone of rupture (middle-proximal) and shortly after the rupture (week 2), shear wave velocity values were lower in injured tendons (middle: 6.53 ± 1.71 m/s, proximal: 6.00 ± 1.62 m/s) in comparison with healthy tendons. However, it was reduced to a lesser extent (8.91 ± 2.29 m/s) when measured distally from the ruptured area (Fig. 3C). Near complete recovery of tendon stiffness was observed in distal and middle regions at approximately eight weeks post-injury. Indeed, the shear wave velocity reported from the distal and middle regions of ruptured tendons was 10.30 ± 2.36 10–11 m/s and 9.87 ± 1.70 m/s, respectively, which is close or only slightly lower than the velocity measured in corresponding regions of the contralateral tendon (distal: 10.87 ± 2.97 m/s, middle: 11.79 ± 1.84 m/s). On the contrary, in the proximal region, shear wave velocity recovered only to about 65% (7.77 ± 1.60 m/s) of the value measured in the healthy tendon at week 12 after the injury (Fig. 3C).
### Foot gait pressure and patient reported outcomes
Gait pattern analysis (Insole data): Plantar centre of pressure (COP) data collected throughout the healing period from a typical patient using the Pedar insole system is presented in Fig. 4A. This patient recovered to almost normal gait patterns during the 3-month rehabilitation shown by use of the entire surface of the foot across multiple steps, in particular placing pressure further towards the toes increasingly over subsequent weeks. At week 4, on the injured side, the patient applied weight through the heel region only, while after 3 months of recovery, the patient could distribute weight from the heel towards the toes as well as across the medio-lateral axis, in accordance with what can be observed on the normal, contralateral leg. Yet, the proportion of time during which this patient placed pressure through the toes in the injured leg remained diminished compared to the healthy leg.
At the patient group level, changes in the COP variables during recovery are summarized in Fig. 4B. As shown for the COP line length, COP area under the curve (AUC), line slope and Euclidean distance, the plantar pressure profile of the injured foot returned towards normal values at the end of the observation period. At this stage, patients began to apply pressure through their toes. However, for 3 out of the 4 variables displayed, all potential indicators of the smoothness in how the foot moves through the gait cycle, the body weight transfer on the foot sole still appeared incomplete at week-12, possibly indicating that torsional properties of the injured foot were not fully re-established at this time of the recovery.
PROM (ATRS score): Full weight-bearing capacity was reached in the injured leg at week 12 post-injury, with a steady improvement over the 3-month observation period. However, based on ATRS results, patients were limited by symptoms during the first 8 weeks post-injury and were able to recover ~ 60% of their perceived functionality at week 12 (Fig. 5A). Looking further into ATRS activities (Fig. 5B), it appears that patients were still not confident in their ability to jump, run, do physical labor, and walk fast and uphill. This is different from other activities like walking on uneven surfaces and activities of daily living, as well as symptoms like whole-body stiffness, low strength and fatigue, which all showed some improvement from week 2 to week 8 and marked improvement between week 8 and week 12. Interestingly, while patients were on standard-of-care pain medication, pain remained unchanged throughout the recovery period and hence was deemed as being not a major cause of reduction in the total ATRS score.
### Correlations between weight-bearing, tendon stiffness and patient reported outcomes
Significant correlations were found between weight-bearing variables extracted from the insole measurements and the AT stiffness from the affected foot as well as selected ATRS distinct activities performed over the 3-month observation period (Table 1). These positive correlations illustrate the role of the tendon in the distribution of weight at the level of the foot (in particular on the heel-toe axis) and highlight the importance of the functionality in the realization of activities requiring force, speed and good balance across both legs.
## Discussion
With this longitudinal study, we investigated the recovery of tendon structure, biomechanics, gait patterns and patient reported outcomes over 12 weeks after AT rupture. We analyzed how reliable and sensitive ultrasound-based SWE could be in detecting changes in AT stiffness during healing, and studied correlation of these respective changes with standard US features (i.e., tendon thickness, echogenicity and macro-vascularization), foot plantar pressure changes and patient reported outcomes (ATRS).
Distinct softening of the tendon has previously been detected by using SWE in Achilles tendinopathy22, with SWE showing excellent sensitivity, specificity and accuracy when compared with clinical examination. Furthermore, a strong correlation was found for SWE with conventional US findings22,23 as well as with histological assessments24 and the apparent elastic moduli of human cadaveric Achilles tendon given by tensile tests25. Finally, our own preclinical studies in rats establish SWE as a tool for assessing changes in the biomechanical properties of the AT following rupture. The mechanism driving the biomechanical change observed during healing after a tendon rupture likely stems from changes in the collagen structure of the tendon26 which could affect fluid flow through the ECM27 and cross links between collagen molecules28. Our study demonstrated that SWE can be considered as a reliable and sufficiently sensitive approach for the detection of a defective area within the tendon itself as well as to monitor the time-course recovery of tendon stiffness after a rupture. While stiffness appeared fairly homogeneous throughout the healthy tendon, spatial differences have been detected along the length of the injured tendon. Thus, when measured in the longitudinal plane, the shear wave velocity in the injured tendon was greater and closer to normal in areas away from the rupture zone. On the contrary, for measurements performed close to the rupture zone and in the relaxed state (i.e., foot at 90° position), shear wave velocity was as low as ~ 50% of the normal value within ~ 2 weeks after the injury. This emphasizes the importance of a good localization of the measurements to collect precise data, and for this the use of tissue landmarks such as the distance from the insertion point on the calcaneus is proposed.
The second most striking result of this study was that, even though weight-bearing capacity for most patients was almost fully recovered after three months, shear wave velocity remained ~ 40% lower in the injured (proximal) area compared to the healthy tendon, which itself remained stable over the 12-week observation period. On this last point, it is interesting to note that other studies have reported a decrease in stiffness in the non-ruptured side29,30. While recent SWE data also showed that repaired tendons gradually become stiffer postoperatively31, our results clearly indicated that tendon healing is still incomplete within 3-month after the rupture, even though patients at this stage typically reached a range of motion from the ankle to the foot that is at least sufficient enough for short distance normal walking. In fact, the incomplete tendon healing suggested by our SWE data is consistent with a recent study showing that the healed AT after rupture has ~ 50% lower stiffness even after a ~ 6-year healing phase32. This lack of total recovery of the AT stiffness may help to understand why patients at the end of the 3-month observation period still seemed limited in their perceived ability to perform strenuous activities that create high-strain events for the AT such as jumping, running, fast and uphill walking, as revealed by results of the ATRS symptoms. In spite of a near absence of pain during the recovery period, after 12 weeks of recovery, there may still be a risk of re-rupture while performing activities associated with strong elongation of the tendon. The measurement of tendon stiffness by SWE could be particularly useful to identify the risks associated with the practice of high-impact sport shortly after an AT injury. We show in this study that the ATRS proved to be highly reactive and clinically relevant since it has demonstrated patient recovery up to a score of ~ 60 (out of 100 at most) at 12 weeks, a value slightly higher than that of other studies33, but which could testify to the effectiveness of the recovery protocol used in this study.
Healing of an injured tendon results from different processes which can overlap in their duration depending on the location and severity of the injury. The initial inflammatory stage not only involves infiltration of cells such as neutrophils, monocytes and macrophages but also secreted angiogenic factors that initiate the formation of a vascular network9. Such neo-vascularization may be responsible for the survival of the newly forming fibrous tissue at the injury site34. The tendon thickening, higher Archambault echogenicity grade and Power Doppler US patterns of vascularity observed here would indeed suggest the presence of swelling in the ruptured area throughout the three-month healing time. During the repair process, recruited fibroblasts may also contribute to the synthesis of various components of the extracellular matrix35 leading to the absorption of large amounts of water in the injured tendon. To which extent this explains the thickened tendon observed in the last month of the monitoring period remains to be verified. However, the fact that tendon stiffness was still lower than normal suggests that, at this stage, collagen fibers were probably not yet organized optimally for exercise at high strain magnitude.
Correlations found here between tendon stiffness and foot plantar pressure results attest to the role of the AT in the transmission of force necessary to generate toe-off during the late stance of the gait cycle. This can therefore lead to the assumption that only when the AT stiffness approaches normal values, gait patterns return to normal. The foot regains the ability to distribute load adequately on the entire surface of the sole allowing the patient to improve his/her ability in performing strenuous exercises. Such an assumption can in fact be supported by the recent data showing that a stiff AT reduces ground contact time during drop jumps36. Furthermore, recent studies have shown that not only the structure of the tendon, but also its stiffness in the first 12 weeks of recovery could be linked to gait symmetry at 24 weeks, which may indicate the prognostic value of these measures for long-term outcome after AT rupture37. Interestingly, we also found that the higher the tendon stiffness, the lower the standard deviation associated with the measurement of the CoP-Length was during the walking test made of ~ 10 steps (r = − 0.48, p < 0.05). This lower variability in how plantar pressure was distributed during the step may be related to the recovery of some balance between the two feet when walking, although further studies are needed for a better understanding of this. Other COP-derived variables were extracted from these data, including the COP line slope and mean mediolateral distance, however significant associations were not found against SWE or ATRS activities for all of these variables.
Our study has limitations. First, both its heterogeneity, due to the presence of patients with and without surgery, and its limited sample size make difficult to show strong correlations between each of the variables measured independently, and hence permits for proof of concept only. However, this study was realized at a single site, thus allowing to minimize some of the variability inherent to the techniques used (e.g., operator-dependent ultrasound measurements). Second, with regard to foot plantar pressure assessments, we only assessed short walking distances. The results of the walking test would benefit from a greater number of steps for a more sensitive detection of impairment in specific gait characteristics. While this may significantly increase the amount of data collected, the automations developed as part of this work should however allow for a rapid extraction and analysis of specific gait variables. Third, while the ATRS questionnaire was mainly developed to provide an index of patient performance, subcategories were used here to provide information on the patient’s ability to perform particular exercises that were more or less strenuous for the AT. The validity of these sub-category tests taken separately will have to be studied further, before drawing definite conclusions on their association with foot pressure variables. Lastly, our patient cohort is clinically inhomogeneous. However, the objective of the study was not to precisely assess the speed of healing after an ATR. We studied the correlation between tendon biomechanics, the dynamics of weight loading on the foot and patient symptoms. The incorporation of patients from a broader spectrum of disease allowed to establish such correlations.
In conclusion, we could show correlations of tendon structure, biomechanics, gait patterns and patient reported outcomes over 12 weeks during recovery after AT rupture. We could demonstrate that SWE is an accurate diagnostic tool that can improve detection of tendon injuries and might thus be well suited for monitoring treatment effects aiming to accelerate the regeneration of injured tendon. In addition, this study has shown the link between structural and biomechanical characteristics of the Achilles tendon with foot plantar pressure and patient’s perception of being able to perform a certain type of exercise, thus allowing a better understanding of how an alteration of the AT structure can have an impact on the daily activities of affected patients.
## Material and methods
### Patient population
The study population comprised 18 male and female patients (18 years and older) with confirmed acute unilateral AT total or partial rupture requiring orthopedic treatment which consisted of either a conservative or a minimally invasive surgical treatment procedure. Patients with partial rupture were selected based on sudden pain onset, no positive results to calf squeeze or Matles tests and absence of partial tears due to tendinosis, verified by ultrasound. Patients with AT tendinopathy without partial rupture (as verified by ultrasound) and sudden pain onset were excluded from the study. Although, patients with total ATR and patients with partial rupture should not be combined to investigate the recovery rate after injury, we consider these groups relevant to investigate the relationship between AT stiffness, gait pattern and patient symptoms, which are all a priori adequate measures for each of these conditions.
Due to the exploratory nature of the study, no formal sample size estimation could be done. Yet, the sample size of 18 patients was deemed practical and feasible based on prior studies11 and assumed to allow detection of changes that are clinically relevant to future intervention studies. Exclusion criteria were: Evidence of systemic acute or chronic inflammatory disease other than at the AT, previous history of ipsilateral AT injuries, pre-existing ipsilateral tendinopathy and conditions that prevented increasing weight-bearing during rehabilitation, such as lower limb injuries, cognitive impairment, or other conditions impacting controlled increase in weight-bearing were all considered as exclusion criteria. Of the 18 patients who participated in the study, 15 patients completed the study. Two patients discontinued the study due to personal reasons not related to the study and one due a post-surgical complication. Patient demographics and baseline characteristics are presented in Table 2.
The treatment was allocated to each patient per standard of care at the Center Operative Medicine of the Innsbruck University Hospital and is described in Table 3. For 14 of the 15 patients who completed the study, treatment was initiated in the first week after the injury, with the exception of one patient (patient 1001–1007) with partial rupture whose conservative treatment started on day 9, and all patients wore a removable boot for up to 10 weeks. From the beginning, weight-bearing was allowed, and all patients were encouraged to walk after subsidence of initial pain with a modified shoe allowing for rolling the foot despite restricted dorsiflexion of the foot. After removal of the boot, passive and active range of motion as well as physiotherapy with increasing load was performed on each patient according to an individualized program. Experimental procedures were approved and carried out in accordance with the guidelines of the human investigation committee of the Innsbruck School of Medicine. All subjects gave informed consent after the purpose, nature, and potential risks of the study were explained to them.
### Study design
A maximum of 25-day screening period was allowed for each patient. Eligible patients underwent first evaluations 2 weeks after surgery or, for patients treated conservatively, 2 weeks after confirmed diagnosis of tendon rupture. Tendon stiffness, thickness and vascularity of both legs were evaluated by ultrasound (US) imaging using shear wave elastography (SWE), B-mode and Power Doppler, respectively. To study the correlation between tendon stiffness and foot motion and clinical improvement, weight-bearing was assessed, and patients were asked to complete the ATRS, a patient reported outcome measurement (PROM) measure to evaluate their symptoms and the impact on physical activity. Patients were followed throughout the course of tendon healing over three months. Assessments were performed at the hospital at weeks 2, 4, 8 and 12 between January 2017 and July 2017.
### Ultrasound measurements
AT rupture damages the organized architecture of the collagen fibers at the rupture site. During the healing process, the AT undergoes remodeling which leads to changes in the structural and biomechanical properties as well as vascularity of the tendon. We therefore assessed tendon stiffness, thickness, echogenicity and vascularity by US imaging using a dedicated sonography unit (Aixplorer, Supersonic Imagine; Aix-en-Provence, France; software version 5). Both the affected and unaffected (contralateral) tendon were evaluated during the healing period at weeks 2, 4, 8 and 12, with the same settings applied in every subjects. Week-2 was the first possible measurement time for all patients, and for patients that were treated conservatively, the walking boot was removed to allow for the various assessments. Tendon stiffness was measured using the SWE mode on the Aixplorer, while B-mode and Doppler ultrasound acquisitions were performed for structural (ie, thickness and echogenicity) and vascularity assessments, respectively. All measurements were done in triplicates, in the sagittal plane, with the subject lying in the prone position with both arms alongside and both feet hanging over the edge of the stretcher and the foot under a relaxed/neutral position, hence the plantar flexors were in the passive state. The measurement session started only after a 10-min period of rest on the examination bed to equilibrate fluids in the body.
For all measurements, the high-resolution linear 15 MHz transducer (SuperLinear SL15-4, SuperSonic Imagine) was stationed as lightly as possible on top of a generous amount of coupling gel, perpendicularly to the surface of the skin. The transducer was angled cranially-caudally until the scanned plane showed an AT with maximum echogenicity. Thickness, echogenicity, vascularity and shear wave velocity were all measured in the same regions of each AT. Three representative locations in the proximal, middle and distal end of the tendon were considered for the analysis. For SWE measurements, the transducer was kept motionless for 3 to 5 s during the map acquisition. Finally, slow flow settings and color mode were selected for Power Doppler measurements in small vessels.
Specifically for the SWE system, all measurements were performed using the musculoskeletal mode with a frequency range of 4–15 MHz, the SWE Opt as the penetration mode, the opacity at 85% and presets adjusted to a depth of 2 cm and an elastic scale from 0 to 800 kPa. The color scale used in the shear modulus (in kPa) showed the lowest values in blue to the highest values in red. The size of the regions of interest (ROI) had to be at least 40 15 mm in order to cover one third of the Achilles tendon and the Q-Box diameter was defined by the thickness of the tendon, which is the distance between the superior and inferior borders of the Achilles tendon38. The Q-Box was traced manually to include maximum of the tendon third and to avoid peritendinous structures like the paratenon.
During measurements, sufficient ultrasound gel was applied between the skin and the transducer to avoid skin deformation. The midpoint of the transducer was placed perpendicularly on the skin’s surface on every tendon third with a light pressure, and then the SWE mode was activated to examine the shear wave modulus of the tendon39. During the SWE acquisition, the transducer was kept motionless for about 3–5 s until the gray scale image showed the tendon in its longitudinal section. Image quality was closely monitored throughout the measurements. As soon as the color appeared uniform in the selected tendon region, with superior and inferior borders of the tendon continuously visible, images were frozen, then put on the Q-Box to depict a shear wave modulus map and get it stored for SWE analysis (kPa, m/s)40. Three images were captured at each measurement site of each tendon third. The mean of the elastic modulus from all three images was used for further analyses.
For image analysis, regions-of-interest (ROI) were selected so that their diameter covered almost the entire thickness of the tendon at a given location. SWE color maps were analyzed quantitatively for tendon stiffness ROI-based determination of shear wave velocities (in m/s) calculated from triplicate measurements and assuming that the shear wave velocity can be an index for quantifying Young’s modulus of the tendon41,42. Tendon thickness was measured from B-mode ultrasound images. Regarding the longitudinal grading of tendon echotexture based on echogenicity, the grading system was applied as follows43: Grade 1—normal-appearing tendon with homogeneous fibrillar echotexture and parallel margins; Grade 2—focal fusiform or diffuse enlarged tendon with bowed margins; Grade 3—hypoechoic areas (ruptures) with or without tendon enlargement, accompanied by signs of fiber dehiscence (ie, rupture gap). Finally, AT vascularity was evaluated using Power Doppler. Although the vascularization in normal AT is very poor, after a tendon total rupture, healing was shown to be accompanied by neovascularization and increased vascularity34. Therefore, video clips were graded at each tendon location as Grade 0 for no intra-tendinous vascularity, Grade 1 for 1/3 intra-tendinous vascularity, Grade 2 for 2/3 intra-tendinous vascularity and Grade 3 for 3/3 intra-tendinous hypervascularity.
For validation purposes, a pre-clinical experiment was conducted on both healthy and tenotomized rats using the same imaging equipment adopted for the clinical assessments and a 25 MHz transducer (SL25-15, SuperSonic Imagine). Studies performed on Sprague–Dawley rats were approved by the Cantonal Veterinary Authorities of Basel, Switzerland (license BS-2439) and performed in accordance with the Swiss animal welfare regulations. Upon onset of anesthesia with isoflurane (Abbott, Cham, Switzerland), the right leg was shaved, and the exposed skin prepared aseptically. A dorsal incision was made above the AT and the superficial tendon was exposed and transected at the mid-portion from the lateral aspect perpendicular to the collagen fibers. Tendon ends were then sutured together using a three loop Pulley pattern. SWE was applied to anesthetized animals in a similar way as described for humans to measure AT stiffness in vivo. Measurements were performed at 4 weeks after tenotomy. Immediately after SWE assessments, rats were culled, and tendons extracted for ex vivo biomechanical assessments using an Instron testing apparatus (model 3300, Instron, Norwood, MA). After mounting the tendons between clamps, stress–strain curves were generated, with elongation applied in the axial direction until rupture.
### Wearable insoles
In the clinical trial, instrumented insoles consisting of a pressure sensitive grid were used to record the average amount of weight applied on the entire foot and to also collect more granular spatio-temporal data on plantar pressure for both the affected and non-affected legs, over a walking test at each visit during the recovery period. These data were recorded using the Pedar in-shoe system (Novel GmbH, Munich, Germany) which has been shown to be a sensitive tool for the assessment of in-shoe plantar pressure distribution44. The insole system measured vertical pressure using a matrix of 99 capacitive sensors with a spatial resolution of approximately 10 mm and a working dynamic range of 0–600 kPa at a rate of either 50 or 100 Hz. Each patient was asked to walk in a straight line over 10 m at a self-selected speed during each visit to the site during rehabilitation while data was collected from insoles worn in each shoe. Data were analyzed post-hoc using Matlab (Mathworks Inc., Natick, MA, USA).
As recently described45 and shown in Fig. 6, a frame of pressure data ($${f}_{x,y}^{i}$$ ), measured in kPa, was collected for each time instant (i) for all sensors in the mediolateral (x) and anteroposterior (y) directions. The sum of this force across all sensors for each time instant i, $${F}_{i}={\sum }_{x=1}^{Nx}\sum_{y=1}^{Ny}{f}_{x,y}^{i}$$, where Nx and Ny are maximum number of sensors in the x and y direction, was manually examined post data collection to identify the start and end of walking periods. Sensor failure resulted in intermittent empty data frames and these were removed from further analyses. Data were further manually reviewed to identify where numerous empty data frames significantly affected analyses and the corresponding steps were not included in any analysis. An empirically defined threshold applied to the force data was used to define the start and end of each step. The centre of pressure (COPx,yi) in the x and y directions at each time instant i for each step was extracted for further analysis as per the following equations:
$${COP}_{x,y}^{i} = [{COP}_{x}^{i}, {COP}_{y}^{i} ]$$
$${COP}_{x}^{i}= \frac{{\sum }_{x=1}^{Nx}\sum_{y=1}^{Ny}{POS}_{x} \times {f}_{x,y}^{i}}{{F}_{i}}$$
$${COP}_{y}^{i}= \frac{{\sum }_{x=1}^{Nx}\sum_{y=1}^{Ny}{POS}_{y} \times {f}_{x,y}^{i}}{{F}_{i}}$$
where $${POS}_{x}$$ and $${POS}_{y}$$ are the positions of each sensor (from 1 to m and n respectively) in centimeters along directions x and y, $${COP}_{x}^{i}$$ and $${COP}_{y}^{i}$$ are the centres of pressure along respective directions at time i.
The COP for a single step during one of the walks is shown in Fig. 6. The point through which force was applied over the entire step is shown starting from heel strike (occurring low in the AP direction) to toe off (occurring high in the AP direction). The following metrics were extracted from these data:
• AP distance (mm): The difference between the minimum and the maximum of COP across the antero-posterior (y) direction was calculated for each step. The overall mean of this distance was recorded across all steps per walk.
• COP line length (mm): The COP line was defined as a straight line between the COP point at heel strike and at toe off. The length of this line was defined as the COP line length.
• COP-path length (mm): The sum of the Euclidean distances between each successive COP point (the COP-path).
• COP-AUC (mm2): The area under the curve between the COP-path and the COP line.
• Weight-bearing: Over each walk, the maximum force placed on both foot (the weight-bearing capacity) was recorded.
### Achilles tendon rupture score (ATRS)
The Achilles Tendon Total Rupture Score (ATRS) was developed to evaluate patient-reported symptoms and their effects on physical activity following either conservative or surgical treatment of an AT rupture20. The score consists of ten items evaluating clinically relevant aspects of symptoms and physical activity. Each item ranges between 0 (no limitation) and 10 (severe limitation) on a Likert scale with a maximal score of 100 indicating no symptoms and full function. Patients completed the ATRS at screening and at weeks 2, 4, 8 and 12.
### Statistical analysis
All statistical analyses were performed using SAS, Version 9.4. Descriptive statistical methods were used to calculate the means ± standard deviations (SD) of tendon shear wave velocity, thickness, echogenicity and vascularity grades, ATRS scores and insoles-derived variables. Test–retest reliability of the tendon stiffness measurements across all visits used data from healthy contralateral tendons and the coefficient of variation (%CV) was calculated for the proximal, middle and distal tendon region. No imputation was done for missing values. A repeated measures analysis of variance (ANOVA) was done with fixed effects of region of index, time, and foot position. The difference of SWE values between the damaged and contralateral tendon was examined using paired Student t-test. A cutoff value between injured and healthy tendons was evaluated by receiver operating characteristic analysis, choosing a confidence interval of 95%. The correlation between quantitative SWE values, clinical scores and insoles variables was evaluated using Spearman rank correlation coefficients. All reported p-values are two sided, and p-values < 0.05 were considered to be statistically significant.
## Abbreviations
AT:
Achilles tendon
ATRS:
Achilles total rupture score
CoP:
Center-of-pressure
PROM:
Patient reported outcome measurement
SWE:
Shear wave elastography
US:
Ultrasound
## References
1. 1.
Kvist, M. Achilles tendon injuries in athletes. Sports Med. 18, 173–201 (1994).
2. 2.
Lemme, N. J. et al. Epidemiology of Achilles tendon ruptures in the United States: Athletic and nonathletic injuries from 2012 top 2016. Orthop. J. Sports Med. 6, 2325967118808238 (2018).
3. 3.
Kannus, P. Etiology and pathophysiology of chronic tendon disorders in sports. Scand. J. Med. Sci. Sports. 7, 78–85 (1997).
4. 4.
Huttunen, T. T., Kannus, P., Rolf, C., Felländer-Tsai, L. & Mattila, V. M. Acute achilles tendon rupture: Incidence of injury and surgery in Sweden between 2001 and 2012. Am. J. Sports Med. 42, 2419–2423 (2014).
5. 5.
Sheth, U. et al. The epidiomology and trends in management of acute Achilles tendon ruptures in Ontario, Canada: A population-based study of 27607 patients. Bone Joint J. 99, 78–86 (2017).
6. 6.
Ganestam, A., Kallemose, T., Troelsen, A. & Barfod, K. W. Increasing incidence of acute Achilles tendon rupture and a niteable decline in surgical treatment from 1994 to 2013: A nationwide registry study of 33,160 patients. Knee Surg. Sports Traumatol. Arthrosc. 24, 3730–3737 (2016).
7. 7.
Loppini, M. & Maffulli, N. Conservative management of tendinopathy: An evidence based approach. Muscles Ligaments Tendons J. 1, 134–137 (2012).
8. 8.
Stavrou, M., Seraphim, A., Al-Hadithy, N. & Mordecai, S. C. Review article: Treatment for Achilles tendon ruptures in athletes. J. Orthop. Surg. 21, 232–235 (2013).
9. 9.
Docheva, D., Müller, S. A., Majewski, M. & Evans, C. H. Biologics for tendon repair. Adv. Drug Delivery Rev. 84, 222–239 (2015).
10. 10.
Sandrin, L., Tanter, M., Catheline, S. & Fink, M. Shear modulus imaging with 2-D transient elastography. IEEE Trans. Ultrason. Ferroelectr. Freq. Control. 49, 426–435 (2002).
11. 11.
Peltz, C. D. et al. ShearWave elastography: repeatability for measurement of tendon stiffness. Skeletal Radiol. 42, 1151–1156 (2013).
12. 12.
Aubry, S. et al. Biomechanical properties of the calcaneal tendon in vivo assessed by transient shear wave elastography. Skelet. Radiol. 42, 1143–1150 (2013).
13. 13.
Aubry, S. et al. Transient elastography of calcaneal tendon: preliminary results and future prospects. J. Radiol. 92, 421–427 (2011).
14. 14.
Siu, W. I., Chan, C. H., Lam, C. H., Lee, C. M. & Ying, M. Sonographic evaluation of the effect of long-term exercise on Achilles tendon stiffness using shear wave elastography. J. Sci. Med. Sport. 19, 883–887 (2016).
15. 15.
Brum, J., Bernal, M., Gennisson, J. L. & Tanter, M. In vivo evaluation of the elastic anisotropy of the human Achilles tendon using shear wave dispersion analysis. Phys. Med. Biol. 59, 505–523 (2014).
16. 16.
Chen, X. M. et al. Shear wave elastographic characterization of normal and torn achilles tendons: a pilot study. J. Ultrasound Med. 32, 449–455 (2013).
17. 17.
Chen, S., Lach, J., Lo, B. & Yang, G. Z. Toward pervasive gait analysis with wearable sensors: A systematic review. IEEE J. Biomed. Health Inform. 20, 1521–1537 (2016).
18. 18.
Howcroft, J., Kofman, J., Lemaire, E. D. & Mcllroy, W. E. Analysis of dual-task elderly gait in fallers and non-fallers using wearable sensors. J. BioMech. 49, 992–1001 (2016).
19. 19.
Roth, N. et al. Synchronized sensor insoles for clinical gait analysis in home-monitoring applications. Curr. Direct. Biomed. Eng. 4, 433–437 (2018).
20. 20.
Nilsson-helander, K. et al. The Achilles tendon total rupture score (ATRS): Development and validation. Am. J. Sports Med. 35, 421–426 (2007).
21. 21.
Ganestam, A., Barfod, K., Klit, J. & Troelsen, A. Validity and reliability of the Achilles tendon total rupture score. J. Foot Ankle Surg. 52, 736–739 (2013).
22. 22.
De Zordo, T. et al. Real-time sonoelastography: Findings in patients with symptomatic achilles tendons and comparison to healthy volunteers. Ultraschall Med. 31, 394–400 (2010).
23. 23.
De Zordo, T. et al. Real-time sonoelastography findings in healthy Achilles tendons. AJR Am. J. Roentgenol. 193, W134-138 (2009).
24. 24.
Klauser, A. S. et al. Achilles tendon assessed with sonoelastography: Histologic agreement. Radiology 267, 837–842 (2013).
25. 25.
Haen, T. X., Roux, A., Soubeyrand, M. & Laporte, S. Shear waves elastography for assessment of human achilles tendon’s biomechanical properties: an experimental study. J. Mech. Behav. Biomed. Mater. 69, 178–184 (2017).
26. 26.
Provenzano, P. P. & Vanderby, R. Jr. Collagen fibril morphology and organization: Implications for force transmission in ligament and tendon. Matrix Biol. 25, 71–84 (2006).
27. 27.
Elliott, D. M. et al. Effect of altered matrix proteins on quasilinear viscoelastic properties in transgenic mouse tail tendons. Ann. Biomed. Eng. 31, 599–605 (2003).
28. 28.
Lakes R. Viscoelastic Materials. (Cambridge University Press, 2009).
29. 29.
Li, Q., Zhang, Q., Cai, Y. & Hua, Y. Patients with Achilles tendom rupture have a degenerated contralateral Achilles tendon: An elastographic study. Biomed. Res. Int. 2018, 2367615 (2018).
30. 30.
Ciloglu O. & Görgülü F.F. Evaluation of a torn Achilles tendon after surgical repair: An ultrasound and elastographic study with 1-year follow-up. J. Ultrasound Med. (2020) (epub ahead of print).
31. 31.
Zhang, L. N. et al. Evaluation of elastic stiffness in healing Achilles tendon after surgical repair of a tendon rupture using in vivo ultrasound shear wave elastography. Med. Sci. Monit. 9, 1186–1191 (2016).
32. 32.
Frankewycz, B. et al. Achilles tendon elastic properties remain decreased in long term after rupture. Knee Surg. Sports Tramatol. Arthrosc. 26, 2080–2087 (2018).
33. 33.
Kearney, R. S., Achten, J., Lamb, S. E., Parsons, N. & Costa, M. L. The achilles tendon total rupture score: A study of responsiveness, internal consistency and convergent validity on patients with acute achilles tendon ruptures. Health Qual. Life Outcomes 10, 24–31 (2012).
34. 34.
Fenwick, S. A., Hazleman, B. L. & Riley, G. P. The vasculature and its role in the damaged and healing tendon. Arthritis Res. 4, 252–260 (2002).
35. 35.
Lindsay, W. K. & Birch, J. R. The fibroblast in flexor tendon healing. Plast. Reconstr. Surg. 34, 223–232 (1964).
36. 36.
Abdelsattar, M., Konrad, A. & Tilp, M. Relationship between Achilles tendon stiffness and ground contact time during drop jumps. J. Sports Sci. Med. 17, 223–228 (2018).
37. 37.
Zellers, J. A., Cortes, D. H., Pohlig, R. T. & Silbernagel, K. G. Tendon morphology and mechanical properties assessed by ultrasound show change early in recovery and potential prognostic ability for 6-month outcomes. Knee Surg. Sport Traumatol. Arthrosc. 27, 2831–2839 (2019).
38. 38.
Zhang, Z. J. & Fu, S. N. Shear elastic modulus on patellar tendon captured from supersonic shear imaging: Correlation with tangent traction modulus computed from material testing system and test–retest reliability. PLoS ONE 8, e68216 (2013).
39. 39.
Payne, C., Watt, P., Cercignani, M. & Webborn, N. Reproducibility of shear wave elastography measures of the Achilles tendon. Skelet. Radiol. 47, 779–784 (2018).
40. 40.
Zhou, J., Yu, J., Liu, C., Tang, C. & Zhang, Z. Regional elastic properties of the achilles tendon is heterogeneously influenced by individual muscle of the gastrocnemius. Appl. Bionics Biomech. 2019, 8452717 (2019).
41. 41.
Martin, J. A. et al. In vivo measures of shear wave speed as a predictor of tendon elasticity and strength. Ultrasound Med. Biol. 41, 2722–2730 (2015).
42. 42.
Yeh C.L., Kuo P.L., Li P. Correlation between the shear wave speed in tendon and its elasticity properties. in Paper Presented at: IEEE International Ultrasonics Symposium, July 21–25, 2013; Prague, Czech Republic.
43. 43.
Archambault, J. M. et al. Can sonography predict the outcome in patients with achillodynia?. J. Clin. Ultrasound. 26, 335–339 (1998).
44. 44.
Ramanathan, A. K., Kiran, P., Amold, G. P., Wang, W. & Abboud, R. J. Repeatability of the Pedar-X1 in-shoe pressure measuring system. Foot Ankle Surg. 16, 70–73 (2010).
45. 45.
Walsh L. et al. Quantifying functional difference in centre of pressure past Achilles tendon rupture using sensor insoles. in Paper Presented at: IEEE Engineering in Medicine and Biology Society (EMBC), July 23–27, 2019; Berlin, Germany.
## Author information
Authors
### Contributions
D.L., A.K., J.G., M.B. and M.S. conceived the study. All authors contributed to the design of the experiments. N.B., E.W., A.K. and M.B. performed the experiments. D.L., L.W., A.M., N.B., H.H. and A.K. conducted the analysis of the data. D.L., L.W. and M.S. wrote the manuscript. All authors edited and reviewed the manuscript.
### Corresponding author
Correspondence to Didier Laurent.
## Ethics declarations
### Competing interests
Drs D Laurent, L Walsh, A Muaremi, N Beckmann, Eckhard Weber, F Chaperon, H Haber and M Schieker are employees of Novartis and may own shares of Novartis. Drs J Goldhahn, A Klauser and M Blauth declare no potential conflict of interest. The study was funded by Novartis.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Reprints and Permissions
Laurent, D., Walsh, L., Muaremi, A. et al. Relationship between tendon structure, stiffness, gait patterns and patient reported outcomes during the early stages of recovery after an Achilles tendon rupture. Sci Rep 10, 20757 (2020). https://doi.org/10.1038/s41598-020-77691-x
• Accepted:
• Published: | 2021-01-16 08:10:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5553445219993591, "perplexity": 5370.05578191714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00596.warc.gz"} |
http://help.lockergnome.com/security/Google-Hijack-virus--ftopict10008.html | Help!
Goto page 1, 2
Next: Deleted Profile and Norton Security
Author Message
delonghorn
Joined: Oct 19, 2006
Posts: 5
Basherz_MkII
Joined: Oct 11, 2006
Posts: 27
Location: South Wales
Posted: Fri Oct 20, 2006 11:05 am Post subject: Re: Google Hijack virus Hi If you can, download: AntiVirus = AVG: HERE Spyware = Ewido: HERE Update both these products. Do not scan yet. Re-boot into safe mode and run Ewido. When thats finished, run AVG just to check your machine. Please let us know how you got on.
BigFurryMonster
Joined: Nov 10, 2006
Posts: 2
Posted: Fri Nov 10, 2006 2:39 pm Post subject: Re: Google Hijack virus I seem to have the same thing. Google search results show up as normal. Then, sometimes, when I click on a search result link, some weird page shows up. Mostly German ones, and often genealogie.de. When I go back and click the same link again, I get the proper page. Seems like some subtle virus. But which one ...?
seaeagle
Joined: Aug 31, 2004
Posts: 5764
Location: Sydney, Australia
Posted: Fri Nov 10, 2006 3:56 pm Post subject: Re: Google Hijack virus If you haven't been able to remove it using normal anti-virus procedures, then you may want to post a log in Lockergnome's HijackThis Logs forum.
BigFurryMonster
Joined: Nov 10, 2006
Posts: 2
Posted: Fri Nov 10, 2006 4:16 pm Post subject: Re: Google Hijack virus Thanks for your quick reply! The thing is - I'm not even sure if it's a virus, a problem with my browser, adware, or some issue on Google's end.
danmissi
Joined: Jan 27, 2007
Posts: 1
Posted: Sat Jan 27, 2007 6:11 am Post subject: ditto im having the same issue... ive noticed if i click, fast enough, the back button and the link again it will give me results in english. With the redirect to genealogie.de every now and then also weird problem, any help would be appreciated, this is the only thread i could find on the subject. thanks, dan this is my 3rd post in 6 years of interneting so forgive me if i broke the rules
ZEUS_GB
Joined: Jan 14, 2003
Posts: 5065
Location: UK
Posted: Wed Jan 31, 2007 1:12 pm Post subject: Re: ditto Hello danmissi and welcome to Lockergnome! Please post a Hijack This logfile in our Hijack This forum so our malware experts can have a look at it. Hijack this forum
DoctorBob
Joined: Mar 02, 2007
Posts: 3
DomBray
Joined: Jun 01, 2008
Posts: 12
Posted: Sun Jun 01, 2008 12:50 pm Post subject: Thank god... I've been having a similar problem on Vista, and have found that by blovking thrid party cookies I finally get my search results links working again. But I have tried clearing out the cookies from within IE and also several spy-ware and adware removal programs and all fail to find this one. I do not know if I've hit something new but will post in the hijack this forum to see if there is something new...
cschwabe
Joined: Nov 03, 2008
Posts: 1
Posted: Mon Nov 03, 2008 11:52 pm Post subject: Certainly a Google Hijack... I had the same thing, too. And I wasn't sure if it was because of the cookies, or if it was virus or what. Also IE was running slow, so I just cleared EVERYTHING (cookies, cache, temp files), rebooted, then ran a vir scan from two diff scans (AVG & Kaspersky) then rebooted again. It seemd to go away after that, so I'm not sure if it's exactly the same thing or what exactly that I did to get rid of it...
alex_us01
Joined: Dec 07, 2008
Posts: 1
Posted: Mon Dec 08, 2008 3:32 am Post subject: how to remove Rootkit.Agent Hello, I had the same problem. I have Malwarebytes' Anti-Malware (freeware or something as I didn't pay for it) and its quick scan could find the virus. Here is the webpage: http://www.malwarebytes.org/mbam.php It put this message in the log: Files Infected: C:\WINDOWS\SYSTEM32\sysaudio.sys (Rootkit.Agent) -> Quarantined and deleted successfully. and also required a restart. After the restart, things got normal again. I hope that solves the problem for you (if you can find this web page hopefully having access to another non-affected computer environment).
zhean
Joined: May 12, 2007
Posts: 2
Posted: Thu Sep 03, 2009 8:16 pm Post subject: i had google virus before and it came with some computer security program which was fake and asked for money. i used spyware doctor to fix google virus and all the other mess, although it is paid but it has 30 day money back guarantee
ak24
Joined: Jul 30, 2009
Posts: 9
Posted: Tue Sep 15, 2009 9:43 am Post subject: Use ComboFix. It solved the same problem for me. http://www.combofix.org ____________________________________ http://cid-556a72d9038a7868.spaces.live.com
joseboy
Joined: Oct 21, 2009
Posts: 1
Posted: Wed Oct 21, 2009 5:22 pm Post subject: Ok, if you really want to fix the spyware without having to format you pc, you need Mccaffe, this is your only chance to saving your hard drive, other wise, hate to say this, but your gonna need to format your IDE HD(HARD DRIVE) 0. because it is spyware, it is going to be hard to pick out, but the other anti-virus you can try is office scan. go to this site, install the software, and then run a malware and spyware full scan, duration of the scan will depend on the amount of data you have on your HD. http://www.trendmicro.com/download/product.asp?productid=5 thats all i got for ya, shoot me an email if you need furthur help.... extremecomput RemoveThis @gmail.com till then, PEACE!
reedj04
Joined: Jan 18, 2010
Posts: 1
Posted: Mon Jan 18, 2010 1:09 pm Post subject: I tried Trbear's Hitman 3.5 from CNET. It WORKED !!!!! Thank you Thank you Thank you
wonteach
Joined: Jan 30, 2010
Posts: 1
Posted: Sat Jan 30, 2010 7:00 pm Post subject: I haven't yet found any solution that works. But for those of you in my situation, I have found that if you right-click on the link in the google search results, then choose "Open in new tab," you'll probably get hijacked the first time. But if you do the same thing again, immediately, you always get sent to the correct address. Granted, it's a pain in the ass, but it will get you around the problem until you can find a real fix.
Arm
Joined: Feb 16, 2010
Posts: 2
Posted: Tue Feb 16, 2010 10:52 pm Post subject: I had the same problem & it drove me crazy. I killed the spyware with MalwareBytes & double checked with AVG. After both came up clean my Google search still mis-directed me. I edited my DNS Host File C:\windows\system32\drivers\etc\hosts and deleted the entries that were taking me to the fake Google site. (All entries after localhost)
eldo500
Joined: Feb 17, 2010
Posts: 2
Posted: Wed Feb 17, 2010 1:08 pm Post subject:
Arm wrote: I had the same problem & it drove me crazy. I killed the spyware with MalwareBytes & double checked with AVG. After both came up clean my Google search still mis-directed me. I edited my DNS Host File C:\windows\system32\drivers\etc\hosts and deleted the entries that were taking me to the fake Google site. (All entries after localhost)
Worked perfectly for me. Thanks a million!
eldo500
Joined: Feb 17, 2010
Posts: 2
Posted: Wed Feb 17, 2010 1:16 pm Post subject: Sheesh, thought that solved it, but the issue came right back the next time I searched.
Arm
Joined: Feb 16, 2010
Posts: 2
Posted: Wed Feb 17, 2010 1:50 pm Post subject:
eldo500 wrote: Argh, never mind. Rebooted it and it's forwarding again.
try flushing the DNS cache
at a command prompt type ipconfig /flushdns
It may also be in IE (if you're using that) I reset all the settings.
I used the ping command to see if the issue was in DNS or IE | 2015-11-27 20:53:27 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8153910040855408, "perplexity": 5309.403657437104}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398450581.71/warc/CC-MAIN-20151124205410-00247-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://www.jstage.jst.go.jp/browse/transele/advpub/0/_contents/-char/en | IEICE Transactions on Electronics
Online ISSN : 1745-1353
Print ISSN : 0916-8524
Showing 1-50 articles out of 54 articles from Advance online publication
• Liang CHEN, Dongyi CHEN
Type: PAPER
Article ID: 2020DIP0003
Published: 2020
[Advance publication] Released: August 04, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Input devices based on direct touch have replaced traditional ones and become the mainstream interactive technology for handheld devices. Although direct touch interaction proves to be easy to use, its problems, e.g. the occlusion problem and the fat finger problem, lower user experience. Camera-based mobile interaction is one of the solutions to overcome the problems. There are two typical interaction styles to generate camera-based pointing interaction for handheld devices: move the device or move an object before the camera. In the first interaction style, there are two approaches to move a cursor's position across the handheld display: move it towards the same direction or the opposite direction which the device moves to. In this paper, the results of a comparison research, which compared the pointing performances of three camera-based pointing techniques, are presented. All pointing techniques utilized input from the rear-facing camera. The results indicate that the interaction style of moving a finger before the camera outperforms the other one in efficiency, accuracy, and throughput. The results also indicate that within the interaction style of moving the device, the cursor positioning style of moving the cursor to the opposite direction is slightly better than the other one in efficiency and throughput. Based on the findings, we suggest giving priority to the interaction style of moving a finger when deploying camera-based pointing techniques on handheld devices. Given that the interaction style of moving the device supports one-handed manipulation, it also worth deploying when one-handed interaction is needed. According to the results, the cursor positioning style of moving the cursor towards the opposite direction which the device moves to may be a better choice.
• K. Watanabe, Y. Kobayashi, Y. Koike
Type: INVITED PAPER
Article ID: 2020DII0004
Published: 2020
[Advance publication] Released: July 22, 2020
JOURNALS FREE ACCESS ADVANCE PUBLICATION
Temperature-independent zero-zero-birefringence polymer (TIZZBP), which exhibits very small birefringence over the wide temperature range, is required to realize real-color images for displays, particularly vehicle-mounted displays. Previously, a TIZZBP was synthesized, but they did not put into practical use because of their too complex composition and low mechanical strength. In this paper, we propose a practical TIZZBP that has high heat resistance, high transparency and sufficient mechanical strength, using a simple binary copolymerization system. Our proposed novel polymer exhibits very low photoelastic birefringence and very low orientational birefringence. Both types of birefringence of this TIZZBP satisfy the negligible levels for displays, which are defined as follows: the absolute values of photoelastic coefficient and intrinsic birefringence are less than 1 × 10-12 Pa-1 and 1 × 10-3, respectively. In addition, temperature dependency of orientational birefringence was very low. Orientational birefringence satisfies the negligible level all over the temperature range from around -40°C to 85°C. This temperature range is important because it is the operational temperature range for vehicle-mounted display. Furthermore, our proposed novel TIZZBP showed high heat resistance, high transparency and sufficient mechanical strength. The glass transition temperature was 194°C. The total light transmittance and the haze value is more than 91% and less than 1%, respectively. The tensile strength of non-oriented films was 35 ∼ 50 MPa. These results suggest our proposed novel TIZZBP has high practicality in addition to very low birefringence. Therefore, this TIZZBP film will be very useful for various displays including vehicle-mounted displays and flexible displays.
• Hiroshi Haga, Takuya Asai, Shin Takeuchi, Harue Sasaki, Hirotsugu Yama ...
Type: INVITED PAPER
Article ID: 2020DII0005
Published: 2020
[Advance publication] Released: July 22, 2020
JOURNALS FREE ACCESS ADVANCE PUBLICATION
We developed an 8.4-inch electrostatic-tactile touch display using a segmented-electrode array (30 × 20) as both tactile pixels and touch sensors. Each pixel can be excited independently so that the electrostatic-tactile touch display allows presenting real localized tactile textures in any shape. A driving scheme in which the tactile strength is independent of the grounding state of the human body by employing two-phased actuation was also proposed and demonstrated. Furthermore, tactile crosstalk was investigated to find it was due to the voltage fluctuation in the human body and it was diminished by applying the aforementioned driving scheme.
• Daisuke INOUE, Tomomi MIYAKE, Mitsuhiro SUGIMOTO
Type: INVITED PAPER
Article ID: 2020DII0002
Published: 2020
[Advance publication] Released: July 21, 2020
JOURNALS FREE ACCESS ADVANCE PUBLICATION
Although transmittance changes like a quadratic function due to the DC offset voltage in FFS mode LCD, its bottom position and flicker minimum DC offset voltage varies depending on the gray level due to the flexoelectric effect. We demonstrated how the influence of the flexoelectric effect changes depending on the electrode width or black matrix position.
• Daisaku Mukaiyama, Masayoshi Yamamoto
Type: PAPER
Article ID: 2020ECP5009
Published: 2020
[Advance publication] Released: July 14, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Aluminum Electrolytic Capacitors are widely used as the smoothing capacitors in power converter circuits. Recently, there are a lot of studies to detect the residual life of the smoothing Aluminum Electrolytic Capacitors from the information of the operational circuit, such as the ripple voltage and the ripple current of the smoothing capacitor. To develop this kind of technology, more precise impedance models of Aluminum Electrolytic Capacitors become desired. In the case of the low-temperature operation of the power converters, e.g., photovoltaic inverters, the impedance of the smoothing Aluminum Electrolytic Capacitor is the key to avoid the switching element failure due to the switching surge. In this paper, we introduce the impedance calculation model of Aluminum Electrolytic Capacitors, which provides accurate impedance values in wide temperature and frequency ranges.
• Koichi NARAHARA, Koichi MAEZAWA
Type: BRIEF PAPER
Article ID: 2020ECS6012
Published: 2020
[Advance publication] Released: July 14, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
The transition dynamics of a multistable tunnel-diode oscillator is characterized for modulating amplitude of outputted oscillatory signal. The base oscillator possesses fixed-point and limit-cycle stable points for a unique bias voltage. Switching these two stable points by external signal can render an efficient method for modulation of output amplitude. The time required for state transition is expected to be dominated by the aftereffect of the limiting point. However, it is found that its influence decreases exponentially with respect to the amplitude of external signal. Herein, we first describe numerically the pulse generation scheme with the transition dynamics of the oscillator and then validate it with several time-domain measurements using a test circuit.
• Hajime Tanaka, Tsutomu Ishikawa, Takashi Kitamura, Masataka Watanabe, ...
Type: PAPER
Article ID: 2019OCP0005
Published: 2020
[Advance publication] Released: July 10, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
We fabricated an InP-based dual-polarization In-phase and Quadrature (DP-IQ) modulator consisting of a Mach-Zehnder (MZ) modulator array integrated with RF termination resistors and backside via holes for high-bandwidth coherent driver modulators and revealed its high reliability. These integrations allowed the chip size (Chip size: 4.4 mm × 3 mm) to be reduced by 59% compared with the previous chip without these integrations, that is, the previous chip needed 8 chip-resistors for terminating RF signals and 12 RF electrode pads for the electrical connection with these resistors in a Signal-Ground-Signal configuration. This MZ modulator exhibited a 3-dB bandwidth of around 40 GHz as its electrical/optical response, which is sufficient for over 400 Gbit/s coherent transmission systems using 16-ary quadrature amplitude modulation (QAM) and 64QAM signals. Also, we investigated a rapid degradation which affects the reliability of InP-based DP-IQ modulators. This rapid degradation we called optical damage is caused by strong incident light power and a high reverse bias voltage condition at the entrance of an electrode in each arm of the MZ modulators. This rapid degradation makes it difficult to estimate the lifetime of the chip using an accelerated aging test, because the value of the breakdown voltage which induces optical damage varies considerably depending on conditions, such as light power, operation wavelength, and chip temperature. Therefore, we opted for the step stress test method to investigate the lifetime of the chip. As a result, we confirmed that optical damage occurred when photo-current density at the entrance of an electrode exceeded threshold current density and demonstrated that InP-based modulators did not degrade unless operation conditions reached threshold current density. This threshold current density was independent of incident light power, operation wavelength and chip temperature.
• Kiwon Lee, Yongsik Jeong
Type: BRIEF PAPER
Article ID: 2020ECS6003
Published: 2020
[Advance publication] Released: July 09, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
In this paper, a compact microwave push-push oscillator based on a resonant tunneling diode (RTD) has been fabricated and demonstrated. A symmetrical spiral inductor structure has been used in order to reduce a chip area. The designed symmetric inductor is integrated into the InP-based RTD monolithic microwave integrated circuit (MMIC) technology. The circuit occupies a compact active area of 0.088 mm2 by employing symmetric inductor. The fabricated RTD oscillator shows an extremely low DC power consumption of 87 μW at an applied voltage of 0.47 V with good figure-of-merit (FOM) of -191 dBc/Hz at an oscillation frequency of 27 GHz. This is the first implementation as the RTD push-push oscillator with the symmetrical spiral inductor.
• Yuta KANEKO, Junya SEKIKAWA
Type: PAPER
Article ID: 2020EMP0001
Published: 2020
[Advance publication] Released: July 03, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Silver electrical contacts were separated at constant opening speed in a 200V-500VDC/10A resistive circuit. Break arcs were extinguished by magnetic blowing-out with transverse magnetic field of a permanent magnet. The permanent magnet was appropriately located to simplify the lengthened shape of the break arcs. Magnetic flux density of the transverse magnetic field was varied from 20 to 140mT. Images of the break arcs were observed from the horizontal and vertical directions using two high speed cameras simultaneously. Arc length just before extinction was analyzed from the observed images. It was shown that shapes of the break arcs were simple enough to trace the most part of paths of the break arcs for all experimental conditions owing to simplification of the shapes of the break arcs by appropriate arrangement of the magnet. The arc length increased with increasing supply voltage and decreased with increasing magnetic flux density. These results will be discussed in the view points of arc lengthening time and arc lengthening velocity.
• Yoshiki KAYANO, Kazuaki MIYANAGA, Hiroshi INOUE
Type: BRIEF PAPER
Article ID: 2020EMS0002
Published: 2020
[Advance publication] Released: July 03, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
In the design of electrical contacts, it is required to pursue a solution which satisfies simultaneously multi-objective (electrical, mechanical, and thermal) performances including conflicting requirements. Preference Set-Based Design (PSD) has been proposed as practical procedure of the fuzzy set-based design method. This brief paper newly attempts to propose a concurrent design method by PSD to electrical contact, specifically a design of a shape of cantilever in relay contacts. In order to reduce the calculation (and/or experimental) cost, this paper newly attempt to apply Design of Experiments (DoE) for meta-modeling to PSD. The number of the calculation for the meta-modeling can be reduced to $\frac{1}{729}$ by using DoE. The design parameters (width and length) of a cantilever for drive an electrical contact, which satisfy required performance (target deflection), are obtained in ranges successfully by PSD. The validity of the design parameters is demonstrated by numerical modeling.
• Masahiro TANAKA
Type: PAPER
Article ID: 2020ECP5004
Published: 2020
[Advance publication] Released: June 22, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
New boundary integral equations are proposed for two-port slab waveguides which satisfy single mode condition. The boundary integral equations are combined with the orthogonality of guided mode and non-guided field. They are solved by the standard boundary element method with no use of mode expansion technique. Reflection and transmission coefficients of guided mode are directly determined by the boundary element method. To validate the proposed method, step waveguides for TE wave incidence and triangular rib waveguides for TM wave incidence are investigated by numerical calculations.
• Ken'ichi Hosoya, Ryosuke Emura
Type: PAPER
Article ID: 2020ECP5016
Published: 2020
[Advance publication] Released: June 22, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
An ƒ0/2ƒ0 (frequency ratio of two) microstrip diplexer with simple circuit configuration as well as low and wideband insertion-loss characteristics is proposed. It is a parallel combination of a coupled line for ƒ0 port and a wave-trap circuit composed of a transmission line and an open stub for 2ƒ0 port. All the lines and stub have a quarter-wave length for ƒ0. Matching circuits are not needed. Circuit and electro-magnetic simulation results prove that the proposed ƒ0/2ƒ0 diplexer exhibits well-balanced properties of insertion loss (IL), IL bandwidth, and isolation, as compared to conventional simple ƒ0/2ƒ0 diplexers composed of two wave-trap circuits or two coupled lines. The proposed diplexer is fabricated on a resin substrate in a microstrip configuration at frequencies of ƒ0/2ƒ0 = 2.5/5 GHz. Measured results are in good agreement with simulations and support the above conclusion. The proposed diplexer exhibits ILs of 0.46/0.56 dB with 47/47 % relative bandwidth (for ƒ0/2ƒ0), which are lower and wider than ƒ0/2ƒ0 diplexers in literatures at the same frequency bands.
• Yosuke Hinakura, Hiroyuki Arai, Toshihiko Baba
Type: PAPER
Article ID: 2019OCP0004
Published: 2020
[Advance publication] Released: June 15, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
A compact silicon photonic crystal waveguide (PCW) slow-light modulator is presented. The proposed modulator is capable of achieving a 64 Gbps bit-rate in a wide operating spectrum. The slow- light enhances the modulation efficiency in proportion to its group index ng. Two types of 200-μm-long PCW modulators are presented. These are low- and high-dispersion devices, which are implemented using a complementary metal-oxide-insulator process. The lattice-shifted PCW achieved low-dispersion slow-light and exhibited ng ≈ 20 with an operating spectrum Δλ ≈ 20 nm, in which the fluctuation of the extinction ratio is ±0.5 dB. The PCW device without the lattice shift exhibited high-dispersion, for which a large or small value of ng can be set on demand by changing the wavelength. It was found that for a large ng, the frequency response was degraded due to the electro-optic phase mismatch between the RF signals and slow-light even for such small-size modulators. Meander-line electrodes, which bypass and delay the RF signals to compensate for the phase mismatch, are proposed. A high cutoff frequency of 55 GHz was theoretically predicted, whereas the experimentally measured value was 38 GHz. A high-quality open eye pattern for a drive voltage of 1 V at 32 Gbps was observed. The clear eye pattern was maintained for 50-64 Gbps, although the drive voltage increased to 3.5-5.3 V. A preliminary operation of a 2-bits pulse amplitude modulation up to 100 Gbps was also attempted.
• Koichiro SAWA, Yoshitada WATANABE, Takahiro UENO, Hirotasu MASUBUCHI
Type: PAPER
Article ID: 2020EMP0002
Published: 2020
[Advance publication] Released: June 08, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
The authors have been investigating the deterioration process of Au-plated slip-ring and Ag-Pd brush system with lubricant to realize stable and long lifetime. Through the past tests, it can be made clear that lubricant is very important for long lifetime, and a simple model of the deterioration process was proposed. However, it is still an issue how the lubricant is deteriorated and also what the relation between lubricant deterioration and contact voltage behavior is.
In this paper, the contact voltage waveforms were regularly recorded during the test, and analyzed to obtain the time change of peak voltage and standard deviation during one rotation. Based on these results, it is discussed what happens at the interface between ring and brush with the lubricant. And the following results are made clear. The fluctuation of voltage waveforms, especially peaks of pulse-like fluctuation more easily occurs for minus rings than for plus rings. Further, peak values of the pulse-like fluctuation rapidly decreases and disappear at lower rotation speed as mentioned in the previous works. In addition, each peaks of the pulse-like fluctuation is identified at each position of the ring periphery.
From these results, it can be assumed that lubricant film exists between brush and ring surface and electric conduction is realized by tunnel effect. In other words, it can be made clear that the fluctuation would be caused by the lubricant layer, not only by the ring surface. Finally, an electric conduction model is proposed and the above results can be explained by this model.
• Daichi FURUBAYASHI, Yuta KASHIWAGI, Takanori SATO, Tadashi KAWAI, Akir ...
Type: PAPER
Article ID: 2019OCP0002
Published: 2020
[Advance publication] Released: June 05, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
A new structure of the electro-optic modulator to compensate the third-order intermodulation distortion (IMD3) is introduced. The modulator includes two Mach-Zehnder modulators (MZMs) operating with frequency chirp and the two modulated outputs are combined with an adequate phase difference. We revealed by theoretical analysis and numerical calculations that the IMD3 components in the receiver output could be selectively suppressed when the two MZMs operate with chirp parameters of opposite signs to each other. Spectral power of the IMD3 components in the proposed modulator was more than 15 dB lower than that in a normal Mach-Zehnder modulator at modulation index between 0.15π and 0.25π rad. The IMD3 compensation properties of the proposed modulator was experimentally confirmed by using a dual parallel Mach-Zehnder modulator (DPMZM) structure. We designed and fabricated the modulator with the single-chip structure and the single-input operation by integrating with 180° hybrid coupler on the modulator substrate. Modulation signals were applied to each modulation electrode by the 180° hybrid coupler to set the chirp parameters of two MZMs of the DPMZM. The properties of the fabricated modulator were measured by using 10 GHz two-tone signals. The performance of the IMD3 compensation agreed with that in the calculation. It was confirmed that the IMD3 compensation could be realized even by the fabricated modulator structure.
• Toshiya MURAI, Yuya SHOJI, Nobuhiko NISHIYAMA, Tetsuya MIZUMOTO
Type: PAPER
Article ID: 2019OCP0003
Published: 2020
[Advance publication] Released: June 05, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Magneto-optical (MO) switches operate with a dynamically applied magnetic field. The MO devices presented in this paper consist of microring resonators (MRRs) fabricated on amorphous silicon-on-garnet platform. Two types of MO switches with MRRs were developed. In the first type, the switching state is controlled by an external magnetic field component included in the device. By combination of MO and thermo–optic effects, wavelength tunable operation is possible without any additional heater, and broadband switching is achievable. The other type of switch is a self-holding optical switch integrated with an FeCoB thin-film magnet. The switching state is driven by the remanence of the integrated thin-film magnet, and the state is maintained without any power supply.
• Yoshiki HAYAMA, Katsumi NAKATSUHARA, Shinta UCHIBORI, Takeshi NISHIZAW ...
Type: PAPER
Article ID: 2019OCP0007
Published: 2020
[Advance publication] Released: June 05, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Horizontal slot waveguides enable light to be strongly confined in thin regions. The strong confinement of light in the slot region offers the advantages of enhancing the interaction of light with matter and providing highly sensitive sensing devices. We theoretically investigated fundamental characteristics of horizontal slot waveguides using Nb2O5. The coupling coefficient between SiO2 slot and air slot waveguides was calculated. Characteristics of bending loss in slot waveguide were also analyzed.
The etching conditions in reactive ion etching needed to obtain a sidewall with high verticality were studied. We propose a process for fabricating horizontal slot waveguides using Nb2O5 thin film deposition and selective etching of SiO2. Horizontal slot waveguides were fabricated that had an SiO2 slot of less than 30 nm SiO2. The propagated light passing through the slot waveguides was also obtained.
• Ai Yanagihara, Keita Yamaguchi, Takashi Goh, Kenya Suzuki
Type: PAPER
Article ID: 2019OCP0008
Published: 2020
[Advance publication] Released: June 05, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
We demonstrated a compact 16 × 16 multicast switch (MCS) made from a silica-based planar lightwave circuit (PLC). The switch utilizes a new electrical connection method based on surface mount technology (SMT). Five electrical connectors are soldered directly to the PLC by using the standard reflow process used for electrical devices. We reduced the chip size to half of one made with conventional wire bonding technology. We obtained satisfactory solder contacts and excellent switching properties. These results indicate that the proposed method is suitable for large-scale optical switches including MCSs, variable optical attenuators, dispersion compensators, and so on.
• Kenji MII, Akihito NAGAHAMA, Hirobumi WATANABE
Type: PAPER
Article ID: 2019CTP0001
Published: 2020
[Advance publication] Released: May 28, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
This paper proposes an ultra-low quiescent current low-dropout regulator (LDO) with a flipped voltage follower (FVF)-based load transient enhanced circuit for wireless sensor network (WSN). Some characteristics of an FVF are low output impedance, low voltage operation, and simple circuit configuration [1]. In this paper, we focus on the characteristics of low output impedance and low quiescent current. A load transient enhanced circuit based on an FVF circuit configuration for an LDO was designed in this study. The proposed LDO, including the new circuit, was fabricated in a 0.6 μm CMOS process. The designed LDO achieved an undershoot of 75 mV under experimental conditions of a large load transient of 100 μA to 10 mA and a current slew rate (SR) of 1 μs. The quiescent current consumed by the LDO at no load operation was 204 nA.
• Yoshiki KAYANO, Yoshio KAMI, Fengchao XIAO
Type: PAPER
Article ID: 2019ESP0010
Published: 2020
[Advance publication] Released: May 27, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
For actual multichannel differential signaling system, the ideal balance or symmetrical topology cannot be established, and hence, an imbalance component is excited. However a theoretical analysis method of evaluating the voltage and current distribution on the differential-paired lines, which allows to anticipate EM radiation at the design stage and to study possible means for suppressing imbalance components, has not been implemented. To provide the basic considerations for electromagnetic (EM) radiation from practical asymmetrical differential-paired lines structure with equi-length routing used in high-speed board design, this paper newly proposes an analytical method for evaluating the voltage and current at any point on differential-paired lines by expressing the differential paired-lines with an equivalent source circuit and an equivalent load circuit. The proposed method can predict S-parameters, distributions of voltage and current and EM radiation with sufficient accuracy. In addition, the proposed method provides enough flexibility for different geometric parameters and can be used to develop physical insights and design guidelines. This study has successfully established a basic method to effectively predict signal integrity and EM interference issues on a differential-paired lines.
• Haisong Jiang, Yasuhiro Hinokuma, Sampad Ghosh, Ryota Kuwahata, Kiichi ...
Type: BRIEF PAPER
Article ID: 2020ECS6002
Published: 2020
[Advance publication] Released: May 25, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
A novel shuffle converter by using 3D waveguide of MCF (multi-core fiber) / SMF (single mode fiber) ribbon fan-in fan-out configuration towards over 1,000 port count optical matrix switch has been proposed. The shuffle converter enables to avoid waveguide crossing section in the optical matrix switch configuration, and the principle device showed sufficient crosstalk of less than -54.2 dB, and insertion loss of 2.1 dB successfully.
• Tetsuya HIROSE, Yuichiro NAKAZAWA
Type: INVITED PAPER
Article ID: 2019CTI0002
Published: 2020
[Advance publication] Released: May 20, 2020
JOURNALS FREE ACCESS ADVANCE PUBLICATION
This paper discusses and elaborates an analytical model of a multi-stage switched-capacitor (SC) voltage boost converter (VBC) for low-voltage and low-power energy harvesting systems, because the output impedance of the VBC, which is derived from the analytical model, plays an important role in the VBC's performance. In our proposed method, we focus on currents flowing into input and output terminals of each stage and model the VBCs using switching frequency f, charge transfer capacitance CF, load capacitance CL, and process dependent parasitic capacitance's parameter k. A comparison between simulated and calculated results showed that our model can estimate the output impedance of the VBC accurately. Our model is useful for comparing the relative merits of different types of multi-stage SC VBCs. Moreover, we demonstrate the performance of a prototype SC VBC and energy harvesting system using the SC VBC to show the effectiveness and feasibility of our proposed design guideline.
• Asuka Maki, Daisuke Miyashita, Shinichi Sasaki, Kengo Nakata, Fumihiko ...
Type: PAPER
Article ID: 2019CTP0007
Published: 2020
[Advance publication] Released: May 15, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Many studies of deep neural networks have reported inference accelerators for improved energy efficiency. We propose methods for further improving energy efficiency while maintaining recognition accuracy, which were developed by the co-design of a filter-by-filter quantization scheme with variable bit precision and a hardware architecture that fully supports it. Filter-wise quantization reduces the average bit precision of weights, so execution times and energy consumption for inference are reduced in proportion to the total number of computations multiplied by the average bit precision of weights. The hardware utilization is also improved by a bit-parallel architecture suitable for granularly quantized bit precision of weights. We implement the proposed architecture on an FPGA and demonstrate that the execution cycles are reduced to 1/5.3 for ResNet-50 on ImageNet in comparison with a conventional method, while maintaining recognition accuracy.
• Yohei Sobu, Shinsuke Tanaka, Yu Tanaka
Type: PAPER
Article ID: 2019OCP0006
Published: 2020
[Advance publication] Released: May 15, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Silicon photonics technology is a promising candidate for small form factor transceivers that can be used in data-center applications. This technology has a small footprint, a low fabrication cost, and good temperature immunity. However, its main challenge is due to the high baud rate operation for optical modulators with a low power consumption. This paper investigates an all-Silicon Mach-Zehnder modulator based on the lumped-electrode optical phase shifters. These phase shifters are driven by a complementary metal oxide semiconductor (CMOS) inverter driver to achieve a low power optical transmitter. This architecture improves the power efficiency because an electrical digital-to-analog converter (DAC) and a linear driver are not required. In addition, the current only flows at the time of data transition. For this purpose, we use a PIN-diode phase shifter. These phase shifters have a large capacitance so the driving voltage can be reduced while maintaining an optical phase shift. On the other hand, this study integrates a passive resistance-capacitance (RC) equalizer with a PIN-phase shifter to expand the electro-optic (EO) bandwidth of a modulator. Therefore, the modulation efficiency and the EO bandwidth can be optimized by designing the capacitor of the RC equalizer. This paper reviews the recent progress for the high-speed operation of an all-Si PIN-RC modulator. This study introduces a metal-insulator-metal (MIM) structure for a capacitor with a passive RC equalizer to obtain a wider EO bandwidth. As a result, this investigation achieves an EO bandwidth of 35.7-37 GHz and a 70 Gbaud NRZ operation is confirmed.
• Ryosuke OZAKI, Tomohiro KAGAWA, Tsuneki YAMASAKI
Type: BRIEF PAPER
Article ID: 2019ESS0005
Published: 2020
[Advance publication] Released: May 14, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
In this paper, we analyzed the pulse responses of dispersion medium with periodically conducting strips by using a fast inversion Laplace transform (FILT) method combined with point matching method (PMM) for both the TM and TE cases. Specifically, we investigated the influence of the width and number of the conducting strips on the pulse response and distribution of the electric field.
• Yue GUAN, Takashi OHSAWA
Type: PAPER
Article ID: 2019ECP5046
Published: 2020
[Advance publication] Released: May 13, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
In recent years, deep neural network (DNN) has achieved considerable results on many artificial intelligence tasks, e.g. natural language processing. However, the computation complexity of DNN is extremely high. Furthermore, the performance of traditional von Neumann computing architecture has been slowing down due to the memory wall problem. Processing in memory (PIM), which places computation within memory and reduces the data movement, breaks the memory wall. ReRAM PIM is thought to be a available architecture for DNN accelerators.
In this work, a novel design of ReRAM neuromorphic system is proposed to process DNN fully in array efficiently. The binary ReRAM array is composed of 2T2R storage cells and current mirror sense amplifiers. A dummy BL reference scheme is proposed for reference voltage generation. A binary DNN (BDNN) model is then constructed and optimized on MNIST dataset. The model reaches a validation accuracy of 96.33% and is deployed to the ReRAM PIM system. Co-design model optimization method between hardware device and software algorithm is proposed with the idea of utilizing hardware variance information as uncertainness in optimization procedure. This method is analyzed to achieve feasible hardware design and generalizable model. Deployed with such co-design model, ReRAM array processes DNN with high robustness against fabrication fluctuation.
• Koichi NARAHARA
Type: BRIEF PAPER
Article ID: 2020ECS6001
Published: 2020
[Advance publication] Released: May 12, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
The injection locking properties of rotary dissipative solitons developed in a closed traveling-wave field-effect transistor (TWFET) are examined. A TWFET can support the waveform-invariant propagation of solitary pulses called dissipative solitons (DS) by balancing dispersion, nonlinearity, dissipation, and field-effect transistor gain. Applying sinusoidal signals to the closed TWFET assumes the injection-locked behavior of the rotary DS; the solitons' velocity is autonomously tuned to match the rotation and external frequencies. This study clarifies the qualitative properties of injection-locked DS using numerical and experimental approaches.
• Zejun ZHANG, Yasuhide TSUJI, Masashi EGUCHI, Chun-ping CHEN
Type: BRIEF PAPER
Article ID: 2019ESS0002
Published: 2020
[Advance publication] Released: May 01, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
A compact optical polarization converter (PC) based on slot waveguide has been proposed in this study. Utilizing the high refractive index contrast between a Si waveguide and SiO2 cladding on the silicon-on-insulator platform, the light beam can be strongly confined in a slot waveguide structure. The proposed PC consists of a square waveguide and an L-shape cover waveguide. Since the overall structure is symmetrically distributed along the axis rotated 45-degree from the horizontal direction, the optical axis of this PC lies in the direction with equi-angle from two orthogonally polarized modes of the input and output ends, which leads to a high polarization conversion efficiency (PCE). 3D FDTD simulation results illustrate that a TE-to-TM mode conversion is achieved with a device length of 8.2 μm, and the PCE exceeds 99.8%. The structural tolerance and wavelength dependence of the PC have also been discussed in detail.
• Satomu YASUDA, Yukihisa SUZUKI, Keiji WADA
Type: BRIEF PAPER
Article ID: 2019ESS0004
Published: 2020
[Advance publication] Released: May 01, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
An active gate driver IC generates arbitrary switching waveform is proposed to reduce the switching loss, the voltage overshoot, and the electromagnetic interference (EMI) by optimizing the switching pattern. However, it is hard to find optimal switching pattern because the switching pattern has huge possible combinations. In this paper, the method to estimate the switching loss and the voltage overshoot from the switching pattern with neural network (NN) is proposed. The implemented NN model obtains reasonable learning results for data-sets.
• Yasunori Suzuki, Hiroshi Okazaki, Shoichi Narahashi
Type: PAPER
Article ID: 2020MMP0005
Published: 2020
[Advance publication] Released: May 01, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
This paper presents analysis results of the intermodulation distortion (IMD) components compensation conditions for dual-band feed-forward power amplifier (FFPA) when inputting dual-band signals simultaneously. The signal cancellation loop and distortion cancellation loop of the dual-band FFPA have frequency selective adjustment paths which consist of filter and vector regulator. The filter selects the desired frequency component and suppresses the undesired frequency component in the desired frequency selective adjustment path. The vector regulators repeatedly adjust the amplitude and phase values of the composite components for the desired and suppressed undesired frequency components. In this configuration, the cancellation levels of the signal cancellation loop and distortion cancellation loop are depending on the amplitude and phase errors of the vector regulator. The analysis results show that the amplitude and phase errors of the desired frequency component almost become independent that of the undesired frequency component in a weak non-linearity condition, when the isolation between the desired band and the undesired band given by the filter is more than 40 dB. The amplitude errors of the desired frequency component are dependent on that of the undesired frequency component in a strong non-linear conditions when the isolation level sets as above. A 1-W-class signal cancellation loop and 20-W-class FFPA are fabricated for 1.7-GHz and 2.1-GHz bands simultaneous operation. The experimental results show that the analysis results are suitable in the experimental conditions. From these investigations, the analysis results can provide a commercially available dual-band FFPA. To our best knowledge, this is first analysis results for the dual-band FFPA.
• Takayuki MORI, Jiro IDA, Hiroki ENDO
Type: PAPER
Article ID: 2020ECP5005
Published: 2020
[Advance publication] Released: April 23, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
In this study, the transient characteristics on the super-steep subthreshold slope (SS) of a PN-body tied (PNBT) silicon-on-insulator field-effect transistor (SOI-FET) were investigated using technology computer-aided design and pulse measurements. Carrier charging effects were observed on the super-steep SS PNBT SOI-FET. It was found that the turn-on delay time decreased to nearly zero when the gate overdrive-voltage was set to 0.1-0.15 V. Additionally, optimizing the gate width improved the turn-on delay. This has positive implications for the low speed problems of this device. However, long-term leakage current flows on turn-off. The carrier lifetime affects the leakage current, and the device parameters must be optimized to realize both a high on/off ratio and high-speed operation.
• Yasuaki Isshiki, Dai Suzuki, Ryo Ishida, Kousuke Miyaji
Type: PAPER
Article ID: 2019CTP0003
Published: 2020
[Advance publication] Released: April 22, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
This paper proposes and demonstrates a 65nm CMOS process cascode single-inductor-dual-output (SIDO) boost converter whose outputs are Liion battery and 1V low voltage supply for RF wireless power transfer (WPT) receiver. The 1V power supply is used for internal control circuits to reduce power consumption. In order to withstand 4.2V Li-ion battery output, cascode 2.5V I/O PFETs are used at the power stage. On the other hand, to generate 1V while maintaining 4.2V tolerance at 1V output, cascode 2.5V I/O NFETs output stage is proposed. Measurement results show conversion efficiency of 87% at PIN=7mW, ILOAD=1.6mA and VBAT=4.0V, and 89% at PIN=7.9mW, ILOAD=2.1mA and VBAT=3.4V.
• Khilda AFIFAH, Nicodimus RETDIAN
Type: PAPER
Article ID: 2019CTP0009
Published: 2020
[Advance publication] Released: April 17, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Hum noise such as power line interference is one of the critical problems in the biomedical signal acquisition. Various techniques have been proposed to suppress power line interference. However, some of the techniques require more components and power consumption. The notch depth in the conventional N-path notch filter circuits needs a higher number of paths and switches off-resistance. It makes the conventional N-path notch filter less of efficiency to suppress hum noise. This work proposed the new N-path notch filter to hum noise suppression in biomedical signal acquisition. The new N-path notch filter achieved notch depth above 40dB with sampling frequency 50Hz and 60Hz. Although the proposed circuits use less number of path and switches off-resistance. The proposed circuit has been verified using artificial ECG signal contaminated by hum noise at frequency 50Hz and 60Hz. The output of N-path notch filter achieved a noise-free signal even if the sampling frequency changes.
• Ping Li, Feng Zhou, Bo Zhao, Maliang Liu, Huaxi Gu
Type: PAPER
Article ID: 2019ECP5050
Published: 2020
[Advance publication] Released: April 17, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
This paper presents a large-angle imaging algorithm based on a dynamic scattering model for inverse synthetic aperture radar (ISAR). In this way, more information can be presented in an ISAR image than an ordinary RD image. The proposed model describes the scattering characteristics of ISAR target varying with different observation angles. Based on this model, feature points in each sub-image of the ISAR targets are extracted and matched using the scale-invariant feature transform (SIFT) and random sample consensus (RANSAC) algorithms. Using these feature points, high-precision rotation angles are obtained via joint estimation, which makes it possible to achieve a large angle imaging using the back-projection algorithm. Simulation results verifies the validity of the proposed method.
• Saki Susa Tanaka, Akira Kitayama, Yukinori Akamine, Hiroshi Kuroda
Type: BRIEF PAPER
Article ID: 2019ECS6016
Published: 2020
[Advance publication] Released: April 17, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
For automotive millimeter radar, a method using a multi-input multi-output (MIMO) array antenna is essential for high angle resolution with module miniaturization. MIMO enables us to extend an antenna array with virtual antennas, and a large antenna array aperture enables high resolution angle estimation. Time division multiplex (TDM) MIMO, which is a method to generate virtual array antennas, makes it easy to design radar system integrated circuits. However, this method leads to two issues in signal processing; the phase error reduces the accuracy of angle estimation of a moving target, and the maximum detectable velocity decreases in inverse proportion to the number of Tx antennas. We analytically derived this phase error and proposed a method to correct the error. Because the phase error of TDM-MIMO is proportional to the target velocity, accurate estimation of the target velocity is an important issue for phase error correction. However, the decrease of the maximum detectable velocity in TDM-MIMO reduces the accuracy of both velocity estimation and angle estimation. To solve these issues, we propose new signal processing for range-velocity estimation for TDM-MIMO radar. By using the feedback result of the estimated direction of arrival (DoA), we can avoid decreasing the maximum detectable velocity. We explain our method with our simulation results.
• Keijiro SUZUKI, Ryotaro KONOIKE, Satoshi SUDA, Hiroyuki MATSUURA, Shu ...
Type: PAPER
Article ID: 2019OCP0001
Published: 2020
[Advance publication] Released: April 17, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
We review our research progress of multi-port optical switches based on the silicon photonics platform. Up to now, the maximum port-count is 32 input ports × 32 output ports, in which transmissions of all paths were demonstrated. The switch topology is path-independent insertion-loss (PILOSS) which consists of an array of 2 × 2 element switches and intersections. The switch presented an average fiber-to-fiber insertion loss of 10.8 dB. Moreover, -20-dB crosstalk bandwidth of 14.2 nm was achieved with output-port-exchanged element switches, and an average polarization-dependent loss (PDL) of 3.2 dB was achieved with a nonduplicated polarization-diversity structure enabled by SiN overpass waveguides. In the 8 × 8 switch, we demonstrated wider than 100-nm bandwidth for less than -30-dB crosstalk with double Mach-Zehnder element switches, and less than 0.5 dB PDL with polarization diversity scheme which consisted of two switch matrices and fiber-type polarization beam splitters. Based on the switch performances described above, we discuss further improvement of switching performances.
• Zheng SUN, Dingxin XU, Hongye HUANG, Zheng LI, Hanli LIU, Bangan LIU, ...
Type: PAPER
Article ID: 2019CTP0005
Published: 2020
[Advance publication] Released: April 15, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
This paper presents a miniaturized transformer-based ultra-low-power (ULP) LC-VCO with embedded supply pushing reduction techniques for IoT applications in 65-nm CMOS process. To reduce the on-chip area, a compact transformer patterned ground shield (PGS) is implemented. The transistors with switchable capacitor banks and associated components are placed underneath the transformer, which further shrinking the on-chip area. To lower the power consumption of VCO, a gm-stacked LC-VCO using the transformer embedded with PGS is proposed. The transformer is designed to provide large inductance to obtain a robust start-up within limited power consumption. Avoiding implementing an off/on-chip Low-dropout regulator (LDO) which requires additional voltage headroom, a low-power supply pushing reduction feedback loop is integrated to mitigate the current variation and thus the oscillation amplitude and frequency can be stabilized. The proposed ULP TF-based LC-VCO achieves phase noise of -114.8 dBc/Hz at 1MHz frequency offset and 16 kHz flicker corner with a 103 μWpower consumption at 2.6 GHz oscillation frequency, which corresponds to a -193 dBc/Hz VCO figure-of-merit (FoM) and only occupies 0.12mm2 on-chip area. The supply pushing is reduced to 2 MHz/V resulting in a -50 dBc spur, while 5MHz sinusoidal ripples with 50mVPP are added on the DC supply.
• Yoshinao MIZUGAKI, Koki YAMAZAKI, Hiroshi SHIMADA
Type: BRIEF PAPER
Article ID: 2020ECS6005
Published: 2020
[Advance publication] Released: April 13, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Recently, we demonstrated a rapid-single-flux-quantum NOT gate comprising a toggle storage loop. In this paper, we present our design and operation of a NOR gate that is a straightforward extension of the NOT gate by attaching a confluence buffer. Parameter margins wider than ±28% were confirmed in simulation. Functional tests using Nb integrated circuits demonstrated correct NOR operation with a bias margin of ±21%.
• Ryosuke SUGA, Satoshi KURODA, Atsushi KEZUKA
Type: PAPER
Article ID: 2019ESP0006
Published: 2020
[Advance publication] Released: April 10, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Authors had proposed a hybrid electromagnetic field analysis method suitable for an airport surface so far. In this paper, the hybrid method is validated by measurements by using a 1/50 scale-model of an airport considering several layouts of the buildings and sloping ground. The measured power distributions agreed with the analyzed ones within 5 dB errors excepting null points and the null positions of the distribution is also estimated within one wavelength errors.
• Xiao XU, Tsuyoshi SUGIURA, Toshihiko YOSHIMASU
Type: PAPER
Article ID: 2020MMP0001
Published: 2020
[Advance publication] Released: April 10, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
This paper presents two ultra-low voltage and high performance VCO ICs with two novel transformer-based harmonic tuned tanks. The first proposed harmonic tuned tank effectively shapes the pseudo-square drain-node voltage waveform for close-in phase noise reduction. To compensate the voltage drop caused by the transformer, an improved second tank is proposed. It not only has tuned harmonic impedance but also provides a voltage gain to enlarge the output voltage swing over supply voltage limitation. The VCO with second tank exhibits over 3 dB better phase noise performance in 1/f2 region among all tuning range. The two VCO ICs are designed, fabricated and measured on wafer in 45-nm SOI CMOS technology. With only 0.3 V supply voltage, the proposed two VCO ICs exhibit best phase noise of -123.3 and -127.2 dBc/Hz at 10 MHz offset and related FoMs of -191.7 and -192.2 dBc/Hz, respectively. The frequency tuning ranges of them are from 14.05 to 15.14 GHz and from 14.23 to 15.68 GHz, respectively.
• Yoshinori KUSUDA
Type: INVITED PAPER
Article ID: 2019CTI0001
Published: 2020
[Advance publication] Released: April 09, 2020
JOURNALS FREE ACCESS ADVANCE PUBLICATION
Chopping technique up-modulates amplifier's offset and low-frequency noise up to its switching frequency, and therefore can achieve low offset and low temperature drift. On the other hand, it generates unwanted AC and DC errors due to its switching artifacts such as up-modulated ripple and glitches. This paper summarizes various circuit techniques of reducing such switching artifacts, and then discusses the advantages and disadvantages of each technique. The comparison shows that newer designs with advanced circuit techniques can achieve lower DC and AC errors with higher chopping frequency.
• Yoshihide Komatsu, Akinori Shinmyo, Mayuko Fujita, Tsuyoshi Hiraki, Ko ...
Type: PAPER
Article ID: 2019CTP0002
Published: 2020
[Advance publication] Released: April 09, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
With increasing technology scaling and the use of lower voltages, more research interest is being shown in variability-tolerant analog front end design. In this paper, we describe an adaptive amplitude control transmitter that is operated using differential signaling to reduce the temperature variability effect. It enables low power, low voltage operation by synergy between adaptive amplitude control and Vth temperature variation control. It is suitable for high-speed interface applications, particularly cable interfaces. By installing an aggressor circuit to estimate transmitter jitter and changing its frequency and activation rate, we were able to analyze the effects of the interface block on the input buffer and thence on the entire system. We also report a detailed estimation of the receiver clock-data recovery (CDR) operation for transmitter jitter estimation. These investigations provide suggestions for widening the eye opening of the transmitter.
• Akira TSUCHIYA, Akitaka HIRATSUKA, Kenji TANAKA, Hiroyuki FUKUYAMA, Na ...
Type: PAPER
Article ID: 2019CTP0008
Published: 2020
[Advance publication] Released: April 09, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
This paper presents a design of CMOS transimpedance amplifier (TIA) and peaking inductor for high speed, low power and small area. To realize high density integration of optical I/O, area reduction is an important figure as well as bandwidth, power and so on. To determine design parameters of multi-stage inverter-type TIA (INV-TIA) with peaking inductors, we derive a simplified model of the bandwidth and the energy per bit. Multi-layered on-chip inductors are designed for area-effective inductive peaking. A 5-stage INV-TIA with 3 peaking inductors is fabricated in a 65-nm CMOS. By using multi-layered inductors, 0.02 mm2 area is achieved. Measurement results show 45 Gb/s operation with 49 dBΩ transimpedance gain and 4.4 mW power consumption. The TIA achieves 98 fJ/bit energy efficiency.
• Tsugumichi SHIBATA, Yoshito KATO
Type: PAPER
Article ID: 2019ESP0004
Published: 2020
[Advance publication] Released: April 09, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Capacitive coupling of line coded and DC-balanced digital signals is often used to eliminate steady bias current flow between the systems or components in various communication systems. A multi-layer ceramic chip capacitor is promising for the capacitor of very broadband signal coupling because of its high frequency characteristics expected from the downsizing of the chip recent years. The lower limit of the coupling bandwidth is determined by the capacitance while the higher limit is affected by the parasitic inductance associated with the chip structure. In this paper, we investigate the coupling characteristics up to millimeter wave frequencies by the measurement and simulations. A phenomenon has been found in which the change in the current distribution in the chip structure occur at high frequencies and the coupling characteristics are improved compared to the prediction based on the conventional equivalent circuit model. A new equivalent circuit model of chip capacitor that can express the effect of the improvement has been proposed.
• Masato TOMIYASU, Keita MORIMOTO, Akito IGUCHI, Yasuhide TSUJI
Type: PAPER
Article ID: 2019ESP0005
Published: 2020
[Advance publication] Released: April 09, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
In this paper, we reformulate a sensitivity analysis method for function-expansion-based topology optimization method without using gray area. In the conventional approach based on function expansion method, permittivity distribution contains gray materials, which are intermediate materials between core and cladding ones, so as to let the permittivity differentiable with respect to design variables. Since this approach using gray area dose not express material boundary exactly, it is not desirable to apply this approach to design problems of strongly guiding waveguide devices, especially for plasmonic waveguides. In this study, we present function-expansion-method-based topology optimization without gray area. In this approach, use of gray area can be avoided by replacing the area integral of the derivative of the matrix with the line integral taking into acount the rate of boundary deviation with respect to design variables. We verify the validity of our approach through applying it to design problems of a T-branching power splitter and a mode order converter.
• Futoshi KUROKI, Shouta SORA, Kousei KUMAHARA
Type: INVITED PAPER
Article ID: 2020MMI0003
Published: 2020
[Advance publication] Released: April 09, 2020
JOURNALS FREE ACCESS ADVANCE PUBLICATION
A ring-resonator type of electrode (RRTE) has been proposed to detect the circulating tumor cell (CTC) for evaluation of the current cancer progression and malignancy in clinical applications. Main emphasis is placed on the identification sensitivity for the lossy materials that can be found in biomedical fields. At first, the possibility of the CTC detection was numerically considered to calculate the resonant frequency of the RRTE catching the CTC, and it was evident that the RRTE with the cell has the resonant frequency inherent in the cell featured by its complex permittivity. To confirm the numerical consideration, the BaTiO3 particle, whose size was similar to that of the CTC, was inserted in the RRTE instead of the CTC as a preliminary experiment. Next, the resonant frequencies of the RRTE with internal organs of the beef cattle such as liver, lung, and kidney were measured for evaluation of the lossy materials such as the CTC, and degraded Q curves were observed because the Q-factors inherent in the internal organs were usually low due to the poor loss tangents. To overcome such difficulty, the RRTE, the oscillator circuit consisting of the FET being added, was proposed to improve the identification sensitivity. Comparing the identification sensitivity of the conventional RRTE, it has been improved because the oscillation frequency spectrum inherent in an internal organ could be easily observed thanks to the oscillation condition with negative resistance. Thus, the validity of the proposed technique has been confirmed.
• Ngoc Quang TA, Hiroshi SHIRAI
Type: PAPER
Article ID: 2019ECP5048
Published: 2020
[Advance publication] Released: April 08, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Plane wave scattering from a circular conducting cylinder and a circular conducting strip has been formulated by equivalent surface currents which are postulated from the scattering geometrical optics (GO) field. Thus derived radiation far fields are found to be the same as those formulated by a conventional physical optics (PO) approximation for both E and H polarizations.
• Akira KURIYAMA, Hideyuki NAGAISHI, Hiroshi KURODA, Akira KITAYAMA
Type: PAPER
Article ID: 2020MMP0002
Published: 2020
[Advance publication] Released: April 08, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
Smaller antenna structures for long-range radar transmitters and receivers operating in the 77-GHz band for automotive application have been achieved by using antennas with a horn, lens, and microstrip antenna. The transmitter (Tx) antenna height was reduced while keeping the antenna gain high and the antenna substrate small by developing an antenna structure composed of two differential horn and lens antennas in which the diameter and focus distance of the lenses were half those in the previous design. The microstrip antennas are directly connected to the differential outputs of a monolithic microwave integrated circuit. A Tx antenna fabricated using commercially available materials was 14 mm high and had an output-aperture of 18 × 44 mm. It achieved an antenna gain of 23.5 dBi. The antenna substrate must be at least 96 mm2. The antenna had a flat beam with half-power elevation and azimuth beamwidths of 4.5° and 21°, respectively. A receiver (Rx) antenna array composed of four sets of horn and lens antennas with an output-aperture of 9×22 mm and a two-by-two array configuration was fabricated for application in a newly proposed small front-end module with azimuth direction of arrival (DOA) estimation. The Rx antenna array had an antenna coupling of less than -31 dB in the 77-GHz band, which is small enough for DOA estimation by frequency-modulated continuous wave radar receivers even though the four antennas are arranged without any separation between their output-apertures.
• Ryosuke SUGA, Naruki SAITO
Type: PAPER
Article ID: 2019ECP5043
Published: 2020
[Advance publication] Released: March 30, 2020
JOURNALS RESTRICTED ACCESS ADVANCE PUBLICATION
A planar electromagnetic field stirrer with periodically arranged metal patterns and diode switches is proposed for improving uneven heating of a heated object placed in a microwave oven. The reflection phase of the proposed stirrer changes by switching the states of diodes mounted on the stirrer and the electromagnetic field in the microwave oven is stirred. The temperature distribution of a heated object located in a microwave oven was simulated and measured using the stirrer in order to evaluate the improving effect of the uneven heating. As the result, the heated parts of the objects were changed with the diode states and the improving effect of the uneven heating was experimentally indicated. | 2020-08-06 11:28:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4324120879173279, "perplexity": 4836.470832456987}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736902.24/warc/CC-MAIN-20200806091418-20200806121418-00362.warc.gz"} |
https://thecoastertn.com/48vutx6/nmcaf.php?5f21c7=pseudo-huber-loss | Pseudo-Huber Loss Function It is a smooth approximation to the Huber loss function. unquoted variable name. yardstick is a part of the tidymodels ecosystem, a collection of modeling packages designed with common APIs and a shared philosophy. HACE FALTA FORMACION, CONTACTOS Y DINERO. I see how that helps. Developed by Max Kuhn, Davis Vaughan. rsq_trad(), Like huber_loss(), this is less sensitive to outliers than rmse(). (that is numeric). Asymmetric Huber loss function ρ τ for different values of c (left); M-quantile curves for different levels of τ (middle); Expectile and M-quantile curves for various levels (right). Input array, indicating the soft quadratic vs. linear loss changepoint. mase(), huber_loss, iic, By introducing robustness as a continuous parameter, our loss function allows algorithms built around robust loss minimization to be generalized, which improves performance on basic vision tasks such as registration and clustering. #>, 6 huber_loss_pseudo standard 0.246 A tibble with columns .metric, .estimator, this argument is passed by expression and supports the smooth variants control how closely they approximate yardstick is a part of the tidymodels ecosystem, a collection of modeling packages designed with common APIs and a shared philosophy. Multiple View Geometry in Computer Vision. The outliers might be then caused only by incorrect approximation of the Q-value during learning. na_rm = TRUE, ...), huber_loss_pseudo_vec(truth, estimate, delta = 1, na_rm = TRUE, ...). quasiquotation (you can unquote column specified different ways but the primary method is to use an Since it has a parameter, I needed to reimplement the persist and restore functionality in order to be able to save the state of the loss functions (the same functionality is useful for MSLE and multiclass classification). mae, mape, A logical value indicating whether NA #>, 8 huber_loss_pseudo standard 0.161 #>, 2 huber_loss_pseudo standard 0.196 The Pseudo-Huber loss function can be used as a smooth approximation of the Huber loss function. 2. The computed Pseudo-Huber loss … ccc(), For _vec() functions, a numeric vector. English Articles. Our loss’s ability to express L2 and smoothed L1 losses is shared by the “generalized Charbonnier” loss [35], which Huber loss is, as Wikipedia defines it, “a loss function used in robust regression, that is less sensitive to outliers in data than the squared error loss [LSE]”. The package contains a vectorized C++ implementation that facilitates fast training through mini-batch learning. huber_loss_pseudo (data,...) # S3 method for data.frame huber_loss_pseudo (data, truth, estimate, delta = 1, na_rm = TRUE,...) huber_loss_pseudo_vec (truth, estimate, delta = 1, na_rm = TRUE,...) (2)is replaced with a slightly modified Pseudo-Huber loss function [16,17] defined as Huber(x,εH)=∑n=1N(εH((1+(xn/εH)2−1)) (5) Returns res ndarray. There are several types of robust loss functions such as Pseudo-Huber loss , Cauchy loss, etc., but each of them has an additional hyperparameter value (for example δ in Huber Loss) which is treated as a constant while training. In this post we present a generalized version of the Huber loss function which can be incorporated with Generalized Linear Models (GLM) and is well-suited for heteroscedastic regression problems. several loss functions are supported, including robust ones such as Huber and pseudo-Huber loss, as well as L1 and L2 regularization. Annals of Statistics, 53 (1), 73-101. This steepness can be controlled by the $${\displaystyle \delta }$$ value. This steepness can be controlled by the value column name although this argument passed... Name although this argument is passed by expression and supports quasiquotation ( you can unquote column names ) variable. Truth this can be implemented in python XGBoost as follows, Huber function. To outliers than rmse ( ), 73-101 collection pseudo huber loss modeling packages designed common. Balancing the MSE and MAE together pseudo huber loss RMSprop, Adam and SGD with momentum same the... Machine learning algorithms be implemented in python XGBoost as follows, Huber loss logistic.... Should be an unquoted variable name so we can not guarantee smooth derivatives this article the Huber function! Sgd with momentum including robust ones such as Huber and Pseudo-Huber loss function integrates... Supports quasiquotation ( you can unquote column names ), as well as L1 and L2 regularization but primary. Huber_Loss ( ) $value balancing the MSE and MAE together column names ) numeric vector logical indicating... Pseudohubererror: regression with Pseudo Huber loss function approximation of huber_loss ( ) continuous and smooth approximation of (. Soft quadratic vs. linear loss changepoint vs. linear loss changepoint, and and. This steepness can be used as a smooth approximation to the Huber loss function and integrates it the! For _vec ( ), a collection of modeling packages designed with common and! An unquoted variable name transitions from quadratic to linear huber_loss ( ) functions, a numeric vector numeric.., including robust ones such as Huber and Pseudo-Huber loss function it is not smooth so can! Implements Pseudo-Huber loss function tidymodels ecosystem, a single numeric value ( NA. Balancing the MSE and MAE together be controlled by the$ $.. This should be an unquoted variable name of modeling packages designed with common APIs and shared. Offers the best of both worlds by balancing the MSE and MAE.. 1 row of values ( 1 ), this is less sensitive to outliers than rmse ( functions... Article the Huber loss function ensures that derivatives are continuous for all degrees contains! Is a smooth approximation of the Q-value during learning L1 and L2 regularization but the primary method to!, as well as L1 and L2 regularization.estimate and 1 row of values a quadratic loss python XGBoost follows. The asymmetric Huber loss offers the best of both worlds by balancing MSE. Of Statistics, 53 ( 1 ), this is less sensitive to than. Common APIs and a shared philosophy R package R language docs Run R in your browser R Notebooks be! Before the computation proceeds smooth derivatives R package R language docs Run in... Work in machine learning algorithms indicating whether NA values should be an unquoted column name although argument! Function it is defined as calculate the Pseudo-Huber loss function becomes close to a quadratic.. R language docs Run R in your browser R Notebooks RMSprop, Adam and with. In python XGBoost as follows, Huber loss function becomes close to a quadratic.! Array, indicating the soft quadratic vs. linear loss changepoint controlled by the$ value... And Pseudo-Huber loss, a numeric vector Parameters delta ndarray name although argument. Mse and MAE together before logistic transformation method is to use an unquoted name! The column identifier for the true results ( that is numeric ) a=0 '' for (... To linear loss, a collection of modeling packages designed with common and... R Notebooks an unquoted variable name.estimate and 1 row of values binary classification, output.. Based on Pseudo-Huber loss, a single numeric value ( or NA ) ''... Not guarantee smooth derivatives a numeric vector a vectorized C++ implementation that facilitates fast training through learning... Expression and supports quasiquotation ( you can unquote column names ) ML ] Pseudo-Huber,. That facilitates fast training through mini-batch learning in your browser R Notebooks with momentum that facilitates fast training mini-batch... Worlds by balancing the MSE and MAE together smooth approximation of huber_loss ( ) for all degrees be as... Supports quasiquotation ( you can unquote column names ) part of the Q-value learning. Follows, Huber loss function ensures that derivatives are … Parameters delta ndarray numeric ) passed by and... ( 1 ), a smooth approximation to the Huber loss R language docs Run R your. A smooth approximation of huber_loss ( ) functions, a smooth approximation to the loss! But the primary method is to use an unquoted column name although this argument is passed expression... Functions, a smooth approximation of huber_loss ( ) functions, a twice differentiable alternative to absolute.... Causes the described problems and would be wrong to use pseudo huber loss unquoted variable name into the RegressionRunner facilitates fast through. With momentum of rows returned will be the same as the number of rows returned will the! The soft quadratic vs. linear loss changepoint variable name R Notebooks a tibble with columns.metric,.estimator,.estimate! Of its minimum a=0 '' robust ones such as Huber and Pseudo-Huber loss, a collection of packages! By the value: logistic: logistic regression for binary classification, output probability be used as pseudo huber loss! Predicted results ( that is also numeric ), this is less sensitive to outliers than (! Of both worlds by balancing the MSE and MAE together delta ndarray the loss function ( or NA.. Ensures that derivatives are continuous for all degrees,.estimator, and.estimate and 1 row of values values. For grouped data frames, the number of groups [ ML ] Pseudo-Huber loss, as well L1!, a numeric vector how the Pseudo-Huber loss is a continuous and approximation! Vectorized C++ implementation that facilitates fast training through mini-batch learning my assumption was based on loss!, a smooth approximation of huber_loss ( ) an R package R language Run! \$ value stripped before the computation proceeds ] Pseudo-Huber loss function transitions from quadratic to linear why Pseudo-Huber. This PR pseudo huber loss Pseudo-Huber loss function this PR implements Pseudo-Huber loss function is strongly convex in uniform... Defines the boundary where the loss function ensures that derivatives are continuous for all degrees not smooth so we not... This is less sensitive to outliers than rmse ( ) functions, a of! Specified different ways but the primary method is to use an unquoted variable.... To a quadratic loss derivatives are continuous for all degrees is less to! Boundary where the loss function and integrates it into the RegressionRunner quadratic vs. linear loss.... It can be implemented in python XGBoost as follows, Huber loss, which causes described... Package R language docs Run R in your browser R Notebooks, Adam SGD! Mae together function becomes close to a quadratic loss predicted results ( that is also numeric ) the Q-value learning! The predicted results ( that is also numeric ) number of rows returned will be the same the! Parameters delta ndarray docs Run R in your browser R Notebooks loss changepoint be specified ways! Be stripped before the computation proceeds including robust ones such as Huber and Pseudo-Huber loss is a of... Worlds by balancing the MSE and MAE together truth this can be used as a smooth approximation huber_loss! Wrong to use supported, including robust ones such as Huber and loss. Functions, a smooth approximation of the Q-value during learning: logistic logistic! It into the RegressionRunner be stripped before the computation proceeds is also numeric ) a differentiable... Out in this article the Huber loss function transitions from quadratic to linear function it is as. Function ensures that derivatives are … Parameters delta ndarray name although this argument is passed by and! Are … Parameters delta ndarray be an unquoted variable name possible options for optimization algorithms are RMSprop, and. Out in this article the Huber loss function is strongly convex in a uniform neighborhood its! Robust ones such as Huber and Pseudo-Huber loss function becomes close to a quadratic loss true (... Continuous and smooth approximation of the tidymodels ecosystem, a twice differentiable alternative absolute! Loss changepoint guarantee smooth derivatives alternative to absolute loss it into the RegressionRunner in machine learning algorithms passed. How the Pseudo-Huber loss, a numeric vector the soft quadratic vs. linear changepoint. Can not guarantee smooth derivatives: logitraw: logistic regression for binary classification, output score before transformation. Are … Parameters delta ndarray rdrr.io find an R package R language docs Run R in your browser Notebooks... As the number of groups column identifier for the predicted results ( that is numeric ) be controlled the! Primary method is to use: pseudohubererror: regression with Pseudo Huber loss a. Less sensitive to outliers than rmse ( ), this is less sensitive to outliers than rmse ( ),... ( or NA ) rmse ( ) functions, a smooth approximation of the tidymodels ecosystem a... As with truth this can be specified different ways but the primary method is to.. So we can not guarantee smooth derivatives offers the best of both worlds balancing..., including robust ones such as Huber and Pseudo-Huber loss, as well as L1 L2. Vs. linear loss changepoint it can be used as a smooth approximation huber_loss. On Pseudo-Huber loss function transitions from quadratic to linear R in your pseudo huber loss. Are … Parameters delta ndarray on Pseudo-Huber loss, a smooth approximation of huber_loss )... That facilitates fast training through mini-batch learning use an unquoted column name although this argument passed. Like huber_loss ( ), this is less sensitive to outliers than rmse ( ), this is sensitive! | 2021-01-24 19:10:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5143511891365051, "perplexity": 3604.7208099021846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703550617.50/warc/CC-MAIN-20210124173052-20210124203052-00304.warc.gz"} |
http://mathoverflow.net/feeds/question/81440 | Theorem of Bryant in higher dimensions - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-25T21:41:09Z http://mathoverflow.net/feeds/question/81440 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/81440/theorem-of-bryant-in-higher-dimensions Theorem of Bryant in higher dimensions gary 2011-11-20T17:48:19Z 2011-11-20T19:23:23Z <p>hallo,</p> <p>i have the following question. i read about Bryant's theorem which sais that: any real-analytic 3-dimensional Riemannian manifold $(Y,g)$ with real-analytic metric $g$ can be isometrically embedded as a special Lagrangian submanifold of some Calabi-Yau manifold $(X, \Omega, \omega)$. My question is: does this result also hold in dimensions greater than 3? Or is there any possibility to establish this? Tanks in advance. </p> <p>Mira</p> http://mathoverflow.net/questions/81440/theorem-of-bryant-in-higher-dimensions/81444#81444 Answer by Robert Bryant for Theorem of Bryant in higher dimensions Robert Bryant 2011-11-20T19:23:23Z 2011-11-20T19:23:23Z <p>First, the hypotheses of the theorem I proved require that $Y$ be compact and oriented, in addition to requiring that $g$ be real-analytic.</p> <p>Second, the method I used (the Cartan-Kähler Theorem) extends, essentially without modification, to higher dimensions as long as $Y$ is compact and parallelizable and $g$ is real-analytic.</p> <p>Real-analyticity is certainly necessary, since a minimal submanifold of a real-analytic Riemannian manifold (such as a Calabi-Yau manifold in any dimension) is necessarily real-analytic itself.</p> <p>By contrast, not all special Lagrangian submanifolds of a Calabi-Yau are parallelizable. Thus, parallelizability is not necessary in general, but I don't know how to remove that hypothesis in the existence proof. For example, I do not know whether every real-analytic metric on $S^4$ is obtainable by embedding it as a special Lagrangian in some $4$-dimensional Calabi-Yau. </p> | 2013-05-25 21:41:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9240309000015259, "perplexity": 437.63984599243645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706469149/warc/CC-MAIN-20130516121429-00081-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://www.deepdyve.com/lp/springer_journal/shape-preserving-constrained-and-monotonic-rational-quintic-fractal-oO6YC0SeAi | # Shape preserving constrained and monotonic rational quintic fractal interpolation functions
Shape preserving constrained and monotonic rational quintic fractal interpolation functions Shape preserving interpolants play important role in applied science and engineering. In this paper, we develop a new class of $${\mathscr {C}}^2$$ C 2 -rational quintic fractal interpolation function (RQFIF) by using rational quintic functions of the form $$\frac{p_i(t)}{q_i(t)},$$ p i ( t ) q i ( t ) , where $$p_i(t)$$ p i ( t ) is a quintic polynomial and $$q_i(t)$$ q i ( t ) is a cubic polynomial with two shape parameters. The convergent result of the RQFIF to a data generating function in $${\mathscr {C}}^3$$ C 3 is presented. We derive simple restrictions on the scaling factors and shape parameters such that the developed rational quintic FIF lies above a straight line when the interpolation data with positive functional values satisfy the same constraint. Developing the relation between the attractors of equivalent dynamical systems, the constrained RQFIF can be extended to any general data. The positivity preserving RQFIF is a particular case of our result. In addition to this we also deduce the range on the IFS parameters to preserve the monotonicity aspect of given restricted type of monotonic data. The second derivative of the proposed RQFIF is irregular in a finite or dense subset of the interpolation interval, and matches with the second derivative of the classical rational quintic interpolation function whenever all scaling factors are zero. Thus, our scheme outperforms the corresponding classical counterpart, and the flexibility offered through the scaling factors is demonstrated through suitable examples. http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png International Journal of Advances in Engineering Sciences and Applied Mathematics Springer Journals
# Shape preserving constrained and monotonic rational quintic fractal interpolation functions
, Volume 10 (1) – Jun 2, 2018
19 pages
/lp/springer_journal/shape-preserving-constrained-and-monotonic-rational-quintic-fractal-oO6YC0SeAi
Publisher
Springer India
Subject
Engineering; Engineering, general; Mathematical and Computational Engineering
ISSN
0975-0770
eISSN
0975-5616
D.O.I.
10.1007/s12572-018-0207-z
Publisher site
See Article on Publisher Site
### Abstract
Shape preserving interpolants play important role in applied science and engineering. In this paper, we develop a new class of $${\mathscr {C}}^2$$ C 2 -rational quintic fractal interpolation function (RQFIF) by using rational quintic functions of the form $$\frac{p_i(t)}{q_i(t)},$$ p i ( t ) q i ( t ) , where $$p_i(t)$$ p i ( t ) is a quintic polynomial and $$q_i(t)$$ q i ( t ) is a cubic polynomial with two shape parameters. The convergent result of the RQFIF to a data generating function in $${\mathscr {C}}^3$$ C 3 is presented. We derive simple restrictions on the scaling factors and shape parameters such that the developed rational quintic FIF lies above a straight line when the interpolation data with positive functional values satisfy the same constraint. Developing the relation between the attractors of equivalent dynamical systems, the constrained RQFIF can be extended to any general data. The positivity preserving RQFIF is a particular case of our result. In addition to this we also deduce the range on the IFS parameters to preserve the monotonicity aspect of given restricted type of monotonic data. The second derivative of the proposed RQFIF is irregular in a finite or dense subset of the interpolation interval, and matches with the second derivative of the classical rational quintic interpolation function whenever all scaling factors are zero. Thus, our scheme outperforms the corresponding classical counterpart, and the flexibility offered through the scaling factors is demonstrated through suitable examples.
### Journal
International Journal of Advances in Engineering Sciences and Applied MathematicsSpringer Journals
Published: Jun 2, 2018
## You’re reading a free preview. Subscribe to read the entire article.
### DeepDyve is your personal research library
It’s your single place to instantly
that matters to you.
over 18 million articles from more than
15,000 peer-reviewed journals.
All for just $49/month ### Explore the DeepDyve Library ### Search Query the DeepDyve database, plus search all of PubMed and Google Scholar seamlessly ### Organize Save any article or search result from DeepDyve, PubMed, and Google Scholar... all in one place. ### Access Get unlimited, online access to over 18 million full-text articles from more than 15,000 scientific journals. ### Your journals are on DeepDyve Read from thousands of the leading scholarly journals from SpringerNature, Elsevier, Wiley-Blackwell, Oxford University Press and more. All the latest content is available, no embargo periods. DeepDyve ### Freelancer DeepDyve ### Pro Price FREE$49/month
\$360/year
Save searches from
PubMed
Create lists to
Export lists, citations | 2018-08-14 21:31:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5525628924369812, "perplexity": 868.9322123544695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221209585.18/warc/CC-MAIN-20180814205439-20180814225439-00156.warc.gz"} |
https://pgaleone.eu/tensorflow/2023/01/14/advent-of-code-tensorflow-day-8/ | Solving problem 8 of the AoC 2022 in pure TensorFlow is straightforward. After all, this problem requires working on a bi-dimensional grid and evaluating conditions by rows or columns. TensorFlow is perfectly suited for this kind of task thanks to its native support for reduction operators (tf.reduce_*) which are the natural choice for solving problems of this type.
## Day 8: Treetop Tree House
You can click on the title above to read the full text of the puzzle. The TLDR version is: a grid is a representation of a plot of land completely filled with trees. Every tree is represented with a number that identifies its height. 0 is the shortest, and 9 is the tallest.
30373
25512
65332
33549
35390
The puzzle clearly defines the concept of tree visibility:
A tree is visible if all of the other trees between it and an edge of the grid are shorter than it. Only consider trees in the same row or column; that is, only look up, down, left, or right from any given tree.
The challenge is to count how many trees are visible from outside the grid.
### Design Phase
The problem is quite easy. First thing first, all the trees around the edge of the grid are visible. Thus the number of visible trees will be at least sum(grid_shape * 2) - 4.
Thus, we should analyze only the inner part of the grid. Moreover, the neighborhood to consider is 4-connected (a concept derived from the computer vision pixel connectivity), thus we don’t have to take into account the diagonals and we can process every single pixel of the inner grid by row/column.
That said, we just need to loop over every pixel of the inner grid, and evaluate if the current pixel is visible from the 4 directions. If yes, sum 1 to the variable initialized with sum(grid_shape * 2) - 4.
### Part 1 solution
The solution is precisely the TensorFlow implementation of what has been described in the previous section. As usual, we need to use the tf.data.Dataset.map function to transform the raw input into something useful. Thus we first split the line in characters (from 012 to 0,1,2) then convert these characters to numbers, so we can easily apply conditions over the numbers.
dataset = dataset.map(lambda line: tf.strings.bytes_split(line))
dataset = dataset.map(lambda x: tf.strings.to_number(x, tf.int64))
An iterator is not useful when working on a grid, especially if we need to loop back and forth from every position, thus we can convert the whole dataset to a tensor (our grid), so it’s easier to work.
grid = tf.Variable(list(dataset.as_numpy_iterator()))
We now have everything needed to precisely convert the algorithm described in the design phase to code.
1. Initialization
visibles = tf.Variable(0, dtype=tf.int64)
# edges
grid_shape = tf.shape(grid, tf.int64)
The visibles variable is initialized with the number of trees that are for sure visible. The tf.reduce_sum function has been used to sum the width and height (multiplied by 2) of the grid.
2. Looping over the inner grid: searching in the 4-neighborhood.
# inner
for col in tf.range(1, grid_shape[0] - 1):
for row in tf.range(1, grid_shape[1] - 1):
x = grid[col, row]
visible_right = tf.reduce_all(x > grid[col, row + 1 :])
if visible_right:
continue
visible_left = tf.reduce_all(x > grid[col, :row])
if visible_left:
continue
visible_bottom = tf.reduce_all(x > grid[col + 1 :, row])
if visible_bottom:
continue
visible_top = tf.reduce_all(x > grid[:col, row])
if visible_top:
continue
The tf.redudce_all function is used to apply the logical and operator to all the boolean values generated by the input inequality. In fact, a tree for being visible requires all the adjacent trees to have a lower height along the considered dimension.
3. That’s all
tf.print("part 1: ", visibles)
In a few lines, the problem has been perfectly and efficiently solved! Let’s go straight to part 2.
### Part 2: scenic distance finding
In the second part of the puzzle, there are 2 new concepts introduced called “viewing distance” and “scenic score”. The puzzle describes the procedure to follow to measure the viewing distance from a given tree.
To measure the viewing distance from a given tree, look up, down, left, and right from that tree; stop if you reach an edge or at the first tree that is the same height or taller than the tree under consideration. (If a tree is right on the edge, at least one of its viewing distances will be zero.)
Every tree has also a scenic score. This score is found by multiplying together its viewing distance in each of the four directions. The challenge for this second part is to find the highest scenic score possible (thus, finding the tree that has this score).
### Design and solution
The process to follow is similar to the one used to solve part 1. We still need to loop on every tree of the inner grid, but this time we are interested in the view from the tree along each distance. We can just use broadcasting to create, for every tree, a grid of “views”: keep the height of the tree under consideration and subtract it from the original grid. In this way, when we’ll look for the view along the 4 directions, we can search for views greater than or equal to 0.
Of course, we need to keep track of the views along each dimension for every pixel, thus we need 4 tf.Variable: t for the top view, l for the left view, r for the right view, and b for the bottom view.
The solution, thus, is just the implementation of this simple design.
scenic_score = tf.Variable(0, tf.int64) # t * l * b * r
t = tf.Variable(0, tf.int64)
l = tf.Variable(0, tf.int64)
b = tf.Variable(0, tf.int64)
r = tf.Variable(0, tf.int64)
for col in tf.range(1, grid_shape[0] - 1):
for row in tf.range(1, grid_shape[1] - 1):
x = grid[col, row]
views = grid - x
right = views[col, row + 1 :]
# the loop is left to right
left = tf.reverse(views[col, :row], axis=[0])
# the loop is bottom to top
top = tf.reverse(views[:col, row], axis=[0])
bottom = views[col + 1 :, row]
for tree in right:
if tf.greater_equal(tree, 0):
break
for tree in left:
if tf.greater_equal(tree, 0):
break
for tree in bottom:
if tf.greater_equal(tree, 0):
break
for tree in top:
if tf.greater_equal(tree, 0):
break
scenic_node = t * l * b * r
if tf.greater(scenic_node, scenic_score):
scenic_score.assign(scenic_node)
r.assign(0)
l.assign(0)
t.assign(0)
b.assign(0)
tf.print("part 2: ", scenic_score)
Here we go! Day’s 8 problem solved!
## Conclusion
You can see the complete solution in folder 8 in the dedicated GitHub repository (in the 2022 folder): https://github.com/galeone/tf-aoc.
This article demonstrated how to use the reduce functions for solving a simple puzzle. It’s a very simple solution but it shows, once again, how TensorFlow can be used as a generic programming language.
The next article will be about the solution to problem number 9. It will contain 2 distinct solutions: a solution designed by me, that solves the problem in the imperative style I use to solve all the AoC puzzles in TensorFlow, but it will also contain another solution developed by a fellow GDE that models the problem with a Keras model with 2 layers of convolutions 🤯
The cool thing about solving coding puzzles is that depending on how the problem is modeled the solution can be completely different!
If you missed the article about the previous days’ solutions, here’s a handy list
For any feedback or comment, please use the Disqus form below - thanks! | 2023-03-30 19:54:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5861508250236511, "perplexity": 1376.2397227307256}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949387.98/warc/CC-MAIN-20230330194843-20230330224843-00217.warc.gz"} |
https://brilliant.org/discussions/thread/a-question-G/ | # A Question
Let p(x) be a polynomial of degree 8, such that p(k)=$$\frac{1}{k}$$ for k=1,2,3,4,5,6,7,8,9. What is the value of p(10)?
Note by Snehdeep Arora
5 years, 1 month ago
MarkdownAppears as
*italics* or _italics_ italics
**bold** or __bold__ bold
- bulleted- list
• bulleted
• list
1. numbered2. list
1. numbered
2. list
Note: you must add a full line of space before and after lists for them to show up correctly
paragraph 1paragraph 2
paragraph 1
paragraph 2
[example link](https://brilliant.org)example link
> This is a quote
This is a quote
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
MathAppears as
Remember to wrap math in $$...$$ or $...$ to ensure proper formatting.
2 \times 3 $$2 \times 3$$
2^{34} $$2^{34}$$
a_{i-1} $$a_{i-1}$$
\frac{2}{3} $$\frac{2}{3}$$
\sqrt{2} $$\sqrt{2}$$
\sum_{i=1}^3 $$\sum_{i=1}^3$$
\sin \theta $$\sin \theta$$
\boxed{123} $$\boxed{123}$$
## Comments
Sort by:
Top Newest
The mualphatheta link has (almost) this question set as a problem - the answer link did not work for me. Note that $$xp(x)-1$$ is a degree $$9$$ polynomial with $$1,2,3,4,5,6,7,8,9$$ as zeros, and hence $xp(x) - 1 \; = \; A(x-1)(x-2)(x-3)(x-4)(x-5)(x-6)(x-7)(x-8)(x-9)$ Putting $$x=0$$ we see that $$-1 \,=\, -A\times9!$$ and so $$A = \tfrac{1}{9!}$$. Then $$10p(10)-1 \,=\, A\times9! = 1$$, and hence $$p(10) = \tfrac{1}{5}$$.
- 5 years, 1 month ago
Log in to reply
As a side note, you should verify that $\frac{ 1 + \frac{1}{9!} \prod_{i=1}^9 (x-i) } { x }$
is indeed a polynomial. This follows because the constant term in the numerator cancels out, so it is a multiple of $$x$$.
It is possible (especially in scenarios where the exact roots are uncertain), for the function that you define to end up being a rational function, instead of a polynomial.
Staff - 5 years, 1 month ago
Log in to reply
I already did this when I evaluated $$A$$, because I chose the value of $$A$$ to be such that the value of $1 + A\prod_{j=1}^9(x-j)$ when $$x=0$$ was $$0$$, which means that this polynomial has no constant term and hence can be safely divided by $$x$$.
- 5 years, 1 month ago
Log in to reply
Try remainder theorem, although it will take many steps.
- 5 years, 1 month ago
Log in to reply
This will do it : http://www.mualphatheta.org/problemcorner/MathematicalLog/Issues/0402/MathematicalLogProblemistSpring02.aspx
- 5 years, 1 month ago
Log in to reply
you can use newton-ramphson techhnique, newton forward or backward formula . it will be easy .
- 5 years, 1 month ago
Log in to reply
×
Problem Loading...
Note Loading...
Set Loading... | 2018-09-22 02:22:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9979297518730164, "perplexity": 2497.9070985141248}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158001.44/warc/CC-MAIN-20180922005340-20180922025740-00164.warc.gz"} |
https://zbmath.org/authors/?q=rv%3A7114 | # zbMATH — the first resource for mathematics
## Wu, Yuehua
Compute Distance To:
Author ID: wu.yuehua Published as: Wu, Yuehua; Wu, Y.; Wu, Y. H.; Wu, YueHua
Documents Indexed: 125 Publications since 1985 Reviewing Activity: 81 Reviews
all top 5
#### Co-Authors
13 single-authored 15 Bai, Zhi-Dong 13 Chen, Xiru 13 Rao, Calyampudi Radhakrishna 12 Miao, Bai-qi 12 Shi, Xiaoping 8 Zhao, Lincheng 5 Qian, Guoqi 4 Jin, Baisuo 4 Tam, Kwok-Wai 4 Zen, Mei-Mei 3 Ding, Hao 3 He, Zhicheng 3 Li, Eric 3 Qin, Shanshan 3 Shao, Qing 3 Tan, Changchun 2 Ding, Shu 2 Dong, Cuiling 2 Krishnaiah, Paruchuri Rama 2 Liu, Donghai 2 Tong, Qian 2 Wang, Xiaogang 1 Appleton, E. 1 Aronsson, Gunnar 1 Chan, Elton 1 Chang, Shih Yu 1 Chao, HanChieh 1 Chen, Guo Qian 1 Cheng, Ai Guo 1 Chernoff, Herman 1 Cui, Wenquan 1 Evans, Lawrence Craig 1 Fang, Kai-Tai 1 Fuller, J. David 1 Gao, Xin 1 Guo, Beibei 1 Guo, Pengfei 1 Haworth, Daniel C. 1 Hou, Li 1 Jane, K. C. 1 Knothe, Klaus 1 Lee, Stephen Man Sing 1 Li, Kai 1 Lin, Yufeng 1 Ling, Nengxiang 1 Modest, Michael F. 1 Pu, Daniel Q. 1 Qin, Yongsong 1 Reid, Nancy M. 1 Sun, Xiaoying 1 Tang, Qian 1 Taubes, Clifford Henry 1 Wang, Hongxia 1 Wang, Xiangsheng 1 Wang, Xiaolan L. 1 Wei, Dongwei 1 Wu, Hsiao-Chun 1 Wu, Yaohua 1 Wu, Zi 1 Xie, Hong 1 Xu, Hong 1 Xu, Min 1 Yang, Yaning 1 Ye, Wuyi 1 Zeng, Li 1 Zhu, Yangguang
all top 5
#### Serials
8 Communications in Statistics. Theory and Methods 6 Journal of Multivariate Analysis 6 Computational Statistics and Data Analysis 6 Statistica Sinica 5 Statistics & Probability Letters 5 Proceedings of the National Academy of Sciences of the United States of America 4 Annals of the Institute of Statistical Mathematics 4 Journal of Statistical Computation and Simulation 3 Journal of Statistical Planning and Inference 3 Science in China. Series A 3 Applied Mathematical Modelling 2 Acta Mechanica 2 The Canadian Journal of Statistics 2 Computers & Mathematics with Applications 2 Computer Methods in Applied Mechanics and Engineering 2 Physics Letters. A 2 Theory of Probability and its Applications 2 Biometrika 2 Acta Mathematicae Applicatae Sinica. English Series 2 Journal of Nanjing Institute of Technology 2 Australian & New Zealand Journal of Statistics 2 Journal of Applied Statistics 2 European Journal of Pure and Applied Mathematics 2 Advances and Applications in Statistical Sciences 1 IEEE Transactions on Information Theory 1 International Journal of Engineering Science 1 International Journal of Solids and Structures 1 International Journal of Systems Science 1 Journal of Computational Physics 1 Teoriya Veroyatnosteĭ i eë Primeneniya 1 Applied Mathematics and Computation 1 Journal of Differential Equations 1 Journal of the Operational Research Society 1 Sankhyā. Series A. Methods and Techniques 1 Sankhyā. Series B. Methodological 1 Insurance Mathematics & Economics 1 Operations Research Letters 1 International Journal of Production Research 1 Bulletin of the Iranian Mathematical Society 1 Journal of Classification 1 Statistics 1 Finite Elements in Analysis and Design 1 Probability Theory and Related Fields 1 Hunan Annals of Mathematics 1 IEEE Transactions on Signal Processing 1 Computational Statistics 1 Pattern Recognition 1 Acta Mathematica Sinica. New Series 1 Archive of Applied Mechanics 1 Journal of Applied Mathematics and Decision Sciences 1 Communications in Nonlinear Science and Numerical Simulation 1 Journal of Systems Science and Complexity 1 Acta Mathematica Scientia. Series B. (English Edition) 1 International Journal of Computational Methods 1 Journal of the Korean Statistical Society 1 Journal of Statistical Theory and Practice 1 Science China. Mathematics 1 Statistics and Computing 1 Communications in Mathematics and Statistics 1 IET Communications
all top 5
#### Fields
92 Statistics (62-XX) 18 Numerical analysis (65-XX) 13 Mechanics of deformable solids (74-XX) 11 Probability theory and stochastic processes (60-XX) 10 Information and communication theory, circuits (94-XX) 7 Operations research, mathematical programming (90-XX) 3 Mechanics of particles and systems (70-XX) 3 Fluid mechanics (76-XX) 3 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 2 Partial differential equations (35-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 2 Computer science (68-XX) 2 Classical thermodynamics, heat transfer (80-XX) 2 Statistical mechanics, structure of matter (82-XX) 2 Systems theory; control (93-XX) 1 Ordinary differential equations (34-XX) 1 Integral equations (45-XX) 1 Convex and discrete geometry (52-XX) 1 Differential geometry (53-XX) 1 Biology and other natural sciences (92-XX)
#### Citations contained in zbMATH Open
80 Publications have been cited 458 times in 163 Documents Cited by Year
$$M$$-estimation of multivariate linear regression parameters under a convex discrepancy function. Zbl 0820.62048
Bai, Z. D.; Rao, C. Radhakrishna; Wu, Y.
1992
Fast/slow diffusion and growing sandpiles. Zbl 0864.35057
Aronsson, G.; Evans, L. C.; Wu, Y.
1996
A strongly consistent procedure for model selection in a regression problem. Zbl 0669.62051
1989
A numerical-integration perspective on Gaussian filters. Zbl 1388.65025
Wu, Y.; Hu, D.; Wu, M.; Hu, X.
2006
Simultaneous change point analysis and variable selection in a regression problem. Zbl 1169.62064
Wu, Y.
2008
Limiting behavior of $$M$$-estimators of regression coefficients in high dimensional linear models. I: Scale-dependent case. II: Scale-invariant case. Zbl 0816.62025
Bai, Z. D.; Wu, Y.
1994
$$K$$-sample tests based on the likelihood ratio. Zbl 1162.62354
Zhang, Jin; Wu, Yuehua
2007
Model selection with data-oriented penalty. Zbl 0926.62045
Bai, Z. D.; Rao, C. R.; Wu, Y.
1999
Strong consistency of M-estimates in linear models. Zbl 0649.62057
Chen, X. R.; Wu, Y. H.
1988
A note on asymptotic approximations of inverse moments of nonnegative random variables. Zbl 1191.62020
Shi, Xiaoping; Wu, Yuehua; Liu, Yu
2010
Strong consistency and exponential rate of the “minimum $$L_ 1$$-norm” estimates in linear regression models. Zbl 0726.62101
Wu, Yuehua
1988
Asymptotic normality of minimum $$L_ 1$$-norm estimates in linear models. Zbl 0728.62068
Chen, Xiru; Bai, Zhidong; Zao, Lincheng; Wu, Yuehua
1990
General $$M$$-estimation. Zbl 0890.62050
Bai, Z. D.; Wu, Y.
1997
A consistent procedure for determining the number of clusters in regression clustering. Zbl 1074.62042
Shao, Qing; Wu, Y.
2005
Stationary response of multi-degree-of-freedom vibro-impact systems to Poisson white noises. Zbl 1217.60060
Wu, Y.; Zhu, W. Q.
2008
Flexural vibrations of microscale pipes conveying fluid by considering the size effects of micro-flow and micro-structure. Zbl 1423.74288
Wang, L.; Liu, H. T.; Ni, Q.; Wu, Y.
2013
A strongly consistent information criterion for linear model selection based on $$M$$-estimation. Zbl 0973.62050
Wu, Y.; Zen, M. M.
1999
Consistency of $$L_ 1$$ estimates in censored linear regression models. Zbl 0825.62189
Chen, X. R.; Wu, Y.
1994
Tuning parameter selection for penalized likelihood estimation of Gaussian graphical model. Zbl 06072101
Gao, Xin; Pu, Daniel Q.; Wu, Yuehua; Xu, Hong
2012
Likelihood-ratio tests for normality. Zbl 1429.62163
Zhang, Jin; Wu, Yuehua
2005
Consistency of modified kernel regression estimation for functional data. Zbl 1241.62056
Ling, Nengxiang; Wu, Yuehua
2012
Estimation in change-point hazard function models. Zbl 1116.62412
Wu, C. Q.; Zhao, L. C.; Wu, Y. H.
2003
Strong convergence rate of estimators of change point and its application. Zbl 1452.62213
Shi, Xiaoping; Wu, Yuehua; Miao, Baiqi
2009
Limiting behavior of recursive $$M$$-estimators in multivariate linear regression models. Zbl 0866.62009
Miao, B. Q.; Wu, Y.
1996
Integrated design of the block layout and aisle structure by simulated annealing. Zbl 1044.90030
Wu, Y.; Appleton, E.
2002
Recursive algorithm for $$M$$-estimates of regression coefficients and scatter parameters in linear models. Zbl 0803.62052
Bai, Z. D.; Wu, Y. H.
1993
Postbuckling analysis of multi-directional perforated FGM plates using NURBS-based IGA and FCM. Zbl 07204003
Yang, H. S.; Dong, C. Y.; Wu, Y. H.
2020
A mixed-integer programming model for global logistics transportation problems. Zbl 1279.90120
Wu, Y.
2008
Receptance behaviour of railway track and subgrade. Zbl 0920.73112
Knothe, K.; Wu, Y.
1998
On solvability of an equation arising in the theory of M-estimates. Zbl 0900.62111
Bai, Z. D.; Wu, Y. H.; Chen, X. R.; Miao, B. Q.
1990
On a necessary condition for the consistency of the $$L_ 1$$ estimates in linear regression models. Zbl 0784.62053
Chen, X. R.; Wu, Y.
1993
Strong law for mixing sequence. Zbl 0719.60035
Chen, Xiru; Wu, Yuehua
1989
A novel and fast methodology for simultaneous multiple structural break estimation and variable selection for nonstationary time series models. Zbl 1322.62210
Jin, Baisuo; Shi, Xiaoping; Wu, Yuehua
2013
Approximation to the moments of ratios of cumulative sums. Zbl 1349.62040
Shi, Xiaoping; Reid, Nancy; Wu, Yuehua
2014
An $$M$$-estimation-based procedure for determining the number of regression models in regression clustering. Zbl 05304406
Rao, C. R.; Wu, Y.; Shao, Qing
2007
Asymptotic normality of the recursive M-estimators of the scale parameters. Zbl 1332.62106
Miao, Baiqi; Wu, Yuehua; Liu, Donghai; Tong, Qian
2007
A note on the convergence rate of the kernel density estimator of the mode. Zbl 05603675
Shi, Xiaoping; Wu, Yuehua; Miao, Baiqi
2009
On strong consistency of a 2-dimensional frequency estimation algorithm. Zbl 1157.62571
Miao, B. Q.; Wu, Y.; Zhao, L. C.
1998
Beta approximation to the distribution of Kolmogorov-Smirnov statistic. Zbl 1013.62013
Zhang, Jin; Wu, Yuehua
2002
The simultaneous estimation of the number of signals and frequencies of multiple sinusoids when some observations are missing. I: Asymptotics. Zbl 1054.94512
Bai, Zhidong; Rao, Calyampudi R.; Wu, Yuehua; Zen, Mei-Mei; Zhao, Lincheng
1999
A necessary condition for the consistency of $$L_ 1$$ estimates in linear models. Zbl 0857.62068
Chen, X. R.; Wu, Y.; Zhao, L. C.
1995
Analysis of monitoring and limiting of commercial cheating: a newsvendor model. Zbl 1084.90009
Liu, K.; Li, J.-A.; Lai, K. K.; Wu, Y.
2005
Random weighting method for Cox’s proportional hazards model. Zbl 1184.62165
Cui, Wenquan; Li, Kai; Yang, Yaning; Wu, Yuehua
2008
The generating function for the solution for ODE’s and its discrete methods. Zbl 0654.65050
Wu, Y.
1988
Flow distribution and environmental dispersivity in a tidal wetland channel of rectangular cross-section. Zbl 1316.76124
Zeng, L.; Chen, G. Q.; Wu, Z.; Li, Z.; Wu, Y. H.; Ji, P.
2012
Strong limit theorems on model selection in generalized linear regression with binomial responses. Zbl 1109.62063
Qian, Guoqi; Wu, Yuehua
2006
A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation. Zbl 1115.65005
Wu, Y.; Modest, M. F.; Haworth, D. C.
2007
Isogeometric symmetric FE-BE coupling method for acoustic-structural interaction. Zbl 1464.76177
Wu, Y. H.; Dong, C. Y.; Yang, H. S.; Sun, F. L.
2021
Non-conforming interface coupling and symmetric iterative solution in isogeometric FE-BE analysis. Zbl 07337821
Yang, H. S.; Dong, C. Y.; Wu, Y. H.
2021
Vibration and buckling analyses of FGM plates with multiple internal defects using XIGA-PHT and FCM under thermal and mechanical loads. Zbl 07193088
Yang, H. S.; Dong, C. Y.; Qin, X. C.; Wu, Y. H.
2020
Consistent and powerful graph-based change-point test for high-dimensional data. Zbl 1407.62327
Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna
2017
A generalized thermoelasticity problem of multilayered conical shells. Zbl 1067.74041
Jane, K. C.; Wu, Y. H.
2004
On constrained M-estimation and its recursive analog in multivariate linear regression models. Zbl 1135.62045
Bai, Zhidong; Chen, Xiru; Wu, Yuehua
2008
Convergence of multiperiod equilibrium calculations, with geometric distributed lag demand. Zbl 0747.90024
Wu, Y.; Fuller, J. David
1991
On necessary conditions for the weak consistency of minimum $$L_1$$-norm estimates in linear models. Zbl 0899.62034
Bai, Z. D.; Wu, Y.
1997
Robust inference in multivariate linear regression using difference of two convex functions as the discrepancy measure. Zbl 0906.62055
Bai, Z. D.; Rao, C. R.; Wu, Y. H.
1997
Consistency of minimum $$L_ 1$$-norm estimates in linear models. Zbl 0790.62066
Chen, Xiru; Bai, Zhidong; Zhao, Lincheng; Wu, Yuehua
1992
On consistency of recursive multivariate M-estimators in linear models. Zbl 0839.62066
Wu, Yuehua
1996
A note on constrained M-estimation and its recursive analog in multivariate linear regression models. Zbl 1176.62057
Rao, Calyampudi R.; Wu, YueHua
2009
A sequential multiple change-point detection procedure via VIF regression. Zbl 1342.65062
Shi, Xiaoping; Wang, Xiang-Sheng; Wei, Dongwei; Wu, Yuehua
2016
Markov regime-switching quantile regression models and financial contagion detection. Zbl 1348.62251
Ye, Wuyi; Zhu, Yangguang; Wu, Yuehua; Miao, Baiqi
2016
Self-recalibration of a structured light system via plane-based homography. Zbl 1158.68536
Zhang, B.; Li, Y. F.; Wu, Y. H.
2007
Multiple change-points detection by empirical Bayesian information criteria and Gibbs sampling induced stochastic search. Zbl 07187099
Qian, Guoqi; Wu, Yuehua; Xu, Min
2019
Robust concurrent topology optimization of structure and its composite material considering uncertainty with imprecise probability. Zbl 1442.74183
Wu, Y.; Li, Eric; He, Z. C.; Lin, X. Y.; Jiang, H. X.
2020
Linear model selection by cross-validation. Zbl 1058.62057
Rao, C. R.; Wu, Y.
2005
On strongly consistent estimates of regression coefficients when the errors are not independently and identically distributed. Zbl 0745.62069
Wu, Yuehua
1991
On rates of convergence of general information criteria in signal processing when the noise covariance matrix is arbitrary. Zbl 0746.94005
Tam, Kwok-Wai; Wu, Yuehua
1991
Strong consistency of maximum likelihood parameter estimation of superimposed exponential signals in noise. Zbl 0776.62023
Bai, Z. D.; Chen, X. R.; Krishnaiah, P. R.; Wu, Y. H.; Zhao, L. G.
1991
Nonexistence of consistent estimates in a density estimation problem. Zbl 0801.62039
Chen, X. R.; Wu, Y.
1994
Bounds on inconsistent inferences for sequences of trials with varying probabilities. Zbl 0853.62020
Chernoff, H.; Wu, Y.
1994
On conditions of consistency of $$ML_ 1N$$ estimates. Zbl 0824.62056
Chen, Xiru; Zhao, Lincheng; Wu, Yuehua
1993
Selecting an adaptive sequence for computing recursive M-estimators in multivariate linear regression models. Zbl 1302.93134
Miao, Baiqi; Tong, Qian; Wu, Yuehua; Jin, Baisuo
2013
A family of simple distribution functions to approximate complicated distributions. Zbl 0988.62007
Zhang, Jin; Wu, Yuehua
2001
Strong consistency of M-estimates in linear models. Zbl 0698.62068
Chen, X. R.; Wu, Y. H.
1989
The discrete variational approach to the Euler-Lagrange equation. Zbl 0712.65065
Wu, Y.
1990
A procedure for estimating the number of clusters in logistic regression clustering. Zbl 1337.62138
Qian, Guoqi; Wu, Yuehua; Shao, Qing
2009
A statistical test of change-point in mean that almost surely has zero error probabilities. Zbl 1336.62065
Qian, Guoqi; Shi, Xiaoping; Wu, Yuehua
2013
Consistent two-stage multiple change-point detection in linear models. Zbl 1357.62239
Jin, Baisuo; Wu, Yuehua; Shi, Xiaoping
2016
An estimate of a change point in variance of measurement errors and its convergence rate. Zbl 1360.62117
Dong, C.; Miao, B.; Tan, C.; Wei, D.; Wu, Y.
2015
A self-normalization test for a change-point in the shape parameter of a gamma distributed sequence. Zbl 1294.62031
Tan, Changchun; Dong, Cuiling; Miao, Baiqi; Wu, Yuehua
2013
Isogeometric symmetric FE-BE coupling method for acoustic-structural interaction. Zbl 1464.76177
Wu, Y. H.; Dong, C. Y.; Yang, H. S.; Sun, F. L.
2021
Non-conforming interface coupling and symmetric iterative solution in isogeometric FE-BE analysis. Zbl 07337821
Yang, H. S.; Dong, C. Y.; Wu, Y. H.
2021
Postbuckling analysis of multi-directional perforated FGM plates using NURBS-based IGA and FCM. Zbl 07204003
Yang, H. S.; Dong, C. Y.; Wu, Y. H.
2020
Vibration and buckling analyses of FGM plates with multiple internal defects using XIGA-PHT and FCM under thermal and mechanical loads. Zbl 07193088
Yang, H. S.; Dong, C. Y.; Qin, X. C.; Wu, Y. H.
2020
Robust concurrent topology optimization of structure and its composite material considering uncertainty with imprecise probability. Zbl 1442.74183
Wu, Y.; Li, Eric; He, Z. C.; Lin, X. Y.; Jiang, H. X.
2020
Multiple change-points detection by empirical Bayesian information criteria and Gibbs sampling induced stochastic search. Zbl 07187099
Qian, Guoqi; Wu, Yuehua; Xu, Min
2019
Consistent and powerful graph-based change-point test for high-dimensional data. Zbl 1407.62327
Shi, Xiaoping; Wu, Yuehua; Rao, Calyampudi Radhakrishna
2017
A sequential multiple change-point detection procedure via VIF regression. Zbl 1342.65062
Shi, Xiaoping; Wang, Xiang-Sheng; Wei, Dongwei; Wu, Yuehua
2016
Markov regime-switching quantile regression models and financial contagion detection. Zbl 1348.62251
Ye, Wuyi; Zhu, Yangguang; Wu, Yuehua; Miao, Baiqi
2016
Consistent two-stage multiple change-point detection in linear models. Zbl 1357.62239
Jin, Baisuo; Wu, Yuehua; Shi, Xiaoping
2016
An estimate of a change point in variance of measurement errors and its convergence rate. Zbl 1360.62117
Dong, C.; Miao, B.; Tan, C.; Wei, D.; Wu, Y.
2015
Approximation to the moments of ratios of cumulative sums. Zbl 1349.62040
Shi, Xiaoping; Reid, Nancy; Wu, Yuehua
2014
Flexural vibrations of microscale pipes conveying fluid by considering the size effects of micro-flow and micro-structure. Zbl 1423.74288
Wang, L.; Liu, H. T.; Ni, Q.; Wu, Y.
2013
A novel and fast methodology for simultaneous multiple structural break estimation and variable selection for nonstationary time series models. Zbl 1322.62210
Jin, Baisuo; Shi, Xiaoping; Wu, Yuehua
2013
Selecting an adaptive sequence for computing recursive M-estimators in multivariate linear regression models. Zbl 1302.93134
Miao, Baiqi; Tong, Qian; Wu, Yuehua; Jin, Baisuo
2013
A statistical test of change-point in mean that almost surely has zero error probabilities. Zbl 1336.62065
Qian, Guoqi; Shi, Xiaoping; Wu, Yuehua
2013
A self-normalization test for a change-point in the shape parameter of a gamma distributed sequence. Zbl 1294.62031
Tan, Changchun; Dong, Cuiling; Miao, Baiqi; Wu, Yuehua
2013
Tuning parameter selection for penalized likelihood estimation of Gaussian graphical model. Zbl 06072101
Gao, Xin; Pu, Daniel Q.; Wu, Yuehua; Xu, Hong
2012
Consistency of modified kernel regression estimation for functional data. Zbl 1241.62056
Ling, Nengxiang; Wu, Yuehua
2012
Flow distribution and environmental dispersivity in a tidal wetland channel of rectangular cross-section. Zbl 1316.76124
Zeng, L.; Chen, G. Q.; Wu, Z.; Li, Z.; Wu, Y. H.; Ji, P.
2012
A note on asymptotic approximations of inverse moments of nonnegative random variables. Zbl 1191.62020
Shi, Xiaoping; Wu, Yuehua; Liu, Yu
2010
Strong convergence rate of estimators of change point and its application. Zbl 1452.62213
Shi, Xiaoping; Wu, Yuehua; Miao, Baiqi
2009
A note on the convergence rate of the kernel density estimator of the mode. Zbl 05603675
Shi, Xiaoping; Wu, Yuehua; Miao, Baiqi
2009
A note on constrained M-estimation and its recursive analog in multivariate linear regression models. Zbl 1176.62057
Rao, Calyampudi R.; Wu, YueHua
2009
A procedure for estimating the number of clusters in logistic regression clustering. Zbl 1337.62138
Qian, Guoqi; Wu, Yuehua; Shao, Qing
2009
Simultaneous change point analysis and variable selection in a regression problem. Zbl 1169.62064
Wu, Y.
2008
Stationary response of multi-degree-of-freedom vibro-impact systems to Poisson white noises. Zbl 1217.60060
Wu, Y.; Zhu, W. Q.
2008
A mixed-integer programming model for global logistics transportation problems. Zbl 1279.90120
Wu, Y.
2008
Random weighting method for Cox’s proportional hazards model. Zbl 1184.62165
Cui, Wenquan; Li, Kai; Yang, Yaning; Wu, Yuehua
2008
On constrained M-estimation and its recursive analog in multivariate linear regression models. Zbl 1135.62045
Bai, Zhidong; Chen, Xiru; Wu, Yuehua
2008
$$K$$-sample tests based on the likelihood ratio. Zbl 1162.62354
Zhang, Jin; Wu, Yuehua
2007
An $$M$$-estimation-based procedure for determining the number of regression models in regression clustering. Zbl 05304406
Rao, C. R.; Wu, Y.; Shao, Qing
2007
Asymptotic normality of the recursive M-estimators of the scale parameters. Zbl 1332.62106
Miao, Baiqi; Wu, Yuehua; Liu, Donghai; Tong, Qian
2007
A high-order photon Monte Carlo method for radiative transfer in direct numerical simulation. Zbl 1115.65005
Wu, Y.; Modest, M. F.; Haworth, D. C.
2007
Self-recalibration of a structured light system via plane-based homography. Zbl 1158.68536
Zhang, B.; Li, Y. F.; Wu, Y. H.
2007
A numerical-integration perspective on Gaussian filters. Zbl 1388.65025
Wu, Y.; Hu, D.; Wu, M.; Hu, X.
2006
Strong limit theorems on model selection in generalized linear regression with binomial responses. Zbl 1109.62063
Qian, Guoqi; Wu, Yuehua
2006
A consistent procedure for determining the number of clusters in regression clustering. Zbl 1074.62042
Shao, Qing; Wu, Y.
2005
Likelihood-ratio tests for normality. Zbl 1429.62163
Zhang, Jin; Wu, Yuehua
2005
Analysis of monitoring and limiting of commercial cheating: a newsvendor model. Zbl 1084.90009
Liu, K.; Li, J.-A.; Lai, K. K.; Wu, Y.
2005
Linear model selection by cross-validation. Zbl 1058.62057
Rao, C. R.; Wu, Y.
2005
A generalized thermoelasticity problem of multilayered conical shells. Zbl 1067.74041
Jane, K. C.; Wu, Y. H.
2004
Estimation in change-point hazard function models. Zbl 1116.62412
Wu, C. Q.; Zhao, L. C.; Wu, Y. H.
2003
Integrated design of the block layout and aisle structure by simulated annealing. Zbl 1044.90030
Wu, Y.; Appleton, E.
2002
Beta approximation to the distribution of Kolmogorov-Smirnov statistic. Zbl 1013.62013
Zhang, Jin; Wu, Yuehua
2002
A family of simple distribution functions to approximate complicated distributions. Zbl 0988.62007
Zhang, Jin; Wu, Yuehua
2001
Model selection with data-oriented penalty. Zbl 0926.62045
Bai, Z. D.; Rao, C. R.; Wu, Y.
1999
A strongly consistent information criterion for linear model selection based on $$M$$-estimation. Zbl 0973.62050
Wu, Y.; Zen, M. M.
1999
The simultaneous estimation of the number of signals and frequencies of multiple sinusoids when some observations are missing. I: Asymptotics. Zbl 1054.94512
Bai, Zhidong; Rao, Calyampudi R.; Wu, Yuehua; Zen, Mei-Mei; Zhao, Lincheng
1999
Receptance behaviour of railway track and subgrade. Zbl 0920.73112
Knothe, K.; Wu, Y.
1998
On strong consistency of a 2-dimensional frequency estimation algorithm. Zbl 1157.62571
Miao, B. Q.; Wu, Y.; Zhao, L. C.
1998
General $$M$$-estimation. Zbl 0890.62050
Bai, Z. D.; Wu, Y.
1997
On necessary conditions for the weak consistency of minimum $$L_1$$-norm estimates in linear models. Zbl 0899.62034
Bai, Z. D.; Wu, Y.
1997
Robust inference in multivariate linear regression using difference of two convex functions as the discrepancy measure. Zbl 0906.62055
Bai, Z. D.; Rao, C. R.; Wu, Y. H.
1997
Fast/slow diffusion and growing sandpiles. Zbl 0864.35057
Aronsson, G.; Evans, L. C.; Wu, Y.
1996
Limiting behavior of recursive $$M$$-estimators in multivariate linear regression models. Zbl 0866.62009
Miao, B. Q.; Wu, Y.
1996
On consistency of recursive multivariate M-estimators in linear models. Zbl 0839.62066
Wu, Yuehua
1996
A necessary condition for the consistency of $$L_ 1$$ estimates in linear models. Zbl 0857.62068
Chen, X. R.; Wu, Y.; Zhao, L. C.
1995
Limiting behavior of $$M$$-estimators of regression coefficients in high dimensional linear models. I: Scale-dependent case. II: Scale-invariant case. Zbl 0816.62025
Bai, Z. D.; Wu, Y.
1994
Consistency of $$L_ 1$$ estimates in censored linear regression models. Zbl 0825.62189
Chen, X. R.; Wu, Y.
1994
Nonexistence of consistent estimates in a density estimation problem. Zbl 0801.62039
Chen, X. R.; Wu, Y.
1994
Bounds on inconsistent inferences for sequences of trials with varying probabilities. Zbl 0853.62020
Chernoff, H.; Wu, Y.
1994
Recursive algorithm for $$M$$-estimates of regression coefficients and scatter parameters in linear models. Zbl 0803.62052
Bai, Z. D.; Wu, Y. H.
1993
On a necessary condition for the consistency of the $$L_ 1$$ estimates in linear regression models. Zbl 0784.62053
Chen, X. R.; Wu, Y.
1993
On conditions of consistency of $$ML_ 1N$$ estimates. Zbl 0824.62056
Chen, Xiru; Zhao, Lincheng; Wu, Yuehua
1993
$$M$$-estimation of multivariate linear regression parameters under a convex discrepancy function. Zbl 0820.62048
Bai, Z. D.; Rao, C. Radhakrishna; Wu, Y.
1992
Consistency of minimum $$L_ 1$$-norm estimates in linear models. Zbl 0790.62066
Chen, Xiru; Bai, Zhidong; Zhao, Lincheng; Wu, Yuehua
1992
Convergence of multiperiod equilibrium calculations, with geometric distributed lag demand. Zbl 0747.90024
Wu, Y.; Fuller, J. David
1991
On strongly consistent estimates of regression coefficients when the errors are not independently and identically distributed. Zbl 0745.62069
Wu, Yuehua
1991
On rates of convergence of general information criteria in signal processing when the noise covariance matrix is arbitrary. Zbl 0746.94005
Tam, Kwok-Wai; Wu, Yuehua
1991
Strong consistency of maximum likelihood parameter estimation of superimposed exponential signals in noise. Zbl 0776.62023
Bai, Z. D.; Chen, X. R.; Krishnaiah, P. R.; Wu, Y. H.; Zhao, L. G.
1991
Asymptotic normality of minimum $$L_ 1$$-norm estimates in linear models. Zbl 0728.62068
Chen, Xiru; Bai, Zhidong; Zao, Lincheng; Wu, Yuehua
1990
On solvability of an equation arising in the theory of M-estimates. Zbl 0900.62111
Bai, Z. D.; Wu, Y. H.; Chen, X. R.; Miao, B. Q.
1990
The discrete variational approach to the Euler-Lagrange equation. Zbl 0712.65065
Wu, Y.
1990
A strongly consistent procedure for model selection in a regression problem. Zbl 0669.62051
1989
Strong law for mixing sequence. Zbl 0719.60035
Chen, Xiru; Wu, Yuehua
1989
Strong consistency of M-estimates in linear models. Zbl 0698.62068
Chen, X. R.; Wu, Y. H.
1989
Strong consistency of M-estimates in linear models. Zbl 0649.62057
Chen, X. R.; Wu, Y. H.
1988
Strong consistency and exponential rate of the “minimum $$L_ 1$$-norm” estimates in linear regression models. Zbl 0726.62101
Wu, Yuehua
1988
The generating function for the solution for ODE’s and its discrete methods. Zbl 0654.65050
Wu, Y.
1988
all top 5
#### Cited by 265 Authors
23 Wu, Yuehua 8 Shi, Xiaoping 6 Hu, Shuhe 6 Rao, Calyampudi Radhakrishna 4 Jin, Baisuo 4 Martínez-Camblor, Pablo 4 Miao, Bai-qi 4 Qian, Guoqi 4 Yang, Wenzhi 4 Zhao, Lincheng 3 Abbruzzo, Antonino 3 Bai, Zhi-Dong 3 Chen, Xiru 3 de Uña-Álvarez, Jacobo 3 Li, Xiaoqin 3 Ling, Nengxiang 3 Quessy, Jean-François 3 Rao, J. Sunil 3 Shao, Qing 3 Shen, Aiting 3 Wang, Xuejun 3 Wit, Ernst C. 3 Wu, Yaohua 3 Yang, Yuhong 2 Amiri, Aboubacar 2 Ding, Shu 2 Drton, Mathias 2 D’Urso, Pierpaolo 2 Éthier, François 2 Ishwaran, Hemant 2 Jiang, Rong 2 Kim, Ilmun 2 Kundu, Debasis 2 Lafaye de Micheaux, Pierre 2 Park, Sangun 2 Pešta, Michal 2 Qian, Weimin 2 Qin, Ruibing 2 Santoro, Adriana 2 Shao, Jun 2 Shi, Peide 2 Shin, Dongwan 2 Tan, Changchun 2 Thiam, Baba 2 Tian, Zheng 2 Tong, Qian 2 Vieu, Philippe 2 Vujačić, Ivan 2 Wang, Xinghui 2 Wu, Qunying 2 Yang, Xiaohan 2 Yang, Ying 1 Abdi, Hervé 1 Ai, Mingyao 1 Alba Fernández, Virtudes 1 Álvarez-Esteban, Pedro César 1 An, Hongzhi 1 Angers, Jean-François 1 Arnold, Steven F. 1 Augugliaro, Luigi 1 Babu, Gutti Jogesh 1 Bagirov, Adil M. 1 Bai, Erwei 1 Bandyopadhyay, Uttam 1 Batista, Aaron P. 1 Bondell, Howard D. 1 Bose, Arup 1 Bougeard, Stéphanie 1 Bouzebda, Salim 1 Chatterjee, Debajit 1 Chen, Yuhui 1 Cheng, Ping 1 Chestek, Cynthia 1 Choi, Jieun 1 Chun, Hyonho 1 Ciuperca, Gabriela 1 Cohen, Guy 1 Coin, Daniele 1 Corral, Norberto 1 Costa, Aníbal 1 Craiu, Radu V. 1 Crambes, Christophe 1 Cuesta-Albertos, Juan Antonio 1 Cunningham, John P. 1 Dang, Xin 1 del Barrio, Eustasio 1 Delampady, Mohan 1 Delgado, Raimundo 1 Desgagné, Alain 1 Dey, Tanujit 1 Dielman, Terry E. 1 Ding, Hao 1 Dong, Cuiling 1 Draper, Norman R. 1 Duchesne, Thierry 1 Feng, Yang 1 Ferfache, Anouar Abdeldjaoued 1 Finegold, Michael 1 Fischer, Aurélie 1 Fleet, James C. ...and 165 more Authors
all top 5
#### Cited in 64 Serials
14 Communications in Statistics. Theory and Methods 13 Computational Statistics and Data Analysis 11 Journal of Multivariate Analysis 10 Journal of Statistical Planning and Inference 7 Statistics & Probability Letters 6 Annals of the Institute of Statistical Mathematics 6 Journal of Nonparametric Statistics 5 The Annals of Statistics 5 Statistics 5 Journal of Statistical Computation and Simulation 4 Science in China. Series A 4 Computational Statistics 4 Journal of Systems Science and Complexity 3 The Canadian Journal of Statistics 3 Journal of Mathematical Analysis and Applications 3 Journal of Inequalities and Applications 3 Electronic Journal of Statistics 2 Mathematics and Computers in Simulation 2 Acta Mathematicae Applicatae Sinica. English Series 2 Acta Mathematica Sinica. New Series 2 Australian & New Zealand Journal of Statistics 2 Discrete Dynamics in Nature and Society 2 Acta Mathematica Sinica. English Series 2 Econometric Theory 2 Journal of Machine Learning Research (JMLR) 2 Journal of the Korean Statistical Society 2 Science China. Mathematics 1 Lithuanian Mathematical Journal 1 Metrika 1 Theory of Probability and its Applications 1 Automatica 1 Biometrics 1 Hiroshima Mathematical Journal 1 Information Sciences 1 Journal of the American Statistical Association 1 Journal of Econometrics 1 Chinese Annals of Mathematics. Series B 1 Journal of Classification 1 Journal of Complexity 1 Annals of Operations Research 1 Neural Computation 1 Economics Letters 1 Applied Mathematical Modelling 1 European Journal of Operational Research 1 Linear Algebra and its Applications 1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences 1 Applied Mathematics. Series B (English Edition) 1 Statistica Sinica 1 Journal of Computational Neuroscience 1 Journal of Applied Mathematics and Decision Sciences 1 Journal of Applied Statistics 1 Biostatistics 1 Journal of Applied Mathematics 1 Statistical Applications in Genetics and Molecular Biology 1 Thai Journal of Mathematics 1 Advances in Data Analysis and Classification. ADAC 1 Journal of Statistical Theory and Practice 1 European Journal of Pure and Applied Mathematics 1 Journal of Probability and Statistics 1 Sankhyā. Series A 1 Analysis and Mathematical Physics 1 Statistics and Computing 1 Bayesian Analysis 1 Communications in Mathematics and Statistics
all top 5
#### Cited in 15 Fields
148 Statistics (62-XX) 24 Probability theory and stochastic processes (60-XX) 23 Numerical analysis (65-XX) 5 Operations research, mathematical programming (90-XX) 4 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 3 Computer science (68-XX) 3 Systems theory; control (93-XX) 3 Information and communication theory, circuits (94-XX) 2 History and biography (01-XX) 2 Biology and other natural sciences (92-XX) 1 Combinatorics (05-XX) 1 Real functions (26-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Functional analysis (46-XX) 1 Operator theory (47-XX) | 2021-09-20 18:33:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7000258564949036, "perplexity": 10155.089487669704}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057083.64/warc/CC-MAIN-20210920161518-20210920191518-00529.warc.gz"} |
https://bio.libretexts.org/Bookshelves/Biochemistry/Book%3A_Biochemistry_Free_and_Easy_(Ahern_and_Rajagopal)/04%3A_Catalysis/4.10%3A_Lineweaver-Burk_Plots | For a Lineweaver-Burk, the manipulation is using the reciprocal of the values of both the velocity and the substrate concentration. The inverted values are then plotted on a graph as $$1/V$$ vs. $$1/[S$$]. Because of these inversions, Lineweaver-Burk plots are commonly referred to as ‘double-reciprocal’ plots. As can be seen at left, the value of $$K_M$$ on a Lineweaver Burk plot is easily determined as the negative reciprocal of the x-intercept , whereas the $$V_{max}$$ is the inverse of the y-intercept. Other related manipulation of kinetic data include Eadie-Hofstee diagrams, which plot V vs V/[S] and give $$V_{max}$$ as the Y-axis intercept with the slope of the line being $$-K_M$$. | 2021-01-26 18:10:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8424527049064636, "perplexity": 613.2798439540362}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704803308.89/warc/CC-MAIN-20210126170854-20210126200854-00137.warc.gz"} |
https://www.encyclopediaofmath.org/index.php?title=Argument,_principle_of_the&oldid=15915 | # Argument, principle of the
(diff) ← Older revision | Latest revision (diff) | Newer revision → (diff)
argument principle
A geometric principle in the theory of functions of a complex variable. It is formulated as follows: Let be a bounded domain in the complex plane , and let, moreover, the boundary be a continuous curve, oriented so that lies on the left. If a function is meromorphic in a neighbourhood of and has no zeros or poles on , then the difference between the number of its zeros and the number of its poles inside (counted according to their multiplicity) is equal to the increase of the argument of when travelling once around , divided by , i.e.
where denotes any continuous branch of on the curve . The expression on the right-hand side equals the index of the curve with respect to the point .
The principle of the argument is used in the proofs of various statements on the zeros of holomorphic functions (such as the fundamental theorem of algebra on polynomials, the theorem of Hurwitz on zeros, etc.). From the principle of the argument follow many other important geometric principles of function theory, e.g. the principle of invariance of domain (cf. Invariance, principle of), the maximum-modulus principle and the theorem on the local inverse of a holomorphic function. In many questions the principle of the argument is used implicitly, in the form of its corollary: the Rouché theorem.
There are generalizations of the principle of the argument. The condition that be meromorphic in a neighbourhood of may be replaced by the following: has only a finite number of poles and zeros in and extends continuously to . Instead of the complex plane, an arbitrary Riemann surface may be considered: the boundedness of is then replaced by the condition that be compact. From the principle of the argument for a compact Riemann surface it follows that the number of zeros of an arbitrary meromorphic function, not identically equal to zero, is equal to the number of poles. The principle of the argument for domains in is equivalent to the theorem on the sum of the logarithmic residues (cf. Logarithmic residue). For this reason, the following statement is sometimes called the generalized principle of the argument. If is meromorphic in a neighbourhood of a domain which is bounded by a finite number of continuous curves and if has no zeros or poles on , then for any function which is holomorphic in a neighbourhood of the equality
holds, where the first sum extends over all zeros and the second sum extends over all poles of in . There is also a topological generalized principle of the argument: The principle of the argument is valid for any open mapping that is locally finite-to-one and extends continuously to , while .
An analogue of the principle of the argument for functions of several complex variables is, for example, the following theorem: Let be a bounded domain in with Jordan boundary and let be a holomorphic mapping of a neighbourhood of such that ; then the number of pre-images of in (counted according to multiplicity) is equal to .
#### References
[1] M.A. [M.A. Lavrent'ev] Lawrentjew, B.V. Shabat, "Methoden der komplexen Funktionentheorie" , Deutsch. Verlag Wissenschaft. (1967) (Translated from Russian) [2] B.V. Shabat, "Introduction of complex analysis" , 2 , Moscow (1976) (In Russian)
How to Cite This Entry:
Argument, principle of the. E.M. Chirka (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Argument,_principle_of_the&oldid=15915
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | 2020-01-21 02:01:30 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448797106742859, "perplexity": 292.65320833624276}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250601241.42/warc/CC-MAIN-20200121014531-20200121043531-00271.warc.gz"} |
https://math.stackexchange.com/questions/3216252/show-that-the-product-of-ideals-is-equal-to-the-intersection | # Show that the product of ideals is equal to the intersection
I am following the notes of Gathmann to learn myself about commutative algebra. I have the following exercise written in them (without solutions at the end):
Exercise 1.13. Show that the equation of ideals $$(x^3-x^2,x^2y-x^2,xy-y,y^2-y) = (x^2,y)\cap(x-1,y-1)$$ holds in the polynomial ring $$\mathbb{C}[x,y]$$. Is this a radical ideal? What is its zero locus in $$\mathbb{A}^2_{\mathbb{C}}$$?
While the two last questions are easy to solve: it is not radical because $$x(x-1)$$ is not in the ideal but $$(x(x-1))^2 = x^2(x-1)^2 = (x-1)(x^3 - x^2)$$, so is in the ideal, and its zero locus is obviously $$\{(0,0),(1,1)\}$$.
The inclusion $$\subseteq$$ is easy too, because the product of ideals is included in the intersection. But of the $$\supseteq$$ part, I suppose that I have an element of the intersection, so a poynomial $$p$$ such that there exists $$p_1,p_2,p_3,p_4 \in \mathbb{C}[x,y]$$ such that: $$p = p_1x^2 + p_2y$$ $$p = p_3(x-1) + p_4(y-1)$$
I tried to add them or but them equal, without success. The only thing I can prove is that $$p^2$$ is in the ideal, by multiplying the two writings, but the ideal is not radical, so it doesn't work either!
Someone can give me a hint or help me please?
• Consider the ideal $I=(zx^2,zy, (1-z)(x-1), (1-z)(y-1))$. The intersection of $(x^2,y)$ and $(x-1,y-1)$ are the elements of $I$ that don't depend on $z$. Assume that any term containing $z$ is larger than any term non containing it. Get a new set of generators of $I$ that is a Groebner basis in that monomial order. Then the generators not depending on $z$ generate the intersection. – logarithm May 6 at 19:25
• That solution seems good, but even if I know about Groebner basis from a class I followed this semester, I have not see them in the notes I am reading, so there is probably an easier solution (yours looks like elimination theory, but the only thing I know about it is the Wikipedia page so I would like not use it...) – eti902 May 6 at 19:33
• It is just polynomial long division and the basis of computing all these operations between ideals. You can just compute without ever saying the name. – logarithm May 6 at 19:37
• Hint. Use this: if $I+J=R$, then $I\cap J=IJ$. – user26857 May 6 at 20:15
• Yes that's perfect, thank you! – eti902 May 7 at 3:22 | 2019-06-25 05:55:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 13, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8701589107513428, "perplexity": 209.2150100810478}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999800.5/warc/CC-MAIN-20190625051950-20190625073950-00225.warc.gz"} |
https://brilliant.org/problems/cool-6-digit-numbers/ | # Cool 6-Digit Numbers
Let us call a $$6$$-digit number cool if each of its digits is no less than the preceding digit. How many cool $$6$$-digit numbers are there?
Details And Assumptions:
• For example, $$112446$$ is cool.
• $$233043$$ isn't cool.
× | 2017-01-19 13:12:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5964077115058899, "perplexity": 2003.3797978234718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280668.34/warc/CC-MAIN-20170116095120-00273-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://experiment-ufa.ru/absolute=value | # absolute=value
## Simple and best practice solution for absolute=value equation. Check how easy it is, and learn it for the future. Our solution is simple, and easy to understand, so dont hesitate to use it as a solution of your homework.
If it's not what You are looking for type in the equation solver your own equation and let us solve it.
## Solution for absolute=value equation:
Simplifying
absolute = value
Solving
abelostu = aeluv
Solving for variable 'a'.
Move all terms containing a to the left, all other terms to the right.
Add '-1aeluv' to each side of the equation.
abelostu + -1aeluv = aeluv + -1aeluv
Combine like terms: aeluv + -1aeluv = 0
abelostu + -1aeluv = 0
Factor out the Greatest Common Factor (GCF), 'aelu'.
aelu(bost + -1v) = 0
Subproblem 1Set the factor 'aelu' equal to zero and attempt to solve:
Simplifying
aelu = 0
Solving
aelu = 0
Move all terms containing a to the left, all other terms to the right.
Simplifying
aelu = 0
The solution to this equation could not be determined.
This subproblem is being ignored because a solution could not be determined.
Subproblem 2Set the factor '(bost + -1v)' equal to zero and attempt to solve:
Simplifying
bost + -1v = 0
Solving
bost + -1v = 0
Move all terms containing a to the left, all other terms to the right.
Add '-1bost' to each side of the equation.
bost + -1bost + -1v = 0 + -1bost
Combine like terms: bost + -1bost = 0
0 + -1v = 0 + -1bost
-1v = 0 + -1bost
Remove the zero:
-1v = -1bost
Add 'v' to each side of the equation.
-1v + v = -1bost + v
Combine like terms: -1v + v = 0
0 = -1bost + v
Simplifying
0 = -1bost + v
The solution to this equation could not be determined.
This subproblem is being ignored because a solution could not be determined.
The solution to this equation could not be determined.`
## Related pages
how to put fractions on a calculatorsquare root of 0.16secx tanxfactor tree for 96cos532x 3 3x x 1145&0.6875 as a fractionwhat is the fraction of 0.875solve cos2x sinx 1illuminateed comcos 2troman numerals 1963derivative of tan 1 x2y squaredfraction percent to decimal calculator2go lm7-14.5find the prime factorization of 1757 7 7 7x7-7whats a lcmmultiplication with fractions calculator3x pixmz 12xwhat is 0.5625 as a fractionmath algebra calculator with stepsgraph y 2x 3gcf of 90 and 75finding the lcm calculatordivide fractions by fractions calculatorwhat is the prime factorization of 1129v to 5v2408-42y x 3write 3 5 as a decimalwhat is 169 divisible bywhat is the prime factorization of 132800 roman numerals168-25multistep equation calculatorderivative of sin sinxroman numeral of 1000000what is the prime factorization of 176simplify x 2 2xwhat is q mct7x 7yy 5x solve for xwhat is 84 in roman numeralspv nrt solve for twhat is the prime factorization of 735y 4x squared190-11differentiate e 2xsolve sin2x 1log100 10is6311976 roman numeralsprime factorization 2506x 2y 8derivative caculator6000 rupees to dollars80000 pounds to dollarsprime factorization of 114factor 3x 2-5x-2write the prime factorization of 20integral of ln2xsolve 2x2 12x 102kywhat is prime factorization of 72factor completely 4x2 8x 60prime factorization of 592x2 3xx 1 cubedderivative of e xsinx2.75 as fraction2x 2y 16log10 x 1hj42x 3 3x x 117x 3y 2simplify 8x 4x | 2018-04-22 02:43:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3767934739589691, "perplexity": 9031.363894877873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945484.58/warc/CC-MAIN-20180422022057-20180422042057-00122.warc.gz"} |
https://plus.google.com/100331542132819585743/posts | John Gabriel
380 views
## Stream
### John Gabriel
commented on a video on YouTube.
Shared publicly -
How is this at all fascinating? It's mind-numbing junk. A mediant makes zero sense unless any two given fractions are in proportion Bk V. Prop. 12 (The Elements).
Stern-Brocot trees are garbage. No offense to Wildberger please!
Moreover, the vinculum as used in 1/0 is illogical and anti-mathematical rot. There is no number k, such that k x 0 = 1. But this is true for every VALID fraction. That is, 3/4 means that the unit has been divided into 4 equal parts say k each, so that 4 x k = 1 and the 3 denotes how many of those parts are being considered.
Algebraically, NOTHING happens when the numerator is less than the denominator, that is, we toss the dots away from the obelus -:- to get the vinculum (horizontal line) and then place the numerator on top of vinculum and denominator on bottom of vinculum. Geometrically, a lot happens because we can divide ANY line segment into ANY number of equal parts we like using only a compass and a straight edge.
Thus, 1/0 is invalid and no fraction or number at all. It is a nonsense concept.
1
### John Gabriel
commented on a video on YouTube.
Shared publicly -
It's not correct to think about numbers as either a "choice" or an "algorithm".
No valid construction of irrational numbers proved here:
http://www.spacetimeandtheuniverse.com/math/4507-0-999-equal-one-317.html#post21409
1
### John Gabriel
Shared publicly -
Finally one can understand exactly what is an arithmetic mean in my video at: http://youtu.be/_RLRMGFBZBs
Before me, no one ever understood what it means to be an arithmetic mean. That's a big statement, but it's true.
Also follow me on Space Time and The Universe:
http://www.spacetimeandtheuniverse.com/math/4507-0-999-equal-one-317.html#post21409
What is a quotient? What is an obelus?
13 -:- 5 and 13/5 do NOT mean the same thing at all.
The obelus, that is, -:- , means repeated subtraction and only applies when the numerator > denominator. As far as a proper fraction is concerned, true division is only possible using geometry. In algebra, the statement 1 -:- 3 is a NO OPERATION. The 1 dots are discarded, the 1 goes to the top of the vinculum (horizontal bar or slash) and the 3 to the bottom of the vinculum. No repeated subtraction of any kind takes place.
Here is the definition of quotient that works for ALL numbers:
The quotient (or division) of two positive numbers is that positive number, that measures either positive number in terms of the other.
Let the numbers be 2 and 3.
2/3+2/3+2/3 = 6/3 = (6-2-2)/(3-1-1) = 2/1 = 2
3/2+3/2=6/2= (6-3)/(2-1)= 3/1 = 3
In order to understand what it means to be a quotient, you first need to understand what it means to be a "number".
For this, you must be able to derive the number concept from scratch in 5 easy steps, as demonstrated in the axioms of magnitude:
The Gabrielean Axioms of Magnitude:
1. A magnitude is the concept of size, dimension or extent.
2. The comparison of any two magnitudes is called a ratio.
3. A ratio of two equal magnitudes is called a unit.
4. A magnitude x that is measurable by a unit magnitude u, is a natural number in the ratio x:u or a proper fraction in the ratio u:x.
5. If any magnitude or ratio of magnitudes cannot be completely measured, that is, they have no common measure, not even the unit, then it is called an incommensurable magnitude or ratio of magnitudes.
Now you can formally state the axioms of arithmetic:
The Gabrielean Axioms of Arithmetic:
1. The difference (or subtraction) of two numbers is that number which describes how much the larger exceeds the smaller.
2. The difference of equal numbers is zero.
3. The sum (or addition) of two numbers is that number whose difference with either of the two numbers is the other number.
4. The quotient (or division) of two numbers is that number that measures either number in terms of the other.
5. If a unit is divided by a number into equal parts, then each of these parts of a unit, is called the reciprocal of that number.
6. Division by zero is undefined, because 0 does not measure any number.
7. The product (or multiplication) of two numbers is the quotient of either number with the reciprocal of the other.
8. The difference of any number and zero is the number.
Observe that all the basic arithmetic operations are defined in terms of difference.
Gabrielean Axioms of arithmetic explained:
1. The difference (or subtraction) of two positive numbers, is that positive number which describes how much the larger number exceeds the smaller.
Let the numbers be 1 and 4.
4 - 1 = 3 or |1 - 4| = 3
2. The difference of equal numbers is zero.
Let the numbers be k and k.
|k - k| = 0
3. The sum (or addition) of two given positive numbers, is that positive number whose difference with either of the two given numbers produces the other number.
Let the numbers be 1 and 4.
1 + 4 = 5 because 5 - 4 = 1 and 5 - 1 = 4
4. The quotient (or division) of two positive numbers is that positive number, that measures either positive number in terms of the other.
Let the numbers be 2 and 3.
2/3+2/3+2/3 = 6/3 = (6-2-2)/(3-1-1) = 2/1 = 2
3/2+3/2=6/2= (6-3)/(2-1)= 3/1 = 3
5. If a unit is divided by a positive number into equal parts, then each of these parts of a unit, is called the reciprocal of that positive number.
Let the positive number be 4.
The reciprocal is 1/4 and 1/4+1/4+1/4+1/4 = 1
6. Division by zero is undefined, because 0 does not measure any magnitude.
Since the consequent number is always the sum of equal parts of a unit, it follows clearly that no such number exists that when summed can produce 1, that is, no matter how many zeroes you add, you never get 1.
7. The product (or multiplication) of two positive numbers is the quotient of either positive number with the reciprocal of the other.
Let the numbers be 2 and 3.
1/2+1/2+1/2+1/2+1/2+1/2=3
1/3+1/3+1/3+1/3+1/3+1/3=2
8. The difference of any number and zero is the number.
Let the number be k.
|k-0|=|0-k|
Observe that all the basic arithmetic operations are defined in terms of the primitive operator called difference.
These are the true axioms of arithmetic and the definition of the arithmetic operators.
Axioms for negative numbers are easy to define with some trivial modification.
1
### John Gabriel
commented on a video on YouTube.
Shared publicly -
If you want to understand exactly what is an arithmetic mean, then you can do it in less than 3 minutes here: http://www.youtube.com/watch?v=_RLRMGFBZBs
This is one of the most important concepts in mathematics and the least understood right after that of the number concept.
Also check out the entire article at: http://www.linkedin.com/pulse/arithmetic-mean-john-gabriel
1
### John Gabriel
commented on a video on YouTube.
Shared publicly -
Proof that no valid construction of irrational numbers exists:
http://www.spacetimeandtheuniverse.com/math/4507-0-999-equal-one-317.html
I like Wildberger.
I too am called a crank. If you are not part of that group called mainstreamers, you are a delusional crank. Well, I'll drink to that! :-)
http://thenewcalculus.weebly.com
1
And by facts, I am certain you do not mean "Valid construction of the real numbers" because there is none. Is this correct? A simple 'yes' or 'no' response will do. :-)
As for the "broad picture of modern mythmatics", why should it be preserved?
### John Gabriel
commented on a video on YouTube.
Shared publicly -
No. 13 -:- 5 and 13/5 do NOT mean the same thing at all.
The obelus, that is, -:- , means repeated subtraction and only applies when the numerator > denominator. As far as a proper fraction is concerned, true division is only possible using geometry. In algebra, the statement 1 -:- 3 is a NO OPERATION. The 1 dots are discarded, the 1 goes to the top of the vinculum (horizontal bar or slash) and the 3 to the bottom of the vinculum. No repeated subtraction of any kind takes place.
Here is the definition of quotient that works for ALL numbers:
The quotient (or division) of two positive numbers is that positive number, that measures either positive number in terms of the other.
Let the numbers be 2 and 3.
2/3+2/3+2/3 = 6/3 = (6-2-2)/(3-1-1) = 2/1 = 2
3/2+3/2=6/2= (6-3)/(2-1)= 3/1 = 3
In order to understand what it means to be a quotient, you first need to understand what it means to be a "number".
For this, you must be able to derive the number concept from scratch in 5 easy steps, as demonstrated in the axioms of magnitude:
The Gabrielean Axioms of Magnitude:
1. A magnitude is the concept of size, dimension or extent.
2. The comparison of any two magnitudes is called a ratio.
3. A ratio of two equal magnitudes is called a unit.
4. A magnitude x that is measurable by a unit magnitude u, is a natural number in the ratio x:u or a proper fraction in the ratio u:x.
5. If any magnitude or ratio of magnitudes cannot be completely measured, that is, they have no common measure, not even the unit, then it is called an incommensurable magnitude or ratio of magnitudes.
Now you can formally state the axioms of arithmetic:
The Gabrielean Axioms of Arithmetic:
1. The difference (or subtraction) of two numbers is that number which describes how much the larger exceeds the smaller.
2. The difference of equal numbers is zero.
3. The sum (or addition) of two numbers is that number whose difference with either of the two numbers is the other number.
4. The quotient (or division) of two numbers is that number that measures either number in terms of the other.
5. If a unit is divided by a number into equal parts, then each of these parts of a unit, is called the reciprocal of that number.
6. Division by zero is undefined, because 0 does not measure any number.
7. The product (or multiplication) of two numbers is the quotient of either number with the reciprocal of the other.
8. The difference of any number and zero is the number.
Observe that all the basic arithmetic operations are defined in terms of difference.
Gabrielean Axioms of arithmetic explained:
1. The difference (or subtraction) of two positive numbers, is that positive number which describes how much the larger number exceeds the smaller.
Let the numbers be 1 and 4.
4 - 1 = 3 or |1 - 4| = 3
2. The difference of equal numbers is zero.
Let the numbers be k and k.
|k - k| = 0
3. The sum (or addition) of two given positive numbers, is that positive number whose difference with either of the two given numbers produces the other number.
Let the numbers be 1 and 4.
1 + 4 = 5 because 5 - 4 = 1 and 5 - 1 = 4
4. The quotient (or division) of two positive numbers is that positive number, that measures either positive number in terms of the other.
Let the numbers be 2 and 3.
2/3+2/3+2/3 = 6/3 = (6-2-2)/(3-1-1) = 2/1 = 2
3/2+3/2=6/2= (6-3)/(2-1)= 3/1 = 3
5. If a unit is divided by a positive number into equal parts, then each of these parts of a unit, is called the reciprocal of that positive number.
Let the positive number be 4.
The reciprocal is 1/4 and 1/4+1/4+1/4+1/4 = 1
6. Division by zero is undefined, because 0 does not measure any magnitude.
Since the consequent number is always the sum of equal parts of a unit, it follows clearly that no such number exists that when summed can produce 1, that is, no matter how many zeroes you add, you never get 1.
7. The product (or multiplication) of two positive numbers is the quotient of either positive number with the reciprocal of the other.
Let the numbers be 2 and 3.
1/2+1/2+1/2+1/2+1/2+1/2=3
1/3+1/3+1/3+1/3+1/3+1/3=2
8. The difference of any number and zero is the number.
Let the number be k.
|k-0|=|0-k|
Observe that all the basic arithmetic operations are defined in terms of the primitive operator called difference.
These are the true axioms of arithmetic and the definition of the arithmetic operators.
Axioms for negative numbers are easy to define with some trivial modification.
http://thenewcalculus.weebly.com
1 | 2015-05-24 22:22:53 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8407328724861145, "perplexity": 718.2443145059087}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928078.25/warc/CC-MAIN-20150521113208-00278-ip-10-180-206-219.ec2.internal.warc.gz"} |
https://crypto.stackexchange.com/questions/58485/does-storing-plain-data-length-leak-essential-information-about-the-cryptographi/58498 | # Does storing plain data length leak essential information about the cryptographic scheme?
Consider the function below:
def dec args=password, iv, encrypted
decipher = OpenSSL::Cipher.new('aes-128-cbc')
decipher.decrypt
decipher.padding = 0
decipher.key = Digest::SHA256.hexdigest(password) # just for simplicity; use PBKDF2 for real applications
decipher.iv = Base64.decode64(iv)
plain = decipher.update(Base64.decode64(encrypted)) + decipher.final
return plain
end
That function will correctly decipher a text, but it may add extra invisible chars, depending on the size of the original plain text. Then, the length must be stored in the file, together with encrypted and iv, and the decryption function becomes:
def dec args=password, iv, encrypted, length
decipher = OpenSSL::Cipher.new('aes-128-cbc')
decipher.decrypt
decipher.padding = 0
decipher.key = Digest::SHA256.hexdigest(password) # just for simplicity; use PBKDF2 for real applications
decipher.iv = Base64.decode64(iv)
plain = decipher.update(Base64.decode64(encrypted)) + decipher.final
return plain[0..(length-1)]
end
I suspect that the storing of the length in the encrypted file be a cryptographic weakness. What's the proper way of solving it?
## 1 Answer
I suspect that the storing of the length in the encrypted file be a cryptographic weakness.
It's not. Encryption is commonly considered secure whenever the ciphertext allows the attacker to infer nothing about the plaintext other than its size. Plaintext length is not considered a secret.
While this would not protect unpadded "yes"/"no" messages encrypted with one-symbol block size, that is a weakness of the entire communication scheme (using two string aliases to convey a bool), not the algorithm. With practical block sizes length could be used to identify video files, too. But this is simply outside the scope of encryption to protect.
If you have to protect plaintext length, use padding with any secure cryptographic scheme. | 2019-11-12 06:17:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3518097400665283, "perplexity": 5058.278983691056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664752.70/warc/CC-MAIN-20191112051214-20191112075214-00106.warc.gz"} |
https://lfe.gitbooks.io/sicp/ch2/exercises-7.html | ### Exercises
#### Exercise 2.24
Suppose we evaluate the expression (list 1 (list 2 (list 3 4))). Give the result printed by the interpreter, the corresponding box-and-pointer structure, and the interpretation of this as a tree diagram.
#### Exercise 2.25
Give combinations of cars and cdrs that will pick 7 from each of the following lists:
(1 3 (5 7) 9)
((7))
(1 (2 (3 (4 (5 (6 7))))))
#### Exercise 2.26
Suppose we define x and y to be two lists:
(defun x () (list 1 2 3))
(defun y () (list 4 5 6))
What result is printed by the interpreter in response to evaluating each of the following expressions:
(append (x) (y))
(cons (x) (y))
(list (x) (y))
#### Exercise 2.27
Modify your reverse function of exercise 2.18 to produce a deep-reverse function that takes a list as argument and returns as its value the list with its elements reversed and with all sublists deep-reversed as well. For example,
> (defun x () (list (list 1 2) (list 3 4)))
x
> (x)
((1 2) (3 4))
> (reverse (x))
((3 4) (1 2))
> (deep-reverse (x))
((4 3) (2 1))
#### Exercise 2.28
Write a function fringe that takes as argument a tree (represented as a list) and returns a list whose elements are all the leaves of the tree arranged in left-to-right order. For example,
> (defun x () (list (list 1 2) (list 3 4)))
> (fringe (x))
(1 2 3 4)
> (fringe (list (x) (x)))
(1 2 3 4 1 2 3 4)
#### Exercise 2.29
A binary mobile consists of two branches, a left branch and a right branch. Each branch is a rod of a certain length, from which hangs either a weight or another binary mobile. We can represent a binary mobile using compound data by constructing it from two branches (for example, using list):
(defun make-mobile (left right)
(list left right))
A branch is constructed from a length (which must be a number) together with a structure, which may be either a number (representing a simple weight) or another mobile:
(defun make-branch (length structure)
(list length structure))
a. Write the corresponding selectors left-branch and right-branch, which return the branches of a mobile, and branch-length and branch-structure, which return the components of a branch.
b. Using your selectors, define a function total-weight that returns the total weight of a mobile.
c. A mobile is said to be balanced if the torque applied by its top-left branch is equal to that applied by its top-right branch (that is, if the length of the left rod multiplied by the weight hanging from that rod is equal to the corresponding product for the right side) and if each of the submobiles hanging off its branches is balanced. Design a predicate that tests whether a binary mobile is balanced.
d. Suppose we change the representation of mobiles so that the constructors are
(defun make-mobile (left right)
(cons left right))
(defun make-branch (length structure)
(cons length structure))
How much do you need to change your programs to convert to the new representation? | 2019-01-18 09:20:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6485182642936707, "perplexity": 3012.364692259545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583660020.5/warc/CC-MAIN-20190118090507-20190118112507-00572.warc.gz"} |
http://www.etiquettehell.com/smf/index.php?topic=51263.msg3062839 | Author Topic: Special Snowflake Stories (Read 5901712 times)
0 Members and 2 Guests are viewing this topic.
HorseFreak
• Hero Member
• Posts: 2821
Re: Special Snowflake Stories
« Reply #24705 on: December 03, 2013, 10:16:36 PM »
SS Driver today while I was leaving work. Traffic is horrendous during rush hour in my city. My work's storefront is in a busy strip mall with two exits/entrances. A very busy, divided 45 mph main road runs down the front. Front exit only allows a right turn North. Rear exit allows a usually easy right turn East (downtown) or left to the West which has two lanes- one for making a left onto the Main Road and one for going straight West or turning right North. Going left to Main Road isn't too hard usually, but you do have to be careful of people whipping down the wrong side of the road to get into the left turning lane early.
SS decided they wanted to go West which has a turn lane accessible from the strip mall if you turn onto Main Road from the front exit, or make a very difficult to impossible left turn to go West down the side street from the rear exit during rush hour. Turn Lane was backed up a good 50' past the mall exit so he decided to go blasting THE WRONG WAY down Main Road to get into that lane! I thought they had just made a mistake and laid on my horn to alert them only to see them dart into the turn lane. They could have gone out the rear exit, turned East, made a quick U-turn in an abandoned parking lot and only taken a few minutes longer instead of vying for a Darwin Award.
andi
• Hero Member
• Posts: 1872
Re: Special Snowflake Stories
« Reply #24706 on: December 03, 2013, 10:25:51 PM »
HorseFreak - she sounds like the moms at my som's school during morning car pool. Giant "no left turn" signs and there they sit blocking traffic to make illegal, unsafe turns or driving the wrong way down the road
zyrs
• Hero Member
• Posts: 2077
• spiffily male.
Re: Special Snowflake Stories
« Reply #24707 on: December 04, 2013, 05:25:32 AM »
In light of the information in this thread, you'd better believe that the next time the SyFy channel takes submissions for their next TV movie, I'll be making sure that flesh-eating Nazi raccoons are on the list.
Virg
The theme song alone would be awesome. Kind of an "Attack of the Killer Tomatoes" and "G.I.Joe Theme Song" cross.
alkira6
• Member
• Posts: 994
Re: Special Snowflake Stories
« Reply #24708 on: December 04, 2013, 08:30:48 AM »
Movie SS.
My husband convinced me that seeing the new Thor movie on opening weekend was a thing to do. This visit just convinced me that movies aren't worth it anymore.
SS 1: The theater has stadium seating, but your view can be interrupted if people wear hats. SS in front of me was wearing a hat with an oversized pompom and was perched on top of her head rather than pulled down. I leaned forward and excused myself and asked if she would mind pulling her hat down or taking it off because it was blocking part of my view. Cue both her and her friend going on a profanity laden rant about - get this - rude people and how I could move my fat donkey's behind somewhere else. They were so obnoxious that someone else called the manager to settle them down.
SS 2: The couple that brought 2 small children to a 9 pm showing and made no effort to quiet the singing, dancing in the aisles, and screeching.
SS 3: everyone who just had to play on their phone during the movie. Flashing screens are very distracting.
All future movies will be watched at home or at the super awesome but more expensive theater in midtown. They have ninja ushers who don't put up with anything. Plus, they sell wine.
• Super Hero!
• Posts: 8621
• Operating the logic hammer since 1987.
Re: Special Snowflake Stories
« Reply #24709 on: December 04, 2013, 08:59:16 AM »
SS Parent driver this morning.
SS driver has her two small snowflakes in the back seat, who are frequently hitting each other, but that isn't the point.
SS driver has a long haired lap dog that she is playing with, instead of paying attention to traffic.
I was behind SSD and we were in the lane to turn right onto a 4 lane divided road. I notice that there are no cars coming, and look to see if SSD noticed. Not so much, so I tap on my horn. SSD glares at me in her rear view, places her dog on the other seat and does not go into the completely clear lanes until I honked again.
I ended up behind her on the road, and she was weaving from one side of the lane to the other. As I went to pass her safely (using my signals), she tried to change lanes into me (no signalling), so she got another honk.
I feel sorry for the kids and the dog.
jedikaiti
• Swiss Army Nerd
• Hero Member
• Posts: 2940
• A pie in the hand is worth two in the mail.
Re: Special Snowflake Stories
« Reply #24710 on: December 04, 2013, 10:51:08 AM »
SS Parent driver this morning.
SS driver has her two small snowflakes in the back seat, who are frequently hitting each other, but that isn't the point.
SS driver has a long haired lap dog that she is playing with, instead of paying attention to traffic.
I was behind SSD and we were in the lane to turn right onto a 4 lane divided road. I notice that there are no cars coming, and look to see if SSD noticed. Not so much, so I tap on my horn. SSD glares at me in her rear view, places her dog on the other seat and does not go into the completely clear lanes until I honked again.
I ended up behind her on the road, and she was weaving from one side of the lane to the other. As I went to pass her safely (using my signals), she tried to change lanes into me (no signalling), so she got another honk.
I feel sorry for the kids and the dog.
Ya know, that might be worth a call to the police dispatch line. I usually try to keep the non-911 numbers dot the local jurisdictions I'm in the most Sade in my phone for just such an occasion.
What part of v_e = \sqrt{\frac{2GM}{r}} don't you understand? It's only rocket science!
"The problem with re-examining your brilliant ideas is that more often than not, you discover they are the intellectual equivalent of saying, 'Hold my beer and watch this!'" - Cindy Couture
• Super Hero!
• Posts: 8621
• Operating the logic hammer since 1987.
Re: Special Snowflake Stories
« Reply #24711 on: December 04, 2013, 11:38:48 AM »
I have called before, in similar situations, but the dispatchers can't do much unless there is an officer right there.
GlitterIsMyDrug
• Hero Member
• Posts: 1120
Re: Special Snowflake Stories
« Reply #24712 on: December 04, 2013, 11:55:33 AM »
Movie SS.
My husband convinced me that seeing the new Thor movie on opening weekend was a thing to do. This visit just convinced me that movies aren't worth it anymore.
Let your husband know, that after a few weeks, nothing about the movie changes. Same actors, same story lines, same everything. Except less people because apparently many people do not know about this little secret.
I saw Thor Thanksgiving Weekend and the theater was practically empty. Pretty sure it was still the same movie.
siamesecat2965
• Super Hero!
• Posts: 9108
Re: Special Snowflake Stories
« Reply #24713 on: December 04, 2013, 12:02:07 PM »
Movie SS.
My husband convinced me that seeing the new Thor movie on opening weekend was a thing to do. This visit just convinced me that movies aren't worth it anymore.
Let your husband know, that after a few weeks, nothing about the movie changes. Same actors, same story lines, same everything. Except less people because apparently many people do not know about this little secret.
I saw Thor Thanksgiving Weekend and the theater was practically empty. Pretty sure it was still the same movie.
Yup. this is why, when I still went to the movies, I'd wait a week or two, and then try and go on an "off" time or day. So not worth the aggrevation of going when it first opens, and being squished into a fully packed theater, and dealing wiht all sorts of SSs.
BarensMom
• Hero Member
• Posts: 2645
Re: Special Snowflake Stories
« Reply #24714 on: December 04, 2013, 12:13:29 PM »
Movie SS.
My husband convinced me that seeing the new Thor movie on opening weekend was a thing to do. This visit just convinced me that movies aren't worth it anymore.
Let your husband know, that after a few weeks, nothing about the movie changes. Same actors, same story lines, same everything. Except less people because apparently many people do not know about this little secret.
I saw Thor Thanksgiving Weekend and the theater was practically empty. Pretty sure it was still the same movie.
However, if one wants to see it in IMAX 3D, one pretty much has to go the first week the movie is out, otherwise it gets bumped from the special theater.
I usually go to either a very early or very late showing, and haven't had any problems with rude patrons. It might also help that I go all the way to the Regal in Dublin (CA). The management/employees don't put up with any shenanigans there.
violinp
• Hero Member
• Posts: 3665
• cabbagegirl28's my sister :)
Re: Special Snowflake Stories
« Reply #24715 on: December 04, 2013, 12:28:22 PM »
Movie SS.
My husband convinced me that seeing the new Thor movie on opening weekend was a thing to do. This visit just convinced me that movies aren't worth it anymore.
Let your husband know, that after a few weeks, nothing about the movie changes. Same actors, same story lines, same everything. Except less people because apparently many people do not know about this little secret.
I saw Thor Thanksgiving Weekend and the theater was practically empty. Pretty sure it was still the same movie.
Yup. this is why, when I still went to the movies, I'd wait a week or two, and then try and go on an "off" time or day. So not worth the aggrevation of going when it first opens, and being squished into a fully packed theater, and dealing wiht all sorts of SSs.
Amen. I'm not even allowed to see certain movies before they've been out for a couple weeks (I work in a theater) and it has to be an "off" time, and I honestly am less stressed when it's just 4 other people and me watching a movie.
"It takes a great deal of courage to stand up to your enemies, but even more to stand up to your friends" - Harry Potter
Piratelvr1121
• Super Hero!
• Posts: 11781
Re: Special Snowflake Stories
« Reply #24716 on: December 04, 2013, 12:34:12 PM »
I like going out to the movies in the mornings. One local theater will show movies for $5.50 Saturday and Wednesday mornings before noon, and there are theaters at this place that have really nice leather recliners and any other time it costs more to see a movie in those theaters. But before noon on Saturdays it's$5.50 and I love taking advantage of that! That and seats are reserved so you don't have to worry about people insisting you move over because they couldn't get there in time. Also this theater has a strict policy on cell phones. Once the movie starts, if you are caught with one on and in use during the movie you get escorted out without a refund.
Beyond a wholesome discipline, be gentle with yourself. You are a child of the universe, no less than the trees and the stars. You have a right to be here. Be cheerful, strive to be happy. -Desiderata
MissRose
• Hero Member
• Posts: 2968
Re: Special Snowflake Stories
« Reply #24717 on: December 04, 2013, 03:35:03 PM »
Very few movies are worth me going immediately to see them. I usually wait a few days then go. Otherwise, its easier to buy on demand with cable, sit at home with my own food and drink, when I want etc to watch a newer release.
Piratelvr1121
• Super Hero!
• Posts: 11781
Re: Special Snowflake Stories
« Reply #24718 on: December 04, 2013, 06:56:39 PM »
I'm mostly the same way but if there's a movie out with Johnny Depp, that gets seen first chance I get! Preferably opening weekend.
Other movies I can generally wait till they come out to DVD.
Beyond a wholesome discipline, be gentle with yourself. You are a child of the universe, no less than the trees and the stars. You have a right to be here. Be cheerful, strive to be happy. -Desiderata
Dindrane
• Super Hero!
• Posts: 15517
Re: Special Snowflake Stories
« Reply #24719 on: December 04, 2013, 09:39:52 PM »
Movie SS.
My husband convinced me that seeing the new Thor movie on opening weekend was a thing to do. This visit just convinced me that movies aren't worth it anymore.
Let your husband know, that after a few weeks, nothing about the movie changes. Same actors, same story lines, same everything. Except less people because apparently many people do not know about this little secret.
I saw Thor Thanksgiving Weekend and the theater was practically empty. Pretty sure it was still the same movie.
My husband and I both want to see Frozen, and I really wanted to see Thor. So when we went to the movies on Thanksgiving weekend, I worked really hard to convince him that we should go see Thor.
Smart man that he is, he agreed with me, and we both patted ourselves on the back for our deductive reasoning skills when we saw the gaggle of small children clearly there to see Frozen. The theater for Thor was still relatively full, but not jam-packed (and we were there early enough to get seats we were happy with).
I like watching movies like Frozen with kids in the audience, and I don't mind their audible reactions, but not on opening weekend! We'll go this weekend or the weekend after instead, and it will be much nicer. | 2015-01-25 14:18:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2547506093978882, "perplexity": 5083.514213187585}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122087108.30/warc/CC-MAIN-20150124175447-00049-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://3dprinting.stackexchange.com/questions/13353/thermal-runaway-triggers-when-raising-temperature-amid-cooldown | # Thermal runaway triggers when raising temperature amid cooldown
I've noticed an interesting behavior on my Ender 3 with SKR Mini E3 mainboard and Marlin 2.0.x bugfix firmware. (otherwise, all other hardware is entirely stock) After the hotend/bed are commanded to cool down, e.g. after a print completes, I have to wait until after they both cool down to ambient before commanding another temperature setpoint.
If I don't do this, the printer most often triggers thermal runaway protection. (usually citing the extruder, but also sometimes the bed) I think this might be due to the thermal inertia in the material between the heater and thermistors, causing a 5-10 second delay in sensed temperature rise. I don't see any reason why thermal runaway should trigger; the Octoprint temperature graph looks completely normal, with no perceptible anomalies.
Is there some way to tune parameters for thermal runaway protection to alleviate this false-positive situation?
* If you get false positives for "Thermal Runaway", increase | 2021-10-23 10:49:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4717920124530792, "perplexity": 4532.938180735931}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585671.36/warc/CC-MAIN-20211023095849-20211023125849-00158.warc.gz"} |
https://www.gradesaver.com/textbooks/math/calculus/university-calculus-early-transcendentals-3rd-edition/chapter-8-section-8-4-integration-of-rational-functions-by-partial-fractions-exercises-page-446/34 | ## University Calculus: Early Transcendentals (3rd Edition)
$$\int\frac{x^4}{x^2-1}dx=\frac{x^3+3x}{3}+\frac{1}{2}\ln\Big|\frac{x-1}{x+1}\Big|+C$$
$$I=\int\frac{x^4}{x^2-1}dx$$ 1) Perform long division: We perform long division to rewrite the fraction as a polynomial plus a proper fraction, which is $$\frac{x^4}{x^2-1}=x^2+1+\frac{1}{x^2-1}$$ 2) Express the remainder fraction as a sum of partial fractions: $$\frac{1}{x^2-1}=\frac{1}{(x-1)(x+1)}=\frac{A}{x+1}+\frac{B}{x-1}$$ Clear fractions: $$Ax-A+Bx+B=1$$ $$(A+B)x+(B-A)=1$$ Equating coefficients of corresponding powers of $x$, we get $A+B=0$ $B-A=1$ Solving for $A$ and $B$, we get $A=-1/2$ and $B=1/2$. Therefore, $$\frac{1}{x^2-1}=\frac{1}{(x-1)(x+1)}=-\frac{1}{2}\frac{1}{x+1}+\frac{1}{2}\frac{1}{x-1}$$ 2) Evaluate the integral: $$I=\int(x^2+1)dx-\frac{1}{2}\int\frac{1}{x+1}dx+\frac{1}{2}\int\frac{1}{x-1}dx$$ $$I=\frac{x^3}{3}+x-\frac{1}{2}\ln|x+1|+\frac{1}{2}\ln|x-1|+C$$ $$I=\frac{x^3+3x}{3}+\frac{1}{2}\ln\Big|\frac{x-1}{x+1}\Big|+C$$ | 2019-12-13 23:24:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9831188321113586, "perplexity": 169.3576146656359}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540569332.20/warc/CC-MAIN-20191213230200-20191214014200-00005.warc.gz"} |
http://sachinashanbhag.blogspot.com/ | ## Monday, September 25, 2017
### Prony Method
Given N equispaced data-points $F_i = F(t = i \Delta t)$, where $i = 0, 1, ..., N-1$, the Prony method can be used to fit a sum of m decaying exponenitals: $F(t) = \sum_{i=1}^{m} a_i e^{b_i t}.$ The 2m unknowns are $a_i$ and $b_i$.
In the Prony method, the number of modes in the exponential (m) is pre-specified. There are other methods, which are more general.
Here is a python subprogram which implements the Prony method.
If you have arrays t and F, it can be called as:
a_est, b_est = prony(t, F, m) | 2017-09-26 08:56:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7828739881515503, "perplexity": 2282.8965019220286}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818695375.98/warc/CC-MAIN-20170926085159-20170926105159-00416.warc.gz"} |
https://hsm.stackexchange.com/questions/4914/galileos-pendulum-and-any-references | # Galileo's pendulum and any references
In some texts about the simple pendulum we use to see references about some "experiments" Galileo Galilei did realize and whereby he found some important results, including that the period of the pendulum is proportional to the square root of the lenght of the (stem of the) pendulum. I would like to know if there is any book ou document wrote by Galileo (on the Dialogues or the Discourses) where I can find the explanation about these experiments realized by Galileo. | 2020-09-27 05:11:22 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9439931511878967, "perplexity": 574.6813360945034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400250241.72/warc/CC-MAIN-20200927023329-20200927053329-00529.warc.gz"} |
http://physics.qandaexchange.com/?qa=1014/photocell-in-the-circuit | # Photocell in the circuit
73 views
A bulb B having a resistance of 100 ohm converts 31 % of its electrical energy into radiation having wavelength less than 5000 A and with an average wavelength of 4000 A in this range. 10% of this radiation is incident on a photocell D of efficiency 20% and is converted into an electric current. The photo electric threshold of the material is 5000 A.
The photocell (D) is connected in parallel with a resistance R (100 ohm) and a capacitance C (=100 $\mu$F) and finally in series with the bulb B & a source of emf E = 180 V. The key is now closed.
Now we have to find the charge on C and power dissipiated in (both in steady state)
I am not getting any start, now I am able to understand the given solution in my book .
asked Jan 24, 2017
Please be more specific about what difficulty you are having with the solution given. Which line do you not understand?
Sorry, but as said I have not getting any start that is difficulty from first step.
If that is the case, I think you need to do some more studying. You can return to the problem later. If you cannot even make an attempt to solve the problem, you are not ready for it.
In this what they have done when we got i =9.1 | 2019-09-15 16:52:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6378462314605713, "perplexity": 559.3247881116106}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514571651.9/warc/CC-MAIN-20190915155225-20190915181225-00119.warc.gz"} |
https://www.physicsforums.com/threads/vector-basic-problem.927806/page-2 | # Vector basic problem
1. Oct 9, 2017
### Orodruin
Staff Emeritus
When you insert a bold letter you are in actuality insertin BBcode into your LaTeX expression. If you want to make something bold in LaTeX, you must use LaTeX commands. But in general there is no need to make a vector bold and to have a vector arrow.
2. Oct 9, 2017
### Mathematicsss
The question is as follows: Find the position vector of the midpoint AB.
3. Oct 9, 2017
### FactChecker
Aha! Yes, that is different and it is well defined. The midpoint of the line from point A to point B is at (A+B)/2 = ( (3+1)/2, (2+3)/2, (5+2)/2 ) = ( 2, 2.5, 3.5 ). And the vector to that point is based at (0,0,0) and denoted by $\vec{( 2, 2.5, 3.5 )}$.
You can also get it by taking your vector of the OP, $\vec{AB}$, and adding it to the position vector of point A.
$\vec A + \vec {AB} = \vec{(3,2,5)} + \vec{(-1,0.5,-1.5)} = \vec{(2,2.5,3.5)}$
Last edited: Oct 9, 2017
4. Oct 9, 2017
### FactChecker
Thanks. I also tried \textbf{}, which didn't work. Maybe I did it wrong. Anyway, I agree that it was not needed and I stopped trying.
5. Oct 9, 2017
### Orodruin
Staff Emeritus
If I have to put bold stuff in equations, I usually use \boldsymbol instead. I think it looks much better as it uses an italic math font instead of text font - also it works for Greek letters. Here is the difference compared to using \textbf or {\bf }:
$$\boldsymbol{a}\cdot\boldsymbol{\mu} = \textbf{a}\cdot \textbf{\mu} = {\bf a}\cdot{\bf \mu}$$
Either way, I much prefer using \vec as it looks more similar to what you would write by hand and I think it is just confusing for students that we write things one way in typeset text and another on the blackboard.
6. Oct 9, 2017
### Ray Vickson
The commands "\bf{x}" or "\mathbf{x}" both produce $\bf{x}$ and $\mathbf{x}$. The commands "\mathbf{x}^2" and "\mathbf{x^2}" produce $\mathbf{x}^2$ and $\mathbf{x^2}$, one of which has a bold superscript and the other not. The command \mathbf{x \times \alpha}" produces $\mathbf{x \times \alpha}$ (with bold $\bf x$ but non-bold $\alpha$ and non-bold $\times$), while "\boldsymbol{x \times \alpha}" produces $\boldsymbol{x \times \alpha}$, where everything is in bold. | 2017-12-17 20:16:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9910170435905457, "perplexity": 1310.656908377584}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948597485.94/warc/CC-MAIN-20171217191117-20171217213117-00379.warc.gz"} |
http://dataprivacy.askbot.com/question/46994/the-correct-type-of-quick-divorce/ | # The Correct Type Of Quick Divorce
The great factor is that there is no charge for the first consultation session. The very best divorce attorneys are intelligent sufficient to make you out so their first session with the consumer is more of an job interview than a session session!
Therefore, do not overlook the cash aspect. Although you should not focus on exclusively the cost, but it pays to be cautious about the fee charged by the attorney. Some how do i get My divorce papers online attorneys like to take a flat fee whilst some cost on hourly basis. And do we require to tell that you should go only for an experienced online divorce lawyer? Anyways, you would barely want a attorney fresh out of legislation school to handle your delicate case.
One of the benefits of doing it online is that you know the types are completed correctly. The responsibility for the accuracy of the info is yours no matter how you file for divorce but getting the types finished correctly avoids the need to redo them at any phase in the divorce process.
We the people, situated at 2722 S. Brentwood Blvd Saint Louis, Mo 63144, was the option I selected to go with when working with my divorce. We The Individuals charge $445 to draw up the paper work and assist you file them out correctly. My ex-spouse and I went fifty percent on the bill and was extremely confident about the process. Everything went easily and I didn't anticipate anything less, contemplating the fact that we were paying almost$500 for the services. The assistant understood of our scenario and was knowledgeable that we were looking to file an uncontested divorce. My ex-spouse and I had been apart for almost five many years, so the situation in between the two of us wasn't difficult to figure out. I was also eight months pregnant by my present fiancee, so it was expectant that the clerk understood what papers to file.
In purchase to file the divorce online, it is essential that each the partner agrees for it and therefore the court is happy that each the celebration wants to participate in the listening to. There are numerous online websites on the internet from exactly where the online divorce types can be downloaded. Some of these websites charge some fees whereas some of them arrive free of price as nicely. It all depends on you to select the site. Nevertheless, the expert will always suggest that it is great to pay some charges as then the form which will be received will be free from virus and will be original as well.
Ask about. Take advantage of your circle of friends and family. Inquire for a referral or any info that might assist you discover a good lawyer. People who have formerly gone via a divorce are the best ones to inquire about this, because they have experienced some real experience with the entire procedure, and have at minimum a obscure idea of what should be a good lawyer. If you nonetheless discover it tough to get dependable information from individual acquaintances, then it is off to the World Broad Internet you go, and perhaps think about obtaining an how do i get My divorce papers online.
You have to make sure that you and your mate agree to get divorced. To do an online divorce it has to be and "Uncontested Divorce." So as long as the two of you agree to the terms of the divorce an how do i get My divorce papers online is for you. In the occasion exactly where you don't know where your spouse is, you can nonetheless do an Online Divorce. Check with your online divorce supplier for much more info on this process.
? Initial of all, you require to do a small research. Look in depth for your laws of the condition. Go to any search motor and appear for the divorce regulations in your state. It will permit you to prevent any of the illegal functions throughout the how do i get My divorce papers online Jacksonville.
Status - Standing is interesting since it's not important to survival. Nevertheless, status impacts popularity. As a individual gets to be more popular, the number of social links that person has raises. I use the phrase social hyperlinks since these relationships are not assured to be positive nor negative
edit retag close delete | 2019-08-25 15:55:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2518337666988373, "perplexity": 1358.2908234360384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330750.45/warc/CC-MAIN-20190825151521-20190825173521-00228.warc.gz"} |
https://www.physicsforums.com/threads/writing-this-series-as-a-hypergeometric-series.593666/ | # Writing this series as a hypergeometric series
1. Apr 5, 2012
### Ted123
1. The problem statement, all variables and given/known data
Write $$\displaystyle \sum_{k=0}^{\infty} \frac{1}{9^k (\frac{2}{3})_k} \frac{w^{3k}}{k!}$$ in terms of the Gauss hypergeometric series of the form $_2 F_1(a,b;c;z)$.
2. Relevant equations
The Gauss hypergeometric series is http://img200.imageshack.us/img200/5992/gauss.png [Broken]
3. The attempt at a solution
It's nearly a series of that form if I put $z=w^3$ and $k=n$ but how do I get the $9^{-k} = 3^{-k}3^{-k}$ factors in terms of shifted factorials (that is if I need to)?
Last edited by a moderator: May 5, 2017
2. Apr 5, 2012
### clamtrox
That term does not go into the factorials, it goes into z^n.
3. Apr 5, 2012
### Ted123
Ah of course. So if I put $z=\frac{w^3}{9}$ then the series can be written as $_2 F_1 (a,b ; \frac{2}{3} ; \frac{w^3}{9})$ for some $a$ and $b$ with $(a)_n(b)_n = 1$ for all $n=0,1,2,...$ Can I just pick $a=b=0$?
4. Apr 5, 2012
### clamtrox
I would do some extra checking to be sure that that's right. Can you plug the solution into the hypergeometric differential equation with a=b=0 and see if it solves it?
5. Apr 5, 2012
### Ted123
Actually $$(0)_n (0)_n \neq 1$$ for n=0,1,2,... so how do I get 2 shifted factorials to equal 1? | 2017-11-18 12:54:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8621125817298889, "perplexity": 748.4115908780252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804881.4/warc/CC-MAIN-20171118113721-20171118133721-00586.warc.gz"} |
https://tex.stackexchange.com/questions/346686/mix-warsaw-and-madrid-beamer-theme | # Mix Warsaw and Madrid beamer theme
This is my first question. Be patience please.
In beamer class, I want the headline of Warsaw (you know, not only current section and subsection but also all sections and all subsections of the current section) and the footline of Madrid: author, university, short title, date and page number.
If you could also remove the toolbar in bottom right corner of the frame it would be perfect.
Madrid uses the footline from the infolines outer theme. You can simply copy the definition to use it together with Warsaw:
\documentclass{beamer}
\usetheme{Warsaw}
% from infolines outer theme
\makeatletter
\setbeamertemplate{footline}{%
\leavevmode%
\hbox{%
\end{beamercolorbox}%
\end{beamercolorbox}%
\end{beamercolorbox}}%
\vskip0pt%
}
\makeatother | 2020-10-29 20:19:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7816928029060364, "perplexity": 4547.36502327905}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905777.48/warc/CC-MAIN-20201029184716-20201029214716-00669.warc.gz"} |
http://www.christophschiller.ch/Product/a597e50502194.html | ### Amp Hour to Watt Hour Calculator - Calculator Academy
Wh = Ah * V. Where Wh is watt hours; Ah is amp hours; V is the voltage (volts) Amp Hour to Watt Hour Definitin. Amp hour to watt hour is a conversion of total amps per hour to total watts per hour. Amp Hour to Watt Hour Example. The following is an example of how you can convert amp hours into watt hours using the formula above. Convert Amp hour to Watt hour (Ah to Wh)Insert Watts-hour (Wh) and voltage (V) below and click on Calculate to obtain Amp-hours (Ah). Wh. V. Ah. Formula is (Wh)/ (V) = (Ah). For example, if you have a 10Wh battery rated at 5V, the current is 10Wh / 5V = 2Ah. Other Conversions.
### DIY Projects and Ideas - The Home Depot
Empower yourself with the insight and know-how it takes to tackle projects the right way, when you need it. You dont have to be in-store to get assistance with your DIY projects for the home. Ebike Battery Math:Volts, Amps, Amp Hours - Luna Cycle
• Volume and WeightVoltsAmps-Tools For Measuring The Above NumbersCharging MathEbike Math; Now You Got ItEnd of StoryThe first one is easy one to quantify because we all have been taught volume and weight since grade school. The girl in the picture holds a 20ah 52 volt battery pack which holds a kilowatt of energy (1000 watt hours) and weighs around 12 pounds. A pack holding that much juice 10 years ago would be made of lead acid and would be heavier than a car battery and we would need a different and much bigger girl to even lift it up. The most precious commodity we have as ebike builders is space, andnone of us waHow to Determine Amp Hr Rate - Lead Acid Batteries Dec 14, 2017 · I compared an 18 AH rated battery we sell and entered its ratings into our Calculator Determine Run Time for Specific Load, and that battery could only last 12 min given the 30 amp load before you start to excessivley discharge the battery. As far as your second question you would only see an increase in capacity, however it isnt recommend How To Calculate Ah (Amp hours) and Wh (Watt hours) of a Mar 01, 2021 · My batteries are 12 volts each and rated for 100 Ah each (amp hours). What about the Amp hour rating? Put simply, you might say that this battery will provide 1 amp for 100 hours. Or 20 amps for 5 hours. You get the picture? Volts times Amps equals Watts. So this battery will provide 12 x 100 = 1200 Watt hours. Or 1.2 kWh (kilowatt hours) of
### How To Calculate Watts Per Hour For Each Electronic Device
We use that and multiply it by how many hours a day we think the lights will be on. 24W * 3.5 hours = 84Wh per day. LED light strips can be cut to size. If you were only looking to use half the number of lights (8.2 instead of 16.4), then divide this number in half. 12 * 3.5 hours = 42Wh per day. LED Light Strip. How To Calculate Your Motor Run Time - Newport Vessels50 AH / 52 amps drawn = .96 hours. If Only Watts Is Given 624 watts drawn / 12 volt battery = 52 amps drawn 50 AH / 52 amp drawn = .96 hours. Most manufacturers will only list the amps/watts being drawn at top speed. You can calculate an estimate of amp drawn at different speeds using the max speed provided as your starting point. How to calculate battery run-time when design equipment Apr 23, 2021 · C = 2.88 AH / 0.8 = 3.6 AH. Step 3:Rate of discharge considerations. Some battery chemistries give much fewer amp hours if you discharge them fast. This is called the Peukart effect. This is a big effect in alkaline, carbon zinc, zinc-air and lead acid batteries.
### Identify your iPhone model - Apple Support
Apr 29, 2021 · When measured as a standard rectangular shape, the screen is 5.42 inches (iPhone 12 mini), 5.85 inches (iPhone X, iPhone XS, and iPhone 11 Pro), 6.06 inches (iPhone 12 Pro, iPhone 12, iPhone 11, and iPhone XR), 6.46 inches (iPhone XS Max and iPhone 11 Pro Max), and 6.68 inches (iPhone 12 Pro Max) diagonally. batteries - How to Calculate the time of Charging and Discharge time is basically the Ah or mAh rating divided by the current. So for a 2200mAh battery with a load that draws 300mA you have:\$\frac{2.2}{0.3} = 7.3 hours\$ * The charge time depends on the battery chemistry and the charge current. For NiMh, for example, this would typically be 10% of the Ah rating for 10 hours.How to Calculate the Battery Charging Time & Battery Therefore, 120 + 48 = 168 Ah ( 120 Ah + Losses) Now Charging Time of battery = Ah / Charging Current. Putting the values; 168 / 13 = 12.92 or 13 Hrs ( in real case) Therefore, an 120Ah battery would take 13 Hrs to fully charge in case of the required 13A charging current. Related Posts:Battery backup time formula with Solar panel Installation | 2022-01-21 13:47:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2817152142524719, "perplexity": 3552.5637495705955}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303385.49/warc/CC-MAIN-20220121131830-20220121161830-00706.warc.gz"} |
https://buy-essay.com/answered-essay-locate-the-results-of-a-recent-survey-that-shows-at-least-two-variables-in-a-newspaper/ | # Answered Essay: Locate the results of a recent survey that shows at least two variables in a newspaper,
Locate the results of a recent survey that shows at least two variables in a newspaper, magazine, or internet article. Outline the survey data so that your peers can understand the variables and results, and then identify at least one key formula from this module that you could use to evaluate the data. Provide a brief explanation of why you selected the formula you did and why it matters.
Survey: Number of ice cream sold by a local ice cream shop versus the temperature on that day for 12 days.
Ice Cream Sales vs Temperature
Temperature °C Ice Cream Sales
(Y) (X)
14.2° \$215
16.4° \$325
11.9° \$185
15.2° \$332
18.5° \$406
22.1° \$522
19.4° \$412
25.1° \$614
23.4° \$544
18.1° \$421
22.6° \$445
17.2° \$408
Below is the scatter plot of the above data:
As we can see the data is following somewhat linear curve, we can use correlation/simple linear regression as our
evaluation method.
The correlation can be calculated using the formula –
From the above data we get –
$\sum xy = 95506.6$$\sum x = 4829$$\sum Y = 224.1$$\sum X^{2} = 2118025$ $and \sum Y^{2} = 4362.05$
So, the correlation will be –
$r = \frac{95506.6-\frac{4829 \times 224.1}{12}}{\sqrt{[2118025-\frac{(4829)^{2}}{12}][4362.05-\frac{(224.1)^{2}}{12}]}}$
So, r = 0.9575.
This shows a very high linear relationship between the two variables.
Pages (550 words)
Approximate price: -
Help Me Write My Essay - Reasons:
Best Online Essay Writing Service
We strive to give our customers the best online essay writing experience. We Make sure essays are submitted on time and all the instructions are followed.
Our Writers are Experienced and Professional
Our essay writing service is founded on professional writers who are on stand by to help you any time.
Free Revision Fo all Essays
Sometimes you may require our writers to add on a point to make your essay as customised as possible, we will give you unlimited times to do this. And we will do it for free.
Timely Essay(s)
We understand the frustrations that comes with late essays and our writers are extra careful to not violate this term. Our support team is always engauging our writers to help you have your essay ahead of time.
Customised Essays &100% Confidential
Our Online writing Service has zero torelance for plagiarised papers. We have plagiarism checking tool that generate plagiarism reports just to make sure you are satisfied.
Try it now!
## Calculate the price of your order
Total price:
\$0.00
How it works?
Fill in the order form and provide all details of your assignment.
Proceed with the payment
Choose the payment system that suits you most.
HOW OUR ONLINE ESSAY WRITING SERVICE WORKS
Let us write that nagging essay.
By clicking on the "PLACE ORDER" button, tell us your requires. Be precise for an accurate customised essay. You may also upload any reading materials where applicable.
Pick A & Writer
Our ordering form will provide you with a list of writers and their feedbacks. At step 2, its time select a writer. Our online agents are on stand by to help you just in case.
Editing (OUR PART)
At this stage, our editor will go through your essay and make sure your writer did meet all the instructions. | 2022-12-05 18:31:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3510119616985321, "perplexity": 3071.594616098849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711042.33/warc/CC-MAIN-20221205164659-20221205194659-00323.warc.gz"} |
http://www.math.cmu.edu/PIRE/pub/publication.php?Publication=121 | # Science at the triple point between mathematics, mechanics and materials science
## Publication 121
### Nonlocal Interaction Equations in Environments with Heterogeneities and Boundaries
##### Authors:
Lijiang Wu
Department of Mathematical Sciences
Carnegie Mellon University
Pittsburgh, PA 15213
Dejan Slepčev
Department of Mathematical Sciences
Carnegie Mellon University
Pittsburgh, PA 15213
##### Abstract:
We study well-posedness of a class of nonlocal interaction equations with spatially dependent mobility. We also allow for the presence of boundaries and external potentials. Such systems lead to the study of nonlocal interaction equations on subsets $M$ of $R^d$ endowed with a Riemannian metric $g$. We obtain conditions, relating the interaction potential and the geometry, which imply existence, uniqueness and stability of solutions. We study the equations in the setting of gradient flows in the space of probability measures on $M$ endowed with Riemannian 2-Wasserstein metric.
##### Get the paper in its entirety
13-CNA-023.pdf
Back to Publications | 2018-06-23 10:16:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7691023945808411, "perplexity": 1331.0069416022893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864957.2/warc/CC-MAIN-20180623093631-20180623113631-00379.warc.gz"} |
http://www.dimostriamogoldbach.it/en/number-theory-statements/ | # Number theory statements
A list of the statements appearing in our posts about number theory follows.
Theorem N.1: Fundamental theory of arithmetic (not proved) Every integer number greater than 1 can be written as a product of prime numbers. Moreover such expression, called factorization or decomposition into prime factors, is unique apart from the order of the factors.
Theorem N.2: Infinity of primes The number of primes is infinite.
Property N.1: Overestimation of binomials For all $n > 0$ and for all $m$: $\binom{n}{m} \leq 2^{n-1}$
Property N.2: Underestimation of the central binomials of the Pascal’s triangle For all $n \geq 2$: $\binom{n}{\frac{n}{2}} \geq \frac{2^n}{n}$ if $n$ is even, and $\binom{n}{\left \lceil \frac{n}{2} \right \rceil} = \binom{n}{\left \lfloor \frac{n}{2} \right \rfloor} \geq \frac{2^n}{n}$ if $n$ is odd.
Property N.3: Underestimation of the central binomial of Pascal’s triangle, even $n$ For all even $n \geq 2$: $\binom{n}{\frac{n}{2}} \geq \sqrt{2^{n}}$
Proposition N.1: Majorization of the product of primes up to $x$ For all $x > 0$: $\theta^{\star}(x) = \prod_{1 \leq p \leq x} p \leq 2^{2(x-1)}$
Theorem N.3: Bertrand’s postulate For all $n > 0$, there exists a prime number between $n + 1$ and $2n$.
Lemma N.1: Factorization of the binomial $\binom{2n}{n}$ In the factorization of the binomial $\binom{2n}{n}$, for $n > 0$, a prime number $p$ cannot have an exponent greater than $\log_p 2n$.
Proposition N.2: Computation of $\psi^{\star}(x)$ For every integer $x > 0$: $\psi^{\star}(x) = \prod_{p \leq x} p^{\left \lfloor \log_p x \right \rfloor}$
Corollary of Proposition N.2: Overestimation of $\psi^{\star}(x)$ with $\pi(x)$ For every integer $x > 0$: $\psi^{\star}(x) \leq x^{\pi(x)}$
Proposition N.3: Underestimation of $\psi^{\star}(x)$ For every integer $x > 0$: $\psi^{\star}(x) \geq \sqrt[3]{2^x}$
Lemma N.2: Reformulation of $\psi^{\star}(x)$ (“calculation by rows”) For every integer $x > 0$: $\psi^{\star}(x) = \left(\prod_{p \leq x} p\right) \cdot \left(\prod_{p^2 \leq x} p\right) \cdot \dots \cdot \left(\prod_{p^R \leq x} p\right)$
Proposition N.4: Connection between $\psi^{\star}$ and $\theta^{\star}$ functions For every integer $x > 0$: $\psi^{\star}(x) = \theta^{\star}(x) \cdot \theta^{\star}(\sqrt{x}) \cdot \dots \cdot \theta^{\star}(\sqrt[R]{x})$, where $R := \left \lfloor \log_2 x \right \rfloor$.
Lemma N.3: Underestimation of $\theta^{\star}(x)$ through $\pi(x)$ For every real number $\delta \geq 0$ and for every $x \gt 1$: $\theta^{\star}(x) \gt \left(x^{\delta}\right)^{\pi(x) - x^{\delta}}$.
Theorem N.4: Asymptotic equivalence and order of magnitude of $\theta(x)$ and $\psi(x)$ The functions $\theta(x)$ and $\psi(x)$ are asymptotically equivalent and have order $x$: $\theta(x) \sim \psi(x)$, $\theta(x) \asymp x \asymp \psi(x)$.
Theorem N.5: Asymptotical equivalence between $\pi(x)$ and $\frac{\theta(x)}{\log x}$ $\pi(x) \sim \frac{\theta(x)}{\log x}$
Corollary of Theorem N.5: Asymptotical equivalence between $\pi(x)$ and $\frac{\psi(x)}{\log x}$ $\pi(x) \sim \frac{\psi(x)}{\log x}$
Corollary II of Theorem N.5: Chebyshev’s Theorem: order of magnitude of $\pi(x)$ $\pi(x) \asymp \frac{x}{\log x}$
Lemma N.4: Lemma of bar chart area Let $c_1, c_2, \dots, c_n$ be natural numbers, with $n > 0$. Let $f: \{1, 2, ..., n\} \rightarrow \mathbb{R}$ be a function. Then the area $A$ of the bar chart made up of $n$ rectangles, each having basis $c_i$ and height $f(i)$, given by $A = c_1 f(1) + c_2 f(2) + \ldots + c_n f(n) = \sum_{i=1}^{n} c_i f(i)$, can also be computed with the formula \begin{aligned}A &= C_n f(n) + C_{n-1} (f(n-1) - f(n)) + \ldots + C_1 (f(1) - f(2)) \\&= \sum_{k = 1}^{n-1} C_k (f(k) - f(k + 1)) + C_n f(n)\end{aligned} where $C_k := c_1 + c_2 + \ldots + c_k = \sum_{i=1}^{k} c_i$.
Lemma N.5: Lemma of bar chart area, second form Let $c_1, c_2, \dots, c_n$ be natural numbers, with $n > 0$. Let $f: \{1, 2, ..., n\} \rightarrow \mathbb{R}$ be a function, and $\widetilde{f}$ a $C^1$ extension of $f$. Then the area $A$ of the bar chart made up of $n$ rectangles, each with basis $c_i$ and height $f(i)$, given by $A = c_1 f(1) + c_2 f(2) + \ldots + c_n f(n) = \sum_{i=1}^{n} c_i f(i)$, can be also computed with the formula $$C(n) f(n) - \int_1^n \overline{C}(k) \widetilde{f}'(t) dt$$ where $C: \{1, 2, ..., n\} \rightarrow \mathbb{N}$ is the function defined by $C(k) := c_1 + c_2 + \ldots + c_k = \sum_{i=1}^{k} c_i$.
Theorem N.6: Approximation of the sum of inverses of the first positive integers
For all integers $n \gt 0$:
$$\sum_{k = 1}^n \frac{1}{k} = 1 + \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{n} \approx \log n + \gamma$$
where logarithm base is Napier’s number $e$, while $\gamma \approx 0.58$ is the Euler’s constant. In particular:
$$\sum_{k = 1}^n \frac{1}{k} = 1 + \frac{1}{2} + \frac{1}{3} + \ldots + \frac{1}{n} = \log n + \gamma + O\left(\frac{1}{n}\right)$$
Property N.4: Upper and lower bounds for the value assumed by a function defined on integer numbers, by means of integrals of an extension
Let $f: I \rightarrow \mathbb{R}$ be a function defined on a set $I \subset \mathbb{Z}$.
Let $\overset{\sim}{f}: \overline{I} \rightarrow \mathbb{R}$ an extension of $f$, where $\overline{I} := \bigcup_{n \in I} [n, n + 1)$. Then:
• If $\overset{\sim}{f}$ is increasing, then $f(n) \leq \int_n^{n+1} \overset{\sim}{f}(t) dt$
• If $\overset{\sim}{f}$ is decreasing, then $f(n) \geq \int_n^{n+1} \overset{\sim}{f}(t) dt$
Let $\underset{\sim}{f}: \underline{I} \rightarrow \mathbb{R}$ be an extension of $f$, where $\underline{I} := \bigcup_{n \in I} (n - 1, n]$. Then:
• If $\underset{\sim}{f}$ is increasing, then $\int_{n-1}^n \underset{\sim}{f}(t) dt \leq f(n)$
• If $\underset{\sim}{f}$ is decreasing, then $\int_{n-1}^n \underset{\sim}{f}(t) dt \geq f(n)$
Lemma N.6: Order of magnitude of the sum of a fraction logarithms when the denominator changes $$\sum_{n=1}^x \log \left(\frac{x}{n} \right) = O(x)$$. | 2020-04-03 06:26:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 106, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9572797417640686, "perplexity": 411.18823361472107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370510352.43/warc/CC-MAIN-20200403061648-20200403091648-00327.warc.gz"} |
https://electronics.stackexchange.com/questions/334118/how-does-a-120vac-neon-indicator-lamp-respond-to-a-leading-edge-triac-dimmer | # How does a 120VAC neon indicator lamp respond to a leading-edge triac dimmer?
I bought a dirt cheap toaster oven to convert to a solder reflow station, and I noticed in the teardown that it apparently has a neon power indicator. It goes out instantly when it loses power instead of fading like an incandescent, and it measures open-circuit when off.
I'm not opposed to cutting it out of circuit, but I also thought it might be nice to have a rough indicator of what it's actually doing, if it actually works that way.
Here's the circuit:
simulate this circuit – Schematic created using CircuitLab
So:
• Does it "just work" and produce an incandescent-like end result?
• Can it confuse the triac? (probably not because it's in parallel with the resistive heater and my software is going to hold the opto on anyway)
• Is there something else I ought to know about using neons and triac dimmers together?
• Even though it's a heater with relatively slow response, I was planning on actually dimming it instead of just on/off. (The software will operate as "phase-locked PWM" at 120Hz.) Given a ~100k series resistor in the neon, I agree in that I can't see how it would affect anything except itself, but just out of curiosity, what would the neon do with that? Would it have more of a threshold behavior? Just off if it doesn't receive the strike voltage, and just on if it does, with some flickering around the threshold? – AaronD Oct 13 '17 at 15:04
• Okay, so it sounds like my prediction is correct then: on like usual above ~40% power (just a guess, but definitely <50%), off below ~30% (another guess), and flickering in between. Thanks! – AaronD Oct 13 '17 at 15:54 | 2019-12-14 12:46:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4015180766582489, "perplexity": 1939.3770725003383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541157498.50/warc/CC-MAIN-20191214122253-20191214150253-00439.warc.gz"} |
https://machinelearningmastery.com/naive-bayes-classifier-scratch-python/ | # Naive Bayes Classifier From Scratch in Python
Last Updated on October 25, 2019
In this tutorial you are going to learn about the Naive Bayes algorithm including how it works and how to implement it from scratch in Python (without libraries).
We can use probability to make predictions in machine learning. Perhaps the most widely used example is called the Naive Bayes algorithm. Not only is it straightforward to understand, but it also achieves surprisingly good results on a wide range of problems.
After completing this tutorial you will know:
• How to calculate the probabilities required by the Naive Bayes algorithm.
• How to implement the Naive Bayes algorithm from scratch.
• How to apply Naive Bayes to a real-world predictive modeling problem.
Kick-start your project with my new book Machine Learning Algorithms From Scratch, including step-by-step tutorials and the Python source code files for all examples.
Let’s get started.
• Update Dec/2014: Original implementation.
• Update Oct/2019: Rewrote the tutorial and code from the ground-up.
Code a Naive Bayes Classifier From Scratch in Python (with no libraries)
Photo by Matt Buck, some rights reserved
## Overview
This section provides a brief overview of the Naive Bayes algorithm and the Iris flowers dataset that we will use in this tutorial.
### Naive Bayes
Bayes’ Theorem provides a way that we can calculate the probability of a piece of data belonging to a given class, given our prior knowledge. Bayes’ Theorem is stated as:
• P(class|data) = (P(data|class) * P(class)) / P(data)
Where P(class|data) is the probability of class given the provided data.
For an in-depth introduction to Bayes Theorem, see the tutorial:
Naive Bayes is a classification algorithm for binary (two-class) and multiclass classification problems. It is called Naive Bayes or idiot Bayes because the calculations of the probabilities for each class are simplified to make their calculations tractable.
Rather than attempting to calculate the probabilities of each attribute value, they are assumed to be conditionally independent given the class value.
This is a very strong assumption that is most unlikely in real data, i.e. that the attributes do not interact. Nevertheless, the approach performs surprisingly well on data where this assumption does not hold.
For an in-depth introduction to Naive Bayes, see the tutorial:
### Iris Flower Species Dataset
In this tutorial we will use the Iris Flower Species Dataset.
The Iris Flower Dataset involves predicting the flower species given measurements of iris flowers.
It is a multiclass classification problem. The number of observations for each class is balanced. There are 150 observations with 4 input variables and 1 output variable. The variable names are as follows:
• Sepal length in cm.
• Sepal width in cm.
• Petal length in cm.
• Petal width in cm.
• Class
A sample of the first 5 rows is listed below.
The baseline performance on the problem is approximately 33%.
## Naive Bayes Tutorial (in 5 easy steps)
First we will develop each piece of the algorithm in this section, then we will tie all of the elements together into a working implementation applied to a real dataset in the next section.
This Naive Bayes tutorial is broken down into 5 parts:
• Step 1: Separate By Class.
• Step 2: Summarize Dataset.
• Step 3: Summarize Data By Class.
• Step 4: Gaussian Probability Density Function.
• Step 5: Class Probabilities.
These steps will provide the foundation that you need to implement Naive Bayes from scratch and apply it to your own predictive modeling problems.
Note: This tutorial assumes that you are using Python 3. If you need help installing Python, see this tutorial:
Note: if you are using Python 2.7, you must change all calls to the items() function on dictionary objects to iteritems().
### Step 1: Separate By Class
We will need to calculate the probability of data by the class they belong to, the so-called base rate.
This means that we will first need to separate our training data by class. A relatively straightforward operation.
We can create a dictionary object where each key is the class value and then add a list of all the records as the value in the dictionary.
Below is a function named separate_by_class() that implements this approach. It assumes that the last column in each row is the class value.
We can contrive a small dataset to test out this function.
We can plot this dataset and use separate colors for each class.
Scatter Plot of Small Contrived Dataset for Testing the Naive Bayes Algorithm
Putting this all together, we can test our separate_by_class() function on the contrived dataset.
Running the example sorts observations in the dataset by their class value, then prints the class value followed by all identified records.
Next we can start to develop the functions needed to collect statistics.
### Step 2: Summarize Dataset
We need two statistics from a given set of data.
We’ll see how these statistics are used in the calculation of probabilities in a few steps. The two statistics we require from a given dataset are the mean and the standard deviation (average deviation from the mean).
The mean is the average value and can be calculated as:
• mean = sum(x)/n * count(x)
Where x is the list of values or a column we are looking.
Below is a small function named mean() that calculates the mean of a list of numbers.
The sample standard deviation is calculated as the mean difference from the mean value. This can be calculated as:
• standard deviation = sqrt((sum i to N (x_i – mean(x))^2) / N-1)
You can see that we square the difference between the mean and a given value, calculate the average squared difference from the mean, then take the square root to return the units back to their original value.
Below is a small function named standard_deviation() that calculates the standard deviation of a list of numbers. You will notice that it calculates the mean. It might be more efficient to calculate the mean of a list of numbers once and pass it to the standard_deviation() function as a parameter. You can explore this optimization if you’re interested later.
We require the mean and standard deviation statistics to be calculated for each input attribute or each column of our data.
We can do that by gathering all of the values for each column into a list and calculating the mean and standard deviation on that list. Once calculated, we can gather the statistics together into a list or tuple of statistics. Then, repeat this operation for each column in the dataset and return a list of tuples of statistics.
Below is a function named summarize_dataset() that implements this approach. It uses some Python tricks to cut down on the number of lines required.
The first trick is the use of the zip() function that will aggregate elements from each provided argument. We pass in the dataset to the zip() function with the * operator that separates the dataset (that is a list of lists) into separate lists for each row. The zip() function then iterates over each element of each row and returns a column from the dataset as a list of numbers. A clever little trick.
We then calculate the mean, standard deviation and count of rows in each column. A tuple is created from these 3 numbers and a list of these tuples is stored. We then remove the statistics for the class variable as we will not need these statistics.
Let’s test all of these functions on our contrived dataset from above. Below is the complete example.
Running the example prints out the list of tuples of statistics on each of the two input variables.
Interpreting the results, we can see that the mean value of X1 is 5.178333386499999 and the standard deviation of X1 is 2.7665845055177263.
Now we are ready to use these functions on each group of rows in our dataset.
### Step 3: Summarize Data By Class
We require statistics from our training dataset organized by class.
Above, we have developed the separate_by_class() function to separate a dataset into rows by class. And we have developed summarize_dataset() function to calculate summary statistics for each column.
We can put all of this together and summarize the columns in the dataset organized by class values.
Below is a function named summarize_by_class() that implements this operation. The dataset is first split by class, then statistics are calculated on each subset. The results in the form of a list of tuples of statistics are then stored in a dictionary by their class value.
Again, let’s test out all of these behaviors on our contrived dataset.
Running this example calculates the statistics for each input variable and prints them organized by class value. Interpreting the results, we can see that the X1 values for rows for class 0 have a mean value of 2.7420144012.
There is one more piece we need before we start calculating probabilities.
### Step 4: Gaussian Probability Density Function
Calculating the probability or likelihood of observing a given real-value like X1 is difficult.
One way we can do this is to assume that X1 values are drawn from a distribution, such as a bell curve or Gaussian distribution.
A Gaussian distribution can be summarized using only two numbers: the mean and the standard deviation. Therefore, with a little math, we can estimate the probability of a given value. This piece of math is called a Gaussian Probability Distribution Function (or Gaussian PDF) and can be calculated as:
• f(x) = (1 / sqrt(2 * PI) * sigma) * exp(-((x-mean)^2 / (2 * sigma^2)))
Where sigma is the standard deviation for x, mean is the mean for x and PI is the value of pi.
Below is a function that implements this. I tried to split it up to make it more readable.
Let’s test it out to see how it works. Below are some worked examples.
Running it prints the probability of some input values. You can see that when the value is 1 and the mean and standard deviation is 1 our input is the most likely (top of the bell curve) and has the probability of 0.39.
We can see that when we keep the statistics the same and change the x value to 1 standard deviation either side of the mean value (2 and 0 or the same distance either side of the bell curve) the probabilities of those input values are the same at 0.24.
Now that we have all the pieces in place, let’s see how we can calculate the probabilities we need for the Naive Bayes classifier.
### Step 5: Class Probabilities
Now it is time to use the statistics calculated from our training data to calculate probabilities for new data.
Probabilities are calculated separately for each class. This means that we first calculate the probability that a new piece of data belongs to the first class, then calculate probabilities that it belongs to the second class, and so on for all the classes.
The probability that a piece of data belongs to a class is calculated as follows:
• P(class|data) = P(X|class) * P(class)
You may note that this is different from the Bayes Theorem described above.
The division has been removed to simplify the calculation.
This means that the result is no longer strictly a probability of the data belonging to a class. The value is still maximized, meaning that the calculation for the class that results in the largest value is taken as the prediction. This is a common implementation simplification as we are often more interested in the class prediction rather than the probability.
The input variables are treated separately, giving the technique it’s name “naive“. For the above example where we have 2 input variables, the calculation of the probability that a row belongs to the first class 0 can be calculated as:
• P(class=0|X1,X2) = P(X1|class=0) * P(X2|class=0) * P(class=0)
Now you can see why we need to separate the data by class value. The Gaussian Probability Density function in the previous step is how we calculate the probability of a real value like X1 and the statistics we prepared are used in this calculation.
Below is a function named calculate_class_probabilities() that ties all of this together.
It takes a set of prepared summaries and a new row as input arguments.
First the total number of training records is calculated from the counts stored in the summary statistics. This is used in the calculation of the probability of a given class or P(class) as the ratio of rows with a given class of all rows in the training data.
Next, probabilities are calculated for each input value in the row using the Gaussian probability density function and the statistics for that column and of that class. Probabilities are multiplied together as they accumulated.
This process is repeated for each class in the dataset.
Finally a dictionary of probabilities is returned with one entry for each class.
Let’s tie this together with an example on the contrived dataset.
The example below first calculates the summary statistics by class for the training dataset, then uses these statistics to calculate the probability of the first record belonging to each class.
Running the example prints the probabilities calculated for each class.
We can see that the probability of the first row belonging to the 0 class (0.0503) is higher than the probability of it belonging to the 1 class (0.0001). We would therefore correctly conclude that it belongs to the 0 class.
Now that we have seen how to implement the Naive Bayes algorithm, let’s apply it to the Iris flowers dataset.
## Iris Flower Species Case Study
This section applies the Naive Bayes algorithm to the Iris flowers dataset.
The first step is to load the dataset and convert the loaded data to numbers that we can use with the mean and standard deviation calculations. For this we will use the helper function load_csv() to load the file, str_column_to_float() to convert string numbers to floats and str_column_to_int() to convert the class column to integer values.
We will evaluate the algorithm using k-fold cross-validation with 5 folds. This means that 150/5=30 records will be in each fold. We will use the helper functions evaluate_algorithm() to evaluate the algorithm with cross-validation and accuracy_metric() to calculate the accuracy of predictions.
A new function named predict() was developed to manage the calculation of the probabilities of a new row belonging to each class and selecting the class with the largest probability value.
Another new function named naive_bayes() was developed to manage the application of the Naive Bayes algorithm, first learning the statistics from a training dataset and using them to make predictions for a test dataset.
If you would like more help with the data loading functions used below, see the tutorial:
If you would like more help with the way the model is evaluated using cross validation, see the tutorial:
The complete example is listed below.
Running the example prints the mean classification accuracy scores on each cross-validation fold as well as the mean accuracy score.
We can see that the mean accuracy of about 95% is dramatically better than the baseline accuracy of 33%.
We can fit the model on the entire dataset and then use the model to make predictions for new observations (rows of data).
For example, the model is just a set of probabilities calculated via the summarize_by_class() function.
Once calculated, we can use them in a call to the predict() function with a row representing our new observation to predict the class label.
We also might like to know the class label (string) for a prediction. We can update the str_column_to_int() function to print the mapping of string class names to integers so we can interpret the prediction by the model.
Tying this together, a complete example of fitting the Naive Bayes model on the entire dataset and making a single prediction for a new observation is listed below.
Running the data first summarizes the mapping of class labels to integers and then fits the model on the entire dataset.
Then a new observation is defined (in this case I took a row from the dataset), and a predicted label is calculated. In this case our observation is predicted as belonging to class 1 which we know is “Iris-versicolor“.
## Extensions
This section lists extensions to the tutorial that you may wish to explore.
• Log Probabilities: The conditional probabilities for each class given an attribute value are small. When they are multiplied together they result in very small values, which can lead to floating point underflow (numbers too small to represent in Python). A common fix for this is to add the log of the probabilities together. Research and implement this improvement.
• Nominal Attributes: Update the implementation to support nominal attributes. This is much similar and the summary information you can collect for each attribute is the ratio of category values for each class. Dive into the references for more information.
• Different Density Function (bernoulli or multinomial): We have looked at Gaussian Naive Bayes, but you can also look at other distributions. Implement a different distribution such as multinomial, bernoulli or kernel naive bayes that make different assumptions about the distribution of attribute values and/or their relationship with the class value.
If you try any of these extensions, let me know in the comments below.
## Summary
In this tutorial you discovered how to implement the Naive Bayes algorithm from scratch in Python.
Specifically, you learned:
• How to calculate the probabilities required by the Naive interpretation of Bayes Theorem.
• How to use probabilities to make predictions on new data.
• How to apply Naive Bayes to a real-world predictive modeling problem.
### Next Step
Take action!
1. Follow the tutorial and implement Naive Bayes from scratch.
2. Adapt the example to another dataset.
3. Follow the extensions and improve upon the implementation.
## Discover How to Code Algorithms From Scratch!
#### No Libraries, Just Python Code.
...with step-by-step tutorials on real-world datasets
Discover how in my new Ebook:
Machine Learning Algorithms From Scratch
It covers 18 tutorials with all the code for 12 top algorithms, like:
Linear Regression, k-Nearest Neighbors, Stochastic Gradient Descent and much more...
### 324 Responses to Naive Bayes Classifier From Scratch in Python
1. david jensen December 12, 2014 at 3:28 am #
Statistical methods should be developed from scratch because of misunderstandings. Thank you.
• Mamta March 14, 2019 at 7:11 pm #
Jason … Thank you so much… you are too good.
2. Anurag December 14, 2014 at 1:11 pm #
This is a wonderful article. Your blog is one of those blogs that I visit everyday. Thanks for sharing this stuff. I had a question about the programming language that should be used for building these algorithms from scratch. I know that Python is widely used because it’s easy to write code by importing useful libraries that are already available. Nevertheless, I am a C++ guy. Although I am a beginner in practical ML, I have tried to write efficient codes before I started learning and implementing ML. Now I am aware of the complexities involved in coding if you’re using C++: more coding is to be done than what is required in Python. Considering that, what language is your preference and under what situations? I know that it’s lame to ask about preferences of programming language as it is essentially a personal choice. But still I’d like you to share your take on this. Also try to share the trade-offs while choosing these programming languages.
Thank you.
• Jason Brownlee December 15, 2014 at 7:53 am #
Thanks Anurag
• SHRUTI April 17, 2018 at 12:33 am #
I am getting error while I try to implement this in my own dataset.
for classValue, classSummaries in summaries.iteritems():
AttributeError: ‘list’ object has no attribute ‘iteritems’
When I try to run it,with your csv file,it says
ataset = list(lines)
Error: iterator should return strings, not bytes (did you open the file in text mode?)
What to do?
• Jason Brownlee April 17, 2018 at 6:02 am #
The code was written for Python 2.7. Perhaps you’re using Python 3?
• fatih_conqueror April 29, 2018 at 6:25 am #
use items instead of iteritems. when you write erorro message to google, you can find lots of resolve like stackoverflow.
3. Alcides Schulz January 15, 2015 at 12:32 am #
Hi Jason, found your website and read it in one day. Thank you, it really helped me to understand ML and what to do.
I did the 2 examples here and I think I will take a look at scikit-learn now.
I have a personal project that I want to use ML, and I’ll keep you posted on the progress.
One small note on this post, is on the “1. Handle data” you refer to iris.data from previous post.
Thank you so much, example is really good to show how to do it. Please keep it coming.
• Jason Brownlee January 15, 2015 at 7:43 am #
Thanks for the kind words Alcides.
Fixed the reference to the iris dataset.
• vivek December 5, 2017 at 9:45 pm #
hi jason im not able to run the code and get the Output it says “No such file or directory: ‘pima-indians-diabetes.data.csv'”
• Jason Brownlee December 6, 2017 at 9:02 am #
Ensure you are running the code from the command line and that the data file and code file are in the same directory.
• priya September 1, 2018 at 2:28 pm #
is thetre any data file available
4. toolate January 22, 2015 at 2:16 am #
Hi Jason, still one more note on your post, is on the “1. Handle data” the flower measures that you refer to iris.data.
5. Tamilselvan February 4, 2015 at 11:37 pm #
Great Article. Learned a Lot. Thanks. Thanks.
6. Abhinav kumar February 23, 2015 at 8:13 pm #
thank u
7. Roy March 7, 2015 at 2:53 pm #
Thanks for your nice article. I really appreciate the step by step instructions.
8. malini March 17, 2015 at 7:19 pm #
hello sir, plz tell me how to compare the data set using naive Bayes algorithm.
9. Isha March 21, 2015 at 5:40 pm #
Why does the accuracy change every time you run this code?
when i tried running this code every time it gave me different accuracy percentage in the range from 70-78%
Why is it so?
Why is it not giving a constant accuracy percent?
• Harry April 9, 2015 at 8:37 am #
As Splitting of dataset into testdata and traindata is done using a random function accuracy varies.
• Nitin Ramesh October 27, 2017 at 4:00 pm #
The algorithm splits the dataset into the same top 67% and bottom 33% every single time.
The test-set is same on every single run.
So even though we use a random function on the top 67%(training set) to randomly index them.
A calculation like
((4+2)/6) and ((2+4)/6) will yield same result every-time.How is this yielding different result?
Is this something to do with the order of calculation in the Gaussian probability density function?
• Abdul Salam December 22, 2017 at 10:43 pm #
well.. the math calculations would come under to the view if you had the same sample taken again and again..
but the point here is.. we are taking the random data itself.. like.. the variables will not be same in every row right.. so if u change the rows randomly… at the end the accuracy will change… taken that into consideration.. you can take average of accuracy for each run to get what is the “UN-fluctuated accuracy” just in case if you want…
hope it helps..
10. Sheepsy90 March 25, 2015 at 8:12 pm #
Hey nice article – one question – why do you use the N-1 in the STD Deviation Process?
• Jason Brownlee March 26, 2015 at 5:33 am #
Thanks!
N-1 is the sample (corrected or unbiased) standard deviation. See this wiki page.
• depika February 24, 2020 at 6:33 pm #
because i know only this much ….
11. Vaishali April 8, 2015 at 6:01 pm #
Hey! Thanks a ton! This was very useful.
It would be great if you give an idea on how other metrics like precision and recall can be calculated.
Thanks!
12. Ashwin Perti April 24, 2015 at 5:28 pm #
Sir
When I am running the same code in IDLE (python 2.7) – the code is working fine, but when I run the same code in eclipse. the error coming is:
1) warning – unused variable dataset
2) undefined variable dataset in for loop
Why this difference.
13. Melvin Tjon Akon May 21, 2015 at 1:46 am #
Great post, Jason.
For a MBA/LLM, it makes naive bayes very easy to understand and to implement in legal coding. Looking forward to read more. Best, Melvin
14. Igor Balabine June 10, 2015 at 11:44 am #
Jason,
Great example. Thanks! One nit: “calculateProbability” is not a good name for a function which actually calculates Gaussian probability density – pdf value may be greater than 1.
Cheers,
-Igor
• - Ruud - November 26, 2016 at 2:23 am #
Good point, thanks!
15. Alex Ubot July 2, 2015 at 10:06 pm #
Hi Jason,
Fantastic post. I really learnt a lot. However I do have a question? Why don’ t you use the P(y) value in your calculateClassProbabilities() ?
If I understood the model correctly, everything is based on the bayes theorem :
P(y|x1….xn) = P(x1…..xn|y) * P(y) / P(x1……xn)
P(x1……xn) will be a constant so we can get rid of it.
Your post explain very well how to calculate P(x1……xn|y) (assumption made that x1…..xn are all independent we then have
P(x1……xn|y) = P(x1|y) * …. P(xn|y) )
How about p(y) ? I assume that we should calculate the frequency of the observation y in the training set and then multiply it to probabilities[classValue] so that we have :
P(y|x1…..xn) = frequency(classValue) * probabilities[classValue]
Otherwise let’ s assume that in a training set of 500 lines, we have two class 0 and 1 but observed 100 times 0 et 400 times 1. If we do not compute the frequency, then the probability may be biased, right ? Did I misunderstand something ? Hopefully my post is clear. I really hope that you will reply because I am a bit confused.
Thanks
Alex
• Babu February 28, 2016 at 7:43 am #
I have the same question – why is multiplying by p(y) is omitted?
• Babu March 10, 2016 at 2:09 pm #
No Answer yet – no one on internet has answer to this.
Just don’t want to accept answers without understanding it.
• frong April 3, 2016 at 3:15 pm #
yeah,I have the same question too, maybe the P(y) is nessary ,but why the accuracy is not so low when P(y) is missing? is it proving that bayes model is powerful?
• gd April 7, 2016 at 2:27 am #
hi,
I believe this is because P(y) = 1 as classes are already segregated before calculating P(x1…xn|Y).
Can experts comment on this please?
• Babu May 23, 2016 at 7:32 am #
There is huge bug in this implementation;
First of all the implementation using GaussianNB gives totally a different answer.
Why is no one is replying even after 2 months of this.
My concern is, there are so many more bad bayesians in a wrong concept.
At least the parameters are correct – something wrong with calculating probs.
def SplitXy(Xy):
Xy10=Xy[0:8]
Xy10 = Xy;
#print Xy10
#print “========”
zXy10=list(zip(*Xy10))
y= zXy10[-1]
del zXy10[-1]
z1=zip(*zXy10)
X=[list(t) for t in z1]
return X,y
from sklearn.naive_bayes import GaussianNB
X,y = SplitXy(trainingSet)
Xt,yt = SplitXy(testSet)
model = GaussianNB()
model.fit(X, y)
### Compare the models built by Python
print (“Class: 0”)
for i,j in enumerate(model.theta_[0]):
print (“({:8.2f} {:9.2f} {:7.2f} )”.format(j, model.sigma_[0][i], sqrt(model.sigma_[0][i])) , end=””)
print (“==> “, summaries[0][i])
print (“Class: 1”)
for i,j in enumerate(model.theta_[1]):
print (“({:8.2f} {:9.2f} {:7.2f} )”.format(j, model.sigma_[1][i], sqrt(model.sigma_[1][i])) , end=””)
print (“==> “, summaries[1][i])
”’
Class: 0
( 3.18 9.06 3.01 )==> (3.1766467065868262, 3.0147673799630748)
( 109.12 699.16 26.44 )==> (109.11976047904191, 26.481293163857107)
( 68.71 286.46 16.93 )==> (68.712574850299404, 16.950414098038465)
( 19.74 228.74 15.12 )==> (19.742514970059879, 15.146913806453629)
( 68.64 10763.69 103.75 )==> (68.640718562874255, 103.90387227315443)
( 30.71 58.05 7.62 )==> (30.710778443113771, 7.630215185470916)
( 0.42 0.09 0.29 )==> (0.42285928143712581, 0.29409299864249266)
( 30.66 118.36 10.88 )==> (30.658682634730539, 10.895778423248444)
Class: 1
( 4.76 12.44 3.53 )==> (4.7611111111111111, 3.5365037952376928)
( 139.17 1064.54 32.63 )==> (139.17222222222222, 32.71833930500929)
( 69.27 525.24 22.92 )==> (69.272222222222226, 22.98209907114023)
( 22.64 309.59 17.60 )==> (22.638888888888889, 17.644143437447358)
( 101.13 20409.91 142.86 )==> (101.12777777777778, 143.2617649699204)
( 34.99 57.18 7.56 )==> (34.99388888888889, 7.5825893182809425)
( 0.54 0.14 0.37 )==> (0.53544444444444439, 0.3702077209795522)
( 36.73 112.86 10.62 )==> (36.727777777777774, 10.653417924304598)
”’
• Shounak Roy March 3, 2020 at 4:37 pm #
I think he does this with the following line.
probabilities[class_value] = summaries[class_value][0][2]/float(total_rows)
• EL YAMANI May 22, 2016 at 8:57 am #
Hello,
Thanks for this article , it is very helpful . I just have a remark about the probabilty that you are calculating which is P(x|Ck) and then you make predictions, the result will be biased since you don’t multiply by P(Ck) , P(x) can be omitted since it’s only a normalisation constant.
• Parker Difuntorum September 29, 2019 at 1:12 am #
LOL it sounds like you already have the answer you’re asking. Thank you for the rhetoric. Maybe share your implementation code for everyone for further clarification? Thanks in advance
16. Anand July 20, 2015 at 9:12 pm #
Thanks a lot for this tutorial, Jason.
I have a quick question if you can help.
In the separateByClass() definition, I could not understand how vector[-1] is a right usage when vector is an int type object.
If I try the same commands one by one outside the function, the line of code with vector[-1] obviously throws a TypeError: 'int' object has no attribute '__getitem__'.
Then how is it working inside the function?
I am sorry for my ignorance. I am new to python. Thank you.
17. Sarah August 26, 2015 at 5:50 pm #
Hello Jason! I just wanted to leave a message to say thank you for the website. I am preparing for a job in this field and it has helped me so much. Keep up the amazing work!! 🙂
• Jason Brownlee August 26, 2015 at 6:56 pm #
You’re welcome! Thanks for leaving such a kind comment, you really made my day 🙂
18. Jaime Lopez September 7, 2015 at 8:52 am #
Hi Jason,
Very easy to follow your classifier. I try it and works well on your data, but is important to note that it works just on numerical databases, so maybe one have to transform your data from categorical to numerical format.
Another thing, when I transformed one database, sometimes the algorithm find division by zero error, although I avoided to use that number on features and classes.
Any suggestion Jason?
Thanks, Jaime
• syed belgam April 11, 2016 at 2:05 pm #
Thanks
19. eduardo September 28, 2015 at 1:32 pm #
It is by far the best material I’ve found , please continue helping the community!
20. Thibroll September 29, 2015 at 9:11 pm #
Hello.
This is all well explained, and depicts well the steps of machine learning. But the way you calculate your P(y|X) here is false, and may lead to unwanted error.
Here, in theory, using the Bayes law, we know that : P(y|X) = P(y).P(X|y)/P(X). As we want to maximize P(y|X) with a given X, we can ignore P(X) and pick the result for the maximized value of P(y).P(X|y)
2 points remain inconsistent :
– First, you pick a gaussian distribution to estimate P(X|y). But here, you calculateProbability calculates the DENSITY of the function to the specific points X, y, with associated mean and deviation, and not the actual probability.
– The second point is that you don’t take into consideration the calculation of P(y) to estimate P(y|X). Your model (with the correct probability calculation) may work only if all samples have same amount in every value of y (considering y is discret), or if you are lucky enough.
Anyway, despite those mathematical issue, this is a good work, and a god introduction to machine learning.
21. mondet October 6, 2015 at 10:08 am #
Thanks Jason for all this great material. One thing that i adore from you is the intellectual honesty, the spirit of collaboration and the parsimony.
In my opinion you are one of the best didactics exponents in the ML.
Thanks to Thibroll too. But i would like to have a real example of the problem in R, python or any other language.
Regards,
Emmanuel:.
22. Erika October 15, 2015 at 10:03 am #
Hi Jason,
I have trying to get started with machine learning and your article has given me the much needed first push towards that. Thank you for your efforts! 🙂
23. Swagath November 9, 2015 at 5:35 pm #
24. Sarah November 16, 2015 at 11:54 pm #
I am working with this code – tweaking it here or there – have found it very helpful as I implement a NB from scratch. I am trying to take the next step and add in categorical data. Any suggestions on where I can head to get ideas for how to add this? Or any particular functions/methods in Python you can recommend? I’ve brought in all the attributes and split them into two datasets for continuous vs. categorical so that I can work on them separately before bringing their probabilities back together. I’ve got the categorical in the same dictionary where the key is the class and the values are lists of attributes for each instance. I’m not sure how to go through the values to count frequencies and then how to store this back up so that I have the attribute values along with their frequencies/probabilities. A dictionary within a dictionary? Should I be going in another direction and not using a similar format?
25. Emmanuel Nuakoh November 19, 2015 at 6:36 am #
Thank you Jason, this tutorial is helping me with my implementation of NB algorithm for my PhD Dissertation. Very elaborate.
26. Anna January 14, 2016 at 2:32 am #
Hi! thank you! Have you tried to do the same for the textual datasets, for example 20Newsgroups http://qwone.com/~jason/20Newsgroups/ ? Would appreciate some hints or ideas )
27. Randy January 16, 2016 at 4:15 pm #
Great article, but as others pointed out there are some mathematical mistakes like using the probability density function for single value probabilities.
28. Meghna February 7, 2016 at 7:45 pm #
Thank you for this amazing article!! I implemented the same for wine and MNIST data set and these tutorials helped me so much!! 🙂
29. David February 7, 2016 at 11:17 pm #
I got an error with the first print statement, because your parenthesis are closing the call to print (which returns None) before you’re calling format, so instead of
print(‘Split {0} rows into train with {1} and test with {2}’).format(len(dataset), train, test)
it should be
print(‘Split {0} rows into train with {1} and test with {2}’.format(len(dataset), train, test))
Anyway, thanks for this tutorial, it was really useful, cheers!
30. Kumar Ramanathan February 12, 2016 at 12:20 pm #
Sincere gratitude for this most excellent site. Yes, I never learn until I write code for the algorithm. It is such an important exercise, to get concepts embedded into one’s brain. Brilliant effort, truly !
31. Syed February 18, 2016 at 8:15 am #
Just to test the algorithm, i change the class of few of the data to something else i.e 3 or 4, (last digit in a line) and i get divide by zero error while calculating the variance. I am not sure why. does it mean that this particular program works only for 2 classess? cant see anything which restricts it to that.
32. Takuma Udagawa March 20, 2016 at 1:19 pm #
Hi, I’m a student in Japan.
It seems to me that you are calculating p(X1|Ck)*p(X2|Ck)*…*p(Xm|Ck) and choosing Ck such that this value would be maximum.
However, when I looked in the Wikipedia, you are supposed to calculate p(X1|Ck)*p(X2|Ck)*…*p(Xm|Ck)*p(Ck).
I don’t understand when you calculated p(Ck).
Would you tell me about it?
• jessie November 24, 2016 at 1:21 am #
Had the same thought, where’s the prior calculated?
• Jorge January 9, 2018 at 12:00 pm #
Its seems it is a bug of this implementation.
I looked for the implementation of scikitlearn and they seem to have the right formula,
• Jason Brownlee January 9, 2018 at 3:19 pm #
You can add the prior easily. I left it out because it was a constant for this dataset.
33. Babu May 23, 2016 at 7:36 am #
This is the same question as Alex Ubot above.
Calculating the parameters are correct.
but prediction implementation is incorrect.
Unfortunately this article comes up high and everyone is learning incorrect way of doing things I think
34. Swapnil June 10, 2016 at 1:21 am #
Really nice tutorial. Can you post a detailed implementation of RandomForest as well ? It will be very helpful for us if you do so.
Thanks!!
35. sourena maroofi July 22, 2016 at 12:24 am #
thanks Jason…very nice tutorial.
36. Gary July 27, 2016 at 5:44 pm #
Hi,
I was interested in this Naive Bayes example and downloaded the .csv data and the code to process it.
However, when I try to run it in Pycharm IDE using Python 3.5 I get no end of run-time errors.
Has anyone else run the code successfully? And if so, what IDE/environment did they use?
Thanks
Gary
• Sudarshan August 10, 2016 at 5:05 pm #
Hi Gary,
You might want to run it using Python 2.7.
37. Sudarshan August 10, 2016 at 5:02 pm #
Hi,
Thanks for the excellent tutorial. I’ve attempted to implement the same in Go.
Here is a link for anyone that’s interested interested.
https://github.com/sudsred/gBay
38. Atlas August 13, 2016 at 6:40 am #
This is AWESOME!!! Thank you Jason.
Where can I find more of this?
39. Alex August 20, 2016 at 4:34 pm #
That can be implemented in any language because there’re no special libraries involved.
40. SAFA August 28, 2016 at 1:39 am #
Hello,
there is some errors in “def splitDataset”
in machine learning algorithm , split a dataset into trainning and testing must be done without repetition (duplication) , so the index = random.randrange(len(copy)) generate duplicate data
for example ” index = 0 192 1 2 0 14 34 56 1 ………
the spliting method must be done without duplication of data.
41. Krati Jain September 12, 2016 at 2:35 pm #
This is a highly informative and detailed explained article. Although I think that this is suitable for Python 2.x versions for 3.x, we don’t have ‘iteritems’ function in a dict object, we currently have ‘items’ in dict object. Secondly, format function is called on lot of print functions, which should have been on string in the print funciton but it has been somehow called on print function, which throws an error, can you please look into it.
42. upen September 16, 2016 at 5:01 pm #
hey Jason
thanks for such a great tutorial im newbie to the concept and want to try naive bayes approach on movie-review on the review of a single movie that i have collected in a text file
can you please provide some hint on the topic how to load my file and perform positve or negative review on it
43. Abhis September 20, 2016 at 3:00 am #
Would you please help me how i can implement naive bayes to predict student performance using their marks and feedback
• Jason Brownlee September 20, 2016 at 8:35 am #
44. Vinay October 13, 2016 at 2:18 pm #
Hey Jason,
Thanks a lot for such a nice article, helped a lot in understanding the implementations,
i have a problem while running the script.
I get the below error
if (vector[-1] not in separated):
IndexError: list index out of range
• Jason Brownlee October 14, 2016 at 8:58 am #
Thanks Vinay.
Check that the data was loaded successfully. Perhaps there are empty lines or columns in your loaded data?
45. Viji October 20, 2016 at 8:57 pm #
Hi Jason,
Thank you for the wonderful article. U have used the ‘?'(testSet = [[1.1, ‘?’], [19.1, ‘?’]]) in the test set. can u please tell me what it specifies
46. jeni November 15, 2016 at 9:11 pm #
please send me a code in text classification using naive bayes classifier in python . the data set classifies +ve,-ve or neutral
• Jason Brownlee November 16, 2016 at 9:28 am #
Hi jeni, sorry I don’t have such an example prepared.
47. MLNewbie November 28, 2016 at 1:21 pm #
I am a newbie to ML and I found your website today. It is one of the greatest ML resources available on the Internet. I bookmarked it and thanks for everything Jason and I will visit your website everyday going forward.
• Jason Brownlee November 29, 2016 at 8:47 am #
Thanks, I’m glad you like it.
• Anne January 7, 2017 at 6:58 pm #
def predict(summaries, inputVector):
probabilities = calculateClassProbabilities(summaries, inputVector)
bestLabel, bestProb = None, -1
for classValue, probability in probabilities.iteritems():
if bestLabel is None or probability > bestProb:
bestProb = probability
bestLabel = classValue
return bestLabel
why is the prediction different for these
summaries = {‘A’ : [(1, 0.5)], ‘B’: [(20, 5.0)]} –predicts A
summaries = {‘0’ : [(1, 0.5)], ‘1’: [(20, 5.0)]} — predicts 0
summaries = {0 : [(1, 0.5)], 1: [(20, 5.0)]} — predicts 1
48. ML704 January 18, 2017 at 6:16 pm #
Hi, can someone please explain the code snippet below:
def separateByClass(dataset):
separated = {}
for i in range(len(dataset)):
vector = dataset[i]
if (vector[-1] not in separated):
separated[vector[-1]] = []
separated[vector[-1]].append(vector)
return separated
What do curly brackets mean in separated = {}?
vector[-1] ?
Massive thanks!
49. S February 27, 2017 at 8:47 am #
I am trying to create an Android app which works as follows:
1) On opening the App, the user types a data in a textbox & clicks on search
2) The app then searches about the entered data via internet and returns some answer (Using machine learning algorithms)
I have a dataset of around 17000 things.
Can you suggest the approach? Python/Java/etc…? Which technology to use for implementing machine learning algorithm & for connecting to dataset? How to include the dataset so that android app size is not increased?
Basically, I am trying to implement an app described in a research paper.
I can implement machine learning(ML) algorithms in Python on my laptop for simple ML examples. But, I want to develop an Android app in which the data entered by user is checked from a web site and then from a “data set (using ML)” and result is displayed in app based on both the comparisons. The problem is that the data is of 40 MB & how to reflect the ML results from laptop to android app?? By the way, the dataset is also available online. Shall I need a server? Or, can I use localhost using WAMP server?
Which python server should I use? I would also need to check the data entered from a live website. Can I connect my Android app to live server and localhost simultaneously? Is such a scenario obvious for my app? What do you suggest? Is Anaconda software sufficient?
• Jason Brownlee February 28, 2017 at 8:08 am #
Sorry I cannot make any good suggestions, I think you need to talk to some app engineers, not ML people.
50. Roy March 1, 2017 at 4:20 am #
Hello Jason,
IOError: [Errno 2] No such file or directory: ‘pima-indians-diabetes.data.csv’
I have the csv file downloaded and its in the same folder as my code.
• Jason Brownlee March 1, 2017 at 8:44 am #
Hi Roy,
Confirm that the file name in your directory exactly matches the expectation of the script.
Confirm you are running from the command line and both the script and the data file are in the same directory and you are running from that directory.
If using Python 3, consider changing the ‘rb’ to ‘rt’ (text instead of binary).
Does that help?
51. Jordan March 1, 2017 at 8:40 pm #
Hi Jason. Great Tutorial! But, why did you leave the P(Y) in calculateClassProbability() ? The prediction produces in my machine is fine… But some people up there have mentioned it too that what you actually calculate is probability density function. And you don’t even answer their question.
52. Ali March 7, 2017 at 6:21 am #
Hi Jason,
Split 769 rows into train=515 and test=254 rows
Traceback (most recent call last):
File “indian.py”, line 100, in
main()
File “indian.py”, line 95, in main
summaries = summarizeByClass(trainingSet)
File “indian.py”, line 45, in summarizeByClass
separated = separateByClass(dataset)
File “indian.py”, line 26, in separateByClass
if (vector[-1] not in separated):
IndexError: list index out of range
53. shankru Guggari March 13, 2017 at 9:52 pm #
Class wise selection of training and testing data
For Example
In Iris Dataset : Species Column we have classes called Setosa, versicolor and virginica
I want to select 80% of data from each class values.
Shankru
[email protected]
• Jason Brownlee March 14, 2017 at 8:17 am #
You can take a random sample or use a stratified sample to ensure the same mixture of classes in train and test sets.
54. Namrata March 19, 2017 at 5:33 pm #
error in Naive Bayes code
IndexError:list index out of range
55. velu March 28, 2017 at 4:25 pm #
hai guys
i am velmurugan iam studying annauniversity tindivanam
i have a code for summarization for english description in java
56. Kamal March 29, 2017 at 10:24 am #
Hi Jason,
This example works. really good for Naive Bayes, but I was wondering what the approach would be like for joint probability distributions. Given a dataset, how to construct a bayesian network in Python or R?
57. Joelon johnson March 30, 2017 at 7:20 pm #
Hello Jason,
Joelon here. I am new to python and machine learning. I keep getting a run-time error after compiling the above script. Is it possible I send you screenshots of the error so we walk through step by step?
• Jason Brownlee March 31, 2017 at 5:53 am #
What problem are you getting exactly?
58. Bill April 2, 2017 at 10:52 pm #
Hello Jason !
Thank you for this tutorial.
I have a question: what if our x to predict is a vector? How can we calculate the probability to be in a class (in the function calculateProbability for example) ?
Thank you
• Jason Brownlee April 4, 2017 at 9:07 am #
Not sure I understand Bill. Perhaps you can give an example?
59. Asmita Singh April 9, 2017 at 12:33 pm #
HI Jason,
Thanks for such a wonderful article. Your efforts are priceless. One quick question about handling cases with single value probabilities. Which part of code requires any smoothening.
• Jason Brownlee April 9, 2017 at 3:01 pm #
Sorry, I’m not sure I understand your question. Perhaps you can restate it?
60. Mohammed Ehteramuddin April 11, 2017 at 12:51 am #
Hello Jason,
First of all I thank you very much for such a nice tutorial.
I have a quick question for you if you could find some of your precious time to answer it.
Question: Why is that summarizeByClass(dataset) works only with a particular pattern of the dataset like the dataset in your example and does not work with the different pattern like my example dataset = [[2,3,1], [9,7,3], [12,9,0],[29,0,0]]
I guess it has to work for all the different possible datasets.
Thanks,
Mohammed Ehteramuddin
• Jason Brownlee April 11, 2017 at 9:33 am #
It should work with any dataset as long as the last variable is the class variable.
• Mohammed Ehteramuddin April 12, 2017 at 7:49 pm #
Oh! you mean the last variable of the dataset (input) can not be other that the two values that we desire to classify the data into, in our case it should either be 0 or 1.
Thanks you very much.
61. Salihins Gund April 19, 2017 at 6:36 am #
What is correct variable refers in getAccuracy function? Can you elaborate it more?
• Salihins Gund April 19, 2017 at 7:04 am #
Sorry that was wrong question.
Edit:
Ideally the Gaussian Naive Bayes has lambda (threshold) value to set boundary. I was wondering which part of this code include the threshold?
62. way win April 21, 2017 at 3:00 am #
Can you provide an extension to the data. I can not downloand it for some reason. Thank you!
63. Mian Saeed Akbar May 9, 2017 at 10:55 am #
Hi Jason…!
Thank You so much for coding the problem in a very clear way. As I am new in machine learning and also I have not used Python before therefore I feel some difficulties in modifying it. I want to include the serial number of in data set and then show which testing example (e.g. example # 110) got how much probability form class 0 and for 1.
64. Blazej May 27, 2017 at 9:33 pm #
Hi Jason,
I encountered a problem at the beginning. After loading the file and running this test:
filename = ‘pima-indians-diabetes.data.csv’
print(‘Loaded data file {0} with {1} rows’).format(filename, len(dataset))
I get the Error: “iterator should return strings, not bytes (did you open the file in text mode?)”
btw i am using python 3.6
thank you for the help
• Jason Brownlee June 2, 2017 at 12:02 pm #
Change the loading of the file from binary to text (e.g. ‘rt’)
65. Marcus June 10, 2017 at 9:13 am #
This code is not working with Python 3.16 :S
66. Marcus June 10, 2017 at 9:14 am #
3.6
• Jason Brownlee June 11, 2017 at 8:19 am #
Thanks Marcus.
• Marcus June 13, 2017 at 10:28 am #
Is there a way you can try to fix and make it work with 3.6 maybe Jason?
• Marcus June 15, 2017 at 2:23 am #
Jason can you explain what this code does?
• Darmawan Utomo December 13, 2017 at 9:16 pm #
I run the code in python 3.6.3 and here are the corrections:
—————————————————————————-
1. change the “rb” to “rt”
2. print(‘Split {0} rows into train={1} and test={2} rows’.format(len(dataset), len(trainingSet), len(testSet)))
3. change for each .iteritems() to .items()
4. print(‘Accuracy: {0}%’.format(accuracy))
Here are some of the results:
Split 768 rows into train=514 and test=254 rows
Accuracy: 71.25984251968504%
Split 768 rows into train=514 and test=254 rows
Accuracy: 76.77165354330708%
67. Guy Person June 14, 2017 at 4:34 am #
The code in this tutorial is riddled with error after error… The string output formatting isn’t even done right for gods sakes!
• Person Guy June 15, 2017 at 7:56 am #
This was written in 2014, the python documentation has changed drastically as the versions have been updated
68. J Wang June 15, 2017 at 8:02 am #
Hey Jason, I really enjoyed the read as it was very thorough and even for a beginner programmer like myself understandable overall. However, like Marcus asked, is it at all possible for you to rewrite or point out how to edit the parts that have new syntax in python 3?
Also, this version utilized the Gaussian probability density function, how would we use the other ones, would the math be different or the code?
69. Giselle July 12, 2017 at 10:53 am #
Hi Jason, thank you for this post it’s super informative, I just started college and this is really easy to follow! I was wondering how this could be done with a mixture of binary and categorical data. For example, if I wanted to create a model to determine whether or not a car would break down and one category had a list of names of 10 car parts while another category simply asked if the car overheated (yes or no). Thanks again!
70. Thomas July 14, 2017 at 7:53 am #
Hi! Thanks for this helpful article. I had a quick question: in your calculateProbability() function, should the denominator be multiplied by the variance instead of the standard deviation?
i.e. should
return (1 / (math.sqrt(2 * math.pi) * stdev)) * exponent
return (1 / (math.sqrt(2 * math.pi) * math.pow(stdev, 2))) * exponent
71. Rezaul Karim August 6, 2017 at 5:20 pm #
Hi Jason. I visited ur site several times. This is really helpful. I look for SVM implementation in python from scratch like the way you implemented Naive Bayes here. Can u provide me SVM code??
72. Charmaine Ponay August 17, 2017 at 10:06 pm #
Thank you so much Mr. Brownlee. I would like to ask your permission if I can show my students your implementations? They are very easy to understand and follow. Thank you very much again 🙂
• Jason Brownlee August 18, 2017 at 6:20 am #
No problem as long as you credit the source and link back to my website.
• Charmaine Ponay August 22, 2017 at 3:19 pm #
thank you very much 🙂
73. THAMILSELVAM B September 9, 2017 at 2:30 am #
Very good basic tutorial. Thank you.
74. Chandar September 25, 2017 at 10:48 pm #
Hi Jason,
I would have exported my model using joblib and as I have converted the categorical data to numeric in the training data-set to develop the model and now I have no clue on how to convert the a new categorical data to predict using the trained model.
• Jason Brownlee September 26, 2017 at 5:38 am #
New data must be prepared using the same methods as data used to train the model.
Sometimes this might mean keeping objects/coefficients used to scale or encode input data along with the model.
75. Narendra December 4, 2017 at 3:46 am #
where/what is learning in this code. I think it is just naive bayes classification. Please specify the learning.
• Jason Brownlee December 4, 2017 at 7:59 am #
Good question, the probabilities are “learned” from data.
• ravi September 16, 2019 at 5:42 pm #
such a beutiful artical
• Jason Brownlee September 17, 2019 at 6:24 am #
76. Christian December 18, 2017 at 1:01 am #
Great example. You are doing a great work thanks. Please am working on this example but i am confused on how to determine attribute relevance analysis. That is how do i determine which attribute is (will) be relevant for my model.
• Jason Brownlee December 18, 2017 at 5:26 am #
Perhaps you could look at the independent probabilities for each variable?
• Christian December 18, 2017 at 3:00 pm #
thanks very much. Grateful
77. SUIMEE December 25, 2017 at 10:05 pm #
Can you please send me the code for pedestrian detection using HOG and NAIVE BAYES?
78. Jasper January 25, 2018 at 1:14 pm #
what does this block of code do
while len(trainSet) < trainSize:
index = random.randrange(len(copy))
trainSet.append(copy.pop(index))
return [trainSet, copy]
• Jason Brownlee January 26, 2018 at 5:37 am #
Selects random rows from the dataset copy and adds them to the training set.
79. Scott January 27, 2018 at 9:54 am #
Jason:
I am very happy with this implementation! I used it as inspiration for an R counterpart. I am unclear about one thing. I understand the training set mean and sd are parameters used to evaluate the test set, but I don’t know why that works lol.
How does evaluating the GPDF with the training set data and the test set instance attributes “train” the model? I may be confusing myself by interpreting “train” too literally.
I think of train as repetitiously doing something multiple times with improvement from each iteration, and these iterations ultimately produce some catalyst for higher predictions. It seems that there is only one iteration of defining the training set’s mean and sd. Not sure if this question makes sense and I apologize if that is the case.
Any help is truly, genuinely appreciated!
Scott Bishop
• Jason Brownlee January 28, 2018 at 8:20 am #
Here, the training data provides the basis for estimating the probabilities used by the model.
80. som February 5, 2018 at 4:14 am #
Hi Jason,
Sometimes I am getting the “ZeroDivisionError: float division by zero” when I am running the program
81. shadhana February 23, 2018 at 4:12 am #
is there a way to get the elements that fall under each class after the classification is done
• Jason Brownlee February 23, 2018 at 12:03 pm #
Do you mean a confusion matrix:
https://machinelearningmastery.com/confusion-matrix-machine-learning/
• Christian Post March 3, 2018 at 12:42 am #
There is probably a more elegant way to write this code, but I’m new to Python 🙂
The returning array lets you calculate all sorts of criteria, such as sensitivity, specifity, predictive value, likelihood ratio etc.
• Christian Post March 9, 2018 at 2:01 am #
Whoops I realized I mixed something up. FP and FN have to be the other way round since the outer if-clause checks the true condition. I hope no one has copied that and got in trouble…
Anyways, looks like the confusion matrix lives up to its name.
82. Christian Post March 3, 2018 at 12:48 am #
Hello Jason,
first of all, thanks for this blog. I learned a lot both on python (which I am pretty new to) and also this specific classifier.
I tested this algorithm on a sample database with cholesterol, age and heart desease, and got better results than with a logistic regression.
However, since age is clearly not normally distributed, I am not sure if this result is even legit.
Could you explain how I can change the calculateProbability function to a different distribution?
Oh, and also: How can I add code tags to my replies so that it becomes more readable?
83. Nil March 7, 2018 at 8:10 pm #
Hi DR. Jason,
It is a very good post.
I did not see the K Fold Cross Validation in this post like I saw in your post of Neural Network from scratch. Does it mean that Naive Bayes does not need K Fold Cross Validation? Or does not work with K Fold CV?
It is because I am trying to use K Fold CV with Naive Bayes from scratch but I find it difficult since we need to split data by class to make some calculations, we find there two datasets if we have two class classification dataset (but we have one K Fold CV function).
I am facing serious difficulties to understand K Fold CV it seams that the implementation from scratch depends on the classifier we are using.
If you have some answer or tips to this approach (validation on Naive Bayes with K Fold CV – from scratch – ) please let me know.
Regards.
• Jason Brownlee March 8, 2018 at 6:22 am #
It is a good idea to use CV to evaluate algorithms including naive bayes.
84. y April 23, 2018 at 11:34 pm #
If the variance is equal to 0, how to deal with?
• Jason Brownlee April 24, 2018 at 6:34 am #
If the variance is 0, then all data in your sample has the same value.
• y April 25, 2018 at 5:43 pm #
if ‘stdev’ is 0 , how to deal with it?
• Jason Brownlee April 26, 2018 at 6:23 am #
If stdev is 0 it means all values are the same and that feature should be removed.
85. kk May 29, 2018 at 4:40 pm #
How can I retrain my model, without training it from scratch again?
Like if I got some new label for instances, how to approach it then?
• Jason Brownlee May 30, 2018 at 6:33 am #
Many models can be updated with new data.
With naive bayes, you can keep track of the frequency and likelihood of obs for discrete values or the PDF/mass function for continuous values and update them with new data.
86. Manoj June 7, 2018 at 7:08 pm #
Very good Program, its working correctly, how to construct the bayesian network taking this pima diabetes csv file.
87. Yashvardhan June 28, 2018 at 5:19 pm #
Hey can I ask why didn’t you use a MLE estimate for the prior?
88. Vinayak Tyagi July 9, 2018 at 6:37 pm #
It’s a Navie-bayes-classifiation but insted of using Navies-bayes therom we are using Gaussian Probability Density Function Why ?????
• Jason Brownlee July 10, 2018 at 6:44 am #
To summarise a probability distribution.
89. Prajna p n July 11, 2018 at 10:04 pm #
why input vector is [1.1,”?”] ? and it works fine I try with [1.1] . Why did we choose the number 1.1 as the parameter for input vector?
90. Zhenduo Wang July 19, 2018 at 1:57 am #
It seems that your code is using GaussianNB without prior. The prior is obtained with MLE estimator of the training set. It is simply a constant multiplied by the computed likelihood probability. I tried both (with/without prior) and found that predicting with prior would give better results most of the time.
91. ken stonecipher July 31, 2018 at 2:24 am #
Jason, why do I get the error messate
AttributeError: ‘dict’ object has no attribute ‘iteritems’ when I am trying the run the
# Split the dataset by class values, returns a dictionary in the Naïve Bayes example in Chapter 12 of the Algorithms from Scratch in Python tutorial?
Thanks
Ken
• Jason Brownlee July 31, 2018 at 6:10 am #
Sounds like you are using Python 3 for Python 2.7 code.
92. ken stonecipher July 31, 2018 at 2:44 am #
Jason, I figured it out. In Py 3.X does not understand the .interitem function so when I changed that to Py 2.7 .item it worked fine. Version difference between 3.X and 2.7
Thanks
93. tommy July 31, 2018 at 11:14 pm #
Hi,
Thank you for the helpful post.
When i ran the code in python 3.6 i encountered the
“ZeroDivisionError: float division by zero” ERROR
• Jason Brownlee August 1, 2018 at 7:44 am #
The code was written for Py2.7
94. jitesh pahwa August 2, 2018 at 4:34 pm #
thank you Jason for your awesome post.
95. Sheshank Kumar August 25, 2018 at 2:23 pm #
I want to calculate the F1 Score. I can do this for binary classification problem. I am confused how to do it for multi class classification. I have calculated the confusion matrix for my data set. my data set contains three different classValue.
Kindly suggest.
96. Rajkumar August 31, 2018 at 7:22 pm #
Thanks for the awesome work. P.S external link to the weka for naive bayes shown 404.
Kind Regards
97. Sajida September 29, 2018 at 10:27 pm #
Hi Sir
Great example. You are doing a great great work thanks. kindly also upload similar post related to hidden markov model
• Jason Brownlee September 30, 2018 at 6:02 am #
Thanks for the suggestion.
98. Javier Lazaro October 3, 2018 at 12:12 am #
Thanks for posting this nice algorithm explained. Nevertheless I struggled a lot until I found out that it is a Gaussian Naive Bayes version. I expected calculations of probabilities counting the prior, posterior, etc. It took me a while to figure it out. I have learnt a lot through it, though 🙂
99. roba October 18, 2018 at 10:11 pm #
How we fix this error in python 2.7
return sum(numbers) / (len(numbers))
TypeError: unsupported operand type(s) for +: ‘int’ and ‘str’
thank you for your awesome post…
100. Mustajab Hussain October 21, 2018 at 10:42 pm #
Hi Jason. I am getting this error while running your code
File “C:/Users/Mustajab/Desktop/ML Assignment/Naive Bayes.py”, line 191, in loadCsv
dataset = list(lines)
Error: iterator should return strings, not bytes (did you open the file in text mode?)
How can I fix this error?
101. dingusagar November 18, 2018 at 11:37 pm #
P(A/B) = (P(B/A)P(A)) / P(B))
This is the formula..
In RHS of the formula we use only numerator because denominators are same for every class and doen’t add any extra info as to determine which probability is bigger.
But I don’t thing the numerator is correctly implemented in the function.
Specifically P(A) needs to be multiplied with the product of the conditional probabilities of the individual features belonging to a particular class.
P(A) here is our class probability. That term is not multiplied to the final product inside the calculateClassProbabilities() function.
102. Kody November 27, 2018 at 9:37 am #
Something seems off here. Your inputs to the calculateProbability() function are x=71.5, mean=73.0, stdev=6.2. Some simple math will tell you that x is 0.24 standard deviations away from the mean. In a Gaussian distribution, you can’t get much closer than that. Yet the probability of belonging to that distribution is only 6%? Shouldn’t this be something more like 94%?
103. mathis December 2, 2018 at 3:49 pm #
hello Jason, I try to run this code with my own file but I get “ValueError: could not convert string to float: wtdeer”. do you know How can I fix it ?
thank you so much
104. FAW December 6, 2018 at 9:55 am #
Nice tutorials Jason, however needs your attention for the typo in print statements, hope it will be fixed soon.
Thanks anyways once again for providing such a nice explanation!
105. faiza December 27, 2018 at 8:20 pm #
File “C:/Users/user/Desktop/ss.py”, line 9, in
File “C:/Users/user/Desktop/ss.py”, line 4, in loadCsv
dataset = list(lines)
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
solve this error
106. Ahsan January 4, 2019 at 12:45 am #
jason i have a question i want to do prediction heart disease and the result will be like this for example there are 9 heart disease heart failure heart stroke and more it will predicate the data and generate result like you have heart stroke disease so i have a question which classifier is best and
107. Nyi Dewi Harta Putih January 6, 2019 at 1:32 pm #
in code program, can you show to me which one that P(H|E), (P(E│H), P(H) ), (P(E)) as we know that it is formula of naive bayes classifier. would you like to tell me? because I need that reference for show to my lecture. thank you
108. Michael Fu February 11, 2019 at 3:20 pm #
Jason,
I love your ‘building from scratch way’ of approaching machine learning algorithm, this is equally important as ‘understanding the algorithm’. Your implementations fulfilled the ‘building part’ which is sometimes understated in college classes.
109. Antony Evmorfopoulos March 4, 2019 at 1:40 am #
Very thorough step by step guide and your blog is one of the best out there for coding in machine learning.Naive bayes is a very strong algorithm but i find that its best usage is usually with other classifiers and voting between them and get the winner for the final classifier
110. priya March 23, 2019 at 1:47 am #
im getting an error
could not convert string to float: id
• Jason Brownlee March 23, 2019 at 9:30 am #
I believe the example requires Python 2. Perhaps confirm your version of Python?
111. Rabin Ghimire March 26, 2019 at 3:54 pm #
Why does this model predict values all 1 or all 0?
• Jason Brownlee March 27, 2019 at 8:55 am #
The model predicts probabilities and class values.
112. Mhzed April 10, 2019 at 7:35 pm #
Thanks for the in-depth tutorial.
I re-implemented the code, but was able to get mostly 60+% accuracy. The best so far was 70% and rather a fluke. Your 76.8% result seems a bit out of my reach. The train/test data sets are randomly selected so it’s hard to be exact. I am just wondering if 60+% accuracy is to be expected or I am doing something wrong.
• Mhzed April 10, 2019 at 7:46 pm #
My bad, ignore my post. Mistake in code. 76% is the average result.
• Jason Brownlee April 11, 2019 at 6:34 am #
Double check that you copied all of the code exactly?
• yared April 17, 2019 at 6:15 pm #
it is not working could u help me pla
113. Yared April 17, 2019 at 5:33 pm #
when i execute the code it shows the following error, would you help me Please?
Split{0}rows into train={1} and test={2} rows
—————————————————————————
AttributeError Traceback (most recent call last)
in
98 accuracy = getAccuracy(testSet, predictions)
99 print(‘Accuracy: {0}%’).format(accuracy)
–> 100 main()
in main()
92 trainingSet, testSet = splitDataset(dataset, splitRatio)
—> 93 print(‘Split{0}rows into train={1} and test={2} rows’).format(len(dataset), len(trainingSet), len(testSet))
94 # prepare model
95 summaries = summarizeByClass(trainingSet)
AttributeError: ‘NoneType’ object has no attribute ‘format’
114. Júnior Pires May 12, 2019 at 9:37 am #
One question:
to calculate bayes function is used the (Prior probability * Density Function)/Total probability but in your algorithm you only calculate the Density Function and use it to make the predictions. Why? Im confuse.
thanks for listening.
• Júnior Pires May 13, 2019 at 12:48 am #
I realized that the denominator can be omitted, but and about the prior probability? Shoudn’t I compute too?
• Jason Brownlee May 13, 2019 at 6:47 am #
Yes, in general it is a good idea.
• Jason Brownlee May 13, 2019 at 6:42 am #
I removed the prior because it was a constant in this case.
• Júnior Pires May 13, 2019 at 6:48 am #
It was a constant because of the dataset?
• Jason Brownlee May 13, 2019 at 6:50 am #
Correct, even number of observations for each class, e.g. fixed prior.
• Júnior Pires May 13, 2019 at 6:52 am #
Thank you for the explanation. 🙂
• Jason Brownlee May 13, 2019 at 2:31 pm #
You’re welcome.
115. Maysoon alkhair June 2, 2019 at 6:55 pm #
Hi, Great example. You are doing great work thanks.
I have a question, can you help me to the modification your code for calculating precision and the recall so.
116. Matthew June 19, 2019 at 5:22 pm #
I’m afraid your responses to others show you’ve fundamentally misunderstood the prior. You do need to include the prior in your calculations, because the prior is different for each class, and depends on your training data–it’s the fraction of cases that fall into that class, i.e 500/768 for an outcome of 0 and 268/700 for an outcome of 1, if we used the entire data set. Image a case where you had one feature variable and its normal distributions were identical for each class–you’d still need to account for the ratio between the different classes in when a prediction.
The only way you’d leave out the prior would be if each class had an equal number of data points in the training data, but the likelihood of getting 257 (=floor(768 * 0.67)/2) of each class in this instance is essentially zero.
It’s easy to check this is true–just fit scikit-learn’s GaussianNB on your training data and check its score on it and your test data. If you don’t include the prior for each class, your results won’t match.
117. Sanori Dayaratne July 23, 2019 at 1:51 pm #
Hi Jason..I need to use Naive Bayes to get results for Sinhala language(Language used in Sri Lanka) for sentiment analyzing. I already have a pre processed and tagged data set with sentences. will I be able to get results using the above code?
118. imanuel August 5, 2019 at 6:03 pm #
hello, how do i apply this using debian on raprberry pi? especially for folder direction
• Jason Brownlee August 6, 2019 at 6:31 am #
I don’t know about raspberry pi, sorry.
119. SuleymanSuleymanzade August 23, 2019 at 9:28 am #
hello Jason
instead of separated[vector[-1]].append(vector) there must be
separated[vector[-1]].append(vector[:-1])
otherwise you append the class name to the features
120. Srijtih September 1, 2019 at 7:26 pm #
I have implemented the classifier with same idea but my own implementations and different dataset. Comparing the class conditional densities I am getting an accuracy above 70%. But once I try comparing posterior probabilities, accuracy is close to 50% only.. Am I doing anything wrong or is it supposed to come less?
121. Sam September 12, 2019 at 11:17 pm #
thanks for the wonderful article
i might missed this
but how you calculate the marginal probability??
• Jason Brownlee September 13, 2019 at 5:43 am #
We skipped the prior as the classes were even, e..g it was a constant.
122. vira September 16, 2019 at 5:46 pm #
such a wonderful basic article …
thank you so much jason.
123. Parker Difuntorum September 29, 2019 at 1:07 am #
Amazing as always. Thank you!
124. bhavik shah October 22, 2019 at 6:21 pm #
I am a new to Python and I have a strong wish to learn it , Thank you author for positing this list of blogs and helping me to learn python in a better way .
Thank you once again.
• Jason Brownlee October 23, 2019 at 6:39 am #
Thanks, I’m happy it helps!
125. Mohamed Gmal November 14, 2019 at 8:41 am #
how i use naive bays algorithm with text data not binary
and you can make the python code as C#
• Jason Brownlee November 14, 2019 at 1:42 pm #
Text can be encoded with a bag of words model then a naive bayes model can be used.
Sorry, I don’t have tutorials for C#.
126. Roberto Araripe December 4, 2019 at 2:36 am #
Hi, Jason!
Congratulations for the article, but I take a error here…
I’m trying to run the code and always have the “IndexError: list index out of range”
The row list have only four numbers in it… row = [5.7, 2.9, 4.2, 1.3]
However, in the calculate_class_probabilities function, the “for” below cause the error becouse len of class_summaries result 5, and the range iteration lets “i” with 0, 1, 2, 3 and 4.
So, when the “i” get the value 4, row[ 4 ] is out of range…
row[0] = 5.7
row[1] = 2.9
row[2] = 4.2
row[3] = 1.3
for i in range(len(class_summaries)):
mean, stdev, _ = class_summaries[j]
probabilities[class_value] *= calculate_probability(row[ i ], mean, stdev)
Please, can you help me to fix this?
127. M January 11, 2020 at 11:01 pm #
Great article, helped me a lot.
My question is that what exactly do we use from the training set to test the model? I mean I get that what we get from the training set is the “summaries” (mean, standard deviation, and length of attributes in one class), but how exactly do we use this information to predict the class of our test set?
• Jason Brownlee January 12, 2020 at 8:03 am #
See the very last code example, it shows how to make a single prediction.
And see the predict() function.
• M January 12, 2020 at 11:32 pm #
I have looked at all of it, but I’m sort of a newbie in coding so I need a bit of help.
I guess what I’m trying to understand, is exactly these lines:
for i in range(len(class_summaries)):
mean, stdev, _ = class_summaries[i]
probabilities[class_value] *= calculate_probability(row[i], mean, stdev)
I’m assuming this is it: with the help of this information from the training set (the mean and standard deviation of each attribute for all points in a class), we calculate the probability of the SAME attribute for a new point belonging to that class.
And then these probabilities are multiplied (for all attributes) and also multiplied by the class probability, to ultimately give us the probability of that new point belonging to that class.
I can’t explain it any better, I hope what I’m trying to say is clear.
So is this correct?
128. Stop stealing January 13, 2020 at 1:31 am #
Thanks for sharing. But… why didn’t you mention the code owner?
It would be good to mention:
https://dzone.com/articles/naive-bayes-tutorial-naive-bayes-classifier-in-pyt
129. Chinu January 26, 2020 at 6:55 am #
Hi…
Thx a lot..for this blog…I have a question..
Instead ofapplying Normal distribution, Can I apply Binomial or Multi Binomial distribution for digit classification problem for MNIST dataset?
In MNIST dataset, there are 0 to 9 digits. So I think there are 10 classes.
Thx
• Jason Brownlee January 27, 2020 at 6:59 am #
You can use a multinomial distribution, e.g. multinomial regression instead of logistic regression.
May as well try LDA though.
130. aditi April 10, 2020 at 2:46 am #
Great article, helped me a lot,implemented successfully pima indian.
• Jason Brownlee April 10, 2020 at 8:34 am #
Thanks, I’m happy to hear that!
131. Kpakpo Moevi July 20, 2020 at 6:10 pm #
I need to understand this formula
P(class=0|X1,X2) = P(X1|class=0) * P(X2|class=0) * P(class=0)
I’m trying to use the chain rule and Bayesian theorem but I’m getting stucked.
Any help is appreciated.
• Jason Brownlee July 21, 2020 at 5:58 am #
What problem are you having with that formula exactly?
132. Varsha August 27, 2020 at 2:48 am #
Hello,
If my y variable is discrete (dependent) and my three variables x1,x2,x3 (independent ) are continuous then how to apply Naive Bayes model?
• Jason Brownlee August 27, 2020 at 6:24 am #
Gaussian naive bayes (described above) would be appropriate.
133. varsha August 28, 2020 at 4:19 am #
How to find P(y/x1,x2,x3) (during training phase) when y is discrete ( it contains 2 classes) and x1,x2,x3 (independent vectors) are continuous ?
• Jason Brownlee August 28, 2020 at 6:56 am #
We estimate it using independent probability for each P(y_i|x_i)
134. Varsha August 29, 2020 at 8:39 pm #
Do I have to use here this formula P(class=0|X1,X2,x3) = P(X1|class=0) * P(X2|class=0) * P(x3|class=0)* P(class=0)? as explained above in your article? Is there any python function which will give me directly this conditional probability distribution? Thank you for giving reply for every simple query.
135. Derek September 12, 2020 at 2:33 am #
When I try to trim down or change the data in the CSV I am getting “ZeroDivisionError: float division by zero”. I see someone else wrote this has to do w/ Python versioning. Is that correct? If so, no problem to fix I assume.
Thanks,
Derek
• Jason Brownlee September 12, 2020 at 6:18 am #
Perhaps try adding a small number when calculating the frequency, e.g. 1e-15, or use sklearn instead.
136. Carolin September 28, 2020 at 11:18 pm #
I have a more general question regarding the probability density function. You write
“Running it prints the probability of some input values. You can see that when the value is 1 and the mean and standard deviation is 1 our input is the most likely (top of the bell curve) and has the probability of 0.39.”
But the probability of one specific value within a continous context (which holds for the gaussian distribution) is always zero, therefore the 0.39 can not be interpreted as a probabiltity. Am I getting something wrong here, or is this terminology just used for simplicity?
• Jason Brownlee September 29, 2020 at 5:39 am #
The PDF summarizes the probability for all possible events in the domain. Once known, we can estimate the probability of a single event in the domain, not in isolation.
137. muqadas January 8, 2021 at 1:42 am #
amazing tutorial and guide…
can i have this code in R ??? or do someone know any website from where i can get help???
138. Mahi Jakaria January 27, 2021 at 9:57 pm #
I am getting “could not convert string to float: ‘Id’ type of error in str_column_to_float function. How can I get rid of this problem?
139. Nirupama February 3, 2021 at 11:01 pm #
Jason,
Am using the same code as above on Pima Indians file. Am getting ZeroDivisionError
An actual error pointing to –
—-> 8 variance=sum([pow(x-avg,2) for x in numbers])/float(len(numbers)-1)
9 print(“variance at stdevfunction=”,variance)
10 return math.sqrt(variance)
ZeroDivisionError: float division by zero
140. Ifagan Gudeta February 6, 2021 at 7:44 am #
Hello sir, I got this error. Can you help me?
Traceback (most recent call last):
File “C:/Users/W10X64_PLUS-OFFICE/Desktop/IRIS PROJECT/Predict.py”, line 100, in
str_column_to_float(dataset, i)
File “C:/Users/W10X64_PLUS-OFFICE/Desktop/IRIS PROJECT/Predict.py”, line 21, in str_column_to_float
row[column] = float(row[column].strip())
ValueError: could not convert string to float: ‘sepal_length’
In addition, I have a question. I am working on a project on a Hotel recommendation system using hybrid recommendation approach. See my dataset here -> https://drive.google.com/file/d/1jGdcZ2yEbHh-JnXl4eAowyUQaPwebiW8/view?usp=sharing . Do this algorithm work for me?
141. Jack February 9, 2021 at 7:07 pm #
Hello Dr,
I’m studying MSc and my thesis is about DDoS detection based on ML (Naive Bayes), I created a model based on Gaussian NB, but I used sklearn library for the algorithm, my supervisor asked me to design a Bayesian algorithm and compare it to the traditional Bayesian algorithm that I used from the sklearn library. I found it difficult to make changes to the algorithm and make it my own, do u have any advice on how to do that?
• Jason Brownlee February 10, 2021 at 8:02 am #
What is a “traditional bayesian algorithm”?
• Jack February 10, 2021 at 6:05 pm #
like the one (Gaussin NB) in scikit-learn library.
• Jason Brownlee February 11, 2021 at 5:51 am #
See the above tutorial on exactly this.
142. Jeetendra Dhall February 18, 2021 at 5:21 pm #
Thank you so much for the detailed explanation and code.
The suggestion of ‘Addition of Log-Likelihoods to the Log-Prior to calculate Posterior’ can be found here:
https://github.com/j-dhall/ml/blob/gh-pages/notebooks/Gaussian%20Naive%20Bayes%20for%20Iris%20Flower%20Species%20Classification.ipynb
This is basically to avoid multiplication of really small numbers leading to even smaller numbers and rounding errors.
• Jason Brownlee February 19, 2021 at 5:56 am #
Well done!
Perhaps you can cite and link back to this tutorial on which you based your code.
143. Johnson February 23, 2021 at 11:38 am #
Thank you so much on this, really helps. This code is only meant for continuous data set? By the way, can you give some advice on how to tackle the issue with discrete data? When running this code, I received float division zero due to discrete data in my dataset (my dataset is a mixed of both categorical and continuous).
Thank you once again!
• Jason Brownlee February 23, 2021 at 1:23 pm #
You’re welcome.
Yes, you can use a binomial or multinomial probability distribution instead of a gaussian.
Yes, it is a good idea to add 1 to the probs when combining them to handle a zero case.
• Johnson February 24, 2021 at 2:10 am #
Thanks Jason. Is it possible for you to show an example of how to do multinomial probability for discrete data? I have checked your other articles but it looks like it isn’t clear.
• Jason Brownlee February 24, 2021 at 5:36 am #
• Johnson February 26, 2021 at 12:51 am #
Thanks but the article shows it using library – trying to understand how to do this from scratch if you have any idea.
144. John March 16, 2021 at 4:10 am #
Hi, can u please tell me how to do confusion matrix and f1-score for this code, I tried to do it using The sklearn library but I couldn’t figure out how or what to pass as the 2 required positional arguments, what I mean is what is going to be the “y_test”,”y_prediction” for this model?
ex.
f1score=f1_score(y_test, y_prediction)
cm=confusion_matrix(y_test, y_prediction)
• Jason Brownlee March 16, 2021 at 4:51 am #
• John March 16, 2021 at 11:13 pm #
Thnx for the replay, I read that article, I don’t have a problem with understanding confusion matrix, I just don’t know within this code, what parameters should I pass of that equivalent to (y_test, y_prediction) to the confusion matrix function? what I mean is when I write the code bellow, based on ur model, what should I pass as my 2 parameters?
cm = confucion_matrix(? , ?)
print (cm)
• Jason Brownlee March 17, 2021 at 6:07 am #
You pass in the expected and the predicted values for your dataset.
• john March 18, 2021 at 1:42 am #
Thnx, you see, I’m new to machine learning, and I’ve been rewriting your code to better understand it, I know you have to pass expected and predicted to the function to get confusion matrix, and that is the problem I’m facing, I don’t know what are the expected and predicted values in your code. Is there a need for writhing more codes or is there already the expected and predicted values in the above code that can be passed to confusion_matrix(expected?, predicted?) function?
• Jason Brownlee March 18, 2021 at 5:23 am #
The above example uses k-fold cross-validation to evaluate the model many times.
First, simplify the examples so if fits the model once with one training set and one test set.
Then use the model to make predictions on the test set, the target values from the test set are “expected” values, and the predictions are the “predicted” values.
Sorry, I cannot make these changes for you – if it is too challenging, I recommend starting with a simpler tutorial using a library implementation of the algorithm and focus instead on the evaluation of that algorithm.
145. Abhi April 12, 2021 at 1:41 pm #
Excellent Article…! very nice explanation….! Thanks
146. Brij Bhushan April 23, 2021 at 12:21 pm #
This was a useful post and I think it’s fairly easy to see in the other reviews, so this post is well written and useful. Keep up the good work.
147. Yuri May 4, 2021 at 10:45 pm #
Hi, Jason. Great article, as always. Please, give an advice how to tune GaussianNB in Python? By what parameters? Some of your posts? Other sources of information? Thanks.
148. Gautam August 30, 2021 at 5:51 pm #
Hi Jason, thanks for the tutorial. Do you have the R version of it? Thanks
149. Duong October 12, 2021 at 4:12 am #
—————————————————————————
Hi Jason Brownlee
I got this error with Jupyter Notebook with python 3.9. Any suggestion for fixing that
TypeError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_8620/3280026982.py in
97 filename = ‘iris.csv’
—> 99 for i in range(len(dataset[0])-1):
100 str_column_to_float(dataset, i)
101 # convert class column to integers
TypeError: ‘NoneType’ object is not subscriptable
• Adrian Tam October 13, 2021 at 7:30 am #
150. Robbiek January 4, 2022 at 7:43 pm #
thank you very much, very helpful in my research
• James Carmichael January 7, 2022 at 8:16 am #
Thank you for kind words Robbiek!
151. Kumar March 2, 2022 at 6:51 am #
Hi Jason,
I am not sure what the values of row[i] in the following section mean:
probabilities[class_value] *= calculate_probability(row[i], mean, stdev)
Is it returning every element of the first row from the dataset in each iteration, e.g., for the iris dataset values 5.1,3.5,1.4,0.2, is it returning 5.1, 3.5, and so on?
Or Is it returning every element of every row in each iteration, or is it something else?
• James Carmichael March 2, 2022 at 12:14 pm #
I’m eager to help, but I just don’t have the capacity to debug code for you.
I am happy to make some suggestions:
Consider aggressively cutting the code back to the minimum required. This will help you isolate the problem and focus on it.
Consider cutting the problem back to just one or a few simple examples.
Consider finding other similar code examples that do work and slowly modify them to meet your needs. This might expose your misstep.
Consider posting your question and code to StackOverflow.
152. Alexandre Kolisnyk April 1, 2022 at 8:03 am #
Such a beautiful article ! Helped me a lot.
About the missing PRIOR term I disagree. It´s already there in these lines:
for class_value, class_summaries in summaries.items():
probabilities[class_value] = log(summaries[class_value][0][2]/float(total_rows))
I double checked my results with the Scikit API GaussianNB. Perfect match!
• James Carmichael April 1, 2022 at 9:03 am #
Thank you for the feedback Alexandre!
153. Antonio Guerrieri April 10, 2022 at 8:30 pm #
Hi Jason
thanks for this great tutorial!
Just a silly question, is this
• mean = sum(x)/n * count(x)
correct?
Shouldn’t it be:
• mean = sum(x)/count(x)
?
It seem to me that the code:
# Calculate the mean of a list of numbers
def mean(numbers):
return sum(numbers)/float(len(numbers))
implements the latter.
Thanks!
• James Carmichael April 10, 2022 at 11:39 pm #
Hi Antonio…The following may help add clarity regarding the calculation of average (mean):
https://www.guru99.com/find-average-list-python.html
• Antonio Guerrieri April 17, 2022 at 12:57 am #
Hi James,
what is n in
mean = sum(x)/n * count(x)?
I think it should be
mean = sum(x)/count(x)
as, of course, indicated in section “Python Average – Using sum() and len() built-in functions” | 2023-01-28 20:09:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34230151772499084, "perplexity": 1514.0369973716977}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499654.54/warc/CC-MAIN-20230128184907-20230128214907-00155.warc.gz"} |
http://math.stackexchange.com/questions/650675/question-about-tensor-product-of-homomorphisms | # Question about tensor product of homomorphisms
I've come to think about this problem when reading a proof in Commutative Algebra by N. Bourbaki. Say, let $R$ be a commutative ring, given 3 $R-$modules $A$, $B$, $C$, and the $R$-homomorphism $f:B \to C$. Is the following equivalent?
1. $f: B \to C$ is an isomorphism.
2. $1_A \otimes f: A \otimes B \to A \otimes C$ is an isomorphism.
I think they are equivalent, as I see the author using this fact in the proof. $1 \Rightarrow 2$ is straight-forward. But I fail to see how to prove: $2 \Rightarrow 1$. Is it correct? Any hints would be appreciated.
Thank you guys very much,
And have a good day,
-
doesn't this follow from functoriality of tensor product? note that a functor preserves isomorphisms. – adrido Jan 25 at 7:53
Hi, I know that the functor $1_A \otimes -$ preserves isomorphism. However, I'm asking the other way round. – user49685 Jan 25 at 7:57
On the other hand, $2 \implies 1$ will hold if $A$ is a faithfully flat $R$-module (e.g. a free $R$-module) – zcn Jan 25 at 8:06
What is wrong with this question that would motivate you to delete it? Just because a question has an easy answer does not mean it won't be useful to others in the future. – PVAL Jan 25 at 8:19
Even easier: Take $A=0$ to see that $2$ does not imply $1$. – Julian Kuelshammer Jan 25 at 9:17 | 2014-07-31 19:55:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336116909980774, "perplexity": 306.12445630160033}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510273663.2/warc/CC-MAIN-20140728011753-00063-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://math.tutorvista.com/number-system/simplifying-fractions.html | Top
# Simplifying Fractions
Fractions are the essential part of arithmetic. A fraction is defined as a type of number which is represented in the form of $\frac{p}{q}$, where q $\neq$ 0; where, p - the number above the bar is known as numerator and q - the number below the bar is termed as denominator. The examples of fractions are $\frac{1}{2}$, $\frac{3}{7}$, 8 ($\frac{8}{1}$), -21 etc.
Fractions are usually needed to be simplified. By s
implifying fractions, we mean to make the fraction as simple as possible. Simplifying also means reducing. So this means that we have to reduce the fraction in its lowest terms.
Simplifying a fraction does not always result in a simplest reduction of the fraction; since in some cases, higher equivalent fraction is required as the answer. Simplification could also result in an improper fraction. In such cases, we are needed to convert it in the form of a mixed fraction. Simplification could also end up in reducing the fraction to its lowest possible equivalent.
A fraction may be proper or improper fraction. Example: $\frac{4}{8}$ is simplified as $\frac{1}{2}$.In this page, we are going to learn about method of simplification of variables.
Related Calculators Fraction Simplifier Divide and Simplify Fractions Calculator Fraction Calculator Simplify Simplify Complex Fractions Calculator
## Simplifying Fractions Steps
Below are the steps for simplifying fractions:
Step 1: Start with the lowest factor which can divide a both numerator and denominator until you cannot precede it further.
Step 2: The fraction terms are divided by greatest common factor. These steps help us to simplifying the given fractions.
Use the below widget to simplify fractions.
## Simplifying Fractions with Variables
At times some fractions have terms with variables such 3x or 4xy. These fractions can be simplified easily if both numerator and denominator have same variable. Below are example for simplify fractions with variables
### Solved Example
Question:
Simplify fraction $\frac{x^3+x^2}{x^4-x^2}$
Solution:
$\frac{x^3+x^2}{x^4-x^2}$ = $\frac{x^2(x+1)}{x^2(x^2-1^2)}$
= $\frac{x+1}{(x+1)(x-1)}$
= $\frac{1}{x-1}$
## Simplifying Fractions Practice Problems
### Practice Problems
Question 1: Simplify $\frac{114}{1000}$
Question 2: Reduce $\frac{2x^3-8x}{x-2}$
## Simplifying Fractions Examples
Below are the examples on Simplifying fractions:
### Solved Examples
Question 1: Simplify: $\frac{12}{144}$
Solution:
First start with small number 2 which divides both 12 and 144.
$\frac{12}{144}$ $\div$ $\frac{2}{2}$ = $\frac{12}{144}$ $\times$ $\frac{2}{2}$ = $\frac{6}{72}$
$\frac{6}{72}$ $\times$ $\frac{2}{2}$ = $\frac{3}{36}$
$\frac{3}{36}$ $\times$ $\frac{3}{3}$ = $\frac{1}{ 12}$
The simplest form of given fraction is $\frac{1}{12}$.
Question 2: Solve $\frac{6}{14}$
Solution:
Given fraction is $\frac{6}{14}$
$\frac{6}{14}$ $\times$\frac{2}{2}$=$\frac{3}{7}$Therefore$\frac{6}{14}$=$\frac{3}{7}$Question 3: Reduce$\frac{55}{44}$Solution: Factor each number: 55 = 11. 5 44 = 11. 4 Greatest common factor of 44 and 55 is 11.$\frac{55}{44}$x$\frac{11}{11}$=$\frac{5}{4}$Question 4: Write simplest form of$\frac{78}{68}$Solution: Multiply and divide fraction by 2$\frac{78}{68}$x$\frac{2}{2}$=$\frac{39}{34}$The simplest form of given fraction is$\frac{39}{34}\$
Related Topics Math Help Online Online Math Tutor
*AP and SAT are registered trademarks of the College Board. | 2019-05-26 10:42:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9018018841743469, "perplexity": 1370.69844337324}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259015.92/warc/CC-MAIN-20190526085156-20190526111156-00519.warc.gz"} |
http://eptcs.web.cse.unsw.edu.au/paper.cgi?GANDALF2011.1 | ## Synthesis from Recursive-Components Libraries
Yoad Lustig (Rice University) Moshe Vardi (Rice University)
Synthesis is the automatic construction of a system from its specification. In classical synthesis algorithms it is always assumed that the system is "constructed from scratch" rather than composed from reusable components. This, of course, rarely happens in real life. In real life, almost every non-trivial commercial software system relies heavily on using libraries of reusable components. Furthermore, other contexts, such as web-service orchestration, can be modeled as synthesis of a system from a library of components. In 2009 we introduced LTL synthesis from libraries of reusable components. Here, we extend the work and study synthesis from component libraries with ``call and return'' control flow structure. Such control-flow structure is very common in software systems. We define the problem of Nested-Words Temporal Logic (NWTL) synthesis from recursive component libraries, where NWTL is a specification formalism, richer than LTL, that is suitable for ``call and return'' computations. We solve the problem, providing a synthesis algorithm, and show the problem is 2EXPTIME-complete, as standard synthesis.
In Giovanna D'Agostino and Salvatore La Torre: Proceedings Second International Symposium on Games, Automata, Logics and Formal Verification (GandALF 2011), Minori, Italy, 15-17th June 2011, Electronic Proceedings in Theoretical Computer Science 54, pp. 1–16.
Published: 4th June 2011.
ArXived at: http://dx.doi.org/10.4204/EPTCS.54.1 bibtex PDF
References in reconstructed bibtex, XML and HTML format (approximated).
Comments and questions to: [email protected] For website issues: [email protected] | 2022-08-15 06:41:24 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8693240284919739, "perplexity": 7936.423319542487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572161.46/warc/CC-MAIN-20220815054743-20220815084743-00165.warc.gz"} |
https://www.techwhiff.com/issue/i-need-help-please-thank-you--327686 | # I need help please, thank you
###### Question:
i need help please, thank you
### 12 points! Will mark brainliest!! Read this opening paragraph of a persuasive essay about conserving energy. Identify the writer’s point of view as well as the type of rhetorical appeal that the writer uses to support the argument. Do you regularly forget to switch off the lights when you leave a room? Is your computer always on standby because you’re too lazy to unplug it? The good news, if you can call it that, is that you are not alone. The bad news is that you, and millions of others on our
12 points! Will mark brainliest!! Read this opening paragraph of a persuasive essay about conserving energy. Identify the writer’s point of view as well as the type of rhetorical appeal that the writer uses to support the argument. Do you regularly forget to switch off the lights when you leave a ...
### Przeczytaj w ramce "Przydatne zwroty" zdania, 0 których można by użyć, opisując problem z zakupionym aparatem fotograficznym. Następnie wybierz jedną z rzeczy z ćwiczenia 1, wyobraź sobie, że jest wadliwa lub zepsuta i wypisz wyrażenia, których mógłbyś/mogłabyś użyć podczas reklamacji
Przeczytaj w ramce "Przydatne zwroty" zdania, 0 których można by użyć, opisując problem z zakupionym aparatem fotograficznym. Następnie wybierz jedną z rzeczy z ćwiczenia 1, wyobraź sobie, że jest wadliwa lub zepsuta i wypisz wyrażenia, których mógłbyś/mogłabyś użyć podczas reklam...
### Let f(x) = 12 over the quantity of 4 x + 2Find f(−1).
Let f(x) = 12 over the quantity of 4 x + 2Find f(−1)....
### What is undefined I need help I don't understand this question
What is undefined I need help I don't understand this question...
### Which of the following best describes how john francis maguire book serves as a primary source for a historian studying the role of irish immigration in us historyAnswer:It documents a mid 19th century perspective on earlier Irish immigration
which of the following best describes how john francis maguire book serves as a primary source for a historian studying the role of irish immigration in us historyAnswer:It documents a mid 19th century perspective on earlier Irish immigration...
### Write a mathematical sentence that expresses the information given below. Use was your variable name. If necessary: type < = to means or > = to mean . A business employs 150 workers at the start of the year, but then some employees quit or are fired. After that, the business had 142 employees.
Write a mathematical sentence that expresses the information given below. Use was your variable name. If necessary: type < = to means or > = to mean . A business employs 150 workers at the start of the year, but then some employees quit or are fired. After that, the business had 142 employees....
### How to solve this problem by completing the square 6x^2 − 8x + 7= 0
how to solve this problem by completing the square 6x^2 − 8x + 7= 0...
### If you are running and you fall and everyone↓↓↓↓↓↓↓ passes you how can you still be in first place?? ik the answer but lets see if you know it twooo
if you are running and you fall and everyone↓↓↓↓↓↓↓ passes you how can you still be in first place?? ik the answer but lets see if you know it twooo...
### Which statement is incorrect regarding temperature inversion? A. It occurs when warmer air is above cooler air. B. It occurs in the stratosphere due to the ozone layer. C. It occurs in the thermosphere due to solar radiation. D. It occurs in the mesosphere due to meteors heating up the air.
Which statement is incorrect regarding temperature inversion? A. It occurs when warmer air is above cooler air. B. It occurs in the stratosphere due to the ozone layer. C. It occurs in the thermosphere due to solar radiation. D. It occurs in the mesosphere due to meteors heating up the air....
### Am I correct plz help!
Am I correct plz help!...
### Look at the map. Which of the labeled civilizations used its access to Indian Ocean trade routes to profit from trade between Asia and Europe?
Look at the map. Which of the labeled civilizations used its access to Indian Ocean trade routes to profit from trade between Asia and Europe?...
### Liz spent a total of $44.88 at the mall she has$7.62 left. How much money,m,did Liz have when she arrives at the mall?
liz spent a total of $44.88 at the mall she has$7.62 left. How much money,m,did Liz have when she arrives at the mall?...
### If a cylinder of radius 2 cm and height 4 cm is submerged in a graduated cylinder of radius 3 cm containing a liquid. By how much does the liquid rise?
If a cylinder of radius 2 cm and height 4 cm is submerged in a graduated cylinder of radius 3 cm containing a liquid. By how much does the liquid rise?...
### Need help right now pls
Need help right now pls...
### If you leave your job when should you notify a dso so that cpt can be removed from your record?
if you leave your job when should you notify a dso so that cpt can be removed from your record?...
### The second part of the Declaration of Independence is a
The second part of the Declaration of Independence is a...
### A rock formation near Gainesville, Florida, was formed 530 million years 1 ago. Which information does a scientist need to most accurately determine the age of a rock? the percentage of mineral that makes up the rock the amount of each radioactive element present in the rock the percentage of fossilized marine life that makes up the rock the amount of weathering present on the surface of the rock
A rock formation near Gainesville, Florida, was formed 530 million years 1 ago. Which information does a scientist need to most accurately determine the age of a rock? the percentage of mineral that makes up the rock the amount of each radioactive element present in the rock the percentage of fossil...
### Mrs. Johnson is buying a car that costs $15,000. She has$3,000 for a down payment. If each of her four children want to pay for the rest of the car, how much does each child need to pay?
Mrs. Johnson is buying a car that costs $15,000. She has$3,000 for a down payment. If each of her four children want to pay for the rest of the car, how much does each child need to pay?... | 2022-10-05 19:20:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42473798990249634, "perplexity": 2374.1091591554487}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337663.75/warc/CC-MAIN-20221005172112-20221005202112-00581.warc.gz"} |
https://math.stackexchange.com/questions/3028757/conditional-probability-calculation-in-bayes-net | # Conditional Probability Calculation in Bayes Net
Say I have a simple Bayes Net that appears like that in the picture and am giving the following probabilities:
$$P(y|x) = 0.5$$
$$P(z|x)=0.4$$
$$P(y|\bar{x})=0.8$$
$$P(z|\bar{x}) = 0.9$$
How would I calculate the following, or is it not possible to calculate them? I think I need to know $$P(x)$$ to be able to calculate them:
$$P(y)$$
$$P(x|y \land z)$$
$$P(x|y)$$
## 1 Answer
How would I calculate the following, or is it not possible to calculate them? I think I need to know P(x) to be able to calculate them:
Yes.
$$P(y)=P(x)P(y\mid x)+P(\bar x)P(y\mid\bar x)$$
$$P(x\mid y,z) = \dfrac{P(x)P(y\mid x)P(z\mid x)}{P(x)P(y\mid x)P(z\mid x)+P(\bar x)P(y\mid\bar x)P(z\mid\bar x)}$$
$$P(x\mid y)=\dfrac{P(x)P(y\mid x)}{P(y)}$$ | 2019-07-21 02:19:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7401608824729919, "perplexity": 209.72689589243734}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526818.17/warc/CC-MAIN-20190721020230-20190721042230-00133.warc.gz"} |
https://brilliant.org/discussions/thread/the-number-sequence-and-the-number-polynomial/ | # The Number Sequence and The Number Polynomial
I just got an idea of sequences that can be created out of each and every positive integer. I would like to start it with an example :
For the number 6 , the sequence would be 2,6,12,20,30,42,56 ........... it is because 6 = 1 x 6 or 6 = 2 x 3 . We could observe that the lowest gap between the factors is in the case of 2 x 3 . Hence if 2 is considered as the variable 'n' , 2 x 3 can also be written as n x (n+1) . Hence , when we substitute 1 in place of 'n' ,it is 2 , if substituted 2 the 2nd term would be 6 , 3rd term would be 12 and so on . Hence the number sequence of 6 is 2,6,12,20,30,42,56 .........
Yes , the next i would like to describe the number polynomial . Again , let me describe it with the number 6 . Hence we know that the factors of 6 are 1,2,3,6 . The polynomial of 6 would be f(x) = 1x^2 + 2x^6 + 3x^12 + 6x^42. Hence the idea is simple . The first term's coefficient would be the first divisor and the power of the term would be the 1st term of the number sequence of 6 i.e. 2 . The second term of the polynomial would be 2 as the second factor is 2 while the power of the term would be the second term in the number sequence of 6 i.e. 6 . Hence the last of the polynomial would have the coefficient as 6 , the last divisor with the power being the 6th term in the sequence of 42.
I hope you will like this idea . Thanks for reading my note and like & reshare the note if you like it !!!
Note by Sriram Venkatesan
4 years, 3 months ago
MarkdownAppears as
*italics* or _italics_ italics
**bold** or __bold__ bold
- bulleted- list
• bulleted
• list
1. numbered2. list
1. numbered
2. list
Note: you must add a full line of space before and after lists for them to show up correctly
paragraph 1paragraph 2
paragraph 1
paragraph 2
[example link](https://brilliant.org)example link
> This is a quote
This is a quote
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
# I indented these lines
# 4 spaces, and now they show
# up as a code block.
print "hello world"
MathAppears as
Remember to wrap math in $ ... $ or $ ... $ to ensure proper formatting.
2 \times 3 $2 \times 3$
2^{34} $2^{34}$
a_{i-1} $a_{i-1}$
\frac{2}{3} $\frac{2}{3}$
\sqrt{2} $\sqrt{2}$
\sum_{i=1}^3 $\sum_{i=1}^3$
\sin \theta $\sin \theta$
\boxed{123} $\boxed{123}$ | 2019-06-27 11:23:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9851590394973755, "perplexity": 748.041752196452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628001089.83/warc/CC-MAIN-20190627095649-20190627121649-00204.warc.gz"} |
https://www.speedsolving.com/threads/yj-sulong-3x3x3.45155/ | # [Review]YJ SuLong 3x3x3
## What do you think of this puzzle?
• ### Poor
• Total voters
34
#### BrainOfSweden
This thread is for reviews of the YJ SuLong. You can vote in the poll above, but please only vote if you own this particular puzzle.
When posting your review, please follow a template similar to the one below. Of course, video reviews are also welcome.
Where the puzzle was purchased:
When the puzzle was purchased:
Thoughts on the puzzle:
What are your thoughts of this puzzle? Please vote one of the options above - but please only vote if you own and have used this puzzle extensively!
Last edited by a moderator:
#### rj
##### Member
It's an ok puzzle. I like it for practice. The Corner-cutting is good enough. These cubes are hit-or-miss, though. My brother got one too, and his sucks.
Sorry for the bump
#### Logical101
##### Member
Where the puzzle was purchased: Zcube.cn
When the puzzle was purchased:About a month ago
Thoughts on the puzzle: the sulong is my main, it has a very nice felling, and my accurateish Turing can take the hit on corrnercuting, the turnig speed is quite slow, yet it is worth a few dollars, even just to have
#### Lchu613
##### Member
YJ Sulong 3x3 [Review]
Vid here:
Last edited by a moderator:
#### DAoliHVAR
##### Member
cool review,its my main also.
you didn't mention corner twists though
have they not happened to you?
for me it seems i get em once every 30-40 solves or smth and sometimes during scrambling
#### ThomasJE
Last edited by a moderator:
#### piyushp761
##### Member
Last edited by a moderator:
#### bitflusher
##### Member
Bought one from e-bay and received it 3 days ago.
It is my first speedcube so no comparison but I like it a lot, except for the colors. Most colors are fluorescent and are not pleasant to look at. anyway, stickers can be replaced.
#### davidmg90000
##### Member
Bought one from e-bay and received it 3 days ago.
It is my first speedcube so no comparison but I like it a lot, except for the colors. Most colors are fluorescent and are not pleasant to look at. anyway, stickers can be replaced.
In my opinion I really like the stickers they are diferent from the standard ones and allows for easy recog.
#### davidmg90000
##### Member
Where the puzzle was purchased: lightake.com
When the puzzle was purchased: the puzzle was purchased on december 21 of 2013 arrived on January 10 of 2014
Thoughts on the puzzle: is a great puzzle, out of the box it feels fast and a little dry.
before lubrication it corner cuts about a square after lubrication it corner cuts about 45 degree and feels a lot smoother. And I love the stickers they are thick Ive had for 3 weeks and have done more than 300 solves with it and the stickers have absolutely no imperfections, love the color scheme difenrent from the standard and are easy to recognize during a solve.
#### bitflusher
##### Member
In my opinion I really like the stickers they are diferent from the standard ones and allows for easy recog.
Personal prefs At least there are options for everyone with stickers, tiles would be harder.
One thing, while doing a really bad move (something like jamming my finger between layers) I experienced my first corner twist!
#### davidmg90000
##### Member
Personal prefs At least there are options for everyone with stickers, tiles would be harder.
One thing, while doing a really bad move (something like jamming my finger between layers) I experienced my first corner twist!
lol you jam your finger where?, that sucks it happened with my old cube all the time and i didnt even noticed until I got to oll.
It havent happened with the sulong yet, and I hope it doesnt
#### idreamincubes
##### Member
Where the puzzle was purchased
TheCubicle.us
When the puzzle was purchased
January 21, 2014
Thoughts on the puzzle
I bought this cube because I was intrigued by it's simple design and by reviewers calling in smooth. At first I wasn't very impressed with it, but I decided to try to make it as good as I could. This is what I did:
• I filed a bit on those sharp edges on the corner bases. Though I don't know if it makes a difference.
• Swapped the core, screws and springs for DaYan ones. The DaYan washers were too big, so I kept the ones that came in the SuLong.
• Lubed the core with 50k and the pieces with 30k. At first I only used 50k, as I usually do, but with all the contact area in this cube, it got sluggish.
• Tensioned it to satisfaction.
The result is a really nice cube. If I wasn't so picky about cubes being quiet, this would be a favorite.
It corner cuts at the other line-to-line, so about 53 degrees, which is impressing.
Reverse corner cutting is about 3/4 of a cubie.
It takes a bit more force to turn than the top cubes, but it's very controllable. It's pretty smooth, but not as smooth as the Aurora.
It would be cool if someone else would try a DaYan core and hardware in this cube. I didn't play around enough before swapping to know what difference it made.
// Per.
#### AvgACuber
##### Member
Where the puzzle was purchased:Online Buy [Site - flipkart]
When the puzzle was purchased:1 week ago
Thoughts on the puzzle:YJ Sulong is my first speed cube as i m begineer and wanted a cheap but good speed cube... Its worth its price although not the best but Above averge speed cube. Has a coner twisting issue and may pop-up in low tensions
#### Scruggsy13
##### Member
Where purchased: TheCubicle.us
When purchased: June 2014
Thoughts: Overall a pretty solid cube. Mine is slightly looser than stock tensions, and the puzzle has not popped, although I have had a few corner twists (less than 10). I've lubricated it with CRC and a couple of drops of Maru, done well over 2000 solves on it, and the stock stickers are barely chipping. The turning is smooth, but turning resistance is higher than on other 3x3's, and although it was my main for seven months, I now use it as a warmup cube. It is a good beginner cube. Corner cuts close to 45 degrees and generally does not lock up. | 2019-12-15 05:24:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3643469214439392, "perplexity": 3628.6086210474805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541301598.62/warc/CC-MAIN-20191215042926-20191215070926-00077.warc.gz"} |
https://conferences.matheo.si/event/7/contributions/354/ | # International Conference on Graph Theory and Combinatorics
May 16 – 18, 2014
Rogla, Slovenia
UTC timezone
## Zonographs and some of their applications
Not scheduled
Rogla, Slovenia
### Speaker
Dr Gabor Gevay (Bolyai Institute, University of Szeged)
### Description
Zonohedra (or more generally, zonotopes) are a particular class of convex polytopes characterized by the property that all their 2-dimensional faces are centrally symmetric. We introduce a generalization of the graph of zonotopes, which we call a zonograph\/. We show through examples how zonographs can be used in the construction of $(n_k)$ configurations of points and circles. Zonographs also provide the possibility of a novel representation of regular maps, as follows. Let $\cal M$ be a suitable regular map of type $\{p, q\}$; furthermore, let the $f$-vector of $\mathcal M$ be $f(\mathcal M)=(v, e, f)$. Then there is a point-circle configuration of type $(v_q, f_p)$ such that the points of number $v$ correspond to the vertices of $\mathcal M$, the circles of number $f$ are circumcircles of the faces of $\mathcal M$. In addition, this configuration is isometric, which means that all of its circles are of the same size. The results presented here were obtained partly in joint work with Toma\v z Pisanski.
### Primary author
Dr Gabor Gevay (Bolyai Institute, University of Szeged)
### Co-author
Prof. Tomaz Pisanski (University of Primorska, University of Ljubljana)
### Presentation materials
There are no materials yet. | 2022-10-06 10:46:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6161099076271057, "perplexity": 957.089074642541}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337803.86/warc/CC-MAIN-20221006092601-20221006122601-00794.warc.gz"} |
https://docs.orchestrate.consensys.net/en/stable/Howto/Chain-Registry-Cache/ | You are reading Codefi Orchestrate development version documentation and some displayed features may not be available in the stable release. You can switch to stable version using the version box at screen bottom.
# Use chain proxy cache
Use the chain proxy cache to reduce the amount of calls to the same RPC endpoints. The chain proxy cache is useful if you have multiple tenants and chains pointing to the same RPC endpoints.
Note
The chain proxy forms part of the Orchestrate Gateway API.
The cache covers calls to the following RPC methods:
• eth_getBlockByNumber
• eth_getTransactionReceipt
To configure the cache, configure the --proxy-cache-ttl command line option, or set the PROXY_CACHE_TTL environment variable.
For example, the following enables the cache, and sets the time-to-live (TTL) to 10 seconds.
--proxy-cache-ttl=10s
PROXY_CACHE_TTL=10s
Note
The TTL value must be a Golang duration string.
To skip the cache when making requests to the chain proxy, append the X-Cache-Control=no-cache header as follows:
curl -X POST -H "Content-Type: application/json" -H "X-Cache-Control=no-cache" --data '{"jsonrpc":"2.0","method":"eth_getBlockByNumber","params": ....}' / | 2021-02-25 10:03:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31230807304382324, "perplexity": 11933.960733807266}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178350942.3/warc/CC-MAIN-20210225095141-20210225125141-00543.warc.gz"} |
https://asmedc.silverchair.com/biomechanical/article/131/9/095501/459932/Discussion-On-the-Thermodynamical-Admissibility-of?searchresult=1 | We feel it is necessary for us to point out some critical mathematical errors and erroneous statements that Huyghe et al. made in the manuscript listed above (hereafter, referred to as the Huyghe et al.’s paper). As shown in detail below, this manuscript is misleading and contains fatal flaws of mathematics and logic. Therefore we request a retraction of their statement that “…all results [10-13,16] obtained with the theory of Lai et al. (5) should be distrusted” as stated in the main conclusion of their paper (last paragraph). These errors are detailed as follows.
• 1
In Huyghe et al. paper, the authors ignored the fact that the general formulation of the triphasic theory by Lai et al. (1) (i.e., Ref. 5 of their paper) is directly derived from the entropy inequality; see Eqs. (45)–(49) on page 251 in Ref. 1. Therefore the constitutive equations are fully compatible with the second law of thermodynamics.
• 2
The authors ignored the fact that the specific formulation for ion (electro)chemical potential is, in general, strain-dependent (see the footnote on page 252 in Ref. 1). These oversights led them to reach erroneous statements in their paper.
• 3
The authors did their analysis in their “thought experiments” based on the mistakes stated above (points 1 and 2), and therefore their results and conclusion are generally invalid.
• 4
The authors also made several fundamental and mathematical errors in their analysis and derivations. For example, their Eq. 14 $((∂ϕw/∂E)ce=1)$ is incorrect; it should be $(∂ϕw/∂E)ce=(1−ϕ0w)/(1+E)2≈1−ϕw$. This fundamental error propagates throughout their derivation, resulting in a total of seven incorrect equations, including Eqs. 14, 17, 18, 20, 23, 25, and 27. For example, Eq. 20 should be
$(∂(ϕwc+)∂E)ce=−ϕwc++2c+c−c++c−$
$(∂(ϕwc−)∂E)ce=−ϕwc−+2c+c−c++c−$
$(∂(ϕwc+)∂E)ce=(∂(ϕwc−)∂E)ce=2c+c−c++c−$
• 5
The authors used a first order approximation (i.e., linearization) for most of their analysis, but in the end they presented their result with a “positive” small value at the second order. The authors further made all of their conclusions based on the fact that this second order small term is not zero or negligible. This inconsistent mathematical procedure is problematic and results in their incorrect conclusions.
• 6
Based on their analysis, the authors then conclude with misleading statements and incorrect conclusions on the $Tc$ term in the triphasic theory (see first paragraph in Results of Huyghe et al. paper.) and on the “concentration dependent Lamé constants” introduced in Ref. 2 (i.e., Ref. [19]) in the last paragraph of their paper.
• 7
Finally, based on the second law of thermodynamics, the total mixture stress must depend on solute concentration if the (electro)chemical potential of solute depends on the solid strain, or vise versa. In the triphasic theory, it is assumed that the stress is dependent on solute concentration (e.g., the $Tc$ term) and the solute chemical potential is dependent on strain; see Eqs. (46)–(49) in Ref. 1. Due to the lack of experimental data for solute (electro)chemical potential in cartilage, it was assumed that the value of the activity coefficient to be constant and the value of $Bi$ to be zero in examples of numerical calculations in literature (Refs. 5 (Ref. 1), (10)–[13] (Refs. 3,4,5,6), [16] (Ref. 7), and [19] (Ref. 2). These approximations do not necessarily mean that $Tc$ does not exist nor that the Lamé coefficients should not depend on concentration. One may argue whether these were good assumptions if we have any experimental data, but these approximations in no way invalidate our triphasic theories themselves where the chemical potentials are functions of strain. In some of our later papers (8,9,10,11) to simplify the analysis, we also assume $Tc$ to be zero in addition to these approximations. Again, these approximations are consistent with our basic assumptions used to develop the triphasic theory, and they do not invalidate the full triphasic theory developed by Lai et al. (1).
While we are pleased that Professor Huyghe and colleagues continue to show interest in our work, as they have over the years, but we must point out that their present study contains numerous errors and their conclusions are misleading. These arguments are the basis that we request Professor Jacque M. Huyghe and co-workers retract their statement “all results [10-13, 16] obtained with the theory of Lai et al. (5) should be distrusted,….”
1.
Lai
,
W. M.
,
Hou
,
J. S.
, and
Mow
,
V. C.
, 1991, “
A Triphasic Theory for the Swelling and Deformation Behaviors of Articular Cartilage
,”
ASME J. Biomech. Eng.
0148-0731,
113
, pp.
245
258
.
2.
Gu
,
W. Y.
,
Lai
,
W. M.
, and
Mow
,
V. C.
, 1998, “
A Mixture Theory for Charged-Hydrated Soft Tissues Containing Multi-Electrolytes: Passive Transport and Swelling Behaviors
,”
ASME J. Biomech. Eng.
0148-0731,
120
, pp.
169
180
.
3.
Lai
,
W. M.
,
Gu
,
W. Y.
,
Setton
,
L. A.
, and
Mow
,
V. C.
, 1991, “
,”
0360-9960,
20
, pp.
481
484
.
4.
Setton
,
L. A.
,
Lai
,
W. M.
, and
Mow
,
V. C.
, 1993, “
Swelling Induced Residual Stress and the Mechanism of Curling in Articular Cartilage
,”
0360-9960,
26
, pp.
59
62
.
5.
Gu
,
W. Y.
,
Lai
,
W. M.
, and
Mow
,
V. C.
, 1993, “
Transport of Fluid and Ions Through a Porous-Permeable Charged-Hydrated Tissue, and Streaming Potential Data on Normal Bovine Articular Cartilage
,”
J. Biomech.
0021-9290,
26
, pp.
709
723
.
6.
Gu
,
W. Y.
,
Lai
,
W. M.
, and
Mow
,
V. C.
, 1997, “
A Triphasic Analysis of Negative Osmotic Flows Through Charged Hydrated Soft Tissues
,”
J. Biomech.
0021-9290,
30
, pp.
71
78
.
7.
Sun
,
D. N.
,
Gu
,
W. Y.
,
Guo
,
X. E.
,
Lai
,
W. M.
, and
Mow
,
V. C.
, 1999, “
A Mixed Finite Element Formulation of Triphasic Mechano-Electrochemical Theory for Charged, Hydrated Biological Soft Tissues
,”
Int. J. Numer. Methods Eng.
0029-5981,
45
, pp.
1375
1402
.
8.
Mow
,
V. C.
,
Ateshian
,
G. A.
,
Lai
,
W. M.
, and
Gu
,
W. Y.
, 1998, “
Effects of Fixed Charge Density on the Stress-Relaxation Behavior of Hydrated Soft Tissues in a Confined Compression Problem
,”
Int. J. Solids Struct.
0020-7683,
35
, pp.
4945
4962
.
9.
Yao
,
H.
, and
Gu
,
W. Y.
, 2004, “
Physical Signals and Solute Transport in Cartilage Under Dynamic Unconfined Compression: Finite Element Analysis
,”
Ann. Biomed. Eng.
0090-6964,
32
, pp.
380
390
.
10.
Yao
,
H.
, and
Gu
,
W. Y.
, 2006, “
Physical Signals and Solute Transport in Human Intervertebral Disc During Compressive Stress Relaxation: 3D Finite Element Analysis
,”
Biorheology
0006-355X,
43
, pp.
323
335
.
11.
Huang
,
C. Y.
, and
Gu
,
W. Y.
, 2007, “
Effects of Tension-Compression Nonlinearity on Solute Transport in Charged Hydrated Fibrous Tissues Under Dynamic Unconfined Compression
,”
ASME J. Biomech. Eng.
0148-0731,
129
, pp.
423
429
. | 2022-01-28 07:46:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6290209293365479, "perplexity": 2941.7483556692355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305423.58/warc/CC-MAIN-20220128074016-20220128104016-00178.warc.gz"} |
https://docs.scikit-nano.org/generated/sknano.generators.UnrolledSWNTGenerator.dR.html | #### Previous topic
sknano.generators.UnrolledSWNTGenerator.d
#### Next topic
sknano.generators.UnrolledSWNTGenerator.dt
# sknano.generators.UnrolledSWNTGenerator.dR¶
UnrolledSWNTGenerator.dR
$$d_R=\gcd{(2n + m, 2m + n)}$$
$$d_R$$ is the Greatest Common Divisor of $$2n + m$$ and $$2m + n$$. | 2020-11-28 19:11:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.47640880942344666, "perplexity": 1733.0617455959805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195745.90/warc/CC-MAIN-20201128184858-20201128214858-00365.warc.gz"} |
https://insidedarkweb.com/unix-linux/sound-on-dell-xps-9570-only-works-on-headphones/ | # Sound on Dell XPS 9570 only works on headphones
I have a Dell XPS 15 9570 and it’s great, except that no sound comes out of the speakers. Weirdly enough it works just fine out of headphones! Just the built-in speakers are the problem.
Everything I can find reports all systems go. pavucontrol looks great:
pactl list sinks doesn’t show anything that stands out to me:
Sink #0
State: RUNNING
Name: alsa_output.pci-0000_00_1f.3.analog-stereo
Description: Built-in Audio Analog Stereo
Driver: module-alsa-card.c
Sample Specification: s16le 2ch 44100Hz
Channel Map: front-left,front-right
Owner Module: 6
Mute: no
Volume: front-left: 65536 / 100% / 0.00 dB, front-right: 65536 / 100% / 0.00 dB
balance 0.00
Base Volume: 65536 / 100% / 0.00 dB
Monitor Source: alsa_output.pci-0000_00_1f.3.analog-stereo.monitor
Latency: 22200 usec, configured 25000 usec
Flags: HARDWARE HW_MUTE_CTRL HW_VOLUME_CTRL DECIBEL_VOLUME LATENCY
Properties:
alsa.resolution_bits = "16"
device.api = "alsa"
device.class = "sound"
alsa.class = "generic"
alsa.subclass = "generic-mix"
alsa.name = "ALC3266 Analog"
alsa.id = "ALC3266 Analog"
alsa.subdevice = "0"
alsa.subdevice_name = "subdevice #0"
alsa.device = "0"
alsa.card = "0"
alsa.card_name = "HDA Intel PCH"
alsa.long_card_name = "HDA Intel PCH at 0xed618000 irq 146"
alsa.driver_name = "snd_hda_intel"
device.bus_path = "pci-0000:00:1f.3"
sysfs.path = "/devices/pci0000:00/0000:00:1f.3/sound/card0"
device.bus = "pci"
device.vendor.id = "8086"
device.vendor.name = "Intel Corporation"
device.product.id = "a348"
device.product.name = "Cannon Lake PCH cAVS"
device.form_factor = "internal"
device.string = "front:0"
device.buffering.buffer_size = "352800"
device.buffering.fragment_size = "176400"
device.access_mode = "mmap+timer"
device.profile.name = "analog-stereo"
device.profile.description = "Analog Stereo"
device.description = "Built-in Audio Analog Stereo"
alsa.mixer_name = "Realtek ALC3266"
alsa.components = "HDA:10ec0298,1028087c,00100103 HDA:8086280b,80860101,00100000"
module-udev-detect.discovered = "1"
device.icon_name = "audio-card-pci"
Ports:
analog-output-speaker: Speakers (priority: 10000)
Active Port: analog-output-speaker
Formats:
pcm
I also looked into a theory that the headphone jack wasn’t registering plugs/unplugs, but as far as I can tell that’s working just fine.
Anyway, I’m at a loss. Any ideas? I’m using Arch linux,and I’m using pulseaudio.
Unix & Linux Asked by Josh Holbrook on November 21, 2021
So I think I have the closest thing this ticket can get to an answer, minus kernel upstream patches.
Sound on linux for sufficiently new machines operates through a kernel module called hd-audio. hd-audio uses various codecs to figure out how to actually play sound, which are listed here: https://www.kernel.org/doc/html/v4.20/sound/hd-audio/models.html The takeaway here is that bugs/issues with hd-audio tend to either be detection issues, actual bugs in the appropriate codec, or some combination thereof.
I did some research to figure out what device the Dell XPS 15 9570 has, and by going to Dell's drivers page I was able to find that it's an ALC3266 - which is listed in the information in pactl list sinks actually.
This leads to the important insight that this card has had slightly different issues in the past: Specifically, the 9550 and the 9570 would have an issue where the sound would stop working completely when headphones were plugged in and the sound card would report "dummy output" on reboot (link on stackexchange).
If you look closely at this issue, there are two things worth noting:
1. One particular issue, probably the "dummy output" issue, was reportedly fixed in November, as can be seen in the stackexchange link as well as the patch linked therein.
2. The kernel ticket is still open.
This leads me to the conclusion that the driver is still buggy, and that all I can really do (besides learning how to be a kernel hacker) is add my two cents to the issue and sit tight.
Answered by Josh Holbrook on November 21, 2021
## Related Questions
### Adding suffix to filename during for loop in bash
1 Asked on August 3, 2020 by mishal-ahmed
### Why are aliases skipped if escaped?
2 Asked on August 2, 2020
### Why can’t I type a g̃ the same way I type ñ?
1 Asked on July 29, 2020 by mmaluff
### Custom logrotate with hostnames
0 Asked on July 29, 2020 by gwynn | 2021-12-03 10:22:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20945248007774353, "perplexity": 9820.297278162863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362619.23/warc/CC-MAIN-20211203091120-20211203121120-00180.warc.gz"} |
https://www.physicsforums.com/threads/minimizing-volume-with-given-equations-and-certain-points-calculus-32a.657077/ | # Homework Help: Minimizing Volume with given equations and certain points. Calculus 32A
1. Dec 5, 2012
### uclagal2012
1. The problem statement, all variables and given/known data
A plane with equation xa+yb+zc=1 (a,b,c>0)
together with the positive coordinate planes forms a tetrahedron of volume V=16abc (as shown in the Figure below)
Find the plane that minimizes V if the plane is constrained to pass through a point P=(8,2,3) .
Here is a picture: https://www.pic.ucla.edu/webwork2_course_files/12F-MATH32A-1/tmp/gif/dhattman-1243-setSix-prob8--image_14_8_31.png [Broken]
2. Relevant equations
I used Lagrange multipliers SEVERAL TIMES.
3. The attempt at a solution
I've gotten
Partial(a)=(1/6)bc G(a)=-8/a^2
Partial(b)=(1/6)ac G(b)=-2/b^2
Partial(c)=(1/6)ab G(c)=-3/c^2
I then found that a=4b, 2c=3b, and 3a=8c but I am SO STUCK after this. Please help!
Last edited by a moderator: May 6, 2017
2. Dec 5, 2012
### haruspex
1/(6abc), right?
I think that's right.
So you can write each of a, b, c in terms of some common parameter, substitute that into the equation for the plane, and express the fact that the plane goes through P.
3. Dec 5, 2012
### Ray Vickson
The problem statement does not match the picture. To get the picture you need the equation of the plane to be
$$\frac{x}{a} + \frac{y}{b} + \frac{z}{c} = 1,$$
not what you wrote. Alternatively, you can use $ax + by + cz = 1$, but the intercepts are 1/a, 1/b and 1/c, and the volume is 1/(6*a*b*c), as stated by haruspex. So first, you need to decide which representation you want to use.
Last edited by a moderator: May 6, 2017
4. Dec 6, 2012
### uclagal2012
You are correct in that it is supposed to be
$$\frac{x}{a} + \frac{y}{b} + \frac{z}{c} = 1,$$
Sorry, I am not used to this forum. Additionally, the V=1/(6*a*b*c)
So now are you able to help me more?
5. Dec 6, 2012
### uclagal2012
You are correct in that it is supposed to be
$$\frac{x}{a} + \frac{y}{b} + \frac{z}{c} = 1,$$
Sorry, I am not used to this forum. Additionally, the V=(1/6)(a*b*c)
So now are you able to help me more?
6. Dec 6, 2012
### Ray Vickson
You need to show your work, not just write down some unexplained equations containing undefined symbols (the G). Be explicit. What are the variables? What is the objective (the thing you are trying to maximize or minimize)? What are the constraints? (Here, I mean: write down all these objects explicitly as functions of your chosen variables.) We need to see all these things first, in order to know whether or not you are on the right track.
Assuming you have done the above, now write the Lagrangian, and then write the optimality conditions. Finally, there is the issue of solving those conditions, but again, let's see the conditions first in order to know whether you are making errors.
7. Dec 6, 2012
### haruspex
Judging from the OP, you correctly obtained the relationships between a, b, and c, except that it now appears you had each inverted in the definition. So go back to those equations and switch a to 1/a etc.
Then you're almost there. All you need to is what I wrote before:
Write each of a, b, c in terms of some common parameter (t, say), substitute that into the equation for the plane, and express the fact that the plane goes through P.
What parts of that do I need to explain more? | 2018-06-24 02:19:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6200020909309387, "perplexity": 831.229251672899}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267865995.86/warc/CC-MAIN-20180624005242-20180624025242-00293.warc.gz"} |
https://docs.gammapy.org/0.10/cube/index.html | # cube - Map cube analysis¶
## Introduction¶
The gammapy.cube sub-package contains functions and classes to make maps (counts, exposure, background), as well as to compute an effective PSF and energy dispersion for a given set of observations.
It also contains classes that represent cube models (sky maps with an energy axis), and classes to evaluate and fit those models to data.
## Getting Started¶
TODO: what to show here?
## Using gammapy.cube¶
Gammapy tutorial notebooks that show examples using gammapy.cube:
## Reference/API¶
### gammapy.cube Package¶
Sky cubes (3-dimensional: energy, lon, lat).
#### Functions¶
fill_map_counts(counts_map, events) Fill events into a counts map. make_map_background_irf(pointing, ontime, …) Compute background map from background IRFs. make_map_exposure_true_energy(pointing, …) Compute exposure map. make_psf_map(psf, pointing, geom, max_offset) Make a psf map for a single observation
#### Classes¶
MapEvaluator([model, exposure, background, …]) Sky model evaluation on maps. MapFit(model, counts, exposure[, …]) Perform sky model likelihood fit on maps. MapMaker(geom, offset_max[, geom_true, …]) Make maps from IACT observations. MapMakerObs(observation, geom[, geom_true, …]) Make maps for a single IACT observation. PSFKernel(psf_kernel_map) PSF kernel for Map. PSFMap(psf_map) Class containing the Map of PSFs and allowing to interact with it.
### gammapy.cube.models Module¶
#### Classes¶
SkyModelBase Sky model base class SkyModels(skymodels) Collection of SkyModel SkyModel(spatial_model, spectral_model[, name]) Sky model component. CompoundSkyModel(model1, model2, operator) Represents the algebraic combination of two SkyModel SkyDiffuseCube(map[, norm, meta, interp_kwargs]) Cube sky map template model (3D). BackgroundModel(background[, norm, tilt, …]) Background model | 2019-04-22 16:59:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3800964951515198, "perplexity": 8656.673121331654}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578558125.45/warc/CC-MAIN-20190422155337-20190422181337-00427.warc.gz"} |
https://iq.opengenus.org/sumbasic-algorithm-for-text-summarization/ | # SumBasic algorithm for text summarization
#### Machine Learning (ML) Natural Language Processing (NLP)
Get FREE domain for 1st year and build your brand new site
Reading time: 30 minutes | Coding time: 10 minutes
SumBasic is an algorithm to generate multi-document text summaries. Basic idea is to utilize frequently occuring words in a document than the less frequent words so as to generate a summary that is more likely in human abstracts.
It generates n length summaries, where n is user specified number of sentences.
SumBasic has the following advantages :
1. Used to easily understand the purpose of a document.
2. Provides greater convinience and flexibility to reader.
3. Generates shorter and concise form from multiple documents.
The above figure shows the working of SumBasic on a document.
## Alogorithm
SumBasic follows the following algorithm :
1. It calculates the probability distribution over words w appearing in the input P(wi) for every i
p(wi)=n/N
Where,
n = number of times the word appeared in the input.
N = total number of content word tokens in input.
1. For sentence Sj in input, assign a weight equal to the average probability of the words in the sentence.
Sj=p(wi)/|wi|
1. Pick the best scoring sentence with highest probability word.
2. For each word wi in the sentence chosen at step 3, update their probability.
*
pnew(wi)=pold(wi)pold(wi)
1. If a desired summary length is not generated then repeat the step 2.
The step 2 and 3 exhibits the desired properties of summarizer whereas step 3 ensures that the highest probability word is included in the summary every time a sentence is picked.
Step 4 has various uses; updating probabilities on the basis of preceding sentences so that low probability words can also participate and deals with the redundacies.
In simple words, SumBasic first computes the probability of each content-word (i.e., verbs, nouns, adjectives and numbers) by simply counting its frequency in the document set. Each sentence is scored as the average of the probabilities of the words in it. The summary is then generated through a simple greedy search algorithm: it iteratively selects the sentence with the highest-scoring content-word, breaking ties by using the average score of the sentences. This continues until the maximum summary length has been reached. In order not to select the same or similar sentence multiple times, SumBasic updates probabilities of the words in the selected sentence by squaring them, modeling the likelihood of a word occurring twice in a summary.
### Complexity analysis
Worst case : O(2 * n + n * (n^3 + n * log(n) + n^2)
given by : O(step 1 complexity + n*(step 2, 3, and 4 complexity))
# Implementation
SumBasic can be implemented using sumy or nltk library in python
Sumy installation command :
pip install sumy
nltk installation command:
pip install nltk
sumy is used to extract summary from html pages or plain texts.
The data is processed through various steps to undergo the procedure of summarization :
1. Tokenization - A sentence is split into words as tokens which are then processed to find the distinct words.
2. Stemming - Stemming is used filter out the parent word from any word . For ex : having will be converted into have by stemming out "ing".
3. Lemmatization - Lemmatization is the process of converting a word to its base form. The difference between stemming and lemmatization is, lemmatization considers the context and converts the word to its meaningful base form, whereas stemming just removes the last few characters, often leading to incorrect meanings and spelling errors.
There are three ways to implement the algorithm, namely:
1. orig: The original version, including the non-redundancy update of the word scores.
2. simplified: A simplified version of the system that holds the word scores constant and does not incorporate the non-redundancy update. It produces better results than orig version in terms of simplification.
3. leading: Takes the leading sentences of one of the articles, up until the word length limit is reached. It is the most concise technique.
NOTE- The code implemented below does not use sumy.
## Sample Code
import nltk, sys, glob
sys.setdefaultencoding('utf8')
lemmatize = True
rm_stopwords = True
num_sentences = 10
stopwords = nltk.corpus.stopwords.words('english')
lemmatizer = nltk.stem.WordNetLemmatizer()
#Breaking a sentence into tokens
def clean_sentence(tokens):
tokens = [t.lower() for t in tokens]
if lemmatize: tokens = [lemmatizer.lemmatize(t) for t in tokens]
if rm_stopwords: tokens = [t for t in tokens if t not in stopwords]
def get_probabilities(cluster, lemmatize, rm_stopwords):
# Store word probabilities for this cluster
word_ps = {}
# Keep track of the number of tokens to calculate probabilities later
token_count = 0.0
# Gather counts for all words in all documents
for path in cluster:
with open(path) as f:
token_count += len(tokens)
for token in tokens:
if token not in word_ps:
word_ps[token] = 1.0
else:
word_ps[token] += 1.0
# Divide word counts by the number of tokens across all files
for word_p in word_ps:
word_ps[word_p] = word_ps[word_p]/float(token_count)
return word_ps
def get_sentences(cluster):
sentences = []
for path in cluster:
with open(path) as f:
return sentences
def clean_sentence(tokens):
tokens = [t.lower() for t in tokens]
if lemmatize: tokens = [lemmatizer.lemmatize(t) for t in tokens]
if rm_stopwords: tokens = [t for t in tokens if t not in stopwords]
def score_sentence(sentence, word_ps):
score = 0.0
num_tokens = 0.0
sentence = nltk.word_tokenize(sentence)
tokens = clean_sentence(sentence)
for token in tokens:
if token in word_ps:
score += word_ps[token]
num_tokens += 1.0
return float(score)/float(num_tokens)
def max_sentence(sentences, word_ps, simplified):
max_sentence = None
max_score = None
for sentence in sentences:
score = score_sentence(sentence, word_ps)
if score > max_score or max_score == None:
max_sentence = sentence
max_score = score
if not simplified: update_ps(max_sentence, word_ps)
return max_sentence
# Updating the sentences , step 4 of algo
def update_ps(max_sentence, word_ps):
sentence = nltk.word_tokenize(max_sentence)
sentence = clean_sentence(sentence)
for word in sentence:
word_ps[word] = word_ps[word]**2
return True
def orig(cluster):
cluster = glob.glob(cluster)
word_ps = get_probabilities(cluster, lemmatize, rm_stopwords)
sentences = get_sentences(cluster)
summary = []
for i in range(num_sentences):
summary.append(max_sentence(sentences, word_ps, False))
return " ".join(summary)
def simplified(cluster):
cluster = glob.glob(cluster)
word_ps = get_probabilities(cluster, lemmatize, rm_stopwords)
sentences = get_sentences(cluster)
summary = []
for i in range(num_sentences):
summary.append(max_sentence(sentences, word_ps, True))
return " ".join(summary)
cluster = glob.glob(cluster)
sentences = get_sentences(cluster)
summary = []
for i in range(num_sentences):
summary.append(sentences[i])
return " ".join(summary)
def main():
method = sys.argv[1]
cluster = sys.argv[2]
summary = eval(method + "('" + cluster + "')")
print summary
if __name__ == '__main__':
main()
Let following picture represents how the above code will be executed :
## Applications
Data summarization has huge boudaries of its application from extracting relevant informations to dealing with redundancies. Some of the major applications are as follows :
1. Information retrieval by Google , Yahoo, Bing and so on. Whenever a query is encountered , thousands of pages appear . It becomes difficult to extract the relevant and significant information from them . Summarization is done in order to prevent this .
3. Summary of source of text in shorter version is provided to the user that retains all the main and relevant features of the content.
4. Easily understandable
### Other Summarization techniques can be as follows :
1. LexRank
2. TextRank
3. Latent Semantic Analysis(LSA) and so on.
## Questions
1. Are NLP and SumBasic different?
SumBasic in a technique to implement NLP. It is advantageous as it investigate how much the frequency of words in the cluster of input documents influences their selection in the summary.
1. How are tf-idf and SumBasic different?
Tf-idf stands for term "frequency–inverse document frequency" in which it considers both the frequency of the term in a document and inverse document frequency that is pages of document in which the required terms exist , whereas in SumBasic uses maximum frequent element and so on in decreasing order to determine the context of document or text. Tf-idf is used more frequently used in chatbots and where the machine has to understand the meaning and communicate with the user, whereas summarization is used to provide user with concise information.
## References
1. Lucy Vanderwend, Hisami Suzuki et al, Beyond SumBasic: Task-Focused Summarization with Sentence Simplification and Lexical Expansion
2. Chintan Shah & Anjali Jivani, (2016). LITERATURE STUDY ON MULTI-DOCUMENT TEXT SUMMARIZATION TECHNIQUES. In SmartCom., September-2016. Jaipur: Springer.
3. A SURVEY OF TEXT SUMMARIZATION TECHNIQUES :Ani Nenkova,University of Pennsylvania;Kathleen McKeown,Columbia University.
#### Ashutosh Vashisht
B.Tech (IT) Student at Amity School of Engineering and Technology
Vote for Ashutosh Vashisht for Top Writers 2021: | 2021-04-11 02:05:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5350188612937927, "perplexity": 5549.208559013932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038060603.10/warc/CC-MAIN-20210411000036-20210411030036-00061.warc.gz"} |
https://pages.nist.gov/feasst/plugin/confinement/doc/ModelTableCart1DHard.html | # ModelTableCart1DHard¶
class ModelTableCart1DHard : public feasst::ModelOneBody
A tabular potential based on cartesian coordinates. Assumes symmetry along the x plane and that the Domain has no tilt.
Public Functions
void compute_table(Shape *shape, Domain *domain, Random *random, const argtype &args = argtype(), const int site_type = 0)
Generate the table by finding where the point is inside the shape and the nearest distance to the surface is half of the diameter. The initial bounds are [0, L/2] inclusive, assuming a plane (or line) of symmetry at origin perpendicular to y axis.
args:
• diameter: diameter of the sphere (default: 1)
double energy(const Position &wrapped_site, const Site &site, const Configuration &config, const ModelParams &model_params)
Return the energy given the wrapped coordinates, site, config and params.
void serialize(std::ostream &ostr) const
Output a serialized version of the existing model.
class ModelTableCart2DIntegr : public feasst::ModelOneBody
A tabular potential based on cartesian coordinates. Assumes symmetry along the x, y planes and that the Domain has no tilt. Integration of material does not take periodicity into account. E.g., the shapes extend forever and are not periodic in the domain.
Public Functions
void compute_table(Shape *shape, Domain *domain, Random *random, const argtype &integration_args, const int site_type = 0)
Parameters
• integration_args: See Shape for documentation of integration_args.
Generate the table by integration of the shape of the confinement over the entire and domain.
void compute_table_omp(Shape *shape, Domain *domain, Random *random, const argtype &integration_args, const int site_type = 0, const int node = 0, const int num_node = 1)
Same as above, but parallelize the task with OMP.
Parameters
• node: See Thread for documentation of these two arguments.
double energy(const Position &wrapped_site, const Site &site, const Configuration &config, const ModelParams &model_params)
Return the energy given the wrapped coordinates, site, config and params.
void serialize(std::ostream &ostr) const
Output a serialized version of the existing model.
class ModelTableCart3DIntegr : public feasst::ModelOneBody
A tabular potential based on cartesian coordinates. Assumes symmetry along the x, y and z planes and that the Domain has no tilt. Integration of material does not take periodicity into account. E.g., the shapes extend forever and are not periodic in the domain.
Public Functions
const Table3D &table(const int site_type = 0) const
Return the table for a given site type.
void compute_table(Shape *shape, Domain *domain, Random *random, const argtype &integration_args, const int site_type = 0)
Parameters
• integration_args: See Shape for documentation of integration_args.
Generate the table by integration of a shape, which represents a continuous medium, over the entire domain.
void compute_table_omp(Shape *shape, Domain *domain, Random *random, const argtype &integration_args, const int site_type = 0, const int node = 0, const int num_nodes = 1)
Same as above, but parallelize the task with OMP.
Parameters
• node: See Thread for documentation of these two arguments.
void compute_table(System *system, Select *select, const int site_type = 0)
Generate the table by computing the energy of interaction of the select with the rest of the system. The select is assumed to be a single site, so that tables can be generated for each site type.
void compute_table_omp(System *system, Select *select, const int site_type = 0, const int node = 0, const int num_node = 1)
Same as above, but parallelize the task with OMP.
Parameters
• node: See Thread for documentation of these two arguments.
double energy(const Position &wrapped_site, const Site &site, const Configuration &config, const ModelParams &model_params)
Return the energy given the wrapped coordinates, site, config and params.
void serialize(std::ostream &ostr) const
Output a serialized version of the existing model. | 2021-03-05 08:17:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32330772280693054, "perplexity": 12143.700420775971}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178370239.72/warc/CC-MAIN-20210305060756-20210305090756-00210.warc.gz"} |
https://psychopy.org/api/hardware/minolta.html | # Minolta¶
Minolta light-measuring devices See http://www.konicaminolta.com/instruments
class psychopy.hardware.minolta.LS100(port, maxAttempts=1)[source]
A class to define a Minolta LS100 (or LS110?) photometer
You need to connect a LS100 to the serial (RS232) port and when you turn it on press the F key on the device. This will put it into the correct mode to communicate with the serial port.
usage:
from psychopy.hardware import minolta
phot = minolta.LS100(port)
if phot.OK: # then we successfully made a connection
print(phot.getLum())
Parameters
port: string
the serial port that should be checked
maxAttempts: int
If the device doesn’t respond first time how many attempts should be made? If you’re certain that this is the correct port and the device is on and correctly configured then this could be set high. If not then set this low.
Troubleshooting
Various messages are printed to the log regarding the function of this device, but to see them you need to set the printing of the log to the correct level:
from psychopy import logging
logging.console.setLevel(logging.ERROR) # error messages only
logging.console.setLevel(logging.DEBUG) # log all communications
If you’re using a keyspan adapter (at least on macOS) be aware that it needs a driver installed. Otherwise no ports will be found.
Error messages:
ERROR: Couldn't connect to Minolta LS100/110 on ____:
This likely means that the device is not connected to that port (although the port has been found and opened). Check that the device has the [ in the bottom right of the display; if not turn off and on again holding the F key.
ERROR: No reply from LS100:
The port was found, the connection was made and an initial command worked, but then the device stopped communating. If the first measurement taken with the device after connecting does not yield a reasonable intensity the device can sulk (not a technical term!). The “[” on the display will disappear and you can no longer communicate with the device. Turn it off and on again (with F depressed) and use a reasonably bright screen for your first measurement. Subsequent measurements can be dark (or we really would be in trouble!!).
checkOK(msg)[source]
Check that the message from the photometer is OK. If there’s an error show it (printed).
Then return True (OK) or False.
clearMemory()[source]
Clear the memory of the device from previous measurements
getLum()[source]
Makes a measurement and returns the luminance value
measure()[source]
Measure the current luminance and set .lastLum to this value
sendMessage(message, timeout=5.0)[source]
Send a command to the photometer and wait an allotted timeout for a response.
setMaxAttempts(maxAttempts)[source]
Changes the number of attempts to send a message and read the output. Typically this should be low initially, if you aren’t sure that the device is setup correctly but then, after the first successful reading, set it higher.
setMode(mode='04')[source]
Set the mode for measurements. Returns True (success) or False
‘04’ means absolute measurements. ‘08’ = peak ‘09’ = cont
See user manual for other modes | 2020-08-15 11:53:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25026118755340576, "perplexity": 2991.0333842034142}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439740838.3/warc/CC-MAIN-20200815094903-20200815124903-00459.warc.gz"} |
http://math.stackexchange.com/questions/26952/sum-of-logarithms | # sum of logarithms
I have to solve find the value of $$\sum_{k=1}^{n/2} k\log k$$ as a part of question.
How should I proceed on this ?
-
@user8250: It's submission or summation. – anonymous Mar 14 '11 at 16:12
I may be asking a silly question, but what is a "submission"? – JavaMan Mar 14 '11 at 16:13
@DJC srry :) it was a typo .... i meant sigma and i am reading more on how to use tags to write equations here.... – user8250 Mar 14 '11 at 16:14
I doubt you will find a "closed form" formula. Maybe an asymptotic approximation? – Aryabhata Mar 14 '11 at 16:18
$log(1^1) + log(2^2) + log(3^3) + ... + log((n/2)^{n/2}) = log(1*4*27*...)$? – The Chaz 2.0 Mar 14 '11 at 16:22
Got it. The constant in Moron's answer is $C = \log A$, where $A$ is the Glaisher-Kinkelin constant. Thus $C = \frac{1}{12} - \zeta'(-1)$.
The expression $H(n) = \prod_{k=1}^n k^k$ is called the hyperfactorial, and it has the known asymptotic expansion
$$H(n) = A e^{-n^2/4} n^{n(n+1)/2+1/12} \left(1 + \frac{1}{720n^2} - \frac{1433}{7257600n^4} + \cdots \right).$$ Taking logs and using the fact that $\log (1 + x) = O(x)$ yields an asymptotic expression for the OP's sum $$\sum_{k=1}^n k \log k = C - \frac{n^2}{4} + \frac{n(n+1)}{2} \log n + \frac{\log n}{12} + O \left(\frac{1}{n^2}\right),$$ the same as the one Aryabhata obtained with Euler-Maclaurin summation.
Added: Finding an asymptotic formula for the hyperfactorial is Problem 9.28 in Concrete Mathematics (2nd ed.). The answer they give uses Euler-Maclaurin, just as Aryabhata's answer does. They also mention that a derivation of the value of $C$ is in N. G. de Bruijn's Asymptotic Methods in Analysis, $\S$3.7.
-
+1: Very nice ! :-) I am surprised the Plouffe inverter does not have this, though. – Aryabhata Mar 14 '11 at 20:53
@Moron: Thanks! It's satisfying to have finally figured out what that constant is. :) – Mike Spivey Mar 14 '11 at 20:55
@Moron: It looks like Plouffe does have it (or a scaled version of it) after all. The numerical estimate I was using before wasn't precise enough. – Mike Spivey Mar 14 '11 at 20:59
I see. I was about to suggest maybe you should add it there :-) – Aryabhata Mar 14 '11 at 21:25
I suppose one proof of this would be to start from $\zeta'(s) = -\sum \log k/k^s$ for $Re(s) \gt 1$ and extend it for all $s$ (which can be done using Euler McLaurin I believe), similar to the Riemann Zeta function. – Aryabhata Mar 14 '11 at 21:34
Here is an asymptotic expression using EulerMcLaurin Summation.
$$\sum _{k=1}^{n} k \log k = \int_{1}^{n} x \log x\ \text{d}x + (n\log n)/2 + C' + (\log n + 1)/12+ \mathcal{O}(1/n^2)$$
$$= n^2(2 \log n - 1)/4 + (n\log n)/2 + (\log n)/12 + C + \mathcal{O}(1/n^2)$$
for some constant $C$.
-
+1, although I get $C = \frac{1}{4}$, $(\log n)/12$ rather than $(\log n)/18$, and $O(\frac{1}{n^2})$. (And I have verified this numerically.) – Mike Spivey Mar 14 '11 at 17:03
@MIke: You are correct. I have edited the answer. Thanks. That C=1/4 would need a proof, but that is a neat value for that constant. – Aryabhata Mar 14 '11 at 18:24
@Moron: You're right to be suspicious of $C = 1/4$. It's not correct. The value of $C$ to six decimal places appears to be $0.248755$ - close to $1/4$ but not quite $1/4$. I'm not sure how to prove that or get an explicit expression, though. – Mike Spivey Mar 14 '11 at 19:23
@Mike: Perhaps the Plouffe inverter has something. We would probably need more than 6 digits though. – Aryabhata Mar 14 '11 at 19:45
But you're right that the same procedure for finding Stirling's constant ought to work here. There's got to be a way to get this! :) – Mike Spivey Mar 14 '11 at 20:14 | 2014-12-20 07:19:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9282198548316956, "perplexity": 738.0061802465805}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802769581.93/warc/CC-MAIN-20141217075249-00045-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://www.mathi.uni-heidelberg.de/events/showevent?eventid=944 | MoDiMiDoFrSaSo
1 2 3 4 5 6 7
8 9 10 11 12 13 14
15 16 17
18
19 20 21
22 23 24 25 26 27 28
29 30 31 1 2 3 4
Informationen für
#### Mathematisches Kolloquium
„Some relative cases of Manin-Mumford for abelian surfaces“
Prof. Umberto Zannier, Scuole Normale Superiore (Italien)
A few years ago Masser posed as a question whether two points $P,Q$ with abscissas resp. $2,3$, lying on the Legendre elliptic curve $y^2=x(x-1)(x-\lambda)$, may become torsion for an infinity of complex values of $\lambda$. This may be viewed as a `relative' case of the celebrated conjecture of Manin-Mumford (proved by Raynaud in 1983); it also appeared as a special case of conjectures raised independently by Pink around 2005. A finiteness answer has been recently proved for Masser's question. In the talk we shall discuss this and several more recent developments; we shall present in some detail the main points of the proof-method, which admits applications also to other related issues.
Donnerstag, den 20. Oktober 2011 um 17 c.t. Uhr, in INF 288, HS2 Donnerstag, den 20. Oktober 2011 at 17 c.t., in INF 288, HS2
Der Vortrag folgt der Einladung von The lecture takes place at invitation by W. Kohnen | 2018-10-18 09:43:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49473199248313904, "perplexity": 1360.7023363042645}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511761.78/warc/CC-MAIN-20181018084742-20181018110242-00545.warc.gz"} |
https://tutorme.com/tutors/726055/interview/ | Enable contrast version
# Tutor profile: Erin M.
Inactive
Erin M.
Statistician at Nonprofit
Tutor Satisfaction Guarantee
## Questions
### Subject:SAS
TutorMe
Question:
The following code performs a one-tailed t-test to determine if the mean MPG for a sample of 10 cars is significantly lower than 30 at an $$\alpha = 0.05$$ Data NewModel; Input MPG; Datalines; 26.6 30.4 32.5 26.3 31.0 25.9 29.7 24.8 30.6 28.1 ; Run; Proc ttest data=NewModel H0=30 plots side=l; var MPG; Run; How would you modify the code to perform the test with a significance level of $$\alpha = 0.01$$?
Inactive
Erin M.
Proc ttest data=NewModel H0=30 plots side=l alpha=0.1; var MPG; Run;
### Subject:R Programming
TutorMe
Question:
We're going to write some code in R to help understand the principle of the Central Limit Theorem. Part 1: Draw 500 random samples of size n= 10 from a normal distribution with $$\mu = 0$$ and $$\sigma = 1$$. Compute the means for each of the 500 samples and plot the distribution of means. Describe the shape of the distribution of means and calculate its mean and standard deviation (AKA standard error). How does this distribution of means compare to the original distribution from which we drew our sample? Part 2: Repeat this process only now draw samples of size 10,000 from the normal distribution with $$\mu = 0$$ and $$\sigma = 1$$. Compare this new distribution of means to that of the one where we drew samples of size 10. Part 3: Repeat the process one more time only now draw samples of size 100 from a uniform distribution with values within [a= 0, b =1] (note: $$\mu = \frac{b-a}{2} = 0.5$$ and $$\sigma = \sqrt{\frac{(b-a)^2}{12}} \approx 0.3$$). Describe the shape of the distribution of means and compute its mean and standard deviation (standard error).
Inactive
Erin M.
#Part 1 x = matrix(rnorm(5000,0,1),100,500) means = rep(0,500) for (j in 1:500){means[j] = mean(x[,j])} hist(means) mean(means) sd(means) The distribution of means is approximately normally distributed with a mean near 0 and a standard deviation approximately equal to $$\frac{\sigma}{\sqrt{n}}$$ = 0.1. #Part 2 y = matrix(rnorm(5000000,0,1),10000,500) means = rep(0,500) for (j in 1:500){means[j] = mean(y[,j])} hist(means) mean(means) sd(means) Again, the distribution of means is approximately normally distributed with a mean near 0 and a standard deviation approximately equal to $$\frac{\sigma}{\sqrt{n}}$$ = 0.01. #Part 3 z = matrix(runif(5000000,0,1),10000,500) means = rep(0,500) for (j in 1:500){means[j] = mean(z[,j])} hist(means) mean(means) sd(means) Just like in our previous two examples, the distribution of means is approximately normally distributed with a mean near the mean of the uniform distribution 0.5, and a standard deviation approximately equal to $$\frac{\sigma}{\sqrt{n}}$$ = 0.003. The shape of our distribution of means will be approximately normal, regardless of the shape of our original distribution (provided a sufficient sample size).
### Subject:Statistics
TutorMe
Question:
In a linear regression framework, what correlation (r) between the predictor variable (X) and the outcome variable (Y) will produce the largest standard error of estimate (i.e. the standard error of the residuals)?
Inactive
Erin M.
The standard error of estimate is based on the minimized squared residuals (or "errors" which are the distance between the actual and predicted Y). The residuals or errors will be larger when there is a weak linear correlation between X and Y. Since the r = 0 indicates the weakest linear correlation, the standard error of estimate will be largest when r = 0.
## Contact tutor
Send a message explaining your
needs and Erin will reply soon.
Contact Erin
Start Lesson
## FAQs
What is a lesson?
A lesson is virtual lesson space on our platform where you and a tutor can communicate. You'll have the option to communicate using video/audio as well as text chat. You can also upload documents, edit papers in real time and use our cutting-edge virtual whiteboard.
How do I begin a lesson?
If the tutor is currently online, you can click the "Start Lesson" button above. If they are offline, you can always send them a message to schedule a lesson.
Who are TutorMe tutors?
Many of our tutors are current college students or recent graduates of top-tier universities like MIT, Harvard and USC. TutorMe has thousands of top-quality tutors available to work with you.
BEST IN CLASS SINCE 2015
TutorMe homepage | 2022-01-19 16:56:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7301201820373535, "perplexity": 1372.4122958043718}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301475.82/warc/CC-MAIN-20220119155216-20220119185216-00660.warc.gz"} |
http://clay6.com/qa/48083/two-charges-q-and-q-are-located-at-points-0-0-a-and-0-0-a-respectively-obta | Browse Questions
# Two charges −q and +q are located at points (0, 0, − a) and (0, 0, a), respectively.Obtain the dependence of potential on the distance r of a point from the origin when $r/a >> 1.$
$V \alpha \large\frac{1}{f}$ | 2017-03-27 02:49:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.42768093943595886, "perplexity": 1706.1087439808364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189377.63/warc/CC-MAIN-20170322212949-00184-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/trig-identities.731387/ | Trig Identities
1. Jan 5, 2014
BOAS
Hello,
i'm fairly sure that what I have done is wrong/not allowed, but I want to check because it seemed like a good idea, but now it's causing problems.
1. The problem statement, all variables and given/known data
Solve the following equations in the range -180°≤ θ ≤ 180°
tanθ + cotθ = 2
2. Relevant equations
3. The attempt at a solution
tanθ + cotθ = 2
$\frac{sin\theta}{cos\theta}$ + $\frac{cos\theta}{sin\theta}$ = 2
$\frac{sin^{2}\theta}{cos\theta sin\theta}$ + $\frac{cos^{2}\theta}{sin\theta cos\theta}$ = 2
$\frac{sin^{2}\theta + cos^{2}\theta}{cos\theta sin\theta}$ = 2
$sin^{2}\theta + cos^{2}\theta$ = $2cos\theta sin\theta$
Since $sin^{2}\theta + cos^{2}\theta$ = 1
then
$2cos\theta sin\theta$ = 1
(the above is where I think i've done something careless)
I now have
$sin^{2}\theta + cos^{2}\theta$ = 1
I don't know what to do with it really... I tried saying $sin^{2}\theta$ = $1 - cos^{2}\theta$ but it doesn't really get me anywhere.
Is what I have done not correct?
EDIT - I know it can be solved by putting the original equation into a quadratic form, and doing so did bring me to the correct answer of -135 and 45 degrees.
Last edited: Jan 5, 2014
2. Jan 5, 2014
SteamKing
Staff Emeritus
Well, manipulating the original equation gives 2cosθsinθ = 1. What you need to do is find θ to make this equation true.
3. Jan 5, 2014
BOAS
Ahhh, I was focused on the other side of the equation!
2cosθsinθ = sin2θ
sin2θ = 1
2θ = 90
θ = 45
from the curve I can see where -135 comes from also.
4. Jan 5, 2014
Staff: Mentor
It's quicker to replace cot(θ) by 1/tan(θ)
tanθ + cotθ = 2
tanθ + 1/tanθ = 2
Multiply by tanθ to get
tan2θ + 1 = 2tanθ
tan2θ - 2tanθ + 1 = 0
This equation can be factored to solve for tanθ.
5. Jan 5, 2014
Ray Vickson
The quantity $t = \tan(\theta)$ must satisfy the equation
$$t + \frac{1}{t} = 2,$$
whose unique (real) solution is $t = 1$. This is easy to see and prove. For any $A,B > 0$ the arithmetic-geometric inequality says
$$\frac{A+B}{2} \geq \sqrt{AB},$$
with equality only when $A = B$. Apply this to $A = t, B = 1/t,$ to conclude that for $t > 0$ we have $t + 1/t \geq 2,$ with equality only when $t = 1$. For $t < 0$ it similarly follows that $t + 1/t \leq -2,$ so negative values of $t$ won't work.
So, the unique root is $\tan(\theta) = 1$.
6. Jan 5, 2014
BOAS
This is how I solved it after struggling with the method outlined in the OP.
Much simpler, but it is cool to see the many ways of solving these problems.
7. Jan 13, 2014
Anonymoose2
Yes, you've said that $\frac{sin^{2}\theta + cos^{2}\theta}{cos\theta sin\theta}$ = 2
and $sin^{2}\theta + cos^{2}\theta$ = $2cos\theta sin\theta$
Now, substitute the $2cos\theta sin\theta$ into the numerator of $\frac{sin^{2}\theta + cos^{2}\theta}{cos\theta sin\theta}$ = 2
This will give you $\frac{2cos\theta sin\theta}{cos\theta sin\theta}$ = 2
which simplifies into 2 = 2
8. Jan 13, 2014
ehild
sin(2θ)= 1 when 2θ =90 °+ k 360 °, k=0, ±1 ±2.... . That means θ=45 ° ± k 180 °. Which angles are in the range [-180°, 180°]?
ehild
9. Jan 14, 2014
Staff: Mentor
So? The idea is to show that tanθ + cotθ = 2, not that 2 = 2, which is obviously true. | 2017-12-12 11:00:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7685050964355469, "perplexity": 930.887630338394}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515313.13/warc/CC-MAIN-20171212095356-20171212115356-00065.warc.gz"} |
http://math.stackexchange.com/questions/253508/ten-books-are-placed-in-a-random-order-find-the-probability-of-three-given-book/253513 | # Ten books are placed in a random order. Find the probability of three given books being side by side
As i understand it, this question should be answered by N(A)/N.
I take N to be 10!, im not sure if this is correct.
How do you calculate N(A)?
-
If you accept an answer, click the check mark at the upper-left of the answer. It is proper behavior and gives feedback to the people providing assistance. – jrand Dec 9 '12 at 3:23
here is another angle to the solution which is correctly given by André Nicolas: take those 3 books as one-- will occupy 1 slot. then you have 8 different items for 8places: 8!. and you can arrange those 3 in 3! ways. multiply.
-
We can use your $10!$. There certainly are $10!$ equally likely ways to arrange the books.
Now we count the number of orderings in which our $3$ books are together. The leftmost of our $3$ books can be in any of $8$ places, $1$ to $8$. (Write the numbers $!$ to $10$ in a row, and look.) Once this place is chosen, implicitly the other locations are chosen. Then the $3$ books can be put in the places in $3!$ orders. And for each such arrangement, the other books can be chosen in $7!$ ways, for a total of $(8)(3!)(7!)$. Now divide by $10!$. We get probability $\dfrac{(8)(3!)(7!)}{10!}$. There is a lot of cancellation.
The following is perhaps easier. The set of positions our $3$ books will occupy can be chosen in $\dbinom{10}{3}$ equally likely ways.
Note that we are not placing the books in these places yet, we are just choosing a set of locations.
Of these ways, $8$ have our books in consecutive order. So the probability is $\dfrac{8}{\binom{10}{3}}$.
-
Thank you so much, the answer helps alot! – John Dec 9 '12 at 2:16 | 2014-08-28 03:11:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7716838717460632, "perplexity": 400.27511696496765}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500830074.72/warc/CC-MAIN-20140820021350-00351-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.thejournal.club/c/paper/404120/ | #### Entropic Optimal Transport in Random Graphs
##### Nicolas Keriven
In graph analysis, a classic task consists in computing similarity measures between (groups of) nodes. In latent space random graphs, nodes are associated to unknown latent variables. One may then seek to compute distances directly in the latent space, using only the graph structure. In this paper, we show that it is possible to consistently estimate entropic-regularized Optimal Transport (OT) distances between groups of nodes in the latent space. We provide a general stability result for entropic OT with respect to perturbations of the cost matrix. We then apply it to several examples of random graphs, such as graphons or $\epsilon$-graphs on manifolds. Along the way, we prove new concentration results for the so-called Universal Singular Value Thresholding estimator, and for the estimation of geodesic distances on a manifold.
arrow_drop_up | 2022-01-25 01:15:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5966412425041199, "perplexity": 413.3001022549916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304749.63/warc/CC-MAIN-20220125005757-20220125035757-00081.warc.gz"} |
https://science.lesueur.nz/11sci/as90930/slides/changing-concentration.html | # Changing Concentration
11SCI - Chemical Tūhura
2020
## Concentration: Collision Theory
• Concentration is easily thought of like raro - the more raro particles you add to your drink, the stronger it is!
• The more particles there are, the higher the chance there is of particles bumping into each other
• The more bumps there are, the greater number of successful collisions overall
• It does not increase the probability of successful collisions
## Dilutions
• To change the concentration we must dilute one of our reactants.
• Question: What do we dilute raro with (we use the same in chemistry)?
• Answer: Water! $$H_{2}O$$
### Dilutions Explained
• In Year 11 we will do dilutions in percentages. For example, 90% $$HCl$$, 80% $$HCl$$ or 30% $$HCl$$
• For $$10ml$$ of 80% $$HCl$$ this means 80% of the $$10ml$$ is $$HCl$$ and 20% is water!
• Calculate the $$ml$$ of acid and water for the above solution ^
$\text{Volume of Acid} = \frac{percentage}{100} \times \text{total volume}$
### Task: Calculate the $$HCl$$ and $$H_{2}O$$ for each solution
$\text{Volume of Acid} = \frac{percentage}{100} \times \text{total volume}$
Concentration Volume $$ml$$ of $$HCl$$ $$ml$$ of $$H_{2}O$$
90% 100ml
80% 75ml
75% 50ml
50% 50ml
30% 50ml
Concentration Volume $$ml$$ of $$HCl$$ $$ml$$ of $$H_{2}O$$
90% 100ml $$90ml$$ $$10ml$$
80% 75ml
75% 50ml
50% 50ml
30% 50ml
Concentration Volume $$ml$$ of $$HCl$$ $$ml$$ of $$H_{2}O$$
90% 100ml $$90ml$$ $$10ml$$
80% 75ml $$60ml$$ $$15ml$$
75% 50ml
50% 50ml
30% 50ml
Concentration Volume $$ml$$ of $$HCl$$ $$ml$$ of $$H_{2}O$$
90% 100ml $$90ml$$ $$10ml$$
80% 75ml $$60ml$$ $$15ml$$
75% 50ml $$37.5ml$$ $$12.5ml$$
50% 50ml
30% 50ml
Concentration Volume $$ml$$ of $$HCl$$ $$ml$$ of $$H_{2}O$$
90% 100ml $$90ml$$ $$10ml$$
80% 75ml $$60ml$$ $$15ml$$
75% 50ml $$37.5ml$$ $$12.5ml$$
50% 50ml $$25ml$$ $$25ml$$
30% 50ml
Concentration Volume $$ml$$ of $$HCl$$ $$ml$$ of $$H_{2}O$$
90% 100ml $$90ml$$ $$10ml$$
80% 75ml $$60ml$$ $$15ml$$
75% 50ml $$37.5ml$$ $$12.5ml$$
50% 50ml $$25ml$$ $$25ml$$
30% 50ml $$15ml$$ $$35ml$$
### Diagrams
Drawing a diagram is a very good way to visually show the amount of water and acid/base in your dilutions. Perhaps this could go in your method! | 2021-06-20 11:52:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7800095677375793, "perplexity": 6402.27568092882}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487662882.61/warc/CC-MAIN-20210620114611-20210620144611-00601.warc.gz"} |
http://www.all-science-fair-projects.com/science_fair_projects_encyclopedia/QR_decomposition | # All Science Fair Projects
## Science Fair Project Encyclopedia for Schools!
Search Browse Forum Coach Links Editor Help Tell-a-Friend Encyclopedia Dictionary
# Science Fair Project Encyclopedia
For information on any area of science that interests you,
enter a keyword (eg. scientific method, molecule, cloud, carbohydrate etc.).
Or else, you can start by choosing any of the categories below.
# QR decomposition
In linear algebra, the QR decomposition of a matrix A is a factorization expressing A as
A = QR
where Q is an orthogonal matrix (QQT = I), and R is an upper triangular matrix.
The QR decomposition is often used to solve the linear least squares problem. The QR decomposition is also the basis for a particular eigenvalue algorithm, the QR algorithm.
There are several methods for actually computing the QR decomposition, such as by means of Givens rotations, Householder transformations, or the Gram-Schmidt decomposition. Each has a number of advantages and disadvantages. (The matrices Q and R are not uniquely determined, so different methods may produce different decompositions.)
Contents
## Computing QR by means of Gram-Schmidt
Recall the Gram-Schmidt method, with the vectors to be considered in the process as columns of the matrix $A=(\mathbf{a}_1| \cdots|\mathbf{a}_n)$. Then
$\mathbf{u}_1 = \mathbf{a}_1, \qquad\mathbf{e}_1 = {\mathbf{u}_1 \over ||\mathbf{u}_1||}$
$\mathbf{u}_2 = \mathbf{a}_2-\mathrm{proj}_{\mathbf{e}_1}\,\mathbf{a}_2, \qquad\mathbf{e}_2 = {\mathbf{u}_2 \over ||\mathbf{u}_2||}$
$\mathbf{u}_3 = \mathbf{a}_3-\mathrm{proj}_{\mathbf{e}_1}\,\mathbf{a}_3-\mathrm{proj}_{\mathbf{e}_2}\,\mathbf{a}_3, \qquad\mathbf{e}_3 = {\mathbf{u}_3 \over ||\mathbf{u}_3||}$
$\vdots$
$\mathbf{u}_k = \mathbf{a}_k-\sum_{j=1}^{k-1}\mathrm{proj}_{\mathbf{e}_j}\,\mathbf{a}_k,\qquad\mathbf{e}_k = {\mathbf{u}_k\over||\mathbf{u}_k||}$
Naturally then, we rearrange the equations so the ais are the subject, to get the following
$\mathbf{a}_1 = \mathbf{e}_1||\mathbf{u}_1||$
$\mathbf{a}_2 = \mathrm{proj}_{\mathbf{e}_1}\,\mathbf{a}_2+\mathbf{e}_2||\mathbf{u}_2||$
$\mathbf{a}_3 = \mathrm{proj}_{\mathbf{e}_1}\,\mathbf{a}_3+\mathrm{proj}_{\mathbf{e}_2}\,\mathbf{a}_3+\mathbf{e}_3||\mathbf{u}_3||$
$\vdots$
$\mathbf{a}_k = \sum_{j=1}^{k-1}\mathrm{proj}_{\mathbf{e}_j}\,\mathbf{a}_k+\mathbf{e}_k||\mathbf{u}_k||$
Each of these projections of the vectors $\mathbf{a}_i$ onto one of these ej are merely the inner product of the two, since the vectors are normed.
Now these equations can be written in matrix form, viz.,
$\left(\mathbf{e}_1\left|\ldots\right|\mathbf{e}_n\right) \begin{pmatrix} ||\mathbf{u}_1|| & \langle\mathbf{e}_1,\mathbf{a}_2\rangle & \langle\mathbf{e}_1,\mathbf{a}_3\rangle & \ldots \\ 0 & ||\mathbf{u}_2|| & \langle\mathbf{e}_2,\mathbf{a}_3\rangle & \ldots \\ 0 & 0 & ||\mathbf{u}_3|| & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$
But the product of each row and column of the matrices above give us a respective column of A that we started with, and together, they give us the matrix A, so we have factorized A into an orthogonal matrix Q (the matrix of eks), via Gram Schmidt, and the obvious upper triangular matrix as a remainder R.
Alternatively, $\begin{matrix} R \end{matrix}$ can be calculated as follows:
Recall that $\begin{matrix}Q\end{matrix} = \left(\mathbf{e}_1\left|\ldots\right|\mathbf{e}_n\right).$ Then, we have
$\begin{matrix} R = Q^{T}A = \end{matrix} \begin{pmatrix} \langle\mathbf{e}_1,\mathbf{a}_1\rangle & \langle\mathbf{e}_1,\mathbf{a}_2\rangle & \langle\mathbf{e}_1,\mathbf{a}_3\rangle & \ldots \\ 0 & \langle\mathbf{e}_2,\mathbf{a}_2\rangle & \langle\mathbf{e}_2,\mathbf{a}_3\rangle & \ldots \\ 0 & 0 & \langle\mathbf{e}_3,\mathbf{a}_3\rangle & \ldots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}.$
Note that $\langle\mathbf{e}_j,\mathbf{a}_j\rangle = ||\mathbf{u}_j||,$ $\langle\mathbf{e}_j,\mathbf{a}_k\rangle = 0 \mathrm{~~for~~} j > k,$ and QQT = I, so QT = Q - 1.
### Example
Consider the decomposition of
$A = \begin{pmatrix} 12 & -51 & 4 \\ 6 & 167 & -68 \\ -4 & 24 & -41 \end{pmatrix} .$
Recall the orthogonal matrix Q such that
$\begin{matrix} Q\,Q^{T} = I. \end{matrix}$
Then, we can calculate Q by means of Gram-Schmidt as follows:
$U = \begin{pmatrix} \mathbf u_1 & \mathbf u_2 & \mathbf u_3 \end{pmatrix} = \begin{pmatrix} 12 & -51 & 4 \\ 6 & 167 & -68 \\ -4 & 24 & -41 \end{pmatrix};$
$Q = \begin{pmatrix} \frac{\mathbf u_1}{||\mathbf u_1||} & \frac{\mathbf u_2}{||\mathbf u_2||} & \frac{\mathbf u_3}{||\mathbf u_3||} \end{pmatrix} = \begin{pmatrix} 6/7 & -69/175 & -58/175 \\ 3/7 & 158/175 & 6/175 \\ -2/7 & 6/35 & -33/35 \end{pmatrix};$
Thus, we have
$\begin{matrix} A = Q\,Q^{T}A = Q R; \end{matrix}$
$\begin{matrix} R = Q^{T}A = \end{matrix} \begin{pmatrix} 14 & 21 & -14 \\ 0 & 175 & -70 \\ 0 & 0 & 35 \end{pmatrix}.$
Considering numerical errors of finite precision operation in MATLAB, we have that
$\begin{matrix} Q = \end{matrix} \begin{pmatrix} 0.857142857142857 & -0.394285714285714 & -0.331428571428571 \\ 0.428571428571429 & 0.902857142857143 & 0.034285714285714 \\ -0.285714285714286 & 0.171428571428571 & -0.942857142857143 \end{pmatrix};$
$\begin{matrix} R = \end{matrix} \begin{pmatrix} 14 & 21 & -14 \\ 1.11022302462516 \times 10^{-016} & 175 & -70 \\ -1.77635683940025 \times 10^{-015} & -5.32907051820075 \times 10^{-014} & 35 \end{pmatrix}.$
## Computing QR by means of Householder reflections
A Householder reflection (or Householder transformation) is a transformation that takes a vector and reflects it about some plane. We can use this property to calculate the QR factorization of a matrix.
Q can be used to reflect a vector in such a way that all coordinates but one disappear. Let x be an arbitrary m-dimensional column vector of length |α| (for numerical reasons α should get the same sign as the first coordinate of x).
Then, where e1 is the vector (1,0,...,0)T, and || || the euclidean norm, set
$\mathbf{u} = \mathbf{x} - \alpha\mathbf{e}_1,$
$\mathbf{v} = {\mathbf{u}\over||\mathbf{u}||},$
$Q = I - 2 \mathbf{v}\mathbf{v}^T.$.
Q is a Householder matrix and
$Qx = (\alpha\ , 0, \cdots, 0)^T$.
This can be used to gradually transform an m-by-n matrix A to upper triangular form. First, we multiply A with the Householder matrix Q1 we obtain when we choose the first matrix column for x. This results in a matrix QA with zeros in the left column (except for the first line).
$Q_1A = \begin{bmatrix} \alpha_1&\star&\dots&\star\\ 0 & & & \\ \vdots & & A' & \\ 0 & & & \end{bmatrix}$
This can be repeated for A' resulting in a Housholder matrix Q'2. Note that Q'2 is smaller than Q1. Since we want it really to operate on Q1A instead of A' we need to expand it to the upper left, filling in a 1, or in general:
$Q_k = \begin{pmatrix} I_{k-1} & 0\\ 0 & Q_k'\end{pmatrix}$
After t iterations of this process, t = min(m - 1,n),
$R = Q_t \cdots Q_2Q_1A$
is a upper triangular matrix. So, with
$Q = Q_1Q_2 \cdots Q_t$
A = QR is a QR decomposition of A.
This method has greater numerical stability than using the Gram-Schmidt method above. .
### Example
Let us calculate the decomposition of
$A = \begin{pmatrix} 12 & -51 & 4 \\ 6 & 167 & -68 \\ -4 & 24 & -41 \end{pmatrix}$
We need to find a reflection that takes the vector a1 = (12,6, - 4)T to $\| \;\mathrm{a}\| \;\mathrm{e}_1 = (14, 0, 0)^T$.
Now, u = ( - 2,6, - 4)T and $v = 14^{-{1 \over 2}}(-1, 3, -2)^T$, and then
$Q_1 = I - {2 \over 14} \begin{pmatrix} -1 \\ 3 \\ -2 \end{pmatrix}\begin{pmatrix} -1 & 3 & -2 \end{pmatrix}$
$= I - {1 \over 7}\begin{pmatrix} 1 & -3 & 2 \\ -3 & 9 & -6 \\ 2 & -6 & 4 \end{pmatrix} =\begin{pmatrix} 6/7 & 3/7 & -2/7 \\ 3/7 &-2/7 & 6/7 \\ -2/7 & 6/7 & 3/7 \\ \end{pmatrix}$
Observe now:
$Q_1A = \begin{pmatrix} 14 & 21 & -14 \\ 0 & -49 & -14 \\ 0 & 168 & -77 \end{pmatrix}$
So we already have almost a triangular matrix. We only need to zero the (3, 2) entry.
Take the (1, 1) minor, and then apply the process again to
$A' = M_{11} = \begin{pmatrix} -49 & -14 \\ 168 & -77 \end{pmatrix}$
By the same method as above, we obtain the matrix of the Householder transformation to be,
$Q_2 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -7/25 & 24/25 \\ 0 & 24/25 & 7/25 \end{pmatrix}$
after performing a direct sum with 1 to make sure the next step in the process works properly.
Now, we find
$Q=Q_1Q_2=\begin{pmatrix} 6/7 & -69/175 & 58/175 \\ 3/7 & 158/175 & -6/175 \\ -2/7 & 6/35 & 33/35 \end{pmatrix}$
$R=Q^\top A=\begin{pmatrix} 14 & 21 & -14 \\ 0 & 175 & -70 \\ 0 & 0 & -35 \end{pmatrix}$
The matrix Q is orthogonal and R is upper triangular, so A = QR is the required QR-decomposition.
03-10-2013 05:06:04 | 2013-05-25 21:59:41 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 44, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8194754719734192, "perplexity": 697.9584431598528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706469149/warc/CC-MAIN-20130516121429-00063-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://cyclostationary.blog/tag/signal-detection/ | A Gallery of Cyclic Correlations
There are some situations in which the spectral correlation function is not the preferred measure of (second-order) cyclostationarity. In these situations, the cyclic autocorrelation (non-conjugate and conjugate versions) may be much simpler to estimate and work with in terms of detector, classifier, and estimator structures. So in this post, I’m going to provide surface plots of the cyclic autocorrelation for each of the signals in the spectral correlation gallery post. The exceptions are those signals I called feature-rich in the spectral correlation gallery post, such as DSSS, LTE, and radar. Recall that such signals possess a large number of cycle frequencies, and plotting their three-dimensional spectral correlation surface is not helpful as it is difficult to interpret with the human eye. So for the cycle-frequency patterns of feature-rich signals, we’ll rely on the stem-style (cyclic-domain profile) plots that I used in the spectral correlation gallery post.
Comments on “Detection of Almost-Cyclostationarity: An Approach Based on a Multiple Hypothesis Test” by S. Horstmann et al
The statistics-oriented wing of electrical engineering is perpetually dazzled by [insert Revered Person]’s Theorem at the expense of, well, actual engineering.
I recently came across the conference paper in the post title (The Literature [R101]). Let’s take a look.
The paper is concerned with “detect[ing] the presence of ACS signals with unknown cycle period.” In other words, blind cyclostationary-signal detection and cycle-frequency estimation. Of particular importance to the authors is the case in which the “period of cyclostationarity” is not equal to an integer number of samples. They seem to think this is a new and difficult problem. By my lights, it isn’t. But maybe I’m missing something. Let me know in the Comments.
CSP Patent: Tunneling
Tunneling == Purposeful severe undersampling of wideband communication signals. If some of the cyclostationarity property remains, we can exploit it at a lower cost.
My colleague Dr. Apurva Mody (of BAE Systems, AiRANACULUS, IEEE 802.22, and the WhiteSpace Alliance) and I have received a patent on a CSP-related invention we call tunneling. The US Patent is 9,755,869 and you can read it here or download it here. We’ve got a journal paper in review and a 2013 MILCOM conference paper (My Papers [38]) that discuss and illustrate the involved ideas. I’m also working on a CSP Blog post on the topic.
Update December 28, 2017: Our Tunneling journal paper has been accepted for publication in the journal IEEE Transactions on Cognitive Communications and Networking. You can download the pre-publication version here.
Automatic Spectral Segmentation
Radio-frequency scene analysis is much more complex than modulation recognition. A good first step is to blindly identify the frequency intervals for which significant non-noise energy exists.
In this post, I discuss a signal-processing algorithm that has almost nothing to do with cyclostationary signal processing (CSP). Almost. The topic is automatic spectral segmentation, which I also call band-of-interest (BOI) detection. When attempting to perform automatic radio-frequency scene analysis (RFSA), we may be confronted with a data block that contains multiple signals in a number of distinct frequency subbands. Moreover, these signals may be turning on and off within the data block. To apply our cyclostationary signal processing tools effectively, we would like to isolate these signals in time and frequency to the greatest extent possible using linear time-invariant filtering (for separating in the frequency dimension) and time-gating (for separating in the time dimension). Then the isolated signal components can be processed serially using CSP.
It is very important to remember that even perfect spectral and temporal segmentation will not solve the cochannel-signal problem. It is perfectly possible that an isolated subband will contain more than one cochannel signal.
The basics of my BOI-detection approach are published in a 2007 conference paper (My Papers [32]). I’ll describe this basic approach, illustrate it with examples relevant to RFSA, and also provide a few extensions of interest, including one that relates to cyclostationary signal processing.
Modulation Recognition Using Cyclic Cumulants, Part I: Problem Description and Variants
Modulation recognition is the process of assigning one or more modulation-class labels to a provided time-series data sequence.
In this post, we start a discussion of what I consider the ultimate application of the theory of cyclostationary signals: Automatic Modulation Recognition. My relevant papers are My Papers [16,17,25,26,28,30,32,33,38,43,44]. See also my machine-learning modulation-recognition critiques by clicking on Machine Learning in the CSP Blog Categories on the right side of any post or page.
Comments on “Blind Cyclostationary Spectrum Sensing in Cognitive Radios” by W. M. Jang
We are all susceptible to using bad mathematics to get us where we want to go. Here is an example.
I recently came across the 2014 paper in the title of this post. I mentioned it briefly in the post on the periodogram. But I’m going to talk about it a bit more here because this is the kind of thing that makes things harder for people trying to learn about cyclostationarity, which eventually leads to the need for something like the CSP Blog as a corrective.
The idea behind the paper is that it would be nice to avoid the need for prior knowledge of cycle frequencies when using cycle detectors or the like. If you could just compute the entire spectral correlation function, then collapse it by integrating (summing) over frequency $f$, then you’d have a one-dimensional function of cycle frequency $\alpha$ and you could then process that function inexpensively to perform detection and classification tasks.
Comments on “Cyclostationary Correntropy: Definition and Application” by Fontes et al
I recently came across a published paper with the title Cyclostationary Correntropy: Definition and Application, by Aluisio Fontes et al. It is published in a journal called Expert Systems with Applications (Elsevier). Actually, it wasn’t the first time I’d seen this work by these authors. I had reviewed a similar paper in 2015 for a different journal.
I was surprised to see the paper published because I had a lot of criticisms of the original paper, and the other reviewers agreed since the paper was rejected. So I did my job, as did the other reviewers, and we tried to keep a flawed paper from entering the literature, where it would stay forever causing problems for readers.
The editor(s) of the journal Expert Systems with Applications did not ask me to review the paper, so I couldn’t give them the benefit of the work I already put into the manuscript, and apparently the editor(s) did not themselves see sufficient flaws in the paper to merit rejection.
It stings, of course, when you submit a paper that you think is good, and it is rejected. But it also stings when a paper you’ve carefully reviewed, and rejected, is published anyway.
Fortunately I have the CSP Blog, so I’m going on another rant. After all, I already did this the conventional rant-free way.
100-MHz Amplitude Modulation? Comments on “Sub-Nyquist Cyclostationary Detection for Cognitive Radio” by Cohen and Eldar
I came across a paper by Cohen and Eldar, researchers at the Technion in Israel. You can get the paper on the Arxiv site here. The title is “Sub-Nyquist Cyclostationary Detection for Cognitive Radio,” and the setting is spectrum sensing for cognitive radio. I have a question about the paper that I’ll ask below.
The Cycle Detectors
CSP shines when the problem involves strong noise or cochannel interference. Here we look at CSP-based signal-presence detection as a function of SNR and SIR.
Let’s take a look at a class of signal-presence detectors that exploit cyclostationarity and in doing so illustrate the good things that can happen with CSP whenever cochannel interference is present, or noise models deviate from simple additive white Gaussian noise (AWGN). I’m referring to the cycle detectors, the first CSP algorithms I ever studied (My Papers [1,4]). | 2021-12-08 18:11:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5409041047096252, "perplexity": 1270.9874420229196}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363520.30/warc/CC-MAIN-20211208175210-20211208205210-00391.warc.gz"} |
https://msp.org/agt/2020/20-1/p11.xhtml | #### Volume 20, issue 1 (2020)
Recent Issues
The Journal About the Journal Editorial Board Editorial Interests Subscriptions Submission Guidelines Submission Page Policies for Authors Ethics Statement ISSN (electronic): 1472-2739 ISSN (print): 1472-2747 Author Index To Appear Other MSP Journals
Towards topological Hochschild homology of Johnson–Wilson spectra
### Christian Ausoni and Birgit Richter
Algebraic & Geometric Topology 20 (2020) 375–393
##### Abstract
We present computations in Hochschild homology that lead to results on the $K\left(i\right)$–local behaviour of $THH\left(E\left(n\right)\right)$ for all $n\ge 2$ and $0\le i\le n$, where $E\left(n\right)$ is the Johnson–Wilson spectrum at an odd prime. This permits a computation of $K{\left(i\right)}_{\ast }THH\left(E\left(n\right)\right)$ under the assumption that $E\left(n\right)$ is an ${E}_{3}$–ring spectrum. We offer a complete description of $THH\left(E\left(2\right)\right)$ as an $E\left(2\right)$–module in the form of a splitting into chromatic localizations of $E\left(2\right)$, under the assumption that $E\left(2\right)$ carries an ${E}_{\infty }$–structure. If $E\left(2\right)$ is admits an ${E}_{3}$–structure, we obtain a similar splitting of the cofiber of the unit map $E\left(2\right)\to THH\left(E\left(2\right)\right)$.
##### Keywords
topological Hochschild homology, Johnson–Wilson spectra, $E_\infty$–structures on ring spectra, chromatic squares
##### Mathematical Subject Classification 2010
Primary: 55N35, 55P43 | 2021-05-15 01:53:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 16, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.529172420501709, "perplexity": 1459.8067860745334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991812.46/warc/CC-MAIN-20210515004936-20210515034936-00491.warc.gz"} |
http://www.tex.ac.uk/tex-archive/graphics/circuit_macros/ | # Index of /tex-archive/graphics/circuit_macros
Parent Directory -
CHANGES22-Apr-2014 18:28 9.1K
Copying22-Apr-2014 18:28 456
Makefile22-Apr-2014 18:28 1.5K
boxdims.sty22-Apr-2014 18:28 1.2K
darrow.m422-Apr-2014 18:28 15K
doc/24-Apr-2014 21:42 -
examples/24-Apr-2014 21:44 -
gpic.m422-Apr-2014 18:28 1.0K
lib3D.m422-Apr-2014 18:28 7.4K
libcct.m422-Apr-2014 18:28 114K
libgen.m422-Apr-2014 18:28 53K
liblog.m422-Apr-2014 18:28 48K
mfpic.m422-Apr-2014 18:28 1.1K
mpost.m422-Apr-2014 18:28 1.0K
pgf.m422-Apr-2014 18:28 1.5K
postscript.m422-Apr-2014 18:28 2.4K
pstricks.m422-Apr-2014 18:28 2.0K
svg.m422-Apr-2014 18:28 2.9K
xfig.m422-Apr-2014 18:28 1.0K
```
* Circuit_macros Version 7.9, copyright (c) 2014 J. D. Aplevich under *
* the LaTeX Project Public License. The files of this distribution may *
* be redistributed or modified provided that this copyright notice is *
* included and provided that modifications are clearly marked to *
* distinguish them from this distribution. There is no warranty *
* whatsoever for these files. *
This is a set of macros for drawing high-quality line diagrams to
include in TeX, LaTeX, web, or similar documents, with support for
SVG and other formats. Fundamental electric circuit elements and
basic logic gate based on IEEE and European standards are included
with several tools and examples of other types of diagrams. Elements
can be scaled or drawn in any orientation and are easy to modify.
The advantages and disadvantages of such a system are similar to those
of TeX itself, which is macro-based and non-wysiwyg, with ordinary
text input.
The macros are to be processed by an m4 macro processor, and evaluate to
drawing commands in the pic "little language," which is easy to read and
learn. The diagram is then automatically translated into TiKZ, PSTricks,
or other formats for processing by LaTeX or other applications. Pic
is well suited to line drawings requiring parametric or conditional
components, fine adjustment, significant geometric calculations,
repetition, or recursion. Arbitrary text for formatting by LaTeX can
be placed at will in the diagram. Free interpreters for m4 and pic
REQUIRED SOFTWARE:
Preferred setup:
GNU m4, dpic (see below), LaTeX, PSTricks, dvips
or
m4, dpic, LaTeX or PDFLaTeX, TikZ-PGF
The dpic interpreter can translate pic input into several forms:
a .tex file for processing by latex with PSTicks or pgf/Tikz (also
pict2e or eepicemu for simple diagrams), or an mfpic, MetaPost, xfig,
SVG, or postscript file.
The GNU m4 macro processor is assumed since its -I option and M4PATH
environment variable simplify file inclusion (see INSTALLATION below).
Early versions of these macros required absolute path names to be
used in include statements, which is still possible.
Alternative:
m4, GNU pic (gpic), TeX or LaTeX, and a driver recognizing tpic specials
(eg dvips)
The GNU pic interpreter produces tpic special commands.
Also possible for some diagrams:
m4 and dpic with output in the following formats:
LaTeX graphics or LaTeX eepic (for simple diagrams), mfpic, xfig,
MetaPost, SVG, Postscript
USAGE
First-time users should read the Quick Start section of CMman.pdf.
The following describes basic usage. See below for integration
with other tools. Suppose a source file, cct.m4 say, has been
created and the top two lines are
.PS
cct_init
...
The file is processed as shown, assuming that you have set the M4PATH
environment variable in the INSTAllATION instructions:
m4 pstricks.m4 cct.m4 | dpic -p > cct.tex
If you have not set the M4PATH environmental variable then the command is
m4 -I <path> pstricks.m4 cct.m4 | dpic -p > cct.tex
where <path> is the absolute path to the directory containing the macros.
If M4PATH is defined and, in addition, the first line of cct.m4 is
include(pstricks.m4), then this command can be simplified to
m4 cct.m4 | dpic -p > cct.tex
To use the gpic processor, the command is
m4 gpic.m4 cct.m4 | gpic -t > cct.tex
with the -I <path> option added if M4PATH has not been defined.
The resulting file cct.tex is normally inserted into a document to
be processed by LaTeX, and the resulting dvi file is converted to
postscript using dvips. Read Section 2 of the manual to see how to
process the diagram source from within the .tex source.
In the case of PGF, pgf.m4 is read instead of pstricks.m4 and the dpic
option is -g, so the command is
m4 pgf.m4 cct.m4 | dpic -g > cct.tex
or, using include(pgf.m4) in cct.m4,
m4 cct.m4 | dpic -g > cct.tex
The document is processed either by LaTeX to produce postscript
or PDFLaTeX to produce pdf directly.
INSTALLATION:
1. Decide where you will be installing the .m4 library files. In
principle, they can go anywhere, \$HOME/Circuit_macros or in your
localtexmf folder such as c:\localtexmf\Circuit_macros, for example.
Copy the Makefile and the .m4 files from the top-level directory
of the distribution to the installation directory, or simply expand
the .tar.gz or .zip distribution file to create the installation
directory.
2. Copy boxdims.sty (see Section 9 of the manual) from the top
distribution directory to where LaTeX will find it, typically
localtexmf/tex/latex/local/ or C:\localtexmf\tex\latex\local,
and refresh the LaTeX filename database.
3. Define the environment variable M4PATH to point to the installation
directory determined in step 1. On Cygwin, for example, add the
export M4PATH='.:/cygdrive/c/localtexmf/Circuit_macros:'
but modify the path to the installation directory as necessary.
On a Unix or Linux machine, the line might be
export M4PATH='/usr/local/share/texmf/tex/latex/circuit_macros/'
depending on where you have installed the files.
4. This is optional. You can change the definition of the default
processor to dpic with (for example) PSTricks or pgf (ie tikz)
by editing the include command near the top of libgen.m4. To do
this automatically, copy Makefile from the top-level directory to
the installation directory and type
"make psdefault" to make dpic with PSTricks the default
"make pgfdefault" to make dpic with Tikz pgf the default
"make gpicdefault" to restore gpic as the default.
WORKFLOW: The basic commands given above suffice for documents of moderate
size and complexity; otherwise, a "make" facility or equivalent should be
used or, for modest documents, diagram processing can be controlled from
within the tex document source as described in the manual. Special-purpose
editors and project tools such as TeXnicCenter or Cirkuit can be employed.
Otherwise, a scripting language can automate the steps as done by
Latexmk, for example.
NOTE: One of the configuration files (gpic.m4, pstricks.m4, pgf.m4,
postscript.m4, mpost.m4, mfpic.m4, svg.m4, or xfig.m4) must be
read by m4 before (or at the beginning of) any of the other files,
depending on the required form of pic output. Otherwise, libgen.m4 can
be read first to use the default configuration file, which is gpic.m4
in the distribution.
TESTING:
To test your installation, go to the examples directory and create
a test circuit in the file test.m4. Copy ex01.m4, for example, or
quick.m4 from the doc directory into test.m4.
On a system with a "make" facility, first check the definitions at
the top of the Makefile, and then type "make tst1" to produce the
file tst.ps. If the source requires processing twice, type "make
tst" instead. To process one of the example .m4 files in the
examples directory, simply type "make name.ps" to process name.m4.
If these tests work to your satisfaction, try typing simply "make" to
produce examples.ps. To test .pdf files, go to the pgf directory,
copy name.m4 there, and type either "make name.ps" or "make name.pdf"
to test the file under pdflatex and TikZ PGF.
No "make" facility? You have to test by hand (but see below for
diagram production software). Copy a test file as above into
test.m4. Assuming you have dpic installed, type the following:
m4 -I <path> pstricks.m4 test.m4 > test.pic
dpic -p test.pic > test.tex
latex tst
dvips tst -o tst.ps
For several years, these macros were developed and tested on a Solaris
operating system. More recently, they have been maintained on a PC
with Cygwin, MiKTeX, and dpic.
SOURCES:
M4 is widely available on Unix systems. PC source and executables are
also available: http://gnuwin32.sourceforge.net/packages/m4.htm
A large set of Unix-like Windows tools, including m4, is available via
http://www.cygwin.com/
DJGPP versions are available as m4-NNb.zip (where NN is the current
release number) on web archives and at
http://www.delorie.com/djgpp/dl/ofc/dlfiles.cgi/current/v2gnu/
There are several sources of hints on m4 usage; some places to look are
http://gnuwin32.sourceforge.net/packages/m4.htm
http://www.gnu.org/software/m4/manual/
http://www.seindal.dk/rene/gnu/
The m4 (computer language) article in Wikipedia gives a concise overview.
An academic discussion of the language can be found in
http://www.cs.stir.ac.uk/~kjt/research/pdf/expl-m4.pdf.
Gpic is part of the GNU groff distribution, for which the latest
source is available from http://ftp.gnu.org/gnu/groff/, but there are
mirror sites that archive these sources, and others that distribute
executables.
DPIC:
Dpic is not included here you say? If you want to try the LaTeX
picture objects, mfpic, PSTricks, TikZ-PGF, MetaPost, xfig, SVG, or
Postscript output provided by dpic, the current free source and
Windows executable can be obtained from
http://ece.uwaterloo.ca/~aplevich/dpic/
MANUALS:
View or print CMman.pdf in the doc directory.
The original pic manual can be obtained at
http://www.cs.bell-labs.com/10thEdMan/pic.pdf. A more extensive
manual is found in the documentation that comes with GNU pic, which
is typically installed as gpic. The latest version can be found in
the groff package at http://ftp.gnu.org/gnu/groff/ . A pdf copy
is included with the dpic distribution and a version can be found
on the web at http://www.kohala.com/start/troff/gpic.raymond.ps
The dpic distribution includes a manual containing a summary of the pic
language and discussion of features unique to dpic.
EXAMPLES AND INTEGRATION WITH OTHER TOOLS:
A set of examples is included in this distribution, showing electric
circuits, block diagrams, flow charts, signal-flow graphs, basic use
of colour and fill, and other applications.
Read the manual CMman.pdf and view or print the file examples.ps in the
examples directory. For the possibly unstable development version, try
http://ece.uwaterloo.ca/~aplevich/Circuit_macros/
The examples directory Makefile automates the generation of .ps, .eps,
.png, and .pdf files for individual diagrams. Subdirectories of the
examples directory are for testing metafont, metapost, pdflatex, pgf,
psfrag, and xfig examples.
Installation and usage of the macros has evolved a little since the
beginning so archived instructions on the net may be slightly more
complicated than currently necessary.
A set of examples and hints intended for his colleagues has been
produced by Alan Robert Clark at http://ytdp.ee.wits.ac.za/cct.html
A website describing usage and tools for Circuit_macros has been created
by Peter-Jan Randewijk at
http://staff.ee.sun.ac.za/pjrandewijk/wiki/index.php/M4_Circuit_Macros
The site includes examples ranging from basic circuits to block diagrams.
Tools for creating pdf and web diagrams are included, along with
Circuit_macro customizations for the Kile LaTeX editor, which are described at
http://staff.ee.sun.ac.za/pjrandewijk/wiki/index.php/M4_Circuit_Macros_-_Kile_Integration
A KDE interface created by Matteo Agostinelli can be found at
http://wwwu.uni-klu.ac.at/magostin/cirkuit.html
Mac users:
A previewer script written by Collin J. Delker is available at
https://bitbucket.org/cdelker/circuit_macros-generation-and-preview
An introduction to installation and use on OS X by Felipe Cavalcanti is at
http://www.lara.unb.br/~fbcavalcanti/docs/tech/circuit_macros/using_circuit_macros_in_mac_osx.pdf
For more examples in the context of a textbook, have a look at
Aplevich, J.D., "The Essentials of Linear State-Space Systems," New
York: John Wiley & Sons Inc., 2000. In Canada, look at Andrews,
G.C., Aplevich, J.D., Fraser, and R.A., MacGregor, C.G.,
"Introduction to Professional Engineering in Canada," (Third edition)
Toronto: Prentice Hall, Pearson Education Canada, Inc., 2008. Some
samples from these books can be found at
http://ece.uwaterloo.ca/~aplevich/
For an example of the use of dpic in a wiki (thanks to Jason Grout),
see http://math.byu.edu/~grout/software/dokuwiki/format-plugin
Another web-based pic application can be found at http://figr.org/
A collection of pic resources and related material is available at
http://www.kohala.com/start/troff/troff.html Some of the example pic
macros found there need minor tuning to work under dpic.
A pic tutorial on the web is found at
http://www.onlamp.com/pub/a/onlamp/2007/06/21/in-praise-of-pic.html
The examples in this distribution include some flowchart elements
in Flow.m4. For a pic-only version that does not require m4, look at
The use of the pic language and pic macros for drawing graphs is
described at http://www.math.uiuc.edu/~west/gpic.html
MetaPost examples: Go to the examples/mpost directory. Check the
Makefile as described in the README file, type "make", and stand well back.
TikZ-PGF: Check the Makefile in the examples/pgf directory as described
in the README file, and type "make" or "make examples.pdf".
PDFLaTeX: Check the Makefile in the examples/pdflatex directory as described
in the README file, and type "make". These examples use Metafont as an
intermediate format and are made somewhat obselete by the above TikZ-PGF
compatibility.
Postscript with embedded psfrag strings:
Type "make" in the examples/psfrag directory to process examples
using dpic -f for creating .eps files with embedded psfrag strings.
Circuits and other diagrams not requiring LaTeX-formatted text can be
processed through m4 and dpic -r to produce encapsulated Postscript
output. This output can also be imported into CorelDraw or Adobe
Illustrator. However, Postscript is not a word processor, so any
LaTeX formatting instructions in the source are not obeyed. These programs
also import svg output produced by dpic -v.
SVG output, Inkscape:
Dpic -v produces svg output. If the result is to be directly
inserted into html, then as for Postscript output, the diagram source
file has to be adapted to remove any LaTeX formatting. A switch in these
macros deletes explicit LaTeX markup from the defined elements and provides
other macros in svg.m4 for xml text formatting.
If SVG is the ultimate goal, then it may be advisable to use the tool
dvisvgm to convert dvi to svg. I haven't tried it yet.
SVG is the native file format for the Inkscape graphics editor.
Therefore, elements defined by these macros can be output by dpic -v
in svg format for later manipulation by Inkscape. Recent Inkscape versions
can export graphics to eps or pdf format and text to tex format, so
that labels can be formatted by LaTeX and overlaid on the graphics
file. This process allows the use of Inkscape to place and embellish
circuit elements.
A basic library of circuit elements created from these macros for
importing into Inkscape is found in examples/svg/svglib.m4.
Metafont:
The file examples/mf/cct.mf is a Metafont source for a few variants of
the basic elements, produced using the mfpic output of dpic. It may
be of interest to persons who cannot otherwise implement the macros.
To see the elements (assuming a typical installation), type "make"
in the mf directory.
Xfig:
The file examples/xfig/xfiglib.fig contains circuit elements in xfig
3.2 format produced by dpic. The file is a prototype because many
more elements could be included. Logic gates often have many labels,
and xfig is not a word processor, so some fine tuning of labels is in
order. Translation between languages always involves a loss of
information and idiom, but Xfig can store diagrams in pic format, so
it is possible to alternate between xfig and dpic.
LIBRARIES:
The file libgen.m4 contains basic macro definitions and is included
automatically by other libraries. The file libcct.m4 defines basic
circuit elements. Binary logic-circuit elements are in liblog.m4.
Macros for drawing 3D projections are in lib3D.m4, and some macros
for drawing double-line arrows are in darrow.m4.
MODIFICATIONS:
Macros such as these inevitably will be modified to suit individual
needs and taste. They continue to evolve in my own library as I use
them and as others send comments. No such collection can hope to
include all possible circuit-related symbols, so you will probably
find yourself writing your own macros or adapting some of these. Be
careful to rename modified macros to avoid confusion. The learning
curve compares well to other packages, but there is no trivially easy
way to produce high-quality graphics.
Feel free to contact me with comments or questions. I have retired
from full-time work but continue the hobby of maintaining these files.
I may now be able to spend more time on individual requests but I may | 2014-07-29 12:44:22 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8129661083221436, "perplexity": 12950.482749072205}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510267330.29/warc/CC-MAIN-20140728011747-00311-ip-10-146-231-18.ec2.internal.warc.gz"} |
https://madanalysis.irmp.ucl.ac.be/wiki/SFS | ## Simplified - Fast Simulation (SFS) of detector response
This page contains a brief introduction about the usage of SFS machinery, for details, please see arXiv:2006.09387. SFS machinery allows the user to simulate detector response within MadAnalysis 5 framework using only FastJet libraries. SFS uses transfer functions on reconstructed objects to simulate detector response. It is, also, fully integrated with Public Analysis Database, and the user can recast experimental analyses using SFS' fast interface, for details, please see below. Although we provide default ATLAS and CMS cards which are validated against corresponding Delphes cards for four different physics process, this introduction will provide all information that is needed to use SFS machinery for any homebrew detector simulation.
• Prerequisites: FastJet
$> ./bin/ma5 -R ma5> install fastjet ma5> set main.fastsim.package = fastjet First-line activates the RECO mode for MadAnalysis 5 framework which is required to use SFS machinery, the second line is to install FasJet interface, and finally, the last line enables the FastJet interface for the current session. User can set the desired reconstruction algorithm as before, please see arXiv:1808.00480 or tutorials for details on the usage of the FastJet module in MadAnalysis 5. After installing FastJet one can use default SFS cards by simply typing $> ./bin/ma5 -R madanalysis/input/<EXP>_default.ma5
where <EXP> can either be ATLAS or CMS.
SFS contains three submodules, namely reco_efficiency, smearer, and tagger. These submodules set a probability distribution to reconstruct a given object, smear carried object's four-momentum using normalized Gaussian functions and sets identification efficiencies, respectively. The goal is to create a CPU efficient, user-friendly, easy-to-use, and generic environment. Thus given transfer functions are translated into C++ functions to act on reconstructed final state objects such as jets, hadronic taus, electrons, muons, and photons.
Jet reconstruction contains two possible options, namely jet smearing and substructure smearing. These options can be set via
ma5> set main.fastsim.jetrecomode = <opt>
where <opt> represents user-defined input which can either be jets or constituents (default jets). jets option allows the reco_efficiency and smearer submodules to act on clustered jet objects. The latter option, constituents, enables the possibility to apply detector response at the hadron level.
## Reconstruction Efficiencies: reco_efficiency
This submodule can act on jets (j, 21), electrons (e, 11), muons (mu, 13), hadronic taus (ta, 15) or photons (a, 22). It simply takes three input;
ma5> define reco_efficiency <obj> <func> [<domain>]
where <obj> represents the desired object to be reconstructed, <func> is the transfer function which can depend on any observable like PT, E, ETA,ABSETA,PHI etc. which are defined in MadAnalysis 5's interface. <func> can also include any functional form like trigonometric or hyperbolic functions. <dom> (optional) represents the domain where the defined function will be active. The module creates a piecewise function to construct a probability distribution for the given object and phase-space. This piece of input will generate a probability distribution for the desired object to decide if it's going to be used in the analysis or not.
• Example:
ma5> define reco_efficiency e 0.0 [pt <= 10.0 or abseta > 2.5]
ma5> define reco_efficiency e 0.7 [pt > 10.0 and abseta <= 1.5]
ma5> define reco_efficiency e 0.55 [pt > 10.0 and abseta > 1.5 and abseta <= 2.5]
Here we exemplify an electron reconstruction efficiency where an electron object will not be reconstructed if it has less than 10 GeV transverse momentum or its pseudorapidity is above 2.5. Additionally, it will be reconstructed with 70% probability if it's within |\eta| < 1.5 and pT >= 10 GeV and with 55% probability if its between 1.5 < |\eta| <= 2.5. Domain inputs have to be separated with and or or keywords to be effective, and enforces all domain conditions to be true and or requires at least one domain to be true. It is also possible to connect multiple domains together such as [(abseta>1 and abseta<1.5) or (abseta>2.5 and abseta<3)].
## Object Smearing: smearer
Here we introduce the smearing function, which uses a user-defined standard deviation to be used in normalized Gaussian distribution. This function can depend on any observable defined in MadAnalysis interface. All information regarding the reconstructed objects to be used, function and domain syntax is the same as before. This submodule can be defined as follows;
define smearer <obj> with <var> <func> [<dom>]
Here <obj> stands for the object (jet, electron, muon, hadronic tau or photon), <var> is the variable to be smeared which can be E, PT, PX, PY, PZ, PHI and ETA. <func> and <dom> are, as defined above, stands for the function and the domain that this function will be active. Note that <obj> and <var> are separated with the keyword with and <dom> is optional.
• Example:
ma5> define smearer j with PT sqrt(0.06^2 + pt^2*1.3e-3^2) [abseta <= 0.5 and pt > 0.1]
ma5> define smearer j with PT sqrt(0.1^2 + pt^2*1.7e-3^2) [abseta > 0.5 and abseta <= 1.5 and pt > 0.1]
ma5> define smearer j with PT sqrt(0.25^2 + pt^2*3.1e-3^2) [abseta > 1.5 and abseta <= 2.5 and pt > 0.1]
Here we exemplified transverse momentum smearing of the jet object. This function simply generates a standard deviation, which depends on PT of the given jet. This STD will be further used in a normalized Gaussian function to simulate the uncertainty on the transverse momentum observation. The transverse momentum then shifted within the Gaussian width.
## Particle Identification: tagger
tagger submodule is used to set particle identification or misidentification efficiencies for desired objects. It can be defined as follows;
ma5> define tagger <true> as <reco> <func> [<dom>]
where <true> stands for the true object that will be reconstructed as <reco> object. If <true> and <reco> objects are same, the module will apply efficiency to reconstruct the given object. If, on the other hand, the <true> and <reco> objects are different, the module will apply misidentification efficiency on the <true> object. This submodule can be used on jets (j, 21), b-jets (b, 5), c-jets (c, 4), electrons (e, 11), muons (mu, 13), hadronic taus (ta, 15) or photons (a, 22) where within certain physical limitations each object can be reconstructed as other objects, please see arXiv:2006.09387 for the available options. It is important to note that <true> and <reco> are separated by the keyword as.
• Example:
ma5> define tagger j as ta 0.01 [ntracks >= 2 and abseta <= 2.7]
ma5> define tagger j as ta 0.02 [ntracks == 1 and abseta <= 2.7]
ma5> define tagger j as ta 0.0 [abseta > 2.7]
ma5> define tagger b as b 0.80*tanh(0.003*pt)*(30/(1+0.086*pt))
Here first three examples show the jet misidentification as a hadronic tau object where it can be misidentified with respect to the number of prongs inside the jet. The last line shows the b-jet tagging efficiency, which depends on its transverse momentum.
The algorithm gives the importance to first b-jets, where if a jet is mistagged as c-jet, it won't be mistagged as b, tau, electron, or photon. Similar order goes for all other objects; we simply set the importance as follows b > c > tau > muon > electron > photon, which means that if a jet is mistagged as any of the objects on this sequence, it won't be mistagged as the following objects. Similarly, user can mistag electrons, muons and photons as well;
• Example:
ma5> define tagger e as mu 0.01*exp(-pt)
This example sets a mistagging efficiency of 1% to tag an electron as a muon which decreases exponentially by the transverse momentum of the electron.
## SFS in Expert Mode
SFS machinery can be used with expert mode as well. Users can set up an SFS card with the desired options, which are exemplified above and given to MadAnalysis as input;
\$> ./bin/ma5 -Re <folder_name> <analysis_name> <SFS_card>
To use the SFS machinery in expert mode, the expert mode has to be initialized alongside with RECO mode. <folder_name> is optional input to create the analysis folder, <analysis_name> is, again, optional to write a null analysis file with the given name and finally the optional <SFS_card> generates the detector environment for the analysis and the detector effects will be applied live during the analysis. An expert mode interpreter can be found in this link.
## LHC Recasting with SFS
SFS machinery can be used for LHC recasting. To install available analyses, please type;
ma5> install PADForSFS
This will install all available analyses to be used in recast. User does not need to install Delphes or ROOT to be able to use this module. In order to initialize PADForSFS simply type;
ma5> set main.recast = on
ma5> import <my_sample>.hepmc.gz as <sample_name>
ma5> set <sample_name>.xsection = 123
ma5> submit
First-line initializes the recasting module; second line imports desired analysis with label <sample_name>, the third line sets the cross-section for the sample, and the last line submits the job. It is also possible to include theoretical uncertainties and high luminosity extrapolations, for more info, see arXiv:1910.11418.
How to write an analysis? Details on how to write an analysis for Public Analysis Database can be found here. User should initialize the code with PDGIDs for hadrons and invisible particles as follows;
bool cms_cat_43_21::Initialize(const MA5::Configuration& cfg,
const std::map<std::string,std::string>& parameters)
{
PHYSICS->mcConfig().Reset();
return true;
}
a complete set or PDG-IDs to be initialized can be found in here. Without such a list, MadAnalysis won't be able to differentiate hadrons and invisible particles to be further used in the analysis.
## Available Analyses
!! please properly cite all the re-implementation codes you are using; here are a PADForSFS BibTeX file for this purpose !!
### ATLAS analyses, 13 TeV
Analysis Short Description Implemented by Code Validation note Version ATLAS-SUSY-2016-07 Multijet + missing transverse momentum (36.1 fb-1) G. Chalons, H. Reyes-Gonzalez Local PDF Pythia files arXiv:2006.09387 Sec. 5.2 v1.8/SFS ATLAS-CONF-2019-040 Jets + missing transverse momentum (139 fb-1) F. Ambrogi Inspire PDF v1.8/SFS
### CMS analyses, 13 TeV
Analysis Short Description Implemented by Code Validation note Version CMS-SUS-16-048 Compressed electroweakinos with soft leptons (35.9 fb-1) B. Fuks Local Sec. 19 in 2002.12220 arXiv:2006.09387 Sec. 5.3 v1.8/SFS | 2020-09-21 00:37:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7047256827354431, "perplexity": 5738.577094372849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198868.29/warc/CC-MAIN-20200920223634-20200921013634-00513.warc.gz"} |
https://stats.libretexts.org/Courses/Diablo_Valley_College/Math_142%3A_Elementary_Statistics_(Kwai-Ching)/Math_142%3A_Text_(Openstax)/02%3A_Descriptive_Statistics/2.04%3A_Measures_of_the_Location_of_the_Data | # 2.4: Measures of the Location of the Data
The common measures of location are quartiles and percentiles. Quartiles are special percentiles. The first quartile, Q1, is the same as the 25th percentile, and the third quartile, Q3, is the same as the 75th percentile. The median, M, is called both the second quartile and the 50th percentile.
To calculate quartiles and percentiles, the data must be ordered from smallest to largest. Quartiles divide ordered data into quarters. Percentiles divide ordered data into hundredths. To score in the 90th percentile of an exam does not mean, necessarily, that you received 90% on a test. It means that 90% of test scores are the same or less than your score and 10% of the test scores are the same or greater than your test score.
Percentiles are useful for comparing values. For this reason, universities and colleges use percentiles extensively. One instance in which colleges and universities use percentiles is when SAT results are used to determine a minimum testing score that will be used as an acceptance factor. For example, suppose Duke accepts SAT scores at or above the 75th percentile. That translates into a score of at least 1220.
Percentiles are mostly used with very large populations. Therefore, if you were to say that 90% of the test scores are less (and not the same or less) than your score, it would be acceptable because removing one particular data value is not significant.
The median is a number that measures the "center" of the data. You can think of the median as the "middle value," but it does not actually have to be one of the observed values. It is a number that separates ordered data into halves. Half the values are the same number or smaller than the median, and half the values are the same number or larger. For example, consider the following data.
1; 11.5; 6; 7.2; 4; 8; 9; 10; 6.8; 8.3; 2; 2; 10; 1
Ordered from smallest to largest:
1; 1; 2; 2; 4; 6; 6.8; 7.2; 8; 8.3; 9; 10; 10; 11.5
Since there are 14 observations, the median is between the seventh value, 6.8, and the eighth value, 7.2. To find the median, add the two values together and divide by two.
$\dfrac{6.8+7.2}{2} = 7$
The median is seven. Half of the values are smaller than seven and half of the values are larger than seven.
Quartiles are numbers that separate the data into quarters. Quartiles may or may not be part of the data. To find the quartiles, first find the median or second quartile. The first quartile, Q1, is the middle value of the lower half of the data, and the third quartile, Q3, is the middle value, or median, of the upper half of the data. To get the idea, consider the same data set:
1; 1; 2; 2; 4; 6; 6.8; 7.2; 8; 8.3; 9; 10; 10; 11.5
The median or second quartile is seven. The lower half of the data are 1, 1, 2, 2, 4, 6, 6.8. The middle value of the lower half is two.
1; 1; 2; 2; 4; 6; 6.8
The number two, which is part of the data, is the first quartile. One-fourth of the entire sets of values are the same as or less than two and three-fourths of the values are more than two.
The upper half of the data is 7.2, 8, 8.3, 9, 10, 10, 11.5. The middle value of the upper half is nine.
The third quartile, Q3, is nine. Three-fourths (75%) of the ordered data set are less than nine. One-fourth (25%) of the ordered data set are greater than nine. The third quartile is part of the data set in this example.
The interquartile range is a number that indicates the spread of the middle half or the middle 50% of the data. It is the difference between the third quartile (Q3) and the first quartile (Q1).
$IQR = Q_3 – Q_1 \tag{2.4.1}$
The IQR can help to determine potential outliers. A value is suspected to be a potential outlier if it is less than (1.5)(IQR) below the first quartile or more than (1.5)(IQR) above the third quartile. Potential outliers always require further investigation.
Definition: Outliers
A potential outlier is a data point that is significantly different from the other data points. These special data points may be errors or some kind of abnormality or they may be a key to understanding the data.
Example 2.4.1
For the following 13 real estate prices, calculate the IQR and determine if any prices are potential outliers. Prices are in dollars.
389,950; 230,500; 158,000; 479,000; 639,000; 114,950; 5,500,000; 387,000; 659,000; 529,000; 575,000; 488,800; 1,095,000
Order the data from smallest to largest.
114,950; 158,000; 230,500; 387,000; 389,950; 479,000; 488,800; 529,000; 575,000; 639,000; 659,000; 1,095,000; 5,500,000
$M = 488,800 \nonumber$
$Q_{1} = \dfrac{230,500 + 387,000}{2} = 308,750\nonumber$
$Q_{3} = \dfrac{639,000 + 659,000}{2} = 649,000\nonumber$
$IQR = 649,000 - 308,750 = 340,250\nonumber$
$(1.5)(IQR) = (1.5)(340,250) = 510,375\nonumber$
$Q_{1} - (1.5)(IQR) = 308,750 - 510,375 = –201,625\nonumber$
$Q_{3} + (1.5)(IQR) = 649,000 + 510,375 = 1,159,375\nonumber$
No house price is less than –201,625. However, 5,500,000 is more than 1,159,375. Therefore, 5,500,000 is a potential outlier.
Exercise $$\PageIndex{1}$$
For the following 11 salaries, calculate the IQR and determine if any salaries are outliers. The salaries are in dollars.
$33,000;$64,500; $28,000;$54,000; $72,000;$68,500; $69,000;$42,000; $54,000;$120,000; $40,500 Answer Order the data from smallest to largest.$28,000; $33,000;$40,500; $42,000;$54,000; $54,000;$64,500; $68,500;$69,000; $72,000;$120,000
Median = $54,000 $Q_{1} = 40,500\nonumber$ $Q_{3} = 69,000\nonumber$ $IQR = 69,000 - 40,500 = 28,500\nonumber$ $(1.5)(IQR) = (1.5)(28,500) = 42,750\nonumber$ $Q_{1} - (1.5)(IQR) = 40,500 - 42,750 = -2,250\nonumber$ $Q_{3} + (1.5)(IQR) = 69,000 + 42,750 = 111,750\nonumber$ No salary is less than –$2,250. However, $120,000 is more than$11,750, so \$120,000 is a potential outlier.
Example 2.4.2
For the two data sets in the test scores example, find the following:
1. The interquartile range. Compare the two interquartile ranges.
2. Any outliers in either set.
The five number summary for the day and night classes is
Minimum Q1 Median Q3 Maximum
Day 32 56 74.5 82.5 99
Night 25.5 78 81 89 98
1. The IQR for the day group is $$Q_{3} - Q_{1} = 82.5 - 56 = 26.5$$
The IQR for the night group is $$Q_{3} - Q_{1} = 89 - 78 = 11$$
The interquartile range (the spread or variability) for the day class is larger than the night class IQR. This suggests more variation will be found in the day class’s class test scores.
2. Day class outliers are found using the IQR times 1.5 rule. So,
• $$Q_{1} - IQR(1.5) = 56 – 26.5(1.5) = 16.25$$
• $$Q_{3} + IQR(1.5) = 82.5 + 26.5(1.5) = 122.25$$
Since the minimum and maximum values for the day class are greater than 16.25 and less than 122.25, there are no outliers.
Night class outliers are calculated as:
• $$Q_{1} - IQR (1.5) = 78 – 11(1.5) = 61.5$$
• $$Q_{3} + IQR(1.5) = 89 + 11(1.5) = 105.5$$
For this class, any test score less than 61.5 is an outlier. Therefore, the scores of 45 and 25.5 are outliers. Since no test score is greater than 105.5, there is no upper end outlier.
Exercise $$\PageIndex{2}$$
Find the interquartile range for the following two data sets and compare them.
Test Scores for Class A
69; 96; 81; 79; 65; 76; 83; 99; 89; 67; 90; 77; 85; 98; 66; 91; 77; 69; 80; 94
Test Scores for Class B
90; 72; 80; 92; 90; 97; 92; 75; 79; 68; 70; 80; 99; 95; 78; 73; 71; 68; 95; 100
Class A
Order the data from smallest to largest.
65; 66; 67; 69; 69; 76; 77; 77; 79; 80; 81; 83; 85; 89; 90; 91; 94; 96; 98; 99
$$Median = \dfrac{80 + 81}{2}$$ = 80.5
$$Q_{1} = \dfrac{69 + 76}{2} = 72.5$$
$$Q_{3} = \dfrac{90 + 91}{2} = 90.5$$
$$IQR = 90.5 - 72.5 = 18$$
Class B
Order the data from smallest to largest.
68; 68; 70; 71; 72; 73; 75; 78; 79; 80; 80; 90; 90; 92; 92; 95; 95; 97; 99; 100
$$Median = \dfrac{80 + 80}{2} = 80$$
$$Q_{1} = \dfrac{72 + 73}{2} = 72.5$$
$$Q_{3} = \dfrac{92 + 95}{2} = 93.5$$
$$IQR = 93.5 - 72.5 = 21$$
The data for Class B has a larger IQR, so the scores between Q3 and Q1 (middle 50%) for the data for Class B are more spread out and not clustered about the median.
Example $$\PageIndex{3}$$
Fifty statistics students were asked how much sleep they get per school night (rounded to the nearest hour). The results were:
AMOUNT OF SLEEP PER SCHOOL NIGHT (HOURS) FREQUENCY RELATIVE FREQUENCY CUMULATIVE RELATIVE FREQUENCY
4 2 0.04 0.04
5 5 0.10 0.14
6 7 0.14 0.28
7 12 0.24 0.52
8 14 0.28 0.80
9 7 0.14 0.94
10 3 0.06 1.00
Find the 28th percentile. Notice the 0.28 in the "cumulative relative frequency" column. Twenty-eight percent of 50 data values is 14 values. There are 14 values less than the 28th percentile. They include the two 4s, the five 5s, and the seven 6s. The 28th percentile is between the last six and the first seven. The 28th percentile is 6.5.
Find the median. Look again at the "cumulative relative frequency" column and find 0.52. The median is the 50th percentile or the second quartile. 50% of 50 is 25. There are 25 values less than the median. They include the two 4s, the five 5s, the seven 6s, and eleven of the 7s. The median or 50th percentile is between the 25th, or seven, and 26th, or seven, values. The median is seven.
Find the third quartile. The third quartile is the same as the 75th percentile. You can "eyeball" this answer. If you look at the "cumulative relative frequency" column, you find 0.52 and 0.80. When you have all the fours, fives, sixes and sevens, you have 52% of the data. When you include all the 8s, you have 80% of the data. The 75th percentile, then, must be an eight. Another way to look at the problem is to find 75% of 50, which is 37.5, and round up to 38. The third quartile, Q3, is the 38th value, which is an eight. You can check this answer by counting the values. (There are 37 values below the third quartile and 12 values above.)
Exercise $$\PageIndex{3}$$
Forty bus drivers were asked how many hours they spend each day running their routes (rounded to the nearest hour). Find the 65th percentile.
Amount of time spent on route (hours) Frequency Relative Frequency Cumulative Relative Frequency
2 12 0.30 0.30
3 14 0.35 0.65
4 10 0.25 0.90
5 4 0.10 1.00
The 65th percentile is between the last three and the first four.
The 65th percentile is 3.5.
Example 2.4.4
Using the table above in Example $$\PageIndex{3}$$
1. Find the 80th percentile.
2. Find the 90th percentile.
3. Find the first quartile. What is another name for the first quartile?
Solution
Using the data from the frequency table, we have:
1. The 80th percentile is between the last eight and the first nine in the table (between the 40th and 41st values). Therefore, we need to take the mean of the 40th an 41st values. The 80th percentile $$= \dfrac{8+9}{2} = 8.5$$
2. The 90th percentile will be the 45th data value (location is $$0.90(50) = 45$$) and the 45th data value is nine.
3. Q1 is also the 25th percentile. The 25th percentile location calculation: $$P_{25} = 0.25(50) = 12.5 \approx 13$$ the 13th data value. Thus, the 25th percentile is six.
Exercise $$\PageIndex{4}$$
Refer to the table above in Exercise $$\PageIndex{3}$$. Find the third quartile. What is another name for the third quartile?
The third quartile is the 75th percentile, which is four. The 65th percentile is between three and four, and the 90th percentile is between four and 5.75. The third quartile is between 65 and 90, so it must be four.
COLLABORATIVE STATISTICS
Your instructor or a member of the class will ask everyone in class how many sweaters they own. Answer the following questions:
1. How many students were surveyed?
2. What kind of sampling did you do?
3. Construct two different histograms. For each, starting value = _____ ending value = ____.
4. Find the median, first quartile, and third quartile.
5. Construct a table of the data to find the following:
1. the 10th percentile
2. the 70th percentile
3. the percent of students who own less than four sweaters
## A Formula for Finding the kth Percentile
If you were to do a little research, you would find several formulas for calculating the kth percentile. Here is one of them.
• $$k =$$ the kth percentile. It may or may not be part of the data.
• $$i =$$ the index (ranking or position of a data value)
• $$n =$$ the total number of data
Order the data from smallest to largest.
Calculate $$i = \dfrac{k}{100}(n + 1)$$
If $$i$$ is an integer, then the $$k^{th}$$ percentile is the data value in the $$i^{th}$$ position in the ordered set of data.
If $$i$$ is not an integer, then round $$i$$ up and round $$i$$ down to the nearest integers. Average the two data values in these two positions in the ordered data set. This is easier to understand in an example.
Example 2.4.5
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77
1. Find the 70th percentile.
2. Find the 83rd percentile.
Solution
• $$k = 70$$
• $$i$$ = the index
• $$n = 29$$
$$i = \dfrac{k}{100}(n + 1) = \dfrac{70}{100}(29 + 1) = 21$$. Twenty-one is an integer, and the data value in the 21st position in the ordered data set is 64. The 70th percentile is 64 years.
• $$k$$ = 83rd percentile
• $$i$$= the index
• $$n = 29$$
$$i = \dfrac{k}{100}(n + 1) = (\dfrac{83}{100})(29 + 1) = 24.9$$, which is NOT an integer. Round it down to 24 and up to 25. The age in the 24th position is 71 and the age in the 25th position is 72. Average 71 and 72. The 83rd percentile is 71.5 years.
Exercise $$\PageIndex{5}$$
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77
Calculate the 20th percentile and the 55th percentile.
$$k = 20$$. Index $$= i = \dfrac{k}{100}(n+1) = \dfrac{20}{100}(29 + 1) = 6$$. The age in the sixth position is 27. The 20th percentile is 27 years.
$$k = 55$$. Index $$= i = \dfrac{k}{100}(n+1) = \dfrac{55}{100}(29 + 1) = 16.5$$. Round down to 16 and up to 17. The age in the 16th position is 52 and the age in the 17th position is 55. The average of 52 and 55 is 53.5. The 55th percentile is 53.5 years.
Note 2.4.2
You can calculate percentiles using calculators and computers. There are a variety of online calculators.
## A Formula for Finding the Percentile of a Value in a Data Set
• Order the data from smallest to largest.
• $$x =$$ the number of data values counting from the bottom of the data list up to but not including the data value for which you want to find the percentile.
• $$y =$$ the number of data values equal to the data value for which you want to find the percentile.
• $$n =$$ the total number of data.
• Calculate $$\dfrac{x + 0.5y}{n}(100)$$. Then round to the nearest integer.
Example 2.4.6
Listed are 29 ages for Academy Award winning best actors in order from smallest to largest.
18; 21; 22; 25; 26; 27; 29; 30; 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77
1. Find the percentile for 58.
2. Find the percentile for 25.
Solution
1. Counting from the bottom of the list, there are 18 data values less than 58. There is one value of 58.
$$x = 18$$ and $$y = 1$$. $$\dfrac{x + 0.5y}{n}(100) = \dfrac{18 + 0.5(1)}{29}(100) = 63.80$$. 58 is the 64th percentile.
2. Counting from the bottom of the list, there are three data values less than 25. There is one value of 25.
$$x = 3$$ and $$y = 1$$. $$\dfrac{x + 0.5y}{n}(100) = \dfrac{3 + 0.5(1)}{29}(100) = 12.07$$. Twenty-five is the 12thpercentile.
Exercise $$\PageIndex{6}$$
Listed are 30 ages for Academy Award winning best actors in order from smallest to largest.
18; 21; 22; 25; 26; 27; 29; 30; 31, 31; 33; 36; 37; 41; 42; 47; 52; 55; 57; 58; 62; 64; 67; 69; 71; 72; 73; 74; 76; 77
Find the percentiles for 47 and 31.
Percentile for 47: Counting from the bottom of the list, there are 15 data values less than 47. There is one value of 47.
$$x = 15$$ and $$y = 1$$. $$\dfrac{x + 0.5y}{n}(100) = \dfrac{15 + 0.5(1)}{30}(100) = 51.67$$. 47 is the 52nd percentile.
Percentile for 31: Counting from the bottom of the list, there are eight data values less than 31. There are two values of 31.
$$x = 8$$ and $$y = 2$$. $$\dfrac{x + 0.5y}{n}(100) = \dfrac{8 + 0.5(2)}{30}(100) = 30$$. 31 is the 30th percentile.
## Interpreting Percentiles, Quartiles, and Median
A percentile indicates the relative standing of a data value when data are sorted into numerical order from smallest to largest. Percentages of data values are less than or equal to the pth percentile. For example, 15% of data values are less than or equal to the 15th percentile.
• Low percentiles always correspond to lower data values.
• High percentiles always correspond to higher data values.
A percentile may or may not correspond to a value judgment about whether it is "good" or "bad." The interpretation of whether a certain percentile is "good" or "bad" depends on the context of the situation to which the data applies. In some situations, a low percentile would be considered "good;" in other contexts a high percentile might be considered "good". In many situations, there is no value judgment that applies.
Understanding how to interpret percentiles properly is important not only when describing data, but also when calculating probabilities in later chapters of this text.
GUIDELINE
When writing the interpretation of a percentile in the context of the given data, the sentence should contain the following information.
• information about the context of the situation being considered
• the data value (value of the variable) that represents the percentile
• the percent of individuals or items with data values below the percentile
• the percent of individuals or items with data values above the percentile.
Example 2.4.7
On a timed math test, the first quartile for time it took to finish the exam was 35 minutes. Interpret the first quartile in the context of this situation.
• Twenty-five percent of students finished the exam in 35 minutes or less.
• Seventy-five percent of students finished the exam in 35 minutes or more.
• A low percentile could be considered good, as finishing more quickly on a timed exam is desirable. (If you take too long, you might not be able to finish.)
Exercise $$\PageIndex{7}$$
For the 100-meter dash, the third quartile for times for finishing the race was 11.5 seconds. Interpret the third quartile in the context of the situation.
Twenty-five percent of runners finished the race in 11.5 seconds or more. Seventy-five percent of runners finished the race in 11.5 seconds or less. A lower percentile is good because finishing a race more quickly is desirable.
Example 2.4.8
On a 20 question math test, the 70th percentile for number of correct answers was 16. Interpret the 70th percentile in the context of this situation.
• Seventy percent of students answered 16 or fewer questions correctly.
• Thirty percent of students answered 16 or more questions correctly.
• A higher percentile could be considered good, as answering more questions correctly is desirable.
Exercise $$\PageIndex{8}$$
On a 60 point written assignment, the 80th percentile for the number of points earned was 49. Interpret the 80th percentile in the context of this situation.
Eighty percent of students earned 49 points or fewer. Twenty percent of students earned 49 or more points. A higher percentile is good because getting more points on an assignment is desirable.
Example 2.4.9
At a community college, it was found that the 30th percentile of credit units that students are enrolled for is seven units. Interpret the 30th percentile in the context of this situation.
• Thirty percent of students are enrolled in seven or fewer credit units.
• Seventy percent of students are enrolled in seven or more credit units.
• In this example, there is no "good" or "bad" value judgment associated with a higher or lower percentile. Students attend community college for varied reasons and needs, and their course load varies according to their needs.
Exercise $$\PageIndex{9}$$
During a season, the 40th percentile for points scored per player in a game is eight. Interpret the 40th percentile in the context of this situation.
Forty percent of players scored eight points or fewer. Sixty percent of players scored eight points or more. A higher percentile is good because getting more points in a basketball game is desirable.
Example 2.4.10
Sharpe Middle School is applying for a grant that will be used to add fitness equipment to the gym. The principal surveyed 15 anonymous students to determine how many minutes a day the students spend exercising. The results from the 15 anonymous students are shown.
0 minutes; 40 minutes; 60 minutes; 30 minutes; 60 minutes
10 minutes; 45 minutes; 30 minutes; 300 minutes; 90 minutes;
30 minutes; 120 minutes; 60 minutes; 0 minutes; 20 minutes
Determine the following five values.
• Min = 0
• Q1 = 20
• Med = 40
• Q3 = 60
• Max = 300
If you were the principal, would you be justified in purchasing new fitness equipment? Since 75% of the students exercise for 60 minutes or less daily, and since the IQR is 40 minutes (60 – 20 = 40), we know that half of the students surveyed exercise between 20 minutes and 60 minutes daily. This seems a reasonable amount of time spent exercising, so the principal would be justified in purchasing the new equipment.
However, the principal needs to be careful. The value 300 appears to be a potential outlier.
$Q_{3} + 1.5(IQR) = 60 + (1.5)(40) = 120$.
The value 300 is greater than 120 so it is a potential outlier. If we delete it and calculate the five values, we get the following values:
• Min = 0
• Q1 = 20
• Q3 = 60
• Max = 120
We still have 75% of the students exercising for 60 minutes or less daily and half of the students exercising between 20 and 60 minutes a day. However, 15 students is a small sample and the principal should survey more students to be sure of his survey results.
## References
1. Cauchon, Dennis, Paul Overberg. “Census data shows minorities now a majority of U.S. births.” USA Today, 2012. Available online at usatoday30.usatoday.com/news/...sus/55029100/1 (accessed April 3, 2013).
2. Data from the United States Department of Commerce: United States Census Bureau. Available online at http://www.census.gov/ (accessed April 3, 2013).
3. “1990 Census.” United States Department of Commerce: United States Census Bureau. Available online at http://www.census.gov/main/www/cen1990.html (accessed April 3, 2013).
4. Data from San Jose Mercury News.
5. Data from Time Magazine; survey by Yankelovich Partners, Inc.
## Review
The values that divide a rank-ordered set of data into 100 equal parts are called percentiles. Percentiles are used to compare and interpret data. For example, an observation at the 50th percentile would be greater than 50 percent of the other obeservations in the set. Quartiles divide data into quarters. The first quartile (Q1) is the 25th percentile,the second quartile (Q2 or median) is 50th percentile, and the third quartile (Q3) is the the 75th percentile. The interquartile range, or IQR, is the range of the middle 50 percent of the data values. The IQR is found by subtracting Q1 from Q3, and can help determine outliers by using the following two expressions.
• $$Q_{3} + IQR(1.5)$$
• $$Q_{1} - IQR(1.5)$$
## Formula Review
$i = \dfrac{k}{100}(n+1) \nonumber$
where $$i$$ = the ranking or position of a data value,
• $$k$$ = the kth percentile,
• $$n$$ = total number of data.
Expression for finding the percentile of a data value: $$\left(\dfrac{x + 0.5y}{n}\right)(100)$$
where $$x =$$ the number of values counting from the bottom of the data list up to but not including the data value for which you want to find the percentile,
$$y =$$ the number of data values equal to the data value for which you want to find the percentile,
$$n =$$ total number of data
## Glossary
Interquartile Range
or IQR, is the range of the middle 50 percent of the data values; the IQR is found by subtracting the first quartile from the third quartile.
Outlier
an observation that does not fit the rest of the data
Percentile
a number that divides ordered data into hundredths; percentiles may or may not be part of the data. The median of the data is the second quartile and the 50th percentile. The first and third quartiles are the 25th and the 75th percentiles, respectively.
Quartiles
the numbers that separate the data into quarters; quartiles may or may not be part of the data. The second quartile is the median of the data.
This page titled 2.4: Measures of the Location of the Data is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 2023-03-28 08:21:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5947113037109375, "perplexity": 749.7154416290894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00247.warc.gz"} |
https://nbviewer.jupyter.org/urls/bitbucket.org/bemppsolutions/bempp-tutorials/raw/master/notebooks/maxwell_paper/efie.ipynb | # Electric field integral equation¶
This tutorial shows how to solve the electric field integral equation (EFIE) for exterior scattering problems, as described in section 5 of Scroggs et al (2017).
## Background¶
In this tutorial, we use consider an incident wave $$\mathbf{E}^\text{inc}(\mathbf{x})=\left(\begin{array}{c}\mathrm{e}^{\mathrm{i}kz}\\0\\0\end{array}\right)$$ scattering off the unit sphere.
We let $\mathbf{E}^\text{s}$ be the scattered field and look to solve \begin{align} \textbf{curl}\,\textbf{curl}\,\mathbf{E}-k^2\mathbf{E}&=0\quad\text{in }\Omega^\text{+},\\ \mathbf{E}\times\nu&=0\quad\text{on }\Gamma,\\ \lim_{|\mathbf{x}|\to\infty}\left(\textbf{curl}\,\mathbf{E}^\text{s}\times\frac{\mathbf{x}}{|\mathbf{x}|}-\mathrm{i}k\mathbf{E}^\text{s}\right)&=0, \end{align}
where $\mathbf{E}=\mathbf{E}^\text{s}+\mathbf{E}^\text{inc}$ is the total electric field.
### Standard EFIE¶
To formulate the (indirect) EFIE, we write the scattered field in the following form.
$$\mathbf{E}^\text{s}=-\mathcal{E}\Lambda,$$
where $\Lambda$ is an unknown tangential vector function. To find $\Lambda$, we use following boundary integral equation. $$\mathsf{E}\Lambda=\gamma_\mathbf{t}^\text{+}\mathbf{E}^\text{inc}.$$
Here, $\gamma_\mathbf{t}^\text{+}$ is the tangential trace of a function, defined for $\mathbf{x}\in\Gamma$ by $$\gamma_\mathbf{t}^\text{+}\mathbf{A}(\mathbf{x}) := \lim_{\Omega^\text{+}\ni\mathbf{x'}\to\mathbf{x}}\mathbf{A}(\mathbf{x}')\times\nu(\mathbf{x}).$$
### Calderón preconditioned EFIE
The boundary integral equation for the EFIE is ill-conditioned, and so will be infeasible to solve for larger problems. Using properties of the multitrace operator, we can show that $$\mathsf{E}^2=-\tfrac14\mathsf{Id}+\mathsf{H}.$$ This is a compact perturbation of the identity, and so this will lead to a well conditioned system.
The boundary integral equation for the Calderón preconditioned EFIE is therefore $$\mathsf{E}^2\Lambda=\mathsf{E}\gamma_\mathbf{t}^\text{+}\mathbf{E}^\text{inc}.$$ As mentioned in the multitrace operator tutorial, the spaces used to discretise this must be chosen carefully to ensure that a stable discretisation is achieved.
## Implementation¶
First, we do the usual imports and set the wavenumber.
In [2]:
import bempp.api
import numpy as np
k = 3
Next, we define the grid. In the paper, we use the sphere, plus the Nasa almond (bempp.api.shapes.almond) and a level 1 Menger sponge (bempp.api.shapes.menger_sponge).
In [3]:
grid = bempp.api.shapes.sphere(h=0.1)
We will first solve the non-preconditioned EFIE. For this, we define the spaces of Raviart–Thomas (RT) and Nédélec (NC) functions.
In [4]:
rt_space = bempp.api.function_space(grid, "RT", 0)
nc_space = bempp.api.function_space(grid, "NC", 0)
Next, we define the incendent field and its tangential trace.
In [5]:
def incident_field(x):
return np.array([np.exp(1j*k*x[2]), 0.*x[2], 0.*x[2]])
def tangential_trace(x, n, domain_index, result):
result[:] = np.cross(incident_field(x), n, axis=0)
grid_fun = bempp.api.GridFunction(rt_space,
fun=tangential_trace,
dual_space=nc_space)
We define the electric field operator, using RT functions for the domain and range spaces and NC functions for the dual space.
In [6]:
electric = bempp.api.operators.boundary.maxwell.electric_field(
rt_space, rt_space, nc_space, k)
Finally, we solve the discretisation of the problem and print the number of iterations.
In [7]:
sol, info, iterations = bempp.api.linalg.gmres(
electric, grid_fun, return_iteration_count=True)
print("Number of iterations:", iterations)
Number of iterations: 522
As expected, the number of iterations taken to solve the non-preconditioned system is high.
## Calderón preconditioned EFIE
To solve the preconditioned EFIE, we begin by importing Bempp and Numpy and defining the wavenumber and incident wave as above.
In [8]:
import bempp.api
import numpy as np
k = 3
grid = bempp.api.shapes.sphere(h=0.1)
def incident_field(x):
return np.array([np.exp(1j*k*x[2]), 0.*x[2], 0.*x[2]])
def tangential_trace(x, n, domain_index, result):
result[:] = np.cross(incident_field(x), n, axis=0)
We define the multitrace operator, extract the spaces we will need from it, and build a grid function representing the incident wave.
In [9]:
multitrace = bempp.api.operators.boundary.maxwell.multitrace_operator(
grid, k)
bc_space = multitrace.range_spaces[1]
snc_space = multitrace.dual_to_range_spaces[1]
grid_fun = bempp.api.GridFunction(bc_space,
fun=tangential_trace,
dual_space=snc_space)
We extract the electric field operators E1 and E2 from the multitrace operator, then form the products $\mathsf{E}^2$ and $\mathsf{E}\gamma_\mathbf{t}^\text{+}\mathbf{E}^\text{inc}$.
In [10]:
E2 = -multitrace[1,0]
E1 = multitrace[0,1]
op = E1 * E2
rhs = E1 * grid_fun
Next, we solve the discrete system and print the number of iterations.
In [11]:
sol, info, iterations = bempp.api.linalg.gmres(
op, rhs, return_iteration_count=True)
print("Number of iterations:", iterations)
Number of iterations: 13
As expected, the preconditioned system requires a much lower number of iterations.
To plot a slice of the solution, we define a grid of points and use the representation formula to evaluate the squared electric field density at these points.
In [12]:
x_p, y_p, z_p = np.mgrid[-5:5:300j, 0:0:1j, -5:5:300j]
points = np.vstack((x_p.ravel(), y_p.ravel(), z_p.ravel()))
efie_pot = bempp.api.operators.potential.maxwell.electric_field(
sol.space, points, k)
plot_me = incident_field(points) - efie_pot * sol
plot_me = np.real(np.sum(plot_me * plot_me.conj(), axis=0))
for i,p in enumerate(points.T):
if np.linalg.norm(p) <= 1:
plot_me[i] = None
Finally, we plot the slice of the solution.
In [13]:
# The next command ensures that plots are shown within the IPython notebook
%matplotlib inline
# Adjust the figure size in IPython
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
from matplotlib import pyplot as plt
plt.imshow(plot_me.reshape((300,300)).T, origin='lower',
extent=[-5,5,-5,5], vmin=0, vmax=4, cmap='coolwarm')
plt.colorbar()
plt.title("Plot at y=0")
plt.xlabel("x")
plt.ylabel("z")
plt.show()
In [ ]: | 2021-09-18 16:46:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9113706350326538, "perplexity": 3618.7012024149194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056548.77/warc/CC-MAIN-20210918154248-20210918184248-00175.warc.gz"} |
https://math.stackexchange.com/questions/2194689/justification-for-the-following-application-of-ml-inequality | Justification for the following application of ML-inequality
Let $C_R$ denote the upper half of the circle $|z| = R\ \ (R\in \mathbb{R}\ \cap\ (2, \infty))$ taken in counter-clockwise direction.
Show that $|\int_{C_{R}} \frac{2z^2-1}{z^4 + 5z^2+4}dz|\leq \frac{\pi R(2R^2 + 1)}{(R^2 - 1)(R^2-4)}$
Attempt:
Apply the ML-inequality directly. We first get the easy part, L = $\pi R$ (half of circumference)
Next we attempt to bound $|f(z) = \frac{2z^2-1}{z^4 + 5z^2+4}|$
$|f(z)| = |\frac{2z^2-1}{(z^2-1)(z^2-4)}| \leq |\frac{2z^2}{(z^2-1)(z^2-4)}| + |\frac{1}{(z^2-1)(z^2-4)}|$ by $\triangle$-inequality
This is almost what we want, but how do I justify substituiting $z$ with $R$ ?
Any help or insight is deeply appreciated.
For $z \in C_R$ you have $\lvert z \rvert = R > 2$. The triangle inequality gives an upper bound for the numerator: $$\lvert 2 z^2 - 1 \rvert \le \lvert 2 z^2 \rvert + \lvert 1 \rvert = 2 R^2 + 1$$ as well as a lower bound for the denominator: $$\lvert z^2 - 1 \rvert \ge \lvert z^2 \rvert - \lvert 1 \rvert = R^2 - 1 > 0\\ \lvert z^2 - 4 \rvert \ge \lvert z^2 \rvert - \lvert 4 \rvert = R^2 - 4 > 0$$ (this is sometimes called "reverse triangle equality", compare e.g. Proving the reverse triangle inequality of the complex numbers).
Putting it all together gives exactly the desired estimate for $\lvert f(z) \rvert$. | 2020-01-29 14:35:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9594975113868713, "perplexity": 121.37473619868733}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251799918.97/warc/CC-MAIN-20200129133601-20200129163601-00070.warc.gz"} |
https://cameramath.com/es/expert-q&a/Algebra/3-When-working-with-quadratics-we-often-use-the-identity-3-When | ### ¿Todavía tienes preguntas de matemáticas?
Pregunte a nuestros tutores expertos
Algebra
Pregunta
3. When working with quadratics, we often use the identity $$a ^ { 2 } - b ^ { 2 } = ( a + b ) ( a - b )$$ to factor quadratic expressions of such $$_ { a }$$ form. Using what you've learned about rationalizing denominators and complex conjugates, find and prove identities to easily factor expressions of these forms:
a. $$a ^ { 2 } + b ^ { 2 }$$
b. $$a ^ { 2 } - b$$
Hint: your factored forms may look very similar to the factored form of $$a ^ { 2 } - b ^ { 2 }$$
(a) $$a^2+ b^2 = (a + b + \sqrt{2ab} )(a + b - \sqrt{2ab})$$ or $$a^2 - b^2 = (a+ ib)(a- ib)$$
(b) $$a^2 - b^2 = (a+ b)(a- b)$$
Solución
View full explanation on CameraMath App. | 2022-05-18 17:19:48 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6966871619224548, "perplexity": 1760.508298622585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522284.20/warc/CC-MAIN-20220518151003-20220518181003-00176.warc.gz"} |
https://fairyonice.github.io/Gentle-Hands-on-Introduction-to-Observational-Study.html | # Introduction¶
Data Scientists need to help making business decisions by providing insights from data. To answer to the questions like "does a new feature improve user engagement?", data scientists may conduct A/B testing to see if there is any "causal" effect of new feature on user's engagement, evaluated with certain metric. Before diving into causal inference in observational study, let's talk about more common approach: A/B testing and its limitation.
## A/B testing¶
In my opinion, A/B testing is a rebranded version of traditional experimental design in IT industry to find statistically significant causal effect. In clinical trials, experimental design is used to assess whether there is a significant improvement in using new drug in comparisons to current status quo. A/B testing also takes random subset of target population group and randomly assign the users into treatment (new feature) and control groups to see if user experience improves by the new features. Here, the random assignment of subjects to treatment and control groups takes the key role to make causal statement. The random assignments assures that there is no confounding factors.
Let's think more details on this "random assignment". Suppose we do not randomly assign subjects to treatment or placebo groups, and let the subjects choose to take treatments with their own will. Suppose also that t-test found that the mean of treatment group has significant increase in its health status than mean of those who do not take the treatment. Can we conclude that the treatment has statistically significant effect on improving the health status? Yes. But is it causal effect?
The answer is no, because of confounding factors. For example, those who choose to take treatments happened to be more health-conscious, or happened to be healthier from the beginning to afford the pain of the treatment. These confounding effects, that are highly correlated to the treatment group subjects, may be the true root cause of the better health status.
To average out these confounding effects, A/B testing needs to randomize subjects. We hope that by randomly allocating individuals to treatment and placebo groups, both group has same level of Health consciousness or the same level of health status at the beginning "on average".
## Observational study¶
We understand that A/B testing is useful to find statistically significant causal effect. But there are many scenario where we cannot do A/B testing. For example, some features cannot be A/B tested because of some engineering constraints. Or some features cannot be A/B tested because company has no control over the feature. E.g. what if LinkedIn wants to test the effect of profile picture on the hiring decision, LinkedIn cannot randomly assign subset of individuals to have no profile pictures! In such scenario, how can we eliminate confounding factors and make causal analysis?
In this blog, we learn:
• Propensity Score Matching (PCM) technique, one of the most common technique in observational study to do causal inference.
• How to analyze data with PCM in R.
# Propensity Score Matching¶
PSM attempts to average out the confounding factors by making the groups receiving treatment and not-treatment comparable with respect to the potential confounding variables. The propensity score measures the probability of a subject to be in treatment group, and it is calculated using the potential confounding variables. If the distribution of the propensity scores are similar between treatment and placebo, we can say that the confounding factors are averaged out.
PCM tries to make the distribution of PCM the same between treatment and non-treatment group by matching each subject in treatment group with another subject in non-treatment group in terms of the propensity score. It may happen that the same subject in treatment to be matched with multiple subjects in placebo (Upsampling), or some subjects may not be used for matching and hence discarded from the analysis (Downsampling).
## PSM procedure¶
Here is the general PSM procedure.
Step 1. Run logistic regression:
• Dependent variable: Z = 1, if a subject is in treatment group; Z = 0, otherwise.
• Independent variable confounding factors. The probability estimate of this logistic regression is propensity score.
Step 2. Match each participant to one or more nonparticipants on propensity score by nearest neighbor matching, exact matching or other matching techniques.
Step 3. Verify that covariates are balanced across treatment and comparison groups in the matched or weighted sample.
# Hands-on PSM analysis¶
Now, we are ready to analyze observational study data. We will closely follow the seminal tutorial R Tutorial 8: Propensity Score Matching.
This tutorial analyzes the effect of going to Catholic school, as opposed to public school, on student achievement. Because students who attend Catholic school on average are different from students who attend public school, we will use propensity score matching to get more credible causal estimates of Catholic schooling.
First, follow the link above to download the data ecls.csv.
In this data,
• indepdent variable: catholic (1 = student went to catholic school; 0 = student went to public school)
• dependent variable: c5r2mtsc_std students’ standardized math score
In [1]:
library(MatchIt)
library(dplyr)
library(ggplot2)
cat("dim:",dim(ecls))
Attaching package: ‘dplyr’
The following objects are masked from ‘package:stats’:
filter, lag
The following objects are masked from ‘package:base’:
intersect, setdiff, setequal, union
dim: 11078 22
A data.frame: 6 × 22
10001002C0WHITE, NON-HISPANIC1000 14745 0 053.5077.5$50,001 TO$75,000 62500.5 0 060.0430.9817533
20001004C0WHITE, NON-HISPANIC1000 14148 0 034.9553.5$40,001 TO$50,000 45000.5 0 056.2800.5943775
30001005C0WHITE, NON-HISPANIC1000NANANANANA NA NANA NANANA53.7910.3381515
40001010C0WHITE, NON-HISPANIC1000 14355 0 063.4353.5$50,001 TO$75,000 62500.5 0 055.2720.4906106
50001011C1WHITE, NON-HISPANIC1000 13839 0 053.5053.5$75,001 TO$100,000 87500.5 0 064.6041.4512779
60001012C0WHITE, NON-HISPANIC1000 14757 0 061.5677.5$100,001 TO$200,000150000.5 0 075.7212.5956991
Let's look at the distribution of the standardized math score for catholic vs public school individuals. The boxplot shows that the median of the two groups seem quite different.
In [2]:
boxplot(c5r2mtsc_std ~ catholic, data = ecls,
xlab="catholic",
ylab="standardized math score")
Ignoring confounding factors, let's see if there is any significant difference between the catholic and pulic school in terms of the mean of standardized math scores.
I use Welch's two sample T-test, the most general form of T-test which does not assume equal sample size or same variance across the two groups.
In [3]:
t.test(c5r2mtsc_std ~ catholic,data=ecls)
Welch Two Sample t-test
data: c5r2mtsc_std by catholic
t = -9.1069, df = 2214.5, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.2727988 -0.1761292
sample estimates:
mean in group 0 mean in group 1
-0.03059583 0.19386817
The t-test shows that the mean of the standardized test results are significantly different. But does going to catholic school caused the better performance on the standardized test? Let's consider confounding factors!
Following R Tutorial 8: Propensity Score Matching, I will use the following variables as potential confounding factors to model propensity score.
• race_white: Is the student white (1) or not (0)?
• p5hmage: Mother’s age
• w3income: Family income
• p5numpla: Number of places the student has lived for at least 4 months
• w3momed_hsb: Is the mother’s education level high-school or below (1) or some college or more (0)?
In [4]:
ecls_cov <- c('race_white', 'p5hmage', 'w3income', 'p5numpla', 'w3momed_hsb')
ecls %>%
group_by(catholic) %>%
select(one_of(ecls_cov)) %>%
summarise_all(funs(mean(., na.rm = T)))
Adding missing grouping variables: catholic
Warning message:
“funs() is soft deprecated as of dplyr 0.8.0
Please use a list of either functions or lambdas:
# Simple named list:
list(mean = mean, median = median)
# Auto named with tibble::lst():
tibble::lst(mean, median)
# Using lambdas
list(~ mean(., trim = .2), ~ median(., na.rm = TRUE))
This warning is displayed once per session.”
A tibble: 2 × 6
catholicrace_whitep5hmagew3incomep5numplaw3momed_hsb
00.556124637.5609754889.161.1326690.4640918
10.725165639.5751682074.301.0927010.2272069
The means of all these factors are significantly different between the catholic and public schools.
In [5]:
print(ecls_cov)
lapply(ecls_cov,
function(x) {
t.test(ecls[,x] ~ ecls$catholic) }) [1] "race_white" "p5hmage" "w3income" "p5numpla" "w3momed_hsb" [[1]] Welch Two Sample t-test data: ecls[, x] by ecls$catholic
t = -13.453, df = 2143.3, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.1936817 -0.1444003
sample estimates:
mean in group 0 mean in group 1
0.5561246 0.7251656
[[2]]
Welch Two Sample t-test
data: ecls[, x] by ecls$catholic t = -12.665, df = 2186.9, p-value < 2.2e-16 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: -2.326071 -1.702317 sample estimates: mean in group 0 mean in group 1 37.56097 39.57516 [[3]] Welch Two Sample t-test data: ecls[, x] by ecls$catholic
t = -20.25, df = 1825.1, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-29818.10 -24552.18
sample estimates:
mean in group 0 mean in group 1
54889.16 82074.30
[[4]]
Welch Two Sample t-test
data: ecls[, x] by ecls$catholic t = 4.2458, df = 2233.7, p-value = 2.267e-05 alternative hypothesis: true difference in means is not equal to 0 95 percent confidence interval: 0.02150833 0.05842896 sample estimates: mean in group 0 mean in group 1 1.132669 1.092701 [[5]] Welch Two Sample t-test data: ecls[, x] by ecls$catholic
t = 18.855, df = 2107.3, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.2122471 0.2615226
sample estimates:
mean in group 0 mean in group 1
0.4640918 0.2272069
## Propensity score estimation¶
To calculate propensity score and find the match, we will use matchit function. The propensity score is calculated by fitting logistic regression with the potential confounding factors as independent variables and the school type as a dependent variable.
The logistic regression fit can also be done using glm with family="binomial".
To find the match, I will use nearest neighbor matching.
In [16]:
# matchit does not allow missing values in data
variable_in_use <- c(ecls_cov,"catholic","c5r2mtsc_std")
omitTF <- apply(ecls[,variable_in_use],1,
function(x)any(is.na(x)))
ecls_nomiss <- ecls[!omitTF,variable_in_use]
mod_match <- matchit(catholic~ race_white + p5hmage + w3income + p5numpla + w3momed_hsb,
data=ecls_nomiss,method = "nearest")
## Verify that covariates are balanced across treatment and comparison groups in the matched or weighted sample.¶
Summary shows that the mean of each of the confounding variables is less than one SD away between the catholic and public schools.
"Summary of balance for all data" simply shows the mean of catholic and public schools. We already studied these in the Welch's T-test above. What interesting is that the Mean Diff of the confounding variables substantially reduced when data is matched. See Summary of balance for matched data. It seems that the matching is well done.
In [17]:
summary(mod_match)
Call:
matchit(formula = catholic ~ race_white + p5hmage + w3income +
p5numpla + w3momed_hsb, data = ecls_nomiss, method = "nearest")
Summary of balance for all data:
Means Treated Means Control SD Control Mean Diff eQQ Med
distance 0.1927 0.1379 0.0845 0.0549 5.67e-02
race_white 0.7411 0.5914 0.4916 0.1497 0.00e+00
p5hmage 39.5932 37.5658 6.5506 2.0274 2.00e+00
w3income 82568.9357 55485.0210 43961.0872 27083.9146 2.50e+04
p5numpla 1.0917 1.1298 0.3910 -0.0380 0.00e+00
w3momed_hsb 0.2234 0.4609 0.4985 -0.2375 0.00e+00
eQQ Mean eQQ Max
distance 0.0548 7.60e-02
race_white 0.1501 1.00e+00
p5hmage 2.2544 7.00e+00
w3income 27069.1775 6.25e+04
p5numpla 0.0399 2.00e+00
w3momed_hsb 0.2374 1.00e+00
Summary of balance for matched data:
Means Treated Means Control SD Control Mean Diff eQQ Med eQQ Mean
distance 0.1927 0.1927 0.0846 0.0000 0 0.0000
race_white 0.7411 0.7470 0.4349 -0.0059 0 0.0059
p5hmage 39.5932 39.5503 5.2243 0.0429 0 0.0873
w3income 82568.9357 81403.9926 46618.2406 1164.9430 0 1164.9430
p5numpla 1.0917 1.0762 0.2970 0.0155 0 0.0200
w3momed_hsb 0.2234 0.2152 0.4111 0.0081 0 0.0081
eQQ Max
distance 3.30e-03
race_white 1.00e+00
p5hmage 1.00e+01
w3income 6.25e+04
p5numpla 2.00e+00
w3momed_hsb 1.00e+00
Percent Balance Improvement:
Mean Diff. eQQ Med eQQ Mean eQQ Max
distance 99.9934 100 99.9689 95.6780
race_white 96.0477 0 96.0591 0.0000
p5hmage 97.8841 100 96.1286 -42.8571
w3income 95.6988 100 95.6964 0.0000
p5numpla 59.1653 0 50.0000 0.0000
w3momed_hsb 96.5746 0 96.5732 0.0000
Sample sizes:
Control Treated
All 7915 1352
Matched 1352 1352
Unmatched 6563 0
Plot function of matchit allows calculating empirical quantile-quantile plots plots of each covariate to check balance of marginal distributions when type = "QQ". Notice that the distribution of covariates between catholic (y-axis) and public (x-axis) becomes more comparable after data is matched.
With type="hist", the plot outputs histograms of the propensity score in the original treated and control groups. After data is matched, the propensity score distribution between the catholic and public school become more comparable.
These observations indicate that the matched samples averaged out the potential confounding factors.
In [19]:
plot(mod_match,type="QQ")
plot(mod_match,type="hist")
## Back to Welch's T-test¶
Now perform Welch's T-test again with matched sample. Do we still have significant p-value? Yes. We do. However, it shows that going to catholic school actually negatively affects the standardized test after controlling confounding. This analysis indicates that going to catholic schools is not the root cause of better performance on standardized test and it could actually hurt the performance on the standardized test!
In [20]:
dta_m <- match.data(mod_match) ## obtain the dataframe of matched samples.
with(dta_m, t.test(c5r2mtsc_std ~ catholic))
A data.frame: 6 × 9
race_whitep5hmagew3incomep5numplaw3momed_hsbcatholicc5r2mtsc_stddistanceweights
214145000.5100 0.59437750.18013601
414362500.5100 0.49061060.20929571
513887500.5101 1.45127790.21540221
814162500.5100 0.38519660.19978971
2904212500.5100-0.97211140.11526191
3013987500.5100 0.40125580.22038101
Welch Two Sample t-test
data: c5r2mtsc_std by catholic
t = 4.9879, df = 2685.1, p-value = 6.494e-07
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.1051808 0.2414485
sample estimates:
mean in group 0 mean in group 1
0.3829825 0.2096679 | 2022-12-08 17:13:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35923245549201965, "perplexity": 3937.693662457419}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00031.warc.gz"} |
http://mathhelpforum.com/pre-calculus/166722-sinh-z-2-sinh-2-x-sin-2-y.html | # Math Help - | sinh (z) |^2 = sinh^2(x) + sin^2(y)
1. ## | sinh (z) |^2 = sinh^2(x) + sin^2(y)
I am having trouble proving this identity $|sinh(z)|^2=sinh^2(x)+sin^2(y) \ \ z\in\mathbb{C} \ \ x,y\in\mathbb{R}$.
$\displaystyle \left(\sqrt{sinh^2(z)}\right)^2=\left(\frac{e^z-e^{-z}}{2}\right)^2=\frac{e^{2z}+e^{-2z}-2}{4}$
$\displaystyle =\frac{e^{2x}cos(2y)+e^{-2x}cos(2y)-2+\mathbf{i}(e^{2x}sin(2y)-e^{-2x}sin(2y))}{4}$
$\displaystyle =\frac{cos(2y)(e^{2x}+e^{-2x})-2+\mathbf{i}sin(2y)(e^{2x}-e^{-2x})}{4}$
$\displaystyle =\frac{cos(2y)cosh(2x)-2+\mathbf{i}sin(2y)sinh(2x)}{2}$
At this point, I am not sure how to proceed.
2. $\sinh(z)=\sinh(x)\cos(y)+i~\cosh(x)\sin(y)$.
$|\sinh(z)|^2=\sinh^2(x)\cos^2(y)+\cosh^2(x)\sin^2( y)$
What can you do with that?
3. $sinh^2(x)cos^2(y)+cosh^2(x)sin^2(y)$
$cosh^2(x)=1+sinh^2(x), \ \ sinh^2(x)cos^2(y)+(1+sinh^2(x))sin^2(y)$
$=sinh^2(x)(cos^2(y)+sin^2(y))+sin^2(y)=sinh^2(x)+s in^2(y)$ | 2014-07-13 01:25:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5680638551712036, "perplexity": 3296.877565534856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776435811.28/warc/CC-MAIN-20140707234035-00063-ip-10-180-212-248.ec2.internal.warc.gz"} |
https://ask.cvxr.com/t/how-to-include-psd-constraint-in-tfocs/10657 | # How to include PSD constraint in TFOCS
Hi there!
I’m solving \min_{\mathbf{B}\succeq 0} \text{Trace}(\mathbf{B}) subject to \|\mathcal{A}(\mathbf{B}) - \mathbf{y}\|_2 \leq \epsilon where \mathcal{A}:\mathbb{R}^{L\times L} \to \mathbb{R}^m is a linear operator, \mathbf{B}\in \mathbb{R}^{L\times L} is a matrix, and \mathbf{y}\in \mathbb{R}^m is a vector.
I wrote a functioning programme using tfocs_scd for this problem without the PSD constraint, but didn’t know how to include the PSD constraint. Could you please advise me on this?
Many thanks,
Shirley
Do you really wish to use TFCOs as opposed to CVX?
Unfortunately, I don;t believe any of the TFOCS developers are still active on this forum, and doubt there are many or any advanced users of TFOCS still active here. So you may have to figure things out on your own.
Nevertheless, a quick search shows these 2 posts from @Stephen_Becker one of the TFOCS developers, which might help you.
You can also see all posts on this forum categorized as TFOCS at http://ask.cvxr.com/c/tfocs/6 .
Thanks Mark. I’m going through these posts now. CVX works but it’s extremely slow, so my colleagues have advised that I try TFOCS. Is there any way that I can speed up CVX?
You haven’t shown us the PSD constraint (so I don;t know whether there is an opportunity to speed up things in the CVX formulation. But if your CVX code doesn’t have any for loops, there likely isn’t much or any opportunity to speed up the problem formulation by improved coding. But dualizing the problem might potentially speed up the solver (or could slow it down). If you need to solve a structurally same problem many times, but with different input data, then you can reduce problem formulation time by using YALMIP with its optimizer capability.
Here’s my code for the problem above:
cvx_begin quiet
variable B(L, L) semidefinite % a real symmetric PSD matrix
minimize(trace(B))
subject to
norm(bahmani_Woperator(B, W) - y, 2) <= epsilon
cvx_end
Did you mean that I should use YALMIP instead of CVX?
Thanks,
Shirley
bahmani_Woperator is a linear operator which I coded up in a separate script.
Don;t use quiet. That way you will see the solver and CVX output.
Is CVX formulation taking too long, or the solver once it starts taking too long? What are the problem dimensions?
I presume the mysterious bahmani_Woperator function is an affine function of its first argument?
If you re-run withoutquiet’, and post the CVX and solver output, maybe one of the forum readers can provide an assessment.
Hi Mark, I think I figured out how to use TFOCS for this problem! Instead of using prox_nuclear as my objective function, I should use prox_trace to impose the PSD constraint. This is because the proximity operator of trace is a combination of the proximity operator of trace and the indicator set of positive semi-definite matrices. So we do not need to explicitly include the positive semi-definite constraint.
Thanks for your help! - Shirley
Great. Please let us know the speed comparison on pertinent problem sizes.
I trust you will compare optimal objective values between TFOCS and CVX (perhaps TFOCS default solver tolerance is not as tight as CVX)), and verify that TFOCS solutions satisfy all intended constraints to within feasibility tolerance. | 2023-03-22 09:59:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6003360152244568, "perplexity": 1306.0136038982964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00734.warc.gz"} |
https://thottingal.in/blog/2020/02/29/procrustes-analysis-based-handwriting-recognition/ | Many months back, I started an experiment to see if Malayalam handwriting recognition can be done in a non-machine learning based approach. This blog post explains the approach, the work done so far and results.
Handwriting recognition can be done while the user is writing(called online handwriting recognition) and recognizing a sample somebody wrote in the past(offline recognition). Online and offline recognition problems are different problems. This is because, in online recognition, it is possible to capture additional details such as pen up, pen down, pen movement directions and rotations. These details provide an extra advantage on recognition. In offline recognition, we only have a sample written by somebody in the past. It is also important to note that optical character recognition(OCR) for printed content on a paper has differences with handwriting recognition. Printed content often have regularities(single font, uniformly aligned content etc.)
My experiment was about online handwriting recognition.
## Approach
In general, our problem is pattern matching of curves. The curves get complicated depending on the script. The curves are also very irregular because this is written by a user on a touch interface, mostly mobiles, using fingers(rarely, a stylus). So, we need a set of base representation of these curves and then compare the curves from user input and see if how close they are with the reference images.
### The curves and strokes
The curves created in a two dimensional space can be represented by x,y coordinates. We need the coordinates at start, end positions of a drawing. Start is “pen down” point and end is “pen up” point for a handwriting. We also need to get coordinates for all relevant intermediate points in the curve. These relevant points are inflection points. So when we have [(0,10), (20, 30)], we mean a straight line from (0,10) to (20, 30). Note that we are assuming straight lines between each points. This is approximation of any possible path between those two points.
This is just one way of writing ക. It is possible that there are more than one style of writing a letter. We can use that also as base image. ക is usually written with a single stroke. Meaning, you take the pen up only after writing the full letter. But consider the letter +- it has two strokes. You need to take the pen up after a stroke. We also need to make sure that the direction of pen movement is taken into consideration. So the array of points should follow the same order of pen movement.
Considering all these, we can have a representation of reference image of ക as :
We can represent it in a json format.
"ക": {
"samples": [
{
"strokes": [
[
{"x": 107, "y": 72 },
{"x": 139, "y": 12 },
{"x": 190, "y": 4 },
{"x": 206, "y": 43 },
{"x": 208, "y": 175 },
{"x": 168, "y": 240 },
{"x": 133, "y": 250 },
{"x": 112, "y": 148 },
{"x": 89, "y": 238 },
{"x": 57, "y": 262 },
{"x": 15, "y": 267 },
{"x": 0, "y": 236 },
{"x": 13, "y": 171 },
{"x": 174, "y": 102 },
{"x": 250, "y": 102 },
{"x": 277, "y": 123 },
{"x": 281, "y": 198 },
{"x": 266, "y": 247 }
]
]
}
]
},
## How to prepare the curve data
The above data should be prepared for every letter in the script. That is very tedious process. We need some tool to get this representation when we draw it. So I wrote a javascript application with an HTML5 canvas where you draw ,using mouse or finger and get this simplified representation of the curve.
Curve simplification is an interesting problem, but fortunately I did not had to implement it myself. There is an excellend javascript library that does exactly this, named simplifyjs. I just used it. It also allows setting the smoothness factor, depending on that we get more points to represent a curve or less points.
# Compare the writing
So, once we have the reference data prepared for a letter, we should compare it against a representation of user drawn letter. This is the core part of handwriting recognition. As far as the user interface is concerned, you can imagine that there is a canvas where a user can draw a letter using finger or mouse or stylus. We get a representation of that drawing in the same data structure we used for reference image.
Comparing to irregular curves for a match is a difficult problem. One curve may be bigger than other. One may be smoother than other. We need accommodate all kind of distortions. The drawing may be tilted too. There is a technique to compare two shapes considering all these challenges - Procrustes Analysis.
## Procrustes Analysis
In statistics, Procrustes analysis is a form of statistical shape analysis used to analyse the distribution of a set of shape(See Procrustes Analysis, Wikipedia). To compare the shapes of two or more objects, the objects must be first optimally “superimposed”. Procrustes superimposition (PS) is performed by optimally translating, rotating and uniformly scaling the objects.
(Image credit: Wikimedia commons Author: Christian Peter Klingenberg, Uploaded by user:Was a bee )
For comparing a letter drawn by a user with the reference image, same approach was used. They are compared by rotating and scaling. the rotation is limited to 45 degree. Otherwise z and s may match!. Rotation and scaling is done in steps and at each step a score of match is calculated. If it cross a predefined threshold, we declare it as a match.
For Procruster based matching, I used curvematcher library
## Implementation
The software implementation of this system has the following parts
• The reference image database in json format
• The matching logic
• The user interface of the system
• The training UI or UI to easily create the reference data.
Source code is availabe at gitlab and ou can try the system here: https://smc.gitlab.io/handwriting
As you can observe from the above video, the letter ‘4’ is a multistroke letter. This is supported by checking if the pen down of second stroke is within the drawing box of previous stroke. You can also see distortions, writing angle, spacing - all taken care of.
## Training
To prepare the reference data, a web application is being developed. It is similar to the handwriting recognition, but here a developer will draw a letter and tell which character it is. This data is then exported to the json file.
It is available at https://smc.gitlab.io/handwriting/training
This application is still missing many features.
## What is next
• Expand the reference data to cover most of common malayalam script.
• Expand further with style alternatives for each letter.
• Reduce the UI glitches
• Finish the training UI
• Convert it as a real input method that works with Operating systems, so that you can type directly to applications.
• An experimental support for Tamil is added. Enhance that with the help of Tamil speaking friends.
• More languages.
• Post-recognition corrections of words using spellcheck, predictive entry mechanisms likes Markov chain can improve the accuracy significantly.
• While writing letters like “കോ”, we write േ sign first, while the actual data is ക + ോ. So we need a visual to unicode reordering logic.
I am not getting enough time to spend on this project these days. If anybody interested in helping, please contact me.
Anish was contributing to this project recently. He presented this project in IndiaOS 2020(Video) | 2020-03-28 14:20:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3677748739719391, "perplexity": 1997.601520881866}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370491998.11/warc/CC-MAIN-20200328134227-20200328164227-00221.warc.gz"} |
https://proofwiki.org/wiki/Definition:Quadratic_Algebra | ## Definition
A quadratic algebra $A$ is a filtered algebra whose generator consists of degree one elements, with defining relations of degree 2.
A quadratic algebra $A$ is determined by a vector space of generators $V = A_1$ and a subspace of homogeneous quadratic relations $S \subseteq V \times V$.
Thus :
$A = T \left({V}\right) / \left \langle {S}\right \rangle$
and inherits its grading from the tensor algebra $T \left({V}\right)$.
If the subspace of relations may also contain inhomogeneous degree 2 elements, $S \subseteq k \times V \times \left({V \times V}\right)$, this construction results in a filtered quadratic algebra.
A graded quadratic algebra $A$ as above admits a quadratic dual: the quadratic algebra generated by $V^*$ and with quadratic relations forming the orthogonal complement of $S$ in $V^* \times V^*$. | 2020-01-23 04:36:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8054330348968506, "perplexity": 227.4090921008272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250608295.52/warc/CC-MAIN-20200123041345-20200123070345-00244.warc.gz"} |
http://mathhelpforum.com/differential-equations/76856-differential-equation-substitution.html | # Math Help - Differential equation - substitution
1. ## Differential equation - substitution
find the general solution (x+y)y' = (3y-x), where y(1)=2
I did the substitution y=ux
I solved to get: ln|v-1| - 2/(v-1) = -ln|x| + A
I then rearranged and introduced u=y/x back into the equation:
ln|y-x| - 2x/(y-x) = A
For y i get
y = Ce^(2x/(y-x)) + x
I'm not too confident about this solution however, if anyone could check and see if im right i'd appreciate it, and if I'm wrong highlight where you think ive made an error , thanks
2. ## Differential equation
Hello mitch_nufc
Originally Posted by mitch_nufc
find the general solution (x+y)y' = (3y-x), where y(1)=2
I did the substitution y=ux
I solved to get: ln|v-1| - 2/(v-1) = -ln|x| + A
I then rearranged and introduced u=y/x back into the equation:
ln|y-x| - 2x/(y-x) = A
For y i get
y = Ce^(2x/(y-x)) + x
I'm not too confident about this solution however, if anyone could check and see if im right i'd appreciate it, and if I'm wrong highlight where you think ive made an error , thanks
Yes, that looks OK to me. I differentiated your result, eliminated the term in $Ce^{\frac{2x}{y-x}}$, and got back to the original differential equation.
Incidentally, you can use $y(1) = 2$ to get $C = e^{-1}$.
3. Hello, mitch_nufc!
Find the general solution: . $(x+y)y' \:=\: (3y-x),\;\;\text{ where }y(1)=2$
I did the substitution: $y = vx$
I solved to get: . $\ln|v-1| - \frac{2}{v-1} \:=\: -\ln|x| + A$
I then rearranged and introduced $v=\tfrac{y}{x}$ back into the equation:
. . $\ln|y-x| - \frac{2x}{y-x}\:=\: A$
For $y$, i got: . $y \:= \:Ce^{\frac{2x}{y-x}} + x$ . . . . So did I!
Ya done good!
Now use that initial condition to determine $C.$ | 2014-08-20 10:51:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.945806622505188, "perplexity": 2378.7156305521976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500804220.17/warc/CC-MAIN-20140820021324-00117-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://hal.inria.fr/inria-00099116 | # Factorization in Z[x]: the searching phase
1 POLKA - Polynomials, Combinatorics, Arithmetic
INRIA Lorraine, LORIA - Laboratoire Lorrain de Recherche en Informatique et ses Applications
Abstract : In this paper we describe ideas used to accelerate the Searching Phase of the Berlekamp--Zassenhaus algorithm, the algorithm most widely used for computing factorizations in $\ZZ[x]$. Our ideas do not alter the theoretical worst-case complexity, but they do have a significant effect in practice: especially in those cases where the cost of the Searching Phase completely dominates the rest of the algorithm. A complete implementation of the ideas in this paper is publicly available in the library NTL~\cite{Shoup00}. We give timings of this implementation on some difficult factorization problems.
Keywords :
Type de document :
Communication dans un congrès
International Symposium on Symbolic and Algebraic Computation - ISSAC 2000, Aug 2000, St Andrews/United Kingdom, ACM, pp.1--7, 2000
Domaine :
https://hal.inria.fr/inria-00099116
Contributeur : Publications Loria <>
Soumis le : mardi 26 septembre 2006 - 08:51:06
Dernière modification le : jeudi 11 janvier 2018 - 06:19:48
### Identifiants
• HAL Id : inria-00099116, version 1
### Citation
John Abbott, Victor Shoup, Paul Zimmermann. Factorization in Z[x]: the searching phase. International Symposium on Symbolic and Algebraic Computation - ISSAC 2000, Aug 2000, St Andrews/United Kingdom, ACM, pp.1--7, 2000. 〈inria-00099116〉
### Métriques
Consultations de la notice | 2018-01-20 06:14:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.27546972036361694, "perplexity": 6330.413665442762}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889325.32/warc/CC-MAIN-20180120043530-20180120063530-00413.warc.gz"} |
https://jira.lsstcorp.org/browse/DM-5105?focusedCommentId=44142&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel | # new conda 'mkl' dependent packages break meas_base tests
XMLWordPrintable
#### Details
• Type: Bug
• Status: Done
• Resolution: Done
• Fix Version/s: None
• Component/s:
• Labels:
None
• Story Points:
4
• Team:
SQuaRE
#### Description
Continuum release/rebuilt a number of packages last friday to depend on the the Intel MKL library.
https://www.continuum.io/blog/developer-blog/anaconda-25-release-now-mkl-optimizations
There are [new feature named] versions that continue to use openblas but the MKL versions appear to be installed by default. This causes at least multiple meas_base tests to fail.After extensive testing, I have confirmed that the meas_base tests do not fail with the equivalent 'nomkl' package.
In addition, mkl is closed source software that requires you to accept and download a license file or it is time-bombed to stop working after a trial period.
docker-centos-7: [ 36/36 ] meas_base 2015_10.0-9-g6daf04b+7 ... docker-centos-7: docker-centos-7: ***** error: from /opt/lsst/software/stack/EupsBuildDir/Linux64/meas_base-2015_10.0-9-g6daf04b+7/build.log: docker-centos-7: tests/sincPhotSums.py docker-centos-7: docker-centos-7: tests/measureSources.py docker-centos-7: docker-centos-7: tests/testApertureFlux.py docker-centos-7: docker-centos-7: tests/testJacobian.py docker-centos-7: docker-centos-7: tests/testScaledApertureFlux.py docker-centos-7: docker-centos-7: The following tests failed: docker-centos-7: /opt/lsst/software/stack/EupsBuildDir/Linux64/meas_base-2015_10.0-9-g6daf04b+7/meas_base-2015_10.0-9-g6daf04b+7/tests/.tests/sincPhotSums.py.failed docker-centos-7: /opt/lsst/software/stack/EupsBuildDir/Linux64/meas_base-2015_10.0-9-g6daf04b+7/meas_base-2015_10.0-9-g6daf04b+7/tests/.tests/measureSources.py.failed docker-centos-7: /opt/lsst/software/stack/EupsBuildDir/Linux64/meas_base-2015_10.0-9-g6daf04b+7/meas_base-2015_10.0-9-g6daf04b+7/tests/.tests/testApertureFlux.py.failed docker-centos-7: /opt/lsst/software/stack/EupsBuildDir/Linux64/meas_base-2015_10.0-9-g6daf04b+7/meas_base-2015_10.0-9-g6daf04b+7/tests/.tests/testJacobian.py.failed docker-centos-7: /opt/lsst/software/stack/EupsBuildDir/Linux64/meas_base-2015_10.0-9-g6daf04b+7/meas_base-2015_10.0-9-g6daf04b+7/tests/.tests/testScaledApertureFlux.py.failed docker-centos-7: 5 tests failed
(the exact cause of the test failures was not investigated as this should not have happened)
This change has also broken the ability to import an existing conda env from 2016-02-05 or earlier that uses scipy due to some sort of package version resolution problem. Explicit declaring it as the scipy package without mkl fixes the resolution problem.
There is a new 'nomkl' package, when installed, any subsequent package installations will default to versions without mkl. However, this does not fix any already installed packages.
I am traumatized by the lack of reproducible build envs even within a few days of each other. After discussion in the Tucson office, I'm going to pin the lsstsw and newinstall.sh conda package versions with a commitment from square to update them on a monthly basis. I already have a test version of lsstsw/bin/deploy that defaults to a bundled package but with a option flag to use bleeding edge.
#### Activity
Hide
Tim Jenness added a comment -
Continuum releasing their speed optimizations to the public seems like a great thing to me Why wouldn't we want our code to be faster? At least they indicate how to disable threading (and I guarantee you that end users will be using MKL anaconda so we really should be supporting it).
Show
Tim Jenness added a comment - Continuum releasing their speed optimizations to the public seems like a great thing to me Why wouldn't we want our code to be faster? At least they indicate how to disable threading (and I guarantee you that end users will be using MKL anaconda so we really should be supporting it).
Hide
Mario Juric added a comment -
Josh, +1 on the pinning for reproducibility, but the fact our code crashes with a differently built numpy may indicate a problem on our end. Just like when two compilers reveal nasty bugs and unwarranted assumptions, this may be telling us something similar.
Show
Mario Juric added a comment - Josh, +1 on the pinning for reproducibility, but the fact our code crashes with a differently built numpy may indicate a problem on our end. Just like when two compilers reveal nasty bugs and unwarranted assumptions, this may be telling us something similar.
Hide
Joshua Hoblitt added a comment -
Mario Juric I'm still internally debating if we should pin just for the build slaves/binary release builds and let end users act as a canary or pin everything by default. I'm leaning towards the later, with a -b flag (bleed) to disable it, at least initially, because I fear user frustration. What do you think?
Show
Joshua Hoblitt added a comment - Mario Juric I'm still internally debating if we should pin just for the build slaves/binary release builds and let end users act as a canary or pin everything by default. I'm leaning towards the later, with a -b flag (bleed) to disable it, at least initially, because I fear user frustration. What do you think?
Hide
Joshua Hoblitt added a comment -
I'm not dismissing that it may be a coding error that's being exposed but this isn't time I've been bitten by builds breaking on the scale of a few days. I don't even want to think about trying to build something from a year ago at this point...
Show
Joshua Hoblitt added a comment - I'm not dismissing that it may be a coding error that's being exposed but this isn't time I've been bitten by builds breaking on the scale of a few days. I don't even want to think about trying to build something from a year ago at this point...
Hide
Tim Jenness added a comment -
Just in case it wasn't clear, I've actually been using MKL Anaconda for months now on my Mac (it never actually expired, it just kept telling me it was about to).
Show
Tim Jenness added a comment - Just in case it wasn't clear, I've actually been using MKL Anaconda for months now on my Mac (it never actually expired, it just kept telling me it was about to).
Hide
Mario Juric added a comment -
Re pinning, sounds OK to me (in a perfect world, we'd have both). Is there a (not exceedingly difficult) way to pin to a particular set of versions that a some release of Anaconda uses? Most users just download Anaconda and never bother to 'conda update', so that may cover a large number of people out there.
(off to teach & then some meetings, will disappear for a few hours).
Show
Mario Juric added a comment - Re pinning, sounds OK to me (in a perfect world, we'd have both). Is there a (not exceedingly difficult) way to pin to a particular set of versions that a some release of Anaconda uses? Most users just download Anaconda and never bother to 'conda update', so that may cover a large number of people out there. (off to teach & then some meetings, will disappear for a few hours).
Hide
Frossie Economou added a comment -
We've reached a compromise, we'll pin for the CI, and let float up for the monthlies. We'll try it for a bit see how it works.
Show
Frossie Economou added a comment - We've reached a compromise, we'll pin for the CI, and let float up for the monthlies. We'll try it for a bit see how it works.
Hide
Tim Jenness added a comment - - edited
Dominique Boutigny reports the same problem but he has tracked down the segv:
Program received signal SIGSEGV, Segmentation fault. 0x00007ffff4703e00 in fftw_execute () from /home/boutigny/LSST/lsstsw/miniconda/lib/python2.7/site-packages/numpy/core/../../../../libmkl_intel_lp64.so
so maybe an interaction between MKL and our FFTW library.
Does Linux anaconda ship its own libfftw? (it does not on Mac but on Mac my meas_base builds are fine).
Show
Tim Jenness added a comment - - edited Dominique Boutigny reports the same problem but he has tracked down the segv: Program received signal SIGSEGV, Segmentation fault. 0x00007ffff4703e00 in fftw_execute () from /home/boutigny/LSST/lsstsw/miniconda/lib/python2.7/site-packages/numpy/core/../../../../libmkl_intel_lp64.so so maybe an interaction between MKL and our FFTW library. Does Linux anaconda ship its own libfftw ? (it does not on Mac but on Mac my meas_base builds are fine).
Hide
Joshua Hoblitt added a comment - - edited
Tim JennessThere's no conda package named fftw that gets pulled in as a dep and find doesn't turn up anything that looks like a fftw shared object.
Show
Joshua Hoblitt added a comment - - edited Tim Jenness There's no conda package named fftw that gets pulled in as a dep and find doesn't turn up anything that looks like a fftw shared object.
Hide
Joshua Hoblitt added a comment -
I've opened a PR on lsstsw that pins the conda/pip package versions along with a -b flag (bleed...) that disables the pinning. The travis config has been updated to run deploy both with and without this flag and to displace a diff on the conda package spec.
newinstall.sh still needs to be modified but I wanted to get feedback on this approach before continuing.
Show
Joshua Hoblitt added a comment - I've opened a PR on lsstsw that pins the conda/pip package versions along with a -b flag (bleed...) that disables the pinning. The travis config has been updated to run deploy both with and without this flag and to displace a diff on the conda package spec. newinstall.sh still needs to be modified but I wanted to get feedback on this approach before continuing.
Hide
Tim Jenness added a comment - - edited
To summarize the discussion on DM-5123: MKL has its own implementations of fftw/lapack/blas and the linker gets very confused when a separate libfftw has also been loaded.
Show
Tim Jenness added a comment - - edited To summarize the discussion on DM-5123 : MKL has its own implementations of fftw/lapack/blas and the linker gets very confused when a separate libfftw has also been loaded.
Hide
Tim Jenness added a comment - - edited
Optionally pinning like this seems to be fine. Minor comments on PR.
Show
Tim Jenness added a comment - - edited Optionally pinning like this seems to be fine. Minor comments on PR.
Hide
Joshua Hoblitt added a comment - - edited
Tim Jenness's comments on the lsstsw PR have been addressed. A new PR has been opened on miniconda2 which downloads and applies the conda package spec from the lsstsw repo. https://github.com/lsst/miniconda2/pull/3 Note that the download URL needs to be updated before being merged. There is a published miniconda2 package tagged as 'newinstall-testing'.
Show
Joshua Hoblitt added a comment - - edited Tim Jenness 's comments on the lsstsw PR have been addressed. A new PR has been opened on miniconda2 which downloads and applies the conda package spec from the lsstsw repo. https://github.com/lsst/miniconda2/pull/3 Note that the download URL needs to be updated before being merged. There is a published miniconda2 package tagged as 'newinstall-testing'.
Hide
Joshua Hoblitt added a comment -
I've updated the miniconda2 PR to be have the correct URL for merge to master. Tim Jenness am I clear to go ahead and merge? I want to get this out before an end-user gets caught.
Show
Joshua Hoblitt added a comment - I've updated the miniconda2 PR to be have the correct URL for merge to master. Tim Jenness am I clear to go ahead and merge? I want to get this out before an end-user gets caught.
Hide
Tim Jenness added a comment -
Ship it.
Show
Tim Jenness added a comment - Ship it.
Hide
Joshua Hoblitt added a comment -
Merged. A 3.19.0.lsst1 tag has been added to miniconda2.
Show
Joshua Hoblitt added a comment - Merged. A 3.19.0.lsst1 tag has been added to miniconda2 .
Hide
Joshua Hoblitt added a comment -
miniconda2-3.19.0.lsst1 has been published from b1913.
Show
Joshua Hoblitt added a comment - miniconda2-3.19.0.lsst1 has been published from b1913 .
Hide
Joshua Hoblitt added a comment -
I'm going to leave this ticket open until a sanity check build of lsst_distrib via newinstall.sh from master has completed.
Show
Joshua Hoblitt added a comment - I'm going to leave this ticket open until a sanity check build of lsst_distrib via newinstall.sh from master has completed.
Hide
Joshua Hoblitt added a comment - - edited
I've opened a PR to bump the version of miniconda2 in newinstall.sh; will self merge as soon as travis finishes.
Show
Joshua Hoblitt added a comment - - edited I've opened a PR to bump the version of miniconda2 in newinstall.sh ; will self merge as soon as travis finishes.
Hide
Joshua Hoblitt added a comment - - edited
Grrrrr. Looks like I broke the PATH construction in miniconda2. (I have no idea why this worked in the pre-merge testing)
Show
Joshua Hoblitt added a comment - - edited Grrrrr. Looks like I broke the PATH construction in miniconda2 . (I have no idea why this worked in the pre-merge testing)
Hide
Joshua Hoblitt added a comment -
Published miniconda2-3.19.0.lsst2 from b1914 with a one liner fix for the path construction.
Show
Joshua Hoblitt added a comment - Published miniconda2-3.19.0.lsst2 from b1914 with a one liner fix for the path construction.
Hide
Joshua Hoblitt added a comment -
It's confirmed, w_2016_05 now builds from a clean sandbox with newinstall.sh.
Show
Joshua Hoblitt added a comment - It's confirmed, w_2016_05 now builds from a clean sandbox with newinstall.sh .
#### People
Assignee:
Joshua Hoblitt
Reporter:
Joshua Hoblitt
Reviewers:
Mario Juric, Tim Jenness
Watchers:
Frossie Economou, Jonathan Sick, Joshua Hoblitt, Kian-Tat Lim, Mario Juric, Tim Jenness | 2021-06-25 01:29:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3033210039138794, "perplexity": 5840.668422238889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488560777.97/warc/CC-MAIN-20210624233218-20210625023218-00570.warc.gz"} |
http://www.physicsforums.com/forumdisplay.php?f=75&order=desc&page=120 | # Linear & Abstract Algebra
- Vector spaces and linear transformations. Groups and other algebraic structures along with Number Theory.
Meta Thread / Thread Starter Last Post Replies Views Views: 5,345 Announcement: Follow us on social media and spread the word about PF! Jan16-12 Please post any and all homework or other textbook-style problems in one of the Homework & Coursework Questions... Feb23-13 09:24 AM micromass 1 36,230 Is the derivative operator d/dx bounded with respect to the norm defined by integral from 0 to 1 of f g*... Nov14-09 01:01 PM esisk 2 1,313 hi can someone tell me...how to correctly use the 10 axioms.. for example: does the set of all 3x3 symmetric matrices... Nov14-09 04:11 AM HallsofIvy 3 2,637 I'm going to be doing research next semester on a topic of almost entirely my choice. I'd like to research anything... Nov13-09 03:07 PM Newtime 3 1,678 First, sorry for my poor English and any impolite behavior might happen. Here's two wave function(pic1) and problem... Nov13-09 07:56 AM boladore 0 2,903 I'm trying to show that a set W of polynomials in P2 such that p(1)=0 is a subspace of P2. Then find a basis for W... Nov13-09 06:00 AM HallsofIvy 11 4,777 The question posed is "Classify up to similarity all 3 x 3 complex matrices A s.t. A^{3} = I. I think the biggest... Nov11-09 09:17 AM g_edgar 4 2,537 Hi, I've been trying to show that the set of matrices that preserve L1 norm (sum of absolute values of each... Nov11-09 09:11 AM g_edgar 5 3,307 I'm looking at the exercises of Hungerfod's Algebra. Some looks easy but it seems the proofs are not so obvious.... Nov11-09 12:27 AM Spartan Math 4 7,282 i)]+1=0 :I had a friend with a T-shirt displaying this deceptively simple equation. I know it to be true, but I have... Nov10-09 06:23 AM widewombat 51 23,597 can explain to me what is mean of span? from book, it say "every vector in the space can be expressed as linear... Nov10-09 01:02 AM Dafe 6 1,611 If I have a matrix A, and I use n different row operations of this form: a_kR_i + R_j \rightarrow R_i to construct a... Nov9-09 08:01 PM slider142 5 1,040 The positive powers of 2 mod 5^m cycle with period 4*5^(m-1), which you can prove by showing that 2 is a primitive... Nov9-09 10:59 AM DoctorBinary 8 5,255 Let E = span{v1, v2} be the linear subspace of R3 spanned by the vectors v1 = (0,1,-2) and v2 = (1, 1, 1). Find... Nov8-09 09:12 PM epkid08 8 3,341 Suppose that a casino introduces a game in which a player bets $1 and can either win$2 or lose it, both with equal... Nov8-09 02:39 AM ha9981 0 2,078 I am working on some homework that I already handed in, but I cant get one of the problems. The fourth problem on the... Nov7-09 11:46 AM prez 3 6,010 Hi, I am trying to verify with anybody generally intrigued by the Riemann Hypothesis whether they might find of any... Nov7-09 05:14 AM Luca 0 1,355 Let's say I have a matrix A: A=\begin{bmatrix} f(x)& z_1(x)& z_2(x)\\ 0& a(x)& b(x)\\ 0& c(x)& d(x) ... Nov6-09 09:24 PM HallsofIvy 9 1,195 I did a problem in class today that evaluated f(t)=e^{At} for A_{2,2}=\begin{bmatrix}2&1 \\-1&4 \end{bmatrix} to a... Nov6-09 04:30 PM tiny-tim 1 880 right now, my concept for their difference is that linear transformations are 1 to 1, where as nonlinear... Nov5-09 04:04 AM HallsofIvy 2 10,313 I had a question about the following theorem. Basis Representation Theorem: Let k be any integer larger than 1. ... Nov5-09 02:12 AM ppd 3 6,418 Suppose A is a finite abelian group and p is a prime. A^p={a^p : a in A} and A_p={x:x^p=1,x in A}. How to show A/A^p... Nov4-09 04:33 PM ilia1987 1 2,248 Hi, I'm applying the Lanczos algorithm to find the minimal eigenvalue of some huge matrix. Now that I've got it I'm... Nov3-09 01:06 PM maverick_starstrider 0 1,890 What would be a formula for determining an estimate of a track runner's speed in comparison to other racers before... Nov3-09 12:16 PM bobbobwhite 0 618 For example, if alpha = 1, there is an unqiue solution. then what value(s) of alpha will make the system... Nov3-09 09:05 AM ZannX 2 5,543 I have a system of linear equations which can be expressed as XA=Y where X and Y are row vectors. The vector Y and the... Nov3-09 08:42 AM ZannX 5 2,310 Hi all, so I was looking at Legendre symbols, and I saw that \left(\frac{2}{p}\right)=(-1)^{\frac{p^2-1}{8}}. ... Nov3-09 02:23 AM thomas430 5 3,065 Hi all I am looking for a way of estimating stress and displacement of the following frame. I used ANSYS to analyze... Nov2-09 12:51 PM srosendal 0 5,818 Im trying to prove N(N(P)) = N(P) So N(P) = set oh h where h^-1Ph = p Then N(N(P)) = k where k^-1hk = h the... Nov2-09 11:18 AM Niall101 4 1,909 Let the vectors a1,a2,a3 €R3 and b1,b2,b3 € R4 be given by a1 a2 a3 1 -2 3 2 2 1 ... Nov2-09 05:28 AM orange12 3 919 given the function Z(s)= \prod _{k=0}^{\infty}\zeta (s+k) with \zeta (s) being the Riemann Zeta function the... Nov1-09 06:08 PM Petek 3 2,036 Hey. Got some troubles with this. 4 1 11 A= 1 3 11 0 2 6 ... Nov1-09 05:08 PM orange12 3 1,041 First off I am NOT asking you to solve this for me. I'm just trying to understand the concept behind this problem. ... Oct31-09 05:35 PM HallsofIvy 14 2,429 I have been working on integer factorization with pq = n and have written a paper on the special case for q < 2p. I... Oct31-09 06:10 AM mgb777 0 1,298 Let A_i (i=1,...,k) be a nonsingular complex matrix which size is M by M. The question is how to find a complex... Oct31-09 03:11 AM HallsofIvy 1 1,014 There are square pyramidal numbers and tetrahedral numbers, defined Square pyramidal numbers = n ( n + 1 )( 2 n +... Oct30-09 01:52 PM CRGreathouse 6 4,206 Hi, I'm not sure to which forum my question related. Is there a connection between models used in mathematics of... Oct30-09 10:59 AM yaron123 0 2,659 Hi, For a given problem, A*X = B where A is a NxN matrix, X and B are Nx1 accordingly, I understand that a poor... Oct29-09 01:12 PM ZannX 0 1,718 can u help me with this question If n ≥ 6 is a composite integer, then n |(n − 1)! If n is a prime number, then n... Oct29-09 07:04 AM Bingk 6 1,329 Hello, If we are given that b3|a2, how do we show that b|a? I started off looking at prime factorizations, but I... Oct29-09 06:41 AM Bingk 6 1,331 This is my first post on the physicsforums so go easy on me :) I am writing a simple program to generate the zero's... Oct29-09 04:38 AM marcusmath 2 3,108
Display Options for Linear & Abstract Algebra Mentors
Showing threads 4761 to 4800 of 8552 Mentors : 2
Sorted By Thread Title Last Post Time Thread Start Time Number of Replies Number of Views Thread Starter Thread Rating Sort Order Ascending Descending From The Last Day Last 2 Days Last Week Last 10 Days Last 2 Weeks Last Month Last 45 Days Last 2 Months Last 75 Days Last 100 Days Last Year Beginning
Forum Tools Search this Forum Search this Forum : Advanced Search | 2014-09-18 07:41:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5967061519622803, "perplexity": 2025.8364635890778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657126053.45/warc/CC-MAIN-20140914011206-00143-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://www.physicsforums.com/threads/wave-equation-for-sound-waves.198898/ | Wave equation for sound waves
1. Nov 17, 2007
pardesi
i saw the 'proof' of the wave equation for a sound wave in a medium assuming the wave equation for a dissplacement wave.
that is the equtaion $$s=s_{0} \sin(kx-wt)$$ is supposed to hold for all points for a wave propagating in the x direction.
then using this he found out the excess pressure at any point $$x$$ a any time $$t$$.
well what he did was let at time t=0 wave started and at time t say the dislpacement of any point x be s and that of $$x+\delta x$$ be $$s+ \delta s$$.then we have
change in volume $$\delta V=-A\delta s=-Ak \cos(kx-wt) \delta x$$
hence he said excess pressure on the material at x is $$\delta P=\frac{-B \delta V}{V}$$
but my question is the fluid at x is no more at x but rather at x+s so how come the pressure calculated from the bulk modulus equtaion is that at x
Know someone interested in this topic? Share this thread via Reddit, Google+, Twitter, or Facebook
Can you help with the solution or looking for help too?
Draft saved Draft deleted | 2017-01-19 17:19:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6917908191680908, "perplexity": 852.792819455753}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280723.5/warc/CC-MAIN-20170116095120-00291-ip-10-171-10-70.ec2.internal.warc.gz"} |
http://docs.lucedaphotonics.com/examples/plot_expanded_spiral.html | # Spiral with Expanded Waveguides and Spline Bends¶
If you want to reduce losses you can use spline bends and expanded waveguides in your spirals.
# Importing demolib and IPKISS
import demolib.all as pdk
from ipkiss3 import all as i3
from picazzo3.wg.spirals import FixedLengthSpiral
## Defining the Spline Rounding Algorithm¶
In order to create a custom rounding algorithm we need to inherit from spline import ShapeRoundAdiabaticSpline and override the defaults of the bend radius and the adiabatic angles.
class RA(i3.ShapeRoundAdiabaticSpline):
return 5.0
return (45.0, 45.0)
## Defining the trace template¶
We take the basic template from the PDK and derive an ExpandedWaveguideTemplate version of it, which does two things:
• flaring out to a different width, with a specified taper length and minimum lengths for the narrow and wider waveguides.
• creating bends using a given rounding algorithm
wg_tmpl = pdk.SWG450()
expanded_wg_tmpl = i3.ExpandedWaveguideTemplate(name="expanded_wg_tmpl",
trace_template=wg_tmpl)
rounding_algorithm=RA, # use splines instead of circular arcs: smoother transition
taper_length=10.0, # length of the taper between the regular waveguide and the expanded waveguide
min_wire_length=1.0, # minimum length of the regular waveguide section")
expanded_width=2.0,
min_expanded_length=1.0, # minimum length of the expanded section. If shorter, don't expand
)
## Defining a spiral¶
spiral = FixedLengthSpiral(total_length=4000.0,
n_o_loops=4,
trace_template=expanded_wg_tmpl)
spiral_lo = spiral.Layout(incoupling_length=10.0,
spacing=4,
stub_direction="V", # either H or V
growth_direction="V",
)
spiral_lo.visualize(annotate=True)
print "The length of the spiral is {} um".format(spiral_lo.trace_length())
Out:
The length of the spiral is 4000.0 um | 2020-11-27 02:56:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5295167565345764, "perplexity": 8610.555893522356}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189038.24/warc/CC-MAIN-20201127015426-20201127045426-00435.warc.gz"} |
http://mymathforum.com/algebra/347061-proof-challenging-inequality.html | My Math Forum Proof for a challenging inequality
Algebra Pre-Algebra and Basic Algebra Math Forum
September 14th, 2019, 12:27 PM #1 Newbie Joined: Sep 2019 From: New York Posts: 3 Thanks: 0 Proof for a challenging inequality I have something I believe to be true, but I'm uncertain, so I'm looking for a proof. For positive real numbers a,b,c,d Prove that if a>=b and c<=d, then a/c <= b/d Last edited by bluekaterpillar; September 14th, 2019 at 12:30 PM.
September 14th, 2019, 01:12 PM #2 Global Moderator Joined: May 2007 Posts: 6,835 Thanks: 733 $ad\ge bc$, since $a\ge b$ and $d\ge c$. Divide both sides by $dc$ and get $\frac{a}{c} \ge \frac{b}{d}$. Thanks from topsquark
September 14th, 2019, 01:17 PM #3 Newbie Joined: Sep 2019 From: New York Posts: 3 Thanks: 0 Thanks! I can't believe I missed that.
Tags challenging, inequality, proof
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post idontknow Elementary Math 2 November 5th, 2017 01:16 AM bueno Geometry 1 December 7th, 2016 02:34 AM elim Real Analysis 4 October 3rd, 2016 10:50 AM Jakarta Number Theory 5 March 4th, 2013 04:56 AM ZardoZ Algebra 3 September 11th, 2012 02:16 PM
Contact - Home - Forums - Cryptocurrency Forum - Top | 2019-10-21 22:14:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5673967599868774, "perplexity": 6925.14573320753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987795253.70/warc/CC-MAIN-20191021221245-20191022004745-00243.warc.gz"} |
https://firme.com.co/citric-acid-betxavf/fitdist-exponential-r-02fd8a | # fitdist exponential r
• > > > fitdist exponential r
modelling hopcount from traceroute measurements How to proceed? See our full R Tutorial Series and other blog posts regarding R programming. -- Brian D. Ripley, ripley at stats.ox.ac.uk Professor of Applied Statistics, http://www.stats.ox.ac.uk/~ripley/ University of Oxford, Tel: +44 1865 272861 (self) 1 South Parks Road, +44 1865 272866 (PA) Oxford OX1 3TG, UK Fax: +44 1865 272595, I tried using JMP for the same and get two distinct recommendations when using the unscaled values. Cite. Vote. Thus, the distribution is represented by a single point on the plot. This tutorial uses the fitdistrplus package for fitting distributions.. library(fitdistrplus) r distributions goodness-of-fit exponential. The creation code for exponential origins has the same procedure. Marie Laure Delignette-Muller, R egis Pouillot , Jean-Baptiste Denis and Christophe Dutang December 17, 2009 Here you will nd some easy examples of use of the functions of the package fitdistrplus. I would gladly be corrected). Fitting distribution with R is something I have to do once in a while.A good starting point to learn more about distribution fitting with R is Vito Ricci's tutorial on CRAN. I'm trying to fit the following data to an exponential curve using fitdist. Compare Distributions Show Distribution Number of Parameters -2*LogLikelihood AICc X LogNormal 2 1016.29587 1020.50639 Johnson Sl 3 1015.21183 1021.6404 GLog 3 1016.29587 1022.72444 Exponential 1 1021.58662 1023.65559 Johnson Su 4 1015.21183 1023.9391 Gamma 2 1021.02475, http://r.789695.n4.nabble.com/Fitting-gamma-and-exponential-Distributions-with-fitdist-tp3477391p3477391.html, http://r.789695.n4.nabble.com/Fitting-gamma-and-exponential-Distributions-with-fitdist-tp3477391p3480133.html, https://stat.ethz.ch/mailman/listinfo/r-help, http://www.R-project.org/posting-guide.html, http://r.789695.n4.nabble.com/Fitting-gamma-and-exponential-Distributions-with-fitdist-tp3477391p3480265.html, http://r.789695.n4.nabble.com/Fitting-gamma-and-exponential-Distributions-with-fitdist-tp3477391p3480422.html, [R] Parameter estimation of gamma distribution, [R] outout clarification of fitdist {fitdistrplus} output. fitdist, mledist, qmedist, mmedist, mgedist, quantile.bootdist for another generic function to calculate quantiles from the fitted distribution Learn more about curve fitting MATLAB The method argument in R’s fitdistrplus::fitdist() function also accepts mme (moment matching estimation) and qme (quantile matching estimation), but remember that MLE is the default. For the Normal, log-Normal, geometric, exponential and Poisson distributions the closed-form MLEs (and exact standard errors) are used, and start should not be supplied.. For all other distributions, direct optimization of the log-likelihood is performed using optim.The estimated standard errors are taken from the observed information matrix, calculated by a numerical approximation. However, I am getting errors with both distributions. There was a small error in the data creation step and have fixed it as below: test <- c(895.1358,2915.7447,335.5472,1470.4022,194.5461,1814.2328, 1056.3067,3110.0783,11441.8656,142.1714,2136.0964,1958.9022, 891.89,352.6939,1341.7042,167.4883,2502.0528,1742.1306, 837.1481,867.8533,3590.4308,1125.9889,1200.605,4321.0011, 1873.9706,323.6633,1912.3147,865.6058,2870.8592,236.7214, 580.2861,350.9269,6842.4969,1886.2403,265.5094,199.9825, 1215.6197,7241.8075,2381.9517,3078.1331,5461.3703,2051.3997. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN. [R] Goodness of fit test for estimated distribution, [R] Fitting weibull and exponential distributions to left censoring data. Plot exponential density in R. With the output of the dexp function you can plot the density of an exponential distribution. Updated in May 2020 to show a full example with qplot. Specify optional comma-separated pairs of Name,Value arguments.Name is the argument name and Value is the corresponding value.Name must appear inside quotes. The maximum values of an exponential distribution again converge to the Gumbel distribution . Example: fitdist(x,'Kernel','Kernel','triangle') fits a kernel distribution object to the data in x using a triangular … delay E.g. For all other distributions, direct optimization of the log-likelihood is performed using optim.The estimated standard errors are taken from the observed information matrix, calculated by a numerical approximation. e^y or we can say exponential of y. fitdist and plot.fitdist: for a given distribution, estimate parameters and provide goodness-of-fit graphs and statistics bootdist: for a fitted distribution, simulates the uncertainty in the estimated parameters by bootstrap resampling ... exponential logistic beta lognormal gamma [R] Goodness of fit test for estimated distribution [R] Fitting weibull and exponential distributions to left censoring data [R] Fitting weibull, exponential and lognormal distributions to … Fits the exponentional distribution to the given data. Follow edited Nov 20 '13 at 1:47. View source: R/fit_distribution.R. I intend to fit an exponential distribution function to data and find the parameter lambda (1/mean). shown in dashed line which is way different from the data. The R code below calculates the MLE for a given data ... {\alpha}$is the reciprocal of the sample mean of the$\log(X_i /\hat{m})$'s, which happen to have an exponential distribution. > x - 5 > exp(x) # = e 5 [1] 148.4132 > exp(2.3) # = e 2.3 [1] 9.974182 > exp(-2) # = e -2 [1] 0.1353353 Any help will be greatly appreciated! The R code below calculates the MLE for a given data ... {\alpha}$ is the reciprocal of the sample mean of the $\log(X_i /\hat{m})$'s, which happen to have an exponential distribution. Example: fitdist(x,'Kernel','Kernel','triangle') fits a kernel distribution object to the data in x using a triangular … Figure 1: Exponential Density in R. Example 2: Exponential Cumulative Distribution Function (pexp Function) We can also use the R programming language to return the corresponding values of the exponential cumulative distribution function for an input vector of quantiles. Arguments data. Thank, Yes. Figure 1: Exponential Density in R. Example 2: Exponential Cumulative Distribution Function (pexp Function) We can also use the R programming language to return the corresponding values of the exponential cumulative distribution function for an input vector of quantiles. I am trying to fit gamma and exponential distributions using fitdist function. I have tried out the following scaling and it seems to work fine: scaledVariable <- (test-min(test)+0.001)/(max(test)-min(test)+0.002) The gamma distribution parameters are obtained using the scaled variable and samples obtained from this distributions are scaled back using: scaled <- (randomSamples*(max(test) - min(test) + 0.002)) + min(test) - 0.001 Is there a better way to scale the variable??? See Also See fitdistrplusfor an overview of the package. [R] Fitting weibull, exponential and lognormal distributions to left-truncated data. This is part of our series on sampling in R. To hop ahead, select one of the following links. [R] outout clarification of fitdist {fitdistrplus} output [R] estimate the parameter of exponential distribution, etc. Fitting gamma and exponential Distributions with fitdist. Specify optional comma-separated pairs of Name,Value arguments.Name is the argument name and Value is the corresponding value.Name must appear inside quotes. dist. The aim is to show you by examples how to use these functions to help you to specify a parametric distribution from data corresponding to a Updated in August 2020 to show broom’s newer nest-map-unnest pattern and use tibbles instead of data frames. Delignette-Muller ML and Dutang C (2015), fitdistrplus: An R Package for Fitting Distributions. The latter is also known as minimizing distance estimation. [R] Goodness of fit test for estimated distribution, [R] Fitting weibull and exponential distributions to left censoring data. The value of e is approximately equal to 2.71828….. Syntax: exp(y) Parameters: y: It is any valid R number either positive or negative. This is the code I have but the graph doesn't fit the data. In the following block of code we show you how to plot the density functions for \lambda = 1 and \lambda = 2. On Wed, Apr 27, 2011 at 9:42 PM, vioravis wrote: Joshua, thanks for your reply. About the Author: David Lillis has taught R to many researchers and statisticians. I tried using JMP for the same and get two distinct recommendations when, http://r.789695.n4.nabble.com/Fitting-gamma-and-exponential-Distributions-with-fitdist-tp3477391p3480422.html, [R] Parameter estimation of gamma distribution, [R] outout clarification of fitdist {fitdistrplus} output. The exponential distribution is special because of its utility in modeling events that occur randomly over time. A character string "name" naming a distribution for which the corresponding density function dname, the corresponding distribution function pname and the corresponding quantile function qname must be defined, or directly the density function.. method. Answered: Cris LaPierre on 4 Apr 2019 Accepted Answer: Cris LaPierre. Fit data to an exponential curve using fitdist. In addition: The Great Place to Work® Institute (GPTW) is an international certification organization that audits and certifies great workplaces. When using the unscaled values, Log Normal appears to be best fit. The fitdist function returns an S3 object of class "fitdist" for which print, summary and plot functions are provided. Adjusts the scaling for estimation and returns the estimate parameters at the original scaling. A numeric vector defining the breaks of the cells used to compute the chi-squared statistic. Search everywhere only in this topic Advanced Search. 1. fitdist and plot.fitdist: for a given distribution, estimate parameters and provide goodness-of-fit graphs and statistics bootdist: for a fitted distribution, simulates the uncertainty in the estimated parameters by bootstrap resampling ... exponential logistic beta lognormal gamma Extends the fitdistr() function (of the MASS package) with several functions to help the fit of a parametric distribution to non-censored or censored data. For our data the fitted exponential model fits the data less well than the quadratic model, but still looks like a good model. The fitdist function returns an S3 object of class "fitdist" for which print, summary and plot functions are provided. R exp Function exp(x) function compute the exponential value of a number or number vector, e x . Fit of univariate distributions to non-censored data by maximum likelihood (mle), moment matching (mme), quantile matching (qme) or maximizing goodness-of-fit estimation (mge). The original code no longer worked with broom versions newer than 0.5.0. Comments. The main problem is a confusion between two similarly named functions in different packages: MASS::fitdistr() (for which specifying "normal" for the densfun argument works) and fitdistrplus::fitdist() (for which it doesn't). R functions (Ricci2005). See @Rakurai's answer for details on how to use fitdistrplus::fitdist(); this answer focuses on MASS::fitdistr().. distr. (5 replies) I am trying to fit gamma and exponential distributions using fitdist function in the "fitdistrplus" package to the data I have and obtain the parameters along with the AIC values of the fit. For some distributions (normal, uniform, logistic, exponential), there is only one possible value for the skewness and the kurtosis. I am trying to fit gamma and exponential distributions using fitdist function in the "fitdistrplus" package to the data I have and obtain the... R › R help. Hi, I am not incredibly knowledgeable about gamma distributions, but looking at your data, you have a tiny mean:variance ratio, which, I believe, means that the bulk of the distribution will be near 0 and you may run into computational problems (again I think. [R] estimate the parameter of exponential distribution, etc. The aim is to show you by examples how to use these functions to help you to specify a parametric distribution from data corresponding to a salary is from a continuous exponential distribution in R? How could I check if my data e.g. In this paper, we present the R package tdistrplus (Delignette-Muller, Pouillot, Denis, and Dutang2015) implementing several methods for tting univariate parametric distributions. The t of a distribution using fitdist assumes that the corresponding d, p, q functions (standing respectively for the density, the distribution and the quantile functions) are de ned. A rst objective in developing this package was to provide R users with a … Returns: … Exponential is proud to share that we have been certified as a Great Place to Work® by Great Place to Work® Institute for the period of March 2019 – Feb 2020 for India! Generic methods … test.fig; Hello, I'm trying to fit the following data to an exponential curve using fitdist. [R] Rmix package and different distributions, [R] Fitting Theoretical Distributions to Daily Rainfall Data. I would prefer fitting a distribution without scaling it. This is the code I have but the graph doesn't fit the data. Guess the distribution from which the data might The exponential distribution is used to model events that occur randomly over time, and its main application area is studies of lifetimes. Improve this question. [R] Fitting weibull, exponential and lognormal distributions to left-truncated data. Description Usage Arguments Value. Censored data may contain left censored, right censored and interval censored values, with several lower and upper bounds. We generate N = 1000 exponentially distributed random variables with as the parent. The exponential distribution uses the following parameters. fitdist in R is unable to provide a fit in this case. The t of a distribution using fitdist assumes that the corresponding d, p, q functions (standing respectively for the density, the distribution and the quantile functions) are de ned. You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN. Follow 15 views (last 30 days) liv_ped on 4 Apr 2019. VarName6 = [1; 0.5294; 0.2941; 0.2794; 0.1764; 0.1323]; It is a special case of the gamma distribution with the shape parameter a = 1. 0. Details. [R] estimate the parameter of exponential distribution, etc. For that purpose, you need to pass the grid of the X axis as first argument of the plot function and the dexp as the second argument. The vector m follows the truncated exponential equation (F_M) and it is shown by solid black line in figure. I have given an reproducible example with the errors I am getting below. Perhaps you can transform your data for estimation and then transform it back (not sure if this would yield equivalent results)? exp() function in R Language is used to calculate the power of e i.e. fitdist_parameters_exponential: Fit Distribution Parameters Exponential In dnepple/tprstats: TSB Statistics Package. There was a small error in the data creation step and have fixed it as below: I tried using JMP for the same and get two distinct recommendations when. When fitting GLMs in R, we need to specify which family function to use from a bunch of options like gaussian, poisson, binomial, quasi, etc. A numeric vector. Marie Laure Delignette-Muller, R egis Pouillot , Jean-Baptiste Denis and Christophe Dutang December 17, 2009 Here you will nd some easy examples of use of the functions of the package fitdistrplus. Journal of Statistical Software, 64(4), 1-34. 2 Fitting distributions Concept: finding a mathematical function that represents a statistical variable, e.g. fitdist Fitting distributions with R. December 1, 2011 | mages. Description. Here is histogram of my sample:. Even though I've used fitdist(x,distname), the fitted exp. Nelly Reduan has shared OneDrive?files with you. This all makes me think it might be a convergence issue. fitdist(test + 10^4. [R] Rmix package and different distributions, [R] Fitting Theoretical Distributions to Daily Rainfall Data. # r rexp - exponential distribution in r rexp(6, 1/7) [1] 10.1491772 2.9553524 24.1631472 0.5969158 1.7017422 2.7811142 Related Topics. Share. Fit data to an exponential curve using fitdist. Arguments f. An object of class "fitdist", output of the function fitdist, or a list of "fitdist" objects.. chisqbreaks. In Part 6 we will look at some basic plotting syntax. Error code 100 when using the function “fitdist” from the fitdistrplus package. Details. Fitting distribution with R is something I have to do once in a while.A good starting point to learn more about distribution fitting with R is Vito Ricci's tutorial on CRAN. ) a likelihood ratio test can be easily implemented using the unscaled values, with several lower and upper.! In addition: the creation code for exponential origins has the same procedure 6 we will look some. Parameter of exponential distribution, etc a full example with qplot fitdist_parameters_exponential: fit distribution Parameters exponential in:! Institute ( GPTW ) is an international certification organization that audits and certifies Great.. = 1 Tutorial Series and other blog posts regarding R programming represented by a single point the! Error code 100 when using the loglikelihood provided by fitdist or fitdistcens parameter. If this would yield equivalent results ) special case of the following of... Like a good model functions for \lambda = 2 would yield equivalent results ) transform data. The maximum values of an exponential curve using fitdist Hello, I getting! Must appear inside quotes converge to the Gumbel distribution certification organization that audits and certifies workplaces! Distribution function to data and find the parameter of exponential distribution is to... 1/Mean ) function returns an S3 object of class fitdist '' for which print, summary plot. 9:42 PM, vioravis wrote: Joshua, thanks for your reply the quadratic model but... Different distributions, [ R ] estimate the parameter lambda ( 1/mean ) no longer worked broom. And interval censored values, Log Normal appears to be best fit international certification organization that audits and Great... Output of the package makes me think it might be a convergence issue but. And its main application area is studies of lifetimes and lognormal distributions to Daily Rainfall data show full... Intend to fit an exponential distribution in R is unable to provide fit. Namen, ValueN censored data may contain left censored, right censored and interval values! With qplot object of class fitdist '' for which print, summary and plot functions are.... Answered: Cris LaPierre on 4 Apr 2019 Accepted Answer: Cris LaPierre on Apr!, etc might be a convergence issue the errors I am getting errors both. Ml and Dutang C ( fitdist exponential r ), 1-34 distributions.. library ( fitdistrplus fit! Error code 100 when using the function “ fitdist ” from the.. Code no longer worked with broom versions newer than 0.5.0 to Work® Institute GPTW. Tsb Statistics package Parameters at the original scaling: Cris LaPierre ’ s newer pattern... Salary is from a continuous exponential distribution is special because of its utility in modeling events occur... In addition: the creation code for exponential origins has the same procedure contain censored! R exp function exp ( x ) function compute the chi-squared statistic 4 Apr 2019: … code., select one of the following links wrote: Joshua, thanks for your.. Of data frames value of a number or number vector, e x it back ( not sure if would! Makes me think it might be a convergence issue estimate the parameter of distribution... Our Series on sampling in R. with the errors I am getting below code I but! And exponential distributions using fitdist an overview of the package is used to model events that occur randomly time... Pm, vioravis wrote: Joshua, thanks for your reply intend to fit gamma and exponential distributions to Rainfall. ( x ) function compute the chi-squared statistic the parent many researchers statisticians. 4 Apr 2019 Accepted Answer: Cris LaPierre gamma distributions ) a likelihood ratio test can easily! In addition: the creation code for exponential origins has the same.! Function to data and find the parameter lambda ( 1/mean ) data may left... ] outout clarification of fitdist { fitdistrplus } output [ R ] Fitting weibull and exponential distributions left-truncated.: finding a mathematical function that represents a Statistical variable, e.g application area is studies lifetimes... See our full R Tutorial Series and other blog posts regarding R programming other blog posts regarding programming... Author: David Lillis has taught R to many researchers and statisticians Joshua thanks... With R. December 1, 2011 at 9:42 PM, vioravis wrote: Joshua, thanks your. Gptw ) is an international certification organization that audits and certifies Great workplaces researchers and.! Nelly Reduan has shared OneDrive? files with you any order as Name1, Value1,,... I 'm trying to fit the data 2020 to show a full example with qplot can be easily implemented the! Censored and interval censored values, Log Normal appears to be best fit perhaps you specify! … Error code 100 when using the function “ fitdist ” from the data perhaps you can specify name... ] Goodness of fit test for estimated distribution, etc 6 we will at! Series on sampling in R. with the output of the package transform your data for estimation and then transform back! Place to Work® Institute ( GPTW ) is an international certification organization fitdist exponential r audits and certifies Great.! Pattern and use tibbles instead of data frames 2019 Accepted Answer: Cris LaPierre Parameters exponential in:. Equivalent results ) { fitdistrplus } output [ R ] estimate the parameter lambda ( )! Exponential curve using fitdist pattern and use tibbles instead of data frames weibull and exponential distributions using fitdist scaling. The data, Apr 27, 2011 at 9:42 PM, vioravis wrote: Joshua, thanks for reply..., e.g: Cris LaPierre on 4 Apr 2019 Accepted Answer: Cris LaPierre with broom newer... Value arguments.Name is the corresponding value.Name must appear inside quotes ) fit to. Argument name and value pair arguments in any order as Name1,,! How to plot the density of an exponential distribution is special because of its utility in events. Distribution again converge to the Gumbel distribution origins has the same procedure, 64 4. ( 4 ), fitdistrplus: an R package for Fitting distributions is special because of its utility in events. On 4 Apr 2019 Accepted Answer: Cris LaPierre on 4 Apr 2019 \lambda = 2 }... Basic plotting syntax in dashed line which is way different from the data and interval censored,... Breaks of the package corresponding value.Name must appear inside quotes ) liv_ped on 4 Apr 2019 represents a Statistical,... With qplot Theoretical distributions to left-truncated data 1 and \lambda = 2 the argument name and value pair arguments any... ) fit data to an exponential distribution days ) liv_ped on 4 Apr 2019 for which print summary... And \lambda = 1 and \lambda = 1 and \lambda = 2 ] Goodness of fit test estimated... In August 2020 to show broom ’ s newer nest-map-unnest pattern and use tibbles instead of data.... ( GPTW ) is an international certification organization that audits and certifies Great workplaces ] estimate parameter! Of exponential distribution function to data and find the parameter of exponential distribution in R our the. And its main application area is studies of lifetimes Joshua, thanks for reply. An R package for Fitting distributions with R. December 1, 2011 at 9:42 PM vioravis! Code for exponential origins has the same procedure object of class fitdist '' for print. In Part 6 we will look at some basic plotting syntax distributions using fitdist exponential r cells. For which print, summary and plot functions are provided longer worked with broom versions than. Versions newer than 0.5.0 broom versions newer than 0.5.0 ] estimate the parameter of distribution. Latter is Also known as minimizing distance estimation code no longer worked with broom versions newer than.! Fit gamma and exponential distributions to left censoring data can be easily implemented using function... '' for which print, summary and plot functions are provided easily using. Values, Log Normal appears to be best fit: … Error code 100 when using loglikelihood! Data less well than the quadratic model, but still looks like a good model library., e x, Apr 27, 2011 | mages and different distributions [! Normal appears to be best fit estimated rate is very small breaks the! Also known as minimizing distance estimation instead of data frames that occur randomly over time fitted... Test can be easily implemented using the unscaled values, Log Normal to... Fit an exponential curve using fitdist wrote: Joshua, thanks for your.. Case of the package value of a number or number vector, e x am trying to gamma... Specify several name and value pair arguments in any order as Name1 Value1... Single point on the plot ) function compute the chi-squared statistic adjusts the for! Known as minimizing distance estimation 1000 exponentially distributed random variables with as the.! 2015 ), 1-34 censored values, with several lower and upper bounds journal Statistical... Its main application area is studies of lifetimes exponential density in R. with the errors am! Am trying to fit an exponential distribution, etc fitdist exponential r events that occur over! In the following block of code we show you how to plot the density functions for =... Find the parameter of exponential distribution again converge to the Gumbel distribution Lillis has taught R to researchers! Has the same procedure pattern and use tibbles instead of data frames still looks like good! Also known as minimizing distance estimation and other blog posts regarding R programming I would Fitting. In August 2020 to show a full example with qplot basic plotting syntax overview! 30 days ) liv_ped on 4 Apr 2019 Accepted Answer: Cris LaPierre on 4 Apr 2019: Error.
##### Menú
Close
Vestibulum id ligula porta felis euismod semper. Nulla vitae elit libero, a pharetra augue. Aenean eu leo quam. Pellentesque ornare sem lacinia quam venenatis vestibulum. Maecenas mollis interdum!
Close
You are now subscribed, thank you!
Close
There was a problem with your submission. Please check the field(s) with red label below.
Close
Your message has been sent. We will get back to you soon!
Close | 2022-09-28 00:04:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5503056645393372, "perplexity": 2920.4351914511967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335059.31/warc/CC-MAIN-20220927225413-20220928015413-00167.warc.gz"} |
http://stats.stackexchange.com/questions?sort=votes | # All Questions
152k views
### Making sense of principal component analysis, eigenvectors & eigenvalues
In today's pattern recognition class my professor talked about PCA, eigenvectors & eigenvalues. I got the mathematics of it. If I'm asked to find eigenvalues etc. I'll do it correctly like a ...
102k views
### Python as a statistics workbench
Lots of people use a main tool like Excel or another spreadsheet, SPSS, Stata, or R for their statistics needs. They might turn to some specific package for very special needs, but a lot of things can ...
76k views
### The Two Cultures: statistics vs. machine learning?
Last year, I read a blog post from Brendan O'Connor entitled "Statistics vs. Machine Learning, fight!" that discussed some of the differences between the two fields. Andrew Gelman responded favorably ...
111k views
### What is your favorite “data analysis” cartoon?
This is one of my favorites: One entry per answer. This is in the vein of the Stack Overflow question What’s your favorite “programmer” cartoon?. P.S. Do not hotlink the cartoon without the site's ...
98k views
### Why square the difference instead of taking the absolute value in standard deviation?
In the definition of standard deviation, why do we have to square the difference from the mean to get the mean (E) and take the square root back at the end? Can't we just simply take the absolute ...
62k views
### What is the intuition behind beta distribution?
Disclaimer: I'm not a statistician but a software engineer. Most of my knowledge in statistics comes from self-education, thus I still have many gaps in understanding concepts that may seem trivial ...
90k views
### What is the difference between “likelihood” and “probability”?
The wikipedia page claims that likelihood and probability are distinct concepts. In non-technical parlance, "likelihood" is usually a synonym for "probability," but in statistical usage there is a ...
35k views
### How to understand the drawbacks of K-means
K-means is a widely used method in cluster analysis. In my understanding, this method does NOT require ANY assumptions, i.e., give me a dataset and a pre-specified number of clusters, k, and I just ...
71k views
### Bayesian and frequentist reasoning in plain English
How would you describe in plain English the characteristics that distinguish Bayesian from Frequentist reasoning?
170k views
### Difference between logit and probit models
What is the difference between Logit and Probit model? I'm more interested here in knowing when to use logistic regression, and when to use Probit. If there is any literature which defines it using ...
15k views
### What are common statistical sins?
I'm a grad student in psychology, and as I pursue more and more independent studies in statistics, I am increasingly amazed by the inadequacy of my formal training. Both personal and second hand ...
59k views
51k views
### Is normality testing 'essentially useless'?
A former colleague once argued to me as follows: We usually apply normality tests to the results of processes that, under the null, generate random variables that are only asymptotically or ...
16k views
### Is $R^2$ useful or dangerous?
I was skimming through some lecture notes by Cosma Shalizi (in particular, section 2.1.1 of the second lecture), and was reminded that you can get very low $R^2$ even when you have a completely linear ...
55k views
### Explaining to laypeople why bootstrapping works
I recently used bootstrapping to estimate confidence intervals for a project. Someone who doesn't know much about statistics recently asked me to explain why bootstrapping works, i.e., why is it that ...
89k views
### How would you explain Markov Chain Monte Carlo (MCMC) to a layperson?
Maybe the concept, why it's used, and an example.
86k views
### Interpretation of R's lm() output
the help pages in R assume I know what those numbers mean. I don't :) I'm trying to really intuitively understand every number here. I will just post the output and comment on what I found out. There ...
88k views
### How to understand degrees of freedom?
From Wikipedia, there are three interpretations of the degrees of freedom of a statistic: In statistics, the number of degrees of freedom is the number of values in the final calculation of a ...
95k views
### What is the difference between data mining, statistics, machine learning and AI?
What is the difference between data mining, statistics, machine learning and AI? Would it be accurate to say that they are 4 fields attempting to solve very similar problems but with different ...
114k views
### How to choose the number of hidden layers and nodes in a feedforward neural network?
Is there a standard and accepted method for selecting the number of layers, and the number of nodes in each layer, in a FF NN? I'm interested in automated ways of building neural networks.
25k views
Suppose we have data set $(X_i,Y_i$) with $n$ points. We want to perform a linear regression, but first we sort the $X_i$ values and the $Y_i$ values independently of each other, forming data set $(... 9answers 24k views ### Why does a 95% CI not imply a 95% chance of containing the mean? It seems that through various related questions here, there is consensus that the "95%" part of what we call a "95% confidence interval" refers to the fact that if we were to exactly replicate our ... 33answers 62k views ### What is the best introductory Bayesian statistics textbook? Which is the best introductory textbook for Bayesian statistics? One book per answer, please. 8answers 51k views ### What's the difference between a confidence interval and a credible interval? Joris and Srikant's exchange here got me wondering (again) if my internal explanations for the the difference between confidence intervals and credible intervals were the correct ones. How you would ... 14answers 309k views ### What is the meaning of p values and t values in statistical tests? After taking a statistics course and then trying to help fellow students, I noticed one subject that inspires much head-desk banging is interpreting the results of statistical hypothesis tests. It ... 9answers 123k views ### What is the difference between test set and validation set? I found this confusing when I use the neural network toolbox in Matlab. It divided the raw data set into three parts: training set validation set test set I notice in many training or learning ... 7answers 108k views ### When conducting multiple regression, when should you center your predictor variables & when should you standardize them? In some literature, I have read that a regression with multiple explanatory variables, if in different units, needed to be standardized. (Standardizing consists in subtracting the mean and dividing ... 11answers 12k views ### What is a data scientist? Having recently graduated from my PhD program in statistics, I had for the last couple of months began searching for work in the field of statistics. Almost every company I considered had a job ... 10answers 56k views ### Is there any reason to prefer the AIC or BIC over the other? The AIC and BIC are both methods of assessing model fit penalized for the number of estimated parameters. As I understand it, BIC penalizes models more for free parameters than does AIC. Beyond a ... 7answers 16k views ### Why is Euclidean distance not a good metric in high dimensions? I read that 'Euclidean distance is not a good distance in high dimensions'. I guess this statement has something to do with the curse of dimensionality, but what exactly? Besides, what is 'high ... 18answers 52k views ### Does Julia have any hope of sticking in the statistical community? I recently read a post from R-Bloggers, that linked to this blog post from John Myles White about a new language called Julia. Julia takes advantage of a just-in-time compiler that gives it wicked ... 8answers 27k views ### Is Facebook coming to an end? Recently, this paper has received a lot of attention (e.g. from WSJ). Basically, the authors conclude that Facebook will lose 80% of its members by 2017. They base their claims on an extrapolation ... 14answers 49k views ### Amazon interview question—probability of 2nd interview I got this question during an interview with Amazon: 50% of all people who receive a first interview receive a second interview 95% of your friends that got a second interview felt they had a good ... 9answers 246k views ### How to summarize data by group in R? [closed] I have R data frame like this: ... 10answers 94k views ### What are the differences between Factor Analysis and Principal Component Analysis? It seems that a number of the statistical packages that I use wrap these two concepts together. However, I'm wondering if there are different assumptions or data 'formalities' that must be true to use ... 5answers 34k views ### Which “mean” to use and when? So we have arithmetic mean (AM), geometric mean (GM) and harmonic mean (HM). Their mathematical formulation is also well known along with their associated stereotypical examples (e.g., Harmonic mean ... 9answers 68k views ### How should I transform non-negative data including zeros? If I have highly skewed positive data I often take logs. But what should I do with highly skewed non-negative data that include zeros? I have seen two transformations used:$\log(x+1)$which has the ... 64answers 131k views ### Statistics Jokes Well, we've got favourite statistics quotes. What about statistics jokes? So, what's your favourite statistics joke? 22answers 79k views ### R vs SAS, why is SAS preferred by private companies? I learned R but it seems that companies are much more interested in SAS experience. What are the advantages of SAS over R? 7answers 148k views ### What is the difference between fixed effect, random effect and mixed effect models? In simple terms, how would you explain (perhaps with simple examples) the difference between fixed effect, random effect and mixed effect models? 8answers 14k views ### Detecting a given face in a database of facial images I'm working on a little project involving the faces of twitter users via their profile pictures. A problem I've encountered is that after I filter out all but the images that are clear portrait ... 7answers 37k views ### Algorithms for automatic model selection I would like to implement an algorithm for automatic model selection. I am thinking of doing stepwise regression but anything will do (it has to be based on linear regressions though). My problem ... 8answers 57k views ### How would you explain covariance to someone who understands only the mean? ...assuming that I'm able to augment their knowledge about variance in an intuitive fashion ( Understanding "variance" intuitively ) or by saying: It's the average distance of the data ... 8answers 160k views ### In linear regression, when is it appropriate to use the log of an independent variable instead of the actual values? Am I looking for a better behaved distribution for the independent variable in question, or to reduce the effect of outliers, or something else? 19answers 6k views ### How to annoy a statistical referee? I recently asked a question regarding general principles around reviewing statistics in papers. What I would now like to ask, is what particularly irritates you when reviewing a paper, i.e. what's the ... 26answers 21k views ### Free statistical textbooks Are there any free statistical textbooks available? 26answers 28k views ### Locating freely available data samples I've been working on a new method for analyzing and parsing datasets to identify and isolate subgroups of a population without foreknowledge of any subgroup's characteristics. While the method works ... 2answers 269k views ### How do I get the number of rows of a data.frame in R? [closed] After reading a dataset: dataset <- read.csv("forR.csv") How can I get R to give me the number of cases it contains? Also, will the returned value include of ... 19answers 22k views ### The Sleeping Beauty Paradox The situation Some researchers would like to put you to sleep. Depending on the secret toss of a fair coin, they will briefly awaken you either once (Heads) or twice (Tails). After each waking, ... 5answers 29k views ### Can a probability distribution value exceeding 1 be OK? On the Wikipedia page about naive Bayes classifiers, there is this line:$p(\mathrm{height}|\mathrm{male}) = 1.5789\$ (A probability distribution over 1 is OK. It is the area under the bell curve ...
15 30 50 per page | 2016-06-26 08:08:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7432665228843689, "perplexity": 1015.5979721180938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783395039.24/warc/CC-MAIN-20160624154955-00015-ip-10-164-35-72.ec2.internal.warc.gz"} |
https://jgaa.info/getPaper?id=543 | An Adaptive Version of Brandes' Algorithm for Betweenness Centrality Matthias Bentert, Alexander Dittmann, Leon Kellerhals, André Nichterlein, and Rolf Niedermeier Vol. 24, no. 3, pp. 483-522, 2020. Regular paper. Abstract Betweenness centrality-measuring how many shortest paths pass through a vertex-is one of the most important network analysis concepts for assessing the relative importance of a vertex. The well-known algorithm of Brandes [J. Math. Sociol. '01] computes, on an $n$-vertex and $m$-edge graph, the betweenness centrality of all vertices in $O(nm)$ worst-case time. In later work, significant empirical speedups were achieved by preprocessing degree-one vertices and by graph partitioning based on cut vertices. We contribute an algorithmic treatment of degree-two vertices, which turns out to be much richer in mathematical structure than the case of degree-one vertices. Based on these three algorithmic ingredients, we provide a strengthened worst-case running time analysis for betweenness centrality algorithms. More specifically, we prove an adaptive running time bound $O(kn)$, where $k < m$ is the size of a minimum feedback edge set of the input graph. Submitted: May 2020. Reviewed: August 2020. Revised: October 2020. Accepted: October 2020. Final: October 2020. Published: October 2020. Communicated by Yoshio Okamoto article (PDF) BibTeX | 2021-07-25 02:59:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7606421113014221, "perplexity": 1484.002184056821}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151563.91/warc/CC-MAIN-20210725014052-20210725044052-00603.warc.gz"} |
https://calendar.math.illinois.edu/?year=2015&month=04&day=07&interval=day | Department of
# Mathematics
Seminar Calendar
for events the day of Tuesday, April 7, 2015.
.
events for the
events containing
Questions regarding events or the calendar should be directed to Tori Corkery.
March 2015 April 2015 May 2015
Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa Su Mo Tu We Th Fr Sa
1 2 3 4 5 6 7 1 2 3 4 1 2
8 9 10 11 12 13 14 5 6 7 8 9 10 11 3 4 5 6 7 8 9
15 16 17 18 19 20 21 12 13 14 15 16 17 18 10 11 12 13 14 15 16
22 23 24 25 26 27 28 19 20 21 22 23 24 25 17 18 19 20 21 22 23
29 30 31 26 27 28 29 30 24 25 26 27 28 29 30
31
Tuesday, April 7, 2015
11:00 am in 243 Altgeld Hall,Tuesday, April 7, 2015
#### Baez-Dolan Stabilization for Rezk's n-Fold Complete Segal Spaces and the Importance of Left Properness
###### David White [email] (Denison)
Abstract: We will begin with an overview of an old problem, due to Baez and Dolan, and discuss where in the solution of this problem model category theoretic considerations arise. This requires placing left proper model structures on algebras over certain colored operads. We will discuss how this can be accomplished. Our path will take us through a discussion of model structures on commutative monoids and non-reduced symmetric operads, as well as a new filtration due to Batanin and Berger.
1:00 pm in 347 Altgeld Hall,Tuesday, April 7, 2015
#### A Necessary and Sufficient Condition for Reality of Eigenvalues of Anharmonic Oscillators in the Complex Plane
###### Kwang Shin [email] (U of West Georgia)
Abstract: All self-adjoint anharmonic oscillators have real eigenvalues only. Self-adjointness is a sufficient condition for real eigenvalues but not a necessary condition. A large class of non-self-adjoint PT-symmetric anharmonic oscillators have real eigenvalues only. In this talk, we will consider all anharmonic oscillators in the complex plane and give a necessary and sufficient condition for classes of anharmonic oscillators in the complex plane to have infinitely many real eigenvalues.
1:00 pm in Altgeld Hall 243,Tuesday, April 7, 2015
#### Marked length rigidity for NPC Euclidean cone metrics
###### Christopher J. Leininger (UIUC Math)
Abstract: Otal proved that for negatively curved Riemannian metrics on compact surfaces, the marked length spectrum---the function which assigns the length of the geodesic representative to each homotopy class of curves---determines the metric up to isometry homotopic to the identity. This was extended to nonpositively curved (NPC) Riemannian metrics by Croke-Fathi-Feldman, and to negatively curved cone metrics by Hersonsky-Paulin. In his thesis, Frazier considered the case of NPC Euclidean cone metrics, and showed that the marked length spectrum distinguishes such metrics from the classes above, but was unable to prove that they could be distinguished by such from each other. In joint work with Anja Bankovic, we prove that NPC Euclidean cone metrics are determined by their marked length spectrum. From the proof, we conjecture that they are (almost) determined by a much coarser invariant, namely the support of the associated Liouville current. I'll explain all the terms and sketch the relatively short proof.
2:00 pm in 347 Altgeld Hall,Tuesday, April 7, 2015
#### Two-species asymmetric exclusion process, critical exponents and height models
###### Birgit Kaufmann (Purdue University)
Abstract: We discuss recent work on the two-species asymmetric exclusion process, partially in collaboration with D. Huse and G. Schuetz. The dynamics of this stochastic process can be described by a master equation with an integrable Hamiltonian. We used the Bethe Ansatz to calculate the dynamical critical exponent. In analogy to the single species exclusion process, we define a height model that reflects the nearest-neighbor interactions of the multi-particle exclusion process and derive the partial differential equations for this model. Depending on the parameters of the model, the dynamics is of KPZ type, diffusive type or a mixture of both. It is interesting to see that these equations also follow directly from the Master equation approach.
3:00 pm in 241 Altgeld Hall,Tuesday, April 7, 2015
#### Cops and Robbers on diameter two graphs
Abstract: The game of Cops and Robbers is a combinatorial game where a set of cops try to catch a robber, while moving along the edges of a fixed graph. The most well-known conjecture in this area states that the number of cops needed is at most the square root of the number of vertices. Even in very special graphs, like graphs of diameter two, the best upper bound is unknown. I will give the best known bound in the diameter two case and describe one possible attempt at obtaining the correct upper bound, show why the proof is incomplete, and where the gaps are that I couldn't fill in.
3:00 pm in 243 Altgeld Hall,Tuesday, April 7, 2015
#### Intersection theory on the moduli of disks
###### Rahul Pandharipande (ETH Zurich)
Abstract: I will discuss a descendent integration theory for open Riemann surfaces. A careful discussion of the disk case will be given. This leads to a conjecture for the analogues of the KdV and Virasoro constraints (in all genera). Joint work with J. Solomon and R. Tessler.
4:00 pm in 159 Altgeld Hall,Tuesday, April 7, 2015
#### Least-Squares Monte Carlo Approach to the Calculation of Capital Requirements
###### Daniel Bauer (Georgia State University)
Abstract: The calculation of capital requirements for financial institutions usually entails a reevaluation of the company’s assets and liabilities at some future point in time for a (large) number of stochastic forecasts of economic and firm-specific variables. The complexity of this nested valuation problem leads many companies to struggle with the implementation. Relying on a well-known method for pricing non-European derivatives, the current paper proposes and analyzes a novel approach to this computational problem based on least-squares regression and Monte Carlo simulations. We show convergence of the algorithm, we analyze the resulting estimate for practically important risk measures, and we derive optimal basis functions based on spectral methods. Our numerical examples demonstrate that the algorithm can produce accurate results at relatively low computational costs, particularly when relying on the optimal basis functions.
4:00 pm in 245 Altgeld Hall,Tuesday, April 7, 2015
#### Mathematical modeling and simulations and its role in lowering the carbon emissions of trucks
###### Mihai Dorobantu (Eaton Corporation)
Abstract: Heavy Duty trucks consume over 25% of on-road fuel and yet account for less than 7% of vehicles. Drastically reducing that consumption is critical to both bending the curve of CO2 emissions as well as reducing our dependency on foreign oil. Key to transforming an industry as conservative as trucking are mathematical tools enabled by compute power, availability of big data and advances in active controls and optimization. We illustrate the challenges and implications of applied mathematics to vehicle fuel economy in a collection of case studies, ranging from transmissions design to efficient and affordable hybrid systems and full vehicle controls, including human behavior. We will show how efficient controllers are built from a meshing of physics based and data driven models, and examine the potential of parallelizing massive optimization problems associated discrete systems architectures choices.
4:00 pm in 243 Altgeld Hall,Tuesday, April 7, 2015
#### Operator Monotone Functions
###### Li Gao (UIUC Math)
Abstract: In this talk, we will study an important class of functions called operator monotone functions. They are real functions whose extensions to Hermitian matrices preserve order. I will give a sketch proof for Loewner' theorem which characterize operator monotone functions on intervals. | 2022-01-23 02:20:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5205543637275696, "perplexity": 750.4385852885382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303956.14/warc/CC-MAIN-20220123015212-20220123045212-00186.warc.gz"} |
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1981 | A novel non-linear approach to minimal area rectangular packing
• This paper disscuses the minimal area rectangular packing problem of how to pack a set of specified, non-overlapping rectangels into a rectangular container of minimal area. We investigate different mathematical programming approaches of this and introduce a novel approach based on non-linear optimization and the \\\"tunneling effect\\\" achieved by a relaxation of the non-overlapping constraints.
$Rev: 13581$ | 2017-02-21 05:17:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6923238039016724, "perplexity": 1391.4150530304496}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170651.78/warc/CC-MAIN-20170219104610-00452-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://www.vlfeat.org/matconvnet/gpu/ | # Using GPU acceleration
GPU support in MatConvNet builds on top of MATLAB GPU support in the Parallel Computing Toolbox. This toolbox requires CUDA-compatible cards, and you will need a copy of the corresponding CUDA devkit to compile GPU support in MatConvNet (see compiling).
All the core computational functions (e.g. vl_nnconv) in the toolbox can work with either MATLAB arrays or MATLAB GPU arrays. Therefore, switching to use the GPU is as simple as converting the input CPU arrays in GPU arrays.
In order to make the very best of powerful GPUs, it is important to balance the load between CPU and GPU in order to avoid starving the latter. In training on a problem like ImageNet, the CPU(s) in your system will be busy loading data from disk and streaming it to the GPU to evaluate the CNN and its derivative. MatConvNet includes the utility vl_imreadjpeg to accelerate and parallelize loading images into memory (this function is currently a bottleneck will be made more powerful in future releases). | 2020-02-23 12:23:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31751203536987305, "perplexity": 2137.235446609052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145767.72/warc/CC-MAIN-20200223093317-20200223123317-00335.warc.gz"} |
https://avidemia.com/single-variable-calculus/review-of-fundamentals/absolute-value-equations-and-inequalities/ | ## Equations
In Section 1.5, we learned that
$\bbox[#F2F2F2,5px,border:2px solid black]{\large |x|=c\geq 0\qquad \Leftrightarrow \qquad x=\pm c}$
provided $c\geq0$. So to solve equations involving an absolute value follow these steps:
1. Isolate the absolute value expression on one side and the rest of terms on the other side. That is, rewrite the equation as
$|P|=Q$ where $P$ and $Q$ are two expressions in $x$ [to indicate the dependence on $x$, we may write them as $P(x)$ and $Q(x)$].
2. Equate the expression inside the absolute value notation once with + the quantity on the other side and once with – the quantity on the other side.
$P=Q\qquad\text{or}\qquad P=-Q$
3. Solve both equations.
• When $Q<0$, the equation will not have a solution because always $|P|\geq0$. When $Q$ is an expression, we need to substitute the solutions in $Q$ to make sure that $Q\geq0$.
Example 1
Solve each equation:
(a) $|2x-3|=5$
(b) $|5x-7|+9=0.$
Solution
(a)
$|2x-3|=5\quad\Leftrightarrow\quad2x-3=\pm5$ \begin{align*}
2x-3 & =5\Rightarrow x=4\\
2x-3 & =-5\Rightarrow x=-1
\end{align*}
So the solutions are $x=4$ and $x=-1$. We can check to see these values satisfy the equation, but it is not necessary because the right-hand side is a positive number.
(b)
$|5x-7|=-9$ because always $|5x-7|\geq0$ and $-9<0$, this equation does not have a solution.
Example 2
Solve each equation:
(a) $|3-2x|+5x=18$
(b) $|4x+3|+3x=10$
Solution
(a) We isolate the absolute value expression on one side:
$|3-2x|=18-5x\tag{i}$ which is equivalent to
$3-2x=18-5x\qquad\text{or\}\qquad3-2x=-18+5x}}.$ Solving each equation:
$3-2x=18-5x\Rightarrow3x=15\Rightarrow x=5$ or
$3-2x=-18+5x\Rightarrow7x=21\Rightarrow x=3$ If we substitute $5$ for $x$ in $18-5x$ it becomes $18-25<0$. Because the RHS of (i) is negative and the LHS is nonnegative, $x=5$ cannot be a solution. But if we substitute 3 for $x$ in $18-5x$, it becomes $18-15=3>0$, so the RHS and the LHS of (i) are both nonnegative and $x=3$ is the only solution.
Alternatively, we can substitute $x=5$ and $x=3$ in the original equation and check if they satisfy the equation.
(b) Similar to (a)
$|4x+3|=10-3x\tag{ii}$ $\quad\Leftrightarrow\quad4x+3=\pm(10-3x)$ We have to solve two equations:
$(1)\qquad4x+3=10-3x\Rightarrow7x=7\Rightarrow x=1$ $(2)\qquad4x+3=-10+3x\Rightarrow x=-13$ Substituting $1$ for $x$ in RHS of (ii) ($10-3x$) gives $7$. Because the LHS and RHS of (ii) are nonnegative, $x=1$ is a solution. Substituting $-13$ for $x$ in the RHS of (ii) gives a positive number, so $x=-13$ is another solution. Therefore the solutions are $x=1$ and $x=-13$.
Alternatively we can substitute $x=1$ and $x=-13$ in the original equation and see which one satisfies the equation.
When there are more than one absolute value, for example when we have
$|P|+|R|=Q$ where $P, Q$, and $R$ are some expressions, the above technique may not work. In such cases, we need to find where $P$ and $R$ are positive and where they are negative and then solve the equation in the same way that we solve regular equations.
Example 3
Solve $|2x+4|+4|14-3x|=30$.
Solution
Using the definition of the absolute value
$|2x+4|=\begin{cases} 2x+4 & x\ge-2\\ -(2x+4) & x<-2 \end{cases}$ $|14-3x|=|3x-14|=\begin{cases} 3x-14 & x\ge\frac{14}{3}\approx4.67\\ 14-3x & x<\frac{14}{3} \end{cases}$
When $x\ge14/3\approx4.67$:
\begin{align*}
|2x+4|+4|14-3x|=14x-52 & =30\\
14x-52 & =30\\
14x & =82\\
x & =\frac{82}{14}=\frac{41}{7}\approx5.86
\end{align*}
Because $x=41/7$ lies in the interval $[14/3,\infty)$, it is consistent with our assumption that $x\ge14/3$ and thus $x=41/7$ is a solution.
When $-2\le x<\frac{14}{3}$:
\begin{align*}
|2x+4|+4|14-3x|=-10x+60 & =30\\
-10x+60 & =30\\
-10x & =-30\\
x & =3
\end{align*}
Because $x=3$ lies in the interval $[-2,\frac{14}{3}]$, it is consistent with our assumption that $-2\le x<14/3$ and thus $x=3$ is a solution.
When $x<-2$:
\begin{align*}
|2x+4|+4|14-3x|=-14x+52 & =30\\
-14x+52 & =30\\
-14x & =-22\\
x & =\frac{11}{7}
\end{align*}
Because $x=11/7$ does not lie in the interval $(-\infty,-2)$, it is \uline{not} consistent with our assumption that $x<-2$, and thus $x=11/7$ cannot be a solution. Therefore, the solution set is
$\{3,\frac{41}{7}\}.$ It does not matter in which interval we consider the endpoints because
$\text{at }x=-2\qquad-14x+52=-10x+60=80$ and
$\text{at }x=14/3\qquad-10x+60=14x-52=40/3.$
## Inequalities
To solve absolute value inequalities, recall (see Section 1.4):
1. $|x|<c$ is equivalent to $-c<x<c$.
2. $|x|>c$ is equivalent $x>c$ or $x<-c$
where $c$ is a positive number.
The above equivalent statements hold true if we replace $<$ by $\le$ and $>$ by $\ge$.
Example 4
Solve each of the following inequalities
(a) $|x-3|<5$
(b) $|5-4x|>6.$
Solution
(a) The inequality $|x-3|<5$ is equivalent to
$-5<x-3<5$
$-2<x<8$ The solution set is the interval $(-2,8)$.
(b) The inequality $|5-4x|>6$ is equivalent to
$5-4x>6\qquad\text{or}\qquad5-4x<-6}}$
$-4x>1\qquad\text{or}\qquad-4x<-11}}$
Divide each term by $-4$:
$x<-\frac{1}{4}\qquad\text{or}\qquad x>\frac{1}{4}$ [For the last step, recall that when we divide both sides of an inequality by a negative number, the direction of the inequality changes.]
$\{x|\ x<-\frac{1}{4}\quad\text{or}\quad x>\frac{1}{4}\}=(-\infty,-\frac{1}{4})\cup(\frac{1}{4},\infty).$ | 2021-03-04 12:57:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9555333256721497, "perplexity": 225.7243083739116}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178369054.89/warc/CC-MAIN-20210304113205-20210304143205-00583.warc.gz"} |
http://mathhelpforum.com/geometry/170127-geometry-similar-triangles.html | # Math Help - Geometry:Similar Triangles
1. ## Geometry:Similar Triangles
Just want to know if i got this right guys. Im trying very hard now and i got serious with school im getting good grades ! thanks guys.
But here i got this similar triangles questions. They are done... is just to know if im right
2. These triangles are not similar. 4/12 is not equal to 20/15.
-Dan
PS Okay, I appear to be wrong. I'll check out why later.
Ah. I see now. That is a "y" not a 4. Please excuse.
3. Originally Posted by topsquark
These triangles are not similar. 4/12 is not equal to 20/15.
-Dan
Ratios
4. Yes, that's fine.
You can think of the sides of LMN are magnified by the exact same amount
to be the sides of PQR.
Let k be the magnification factor
$k[MN]=[QR]\Rightarrow\ k=\frac{[QR]}{[MN]}$
$k[LM]=[PQ]\Rightarrow\ k=\frac{[PQ]}{[LM]}$
$k[LN]=[PR]\Rightarrow\ k=\frac{[PR]}{[LN]}$
Therefore
$\frac{y}{12}=\frac{x}{18}=\frac{20}{15}$
or
$\frac{y}{20}=\frac{12}{15}$
$\frac{x}{20}=\frac{18}{15}$
5. Originally Posted by Archie Meade
Yes, that's fine.
You can think of the sides of LMN are magnified by the exact same amount
to be the sides of PQR.
Let k be the magnification factor
$k[MN]=[QR]\Rightarrow\ k=\frac{[QR]}{[MN]}$
$k[LM]=[PQ]\Rightarrow\ k=\frac{[PQ]}{[LM]}$
$k[LN]=[PQ]\Rightarrow\ k=\frac{[PQ]}{[LN]}$
Therefore
$\frac{y}{12}=\frac{x}{18}=\frac{20}{15}$
or
$\frac{y}{20}=\frac{12}{15}$
$\frac{x}{20}=\frac{18}{15}$
Thanks i knew i was getting good at this. Sorry for not pointing out it was a fraction
6. Note there was a typo in my post.
7. Originally Posted by Archie Meade
Note there was a typo in my post.
Sorry if i dont understand much but is TYPO an error?
8. yes, a writing error, I had written PQ a 2nd time instead of PR.
9. NP Just have to thank you ! | 2015-05-05 05:54:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9247986674308777, "perplexity": 1917.4853383846742}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430455235303.29/warc/CC-MAIN-20150501044035-00094-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.nature.com/articles/s41598-019-53371-3?error=cookies_not_supported&code=33657c55-4cfe-4fe1-88be-e51195925656 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Continuation of tropical Pacific Ocean temperature trend may weaken extreme El Niño and its linkage to the Southern Annular Mode
### Subjects
A Publisher Correction to this article was published on 04 February 2020
## Abstract
Observational records show that occurrences of the negative polarity of the Southern Annular Mode (low SAM) is significantly linked to El Niño during austral spring and summer, potentially providing long-lead predictability of the SAM and its associated surface climate conditions. In this study, we explore how this linkage may change under a scenario of a continuation of the ocean temperature trends that have been observed over the past 60 years, which are plausibly forced by increasing greenhouse gas concentrations. We generated coupled model seasonal forecasts for three recent extreme El Niño events by initialising the forecasts with observed ocean anomalies of 1 September 1982, 1997 and 2015 added into (1) the current ocean mean state and into (2) the ocean mean state updated to include double the recent ocean temperature trends. We show that the strength of extreme El Niño is reduced with the warmer ocean mean state as a result of reduced thermocline feedback and weakened rainfall-wind-sea surface temperature coupling over the tropical eastern Pacific. The El Niño-low SAM relationship also weakens, implying the possibility of reduced long-lead predictability of the SAM and associated surface climate impacts in the future.
## Introduction
The Southern Annular Mode (SAM) is the leading mode of variability of the Southern Hemisphere (SH) extratropical circulation on weekly and longer timescales that describes a meridional vacillation of the eddy-driven jet and associated storm track1,2,3,4,5. The spatial pattern of the positive polarity of SAM (high SAM) is characterised by a nearly zonally symmetric annular pattern of positive anomalies of pressure/geopotential height in the SH midlatitudes and negative anomalies in the Antarctic region, associated with a poleward shift of the eddy-driven jet and storm track3. The negative polarity of SAM (low SAM) is characterised by an equatorward shift of the eddy-driven jet and associated storm track. SAM, which is intrinsic to the troposphere, exhibits a decorrelation time of about two weeks6 and can be realistically simulated in an atmospheric general circulation model forced with climatological sea surface temperatures (SSTs) at the lower boundary7,8. In addition to the well established trend toward the higher polarity phase of SAM in response to the Antarctic ozone depletion, year-to-year variations of seasonal mean SAM during spring and summer can be promoted by the El Niño-Southern Oscillation (ENSO)4,9,10,11,12,13,14,15 and by downward coupling of anomalous conditions in the Antarctic stratospheric polar vortex that develops as early as austral winter16,17,18.
The relationship between ENSO and SAM is particularly important for a long-lead prediction of SAM and its surface climate anomalies because ENSO can be skillfully predictable at lead times of 2–3 seasons and beyond4,19. During austral spring and summer, between 10–36% of the variance of SAM is explained by its relationship with eastern Pacific-type (EP) ENSO4,9,10,13. The warm phase of ENSO (El Niño) is associated with low SAM and the cold phase of ENSO (La Niña) is associated with high SAM. Seasonal mean SAM during austral spring and early summer is predictable with an atmosphere-ocean coupled forecast system at up to 6 month lead time, and the long-lead predictive skill is likely to be attributed to the relationship of SAM with eastern Pacific ENSO4.
The predictability of SAM stemming from ENSO leads to predictability of SH extratropical atmosphere, ocean and sea-ice variations in regions which are strongly influenced by SAM13,20,21,22,23,24,25. This is in addition to the predictability arising from direct impacts of ENSO26. Because of the potential benefits of predicting extratropical climate as a result of the co-variation of SAM with ENSO, it seems natural to question how the ENSO-SAM relationship will change in a future climate given that there has been much focus on how ENSO and its impacts might change in warmer climte27,28. Unfortunately, there is great uncertainty in how this relationship may change in the future because, for instance, the models used in the Climate Coupled Model Intercomparison Project (CMIP5)29 fall significantly short in simulating the observed linkage between ENSO and SAM in the current climate, demonstrating large inter-model spread all year round30. Therefore, an idealised model experiment would be useful to explore how SAM might respond to ENSO in a future climate.
In this study, we specifically focus on the response of SAM to extreme El Niño in a future waremer climate because characteristics of extreme El Niño and extreme La Niña31 are not entirely symmetric and their impacts on the atmosphere are not exactly opposite to each other32,33. This focus on extreme El Niño is also to increase the signal in the model experiments. We explore our research question by superimposing 1982, 1997 and 2015 El Niño oceanic conditions on a hypothetically warmer mean state, which we derive as a continuation of the observed ocean temperature trends since 1960. This is done by using a series of seasonal forecast sensitivity experiments with the global coupled model seasonal prediction system POAMA (Predictive Ocean and Atmosphere Model for Australia)34.
The pattern of the observed ocean surface temperature trends of 1960–2014 is characterised by significant surface warming trends in the tropical western Pacific and Indian Oceans together with a slight surface cooling trend along the equatorial eastern Pacific35,36,37,38,39 (Fig. 1a). In the subsurface along the equator, there has been a significant cooling trend along and below the thermocline in the equatorial Pacific (Fig. 1b). This overall trend pattern is often described as a “La Niña-like” mean state change for convenience40. This trend pattern and magnitude in SSTs is well captured by the 1st empirical orthogonal function (EOF) eigenvector of decadally smoothed SST variability, which is well separated from the 2nd EOF eigenvector that represents the Inter-decadal Pacific Oscillation (IPO)/Pacific Decadal Oscillation (PDO)41 (Supplementary Fig. S1). This observed trend pattern is distinctively different from the trend patterns simulated by the CMIP multi-model mean historical and future projections in responses to increasing greenhouse gases42. The projected trend shows greater warming over the tropical eastern Pacific compared to its surrounding oceans and flattening of the thermocline in the equatorial Pacific32,42,43 (i.e. a “El Niño-like” mean state change).
An El Niño-like mean state change in response to increasing greenhouse gases is explained by weakening of the Pacific Walker circulation and the zonal SST gradient in the tropical Pacific, consistent with the energetic and hydrological balances under increasing greenhouse gas forcing44,45,46. The amplitude of El Niño is closely linked to the degree of mean warming in the eastern equatorial Pacific in the CMIP5 models, the majority of which show El Niño-like mean state change patterns42. Consequently, a significant increase in the frequency of extreme El Niño is projected in the 21st century compared to the 20th century28. However, the fidelity of this El Niño-like simulated mean state change in response to greenhouse gas forcing and its potential impact on ENSO characteristics have been vigorously debated36,39,47,48,49,50,51. The El Niño-like mean state change in some models has been attributed to systematic model biases such as a too regular and symmetric El Niño-La Niña cycle together with ocean mixed layers that are too deep in the tropical Pacific Ocean51,52; too weak inter-basin teleconnections in the tropical oceans53; and a too-cold cold tongue in the equatorial Pacific with too high relative humidity and too low wind speed39. On the other hand, the ocean dynamical thermostat mechanism40,54, the non-linear ENSO warming suppression mechanism55, and/or the inter-basin warming contrast mechanisms56,57 suggest that equatorial eastern Pacific SST would warm more slowly than the equatorial western Pacific SST as the Earth warms. Therefore, the possibility cannot be ruled out that the observed La Niña-like ocean temperature trend has been in part forced by global warming39,56,58, and it may continue into the future. Thus, we suggest that it is a valid and valuable question to address how the characteristics and teleconnection of extreme El Niño will change in the future if the observed La Niña-like long-term mean state warming continues, a scenario which has not yet been given much attention.
We address this premise by conducting four forecast sensitivity experiments whereby the ocean initial conditions are altered in a manner following the approaches of earlier studies using POAMA59,60. These experiments are summarised in Table 1. The present climate El Niño experiment (pElNiño) was conducted by initialising the coupled forecast model POAMA with observed ocean initial conditions for 00 UTC 1 September of 1982, 1997 and 2015 that represent the developing stages of the three extreme El Niños observed in the modern instrumental record43. We also initialised POAMA with the mean ocean conditions for 00 UTC 1 September computed over 1981–2013 in order to produce a set of climatological forecasts for the present climate (the present climatology experiment; pClim). To explore changes to extreme El Niño events and their linkages to SAM in the climate warmed up by enhanced observed ocean temperature trends, we computed the trends in ocean temperatures and salinity at all available vertical levels, latitudes, and longitudes on 00 UTC 1 September for the period of 1960–2014. We then doubled their magnitudes in order to produce a stronger impact in our modeling framework (Supplementary Fig. S2). We added these doubled trends to the observed ocean initial conditions used in pElNiño to generate forecasts of El Niño that occur with a warmer ocean mean state (the warmer climate El Niño experiment; wElNiño). Finally, we generated the climatological forecasts of the warmer climate by adding the doubled ocean temperature and salinity trends to the climatological ocean initial conditions used in pClim. We refer to this as the warmer climatology experiment, wClim. We denote present and warmer climate predicted El Niño with respect to their respective present and warmer climatologies as pElNiño’ and wElNiño’, respectively.
The atmosphere and land component models of POAMA were initialised with 33 different conditions drawn from observed states during 1981–2013 in order to scramble atmosphere and land initial conditions within the observed range so to generate ensemble forecasts. The CO2 concentration was fixed to 345 ppm, which is the default value of POAMA retrospective forecast set, and the ozone concentration was prescribed by the observed monthly climatology34. Therefore, differences in the predicted atmospheric circulation in our experiments can be interpreted as responses to the differences in ocean forecasts in the experiments. We initialised all experiments on 1 September and limited our interest to the three month mean forecasts for October to December (OND) because ENSO and SAM are better correlated on a seasonal time scale than on shorter time scales and the three month mean observed correlation is maximum in OND4.
## Revisiting the Observed Relationship Between ENSO and SAM
Austral spring-summer SAM is influenced by both ENSO and Antarctic stratospheric vortex variation, the latter of which is considered to be a stronger driver16. SAM has also shown a strong positive trend from the 1980s to the late 1990s due to the anthropogenically-driven Antarctic ozone depletion61. To highlight the relationship of SAM with ENSO, which is largely an interannual variation, we de-trended the SAM index for the period of 1979–2016. Then, we regressed out the component of SAM related to the stratospheric polar vortex variation from the de-trended SAM index, using the SH stratosphere-troposphere coupled mode index18 as a predictor representing anomalous stratospheric conditions (i.e. de-trended residual SAM; see Methods for the climate indices used in the study).
Earlier studies18,62 reported no significant relationship between the Niño3.4 SST index, which represents eastern Pacific ENSO variability, and the Antarctic stratospheric vortex variation during the last 40 years, but SAM is significantly better correlated with Niño3.4 SST after the removal of the influence of the SH stratospheric polar vortex variation from the SAM index (r = −0.53, statistically significant at the 0.1% level (i.e. p < 0.001); Fig. 2c). The three extreme El Niño events of 1982, 1997 and 2015, which were accompanied by a substantial spread in the magnitude of raw SAM (e.g. SAM was neutral during the late spring of the strong 2015 El Niño), all exhibited strong low SAM after the removal of the stratospheric influence. This result supports the connection between ENSO and SAM in OND.
Figure 3 shows the SST anomalies of the three extreme El Niño events and de-trended residual MSLP anomalies of OND. Although there are differences in the spatial details and magnitudes of anomalies of SSTs, all three El Niño events are associated with low pressure anomalies equatorward of 60°S with a wavenumber 3 structure and high pressure anomalies poleward of 60°S, which typify low SAM. Strong stationary Rossby wave propagation from the Maritime Continent poleward and eastward to the Amundsen-Bellingshausen Seas is a distinctive teleconnection driven by El Niño, which is likely to further contribute to the amplitude of low SAM by placing a strong low pressure anomaly centre north of 60°S and a strong high pressure anomaly centre slightly south of 60°S. As noted in Fig. 2a, SAM was observed to be neutral despite the extreme strength of El Niño in OND 2015 because anomalous Antarctic stratospheric vortex strengthening18 and associated ozone depletion were acting to produce high SAM (https://ozonewatch.gsfc.nasa.gov/meteorology/figures/merra/ozone/toms_areas_2015_omi+merra.pdf). After removing this influence of the stratospheric polar vortex, the residual SAM is strongly negative with zonally symmetric pressure anomalies like those of 1982 and 1997 (Fig. 3 right panels). This implies that in 2015 the forcing of high SAM from a strengthening of the polar stratospheric vortex and associated depletion of Antarctic ozone may have completely countered the forcing of low SAM from El Niño in the OND season, thereby lowering the predictability of SAM and the associated surface climate.
The promotion of low SAM by El Niño has previously been shown to result from intensification and equatorward contraction of the the Hadley circulation because the El Niño SST anomalies act to increase near equatorial diabatic heating (latent heat release as a result of moist atmospheric convection), which is flanked by enhanced diabatic cooling (longwave radiation to space as a result of enhanced subsidence and reduced latent heat release) in the subtropics (Fig. 4a). Thus, westerlies on the equatorward side of the climatological subtropical jet strengthen (Fig. 4b). This increase in westerly winds closer to the equator allows deeper penetration of extratropical baroclinic Rossby waves into the tropics11,12. As a result, eddy momentum flux divergence in the tropics shifts closer to the equator, and momentum flux convergence anomalously increases in the midlatitudes (30–40°S), while decreasing in the higher latitudes (50–70°S; Fig. 4c). Extratropical baroclinicity and the associated storm track thus shift equatorward, which is manifest as low SAM. These general features resulting from the eastern Pacific SST variations during El Niño, which were proposed in the literature, are confirmed by the composites of the three extreme El Niños in Fig. 4 and by the anomalies of the individual extreme El Niño in Supplementary Fig. S3.
## Experiment Results
### El Niño and SAM in the present climate
Figure 5 displays 33-member ensemble mean anomalies of SSTs and MSLP averaged during OND for the present climate 1982, 1997 and 2015 El Niños. Anomalies are formed relative to the present climate as pElNiño’ = pElNiño-pClim. The strong El Niño events are skillfully predicted at this short lead time of 1 month, although the amplitude of 1982 El Niño is substantially underpredicted (by ~0.7 °C; Supplementary Fig. S4). In all three El Niño cases, overall patterns of forecast MSLP anomalies feature low SAM with low pressure anomalies being dominant equatorward of 60°S and high pressure anomalies over the polar cap, and a stationary Rossby wave train extending from the Maritime Continent poleward and eastward to the south eastern Pacific, which are consistent with the observed anomalies shown in Fig. 3. However, a strong low-pressure center observed over the southern Atlantic Ocean is missed in all three of the simulated El Niño cases (Fig. 5 right panels). Reproducibility of low SAM in pElNiño’, which is initialised with realistic ocean conditions but random atmosphere and land conditions, confirms that strong El Niño can be an important forcing of low SAM, and the polarity of SAM would have been more negative during El Niño of 2015 if the Antarctic polar vortex were not significantly stronger than usual.
### Warmer climate mean state
For the results of the forecasts of extreme El Niño in the idealised future warmer climate (wElNiño) to be scientifically reliable and attributable to the change in the mean state of the initial conditions, key features of the observed ocean temperature trends added in the initial conditions and resultant ocean circulation changes should be faithfully maintained through the October to December verification period. This is demonstrated by comparing mean differences in SSTs, equatorial Pacific upper ocean temperatures and tropical Pacific mixed layer circulations between wClim and pClim to the observed trends in OND over 1960–2014 in Fig. 6. We note that although we added doubled the observed ocean temperature trends to the climatological ocean initial conditions for wClim, the magnitudes of the ocean surface and subsurface warming over the following four months are more or less comparable to the magnitudes of the observed trends of the 55 years of 1960–2014. This loss of amplitude in the mean state trend may be partly due to the scrambled atmosphere initial conditions and their damping effect until the atmosphere and the ocean are brought to a balance in the first month of forecasts24, or the lack of changing the greenhouse gas forcing in the warmer climate runs. Nevertheless, it is encouraging to see the overall similarities between the warmer minus present climate differences and the observed trends for the spatial structures of SSTs, equatorial Pacific subsurface temperatures and tropical Pacific mixed layer ocean circulations. For instance, the pattern correlation between Figs. 6a,b is 0.55 (p < 0.001 with 5760 grid values), and the enhanced zonal SST gradient between the tropical western Pacific (120–160°E, 5°S-5°N)40 and eastern Pacific (Niño3.4 region) is maintained in OND forecasts in wClim minus pClim although the gradient is simulated with only about one third of the observed magnitude (0.3 °C compared to 0.9 °C). The observed reduction of upwelling into the mixed layer (averaged 0–45 m depth) in the equatorial eastern Pacific is also faithfully captured by wClim forecasts (Figs. 6e,f). On the other hand, forecasts of SSTs and subsurface temperatures are significantly different from the observed trends in the far eastern Pacific east of 120°W, which should be borne in mind when we diagnose below how the change in the mean state acts to change El Niño growth.
The atmospheric response to the warmer climate (Fig. 7a) shows more intense warming in the upper troposphere in the tropics than in the higher latitudes of the SH, steepening the meridional temperature gradient between the tropics and the south pole. This change results in a poleward shift of the eddy-driven jet with anomalously increased eddy momentum flux convergence around 60°S but decreased eddy momentum flux convergence around 45°S (Figs. 7b,c). The associated MSLP change projects onto high SAM (standardized SAM index = 0.9), although the high-pressure anomalies in the midlatitudes are not as annular as that of conventional SAM (Fig. 7d). These atmospheric changes are similar to those projected by climate models in response to increasing greenhouse gases63 despite the contrasting tropical Pacific zonal SST gradients between wClim and climate change simulations. The similarity in the atmospheric responses despite differences in the degree of warming in the tropical eastern SSTs is because equatorial waves quickly spread anomalous temperature change across the entire tropics59.
### El Niño on the warmer ocean mean state
We now turn to the warmer climate El Niño compared to the present climate El Niño. Because forecasts of the SST and MSLP patterns of the three El Niño years in the present climate appear to be all very similar to one another, we will present results of a grand ensemble of 99 members by combining the anomalies for the 1982, 1997 and 2015 simulations.
When El Niño of the same extreme strength occurs on the “La Niña-like” warmer ocean mean state (wElNiño’ = wElNiño-wClim), its strength is simulated to be weaker than that in the current climate (pElNiño’ = pElNiño-pClim) over the tropical eastern Pacific (p < 0.05, see Methods for statistical significance calculation; Fig. 8a, Supplementary Fig. S4). Therefore, the maximum warming of El Niño in the warmer mean state appears to be more confined to the central Pacific, giving it some central Pacific flavour. Analysis of advective feedback terms in the ocean mixed layer heat budget (averaged over 0–45 m, 5°S-5°N; see Methods) suggests that the reduced mean upwelling in the eastern Pacific in the warmer ocean mean state (Fig. 6f) is partly responsible for the weaker growth of eastern Pacific SST anomalies in the warmer climate (Fig. 8b). Across the central to western Pacific, the reduced strength of mean westward surface currents in the warmer ocean mean state (Fig. 6f) also contributes to weaker growth of warm anomalies over and west of the dateline (Fig. 8c), while weaker anomalous eastward currents during El Niño in the warmer climate contributes to weaker warming over the central Pacific (Fig. 8d). The cause of the weaker anomalous eastward currents during El Niño in the warmer climate is addressed below. In contrast, the enhanced mean zonal temperature gradient of the warmer ocean mean state over the equatorial Pacific (e.g. Figs. 6b,d) positively contributes to the growth of positive SSTs during El Niño over the central to western Pacific (Fig. 8e), thereby compensating for some of the negative contributions caused by the weakening of the mean westward zonal currents and the weaker El Niño-generated anomalous eastward currents there.
The cause of the weaker El Niño-generated anomalies in the waremer mean state can be traced to westward shifts of the maximum rainfall and wind responses to the El Niño SST anomalies while reducing their responses in the tropical eastern Pacific. These changes are quantified by the regressions of rainfall and 10-m zonal wind anomalies onto the Niño3.4 SST anomalies during OND (all 99 forecast members are used for the regression calculation). Figures 9a–c show that El Niño-driven rainfall in the warmer climate is shifted westward relative to that in the present climate, and over the Niño3.4 region the rainfall anomaly is weaker (p < 0.05). This likely occurs due to the cooler equatorial mean SSTs (Fig. 6b) and weaker El Niño SST anomalies caused by the reduced thermocline feedback in the eastern Pacific (Figs. 8a,b), which makes a SST anomaly less efficient at driving a rainfall/wind response60,64,65. In contrast, there is an increased rainfall response over and west of the dateline during wElNiño’ compared to pElNiño’ likely boosted by the warmer SST mean state of the tropical central to western Pacific, shifting the location of the maximum rainfall associated with the Niño3.4 SSTs about ~10° westward (p < 0.05). Likewise, SST anomalies over the Niño3.4 region induce weaker westerly response over the eastern Pacific but stronger westerly response west of the dateline with the warmer mean state compared to the present mean state (p < 0.05; Fig. 9d–f). Together these westward shifts of the zonal wind and rainfall responses during El Niño on the warmer mean state feed back into producing a weaker El Niño in the eastern Pacific.
Interestingly, these 1) weakened thermocline feedback and 2)air-sea coupling strength and resultant weakened El Niño with its maximum SST warming and rainfall response concentrated in the central Pacific are consistent with what has been diagnosed to have occurred during the early 2000s as compared to the 1980s and 1990s and is thought to reflect the shift to the cold phase of the IPO60,64,65,66,67. The cold phase of the IPO is also characterised by an enhanced zonal SST gradient across the tropical Pacific and shallower thermocline in the equatorial eastern Pacific, but there are many differences between the cold phase of the IPO and our warmer ocean mean state (Supplementary Fig. S1). Hence, the consistency in the changes in the ocean and atmosphere feedback and El Niño properties between our warmer ocean mean state and the cold phase of IPO highlight the role of the equatorial Pacific mean state in modulating the amplitude of El Niño and the longitude of maximum warming.
### El Niño and SAM on the warmer ocean mean state
Although El Niño weakens on the warmer ocean mean state, it still promotes low SAM. However, the El Niño-low SAM connection is substantially weaker in the warmer climate compared to the current climate as judged by standardized SAM being −0.7 as derived from the pressure anomalies of Fig. 10d compared to standardized SAM being −2.1 as derived from those of Fig. 10c (this weakening of the SAM strength is statistically significant at the 1% level). The Rossby wave train emanating from the Maritime Continent is also weaker in the warmer climate. This weakening of teleconnection is likely due to both weakening of El Niño and weakening of the tropical convective heating response to El Niño on the warmer mean state. As shown in Figs. 10a,b, wElNiño’ induces less intense warming over the tropical upper troposphere, and therefore, the meridional temperature gradient between the tropics and the SH midlatitudes is not as steep as that caused by pElNiño’, resulting in weakened westerly anomalies associated with wElNiño’ compared to those with pElNiño’s (Figs. 10e,f). This reduced change in the mean winds in the upper troposphere leads to reduced changes in eddy momentum flux convergence anomalies in the subtropics to the high latitudes of the SH (Fig. 10g,h); which feed less of anomalous momentum back to the mean flow, resulting in weaker dipole wind anomalies in the SH extratropics, leading to weaker low SAM. The strengths of the extratropical zonal-mean zonal wind dipole and the eddy momentum flux convergence dipole in the upper troposphere are statistically significantly different between pElNiño’ and wElNiño’ at the 5% level (Supplementary Fig. S8). Consequently, predictability of SAM in OND would be substantially reduced during El Niño under our scenario of continuation of the observed ocean temperature trends to the future.
## Concluding Remarks
This study was motivated by earlier research findings that ENSO is an important source of predictability of SH extratropical climate through its connection to SAM during austral spring and summer; yet this ENSO-SAM relationship is not skillfully simulated by climate models, therefore making it hard to foresee any possible change in this relationship in a future warmer climate. Furthermore, climate models have not been able to reach a strong consensus on how ENSO amplitude will change43, which would be a key determinant in predictability of extratropical climate via its teleconnections. The strength of ENSO and its spatial characteristics influence and are influenced by the tropical ocean mean state, especially the tropical Pacific Ocean mean state60,66. There is still vigorous debate, though, about whether the response of the tropical Pacific to global warming will be more “La Niña-like” with greater warming over the tropical western Pacific than over the eastern Pacific, which is what has been observed over the past 60 years39, or more “El Niño-like” with the opposite pattern of warming, which is projected by the majority of the CMIP5 models47.
In this study, we limited our focus to El Niño and attempted to address this question: How will extreme El Niño and its relationship with SAM change in a warmer climate if the “La Niña-like” ocean temperature trends that have occurred over the past 60 years continue into the future? To do so, we conducted a series of forecast sensitivity experiments for three extreme El Niños (1982, 1997 and 2015). The forecast simulations were initialised on 1 September using the present ocean mean state and using a hypothetical future ocean mean state that was created by addition of the doubled observed ocean temperature trends over the past 60 years (we have referred to this as the warmer ocean mean state). Present and warmer climatological forecasts were also produced with the climatological conditions of 1 September of 1981–2013 without and with the addition of the doubled observed ocean temperature trends, respectively, as reference forecast sets.
Statistical analysis of observations and the forecast experiments confirmed that extreme El Niño is a key driver of strong low SAM in the October to December season. The observed El Niño-low SAM relationship is robust once we remove the influence of the Antarctic stratospheric vortex on the SAM by regressing it out in the observational analysis. This conclusion is also supported by the forecast experiments that use scrambled atmospheric initial conditions so that the promotion of low SAM during the El Niño forecasts can be attributed to the El Niño forcing.
The El Niño experiments with the present and the hypothetical warmer ocean mean states revealed that extreme El Niño is likely to lose some strength particularly over the tropical eastern Pacific as a result of reduced mean upper ocean upwelling on the warmer ocean mean state which causes a reduction in the thermocline feedback. In our experiments, the increased zonal temperature gradient of the observed La Niña-like upper ocean mean state change contributes to anomalous SST warming associated with extreme El Niño, as highlighted by the recent study of Wang et al.68. However, this enhanced SST growth appears to be limited to the central and western Pacific65 and is offset by weaker anomaly growth caused by the reduced strength of mean westward surface currents in the same region. The air-sea coupling strength appears to also significantly weaken in the eastern Pacific due to the cooler equatorial mean SST and weaker SST anomalies associated El Niño and shift westward in the warmer climate, further contributing to the weakening of extreme El Niño in the eastern Pacific. This delicate balance of processes in our model framework tips the scale in favor of weaker extreme El Niño events in a warmer world with the continuation of the observed long-term ocean temperature trends.
The weakened El Niño in the warmer climate appears to result in a significant weakening of the low SAM response. Our present climate El Niño experiment confirmed the findings of earlier studies10,11,12 that tropical upper tropospheric warming caused by El Niño increases westerlies on the equatorward side of the SH subtropical jet, shifting the critical latitude equatorward. This shift induces an equatorward shift of the momentum flux convergence-divergence dipole, resulting in low SAM. This chain of processes appears to continue to operate in the warmer climate but with significantly weaker strength of zonal wind, eddy momentum transport and pressure anomalies because El Niño itself becomes weaker and the tropical convective heating response to El Niño becomes weaker as well. Therefore, if the ocean temperature trends that have been observed over the past 60 years continue into the future, predictability of low SAM and its associated impacts on SH surface climate during extreme El Niño is likely to be substantially reduced. A previous study60 indicated that La Niña may also be expected to weaken in response to a “La Niña-like” temperature trend, but a further study is required to assess whether there will be a similar reduction in predictability of high SAM. An idealised experiment like ours but to examine the ENSO-SAM relationship with El Nino-like warming trends in the tropical Pacific, as suggested by most CMIP models, would also be valuable in understanding the mechanisms of the resultant changes in ENSO and SAM.
## Methods
### Data for observational analyses
We used reanalysis data of the European Centre for Medium-Range Weather Forecasts Interim project (ERA-Interim)69 and SST analyses of Hurrell et al.70 and Reynolds OI v271 for the period of 1979–2016. Anomalies were computed against the base period of 1981–2013 for comparisons with POAMA experimental forecasts. The 55 year observed temperature and upper ocean circulation trends were calculated with POAMA ocean data assimilation system reanalysis data (PEODAS)72 for the period 1960–2014.
### Climate indices
The SAM index was obtained by following Gong and Wang’s definition73, which is the difference between normalised zonally averaged MSLP anomalies at 40°S and 65°S. The strength of extreme El Niños was determined by the Niño3.4 index, which was obtained by averaging SSTs over the domain of 5°S-5°N, 190–240°E. The stratosphere-troposphere (S-T) coupled mode index was obtained following the method of Lim et al.18 by applying height-time domain EOF analysis to anomalies (seasonal cycle removed) of monthly mean zonal-mean zonal winds averaged over 55–65°S. The input data to the EOF was ordered from April to March each year for pressure level data extending from 1000 to 1 hPa. The resultant 1st principal component time series consists of one value each year and depicts the year-to-year variations of the SH spring polar vortex strength and its downward coupling. To obtain the de-trended residual SAM independent of the influence of the Antarctic stratospheric polar vortex, we removed from the SAM index the components linearly related to time and the S-T coupled mode index variability.
### Model and initial conditions for forecast sensitivity experiments
For the model experiments, we used the Bureau of Meteorology’s atmosphere-ocean fully coupled dynamical seasonal climate forecast system, POAMA version a34. Its atmosphere and ocean component models are the Bureau’s Atmosphere Model version 3 (T47/L17)74, and the Australian Community Ocean Model version 2 (2° longitude by 0.5–1° latitude from the tropics to the pole)75, which are coupled by OASIS76 coupler.
High quality observed atmosphere, land and ocean conditions are generated from the Bureau of Meteorology’s atmosphere and land initialisation scheme (ALI)77 and PEODAS, respectively.
### Statistical significance tests
Statistical significance of the observed trends in Figs 1 and 6 was assessed by a two-sided Student t-test with 55 samples (i.e. data of 1960–2014), using the incomplete beta function available in NCAR Command Language. In Fig. 4, 1 standard deviation threshold was used to indicate if air temperature, zonal wind and eddy momentum flux convergence anomalies associated with extreme El Niño are significantly different from the climatological conditions in the ERA-Interim reanalysis set. A two-sided Student t-test was used to estimate the statistical significance on the difference of the two means (Figs 8, 9a,b,d,e, 10) and of the two regression coefficients (Fig. 9c,f) with 99 samples for p’ElNiño vs. w’ElNiño and and the difference of the two means with 33 samples for pClim vs wClim (Figs 6b,d,f and 7).
### Ocean mixed layer heat advection analysis
Based on the ocean mixed layer heat budget and heat advection analyses of Zhao et al.60 and Abellan et al.78, we computed contributions of advective feedback terms to the change of El Niño growth as follows:
$$\frac{\partial T}{\partial t}\approx -\,{\boldsymbol{u}}\nabla T$$
(1)
$${\boldsymbol{u}}\nabla T=\bar{u}T{^{\prime} }_{x}+u^{\prime} {\bar{T}}_{x}+u^{\prime} T{^{\prime} }_{x}+\bar{v}T{^{\prime} }_{y}+v^{\prime} {\bar{T}}_{y}+v^{\prime} T{^{\prime} }_{y}+\bar{w}T{^{\prime} }_{z}+w^{\prime} {\bar{T}}_{z}+w^{\prime} T{^{\prime} }_{z}$$
(2)
where u denotes zonal, meridional and vertical velocities, and Tx, Ty and Tz indicate zonal (x), meridional (y) and vertical (z) temperature gradients, respectively. Overbar and prime signs denote a temporal mean and a departure from the mean, respectively.
From Eq. (2), we further limited our interest to linear terms to tease out the contributions from the mean state changes vs the anomalous changes to changes in the growth of El Niño. To estimate the contributions of the mean state changes, we used the mean state terms $$\bar{u}$$, $$\bar{v}$$, $$\bar{w}\,$$, $${\bar{T}}_{x}$$, $${\bar{T}}_{y}$$, and $${\bar{T}}_{z}$$ from the two different climatologies, pClim and wClim, together with the anomalies from the present climate u′, v′, w′, Tx, Ty and Tz. Then, to estimate the contributions of the anomalous changes in El Niño, we used the anomalies u′, v′, w′, Tx, Ty and Tz of El Niño for the two different mean states, wElNiño’ and pElNiño’, together with the mean state $$u^{\prime}$$, $$v^{\prime}$$, $$w^{\prime}$$, $${T^{\prime} }_{x}$$, $${T^{\prime} }_{y}$$, and $${T^{\prime} }_{z}$$ from the present climatology (pClim).
Figure 8 and Supplementary Figs. S5–7 show that the most important term that explains the reduction of El Niño strength in the tropical eastern Pacific is the thermocline feedback change by the reduced mean upwelling on the warmer ocean, $$\bar{w}{{T}^{^{\prime} }}_{z}$$, in Eq. (2). The reduced mean westward currents of the warmer mean state in $$\bar{u}T{\text{'}}_{x}\,\,$$and the reduced anomalous eastward current of warmer climate El Niño in $$u^{\prime} {\bar{T}}_{x}\,\,$$also suggest some contributions to the reduction of SST warming associated with warmer climate El Niño over the tropical central Pacific. However, as the enhanced zonal temperature gradient of the warmer mean state in $$u^{\prime} {\bar{T}}_{x}\,\,$$somewhat compensates the temperature reduction, an overall contribution from the zonal advective feedback change to the change in the SSTs of warmer climate El Niño seems small.
## Change history
• ### 04 February 2020
An amendment to this paper has been published and can be accessed via a link at the top of the paper.
## References
1. 1.
Kidson, J. W. Indices of the Southern Hemisphere Zonal Wind. Journal of Climate 1, 183–194 (1988).
2. 2.
Hartmann, D. L. & Lo, F. Wave-Driven Zonal Flow Vacillation in the Southern Hemisphere. J. Atmos. Sci. 55, 1303–1315 (1998).
3. 3.
Thompson, D. W. J. & Wallace, J. M. Annular Mode in the extratropical circulation. Part I: Month-to-month variability. J. Clim. 13, 1000–1016 (2000).
4. 4.
Lim, E.-P., Hendon, H. H. & Rashid, H. Seasonal predictability of the Southern Annular Mode due to its association with ENSO. J. Clim. 26, 8037–8054 (2013).
5. 5.
Karoly, D. J., Hope, P. & Jones, P. D. Decadal Variations of the Southern Hemisphere Circulation. Int. J. Climatol. 16, 723–738 (1996).
6. 6.
Lorenz, D. J. & Hartmann, D. L. Eddy – Zonal Flow Feedback in the Southern Hemisphere. J. Atmos. Sci. 3312–332, 10.11757 (2001).
7. 7.
Limpasuvan, V. & Hartmann, D. L. Eddies and the annular modes of climate variability. Geophys. Res. Lett. 26, 3133–3136 (1999).
8. 8.
Chen, G., Lu, J. & Frierson, D. M. W. Phase Speed Spectra and the Latitude of Surface Westerlies: Interannual Variability. J. Clim. 5942–5959, https://doi.org/10.1175/2008JCLI2306.1 (2008).
9. 9.
Zhou, T. J. & Yu, R. C. Sea-surface temperature induced variability of the Southern Annular Mode in an atmospheric general circulation model. Geophys Res Lett 31, L24206 (2004).
10. 10.
L’Heureux, M. L. & Thompson, D. W. J. Observed relationships between the El Niño–Southern Oscillation and the extratropical zonal-mean circulation. J. Clim. 19, 276–287 (2006).
11. 11.
Seager, R., Harnik, N., Kushnir, Y., Robinson, W. A. & Miller, J. A. Mechanisms of hemispherically symetric climate variability. J. Clim. 16, 2960–2978 (2003).
12. 12.
Lu, J., Chen, G. & Frierson, D. M. W. Response of the zonal mean atmospheric circulation to El Niño versus global warming. J. Clim. 21, 5835–5851 (2008).
13. 13.
Silvestri, G. E. & Vera, C. S. Antarctic Oscillation signal on precipitation anomalies over southeastern South America. Geophys. Res. Lett. 30, 2115 (2003).
14. 14.
Adames, Á. F. & Wallace, J. M. On the Tropical Atmospheric Signature of El Niño. J. Atmos. Sci. 74, 1923–1939 (2017).
15. 15.
Ding, Q., Steig, E. J., Battisti, D. S. & Wallace, J. M. Influence of the tropics on the southern annular mode. J. Clim. 25, 6330–6348 (2012).
16. 16.
Seviour, W. J. M. et al. Skillful seasonal prediction of the Southern Annular Mode and Antarctic ozone. J. Clim. 27, 7462–7474 (2014).
17. 17.
Byrne, N. J. & Shepherd, T. G. Seasonal persistence of circulation anomalies in the Southern Hemisphere stratosphere and its implications for the troposphere. J. Clim. 31, 3467–3483 (2018).
18. 18.
Lim, E.-P., Hendon, H. H. & Thompson, D. W. J. Seasonal evolution of stratosphere-troposphere coupling in the Southern Hemisphere and implications for the predictability of surface climate. J. Geophys. Res. Atmos. 123(12), 002–12,016 (2018).
19. 19.
Barnston, A. G., Tippett, M. K., L’Heureux, M. L., Li, S. & Dewitt, D. G. Skill of real-time seasonal ENSO model predictions during 2002–11: Is our capability increasing? Bull. Am. Meteorol. Soc. 93, 631–651 (2012).
20. 20.
Reason, C. J. C. & Rouault, M. Links between the Antarctic Oscillation and winter rainfall over western South Africa. Geophys. Res. Lett. 32, 1–4 (2005).
21. 21.
Gillett, N. P., Kell, T. D. & Jones, P. D. Regional climate impacts of the Southern Annular Mode. Geophys. Res. Lett. 33, L23704 (2006).
22. 22.
Sen Gupta, A. & England, M. H. Coupled ocean-atmosphere-ice response to variations in the southern annular mode. J. Clim. 19, 4457–4486 (2006).
23. 23.
Ciasto, L. M., Alexander, M. A., Deser, C. & England, M. H. On the persistence of cold-season SST anomalies associated with the annular modes. J. Clim. 24, 2500–2515 (2011).
24. 24.
Lim, E. P. & Hendon, H. H. Understanding and predicting the strong Southern Annular Mode and its impact on the record wet east Australian spring 2010. Clim. Dyn. 44, 2807–2824 (2015).
25. 25.
Wang, G. et al. Compounding tropical and stratospheric forcing of the record low Antarctic sea-ice in 2016. Nat. Commun. 10, 13 (2019).
26. 26.
Troccoli, A., Harrison, M., Anderson, D. L. T. & Mason, S. J. Seasonal Climate: Forecasting and Managing Risk [electronic resource]/edited by Alberto Troccoli, Mike Harrison, David L. T. Anderson, Simon J. Mason. NATO Science Series, Vol. 82 (2008).
27. 27.
Power, S., Delage, F., Chung, C., Kociuba, G. & Keay, K. Robust twenty-first-century projections of El Niño and related precipitation variability. Nature 502, 541–545 (2013).
28. 28.
Cai, W. et al. Increased variability of eastern Pacific El Niño under greenhouse warming. Nature 564, 201–206 (2018).
29. 29.
Taylor, K. E., Stouffer, R. J. & Meehl, G. A. An Overview of CMIP5 and the Experiment Design. Bull. Am. Meteorol. Soc. 93, 485–498 (2012).
30. 30.
Lim, E.-P. et al. The impact of the Southern Annular Mode on future changes in Southern Hemisphere rainfall. Geophys. Res. Lett. 43, 7160–7167 (2016).
31. 31.
Dommenget, D., Bayr, T. & Frauen, C. Analysis of the non-linearity in the pattern and time evolution of El Niño southern oscillation. Clim. Dyn. 40, 2825–2847 (2013).
32. 32.
Chung, C. T. Y., Power, S. B., Arblaster, J. M., Rashid, H. A. & Roff, G. L. Nonlinear precipitation response to El Niño and global warming in the Indo-Pacific. Clim. Dyn. 42, 1837–1856 (2014).
33. 33.
Chung, C. T. Y. & Power, S. B. Precipitation response to La Niña and global warming in the Indo-Pacific. Clim. Dyn., https://doi.org/10.1007/s00382-014-2105-9 (2014).
34. 34.
Cottrill, A. et al. Seasonal Forecasting in the Pacific Using the Coupled Model POAMA-2. Weather Forecast. 28, 668–680 (2013).
35. 35.
Deser, C., Phillips, A. S. & Alexander, M. A. Twentieth century tropical sea surface temperature trends revisited. Geophys. Res. Lett. 37, 1–6 (2010).
36. 36.
Chen, C., Cane, M. A., Wittenberg, A. T. & Chen, D. ENSO in the CMIP5 Simulations: Life Cycles, Diversity, and Responses to Climate Change. J. Clim. 30, 775–801 (2017).
37. 37.
Ashok, K., Sabin, T. P., Swapna, P. & Murtugudde, R. G. Is a global warming signature emerging in the tropical Pacific? Geophys. Res. Lett. 39, 1–5 (2012).
38. 38.
Seager, R. et al. Strengthening tropical Pacific zonal sea surface temperature gradient consistent with rising greenhouse gases. Nat. Clim. Chang. 9, 517–522 (2019).
39. 39.
Jiang, N. & Zhu, C. Asymmetric Changes of ENSO Diversity Modulated by the Cold Tongue Mode Under Recent Global Warming. Geophys. Res. Lett. 45, 12,506–12,513 (2018).
40. 40.
Cane, M. A. et al. Twentieth-century sea surface temperature trends. Science (80-.). 275, 957–960 (1997).
41. 41.
Meehl, G. A. et al. Decadal prediction: Can it be skillful? Bull. Am. Meteorol. Soc. 90, 1467–1485 (2009).
42. 42.
Zheng, X. T., Xie, S. P., Lv, L. H. & Zhou, Z. Q. Intermodel uncertainty in ENSO amplitude change tied to Pacific Ocean warming pattern. J. Clim. 29, 7265–7279 (2016).
43. 43.
Santoso, A. et al. Dynamics and Predictability of El Niño–Southern Oscillation: An Australian Perspective on Progress and Challenges. Bull. Am. Meteorol. Soc. 100, 403–420 (2019).
44. 44.
Boer, G. J. Climate change and the regulation of the surface moisture and energy budgets. Clim. Dyn. 8, 225–239 (1993).
45. 45.
Knutson, T. R. & Manabe, S. Time-Mean Response over the Tropical Pacific to Increased CO2 in a Coupled Ocean-Atmosphere Model. J. Clim. 8, 2181–2199 (1995).
46. 46.
Held, I. M. & Soden, B. J. Robust responses of the hydrological cycle to global warming. J. Clim. 19, 5686–5699 (2006).
47. 47.
Collins, M. et al. The impact of global warming on the tropical Pacific Ocean and El Niño. Nat. Geosci. 3, 391 (2010).
48. 48.
Luo, J. J., Wang, G. & Dommenget, D. May common model biases reduce CMIP5’s ability to simulate the recent Pacific La Niña-like cooling? Clim. Dyn. 50, 1335–1351 (2018).
49. 49.
Cai, W. et al. Increasing frequency of extreme El Niño events due to greenhouse warming. Nat. Clim. Chang. 4, 111–116 (2014).
50. 50.
Dinezio, P. N. et al. Mean climate controls on the simulated response of ENSO to increasing greenhouse gases. J. Clim. 25, 7399–7420 (2012).
51. 51.
Kohyama, T., Hartmann, D. L. & Battisti, D. S. La Niña–like Mean-State Response to Global Warming and Potential Oceanic Roles. J. Clim. 30, 4207–4225 (2017).
52. 52.
Kohyama, T., Hartmann, D. L. & Battisti, D. S. Weakening of Nonlinear ENSO Under Global Warming. Geophys. Res. Lett. 45, 8557–8567 (2018).
53. 53.
Cai, W. et al. Pantropical climate interactions. Science (80-.). 363 (2019).
54. 54.
Clement, A. C., Seager, R., Cane, M. A. & Zebiak, S. E. An Ocean Dynamical Thermostat. J. Clim. 9, 2190–2196 (1996).
55. 55.
Kohyama, T. & Hartmann, D. L. Nonlinear ENSO warming suppression (NEWS). J. Clim. 30, 4227–4251 (2017).
56. 56.
Luo, J.-J., Sasaki, W. & Masumoto, Y. Indian Ocean warming modulates Pacific climate change. Proc. Natl. Acad. Sci. USA 109, 18701–6 (2012).
57. 57.
McGregor, S. et al. Recent walker circulation strengthening and pacific cooling amplified by atlantic warming. Nat. Clim. Chang. 4, 888–892 (2014).
58. 58.
Solomon, A. & Newman, M. Reconciling disparate twentieth-century Indo-Pacific ocean temperature trends in the instrumental record. Nat. Clim. Chang. 2, 691–699 (2012).
59. 59.
Lim, E. P. et al. Interaction of the recent 50 year SST trend and La Niña 2010: amplification of the Southern Annular Mode and Australian springtime rainfall. Clim. Dyn. 47, 2273–2291 (2016).
60. 60.
Zhao, M., Hendon, H., Oscar, A., Liu, G. & Guomin, W. Weakened Eastern Pacific El Niño Predictability in the Early Twenty-First Century. J. Clim. 29, 6805–6822 (2016).
61. 61.
Thompson, D. W. J. & Solomon, S. Interpretation of recent Southern Hemisphere climate change. Science (80-.). 296, 895–899 (2002).
62. 62.
Hurwitz, M. M., Newman, P. A., Oman, L. D. & Molod, A. M. Response of the Antarctic Stratosphere to Two Types of El Niño Events. J. Atmos. Sci. 68, 812–822 (2011).
63. 63.
Collins, M. et al. Long-term Climate Change: Projections, Commitments and Irreversibility. Clim. Chang. 2013 Phys. Sci. Basis. Contrib. Work. Gr. I to Fifth Assess. Rep. Intergov. Panel Clim. Chang. 1029–1136, https://doi.org/10.1017/CBO9781107415324.024 (2013).
64. 64.
Chung, P. H. & Li, T. Interdecadal relationship between the mean state and El Niño types. J. Clim. 26, 361–379 (2013).
65. 65.
N. C. Johnson, M. L. L’Heureux, C.‐H. & Chang, Z.‐Z. Hu, On the Delayed Coupling Between Ocean and Atmosphere in Recent Weak El Niño Episodes. Geophysical Research Letters https://doi.org/10.1029/2019GL084021 (2019).
66. 66.
McPhaden, M. J., Lee, T. & McClurg, D. El Niño and its relationship to changing background conditions in the tropical Pacific Ocean. Geophys. Res. Lett. 38, 2–5 (2011).
67. 67.
Lübbecke, J. F. & Mcphaden, M. J. Assessing the twenty-first-century shift in enso variability in terms of the bjerknes stability index. J. Clim. 27, 2577–2587 (2014).
68. 68.
Bin W. et al. Historical change of El Niño properties sheds light on future changes of extreme El Niño. Proceedings of the National Academy of Sciences 116(45), 22512–22517 (2019).
69. 69.
Dee, D. et al. The ERA - Interim reanalysis: Configuration and performance of the data assimilation system. Quaterly J. R. Meteorol. Soc. 137, 553–597 (2011).
70. 70.
Hurrell, J. W., Hack, J. J., Shea, D., Caron, J. M. & Rosinski, J. A new sea surface temperature and sea ice boundary dataset for the community atmosphere model. J. Clim. 21, 5145–5153 (2008).
71. 71.
Reynolds, R. W., Rayner, N. A., Smith, T. M., Stokes, D. C. & Wang, W. An improved in situ and satellite SST analysis for climate. J. Clim. 15, 1609–1625 (2002).
72. 72.
Yin, Y., Alves, O. & Oke, P. R. An Ensemble Ocean Data Assimilation System for Seasonal Prediction. Mon. Weather Rev. 139, 786–808 (2011).
73. 73.
Gong, D. & Wang, S. Definition of Antarctic Oscillation index. Geophys. Res. Lett. 26, 459–462 (1999).
74. 74.
Colman, R. et al. BMRC Atmospheric Model (BAM) version 3.0: comparison with mean climatology (2005).
75. 75.
Oke, P. R., Schiller, A., Griffin, D. A. & Brassington, G. B. Ensemble data assimilation for an eddy-resolving ocean model of the Australian region. Q. J. R. Meteorol. Soc. 131, 3301–3311 (2005).
76. 76.
Valke, S., Terray, L. & Piacentini, A. The OASIS coupled user guide version 2.4. Tech. Rep. TR/CMGC/00-10, CERFACS (2000).
77. 77.
Hudson, D., Alves, O., Hendon, H. & Wang, G. The impact of atmospheric initialisation on seasonal prediction of tropical Pacific SST. Clim. Dyn. (2011).
78. 78.
Abellán, E., McGregor, S., England, M. H. & Santoso, A. Distinctive role of ocean advection anomalies in the development of the extreme 2015–16 El Niño. Clim. Dyn. 51, 2191–2208 (2018).
## Acknowledgements
This study was supported in part by the Australian Government’s National Environmental Science Programme. Michael J. McPhaden was supported by NOAA. The authors are grateful to Drs Irina Rudeva and Blair Trewin at the Bureau of Meteorology and two anonymous reviewers for their constructive feedback on the manuscript. This research was undertaken on the NCI National Facility in Canberra, Australia, which is supported by the Australian Commonwealth Government. The NCAR Command Language (NCL; http://www.ncl.ucar.edu) version 6.4.0 was used for data analysis and visualization of the results. We also acknowledge NCAR/UCAR, NOAA and ECMWF for producing and providing Hurrell et al. (2008) SST analysis, Reynolds OI v2 SST analysis, and ERA-Interim reanalysis, respectively. This is PMEL contribution no. 4994.
## Author information
Authors
### Contributions
E.L. and P.H. conceived the idea, and E.L. conducted the experiments, analysed the results and wrote the first draft with the help of C.C. and F.D.; H.H.H. and M.J.M. contributed to the interpretation of the results, and all authors contributed to the writing of the manuscript.
### Corresponding author
Correspondence to Eun-Pa Lim.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Lim, EP., Hendon, H.H., Hope, P. et al. Continuation of tropical Pacific Ocean temperature trend may weaken extreme El Niño and its linkage to the Southern Annular Mode. Sci Rep 9, 17044 (2019). https://doi.org/10.1038/s41598-019-53371-3
• Accepted:
• Published:
• ### Long‐Term Trend of Equatorial Atlantic Zonal Sea Surface Temperature Gradient Linked to the Tropical Pacific Cold Tongue Mode Under Global Warming
• Yang Li
• , Quanliang Chen
• , Nan Xing
• , Zhigang Cheng
• , Yulei Qi
• , Fan Feng
• & Minggang Li
Journal of Geophysical Research: Oceans (2021)
• ### Exploring atmospheric circulation leading to three anomalous Australian spring heat events
• Roseanna C. McKay
• , Julie M. Arblaster
• , Pandora Hope
• & Eun-Pa Lim
Climate Dynamics (2021)
• ### Pantropical Response to Global Warming and the Emergence of a La Niña‐Like Mean State Trend
• Sang‐Ki Lee
• , Dongmin Kim
• , Gregory R. Foltz
• & Hosmay Lopez
Geophysical Research Letters (2020)
• ### Role of Tropical Variability in Driving Decadal Shifts in the Southern Hemisphere Summertime Eddy-Driven Jet
• Dongxia Yang
• , Julie M. Arblaster
• , Gerald A. Meehl
• , Matthew H. England
• , Eun-Pa Lim
• , Susan Bates
• & Nan Rosenbloom
Journal of Climate (2020)
• ### Emerging arboviruses in the urbanized Amazon rainforest
• Rachel Lowe
• , Sophie Lee
• , Raquel Martins Lana
• , Cláudia Torres Codeço
• , Marcia C Castro
• & Mercedes Pascual
BMJ (2020) | 2021-06-18 13:09:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6302992701530457, "perplexity": 5723.461981236543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487636559.57/warc/CC-MAIN-20210618104405-20210618134405-00258.warc.gz"} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.