http://mathhelpforum.com/calculus/17081-polar-coordinate-03-a.html | 1. Polar Coordinate 03
Find the area inside r= 3 Cos[x] and outside of r= 1 + Cos[x]
2. Hello, camherokid!
Find the area inside $r \,= \,3\cos x$ and outside $r \:= \:1 + \cos x$
I hope you made a sketch . . .
$r\:=\:3\cos x$ is a circle with center $\left(\frac{3}{2},\,0\right)$ and radius $r = \frac{3}{2}$
$r\:=\:1 + \cos x$ is a cardioid with intercepts: $(2,\,0),\: \left(1,\,\frac{\pi}{2}\right),\:\left(1,\,\frac{3 \pi}{2}\right)$
. . and "dimples in" to the origin from the left.
The polar formula for the area between two curves is: . $A \;=\;\frac{1}{2}\int^{\beta}_{\alpha}\left(r_{_2}^ 2 - r_{_1}^2\right)\,d\theta$
The curves intersect when: . $3\cos x \;=\;1 + \cos x\quad\Rightarrow\quad 2\cos x \:=\:1\quad\Rightarrow\quad \cos x \:=\:\frac{1}{2}$
. . Hence, they intersect at: . $\theta \:=\:\pm\frac{\pi}{3}$
Therefore: . $A \;=\;\frac{1}{2}\int^{\frac{\pi}{3}}_{-\frac{\pi}{3}}\bigg[(3\cos\theta)^2 - (1 + \cos\theta)^2\bigg]\,d\theta$
3. Originally Posted by camherokid
Find the area inside r= 3 Cos[x] and outside of r= 1 + Cos[x]
We will use the formula:
$A = \int_{ \alpha}^{ \beta} \frac {1}{2} (r_o^2 - r_i^2)~dx$
where $A$ is the area between the curves $r_o$ and $r_i$, $\alpha$ and $\beta$ are the limits of integration (the points of intersection), $r_o$ is the outer curve, and $r_i$ is the inner curve.
First find the points of intersection:
this is where $3 \cos x = 1 + \cos x$
$\Rightarrow \cos x = \frac {1}{2}$
$\Rightarrow x = \frac {\pi}{3}, \frac {5 \pi}{3}$
We want to go from $\frac {5 \pi}{3}$ to $\frac {\pi}{3}$, but we must go from a smaller angle to a bigger angle. Changing $\frac {5 \pi}{3}$ to $- \frac {\pi}{3}$ fixes this problem. So our area is given by:
$A = \int_{- \pi / 3}^{ \pi / 3} \frac {1}{2} \left[ (3 \cos x)^2 - (1 + \cos x )^2 \right]~dx$
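For what it's worth, a quick computer-algebra check (my addition, not part of the original thread) confirms that this integral evaluates to $\pi$:

```python
# Evaluate A = (1/2) * integral of (3cos x)^2 - (1 + cos x)^2, x from -pi/3 to pi/3.
import sympy as sp

x = sp.symbols('x')
A = sp.Rational(1, 2) * sp.integrate(
    (3*sp.cos(x))**2 - (1 + sp.cos(x))**2, (x, -sp.pi/3, sp.pi/3))
print(sp.simplify(A))  # -> pi
```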
EDIT: Beaten by Soroban! Anyway, that's OK. I'm a bit rusty on polar areas, so it's good to have a confirmation that I did the right thing. Soroban, can you check the other posts camherokid put up today and make sure I didn't make any mistakes?
4. Originally Posted by camherokid
Find the area inside r= 3 Cos[x] and outside of r= 1 + Cos[x]
the graph
http://math.stackexchange.com/questions/232402/when-do-we-have-radi-i-for-an-ideal-i-of-a-ring-r | # When do we have $Rad(I)=I$ for an ideal $I$ of a ring $R$?
This is kind of a follow-up question about calculating the radical of an ideal. Since
$Rad(I)$ is the intersection of all the prime ideals of $R$ that contain $I$,
which is a property I learned from this article in wikipedia, we have that $$Rad(I)=I$$ whenever $I$ is a prime ideal. My question is:
Can this be true for some $I$ which is not a prime ideal? [EDIT: And when is this NOT true?] Is there an equivalent, easy-to-check condition for this kind of $I$?
Let $R={\Bbb Z}[x]$, for example. $I=\langle x,2\rangle$ is a prime ideal and thus $Rad(I)=I$. For any ideal $I\unlhd R$, (say $I=\langle x^2+1\rangle$ or $I=\langle x^2+2\rangle$, etc.) the key point is to check $$Rad(I)\subset I$$ since $I\subset Rad(I)$ is always true. But I don't know a quick way to check this relation.
By definition, every intersection of a set of prime ideals is semiprime (aka a radical ideal).
Let $\cap P_i=I$, where the $P_i$ are all prime. Then the set of all prime ideals containing $I$ includes the $P_i$. Thus, $\cap\{P\mid P\supseteq I\}\subseteq \cap P_i$: the left-hand intersection runs over "more" prime ideals, and intersecting more ideals can only produce a result no larger than the intersection of just the $P_i$.
Thus in total: $$\cap P_i=I\subseteq\cap\{P\mid P\supseteq I\}\subseteq \cap P_i$$
So, there is equality all across.
It is relatively easy to find examples where prime ideals do not intersect to a prime ideal. For example, the intersection of two prime ideals, neither of which contains the other, cannot be prime. (Explain why!)
What I learn from your answer is that to check if $Rad(I)=I$, I need to check if $I$ is some intersection of prime ideals. But how can I apply this to check, say, $I=\langle x^2+1\rangle$ in ${\Bbb Z}[x]$? – Jack Nov 7 '12 at 20:59
For the commutative case, $rad(I)=\{x\in R\mid \exists n\in \mathbb{N}, x^n\in I\}$. That might help you do specific commutative examples. Isn't it the case here that $x^2+1$ generates a prime ideal? – rschwieb Nov 7 '12 at 21:00
You can, in fact, have $I=\mathrm{Rad}(I)$ for nonprime $I$. For example, consider the ideal $\langle xy\rangle$ of the ring $\mathbb{Q}[x,y]$. This isn't prime since the generator isn't irreducible, but can easily be seen to be radical.
More generally, if you take a monomial ideal, i.e. an ideal generated by monomials, in a polynomial ring over some field, its radical will be generated by the squarefree parts of those monomials. E.g. if $I=\langle x^2y^4,z^3\rangle$, then $\sqrt{I}=\langle xy,z\rangle$.
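To see why $xy$ itself lies in the radical, note that $(xy)^4 = x^2 \cdot x^2y^4 \in I$. A quick Gröbner-basis membership test (my own addition, not from the thread) confirms it:

```python
# Check that (xy)^4 lies in I = <x^2*y^4, z^3>, hence xy is in Rad(I).
import sympy as sp

x, y, z = sp.symbols('x y z')
G = sp.groebner([x**2 * y**4, z**3], x, y, z, order='grevlex')
print(G.contains((x*y)**4))  # True
```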
https://thephysicist.in/which-is-the-most-stable-nuclide/ | # Which is the most stable nuclide?
Which is the most stable nuclide? The popular verdict says $^{56} _{26}Fe$ is the most stable nuclide since it has the highest binding energy per nucleon $(\frac{B}{A}=8.790 \ \mathrm{MeV})$. However, this is incorrect. The most stable of all the nuclides is $^{62} _{28}Ni$: its binding energy per nucleon is $\frac{B}{A}=8.794 \ \mathrm{MeV}$. This post discusses why this incorrect information has prevailed for so long, even in respectable academic spheres.
## How is nuclear stability estimated?
The stability of a nucleus is measured by the same principle of physics that applies elsewhere: the less energy a system has, the more stable it is. That is, to be stable, a system has to release some energy during its formation from its constituents; conversely, the same amount of energy must be supplied to the stable system to destabilize and break it.

In this case the system is the nucleus, consisting of protons and neutrons. Neither a proton nor a neutron is an elementary particle: each is a hadron, a composite particle made up of two or more quarks. These quarks interact with neighbouring quarks through the strong and weak nuclear interactions, and these interactions are responsible for part of the energy of a nucleus. A quark is also a matter particle with a nonzero rest mass, so it has rest-mass energy in accordance with the mass-energy equivalence $E_0=mc^2$; this energy, too, contributes to the energy of the nucleus it is part of. The stability of the nucleus is measured by a parameter called the binding energy.
### The Binding Energy of a Nucleus
A stable nucleus has less energy than the sum of the energies of its individual constituents. This difference is called the binding energy. If a nucleus has a mass M that is made up of $x$ protons (each of mass $m_p$) and $y$ neutrons (each of mass $m_n$), then, for the nucleus to be stable, the following condition must be satisfied. $$M<xm_p+ym_n$$
This tiny difference in mass, $\delta m = (xm_p+ym_n) - M$, is equivalent to a humongous amount of energy $E= \delta m \, c^2$. This is the nucleus's binding energy, which is released when the nucleus is formed out of its constituent particles. On a statistical basis, nuclear physicists also define another parameter, the binding energy per nucleon, for stability estimates.
#### Binding Energy per Nucleon
Each proton or neutron inside a nucleus is considered a nucleon. The total number of protons in a nucleus is denoted by $Z$, and the total number of neutrons by $N$; the total number of nucleons is thus $A=Z+N$. If the nucleus has a binding energy $B$, the binding energy per nucleon is $\frac{B}{A}$, usually expressed in MeV per nucleon. The following scatter plot shows the variation of $\frac{B}{A}$ with respect to $A$ for various elements of the periodic table.
Credit: BCcampus Open Publishing
The binding energy per nucleon of a nuclide also shows how tightly bound the nucleus is: the higher the binding energy per nucleon, the more tightly bound it is, and thus the more stable and the harder to break apart. Binding energy per nucleon is also called the "average binding energy ($\overline{B}$)".
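As a minimal numeric sketch (my addition, not from the post; the atomic masses below are standard tabulated values I am assuming), the quoted $B/A$ figures can be reproduced directly:

```python
# Binding energy per nucleon from atomic masses. Using the hydrogen-atom mass
# instead of the bare proton mass makes the electron masses cancel.
U_TO_MEV = 931.494             # 1 u in MeV/c^2
M_H, M_N = 1.007825, 1.008665  # hydrogen atom and neutron masses (u)

def b_per_a(Z, N, atomic_mass_u):
    """Binding energy per nucleon, in MeV."""
    delta_m = Z * M_H + N * M_N - atomic_mass_u  # mass defect (u)
    return delta_m * U_TO_MEV / (Z + N)

print(f"Fe-56: {b_per_a(26, 30, 55.934937):.3f} MeV")  # ~8.790
print(f"Ni-62: {b_per_a(28, 34, 61.928345):.3f} MeV")  # ~8.795
```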
## How is the most stable nuclide identified?
Although the B/A vs. A curve may look smooth and continuous, it is actually a scatter plot: the mass numbers are discrete. The fluctuations in the value of B/A around the central tendency are partly due to the shell effect. Yet a lot can be inferred just by studying the central-tendency curve, which appears continuous. The curve appears smooth for $A \geq 40$; it attains a maximum around $A=60$ and then falls smoothly for heavier elements. Thus, a reasonable identification of the most stable nuclide can be made by precisely locating this maximum. That may sound easy, but pinpointing the maximum is hard and ambiguous. Repeated experiments using mass spectrometers of various precisions have confirmed that the iron group (chromium Cr, manganese Mn, iron Fe, cobalt Co, and nickel Ni) is the most stable group of all elements. However, due to the narrow margin between $^{56} _{26} Fe$, $^{58} _{26} Fe$, $^{60} _{28} Ni$, and $^{62} _{28} Ni$, it had been very hard to attribute the maximum to any one of these based on the statistical data alone. This is where astrophysical data was used to reach a verdict.
### Stellar Nucleosynthesis
Stellar nucleosynthesis is the process of forming heavier nuclei from lighter ones. Heavier nuclei require higher temperatures to be synthesized in stellar cores. Starting from the fusion of hydrogen ($^1 _1 H$) into helium ($^4 _2He$), the process is exothermic. Thus a sustainable nuclear chain reaction giving rise to heavier nuclei is feasible, but only while the mass number A is near or below 60. Above A = 60, nucleosynthesis becomes endothermic and cannot be sustained as a chain reaction. Nuclides with A >> 60 are formed only when stars collapse and produce shock waves that force smaller nuclides to combine. A thorough study of the end products of the sustained chain reactions in stellar nucleosynthesis has time and again confirmed that $^{56} _{26} Fe$ is the most abundant of them all. Thus, it was taken to be the most stable nuclide, in conjunction with the binding energy data. However, with the increased sophistication of high-resolution mass spectrometers, this conclusion has been proven incorrect.
In his 1995 paper in the American Journal of Physics, physicist M. P. Fewell clearly settled the confusion. He examined the reasons for the ambiguity that had prevailed earlier and put forth thorough justifications for concluding that $^{62} _{28} Ni$ is the most stable nucleus of all. A few of his arguments follow.
Credit: Hyperphysics
#### The Processes in Stellar Nucleosynthesis
Two processes compete in stellar nucleosynthesis: charged-particle capture and photodisintegration. In charged-particle capture, an existing nucleus captures a particle such as an alpha ($^4 _2 He^{++}$), a muon ($\mu ^-$), or an electron ($e^-$) and transforms into another nuclide. In photodisintegration, an existing nucleus absorbs a highly energetic photon and disintegrates into a lighter nuclide. Although both processes require very high temperatures, photon capture tends to occur more frequently than charged-particle capture, because charged particles must overcome Coulomb repulsion in addition to the nuclear forces. That is why sustained chain reactions in stellar nucleosynthesis end up with an abundance of $^{56} _{26} Fe$, where equilibrium between the two processes is established. However, when estimated independently of stellar nucleosynthesis, it is $^{62} _{28} Ni$ that wins over $^{56} _{26} Fe$, albeit by a very narrow margin. Since the margin is less than 0.05%, it had been ignored over time.
## Conclusion
A revised and improved investigation of the data yields the conclusive answer that indeed it is $^{62} _{28} Ni$ that should be considered as the most stable nucleus of all.
Credit: Hyperphysics
http://www.mammal.cn/CN/Y1983/V3/I2/165
### SUBSPECIFIC STUDY ON THE FERRET BADGER (MELOGALE MOSCHATA) IN CHINA, WITH DESCRIPTION OF A NEW SUBSPECIES
ZHENG Yonglie1, XU Longhui2
1. Institute of Zoology, Shaanxi;
2. Institute of Entomology, Guangdong
• Online: 2011-11-23 Published: 2011-11-22
Abstract: This paper gives a systematic review of the ferret badger (Melogale moschata) recorded from China. Six subspecies are recognized in this species, of which five occur in China. The specimens discovered in Guangxi belong to M. m. taxilla, which is a new subspecies record for China. The specimens discovered in western Guangdong and on Hainan Island had been considered M. m. moschata for some 50 years, but they are now recognized as a new subspecies, described as follows: Melogale moschata hainanensis, subsp. nov. Holotype: No. 051, adult, collected on January 26, 1963 from Dali, Hainan Island. Paratypes: No. 0188 and No. 0479, adult, collected on January 24, 1963 and December 29, 1964 from Mt. Bawangling and Mt. Diaoluo, Hainan Island. The type specimens are deposited in the Guangdong Institute of Entomology. Diagnosis: The dorsum is brown-tan in colour; the abdomen is apricot yellow. The tail is broom-like. The interorbital breadth is narrower. Description: The body colour is deep and bright; the dorsum of the body is brown-tan; the needle-hairs lack white tips; the forehead has an apricot-yellow stain; the colour of the body sides is paler, mingled slightly with apricot-yellow hair tips. The lower part of the body is apricot yellow from the jaw to the base of the tail; the hair of the tail is hard and fluffy, and the tip of the tail is like a broom. The skull is strong; the interorbital breadth is narrow, generally smaller than 20 mm; the zygomatic arches are wide and strong; the temporal ridges are straight and thickened.
https://brilliant.org/problems/integration-no-no-its/ | # Integration? No no its ...
Calculus Level 4
$\int_{1}^{x}A(t)B(t)\,dt\cdot\int_{1}^{x}C(t)D(t)\,dt-\int_{1}^{x}A(t)C(t)\,dt\cdot\int_{1}^{x}B(t)D(t)\,dt=f(x)$
If $f(x)$ is an $n$th-degree polynomial and satisfies the above equation for all real $x$, then the area bounded by $f(x)$ and the line $y=x-1$ can be represented as
$\tfrac{a}{b\cdot c}$
Find the value of $a-b+c$.
Assumptions:
n is an even natural number.
a,b may not be numbers.
c is a number.
A, B, C, D are non constant continuous functions of x.
https://www.cryptologie.net/home/1/35/ | Hey! I'm David, the author of the Real-World Cryptography book. I'm a crypto engineer at O(1) Labs on the Mina cryptocurrency; previously I was the security lead for Diem (formerly Libra) at Novi (Facebook), and a security consultant for the Cryptography Services of NCC Group. This is my blog about cryptography and security and other related topics that I find interesting.
# I'm officially an intern at Cryptography Services posted April 2015
I haven't been posting for a while, and this is because I was busy looking for a place in Chicago. I finally found it! And I just accomplished my first day at Cryptography Services, or rather at Matasano since I'm in their office, or rather at NCC Group since everything must be complicated :D
I arrived and received a bag of swags along with a brand new macbook pro! That's awesome except for the fact that I spent way too much time trying to understand how to properly use it. A few things I've discovered:
• you can pipe to pbcopy and use pbpaste to play with the clipboard
• open . in the console opens the current directory in Finder (on windows with cygwin I use explorer .)
• in the terminal preference: check "use option as meta key" to have all the unix shortcuts in the terminal (alt+b, ctrl+a, etc...)
• get homebrew to install all the things
I don't know what I'll be blogging about next, because I can't really disclose the work I'll be doing there. But so far the people have been really nice and welcoming, and the projects seem amazingly interesting (and yeah, I will be working on OpenSSL!! The audit is public, so that much I can say :D). The city is also amazing and I've been really impressed by the food. Every place, every dish and every bite has been a delight :)
# Talk: RSA and LLL attacks posted March 2015
I posted previously about my research on RSA attacks using lattice basis reduction techniques. I gave a talk today that went really well, and you can check the slides on the github repo
Also on SlideShare
I wanted to record myself so I could have put that on youtube along with the slides but... I completely forgot once I got on stage. But this is OK, as I got corrected on some points; it will make the new recording better :) I will try to make it as soon as possible and upload it on youtube.
# End to End encryption for Yahoo mail users (plugin) posted March 2015
Yahoo has released a plugin that allows end-to-end encryption for Yahoo mail users. It seems to be part of the new "yahoo" redesign:
we’ve heard you loud and clear: We’re building the best products to ensure a more secure user experience and overall digital ecosystem.
It's open sourced and they also set up a bug bounty program (from $50 to $15,000)
While at this stage we’re rolling out the source code for feedback from the wider security industry
More on their tumblr (this sounds weird).
Glancing over the code it looks like it's cumbersome to use:
The extension requires a keyserver implementing this API to fetch keys for other users.
# Survey on RSA Attacks using Lattice reduction techniques (LLL) posted March 2015
And here's the survey of what I talked about previously: https://github.com/mimoo/RSA-and-LLL-attacks/raw/master/survey_final.pdf
It's my first survey ever and I had much fun writing it! I don't really know if I can call it a survey; it reads like a vulgarization/explanation of the papers from Coppersmith, Howgrave-Graham, Boneh and Durfee, and Herrmann and May. There is a short table of the running times at the end of each section. There is also the code of the implementations I coded at the end of the survey.
If you spot a typo or something weird, wrong, or badly explained, please tell me!
# Implementation of Boneh and Durfee attack on RSA's low private exponents posted March 2015
I've implemented a Coppersmith-type attack (using LLL reduction of lattice bases). It was done by Boneh and Durfee and later simplified by Herrmann and May. The program can be found on my github.
The attack allows us to break RSA by recovering the private exponent d. Here's why RSA works (where e is the public exponent, phi is Euler's totient function, N is the public modulus):
$ed = 1 \pmod{\varphi(N)}$ $\implies ed = k \cdot \varphi(N) + 1 \text{ over } \mathbb{Z}$ $\implies k \cdot \varphi(N) + 1 = 0 \pmod{e}$ $\implies k \cdot (N + 1 - p - q) + 1 = 0 \pmod{e}$ $\implies 2k \cdot (\frac{N + 1}{2} + \frac{-p -q}{2}) + 1 = 0 \pmod{e}$
The last equation gives us a bivariate polynomial $f(x,y) = 1 + x \cdot (A + y)$. Finding the roots of this polynomial will allow us to easily compute the private exponent d.
The attack works if the private exponent d is too small compared to the modulus: $d < N^{0.292}$.
To use it:
• look at the tests in boneh_durfee.sage and make your own with your own values for the public exponent e and the public modulus N.
• guess how small the private exponent d is and modify delta so you have d < N^delta
• tweak m and t until you find something. You can use Herrmann and May's optimization t = tau * m with tau = 1-2*delta. Keep in mind that the bigger they are, the better it is, but the longer it will take. Also we must have 1 <= t <= m.
• you can also decrease XX, as it might be too high compared to the root x you are trying to find. This is a last-resort tweak though.
Here is the tweakable part in the code:
# Tweak values here!
delta = 0.26 # so that d < N^delta
m = 3        # number of x-shifts
t = 1        # number of y-shifts; we must have 1 <= t <= m
# Pretty diagrams with Tikz in LaTeX posted March 2015
Because studying Cryptography is also about using LaTeX, it's nice to spend a bit of time understanding how to make pretty documents. Because, you know, it's nicer to read.
Here's an awesome quick introduction to TikZ that allows you to make beautiful diagrams with great precision in a short time:
And I'm bookmarking one more that seems to go way further.
# Babun, Cmder and Tmux posted February 2015
I've used Cmder for a while on Windows, which is a pretty terminal that brings a lot of tools and shortcuts from the linux world. I also have Chocolatey as a package manager. All in all it works pretty great, except that Cmder is pretty slow.
I ran into Babun yesterday, which seems to be kind of the same thing, but with zsh, oh-my-zsh and another package manager: pact. The first thing I did was download tmux and learn how to use it. It works pretty well and I think I have found a replacement for Cmder =)
Here is a video of what tmux is:
# Implementation of Coppersmith attack (RSA attack using lattice reductions) posted February 2015
I've implemented the work of Coppersmith (to be correct, the reformulation of his attack by Howgrave-Graham) in Sage.
You can see the code here on github.
I won't go too much into the details because this is for a later post, but you can use such an attack on several relaxed RSA models (meaning you have partial information, you are not totally in the dark).
I've used it in two examples in the above code:
## Stereotyped messages
For example, if you know the most significant bits of the message, you can find the rest of the message with this method.
The usual RSA model is this one: you have a ciphertext c, a modulus N and a public exponent e. Find m such that m^e = c mod N.
Now, this is the relaxed model we can solve: you have c = (m + x)^e, you know a part of the message, m, but you don't know x. For example the message is always something like "the password today is: [password]". Coppersmith says that if the unknown part is smaller than N^(1/e), it is a small root and you should be able to find it pretty quickly.
Let our polynomial be f(x) = (m + x)^e - c, which has a root we want to find modulo N. Here's how to do it with my implementation:
dd = f.degree() # degree of f (= e here)
beta = 1        # the root is modulo N itself
epsilon = beta / 7
mm = ceil(beta**2 / (dd * epsilon))
tt = floor(dd * mm * ((1/beta) - 1))
XX = ceil(N**((beta**2/dd) - epsilon)) # upper bound on the root
roots = coppersmith_howgrave_univariate(f, N, beta, mm, tt, XX)
You can play with the values until it finds the root. The default values should be a good start. If you want to tweak:
• beta is always 1 in this case.
• XX is your upper bound on the root. The bigger the unknown, the bigger XX should be. And the bigger it is... the more time it takes.
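To make the relaxed model concrete, here is a toy version (my own, not from the repo): with e = 3 and an unknown tail below N^(1/3), even brute force recovers it; Coppersmith's lattice method does the same job at real-world sizes where brute force is hopeless.

```python
# Toy "stereotyped message": c = (m_known + x)^e mod N with a small unknown x.
N = 10007 * 10009   # toy modulus (product of two small primes)
e = 3
m_known = 12345000  # known part of the message
x_secret = 42       # unknown tail, below N**(1/e) ~ 464
c = pow(m_known + x_secret, e, N)

x = next(t for t in range(int(N ** (1 / e)) + 1)
         if pow(m_known + t, e, N) == c)
print(x)  # 42
```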
## Factoring with high bits known
Another case is factoring N knowing high bits of q.
The factorization problem normally is: given N = pq, find q. In our relaxed model we know an approximation q' of q.
Here's how to do it with my implementation:
Let f(x) = x - q', which has a small root modulo q.
This is because x - q' = x - (q + diff) = x - diff mod q, with the difference being diff = |q - q'|.
beta = 0.5
dd = f.degree()
epsilon = beta / 7
mm = ceil(beta**2 / (dd * epsilon))
tt = floor(dd * mm * ((1/beta) - 1))
XX = ceil(N**((beta**2/dd) - epsilon)) + 1000000000000000000000000000000000
roots = coppersmith_howgrave_univariate(f, N, beta, mm, tt, XX)
What is important here if you want to find a solution:
• we should have q >= N^beta
• as usual XX is the upper bound of the root, so the difference should be: |diff| < XX
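Again, a toy version for intuition (mine, not from the repo): the known high bits give an approximation q' of q, and only the small difference remains to be found. Brute force suffices at this scale; the lattice method replaces it at cryptographic sizes.

```python
# Toy "factoring with high bits known": recover q from an approximation q'.
p, q = 10007, 10169
N = p * q
q_approx = 10100  # high bits of q are known
diff = next(t for t in range(1000) if N % (q_approx + t) == 0)
print(q_approx + diff)  # 10169
```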
http://tex.stackexchange.com/questions/72917/how-to-add-a-keyword-with-a-blank-space-in-listings-package | # How to add a keyword with a blank space in Listings package? [duplicate]
I am trying to define a listing style in order to have fortran source codes which look like vim style colors.
For example, I would like to give a specific color to end program or end module or double precision, etc.
It is easy for keywords without blank character but I do not know how to do for keyword such as those above.
As you can see in the example below, the color of program is correct but the color of end program is not.
\documentclass{article}
\usepackage{xcolor}
\definecolor{keycolor}{RGB}{172, 42, 42}
\definecolor{vimvert}{RGB}{46, 139, 87}
\usepackage{listings}
% global parameters
\lstdefinestyle{global}{
basicstyle=\ttfamily\scriptsize\color{black!90},%
stringstyle=\itshape\color{magenta},%
showstringspaces=false,%
keywordstyle=\bfseries\color{keycolor},%
}
% fortran style
\lstdefinestyle{fortranstyle}{
language=Fortran,%
style=global,%
emph=[1]{implicit none, integer, real, double precision, character, len, parameter, structure, common},%
emphstyle=[1]\bfseries\color{vimvert},%
emph=[2]{program,end program, module, end module, subroutine, end subroutine, function, end function},%
emphstyle=[2]\color{violet}\bfseries\slshape,%
emph=[3]{call, true, false},%
emphstyle=[3]\color{teal}\slshape%
}
\begin{document}
\begin{lstlisting}[style=fortranstyle]
program calcPi
implicit none
integer :: i, nbreDansCercle
integer, parameter :: npts = 1000000000
double precision :: x, y, r, pi
nbreDansCercle = 0
do i = 1, npts, 1
call random_number(x)
call random_number(y)
x = 2.d0 * x - 1.d0
y = 2.d0 * y - 1.d0
r = x**2 + y**2
if (r < 1.d0) then
nbreDansCercle = nbreDansCercle + 1
end if
end do
pi = 4.d0 * dble(nbreDansCercle) / dble(npts)
write(*,"('pi = ', F20.17)") pi
end program calcPi
\end{lstlisting}
\end{document}
## marked as duplicate by Jubobs, Jesse, Werner, Claudio Fiandrino, Peter Jansson Jan 16 '14 at 7:24
One solution would be the use of the moredelim key (see package manual section 3.3, "Delimiters").
I defined two more delimiters:
• moredelim=[is][emphstyle]{|>}{<|} for implicit none and double precision, which otherwise also don't get highlighted the way you might want.
• moredelim=[is][emphstyle2]{|<}{>|} for end program and end module
There is one drawback: this has to be done manually or via search-and-replace with regexes, and you can't simply copy-paste code from your source or include it with \lstinputlisting.
## Code
\documentclass{article}
\usepackage{xcolor}
\definecolor{keycolor}{RGB}{172, 42, 42}
\definecolor{vimvert}{RGB}{46, 139, 87}
\usepackage{listings}
% global parameters
\lstdefinestyle{global}{
basicstyle=\ttfamily\scriptsize\color{black!90},%
stringstyle=\itshape\color{magenta},%
showstringspaces=false,%
keywordstyle=\bfseries\color{keycolor},%
}
% fortran style
\lstdefinestyle{fortranstyle}{
language=Fortran,%
style=global,%
emph={[1]integer, real, character, len, parameter, structure, common},%
emphstyle=[1]\bfseries\color{vimvert},%
emph={[2]program, module, subroutine, function},%
emphstyle=[2]\color{violet}\bfseries\slshape,%
emph={[3]call, true, false},%
emphstyle=[3]\color{teal}\slshape,%
moredelim=[is][emphstyle]{|>}{<|},%
moredelim=[is][emphstyle2]{|<}{>|}%
}
\begin{document}
\begin{lstlisting}[style=fortranstyle]
program calcPi
|>implicit none<|
integer :: i, nbreDansCercle
integer, parameter :: npts = 1000000000
|>double precision<| :: x, y, r, pi
nbreDansCercle = 0
do i = 1, npts, 1
end do
pi = 4.d0 * dble(nbreDansCercle) / dble(npts)
write(*,"('pi = ', F20.17)") pi
|<end program>| calcPi
\end{lstlisting}
\end{document}
## Output
PS: You could also use moredelim=[s][emphstyle]{implicit}{none} (note the missing i: [s] instead of [is]) but this works only if you use implicit with a following none. The same applies to double precision and, most importantly, end program.
It would fail when it encounters an end do, because end is still an opening delimiter: listings would look for a closing program, never find one, and set everything after it in emphstyle.
I think that your last solution in the post scriptum is the best one and the simplest one! Thanks – Ger Sep 18 '12 at 7:03
Actually, moredelim=[s][emphstyle]{implicit}{none} does not work. If I did that, all the text after implicit none is green. – Ger Sep 20 '12 at 8:27
https://uwspace.uwaterloo.ca/handle/10012/9927
Welcome to the Department of Combinatorics and Optimization sub-community.
Research is organized into these collections:
• Combinatorics and Optimization Department: Faculty, student, and staff research arranged by type. This collection does not include graduate Theses & Dissertations.
• Combinatorics and Optimization Theses & Dissertations: Graduate student research required for degree completion.
### Recent Submissions
• #### Digital Signature Schemes Based on Hash Functions
(University of Waterloo, 2017-04-19)
Cryptographers and security experts around the world have been awakened to the reality that one day (potentially soon) large-scale quantum computers may be available. Most of the public-key cryptosystems employed today on ...
• #### Approximation Algorithms for Clustering and Facility Location Problems
(University of Waterloo, 2017-04-06)
Facility location problems arise in a wide range of applications such as plant or warehouse location problems, cache placement problems, and network design problems, and have been widely studied in Computer Science and ...
• #### Efficient Composition of Discrete Time Quantum Walks
(University of Waterloo, 2017-01-20)
It is well known that certain search problems are efficiently solved by quantum walk algorithms. Of particular interest are those problems whose efficient solutions involve nesting of search algorithms. The nesting of ...
• #### On The Density of Binary Matroids Without a Given Minor
(University of Waterloo, 2016-12-21)
This thesis is motivated by the following question: how many elements can a simple binary matroid with no $\mathrm{PG}(t,2)$-minor have? This is a natural analogue of questions asked about the density of graphs in minor-closed ...
• #### Structure in Stable Matching Problems
(University of Waterloo, 2016-12-14)
In this thesis we provide two contributions to the study of structure in stable matching problems. The first contribution is a short new proof for the integrality of Rothblum’s linear description of the convex hull of ...
• #### Symmetries
(University of Waterloo, 2016-10-03)
Automorphisms of graphs, hypergraphs and digraphs are investigated. The invariance of the chromatic polynomial in the rotor effect is disproved. New invariance results are obtained. It is shown that given any integer k ...
• #### On Polynomial-time Path-following Interior-point Methods with Local Superlinear Convergence
(University of Waterloo, 2016-09-30)
Interior-point methods provide one of the most popular ways of solving convex optimization problems. Two advantages of modern interior-point methods over other approaches are: (1) robust global convergence, and (2) the ...
• #### FACES OF MATCHING POLYHEDRA
(University of Waterloo, 2016-09-30)
Let $G = (V, E, \sim)$ be a finite loopless graph, and let $b=(b_i : i \in V)$ be a vector of positive integers. A feasible matching is a vector $x = (x_j : j \in E)$ of nonnegative integers such that for each node $i$ of $G$, the sum of ...
• #### Packing and Covering Odd (u,v)-trails in a Graph
(University of Waterloo, 2016-09-27)
In this thesis, we investigate the problem of packing and covering odd $(u,v)$-trails in a graph. A $(u,v)$-trail is a $(u,v)$-walk that is allowed to have repeated vertices but no repeated edges. We call a trail *odd* ...
• #### Applied Hilbert's Nullstellensatz for Combinatorial Problems
(University of Waterloo, 2016-09-23)
Various feasibility problems in Combinatorial Optimization can be stated using systems of polynomial equations. Determining the existence of a *stable set* of a given size, finding the *chromatic number* of ...
• #### Planar graphs without 3-cycles and with 4-cycles far apart are 3-choosable
(University of Waterloo, 2016-09-16)
A graph G is said to be L-colourable if for a given list assignment L = {L(v)|v ∈ V (G)} there is a proper colouring c of G such that c(v) ∈ L(v) for all v in V (G). If G is L-colourable for all L with |L(v)| ≥ k for all ...
• #### Computing the Residue Class of Partition Numbers
(University of Waterloo, 2016-09-14)
In 1919, Ramanujan initiated the study of congruence properties of the integer partition function $p(n)$ by showing that $$p(5n+4) \equiv 0 \mod{5}$$ and $$p(7n+5) \equiv 0 \mod{7}$$ hold for all integers $n$. These results ...
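(A quick empirical check of these congruences, my addition and not part of the abstract, using SymPy's partition counter:)

```python
# Verify p(5n+4) = 0 (mod 5) and p(7n+5) = 0 (mod 7) for small n.
from sympy import npartitions

assert all(npartitions(5*n + 4) % 5 == 0 for n in range(50))
assert all(npartitions(7*n + 5) % 7 == 0 for n in range(50))
print("congruences hold for n < 50")
```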
• #### SUBMODULAR FUNCTIONS, GRAPHS AND INTEGER POLYHEDRA
(University of Waterloo, 2016-09-12)
This thesis is a study of the faces of certain combinatorially defined polyhedra. In particular, we examine the vertices and facets of these polyhedra. Chapter 2 contains the essential mathematical background in polyhedral ...
• #### Covering Graphs and Equiangular Tight Frames
(University of Waterloo, 2016-09-02)
Recently, there has been huge attention paid to equiangular tight frames and their constructions, due to the fact that the relationship between these frames and quantum information theory was established. One of the problems ...
• #### ADMM for SDP Relaxation of GP
(University of Waterloo, 2016-08-30)
We consider the problem of partitioning the set of nodes of a graph G into k sets of given sizes in order to minimize the cut obtained after removing the k-th set. This is a variant of the well-known vertex separator ...
• #### Cyclically 5-Connected Graphs
(University of Waterloo, 2016-08-29)
Tutte's Four-Flow Conjecture states that every bridgeless, Petersen-free graph admits a nowhere-zero 4-flow. This hard conjecture has been open for over half a century with no significant progress in the first forty years. ...
• #### Unavoidable Minors of Large 5-Connected Graphs
(University of Waterloo, 2016-08-24)
This thesis shows that, for every positive integer $n \geq 5$, there exists a positive integer $N$ such that every $5-$connected graph with at least $N$ vertices has a minor isomorphic to one of thirty explicitly defined ...
• #### On the Strongly Connected Components of Random Directed Graphs with Given Degree Sequences
(University of Waterloo, 2016-08-24)
A strongly connected component of a directed graph G is a maximal subgraph H of G such that for each pair of vertices u and v in H, there is a directed path from u to v and a directed path from v to u in H. A strongly ...
• #### On the effectiveness of isogeny walks for extending cover attacks on elliptic curves
(University of Waterloo, 2016-08-23)
Cryptographic systems based on the elliptic curve discrete logarithm problem (ECDLP) are widely deployed in the world today. In order for such a system to guarantee a particular security level, the elliptic curve selected ...
• #### A Study of Time Representation in a Class of Short Term Scheduling Problems
(University of Waterloo, 2016-08-17)
The problem of scheduling operations has received significant attention from academia and industrial practitioners in the past few decades. A key decision in various scheduling operations problems is when to perform an ...
https://www.shaalaa.com/question-bank-solutions/show-that-points-1-1-3-3-4-3-are-equidistant-plane-5x-2y-7z-8-0-distance-of-a-point-from-a-plane_14307 | # Show that the points (1, –1, 3) and (3, 4, 3) are equidistant from the plane 5x + 2y – 7z + 8 = 0 - Mathematics and Statistics
Show that the points (1, –1, 3) and (3, 4, 3) are equidistant from the plane 5x + 2y – 7z + 8 = 0
#### Solution
Let $p_1$ and $p_2$ be the distances of the points $\hat{i}-\hat{j}+3\hat{k}$ and $3\hat{i}+4\hat{j}+3\hat{k}$ from the plane $\bar{r}\cdot(5\hat{i}+2\hat{j}-7\hat{k})+8=0$.

The distance of a point $A$ with position vector $\bar{a}$ from the plane $\bar{r}\cdot\bar{n}=p$ is given by

$d=\dfrac{|\bar{a}\cdot\bar{n}-p|}{|\bar{n}|}$

$\therefore p_1=\dfrac{|(\hat{i}-\hat{j}+3\hat{k})\cdot(5\hat{i}+2\hat{j}-7\hat{k})-(-8)|}{\sqrt{5^2+2^2+(-7)^2}}$

$=\dfrac{|1(5)-1(2)+3(-7)+8|}{\sqrt{25+4+49}}$

$=\dfrac{|5-2-21+8|}{\sqrt{78}}=\dfrac{|-10|}{\sqrt{78}}=\dfrac{10}{\sqrt{78}}$

and $p_2=\dfrac{|(3\hat{i}+4\hat{j}+3\hat{k})\cdot(5\hat{i}+2\hat{j}-7\hat{k})-(-8)|}{\sqrt{5^2+2^2+(-7)^2}}$

$=\dfrac{|3(5)+4(2)+3(-7)+8|}{\sqrt{25+4+49}}$

$=\dfrac{|15+8-21+8|}{\sqrt{78}}=\dfrac{10}{\sqrt{78}}$

$\therefore p_1 = p_2$
Hence, points are equidistant from the plane.
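A quick numeric check of the two distances (my addition, not part of the original solution):

```python
# Both points are 10/sqrt(78) ~ 1.132 away from the plane 5x + 2y - 7z + 8 = 0.
import numpy as np

n = np.array([5, 2, -7])  # normal vector of the plane
dist = lambda pt: abs(pt @ n + 8) / np.linalg.norm(n)
print(dist(np.array([1, -1, 3])), dist(np.array([3, 4, 3])))
```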
https://chemistry.stackexchange.com/questions/72222/ph-probe-bulb-what-is-happening-within-the-glass | # pH probe bulb - what is happening within the glass?
I am trying to understand how the glass bulb of a pH electrode of a pH meter works - the glass bulb itself. Not the reference electrode or the rest of the electrode (HCl, Ag/AgCl wire, etc...), the math, or the equilibrium yet. For this question just the glass bulb of the pH electrode.
Here is what I have learned so far. This is my current understanding. I'm not saying it is right - but it's where I am right now.
The thin-walled glass bulb has a conductive solution inside, and the outside should be kept in liquid at all times as well. This hydrates a thin layer of the glass on the outside and the inside. I assume these layers are manufactured differently; otherwise, the glass should become uniformly hydrated after a long enough time. It is important that the middle layer of the glass retains a very low conductivity so that a potential difference can be maintained; there are likely other reasons as well.
The glass is amorphous and in this case, the exterior layers are somewhat porous, so there is a large volume of Si-O groups exposed to the solution. Protons will stick to these groups and establish a negative charge on the outside of the glass. The number is related to the pH or hydronium concentration of the solution on the outside of the electrode.
edit: I have just started to read this early discussion, where the idea is raised that the glass itself may behave as a sort of buffer:
Hughes (3) has pointed out that the hydrogen ion concentration in the glass phase may be held relatively constant by the buffer action of the glass which is a mixture of the salt of a weak acid $(\ce{Na2SiO3})$ with the anhydride of that acid (excess $\ce{SiO2})$.
Note: The hydrated layer is also called a "gel layer", but it is not clear if this is formed naturally as the glass hydrates, or if there is a special gel-enabling material applied to each surface during manufacture.
1. Do these have to be specially prepared layers of more porous, hydratable glass on the inside and outside of the glass bulb? If so, roughly speaking how is this done? If not, what does limit the depth of the hydrated layer?
2. When inserted into an acid/base solution, is it just protons diffusing into the hydrated layer of the pH probe bulb by jumping between Si-O sites, or is it the hydronium ions in solution that is diffusing into the glass?
3. Why is it this often called an ion exchange process? (e.g. not in the Mettler link but in the other two links below, and several random textbooks pulled from a library shelf). Are there Li or Na ions in the glass that are moving? What is being "exchanged"?
below: From Theory and Practice of pH Measurement.
below: From The Glass pH Electrode by Petr Vanýsek.
• This question is a little complex, but I believe that understanding one or two underlying processes within the glass is all that I need here. – uhoh Apr 8 '17 at 23:56
• electrochem.org/dl/interface/sum/sum04/IF6-04-Pages19-20.pdf This short article gives some useful information for you. In particular, I believe it addresses the third part of your original question. 1st page, right around equation 2: "The exchange of hydronium (or written as proton, H+) between the solid membrane and the surrounding solution, and the equilibrium nature of this exchange, is the key principle of H3O+ sensing. " Equation 2 shows that the ion exchange is with the silicon of the glass membrane. – Tyberius Apr 21 '17 at 4:27
• @Tyberius Yep, that's the pdf I've linked to in the question and the source of the figure of the bulb labeled Figure 1, and it is one of the statements there that are bothering me and brought me here to get expert help. I think it is somewhat vague. I am not sure it actually clears up beyond all doubt that hydronium ions do all of the diffusing, and not just the protons, and 3. asks about exchange, and the metal ions within the glass (Li, Na) itself. After all The Chalkboard magazine column is not intended to be a scholarly reference source. – uhoh Apr 21 '17 at 5:15
• @Tyberius The glass has four interfaces created by three layers, and I'm also asking about diffusion and exchange within the volume of the two outer hydrated layers. If I dip a probe in a more acidic solution, is it the hydronium ions that are physically diffusing all the way into the hydrated layer, or in that one second of time as the pH meter updates to the new, correct value, is it really just protons jumping from one SiO site to the next? The dynamics are different. 3. asks about exchange, and the metal ions within the glass (Li, Na) itself - presumably the two hydrated layers. – uhoh Apr 21 '17 at 5:20
• Have you looked at the Wikipedia page for "glass electrode"? References 8 through 12 cited in the "Metallic function..." section might be worth tracking down. en.wikipedia.org/wiki/Glass_electrode – J. Ari Apr 27 '17 at 17:52
My reference for all information and pictures is Harris' Quantitative Chemical Analysis, 9th ed., pp 347-9. I think it'll be worth your while to consult those pages, but I'll try to summarize the important points here.
An ion-selective electrode is characterized by a thin membrane that, well, selectively binds ions. The glass electrode is an ion-selective electrode for $\ce{H+}$ made of amorphous silicate glass, which consists of connected $\ce{SiO4}$ tetrahedra.
Presumably, no special preparative techniques are required for the glass, and the depth of the hydrated gel layer is mediated by the strength and range of the intermolecular interactions between water and the glass.
Protons are the main ions that bind to the layer, leading to the selectivity of the electrode. They diffuse between the solution and the hydrated gel layer and displace the metal ions originally present on the surface of the glass, which describes an ion-exchange process. Note that they cannot, however, diffuse through the inner glass layer.
A few side remarks.
• Equilibrium is reached when the favorable binding of protons to the glass surface is balanced by the unfavorable electrostatic repulsion and chemical potential gradient that result from diffusion into the hydrated gel layer. This provides an equation relating the potential difference to the pH of the solution and allows for $\ce{pH}$ measurement; a minimal numeric sketch of this ideal response follows after this list.
• Something has to be able to move through the inner layer of the glass membrane to conduct a current and hence allow for a measurement of the potential difference. It turns out that sodium ions can move through this inner layer, but only sluggishly---the resistance of the glass membrane is about $10^8\,\Omega$.
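Here is the minimal sketch of the ideal Nernstian response mentioned in the first remark (my own illustration; the calibration offset E0 is an arbitrary assumed value, and real electrodes deviate somewhat from the ideal slope):

```python
# Ideal glass-electrode response: E changes by about -59.16 mV per pH unit at 25 C.
import math

R, F, T = 8.314, 96485.0, 298.15  # J/(mol K), C/mol, K
slope = R * T * math.log(10) / F  # ~0.05916 V per pH unit
E0 = 0.0                          # assumed calibration offset (V)
E = lambda pH: E0 - slope * pH
print(f"{slope*1e3:.2f} mV/pH; E(pH 4) - E(pH 7) = {(E(4) - E(7))*1e3:.1f} mV")
```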
• As you've mentioned, the electrical resistance of the glass is of the order of 100 MΩ or more, and is very sensitive to temperature, thus the need for a very high input impedance amplifier. However it does deliver a small current (nA). I had thought that the conductivity of the glass is due to electron (or hole) carriers, not the movement of sodium ions. My thinking was that ionic conductivity might "use up" sodium since its not necessarily replenished by the solutions on the outside or AgCl inside. Perhaps for Na sensitive ionic probes the situation is different. – uhoh Aug 7 '17 at 1:30
• I'm glad to help. (1) Yes, they are; I've updated my post. (2) I don't know enough about this to comment, but Harris does state that sodium ions are responsible for current: "The $\ce{H+}$-sensitive membrane may be thought of as two surfaces electrically connected by $\ce{Na+}$ transport." – a-cyclohexane-molecule Aug 7 '17 at 2:50
• Oops, sorry, forgot to write @uhoh. – a-cyclohexane-molecule Aug 7 '17 at 3:02
• Looks great, thank you for all this work! Below my meta question Does anybody here know how a pH probe's glass bulb electrode works? @Martin-マーチン commented "Sometimes it just takes more time. Don't give up just yet." :) +n! – uhoh Aug 7 '17 at 5:59
I guess H+ ions are able to diffuse through the glass layer. After all, if the glass softens at 900 K, then the probability of reaching or exceeding the activation energy for diffusion of an ion (H+, Na+) at 300 K is about exp(-900/300) = 0.05, which is quite high.
https://knowridge.com/2019/01/physicists-measure-weak-force-inside-atoms-for-first-time/ | # Physicists measure ‘weak force’ inside atoms for first time
Researchers have reported the first measurements of the weak interaction between protons and neutrons inside an atom.
The detection of the elusive force verifies a prediction of the Standard Model, the most widely accepted model explaining the behavior of three of the four known fundamental forces in the universe.
“This observation determines the most important component of the weak interaction between the neutron and the proton—and also between the neutron and all other nuclei,” says lead author W. Michael Snow, a professor in the Indiana University-Bloomington College of Arts and Sciences’ physics department and the director of the university’s Center for Spacetime Symmetries.
Snow is also a cospokesperson on the NDPGamma Experiment at Oak Ridge National Laboratory, where researchers conducted the experiments.
“The result deepens our understanding of one of the four fundamental forces of nature,” he adds.
These four forces are the strong force, electromagnetism, the weak force, and gravity. Protons and neutrons are made of smaller particles called quarks, which the strong force binds together. The weak force acts over the tiny distances inside and between protons and neutrons.
Inside the atom
To detect the weak interaction inside protons and neutrons, the experiment's leaders used a device called NPDGamma at Oak Ridge National Laboratory that controls the spin direction of cold neutrons generated by the laboratory's Spallation Neutron Source.
After the angular momentum, or spin, of these neutrons lined up, the team smashed them into protons in a liquid hydrogen target to produce gamma rays.
“The goal of the experiment was to isolate and measure one component of this weak interaction, which manifested as gamma rays that could be counted and verified with high statistical accuracy,” says coauthor David Bowman, team leader for neutron physics at Oak Ridge. “You have to detect a lot of gammas to see this tiny effect.”
Any “lopsidedness” in the direction of the resulting rays can only come from the weak force between the protons and neutrons.
By counting more gamma ray emissions opposite to the neutron spin than along the neutron spin, the researchers observed the influence of the weak interaction. The small size of this lopsidedness, about 30 parts per billion, is the smallest gamma asymmetry ever measured.
Researchers conducted the experiments to detect the weak force over nearly 20 years, with Snow playing a role in the work since the beginning.
“I’ve been involved in the experiment since the original proposal almost two decades ago,” says Snow, whose work on the project has spanned two major phases, including an initial phase that took place at Los Alamos National Laboratory.
What’s next?
Next, Snow is eager to delve deeper into new questions the recently reported study prompted, including exploring the connection between the weak force between the neutrons and protons and the strong force between the quarks inside them.
As part of this effort, researchers plan to search for the effect of the weak interaction on slow neutron spin rotation in liquid helium.
“There is a theory for the weak force between the quarks inside the proton and neutron, but the way that the strong force between the quarks translates into the force between the proton and the neutron is not fully understood,” says Snow. “That’s still an unsolved problem.”
He compared the measurement of the weak force in relation to the strong force to a kind of tracer, similar to a tracer in biology that reveals a process of interest in a system without disturbing it.
“The weak interaction allows us to reveal some unique features of the dynamics of the quarks within the nucleus of an atom,” Snow adds.
The NPDGamma result also helps enable a new search for possible violations of time reversal symmetry. This experiment, called the Neutron OPtics Time Reversal EXperiment, NOPTREX, will address the mystery of why there is more matter than antimatter in the universe. Snow is the cospokesperson for NOPTREX.
The paper appears in the journal Physical Review Letters.
Source: Indiana University.
http://math.stackexchange.com/questions/109563/uniqueness-of-compact-topology-for-a-group/109569 | # Uniqueness of compact topology for a group
Suppose $G$ is a compact $T_2$ group. Can there be other compact $T_2$ topologies on $G$ which also turn $G$ into a topological group? ($T_2$ refers to the Hausdorff separation axiom)
The topology of a compact Hausdorff space is maximal compact and minimal Hausdorff; that is, no strictly finer topology is compact, and no strictly coarser topology is Hausdorff. So if you have another compact Hausdorff topology, then it is neither finer nor coarser than the original one. – Mariano Suárez-Alvarez Feb 15 '12 at 9:00
It may be worth stating that if you pick a topology once and for all and ask about uniqueness of smooth structures (if it has any at all!), then the answer is yes. – Jason DeVito Feb 15 '12 at 13:49
Take the circle group $G=S^1=\mathbb R/\mathbb Z$. Any non-continuous automorphism of $\mathbb R$ which fixes pointwise the subgroup $\mathbb Z$ passes to the quotient and gives an automorphism $f$ of the abstract group $G$ which is not continuous. Now define a topology on $G$ so that a set $U$ is open iff $f(U)$ is open in the usual topology. This new topology is of course Hausdorff and compact, but it is different from the usual topology.
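For readers wondering where such a non-continuous automorphism comes from, here is a hedged sketch (it requires the axiom of choice): view $\mathbb R$ as a vector space over $\mathbb Q$ and choose a Hamel basis $B$ with $1\in B$. Let $g$ be the $\mathbb Q$-linear map that swaps two basis elements $b,b'\in B\setminus\{1\}$ and fixes the rest of $B$. Then $g$ is an additive automorphism of $\mathbb R$ fixing $\mathbb Q$ (hence $\mathbb Z$) pointwise, and it cannot be continuous, since a continuous additive map fixing $\mathbb Q$ is the identity on all of $\mathbb R$.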
Interesting: the resulting topological group is isomorphic (as a topological group) to the original one, yet the topology on the set $G$ is different. But I wonder whether I could somehow prevent the implicit use of the axiom of choice. One idea would be to prescribe the Borel $\sigma$-algebra of a second-countable space and only allow topologies whose open sets belong to that $\sigma$-algebra. – Thomas Klimpel Feb 15 '12 at 15:50
https://research.nsu.ru/ru/publications/search-for-rare-decays-of-z-and-higgs-bosons-to-j-%CF%88-and-a-photon- | # Search for rare decays of Z and Higgs bosons to J / ψ and a photon in proton-proton collisions at √s = 13 TeV
Research output: Scientific publications in periodicals › Article › Peer review
8 Citations (Scopus)
## Abstract
A search is presented for decays of Z and Higgs bosons to a J/ψ meson and a photon, with the subsequent decay of the J/ψ to μ⁺μ⁻. The analysis uses data from proton-proton collisions with an integrated luminosity of 35.9 fb⁻¹ at √s = 13 TeV collected with the CMS detector at the LHC. The observed limit on the Z → J/ψγ decay branching fraction, assuming that the J/ψ meson is produced unpolarized, is 1.4 × 10⁻⁶ at 95% confidence level, which corresponds to a rate higher than expected in the standard model by a factor of 15. For extreme-polarization scenarios, the observed limit changes from −13.6% to +8.6% with respect to the unpolarized scenario. The observed upper limit on the branching fraction for H → J/ψγ where the J/ψ meson is assumed to be transversely polarized is 7.6 × 10⁻⁴, a factor of 260 larger than the standard model prediction. The results for the Higgs boson are combined with previous data from proton-proton collisions at √s = 8 TeV to produce an observed upper limit on the branching fraction for H → J/ψγ that is a factor of 220 larger than the standard model value.
Original language: English
Article number: 94
Number of pages: 27
Journal: European Physical Journal C
Volume: 79
Issue number: 2
DOI: https://doi.org/10.1140/epjc/s10052-019-6562-5
Publication status: Published - 30 Jan 2019
## Fingerprint
Dive into the research topics of "Search for rare decays of Z and Higgs bosons to J / ψ and a photon in proton-proton collisions at √s = 13 TeV". Together they form a unique semantic fingerprint.
http://smurf.mimuw.edu.pl/node/1766 | ## Classical propositional logic
A formula of propositional logic is in the conjunctive normal form (CNF) if it is a conjunction of (possibly many) disjunctions of (possibly many) propositional variables and negated propositional variables. E.g., $$(p_1\lor\lnot p_2\lor p_3)\land(p_2\lor p_4\lor \lnot p_5)\land(\lnot p_1\lor p_2)$$ is a CNF formula.
A formula of propositional logic is in $$k$$-CNF if it is in CNF and each disjunction has at most $$k$$ disjuncts (variables or their negations). The formula given above is in 3-CNF, but not in 2-CNF.
A formula of propositional logic is in the disjunctive normal form (DNF) if it is a disjunction of (possibly many) conjunctions of (possibly many) propositional variables and negated propositional variables.
Exercise 1
Show that for each propositional formula $$\varphi$$ there exists a propositional formula $$\psi$$ in DNF, equivalent to $$\varphi$$, i.e., $$\varphi\leftrightarrow\psi$$ is a tautology.
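A minimal sketch of the standard proof idea, reading a DNF off the truth table of $$\varphi$$, one conjunction per satisfying valuation (the string encoding of literals is an illustrative choice, not fixed by the exercise):

```python
from itertools import product

def dnf_from_truth_table(variables, f):
    """f maps a tuple of truth values to a truth value; returns an
    equivalent DNF, one conjunction per satisfying valuation."""
    terms = []
    for vals in product([False, True], repeat=len(variables)):
        if f(vals):
            lits = [v if b else f"~{v}" for v, b in zip(variables, vals)]
            terms.append("(" + " & ".join(lits) + ")")
    # the empty disjunction would need a falsum constant
    return " | ".join(terms) if terms else "FALSE"

print(dnf_from_truth_table(["p", "q"], lambda v: v[0] != v[1]))
# -> (~p & q) | (p & ~q), a DNF equivalent of exclusive or
```

Exercise 2 is dual: read a CNF off the falsifying valuations instead.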
Exercise 2
Show that for each propositional formula $$\varphi$$ there exists a propositional formula $$\psi$$ in CNF, equivalent to $$\varphi$$, i.e., $$\varphi\leftrightarrow\psi$$ is a tautology.
Exercise 3
Show that for each propositional formula $$\varphi$$ in CNF there exists a propositional formula $$\psi$$ in 3-CNF such that $$\psi$$ is satisfiable if and only if $$\varphi$$ is satisfiable.
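A hedged worked example of the standard splitting step: a clause $$p_1\lor p_2\lor p_3\lor p_4$$ can be replaced by $$(p_1\lor p_2\lor x)\land(\lnot x\lor p_3\lor p_4)$$ for a fresh variable $$x$$; a satisfying valuation of either formula yields one of the other, and iterating the step brings every clause down to at most three literals.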
Exercise 4
Give a polynomial time algorithm for the following decision problem:
Input: Propositional formula $$\varphi$$ in DNF.
Question: Is $$\varphi$$ satisfiable?
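A minimal sketch of such an algorithm (the integer encoding of literals — $$k$$ for $$p_k$$, $$-k$$ for $$\lnot p_k$$ — is an illustrative assumption): a DNF is satisfiable iff some conjunct is clash-free, i.e. contains no variable together with its negation.

```python
def dnf_satisfiable(conjuncts):
    """conjuncts: iterable of iterables of nonzero ints (k = p_k, -k = its negation)."""
    return any(
        all(-lit not in term for lit in term)   # no complementary pair
        for term in map(set, conjuncts)
    )

# (p1 & ~p2) | (p2 & ~p2): the first conjunct is clash-free, so satisfiable.
print(dnf_satisfiable([{1, -2}, {2, -2}]))  # True
print(dnf_satisfiable([{1, -1}]))           # False
```

This runs in time linear in the size of the formula, up to set-lookup overhead.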
Exercise 5
Give a polynomial time algorithm for the following decision problem:
Input: Propositional formula $$\varphi$$ in 2-CNF.
Question: Is $$\varphi$$ satisfiable?
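A minimal sketch of the classic linear-time approach via the implication graph (the clause encoding and the use of networkx for strongly connected components are illustrative assumptions): each clause $$a\lor b$$ contributes edges $$\lnot a\to b$$ and $$\lnot b\to a$$, and the formula is satisfiable iff no variable lies in the same strongly connected component as its negation.

```python
import networkx as nx

def two_sat_satisfiable(clauses):
    """clauses: list of 2-tuples of nonzero ints (k = p_k, -k = its negation),
    e.g. [(1, -2), (2, 3)] for (p1 or ~p2) and (p2 or p3)."""
    g = nx.DiGraph()
    for a, b in clauses:
        g.add_edge(-a, b)   # ~a implies b
        g.add_edge(-b, a)   # ~b implies a
    for comp in nx.strongly_connected_components(g):
        if any(-lit in comp for lit in comp):
            return False    # some x and ~x are equivalent: contradiction
    return True

assert two_sat_satisfiable([(1, 2), (-1, 2), (1, -2)])
assert not two_sat_satisfiable([(1, 2), (-1, 2), (1, -2), (-1, -2)])
```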
Exercise 6
We consider formulas built using only the connectives of conjunction and disjunction.
For such a formula $$\varphi$$ let $$\hat{\varphi}$$ denote its dualisation, i.e., the formula obtained from $$\varphi$$ by replacing every occurrence of $$\wedge$$ by $$\vee$$ and every occurrence of $$\vee$$ by $$\wedge$$.
* Prove that $$\varphi$$ is a tautology if and only if $$\lnot\hat{\varphi}$$ is a tautology.
* Prove that $$\varphi\leftrightarrow\psi$$ is a tautology if and only if $$\hat{\varphi}\leftrightarrow\hat{\psi}$$ is a tautology.
* Propose a method to define dualisation for formulas additionally containing the logical constants $$\bot$$ and $$\top$$, such that the above equivalences remain valid.
Exercise 7
Prove that for any function $$f:\{0,1\}^k\to\{0,1\}$$ there exists a formula $$\varphi$$ using only the connectives $$\to$$ and $$\bot$$ and variables from the set $$\{p_1,\ldots, p_k\}$$ with the property that for any valuation $$\varrho$$ the following equality holds:
$$[[\varphi]]_\varrho = f(\varrho(p_1),\ldots, \varrho(p_k))$$.
(In other words, the formula $$\varphi$$ defines the function $$f$$.)
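A hedged hint: the pair $$\{\to,\bot\}$$ already expresses the other classical connectives, e.g. $$\lnot p \equiv p\to\bot$$, $$p\lor q \equiv (p\to\bot)\to q$$ and $$p\land q \equiv (p\to(q\to\bot))\to\bot$$, so it suffices to express $$f$$ through $$\lnot,\lor,\land$$, for instance via the DNF construction of Exercise 1.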
Exercise 8
Consider an infinite set of lads, each of whom has a finite number of fiancées. Moreover, for each $$k\in \mathbb{N}$$, any $$k$$ lads have at least $$k$$ fiancées. Demonstrate that it is possible to marry each lad to one of his fiancées without committing bigamy.
Exercise 9
Let $$k$$ be a fixed natural number. Prove, using the compactness theorem, that if every finite subgraph of an infinite graph $$G=\langle V,E\rangle$$ is $$k$$-colorable, then $$G$$ itself is $$k$$-colorable, too.
Exercise 10
For the formula $$\gamma = r\leftrightarrow (p_1\lor p_2)$$ the following equivalence holds: $$\varrho\models\gamma$$ if and only if $$\varrho(r)=\max(\varrho(p_1),\varrho(p_2))$$.
Investigate whether there exists a set of formulas $$\Gamma$$ such that $$\varrho\models\Gamma$$ if and only if $$\varrho(r)=\max_{n\in\mathbb{N}}(\varrho(p_n))$$.
Exercise 11
Is the sequent $$\{p,q\to p,\lnot q\}\vdash\{p,q\}$$ provable in the Gentzen system for propositional logic?
Exercise 12
Decide whether the following sequents are provable in the Gentzen system for propositional logic:
* $$(p\to q) \lor (q\to p)$$
* $$(p\to ( q \to p)) \to p$$
Exercise 13
In the Gentzen system, the sequent $$\Gamma,p\vdash\Delta,p$$ is an axiom, where $$p$$ is a propositional variable. Prove that every sequent of the form $$\Gamma,\varphi\vdash\Delta,\varphi$$ is provable in the Gentzen system. What can you assert about the size of this proof?
## Three-valued propositional logics
Exercise 14
A logic $$L$$ is called monotonic if the conditions $$\Delta\models\varphi$$ and $$\Gamma\supseteq\Delta$$ imply $$\Gamma\models\varphi.$$
For the three-valued Sobociński logic we define that $$\Delta\models\varphi$$ iff for every valuation of propositional variables into $$\{0,\frac12,1\}$$: if the values of all sentences in $$\Delta$$ are 1, then the value of $$\varphi$$ is 1, too.
Is this logic monotonic?
Exercise 15
Answer the same question as in Exercise 14 for the logics of Heyting-Kleene-Łukasiewicz and Bochvar, as well as for the logic of lazy (short) Pascal evaluation.
Exercise 16
What is the computational complexity of the following decision problem:
Given: A formula $$\varphi$$ of propositional logic.
Question: Is there a valuation $$\varrho$$ of propositional variables into $$\{0,\frac12,1\}$$ such that in the three-valued logic of Bochvar $$[[\varphi]]_\varrho=1$$?
## Intuitionistic propositional logic
Exercise 17
Prove the following formulas in the Natural Deduction system for intuitionistic logic:
* $$p \to \neg \neg p$$
* $$\neg (p \lor q) \to\neg p \land\neg q$$
* $$\neg p \land\neg q \to\neg (p \lor q)$$
* $$\neg p \lor\neg q \to\neg (p \land q)$$
Exercise 18
Prove that the following formulas are not tautologies of intuitionistic logic; use Kripke models:
* $$((p\to q) \to p) \to p$$
* $$\neg (p \land q) \to (\neg p \lor\neg q)$$
Exercise 19
Prove the inexpressibility of connectives in intuitionistic logic:
* $$\lor$$ cannot be expressed using only $$\land$$, $$\to$$ and $$\bot$$
* $$\land$$ cannot be expressed using only $$\lor$$, $$\to$$ and $$\bot$$
* $$\to$$ cannot be expressed using only $$\lor$$, $$\land$$ and $$\bot$$
http://www.frontierlattices.ch/indepth/elbm | ## Entropic Lattice Boltzmann Method (ELBM)
##### Lattice Boltzmann methods provide access to high Reynolds numbers by keeping low the Mach number and the Knudsen number, the two independent parameters of the simulation
Lattice Boltzmann methods (LBM) were introduced in the late 1980s and early 1990s as a new approach to CFD, and began to find wide acceptance during the past decade. In LBM, one does not attempt a direct discretization of the governing fluid dynamics equations for mass, momentum and energy; instead, a kinetic equation of the Boltzmann type for a controlled number of discrete velocities is solved numerically on a regular grid. The entropic LBM is an advancement of LBM which satisfies the Second Law of thermodynamics (the entropy of the system never decreases).
The simplest entropic lattice Boltzmann equation for incompressible flow simulations can be understood with this example. Let $v_i$, $i=1,\dots,Q$ be a set of discrete velocities representing links of a regular $d$-dimensional lattice, and let $f_i(x,t)$ be the populations of the velocities at the node $x$ at the discrete time $t$. Using the notion of the entropy $H(f)=\sum_i f_i\ln(f_i/W_i)$ (the weights $W_i$ depend on the choice of the lattice), the equilibrium $f_i^{\rm eq}(\rho,u)$ (the analog of the Maxwell distribution) is derived as the minimizer of $H$ under fixed local (nodal) density $\rho=\sum_i f_i$ and momentum $\rho u=\sum_i f_i v_i$. The entropic lattice Bhatnagar-Gross-Krook equation (ELBGK) describes the dynamics of the populations due to the free streaming of particles along the directions of the lattice links and the local relaxation to the equilibrium at the nodes:
$f_i(x+v_i,t+1)-f_i(x,t)=\alpha\beta(f_i^{\rm eq}-f_i)$
where $\beta$ is a parameter related to the kinematic viscosity, while the function $\alpha$ maintains the entropy balance in the relaxation step at every grid node and is found as the non-trivial root of the equation (termed the entropy estimate)
$$H(f+\alpha(f^{\rm eq}-f))=H(f)$$
The entropy estimate tells us that the entropy value should stay constant at vanishing viscosity ($\beta=1$). This condition defines $\alpha$ as the maximal step of the over-relaxation that does not violate the Second Law (a decrease of the $H$-function in the relaxation). The entropy estimate results in a confinement of the populations within the entropy contour during the relaxation, and leads to the unconditional stability of ELBGK. Observe that the entire nonlinearity (the collision) in the ELBGK equation is on the right-hand side and is completely local in space, while the propagation in space (left-hand side) is linear and exact. Furthermore, if the simulation is fully resolved, the entropy estimate leads to $\alpha=2$, and the ELBGK equation becomes its predecessor, the LBGK equation
$f_i(x+v_i,t+1)-f_i(x,t)=2\beta(f_i^{\rm eq}-f_i)$
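As a minimal numerical sketch of the entropy estimate above (the populations, lattice weights and bracketing interval are placeholder assumptions; any scalar root finder can replace SciPy's brentq):

```python
import numpy as np
from scipy.optimize import brentq

def H(f, W):
    """Discrete H-function: H(f) = sum_i f_i ln(f_i / W_i)."""
    return np.sum(f * np.log(f / W))

def entropic_alpha(f, feq, W, alpha_hi=2.5):
    """Nontrivial root of H(f + alpha*(feq - f)) = H(f).
    alpha = 0 is always a trivial root; a resolved simulation gives
    alpha close to 2. Assumes populations stay positive on the bracket."""
    g = lambda a: H(f + a * (feq - f), W) - H(f)
    try:
        # g(1) = H(feq) - H(f) <= 0, since the equilibrium minimizes H,
        # so the sign change sits beyond the mirror point alpha = 1.
        return brentq(g, 1.0, alpha_hi)
    except ValueError:
        return 2.0  # no bracketed root found: fall back to the LBGK value

# one relaxation step at a node then reads: f <- f + alpha * beta * (feq - f)
```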
With this stunningly simple formulation, the ELBM overcomes the stability problems of regular lattice Boltzmann method, while still retaining its locality, efficiency and flexibility.
With the proper choice of the lattice, the LBGK (or resolved ELBGK) recovers the Navier-Stokes equation with the kinematic viscosity $\nu=c_{\rm s}^2\left(\frac{1}{2\beta}-\frac{1}{2}\right)$, where $c_{\rm s}$ is the lattice speed of sound (a constant depending on the choice of the lattice), so that the relaxation parameter $\beta$ can be matched to the desired value. Note that the kinematic viscosity in the LBM formulation is independent of the time step, which is one of the major findings of the method, enabling it to reach the low viscosities that lead to high-Reynolds-number flow regimes.
While the LBGK cannot reach this limit due to disruptive numerical instabilities at the sub-grid scale, the ELBGK remains unconditionally stable because it respects the Second Law of thermodynamics.
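A one-line helper inverting the viscosity relation above (the D2Q9 value $c_{\rm s}^2 = 1/3$ in lattice units is an assumed example):

```python
def beta_from_viscosity(nu, cs2=1.0 / 3.0):
    """Invert nu = cs2 * (1/(2*beta) - 1/2) for the relaxation parameter."""
    return 1.0 / (2.0 * nu / cs2 + 1.0)

print(beta_from_viscosity(1e-4))  # -> ~0.9994: the low-viscosity regime
```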
https://www.eevblog.com/forum/chat/accidental-creation-of-ac-power-using-a-9-volt-battery!/msg1204818/ | Author Topic: accidental creation of AC power using a 9 volt Battery?! (Read 12663 times)
t_ryner
« Reply #50 on: May 10, 2017, 04:20:06 am »
That was one of the reasons they got rid of the penny here. It cost more than a penny to make one.
Really they need to change the way sales taxes work, they should be part of the final price. Stores could charge even amounts for stuff and what you see is what you pay. Could pretty much get rid of nickels and dimes too. Not sure how it would work for natives that have status cards though, could just round to the closest 25 cents when deducting the tax from the price.
There have been efforts to do the same in the US, but unfortunately they have not been as successful as in Canada. I can understand the nostalgia of the penny but it really is a pointless denomination these days. When I was a kid you could actually buy a gumball from a dispenser in the mall for a penny but it's been a long time since you could go buy anything that I'm aware of. I mean I guess you could get a few 0603 resistors for a penny but you'd probably have to buy hundreds of them to get that price. Some fret that without pennies prices will all be rounded up and it will add up, ok that's true, but then why not have 1/10th pennies because even now prices are rounded up. All the gas stations sell fuel with that stupid 9/10th cent, it's stupid.
I also completely agree that taxes should all be rolled into the price printed on the shelves in stores, it would make it so much easier to just show the price out the door and then for situations where one is tax exempt it could have the details in small text on the tag showing how much of the displayed price is tax. Same for tips in restaurants, I absolutely hate tipping, I do it anyway because that's how the wage structure is set up but seriously, if it's expected to tip, just roll it into the menu prices and pay the staff accordingly. Don't make me guess how much I should pay.
So many people pay by debit or credit that hard change might eventually be phased out (in the long run). You made a good point- why don't people argue for tenth-cent coins for gas? The card is definitely the future. Not as satisfying as a handful of cash though.
Brumby
« Reply #51 on: May 10, 2017, 04:26:54 am »
Is it legal to give/sell Australian coins to someone else? What if you were to send a few dollars worth of coins to somebody in the US and that person decided to shrink, melt or otherwise damage the coins? Are they going to be dragged over to Australia for punishment?
Really though, I suspect the law is one of those things that is on the books to enable punishment in extreme cases but I really doubt it's enforced. I mean can you find a single case of someone getting arrested and fined for smashing, painting or otherwise deliberately damaging a few pennies? Maybe they really are unreasonable and heavy handed there but it seems unlikely given the general personality of most of the Australians I've known.
Giving them to someone else isn't a problem in itself. I don't know if knowing their intention beforehand would constitute an offence.
Certainly, I have not heard of any cases where such an offender has been prosecuted - but that doesn't mean there hasn't been any. While the law in place allows for any level of defacing, I am inclined to believe that the odd coin or three isn't going to ruffle too many feathers - but the nature of the legislation pretty much cuts off any attempts to "skirt the edge".
My interpretation is "do it at your own risk", but you would be reasonably OK with zapping a few for personal use ... but I just wouldn't do it in the first place.
Brumby
« Reply #52 on: May 10, 2017, 04:39:42 am »
That was one of the reasons they got rid of the penny here. It cost more than a penny to make one.
Really they need to change the way sales taxes work, they should be part of the final price. Stores could charge even amounts for stuff and what you see is what you pay. Could pretty much get rid of nickels and dimes too. Not sure how it would work for natives that have status cards though, could just round to the closest 25 cents when deducting the tax from the price.
There have been efforts to do the same in the US, but unfortunately they have not been as successful as in Canada. I can understand the nostalgia of the penny but it really is a pointless denomination these days. When I was a kid you could actually buy a gumball from a dispenser in the mall for a penny but it's been a long time since you could go buy anything that I'm aware of. I mean I guess you could get a few 0603 resistors for a penny but you'd probably have to buy hundreds of them to get that price.
In Australia, they pulled the 1 and 2 cent coins from circulation in 1992 - and the 5 cent is being talked about in the same way.
Quote
Some fret that without pennies prices will all be rounded up and it will add up, ok that's true, but then why not have 1/10th pennies because even now prices are rounded up. All the gas stations sell fuel with that stupid 9/10th cent, it's stupid.
With 5 cents being the smallest increment when paying by cash in Australia, the rounding rules have been operating ever since 1992. It's quite simple. Rounding goes either up or down - to the closest 5 cents. For example: 58c and 59c are rounded to 60c - just as 61c and 62c are rounded down to 60c. 63c goes up to 65c and so on. When paying by card, however, the actual cents are used.
As for selling fuel with prices to 1 decimal place - I don't see any problem with that at all. It's just a RATE at which a product is sold in bulk. The only thing that matters is the total on the pump.
james_s
« Reply #53 on: May 10, 2017, 08:08:20 am »
Well the rounding makes sense to me, but that doesn't stop people from arguing about it. As George Carlin said, a lot of people are #%&ing stupid.
The 10th cent thing on gasoline is just dumb, I mean why do it? The *only* reason is that it's a psychological trick that makes a gallon look cheaper. No other consumer product that I can think of is priced down to 0.1 cent. I mean if you're going to go that far, why not charge $2.839250843/gal? What if I buy just one gallon? Can I have my 0.1 cent back?
Brumby
« Reply #54 on: May 10, 2017, 10:55:55 am »
You don't want to try any foreign currency conversion, then.
Quote
The 10th cent thing on gasoline is just dumb, I mean why do it? The *only* reason is that it's a psychological trick that makes a gallon look cheaper.
There's nothing psychological about two fuel retailers offering petrol for 135.9c/litre and 135.5c/litre. The difference is real. Buy 100 litres and you'll save 40 cents with the cheaper price. Those tenths of a cent become significant when they get multiplied out.
helius
« Reply #55 on: May 10, 2017, 01:30:55 pm »
At gas ("petrol") stations in the US, the price is always plus 9/10 of a cent. That tenths figure is never anything except 9: the digital price signs that you can see from the highway are not physically capable of showing any other digit there. Like many things I expect it is historical. I remember when the stocks pages in the newspaper quoted prices in fractions: $1.56 3/4 etc.
« Reply #56 on: May 10, 2017, 02:39:49 pm »
I guess arcs aren't a good word to describe them. The 90 volts only resulted from passing a pulse through a transformer. I'll try to get a measurement of the voltage. Speaking of flyback transformers, passing 12v through one of the coils in the flyback resulted in a 1cm spark (eventually).
I thought you needed one of those big metal power transistors to make that work. I remember taking a flyback out of a computer monitor that my brother's friend (probably) wasn't using only to find out it wouldn't work without a pretty expensive transistor that will probably soon burn out. Also turns out that he was using it and was pretty pissed. I explained to him that it would work without it and they often put in extra parts like they did with transistor (5 transistors, then 6 then 7 then...) radios. Because more equals more better. You should see a 100 transistor radio! He took me seriously and plugged the remains back in. Unfortunately it popped the breaker before any cool smoke came out. Guess it needed that part.
jimdeane
« Reply #57 on: May 10, 2017, 04:31:22 pm »
Reminds me of the later Polaroid cameras - the kind where you just watched the photo develop, not the old ones where you waited and then peeled the picture off the backer. The film cartridges for those used a flat-pack 6V battery that had PLENTY of life left in it after the film was used, I used to snag up as many of those as I could. After a while they went with a smaller battery - same-size cardboard carrier with the same dimensions as the film, but the bulge in the middle where the cells were was smaller - even THOSE had decent life left after the film was used. Everyone was gunning for the empties though - there was an article by Forrest Mims in Popular Electronics about how useful this battery was because of the cost and the form factor
My first "hacking" experience in my memory is my dad teaching me how to make a flashlight from one of those Polaroid battery packs, a couple of pieces of aluminum foil (or gum wrapper), tape, and a flashlight bulb. It was so cool to take that camping. Not only a flashlight, but one I MADE.
I'll have to thank him for that, it might have been the spark for my interests.
Red Squirrel
« Reply #58 on: May 10, 2017, 05:41:42 pm »
Reminds me of the later Polaroid cameras - the kind where you just watched the photo develop, not the old ones where you waited and then peeled the picture off the backer. The film cartridges for those used a flat-pack 6V battery that had PLENTY of life left in it after the film was used, I used to snag up as many of those as I could. After a while they went with a smaller battery - same-size cardboard carrier with the same dimensions as the film, but the bulge in the middle where the cells were was smaller - even THOSE had decent life left after the film was used. Everyone was gunning for the empties though - there was an article by Forrest Mims in Popular Electronics about how useful this battery was because of the cost and the form factor
My first "hacking" experience in my memory is my dad teaching me how to make a flashlight from one of those Polaroid battery packs, a couple of pieces of aluminum foil (or gum wrapper), tape, and a flashlight bulb. It was so cool to take that camping. Not only a flashlight, but one I MADE.
I'll have to thank him for that, it might have been the spark for my interests.
Actually that's kind of what got my interest in electricity too, my grandpa showed me, using batteries, a wire and a light bulb, and I thought it was the coolest thing. Decided to try the same with a household bulb and the socket and it worked. "unlimited free power!" Got my first 120v shock at like 11 lol.
james_s
« Reply #59 on: May 10, 2017, 06:07:22 pm »
I did roughly the same thing, I think I was 3 or 4, my dad got me a couple of those big cylindrical dry cells with the screw terminals on top, some sockets, knife switches and made me a bunch of wire leads with crimp connectors on the ends. I had hours of fun playing with those.
Later I realized a 6V bulb I had would screw into the candelabra socket in the lights in my parents bedroom, I flipped the switch and it went pop, the whole bulb turned silvery black.
« Reply #60 on: May 10, 2017, 08:25:19 pm »
Remember the flash that would burn out a light bulb every time you used it? So wasteful. How did they make them go out in order?
james_s
« Reply #61 on: May 10, 2017, 09:18:06 pm »
Are you referring to photographic flashbulbs?
Brumby
« Reply #62 on: May 11, 2017, 12:17:21 am »
Remember the flash that would burn out a light bulb every time you used it?
I believe it's this:
Quote
So wasteful.
No more wasteful than the same magnesium ribbon flash bulb used before electronic flashes came of age.
... and those bulbs were a bit easier to use than the original system:
Do you like the manual triggering mechanism of the first two?
Quote
How did they make them go out in order?
Good question.
james_s
« Reply #63 on: May 11, 2017, 12:32:37 am »
Flashbulbs are nothing more than magnesium wool or foil in a glass bulb filled with pure oxygen. They are triggered by a small electric current, even static discharges can do it. I once dropped a flash cube on the carpet at my grandmother's house and half the bulbs fired. I knew a guy who got a nasty burn on his leg because he had a couple flashbulbs in his pocket and one went off.
They were triggered by mechanical contacts in the camera, usually powered by AA batteries for the small stuff while larger professional flash heads often took a pair of C batteries.
CatalinaWOW
« Reply #64 on: May 11, 2017, 12:38:52 am »
There were a lot of different mechanisms for those "bulbs". The ones I remember are:
1. The flashcube had a bulb on each face. The film winding mechanism would rotate a new bulb to the front, and this also put it in contact with the electrical connections. High and low side for each bulb.
2. Another had a long string of bulbs similar to the picture above, and indexed the cartridge through the camera, connecting one bulb at a time to the camera contacts. Again, a pair of contacts for each bulb.
3. Another version of the strip bulbs had a common connection for all bulbs, and a single high for each bulb. A step switch in the camera indexed through the bulbs.
4. Some had some form of steering network in the bulb pack. I don't remember how it worked but believe it was purely passive.
t_ryner
« Reply #65 on: May 11, 2017, 12:40:00 am »
I'm enjoying how much activity is going on on this post- I never expected it to go from whatever I started with to disposable flashbulbs and polaroid cameras!
Flashbulbs are nothing more than magnesium wool or foil in a glass bulb filled with pure oxygen. They are triggered by a small electric current, even static discharges can do it. I once dropped a flash cube on the carpet at my grandmother's house and half the bulbs fired. I knew a guy who got a nasty burn on his leg because he had a couple flashbulbs in his pocket and one went off.
They were triggered by mechanical contacts in the camera, usually powered by AA batteries for the small stuff while larger professional flash heads often took a pair of C batteries.
james_s
« Reply #66 on: May 11, 2017, 12:44:36 am »
I vaguely remember there were also flashbulbs that were purely mechanically triggered. Seems like they had a tiny percussion cap that would fire when struck by a firing pin.
CatalinaWOW
« Reply #67 on: May 11, 2017, 03:14:49 am »
I vaguely remember there were also flashbulbs that were purely mechanically triggered. Seems like they had a tiny percussion cap that would fire when struck by a firing pin.
Yeah, now that you mention it I believe you are right.
Ian.M
« Reply #68 on: May 11, 2017, 08:21:40 am »
Yes. Philips Magicube.
It had a pin protruding from the bulb base and was fired by the pin being struck laterally by a hairpin spring built into the cube for each bulb. The spring was held back by a little metal pin formed from the other end of it and the camera fired it by pushing the spring up to clear the retaining pin when you pressed the shutter release. You could easily fire it manually by prodding the spring.
Some fishing line for tripwires, some matchsticks to prod the springs and a Magicube on a stake made an interesting deterrent for people wandering around at night where they shouldn't be.
6PTsocket
« Reply #69 on: August 26, 2018, 04:17:54 am »
The real high voltage in a CRT TV was generated by the flyback transformer. What you have there is the deflection yoke that magnetically bends the beam across the screen. Scopes did it electrostatically.
http://www.ucolick.org/~gdi/ | ## Latest Results
The Hubble UltraDeep Field 2009 (HUDF09) Project was conceived in 2007. In 2008, through a highly competitive proposal process, the HUDF09 team was awarded 192 orbits (12 days) of observations on the Hubble Space Telescope with the new Wide Field InfraRed Camera (WFC3/IR). More on the HUDF09 project...
## XDF
The Hubble eXtreme Deep Field (XDF) Project has combined 50 days of observations with the Hubble Space Telescope using the Advanced Camera for Surveys Wide Field Channel and the Wide Field Camera 3 InfraRed Channel to create the deepest image of the universe. More on the XDF project ...
# Recent Publications
### The Most Luminous z~9-10 Galaxy Candidates yet Found: The Luminosity Function, Cosmic Star-Formation Rate, and the First Mass Density Estimate at 500 Myr
Oesch, P. A., Bouwens, R. J., Illingworth, G. D., Labbe, I., Smit, R., Franx, M., van Dokkum, P. G., Momcheva, I., Ashby, M. L. N., Fazio, G. G., Huang, J., Willner, S. P., Gonzalez, V., Magee, D., Brammer, G. B., and Skelton, R. E.
We present the discovery of four surprisingly bright (H_160 ~ 26 - 27 mag AB) galaxy candidates at z~9-10 in the complete HST CANDELS WFC3/IR GOODS-N imaging data, doubling the number of z~10 galaxy candidates that are known, just 500 Myr after the Big Bang. These sources were identified in a search over the full CANDELS-Deep dataset, building on our previous analysis of the HUDF09/XDF fields and GOODS-S. Three of these four galaxies are significantly detected at 4.5-6.2sigma in the very deep Spitzer/IRAC 4.5 micron data. Furthermore, the brightest of our candidates (at z=10.2+-0.4) is robustly detected also at 3.6 micron (6.9sigma), revealing a flat UV spectral energy distribution with a slope beta=-2.0+-0.2, consistent with demonstrated trends with luminosity at high redshift. The abundance of these luminous candidates suggests that the luminosity function evolves more significantly in phi_* than in L_* at z>~8. Despite the discovery of these luminous candidates, the cosmic star formation rate density for galaxies with SFR >0.7 M_sun yr^-1 shows an order-of-magnitude increase in only 170 Myr from z ~ 10 to z ~ 8, consistent with previous results. Based on the IRAC detections, we derive galaxy stellar masses at z~10, finding that these luminous objects are typically 10^9 M_sun. This allows for a first estimate of the cosmic stellar mass density at z~10 resulting in log rho* = 4.7^+0.5_-0.9 M_sun Mpc^-3 for galaxies brighter than M_UV~-18. The remarkable brightness, and hence luminosity, of these z~9-10 candidates highlights the opportunity for deep spectroscopy to determine their redshift and nature, and demonstrates the value of additional search fields to understand star-formation in the very early universe.
2013arXiv1309.2280O
### Probing the Dawn of Galaxies at z ~ 9-12: New Constraints from HUDF12/XDF and CANDELS data
Oesch, P. A., Bouwens, R. J., Illingworth, G. D., Labbé, I., Franx, M., van Dokkum, P. G., Trenti, M., Stiavelli, M., Gonzalez, V., and Magee, D.
We present a comprehensive analysis of z > 8 galaxies based on ultra-deep WFC3/IR data. We exploit all the WFC3/IR imaging over the Hubble Ultra-Deep Field from the HUDF09 and the new HUDF12 program, in addition to the HUDF09 parallel field data, as well as wider area imaging over GOODS-South. Galaxies are selected based on the Lyman break technique in three samples centered around z ~ 9, z ~ 10, and z ~ 11, with seven z ~ 9 galaxy candidates, and one each at z ~ 10 and z ~ 11. We confirm a new z ~ 10 candidate (with z = 9.8 ± 0.6) that was not convincingly identified in our first z ~ 10 sample. Using these candidates, we perform one of the first estimates of the z ~ 9 UV luminosity function (LF) and improve our previous constraints at z ~ 10. Extrapolating the lower redshift UV LF evolution should have revealed 17 z ~ 9 and 9 z ~ 10 sources, i.e., a factor ~3× and 9× larger than observed. The inferred star formation rate density (SFRD) in galaxies above 0.7 M_sun yr^-1 decreases by 0.6 ± 0.2 dex from z ~ 8 to z ~ 9, in excellent agreement with previous estimates. From a combination of all current measurements, we find a best estimate of a factor 10× decrease in the SFRD from z ~ 8 to z ~ 10, following (1 + z)^(-11.4 ± 3.1). Our measurements thus confirm our previous finding of an accelerated evolution beyond z ~ 8, and signify a very rapid build-up of galaxies with M_UV < -17.7 mag within only ~200 Myr from z ~ 10 to z ~ 8, in the heart of cosmic reionization. Based on data obtained with the Hubble Space Telescope operated by AURA, Inc., for NASA under contract NAS5-26555.
2013ApJ...773...75O
### A Rest-frame Optical View on z ~ 4 Galaxies. I. Color and Age Distributions from Deep IRAC Photometry of the IUDF10 and GOODS Surveys
Oesch, P. A., Labbé, I., Bouwens, R. J., Illingworth, G. D., Gonzalez, V., Franx, M., Trenti, M., Holden, B. P., van Dokkum, P. G., and Magee, D.
We present a study of rest-frame UV-to-optical color distributions for z ~ 4 galaxies based on the combination of deep HST/ACS+WFC3/IR data with Spitzer/IRAC imaging. In particular, we use new, ultra-deep data from the IRAC Ultradeep Field program (IUDF10), together with previous, public IRAC data over the GOODS fields. Our sample contains a total of ~2600 galaxies selected as B-dropout Lyman-break Galaxies in the HUDF and its deep parallel field HUDF09-2, as well as GOODS-North/South. This sample is used to investigate the UV continuum slopes β and Balmer break colors (J_125 - [4.5]) as a function of rest-frame optical luminosity (using [4.5] to avoid optical emission lines). We find that galaxies at M_z < -21.5 (roughly corresponding to L^*_{z\sim 4}) are significantly redder than their lower luminosity counterparts. The UV continuum slopes and the J_125 - [4.5] colors are well correlated, indicating that the dust reddening at these redshifts is better described by an SMC-like extinction curve, rather than the typically assumed Calzetti reddening. After dust correction, we find that the galaxy population shows mean stellar population ages in the range 10^8.5 to 10^9 yr, with a dispersion of ~0.5 dex, and only weak trends as a function of luminosity. Only a small fraction of galaxies shows Balmer break colors consistent with extremely young ages, younger than 100 Myr. Under the assumption of smooth star-formation histories, this fraction is 12%-19% for galaxies at M_z < -19.75. Our results are consistent with a gradual build-up of stars and dust in galaxies at z > 4 with only a small fraction of stars being formed in short, intense bursts of star-formation. Based on data obtained with the Hubble Space Telescope operated by AURA, Inc. for NASA under contract NAS5-26555. Based on observations with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407.
2013ApJ...772..136O
### UV-Continuum Slopes of >4000 z~4-8 Galaxies from the HUDF/XDF, HUDF09, ERS, CANDELS-South, and CANDELS-North Fields
Bouwens, R. J., Illingworth, G. D., Oesch, P. A., Labbe, I., van Dokkum, P. G., Trenti, M., Franx, M., Smit, R., Gonzalez, V., and Magee, D.
We measure the UV-continuum slope beta for over 4000 high-redshift galaxies over a wide range of redshifts z~4-8 and luminosities from the HST HUDF/XDF, HUDF09-1, HUDF09-2, ERS, CANDELS-N, and CANDELS-S data sets. Our new beta results reach very faint levels at z~4 (-15.5 mag: 0.006 L*(z=3)), z~5 (-16.5 mag: 0.014L*(z=3)), and z~6 and z~7 (-17 mag: 0.025 L*(z=3)). Inconsistencies between previous studies led us to conduct a comprehensive review of systematic errors and develop a new technique for measuring beta that is robust against biases that arise from the impact of noise. We demonstrate, by object-by-object comparisons, that all previous studies, including our own and those done on the latest HUDF12 dataset, suffer from small systematic errors in beta. We find that after correcting for the systematic errors (typically d(beta) ~0.1-0.2) all beta results at z~7 from different groups are in excellent agreement. The mean beta we measure for faint (-18 mag: 0.1L*(z=3)) z~4, z~5, z~6, and z~7 galaxies is -2.03+/-0.03+/-0.06 (random and systematic errors), -2.14+/-0.06+/-0.06, -2.24+/-0.11+/-0.08, and -2.33+/-0.16+/-0.13, respectively. Our new beta values are redder than we have reported in the past, but bluer than other recent results. Our previously reported trend of bluer beta's at lower luminosities is confirmed, as is the evolution to bluer beta's at high redshifts. beta appears to show only a mild luminosity dependence faintward of M(UV,AB) ~ -19 mag, suggesting that the mean beta asymptotes to ~ -2.2 to -2.4 for faint z>~4 galaxies. At z~7, the observed beta's suggest non-zero, but low dust extinction, and they agree well with values predicted in cosmological hydrodynamical simulations.
2013arXiv1306.2950B
### The HST eXtreme Deep Field XDF: Combining all ACS and WFC3/IR Data on the HUDF Region into the Deepest Field Ever
Illingworth, G. D., Magee, D., Oesch, P. A., Bouwens, R. J., Labbe, I., Stiavelli, M., van Dokkum, P. G., Franx, M., Trenti, M., Carollo, C. M., and Gonzalez, V.
The eXtreme Deep Field (XDF) combines data from ten years of observations with the HST Advanced Camera for Surveys (ACS) and the Wide-Field Camera 3 Infra-Red (WFC3/IR) into the deepest image of the sky ever in the optical/near-IR. Since the initial observations on the Hubble Ultra-Deep Field (HUDF) in 2003, numerous surveys and programs, including supernova followup, HUDF09, CANDELS, and HUDF12 have contributed additional imaging data across the HUDF region. Yet these have never been combined and made available as one complete ultra-deep optical and near-infrared image dataset. We do so now for the eXtreme Deep Field (XDF) program. Our new and improved processing techniques provide higher quality reductions of the total dataset. All WFC3 near-IR and optical ACS data sets have been fully combined and accurately matched, resulting in the deepest imaging ever taken at these wavelengths ranging from 29.1 to 30.3 AB mag (5sigma in a 0.35" diameter aperture) in 9 filters. The gains in the optical for the four filters done in the original ACS HUDF correspond to a typical improvement of 0.15 mag, with gains of 0.25 mag in the deepest areas. Such gains are equivalent to adding ~130 to ~240 orbits of ACS data to the HUDF. Improved processing alone results in a typical gain of ~0.1 mag. Our 5sigma (optical+near-IR) SExtractor catalogs reveal about 14140 sources in the full field and about 7121 galaxies in the deepest part of the XDF (the HUDF09 region). The XDF is the deepest image of the universe ever taken, reaching, in the combined image for a flat f_nu source, to 31.2 AB mag 5sigma (32.9 at 1sigma) in a 0.35" diameter aperture.
2013arXiv1305.1931I
### Photometric Constraints on the Redshift of z ~ 10 Candidate UDFj-39546284 from Deeper WFC3/IR+ACS+IRAC Observations over the HUDF
Bouwens, R. J., Oesch, P. A., Illingworth, G. D., Labbé, I., van Dokkum, P. G., Brammer, G., Magee, D., Spitler, L. R., Franx, M., Smit, R., Trenti, M., Gonzalez, V., and Carollo, C. M.
Ultra-deep WFC3/IR observations on the HUDF from the HUDF09 program revealed just one plausible z ~ 10 candidate, UDFj-39546284. UDFj-39546284 had all the properties expected of a galaxy at z ~ 10 showing (1) no detection in the deep ACS+WFC3 imaging data blueward of the F160W band, exhibiting (2) a blue spectral slope redward of the break, and showing (3) no prominent detection in deep IRAC observations. The new, similarly deep WFC3/IR HUDF12 F160W observations over the HUDF09/XDF allow us to further assess this candidate. These observations show that this candidate, previously only detected at ~5.9σ in a single band, clearly corresponds to a real source. It is detected at ~5.3σ in the new H_160-band data and at ~7.8σ in the full 85-orbit H_160-band stack. Interestingly, the non-detection of the source (<1σ) in the new F140W observations suggests a higher redshift. Formally, the best-fit redshift of the source utilizing all the WFC3+ACS (and IRAC+Ks-band) observations is 11.8 ± 0.3. However, we consider the z ~ 12 interpretation somewhat unlikely, since the source would either need to be ~20× more luminous than expected or show very high-EW Lyα emission (which seems improbable given the extensive neutral gas prevalent early in the reionization epoch). Lower-redshift solutions fail if only continuum models are allowed. Plausible lower-redshift solutions require that the H_160-band flux be dominated by line emission such as Hα or [O III] with extreme EWs. The tentative detection of line emission at 1.6 μm in UDFj-39546284 in a companion paper suggests that such emission may have already been found. Based in part on observations made with the NASA/ESA Hubble Space Telescope, obtained by the Space Telescope Science Institute. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555.
2013ApJ...765L..16B
### A Tentative Detection of an Emission Line at 1.6 μm for the z ~ 12 Candidate UDFj-39546284
Brammer, G. B., van Dokkum, P. G., Illingworth, G. D., Bouwens, R. J., Labbé, I., Franx, M., Momcheva, I., and Oesch, P. A.
We present deep WFC3 grism observations of the candidate z ~ 12 galaxy UDFj-39546284 in the Hubble Space Telescope (HST) Ultra Deep Field (UDF), by combining spectroscopic data from the 3D-HST and CANDELS surveys. The total exposure time is 40.5 ks and the spectrum covers 1.10 < λ < 1.65 μm. We search for faint emission lines by cross-correlating the two-dimensional G141 spectrum with the observed H_160 morphology, a technique that is unique to slitless spectroscopy at HST resolution. We find a 2.7σ detection of an emission line at 1.599 μm—just redward of the JH_140 filter—with flux 3.5 ± 1.3 × 10^-18 erg s^-1 cm^-2. Assuming that the line is real, it contributes 110% ± 40% of the observed H_160 flux and has an observed equivalent width >7300 Å. If the line is confirmed, it could be Lyα at z = 12.12. However, a more plausible interpretation, given current results, could be a lower redshift feature such as [O III]λ4959,5007 at z = 2.19. We find two other 3D-HST [O III] emitters within 1000 km s^-1 of that redshift in the GOODS-South field. Additional support for this interpretation comes from the discovery of a bright "[O III] blob" with a secure G141 grism redshift of z = 1.605. This object has a strikingly large observed equivalent width of nearly 9000 Å that results in similar "dropout" colors as UDFj-39546284. Based on observations made with the NASA/ESA Hubble Space Telescope, programs GO-12099, 12177, and 12547, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
2013ApJ...765L...2B
### The Bright End of the Ultraviolet Luminosity Function at z ~ 8: New Constraints from CANDELS Data in GOODS-South
Oesch, P. A., Bouwens, R. J., Illingworth, G. D., Gonzalez, V., Trenti, M., van Dokkum, P. G., Franx, M., Labbé, I., Carollo, C. M., and Magee, D.
We present new z ~ 8 galaxy candidates from a search over ~95 arcmin^2 of WFC3/IR data, tripling the previous search area for bright z ~ 8 galaxies. Our analysis uses newly acquired WFC3/IR imaging data from the CANDELS Multi-Cycle Treasury program over the GOODS-South field. These new data are combined with existing deep optical Advanced Camera for Surveys (ACS) imaging to search for relatively bright (M_UV < -19.5 mag) z ~ 8 galaxy candidates using the Lyman break technique. These new candidates are used to determine the bright end of the UV luminosity function (LF) of star-forming galaxies at z ~ 7.2-8.7, i.e., a cosmic age of 600 ± 80 Myr. To minimize contamination from lower redshift galaxies, we make full use of all optical ACS data and impose strict non-detection criteria based on an optical χ²_opt flux measurement. In the whole search area, we identify 16 candidate z ~ 8 galaxies, spanning a magnitude range H_160,AB = 25.7-27.9 mag. The new data show that the UV LF is a factor ~1.7 lower at M_UV < -19.5 mag than determined from the HUDF09 and Early Release Science (ERS) data alone. Combining this new sample with the previous candidates from the HUDF09 and ERS data allows us to perform the most accurate measurement of the z ~ 8 UV LF yet. Schechter function fits to the combined data result in a best-fit characteristic magnitude of M*(z = 8) = -20.04 ± 0.46 mag. The faint-end slope is very steep, though quite uncertain, with α = -2.06 ± 0.32. A combination of wide-area data with additional ultra-deep imaging will be required to significantly reduce the uncertainties on these parameters in the future. Based on data obtained with the Hubble Space Telescope operated by AURA, Inc. for NASA under contract NAS5-26555.
2012ApJ...759..135O
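The Schechter parametrization quoted above (M*(z = 8) = -20.04 ± 0.46 mag, α = -2.06 ± 0.32) is easy to evaluate numerically. A minimal sketch, assuming the standard magnitude form of the Schechter function; the normalization phi_star below is a placeholder, not a value from the abstract:

```python
import numpy as np

def schechter_mag(M, phi_star, M_star, alpha):
    # Schechter luminosity function expressed in absolute magnitudes:
    # phi(M) = 0.4 ln(10) * phi_star * x^(alpha + 1) * exp(-x), with x = 10^(-0.4 (M - M_star))
    x = 10.0 ** (-0.4 * (M - M_star))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# z ~ 8 best-fit values quoted above; phi_star = 1e-3 Mpc^-3 is an assumed placeholder
for M_UV in np.linspace(-22, -18, 5):
    print(M_UV, schechter_mag(M_UV, phi_star=1e-3, M_star=-20.04, alpha=-2.06))
```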
### The Spectral Energy Distributions Of z~8 Galaxies From The IRAC Ultra Deep Fields: Emission Lines, Stellar Masses, And Specific Star Formation Rates At 650 Myr
Labbe, I., Oesch, P. A., Bouwens, R. J., Illingworth, G. D., Magee, D., Gonzalez, V., Carollo, C. M., Franx, M., Trenti, M., van Dokkum, P. G., and Stiavelli, M.
Using new ultradeep Spitzer/IRAC photometry from the IRAC Ultradeep Field program (IUDF), we investigate the stellar populations of a sample of 63 Y-dropout galaxy candidates at z~8, only 650Myr after the Big Bang. The sources are selected from HST/ACS+WFC3/IR data over the Hubble Ultra Deep Field (HUDF), two HUDF parallel fields, and wide area data over the CANDELS/GOODS-South. The new Spitzer/IRAC data increase the coverage at 3.6 micron and 4.5 micron to ~120h over the HUDF reaching depths of ~28 (AB,1 sigma). The improved depth and inclusion of brighter candidates result in direct >3 sigma IRAC detections of 20/63 sources, of which 11/63 are detected at > 5 sigma. The average [3.6]-[4.5] colors of IRAC detected galaxies at z~8 are markedly redder than those at z~7, observed only 130Myr later. The simplest explanation is that we witness strong rest-frame optical emission lines (in particular [OIII]4959,5007+Hbeta) moving through the IRAC bandpasses with redshift. Assuming that the average rest-frame spectrum is the same at both z~7 and z~8 we estimate a rest-frame equivalent width of W([OIII]4959,5007+Hbeta) = 670 (+260,-170) Angstrom contributing 0.56 (+0.16,-0.11) mag to the 4.5 micron filter at z~8. The corresponding W(Halpha) = 430 (+160,-110) Angstrom implies an average specific star formation rate of sSFR = 11 (+11,-5) Gyr^-1 and a stellar population age of 100 (+100,-50) Myr. Correcting the spectral energy distribution for the contribution of emission lines lowers the average best-fit stellar masses and mass-to-light ratios by x3, decreasing the integrated stellar mass density to rho*(z=8,MUV<-18)=0.6 (+0.4,-0.3) x 10^6 Msun Mpc^-3.
2012arXiv1209.3037L
### The Star Formation Rate Function for Redshift z ~ 4-7 Galaxies: Evidence for a Uniform Buildup of Star-forming Galaxies during the First 3 Gyr of Cosmic Time
Smit, R., Bouwens, R. J., Franx, M., Illingworth, G. D., Labbé, I., Oesch, P. A., and van Dokkum, P. G.
We combine recent estimates of dust extinction at z ~ 4-7 with UV luminosity function (LF) determinations to derive star formation rate (SFR) functions at z ~ 4-7. SFR functions provide a more physical description of galaxy buildup at high redshift and allow for direct comparisons to SFRs at lower redshifts determined by a variety of techniques. Our SFR functions are derived from well-established z ~ 4-7 UV LFs, UV-continuum slope trends with redshift and luminosity, and infrared excess (IRX)-β relations. They are well described by Schechter relations. We extend the comparison baseline for SFR functions to z ~ 2 by considering recent determinations of the Hα and mid-IR LFs. The low-end slopes of the SFR functions are flatter than for the UV LFs, Δα ~ +0.13, and show no clear evolution with cosmic time (z ~ 0-7). In addition, we find that the characteristic value SFR* from the Schechter fit to the SFR function exhibits consistent, and substantial, linear growth as a function of redshift from ~5 M_sun yr^-1 at z ~ 8, 650 Myr after the big bang, to ~100 M_sun yr^-1 at z ~ 2, ~2.5 Gyr later. Recent results at z ~ 10, close to the onset of galaxy formation, are consistent with this trend. The uniformity of this evolution is even greater than seen in the UV LF over the redshift range z ~ 2-8, providing validation for our dust corrections. These results provide strong evidence that galaxies build up uniformly over the first 3 Gyr of cosmic time.
2012ApJ...756...14S
### UV-continuum Slopes at z ~ 4-7 from the HUDF09+ERS+CANDELS Observations: Discovery of a Well-defined UV Color-Magnitude Relationship for z >= 4 Star-forming Galaxies
Bouwens, R. J., Illingworth, G. D., Oesch, P. A., Franx, M., Labbé, I., Trenti, M., van Dokkum, P., Carollo, C. M., González, V., Smit, R., and Magee, D.
Ultra-deep Advanced Camera for Surveys (ACS) and WFC3/IR HUDF+HUDF09 data, along with the wide-area GOODS+ERS+CANDELS data over the CDF-S GOODS field, are used to measure UV colors, expressed as the UV-continuum slope β, of star-forming galaxies over a wide range of luminosity (0.1L* z = 3 to 2L* z = 3) at high redshift (z ~ 7 to z ~ 4). β is measured using all ACS and WFC3/IR passbands uncontaminated by Lyα and spectral breaks. Extensive tests show that our β measurements are only subject to minimal biases. Using a different selection procedure, Dunlop et al. recently found large biases in their β measurements. To reconcile these different results, we simulated both approaches and found that β measurements for faint sources are subject to large biases if the same passbands are used both to select the sources and to measure β. High-redshift galaxies show a well-defined rest-frame UV color-magnitude (CM) relationship that becomes systematically bluer toward fainter UV luminosities. No evolution is seen in the slope of the UV CM relationship in the first 1.5 Gyr, though there is a small evolution in the zero point to redder colors from z ~ 7 to z ~ 4. This suggests that galaxies are evolving along a well-defined sequence in the L UV-color (β) plane (a "star-forming sequence"?). Dust appears to be the principal factor driving changes in the UV color β with luminosity. These new larger β samples lead to improved dust extinction estimates at z ~ 4-7 and confirm that the extinction is essentially zero at low luminosities and high redshifts. Inclusion of the new dust extinction results leads to (1) excellent agreement between the star formation rate (SFR) density at z ~ 4-8 and that inferred from the stellar mass density; and (2) to higher specific star formation rates (SSFRs) at z >~ 4, suggesting that the SSFR may evolve modestly (by factors of ~2) from z ~ 4-7 to z ~ 2. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs 11563 and 9797.
2012ApJ...754...83B
### Lower-luminosity Galaxies Could Reionize the Universe: Very Steep Faint-end Slopes to the UV Luminosity Functions at z >= 5-8 from the HUDF09 WFC3/IR Observations
Bouwens, R. J., Illingworth, G. D., Oesch, P. A., Trenti, M., Labbé, I., Franx, M., Stiavelli, M., Carollo, C. M., van Dokkum, P., and Magee, D.
The HUDF09 data are the deepest near-IR observations ever, reaching to 29.5 mag. Luminosity functions (LFs) from these new HUDF09 data for 132 z ~ 7 and z ~ 8 galaxies are combined with new LFs for z ~ 5-6 galaxies and the earlier z ~ 4 LF to reach to very faint limits (<0.05 L* z = 3). The faint-end slopes α are steep: -1.79 ± 0.12 (z ~ 5), -1.73 ± 0.20 (z ~ 6), -2.01 ± 0.21 (z ~ 7), and -1.91 ± 0.32 (z ~ 8). Slopes α <~ -2 lead to formally divergent UV fluxes, though galaxies are not expected to form below ~ -10 AB mag. These results have important implications for reionization. The weighted mean slope at z ~ 6-8 is -1.87 ± 0.13. For such steep slopes, and a faint-end limit of -10 AB mag, galaxies provide a very large UV ionizing photon flux. While current results show that galaxies can reionize the universe by z ~ 6, matching the Thomson optical depths is more challenging. Extrapolating the current LF evolution to z > 8, taking α to be -1.87 ± 0.13 (the mean value at z ~ 6-8), and adopting typical parameters, we derive Thomson optical depths of 0.061^{+0.009}_{-0.006}. However, this result will change if the faint-end slope α is not constant with redshift. We test this hypothesis and find a weak, though uncertain, trend to steeper slopes at earlier times (dα/dz ~ -0.05 ± 0.04) that would increase the Thomson optical depths to 0.079^{+0.063}_{-0.017}, consistent with recent WMAP estimates (τ = 0.088 ± 0.015). It may thus not be necessary to resort to extreme assumptions about the escape fraction or clumping factor. Nevertheless, the uncertainties remain large. Deeper WFC3/IR+ACS observations can further constrain the UV ionizing flux from faint galaxies. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
2012ApJ...752L...5B
### Through the Looking Glass: Bright, Highly Magnified Galaxy Candidates at z ~ 7 behind A1703
Bradley, L. D., Bouwens, R. J., Zitrin, A., Smit, R., Coe, D., Ford, H. C., Zheng, W., Illingworth, G. D., Benítez, N., and Broadhurst, T. J.
We report the discovery of seven strongly lensed Lyman-break galaxy (LBG) candidates at z ~ 7 detected in Hubble Space Telescope Wide Field Camera 3 (WFC3) imaging of A1703. The brightest candidate, called A1703-zD1, has an observed (lensed) magnitude of 24.0 AB (26σ) in the WFC3/IR F160W band, making it 0.2 mag brighter than the z 850-dropout candidate recently reported behind the Bullet Cluster and 0.7 mag brighter than the previously brightest known z ~ 7.6 galaxy, A1689-zD1. With a cluster magnification of ~9, this source has an intrinsic magnitude of H 160 = 26.4 AB, a strong z 850 - J 125 break of 1.7 mag, and a photometric redshift of z ~ 6.7. Additionally, we find six other bright LBG candidates with H 160-band magnitudes of 24.9-26.4, photometric redshifts z ~ 6.4 - 8.8, and magnifications μ ~ 3-40. Stellar population fits to the Advanced Camera for Surveys, WFC3/IR, and Spitzer/Infrared Array Camera data for A1703-zD1 and A1703-zD4 yield stellar masses (0.7 - 3.0) × 10^9 M_sun, stellar ages 5-180 Myr, and star formation rates ~7.8 M_sun yr^-1, and low reddening with A_V <= 0.7. The source-plane reconstruction of the exceptionally bright candidate A1703-zD1 exhibits an extended structure, spanning ~4 kpc in the z ~ 6.7 source plane, and shows three resolved star-forming knots of radius r ~ 0.4 kpc. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy under NASA contract NAS5-26555. Based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407.
2012ApJ...747....3B
### Expanded Search for z ~ 10 Galaxies from HUDF09, ERS, and CANDELS Data: Evidence for Accelerated Evolution at z > 8?
Oesch, P. A., Bouwens, R. J., Illingworth, G. D., Labbé, I., Trenti, M., Gonzalez, V., Carollo, C. M., Franx, M., van Dokkum, P. G., and Magee, D.
We search for z ~ 10 galaxies over ~160 arcmin2 of Wide-Field Camera 3 (WFC3)/IR data in the Chandra Deep Field South, using the public HUDF09, Early Release Science, and CANDELS surveys, that reach to 5σ depths ranging from 26.9 to 29.4 in H 160 AB mag. z >~ 9.5 galaxy candidates are identified via J 125 - H 160 > 1.2 colors and non-detections in any band blueward of J 125. Spitzer Infrared Array Camera (IRAC) photometry is key for separating the genuine high-z candidates from intermediate-redshift (z ~ 2-4) galaxies with evolved or heavily dust obscured stellar populations. After removing 16 sources of intermediate brightness (H 160 ~ 24-26 mag) with strong IRAC detections, we only find one plausible z ~ 10 galaxy candidate in the whole data set, previously reported in Bouwens et al.. The newer data cover a 3 × larger area and provide much stronger constraints on the evolution of the UV luminosity function (LF). If the evolution of the z ~ 4-8 LFs is extrapolated to z ~ 10, six z ~ 10 galaxies are expected in our data. The detection of only one source suggests that the UV LF evolves at an accelerated rate before z ~ 8. The luminosity density is found to increase by more than an order of magnitude in only 170 Myr from z ~ 10 to z ~ 8. This increase is >=4 × larger than expected from the lower redshift extrapolation of the UV LF. We are thus likely witnessing the first rapid buildup of galaxies in the heart of cosmic reionization. Future deep Hubble Space Telescope WFC3/IR data, reaching to well beyond 29 mag, can enable a more robust quantification of the accelerated evolution around z ~ 10. Based on data obtained with the Hubble Space Telescope operated by AURA, Inc., for NASA under contract NAS5-26555. Partially based on observations made with the Spitzer Space Telescope, operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA.
2012ApJ...745..110O
### Ultraviolet Luminosity Functions from 132 z ~ 7 and z ~ 8 Lyman-break Galaxies in the Ultra-deep HUDF09 and Wide-area Early Release Science WFC3/IR Observations
Bouwens, R. J., Illingworth, G. D., Oesch, P. A., Labbé, I., Trenti, M., van Dokkum, P., Franx, M., Stiavelli, M., Carollo, C. M., Magee, D., and Gonzalez, V.
We identify 73 z ~ 7 and 59 z ~ 8 candidate galaxies in the reionization epoch, and use this large 26-29.4 AB mag sample of galaxies to derive very deep luminosity functions to < - 18 AB mag and the star formation rate (SFR) density at z ~ 7 and z ~ 8 (just 800 Myr and 650 Myr after recombination, respectively). The galaxy sample is derived using a sophisticated Lyman-break technique on the full two-year Wide Field Camera 3/infrared (WFC3/IR) and Advanced Camera for Surveys (ACS) data available over the HUDF09 (~29.4 AB mag, 5σ), two nearby HUDF09 fields (~29 AB mag, 5σ, 14 arcmin2), and the wider area Early Release Science (~27.5 AB mag, 5σ, ~40 arcmin2). The application of strict optical non-detection criteria ensures the contamination fraction is kept low (just ~7% in the HUDF). This very low value includes a full assessment of the contamination from lower redshift sources, photometric scatter, active galactic nuclei, spurious sources, low-mass stars, and transients (e.g., supernovae). From careful modeling of the selection volumes for each of our search fields, we derive luminosity functions for galaxies at z ~ 7 and z ~ 8 to < - 18 AB mag. The faint-end slopes α at z ~ 7 and z ~ 8 are uncertain but very steep at α = -2.01 ± 0.21 and α = -1.91 ± 0.32, respectively. Such steep slopes contrast to the local α >~ -1.4 and may even be steeper than that at z ~ 4 where α = -1.73 ± 0.05. With such steep slopes (α <~ -1.7) lower luminosity galaxies dominate the galaxy luminosity density during the epoch of reionization. The SFR densities derived from these new z ~ 7 and z ~ 8 luminosity functions are consistent with the trends found at later times (lower redshifts). We find reasonable consistency with the SFR densities implied from reported stellar mass densities being only ~40% higher at z < 7. This suggests that (1) the stellar mass densities inferred from the Spitzer Infrared Array Camera (IRAC) photometry are reasonably accurate and (2) that the initial mass function at very high redshift may not be very different from that at later times. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs 11563, 9797, and 10632.
2011ApJ...737...90B
### Evolution of Galaxy Stellar Mass Functions, Mass Densities, and Mass-to-light Ratios from z ~ 7 to z ~ 4
González, V., Labbé, I., Bouwens, R. J., Illingworth, G., Franx, M., and Kriek, M.
We derive stellar masses from spectral energy distribution fitting to rest-frame optical and UV fluxes for 401 star-forming galaxies at z ~ 4, 5, and 6 from Hubble-WFC3/IR camera observations of the Early Release Science field combined with the deep GOODS-S Spitzer/IRAC data (and include a previously published z ~ 7 sample). A mass-luminosity relation with strongly luminosity-dependent M/L_UV ratios is found for the largest sample (299 galaxies) at z ~ 4. The relation M ∝ L_UV,1500^(1.7 ± 0.2) has a well-determined intrinsic sample variance of 0.5 dex. This relation is also consistent with the more limited samples at z ~ 5-7. This z ~ 4 mass-luminosity relation, and the well-established faint UV-luminosity functions at z ~ 4-7, are used to derive galaxy mass functions (MFs) to masses M ~ 10^8 M_sun at z ~ 4-7. A bootstrap approach is used to derive the MFs to account for the large scatter in the M–L_UV relation and the luminosity function uncertainties, along with an analytical cross-check. The MFs are also corrected for the effects of incompleteness. The incompleteness-corrected MFs are steeper than previously found, with slopes α_M ~ -1.4 to -1.6 at low masses. These slopes are, however, still substantially flatter than the MFs obtained from recent hydrodynamical simulations. We use these MFs to estimate the stellar mass density (SMD) of the universe to a fixed M UV, AB < -18 as a function of redshift and find an SMD growth ∝ (1 + z)^(-3.4 ± 0.8) from z ~ 7 to z ~ 4. We also derive the SMD from the completeness-corrected MFs to a mass limit M ~ 10^8 M_sun. Such completeness-corrected MFs and the derived SMDs will be particularly important for comparisons as future MFs reach to lower masses.
2011ApJ...735L..34G
### A candidate redshift z~10 galaxy and rapid changes in that population at an age of 500Myr
Bouwens, R. J., Illingworth, G. D., Labbe, I., Oesch, P. A., Trenti, M., Carollo, C. M., van Dokkum, P. G., Franx, M., Stiavelli, M., González, V., Magee, D., and Bradley, L.
Searches for very-high-redshift galaxies over the past decade have yielded a large sample of more than 6,000 galaxies existing just 900-2,000million years (Myr) after the Big Bang (redshifts 6>z>3 ref. 1). The Hubble Ultra Deep Field (HUDF09) data have yielded the first reliable detections of z~8 galaxies that, together with reports of a γ-ray burst at z~8.2 (refs 10, 11), constitute the earliest objects reliably reported to date. Observations of z~7-8 galaxies suggest substantial star formation at z>9-10 (refs 12, 13). Here we use the full two-year HUDF09 data to conduct an ultra-deep search for z~10 galaxies in the heart of the reionization epoch, only 500Myr after the Big Bang. Not only do we find one possible z~10 galaxy candidate, but we show that, regardless of source detections, the star formation rate density is much smaller (~10%) at this time than it is just ~200Myr later at z~8. This demonstrates how rapid galaxy build-up was at z~10, as galaxies increased in both luminosity density and volume density from z~10 to z~8. The 100-200Myr before z~10 is clearly a crucial phase in the assembly of the earliest galaxies.
2011Natur.469..504B
### The Evolution of the Ultraviolet Luminosity Function from z ~ 0.75 to z ~ 2.5 Using HST ERS WFC3/UVIS Observations
Oesch, P. A., Bouwens, R. J., Carollo, C. M., Illingworth, G. D., Magee, D., Trenti, M., Stiavelli, M., Franx, M., Labbé, I., and van Dokkum, P. G.
We present UV luminosity functions (LFs) at 1500 Å derived from the Hubble Space Telescope Early Release Science WFC3/UVIS data acquired over ~50 arcmin2 of the GOODS-South field. The LFs are determined over the entire redshift range z = 0.75-2.5 using two methods, similar to those used at higher redshifts for Lyman break galaxies (LBGs): (1) 13 band UV+optical+NIR photometric redshifts to study galaxies in the range z = 0.5-2 in three bins of dz = 0.5 and (2) dropout samples in three redshift windows centered at z ~ 1.5, z ~ 1.9, and z ~ 2.5. The characteristic luminosity dims by 1.5 mag from z = 2.5 to z = 0.75, consistent with earlier work. However, the other Schechter function parameters, the faint-end slope and the number density, are found to be remarkably constant over the range z = 0.75-2.5. Using these LF determinations, we find the UV luminosity density to increase by ~1.4 dex according to (1 + z)^(2.58 ± 0.15) from z ~ 0 to its peak at z ~ 2.5. Strikingly, the inferred faint-end slopes for our LFs are all steeper than α = -1.5, in agreement with higher-redshift LBG studies. Since the faint-end slope in the local universe is found to be much flatter with α ≈ -1.2, this poses the question as to when and how the expected flattening occurs. Despite relatively large uncertainties, our data suggest α ≈ -1.7 at least down to z ~ 1. These new results from such a shallow early data set demonstrate very clearly the remarkable potential of WFC3/UVIS for the thorough characterization of galaxy evolution over the full redshift range z ~ 0.5 to z ~ 3. Based on data obtained with the Hubble Space Telescope operated by AURA, Inc., for NASA under contract NAS5-26555.
2010ApJ...725L.150O
### z ~ 7 Galaxy Candidates from NICMOS Observations Over the HDF-South and the CDF-South and HDF-North Goods Fields
Bouwens, R. J., Illingworth, G. D., González, V., Labbé, I., Franx, M., Conselice, C. J., Blakeslee, J., van Dokkum, P., Holden, B., Magee, D., Marchesini, D., and Zheng, W.
We use ~88 arcmin2 of deep (≳26.5 mag at 5σ) NICMOS data over the two GOODS fields and the HDF-South to conduct a search for bright z >~ 7 galaxy candidates. This search takes advantage of an efficient preselection over 58 arcmin2 of NICMOS H 160-band data where only plausible z >~ 7 candidates are followed up with NICMOS J 110-band observations. ~248 arcmin2 of deep ground-based near-infrared data (≳25.5 mag, 5σ) are also considered in the search. In total, we report 15 z 850-dropout candidates over this area—7 of which are new to these search fields. Two possible z ~ 9 J 110-dropout candidates are also found, but seem unlikely to correspond to z ~ 9 galaxies (given the estimated contamination levels). The present z ~ 9 search is used to set upper limits on the prevalence of such sources. Rigorous testing is undertaken to establish the level of contamination of our selections by photometric scatter, low-mass stars, supernovae, and spurious sources. The estimated contamination rate of our z ~ 7 selection is ~24%. Through careful simulations, the effective volume available to our z >~ 7 selections is estimated and used to establish constraints on the volume density of luminous (L* z = 3, or ~-21 mag) galaxies from these searches. We find that the volume density of luminous star-forming galaxies at z ~ 7 is 13^{+8}_{-5} times lower than at z ~ 4 and >25 times lower (1σ) at z ~ 9 than at z ~ 4. This is the most stringent constraint yet available on the volume density of ≳L* z = 3 galaxies at z ~ 9. The present wide-area, multi-field search limits cosmic variance to ≲20%. The evolution we find at the bright end of the UV LF is similar to that found from recent Subaru Suprime-Cam, HAWK-I or ERS WFC3/IR searches. The present paper also includes a complete summary of our final z ~ 7 z 850-dropout sample (18 candidates) identified from all NICMOS observations to date (over the two GOODS fields, the HUDF, galaxy clusters). Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs #7235, 7817, 9425, 9575, 9723, 9797, 9803, 9978, 9979, 10189, 10339, 10340, 10403, 10530, 10632, 10872, 11082, 11144, and 11192. Observations have been carried out using the Very Large Telescope at the ESO Paranal Observatory under Program ID(s): LP168.A-0485.
2010ApJ...725.1587B
### Star Formation Rates and Stellar Masses of z = 7-8 Galaxies from IRAC Observations of the WFC3/IR Early Release Science and the HUDF Fields
Labbé, I., González, V., Bouwens, R. J., Illingworth, G. D., Franx, M., Trenti, M., Oesch, P. A., van Dokkum, P. G., Stiavelli, M., Carollo, C. M., Kriek, M., and Magee, D.
We investigate the Spitzer/IRAC properties of 36 z ~ 7 z 850-dropout galaxies and three z ~ 8 Y 098 galaxies derived from deep/wide-area WFC3/IR data of the Early Release Science, the ultradeep HUDF09, and wide-area NICMOS data. We fit stellar population synthesis models to the spectral energy distributions to derive mean redshifts, stellar masses, and ages. The z ~ 7 galaxies are best characterized by substantial ages (>100 Myr) and M/L_V ≈ 0.2. The main trend with decreasing luminosity is that of bluing of the far-UV slope from β ~ -2.0 to β ~ -3.0. This can be explained by decreasing metallicity, except for the lowest luminosity galaxies (0.1L* z = 3), where low metallicity and smooth star formation histories (SFHs) fail to match the blue far-UV and moderately red H - [3.6] color. Such colors may require episodic SFHs with short periods of activity and quiescence ("on-off" cycles) and/or a contribution from emission lines. The stellar mass of our sample of z ~ 7 star-forming galaxies correlates with star formation rate (SFR) according to log M* = 8.70(±0.09) + 1.06(±0.10)log SFR, implying that star formation may have commenced at z > 10. No galaxies are found with SFRs much higher or lower than the past averaged SFR suggesting that the typical star formation timescales are probably a substantial fraction of the Hubble time. We report the first IRAC detection of Y 098-dropout galaxies at z ~ 8. The average rest-frame U - V ≈ 0.3 (AB) of the three galaxies are similar to faint z ~ 7 galaxies, implying similar M/L. The stellar mass density to M UV,AB < -18 is ρ*(z = 8) = 1.8^{+0.7}_{-1.0} × 10^6 M_sun Mpc^-3, following log ρ*(z) = 10.6(±0.6) - 4.4(±0.7) log(1 + z) [M_sun Mpc^-3] over 3 < z < 8. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with programs 11563, 9797. Based on observations with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under NASA contract 1407. Support for this work was provided by NASA through contract 125790 issued by JPL/Caltech. Based on service mode observations collected at the European Southern Observatory, Paranal, Chile (ESO Program 073.A-0764A). Based on data gathered with the 6.5 m Magellan Telescopes located at Las Campanas Observatory, Chile.
2010ApJ...716L.103L
https://dsp.stackexchange.com/questions/73564/discussion-of-simulation-results-of-a-signal-sequence-amplitude | # Discussion of simulation results of a signal sequence: amplitude
I need your help in understanding if the simulation result is correct.
I have simulated a random BPSK sequence that was upsampled and filtered with a raised cosine filter. The length of the filter is 150 taps, the sampling frequency is 16, the cut-off frequency is 0.5, and the transition bandwidth is 0.2. The noise was simulated with EbN0 = 10 dB, with the following noise power:
SignalEnergy = (trapz(abs(filtered_signal_tx).^2))*(1/Fs); % signal energy by numerical integration
Eb = SignalEnergy/(2*Nb);                                  % energy per bit (Nb bits)
N0 = Eb./(10.^(EbN0/10));                                  % noise spectral density from Eb/N0 in dB
NoisePower = 2*N0*Fs;                                      % noise power across the sampling bandwidth
The signal has a phase offset (shift): exp(1j*(2*pi*t+phase_offset));
As a result, I have got a signal, which has a small amplitude. It is less than 0.08...
I don't know if it is wrong or correct?!
What do you think?
• Oversampling by zero-insertion means adding zero-valued samples between the existing samples, so the result has a smaller amplitude. If your oversampling factor is 16, then 1/16 = 0.0625, which is about what I see there. You need to compensate with gain. Mar 2 at 18:26
• @aconcernedcitizen how to compensate? What do you mean? Mar 3 at 7:11
• If you're losing gain due to oversampling, then you need to add gain to compensate. You need to set the gain proportional to the oversampling rate. Mar 3 at 10:30
• @aconcernedcitizen can I normalise the filtered signal: 'filtered_signal = filtered_signal * 10'? Mar 9 at 15:40
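A minimal NumPy/SciPy sketch of the effect the commenters describe (the question's snippet is MATLAB; the variable names here are illustrative). Zero-insertion by a factor L followed by a unity-gain lowpass attenuates the waveform by roughly L, so the filter taps (or the output) are scaled by L to compensate:

```python
import numpy as np
from scipy.signal import firwin, upfirdn

L = 16                                               # upsampling (zero-insertion) factor
symbols = np.random.choice([-1.0, 1.0], size=1000)   # random BPSK

# Unity-DC-gain lowpass interpolation filter (a plain FIR here, not a true
# raised cosine -- the amplitude argument is the same).
taps = firwin(numtaps=151, cutoff=1.0 / L)

up = upfirdn(taps, symbols, up=L)                    # zero-insert by L, then filter
print("uncompensated amplitude ~", np.max(np.abs(up)))   # roughly 1/L = 0.0625

# Zero-insertion spreads each symbol's energy over L samples, so the filtered
# waveform is attenuated by ~L. Scaling the taps by L restores unit amplitude:
up_comp = upfirdn(L * taps, symbols, up=L)
print("compensated amplitude   ~", np.max(np.abs(up_comp)))  # roughly 1
```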
Phase offset and time delay (time offset) are NOT the same and I think this may be a primary source of confusion (Given the OP's other recent posts dealing only with time offset but here the formula used introduces a carrier phase offset). Hopefully the following diagrams help distinguish the two.
No Time or Phase Error
Below is the constellation diagram and eye diagram for the (raised cosine) QPSK waveform in the receiver after the matched filter, with no phase or time offset and showing the decision locations in red:
The eye diagram for the real component of the signal is shown below and should be clear that the imaginary component would look identical:
The blue circles are samples with 4 samples per symbol, one of which is at the ideal sampling location (at sample numbers 3 and 7 specifically; this eye diagram spans two symbols).
TIME OFFSET
Next a small time offset (timing error) of about 1/3 of a sample is introduced to show the effect of time offset alone.
PHASE OFFSET
With the time offset back to 0, a phase offset of $$\pi/10$$ is introduced, showing its effect on the constellation and eye diagram.
Conclusion
Phase Offset and Time Offset are two different parameters that each independently need to be corrected (one can exist without the other). Further, the phase offset typically changes with time; when it changes at a linear rate, that corresponds to a static carrier frequency offset, since frequency is the derivative of phase with time. Phase/frequency offset is corrected with a carrier recovery loop, while time offset is corrected with a timing recovery loop.
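For reference, a minimal sketch (not the actual code behind the figures above) of how the two impairments are applied to a complex baseband signal: a phase offset is a constant rotation of every sample, while a fractional time offset can be applied as a linear phase across frequency. The pulse-shaping filter below is a placeholder sinc kernel, not the RRC discussed above:

```python
import numpy as np

fs_per_sym = 4
rng = np.random.default_rng(0)
syms = (rng.choice([-1, 1], 400) + 1j * rng.choice([-1, 1], 400)) / np.sqrt(2)  # QPSK

# crude pulse shaping: zero-insert then filter (placeholder interpolation kernel)
x = np.zeros(len(syms) * fs_per_sym, complex)
x[::fs_per_sym] = syms
h = np.sinc(np.arange(-20, 21) / fs_per_sym)
x = np.convolve(x, h, mode="same")

# phase offset: rotates every sample by the same angle (constellation spins)
x_phase = x * np.exp(1j * np.pi / 10)

# time offset: delay by tau samples via the FFT (linear phase vs. frequency);
# sampling x_time at the old symbol instants now shows inter-symbol interference
tau = 1 / 3
f = np.fft.fftfreq(len(x))
x_time = np.fft.ifft(np.fft.fft(x) * np.exp(-2j * np.pi * f * tau))
```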
• How did you design the (raised cosine) QPSK waveform in the receiver? Did you generate a random sequence of 0s and 1s (1 or -1), modulate them, upsample, filter, and then add the phase offset? How does your implementation look? Mar 9 at 15:43
• @Ali23 I generate them in Python the way you described: zero insertion, with the RRC filter as the interpolation filter. Mar 9 at 17:15
http://codeforces.com/problemset/problem/856/C | C. Eleventh Birthday
time limit per test: 2 seconds
memory limit per test: 512 megabytes
input: standard input
output: standard output
It is Borya's eleventh birthday, and he has got a great present: n cards with numbers. The i-th card has the number ai written on it. Borya wants to put his cards in a row to get one greater number. For example, if Borya has cards with numbers 1, 31, and 12, and he puts them in a row in this order, he would get a number 13112.
He is only 11, but he already knows that there are n! ways to put his cards in a row. But today is a special day, so he is only interested in such ways that the resulting big number is divisible by eleven. So, the way from the previous paragraph is good, because 13112 = 1192 × 11, but if he puts the cards in the following order: 31, 1, 12, he would get a number 31112, it is not divisible by 11, so this way is not good for Borya. Help Borya to find out how many good ways to put the cards are there.
Borya considers all cards different, even if some of them contain the same number. For example, if Borya has two cards with 1 on it, there are two good ways.
Help Borya, find the number of good ways to put the cards. This number can be large, so output it modulo 998244353.
Input
Input data contains multiple test cases. The first line of the input data contains an integer t — the number of test cases (1 ≤ t ≤ 100). The descriptions of test cases follow.
Each test is described by two lines.
The first line contains an integer n (1 ≤ n ≤ 2000) — the number of cards in Borya's present.
The second line contains n integers ai (1 ≤ ai ≤ 109) — numbers written on the cards.
It is guaranteed that the total number of cards in all tests of one input data doesn't exceed 2000.
Output
For each test case output one line: the number of ways to put the cards on the table so that the resulting big number is divisible by 11; print the number modulo 998244353.
Example
Input
4
2
1 1
3
1 31 12
3
12345 67 84
9
1 2 3 4 5 6 7 8 9
Output
2
2
2
31680
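For small n the sample can be checked directly by brute force over all n! orderings (a sanity check only; the intended solution counts orderings with a remainder-based DP rather than enumerating permutations):

```python
from itertools import permutations

def count_good(cards):
    # brute force: try every ordering and test divisibility of the concatenation by 11
    return sum(
        int("".join(map(str, p))) % 11 == 0
        for p in permutations(cards)
    )

for cards in ([1, 1], [1, 31, 12], [12345, 67, 84], [1, 2, 3, 4, 5, 6, 7, 8, 9]):
    print(count_good(cards))
# prints 2, 2, 2, 31680 -- matching the sample output above; note that 9! permutations
# is already slow, which is why the real solution cannot enumerate orderings.
```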
https://exoplanetmusings.wordpress.com/2013/03/25/764/ | # The Rossiter-McLaughlin Effect
A retrograde hot Jupiter (Credit: ESO)
It is a convenient fact that stars spin in space – a result of angular momentum imparted upon them from the collapse of the progenitor clouds from which they form. This fact, while easily taken for granted, can permit a great deal of information to be gained about surrounding planets whose orbits are fortuitously oriented to cause them to transit.
For the case of a rotating star, one hemisphere of the star will be approaching the observer, and the opposite hemisphere will be receding. The approaching hemisphere is therefore blueshifted, and the receding hemisphere is redshifted. Because the stellar spectrum will accordingly be blueshifted and redshifted, the spectral lines of a rotating star appear widened or broadened. This spectral line broadening is directly proportional to the star's rotational velocity, and the angle between the line of sight and the stellar rotation axis. Much like with Doppler spectroscopic detection of planets, the constraints you can set on the rotational velocity of a star are only a lower limit because the rotation axis is unknown. Indeed, for a rapidly rotating star viewed pole-on (i = 0°), there will be no observed spectral line broadening because all of the surface rotation is perpendicular to the line of sight. Because of this inclination degeneracy, the rotational velocity of the star is represented in terms of v sin i.
The redshifting of the receding hemisphere is typically balanced by the blueshifting of the approaching hemisphere, and so there is no net change in the colour of the star. However the transit of a planet will cause an imbalance in this hemispheric Doppler shifting. For a planet in an equatorial, prograde transiting orbit the planet will first cover part of the approaching hemisphere. The Doppler shifting of the rotating stellar surface is no longer balanced and the stellar spectrum has a net redshift. This anomaly disappears as the planet moves to the centre of the stellar disc, as the redshifted and blueshifted hemispheres are both equally apparent again. Then as the planet moves over the receding hemisphere, there is a net blueshift in the spectrum as some of the redshifted light is occulted.
This radial velocity anomaly is known as the Rossiter-McLaughlin effect. It can be used to measure the projected stellar spin-orbit alignment angle (the angle between the spin axis of the star, and the orbital axis of the star+planet orbit). It can be thought of as the projected obliquity of the star relative to the planet orbit, or the inclination of the planetary orbit relative to the stellar equator. For an equatorial orbit (spin-orbit alignment), λ = 0°, and for a polar orbit (spin-orbit misalignment), λ = 90°.
The Rossiter-McLaughlin Effect
As we can see in the diagram above, the RM effect will have a different RV shape depending on the orientation of the planet’s orbit axis relative to the spin axis of the star. If the orbit is nearly polar and the planet transits across only one of the approaching or receding hemispheres, then the RM effect will be entirely blueshifted (if the planet crosses the receding hemisphere as is shown in the third example above) or redshifted (if the planet crosses the approaching hemisphere).
The amplitude of the RM effect, KR, relative to the amplitude of the star’s reflex velocity due to the influence of the planet over the course of the orbit, KO, can be estimated by
$\displaystyle \frac{K_R}{K_O} \approx 0.3\left(\frac{M}{M_J}\right)^{-1/3}\left(\frac{P}{3\text{ days}}\right)^{1/3}\left(\frac{v \sin i}{5\text{ km s}^{-1}}\right)$
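A quick sanity check of this scaling, evaluating the quoted formula as written (the input values below are illustrative):

```python
# RM-to-orbit amplitude ratio, directly from the formula above
def rm_to_orbit_ratio(M_planet_MJ, P_days, vsini_kms):
    return 0.3 * M_planet_MJ ** (-1.0 / 3) * (P_days / 3.0) ** (1.0 / 3) * (vsini_kms / 5.0)

# a typical hot Jupiter: 1 M_J, 3-day orbit, Sun-like v sin i of 5 km/s
print(rm_to_orbit_ratio(1.0, 3.0, 5.0))  # -> 0.3: RM amplitude ~30% of the orbital RV amplitude
```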
Very few stars have their spin axes perfectly perpendicular to our line of sight. If the star is viewed pole-on, the RM effect amplitude will be zero. It’s worth remembering that it isn’t the absolute stellar rotation velocity that determines the Doppler shifting (and thus the RM effect amplitude), but the apparent rotational velocity, v sin i.
Another degeneracy affecting the RM effect is that impact parameters close to zero have the planet occupying both hemispheres more equally in time. This causes the RM effect to be much more symmetric than an off-centre transit with a high impact parameter. For the case of b = 0, the RM effect will be symmetric regardless of the value of λ, and only the RM effect amplitude will have changed. Since the RM effect amplitude is also a function of v sin i, some ambiguity is present in low-impact-parameter systems over the value of λ.
The overwhelming majority of planets which have had RM effects measured are hot Jupiters, whose short orbital periods and high masses cause high orbital RV amplitudes, leaving the RM effect amplitude small in comparison. Interestingly, however, the magnitude of the RM effect amplitude is not dependent on the planet's mass or semi-major axis, but only on the stellar v sin i and R_p. A planet of a given radius will have the same RM effect amplitude whether it is close to the star, producing a high orbital RV amplitude, or far from the star, with a much smaller orbital RV amplitude.
Accordingly, transiting planets in long-period orbits may have RM effect amplitudes several times their orbital RV amplitude. The advantages of this become quite apparent when one takes into consideration the timescales involved. Without the Rossiter-McLaughlin effect, a Doppler spectroscopic detection of an Earth-analogue around a solar-type star (with v sin i = 5 km s^-1) would require at least a year to secure a full orbit, and furthermore, would require a Doppler precision on the order of 10 cm s^-1. With noise issues in the RV data, it may require several orbits to build up enough data to confirm the planet. However, the Rossiter-McLaughlin effect for the same planet would be on the order of ~30 cm s^-1 and would last less than a day. The far shorter time needed to observe the RM effect and the far greater amplitude make this a promising tool to validate extrasolar planet candidates whose existence is known from transits alone, but whose mass may take a considerable amount of effort to determine. Instead of wasting months trying to validate a planet candidate and running the risk that the candidate is a false positive, a single night can conclusively demonstrate that the object is an orbiting planet.
Rossiter-McLaughlin Effect for a transiting Earth analogue
Because the rotational velocity of a star is correlated with its age and mass, which itself is correlated to the spectral type of the star, the true rotational velocity of a star is loosely correlated to the spectral type. Comparing a measured v sin i to the expected rotational velocity can give you a loose estimate for the inclination of the stellar rotation axis to the line of sight. This, combined with λ, can permit you to estimate the true spin-orbit alignment angle, typically denoted with φ (though variations on this definitely exist).
Constraining the spin-orbit alignment angle of extrasolar planetary systems provides clues to their dynamical histories. It is still not clear how hot Jupiter systems form. Giant planets may tidally interact with the planet-forming disk to migrate inward toward the star, or they may be gravitationally scattered inward by close encounters with other planets. Either scenario has testable predictions in the context of the observed spin-orbit alignment of a planetary system.
Peaceful migration through disk-interaction is expected to result in well-aligned planetary systems. Like in our Solar System, planets that form in a protoplanetary disk coplanar with the stellar equator will maintain that orientation, and you’ll end up with planets orbiting close to the stellar equatorial plane. However if hot Jupiters are the result of the scattering of planets through close encounters with each other, then the migration behaviour of hot Jupiters will have been much more chaotic in the early history of the planetary system and the spin-orbit alignment angle may have a wide range of values.
The first few hot Jupiters to have their RM effects measured were all fairly well aligned, consistent with calm migration as was generally believed to be the most likely explanation of their origin. Later work with planets from the SuperWASP survey uncovered several misaligned planets, including even retrograde hot Jupiters. Further work since has found planets across a wide range of values for λ.
Hot Jupiter Alignments
As spin-orbit alignment angles came in over the years it became apparent that misaligned planets are preferentially around hotter stars. Specifically above stellar effective temperatures of Teff > 6250 K, the distribution of λ seems to be much more random. This is clearly a clue to understanding the dynamical histories of hot Jupiters, but it’s not currently clear what it means.
Not only will measuring λ for extrasolar planet systems help us understand their formation histories, but it can also prove to be a powerful tool for confirming small, long-period planets with relatively little effort. It can require fairly high quality RV data, but it is well worth the effort to obtain these values, especially for smaller planets to see how the dynamical histories of low-mass, short-period planets compare to those of hot Jupiters. Do these multi-planet systems of low-mass planets form in a similar way to hot Jupiters? So far the evidence appears to point to "no," but more data is needed to understand this question.
https://eleijonmarck.dev/an-introduction-to-gradient-descent-w-linear-regression/ | # An Introduction to Gradient Descent w. Linear Regression
Gradient descent is one of those “greatest hits” algorithms that can offer a new perspective for solving problems. Unfortunately, it’s rarely taught in undergraduate computer science programs. In this post I’ll give an introduction to the gradient descent algorithm, and walk through an example that demonstrates how gradient descent can be used to solve machine learning problems such as linear regression.
Gradient descent is widely used in Machine Learning and Deep Learning
import pandas as pd
import numpy as np
import altair as alt
data = pd.read_csv('demo.txt')
scatter = alt.Chart(data).mark_circle().encode(
    x='x:Q',
    y='y:Q'
).properties(
    title='data'
)
scatter
# Our goal is to fit a line to this dataset
• why would we want to do that?
• we can use this to infer properties of the dataset
• we can use it to predict future behaviour (extrapolate)
from sklearn import linear_model
from sklearn import model_selection
model = linear_model.LinearRegression()
X_train, X_test, y_train, y_test = model_selection.train_test_split(data['x'], data['y'], test_size=0.2, random_state=0)
model.fit(X_train.values.reshape(-1, 1), y_train)
# For retrieving the slope:
print("""
Model intercept (position of the line) \n{:.2f}
Model coefficients (slope of the line) \n{:.2f}
Model score (how close are we to fit a line to the data) \n{:.2f}
""".format(
    model.intercept_,
    model.coef_[0],
    model.score(X_test.values.reshape(-1, 1), y_test)))
This gives us the intercept $b$ = 6.69 (where the line should be centered) and the coefficient $m$ = 1.35 (the slope of the line). Given our model's scoring function, this fit has a score of 0.27.
def make_line_using(m, b):
    # y = m * x + b
    x = np.arange(100)
    y = m * x + b
    df = pd.DataFrame(np.matrix([x, y]).T, columns=['x', 'y'])
    line = alt.Chart(df).mark_line().encode(
        x='x:Q',
        y='y:Q',
        tooltip=[alt.Tooltip('y', title='m * x + b')]
    ).interactive()
    return (scatter + (line))
m_guess = model.coef_[0]
b_guess = model.intercept_
make_line_using(m_guess, b_guess)
# Now let's try to implement this ourselves!
### Naive approach to guess until we get a good fit
guessing the parameters $m$ and $b$ of the linear equation
$y = m \times \mathbf{x} + b$
def plot_on_top_of_data(m, b):
    # y = m * x + b
    x = np.arange(100)
    y = m * x + b
    df = pd.DataFrame(np.matrix([x, y]).T, columns=['x', 'y'])
    line = alt.Chart(df).mark_line().encode(
        x='x:Q',
        y='y:Q'
    )
    return (scatter + (line))
Our guess
$y = 2 \times \mathbf{x} + 1$
m_guess = 2
b_guess = 1
plot_on_top_of_data(m_guess, b_guess)
#### Not the most efficient algorithm, but it might work.
hmmmmmm ¯\(ツ)/¯ let's think of another approach.
Can we somehow see if we have a good guess?
# Let's improve our guessing strategy using Gradient Descent
Gradient descent is an optimization algorithm used to minimize some function (loss function) by iteratively moving in the direction of steepest descent as defined by the negative of the gradient. In machine learning, we use gradient descent to update the parameters of our model. Parameters refer to coefficients in Linear Regression and weights in neural networks.
Starting at the top of the mountain, we take our first step downhill in the direction specified by the negative gradient. We continue this process iteratively until we get to the bottom of our graph, or to a point where we can no longer move downhill–a local minimum.
math.stackexchange - Partial derivative in gradient descent
ml-cheatsheet
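Before applying this to the line-fitting problem, here is a minimal one-dimensional illustration (not part of the original notebook): minimize f(x) = (x - 3)^2 by repeatedly stepping against its derivative f'(x) = 2(x - 3).

```python
# 1-D gradient descent on f(x) = (x - 3)^2
x = 0.0
learning_rate = 0.1
for _ in range(100):
    gradient = 2 * (x - 3)       # f'(x)
    x = x - learning_rate * gradient
print(x)  # ~3.0, the minimum of f
```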
## Let's introduce the loss function (or cost/error)
A Loss Functions tells us “how good” our model is at making predictions for a given set of parameters. The loss function has its own curve and its own gradients. The slope of this curve tells us how to update our parameters to make the model more accurate.
$$\operatorname{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-\hat{Y}_{i}\right)^{2}$$

Given $n$ predictions $\hat{Y}$, where $Y$ is the vector of observed values of the variable being predicted.

Our example with $\hat{Y}_i = m x_i + b$
Now let’s run gradient descent using our new loss function. There are two parameters in our lost function we can control: m (weight) and b (bias).
Since we need to consider the impact each one has on the final prediction, we need to use partial derivatives. We calculate the partial derivatives of the loss function with respect to each parameter and store the results in a gradient.
Given the loss function:
$$f(m,b) = \frac{1}{N} \sum_{i=1}^{n} (y_i - (mx_i + b))^2$$
The gradient can be calculated as:
$$f'(m,b) = \begin{bmatrix} \frac{df}{dm} \\ \frac{df}{db} \end{bmatrix} = \begin{bmatrix} \frac{1}{N} \sum -2x_i(y_i - (mx_i + b)) \\ \frac{1}{N} \sum -2(y_i - (mx_i + b)) \end{bmatrix}$$
def step_gradient(m: float, b: float, points: np.ndarray, learning_rate: float) -> list:
    """
    this calculates the gradient step of a **linear function**
    WILL NOT WORK for multi-dimensional data,
    since the derivatives would be matrices instead
    """
    N = float(len(points))
    b_gradient = 0.0
    m_gradient = 0.0
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        b_gradient += -(2/N) * (y - ((m * x) + b))
        m_gradient += -(2/N) * x * (y - ((m * x) + b))
    b = b - (learning_rate * b_gradient)
    m = m - (learning_rate * m_gradient)
    return [b, m]
learning_rate = 0.0001  # assumed value; the notebook cell defining it is not shown
new_b, new_m = step_gradient(m_guess, b_guess, data.values, learning_rate)
new_b, new_m
(6.6874785109966535, 1.3465666635682905)
# Let's run it through the whole dataset
def gradient_descent_runner(points, starting_m, starting_b, learning_rate, num_iterations):
    m = starting_m
    b = starting_b
    for i in range(num_iterations):
        b, m = step_gradient(m, b, np.array(points), learning_rate)  # step_gradient returns [b, m]
    return [m, b]
number_iterations = 10000
m, b = gradient_descent_runner(
    data,
    m_guess,
    b_guess,
    learning_rate,
    number_iterations
)
print("Starting gradient descent at guess_m = {0}, guess_b = {1}".format(
m_guess,
b_guess
))
print("Last gradient descent at guess_m = {0}, guess_b = {1}".format(
m,
b
))
Starting gradient descent at guess_m = 1.3450919020620442, guess_b = 6.687439682550092
Last gradient descent at guess_m = 1.4510680203998683, guess_b = 1.4510195909326549
def compute_error_for_line(m, b, points):
    total_error = 0
    # sum (y_i - y_hat_i) ^ 2
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        total_error += (y - (m * x + b)) ** 2
    # 1 / n
    mse = total_error / float(len(points))
    return mse
# let's see how bad our guess was
compute_error_for_line(m_guess, b_guess, data.values)
838.9099083602013
print("Starting gradient descent at \n guess_m = {0}, guess_b = {1}, error {2}".format(
m_guess,
b_guess,
compute_error_for_line(m_guess, b_guess, data.values)
))
print("Last gradient descent at \n guess_m = {0}, guess_b = {1}, error {2}".format(
m,
b,
compute_error_for_line(m, b, data.values)
))
Starting gradient descent at
guess_m = 2, guess_b = 1, error 838.9099083602013
guess_m = 1.4510680203998683, guess_b = 1.4510195909326549, error 111.87217648730648
# How does gradient descent work?
In its most general form:
Gradient descent is based on the observation that if the multi-variable function $F(\mathbf{x})$ is defined and differentiable in a neighborhood of a point $\mathbf{a}$, then $F(\mathbf{x})$ decreases fastest if one goes from $\mathbf{a}$ in the direction of the negative gradient of $F$ at $\mathbf{a}$, $-\nabla F(\mathbf{a})$. It follows that, if

$$\mathbf{a}_{n+1} = \mathbf{a}_{n} - \gamma \nabla F(\mathbf{a}_{n})$$

for $\gamma \in \mathbb{R}_{+}$ small enough, then $F(\mathbf{a}_{n}) \geq F(\mathbf{a}_{n+1})$. In other words, the term $\gamma \nabla F(\mathbf{a})$ is subtracted from $\mathbf{a}$ because we want to move against the gradient, toward the minimum.

Here we have defined the pieces, and the algorithm works under these constraints:

- learning rate $\gamma$: a small step size, $\gamma \in \mathbb{R}_{+}$
- function $F(\mathbf{x})$: must be differentiable, so that $F(\mathbf{a}_{n}) \geq F(\mathbf{a}_{n+1})$

Leading to ($\leadsto$) a monotonic sequence $F(\mathbf{x}_{0}) \geq F(\mathbf{x}_{1}) \geq F(\mathbf{x}_{2}) \geq \cdots$
# Coming back to the real world
Scikit learn provides you two approaches to linear regression:
If you can decompose your loss function into additive terms, then the stochastic approach is known to behave better (thus SGD), and if you can spare enough memory, the OLS method is faster and easier (thus the first solution below).
1) The LinearRegression object uses the Ordinary Least Squares solver from scipy, as LR is one of the two classifiers which have a closed form solution. Despite the ML course - you can actually learn this model by just inverting and multiplying some matrices.
2) SGDRegressor, which is an implementation of stochastic gradient descent, a very generic one where you can choose your penalty terms. To obtain linear regression you choose the loss to be L2 and the penalty to be none (linear regression) or L2 (Ridge regression).
The "gradient descent" is a major part of most learning algorithm. What you will see most often is a improved version of the algorithm Stochastic Gradient Descent.
source - linear regression in scikit-learn
http://arxiv-export-lb.library.cornell.edu/abs/1907.12702 | math.NA
# Title: On an optimal quadrature formula for approximation of Fourier integrals in the space $L_2^{(1)}$
Abstract: This paper deals with the construction of an optimal quadrature formula for the approximation of Fourier integrals in the Sobolev space $L_2^{(1)}[a,b]$ of non-periodic, complex valued functions which are square integrable with first order derivative. Here the quadrature sum consists of a linear combination of the given function values on a uniform grid. The difference between the integral and the quadrature sum is estimated by the norm of the error functional. The optimal quadrature formula is obtained by minimizing the norm of the error functional with respect to the coefficients. Analytic formulas for the optimal coefficients can also be obtained using the discrete analogue of the differential operator $d^2/dx^2$. In addition, the convergence order of the optimal quadrature formula is studied. It is proved that the obtained formula is exact for all linear polynomials. Thus, it is shown that the convergence order of the optimal quadrature formula for functions of the space $C^2[a,b]$ is $O(h^2)$. Moreover, several numerical results are presented and the obtained optimal quadrature formula is applied to reconstruct the X-ray Computed Tomography image by approximating Fourier transforms.
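To make the setting concrete, here is a small sketch of a uniform-grid quadrature sum for a Fourier integral, using ordinary trapezoidal weights; the paper's contribution is deriving *optimal* weights for $L_2^{(1)}$, which this sketch does not reproduce (the sign/normalization convention of the integral is an assumption):

```python
import numpy as np

# Approximate I(w) = \int_a^b f(x) e^{2 pi i w x} dx by a weighted sum of
# function values on a uniform grid; c holds the quadrature coefficients.
def fourier_quadrature(f, a, b, w, n):
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    c = np.full(n + 1, h)      # interior weights h ...
    c[0] = c[-1] = h / 2       # ... endpoint weights h/2 (trapezoidal rule)
    return np.sum(c * f(x) * np.exp(2j * np.pi * w * x))

# example: f(x) = x on [0, 1]; the exact value follows from integration by parts
print(fourier_quadrature(lambda x: x, 0.0, 1.0, w=3.0, n=64))
```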
Comments: 27 pages, 6 figures
Subjects: Numerical Analysis (math.NA)
MSC classes: 41A05, 41A15
Cite as: arXiv:1907.12702 [math.NA] (or arXiv:1907.12702v1 [math.NA] for this version)
## Submission history
From: Abdullo Hayotov R
[v1] Tue, 30 Jul 2019 01:55:27 GMT (2548kb)
https://www.physicsforums.com/threads/spherical-rolling.422499/ | # Spherical rolling
1. Aug 16, 2010
### hmoein
Hi, everyone!
I have a problem with a sphere rolling on a fixed sphere. My problem is to find the relationship between the coordinates of the center of the sphere (X, Y, Z) and its orientation (alpha, beta, gamma), i.e. the Euler angles of the sphere. As we know, a sphere has 6 DOF in space (3 coordinates and 3 rotations); when a sphere rolls on a surface, we expect it to have 3 DOF because of the relation between coordinates and rotations.
For example, when a circle rolls on a surface, the x coordinate of its center is:
X = R·θ (R = radius of the circle), and it has one DOF.
Like the circle rolling, I want to find the corresponding relations for the sphere.
thanks
hossein
2. Aug 16, 2010
### Ben Niehoff
Unfortunately, the constraint for a sphere rolling on a 2-dimensional surface cannot be integrated; it is "non-holonomic". Consider, as a simple case, a sphere rolling on a flat plane without slipping.
By rolling the sphere around a closed path, back to its starting point, you can imagine that in general the sphere will not end up in exactly the same orientation as it started; it will be rotated about the normal axis. Therefore, there is not a 1-to-1 correspondence between locations on the plane and orientations of the sphere.
You can construct differential relations, though; however, they will be more difficult to use.
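(For reference, a standard form of those differential constraints for a sphere of radius $R$ rolling without slipping on the plane $z=0$, with center coordinates $(X, Y)$ and angular velocity components $(\omega_x, \omega_y, \omega_z)$, is

$\dot{X} = R\,\omega_y, \qquad \dot{Y} = -R\,\omega_x$

i.e. the material point of the sphere at the contact has zero velocity. These relations cannot be integrated into a direct relation between $(X, Y)$ and the Euler angles, which is exactly what makes the constraint non-holonomic.)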
3. Aug 17, 2010
thanks | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8157047629356384, "perplexity": 751.9760476182648}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590794.69/warc/CC-MAIN-20180719090301-20180719110301-00102.warc.gz"} |
https://www.geoinformatics.eu/?cat=9 | # Positioning using RFID (Part 2)
## Materials
The basic RFID system consists of the reader, the reader’s antenna, the middleware, the data processing unit, and the tag(s). On many occasions, the middleware and the data processing unit are a single device, usually a computer. In addition, it is not uncommon for the antenna to be embedded in the reader, which is a welcome combination when designing portable systems.
The positioning abilities of RFID are examined with the help of the parts shown in the table below.
The first item in the list is the Garmin eTrex® 10 GPS unit, a consumer-grade portable receiver with the ability to log single points (Waypoints) or a series of recorded points forming a track (Track). It is compatible with both GPS and GLONASS. Once on, it starts collecting points and building the current track. It is possible to set the time interval of point collection down to 1 sec. The unit connects to the computer via USB as a mass storage device; the spatial data is automatically converted to GPX format and can be transferred by copying the files to the hard disk.

### Long-range reader
The original purpose of the CF-RU5112 RFID reader model is outdoor placement for car parking lot access control; in a typical scenario, the registered cars are equipped with plastic RFID tags in the form factor of a credit card. The tags are remotely interrogated by the reader and, provided their code is valid, the middleware sends the right command to open the gate.
The device has two cable connections; one for power and one for communication. Power requirements are covered by either a power adaptor connected to the mains or a portable battery pack system which can provide a voltage as low as 9V for a limited functionality. Communication with the computer is possible via the Universal asynchronous receiver/transmitter (UART) port and a UART to USB adapter. The UART connectors are used with serial communication standards similar to the serial ports found on personal computers, and allow for transmission of data at configurable speeds, which are configured by the controlling device (in this case, the computer).
The smaller RFID reader is a desktop USB device about the size of a modern smartphone. It can read and write data to the tags using the bundled software. The tags are credit-card-sized UHF transponders with a small internal dipole antenna. They came blank so they had to be assigned to a unique ID code with the help of the small reader/writer.
### Software tools
The manufacturer of the RFID readers provides the necessary driver software for the units to be recognized by the operating system (Microsoft Windows only) and the necessary resources for further development of customized applications; in addition, they offer a demo app which connects to the reader and runs the functions it supports.
RFID readers are relatively simple devices that essentially send and return packets of data. For the data to be actually usable, they need to be “translated” into more meaningful information, such as the ID code of the tag that was read, the date and time of the reading, and of course the RSSI data, which is essential to the positioning technique described here. All these tasks are performed by middleware devices. In the present implementation, the role of the middleware is undertaken by a small portable computer running the manufacturer’s demo software and a “spying” program, which logs the data packet transactions, since no logging function is provided by the demo app.
The device connects via the UART port, which is connected to a UART-to-USB adapter and then to the computer. It should be noted that UART hardware devices require the data format and the transmission speeds to be configured by a separate microcontroller; in this case, the computer via the demo software, in the Reader Parameter tab, as shown in the screenshot above.
The USB interface hosts a COM port which the demo software opens in order to initialize communication with the device. In parallel, a third-party program is activated and set to “spy” on data traffic on the same COM port. The spying software, Device Monitoring Studio by HHD Software (hereafter the logger), is a multi-purpose data logger which is able to log data sent and received across serial ports following the RS232 or RS485 communication protocol standards (which are typically combined with the UART hardware connectors) and save the logs to the hard drive.
The data captured by the logger is exported to a text file, but a preview is shown in the screenshot below. The generated text appears when the demo is set to Query Tag mode, in which the RFID reader continuously transmits query signals until a tag is found in the interrogation area and it responds. The query intervals are controlled by the demo software and can be set to anything between 40–300 ms. The data corresponds with the expected interrogation bytes explicitly described in the RFID reader’s User Guide document.
## Methods
### Obtaining data from the GPS receiver
Consumer GPS receivers like the Garmin eTrex used in this project are built to be user friendly and to make exporting data easy. As mentioned in the previous paragraph, the recorded data is exported to GPX format the moment the device is connected to the computer.
The GPS Exchange Format (GPX) file is an XML text file with the .gpx extension. As exported by the GPS device, it contains information that can be converted to spatial features (points) by the dedicated tool in ArcMap. Regarding its structure, tracks are registered by the tag set `<trk> … </trk>` and the logged points are found in between. The point entries do not have a serial ID number, but they do have the following (in order of appearance):
• Latitude (in decimal degrees)
• Longitude (in decimal degrees)
• Elevation (in meters)
• Time (in the format: YYYY-MM-DD[T]HH:MM:SS[Z])
The GPX standard defines an `<extensions>` tag for each point, but Garmin’s devices do not use it. Instead, URLs are found in the header (before any track points) which point to an external location with the ‘extensions’ information: they specify things like the spatial units, various data type declarations, units for features of more advanced GPS devices such as temperature and heart rate, and others. Projection is not defined in the GPX file because it is always the standard WGS84, meaning that spatial features that are either generated or directly exported from GPX originals will need to be projected.
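A minimal parsing sketch (assuming the standard GPX 1.1 namespace; the element layout follows the structure described above):

```python
import xml.etree.ElementTree as ET

NS = {"gpx": "http://www.topografix.com/GPX/1/1"}  # assumed GPX 1.1 namespace

def read_trackpoints(path):
    """Return (lat, lon, ele, time) tuples for every logged track point."""
    root = ET.parse(path).getroot()
    points = []
    for trkpt in root.findall(".//gpx:trk//gpx:trkpt", NS):
        points.append((
            float(trkpt.get("lat")),                      # latitude, decimal degrees
            float(trkpt.get("lon")),                      # longitude, decimal degrees
            float(trkpt.findtext("gpx:ele", "nan", NS)),  # elevation, meters
            trkpt.findtext("gpx:time", "", NS),           # YYYY-MM-DD[T]HH:MM:SS[Z]
        ))
    return points
```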
### Obtaining data from the RFID reader
In order to get usable data from the reader, a special setup has to be made. This is because the manufacturer does not offer data logging software; instead they offer SDKs for the customer to build their own. Below is a diagram which outlines the principle of the setup.
Data transfer among the parts of the system occurs at all times of operation, even if no tag responds. The command that initiates interrogation is sent by the software. The first state of operation is the query state, where the reader continuously scans the environment of tags that might respond. This can be observed by monitoring the data transfer through the logger’s live interface.
Above is a screenshot of the log from the communication between the reader and the tag number `00 99`. The code is not easy to read, so here’s an explanation.
The communication between the reader and the tag contains one or more command-response dialogues. The first dialogue is always the Inventory: the reader interrogates its environment by transmitting signals with the Inventory command repeatedly until a tag is found. When one or more tags are found within the interrogation zone, they capture energy, activate and answer back to the reader. This initial response comprises a few data values which communicate the tags’ ID codes.
After the response from the tag(s), the reader compiles a message to be sent to the computer that includes the IDs of all the tags that responded coupled with the measured RSSI values for each of the tags. The second dialogue is the Read Data: the reader sends one command chunk to all the tags within its range notifying them that the data they contain need to be communicated back to the reader, followed by a second command chunk that notifies which of the tags must answer. This concludes the basic “read that RFID tag” function. More functions are possible, such as permanent or temporary deactivation (kill and lock) of a tag, modification of the data stored in the tag’s memory (write), etc. These functions, however, do not communicate RSSI meaning that they cannot be exploited for RSS positioning techniques.
From the user’s perspective, the operation of the RFID follows the steps outlined below, provided all software and drivers have been installed and are ready.
1. Connect the UART end from the RFID reader to the UART-to-USB adaptor, and then to an available USB port on the computer
2. Run the demo software and open COM port
3. Run the serial port sniffer software, specify which COM port to “spy” and set the path where the log file will be saved. The log file is a text file without extension, and can be opened with a text editor.
4. Switch over to the demo software and initiate Query function
After initiating the Query tool (which runs the Inventory), the software sends commands to the reader periodically via the COM connection in the form of small data packets, which are then translated into functions.
The frequency of the commands is specified by the Read Interval option, which can be between 40 – 300 ms. An increased Read Interval value means fewer interrogations per second. In the log screenshot above, the direction of the data can be either Up (from Reader to Computer) or Down (from Computer to Reader).
Regarding the Inventory function, which is the one returning RSSI values, the interrogation command has a fixed data form (a length byte, address and command bytes, the command data, and two CRC bytes, LSB first), as described in the reader’s User Guide. An example request code for the Inventory function is the following:
`06 00 01 04 00 ac 36`
Hexadecimal numbers are written with the prefix `0x …`, to distinguish them from decimals, so `0x0a` is the same as `0a`.
EPC ID data chunks contain:
EPC-1 = TagID length + TagID data + RSSI
In the example of the screenshot above, the following response data code can be seen:
`0a 00 01 01 01 02 00 99 c7 12 7f`
The length of this code is 11 bytes, i.e. 11 hex numbers.
Cyclic Redundancy Check (CRC) is an error-detecting computation, similar to hashing and checksum functions used for large data sets. The two numbers (LSB and MSB) are calculated from the rest of the data internally by the reader; the computer can then recalculate them from the same data and compare the new LSB and MSB to the original ones. Transmission or communication error will have occurred if the two sets do not match.
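The CRC can be verified in a few lines of code. The variant below (reflected CRC-16, polynomial `0x8408`, preset `0xFFFF`, LSB byte transmitted first) is an assumption based on readers that use this frame format, but it does reproduce the `ac 36` checksum of the example Inventory command above:

```python
def crc16(data: bytes) -> int:
    """Reflected CRC-16, poly 0x8408, preset 0xFFFF (assumed reader variant)."""
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc

frame = bytes.fromhex("06 00 01 04 00 ac 36")  # the example Inventory command
crc = crc16(frame[:-2])                        # CRC over all bytes except the CRC itself
assert frame[-2] == (crc & 0xFF)               # LSB-CRC: 0xac
assert frame[-1] == (crc >> 8)                 # MSB-CRC: 0x36
```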
### Calculation of GPS accuracy
GPS points collected over extended periods of time are normally distributed around the true value. For shorter periods, they do not always average to the truth because the sample data may not be enough (Rutledge, 2010). A normal mixture of normal distributions is normally distributed, meaning that for datasets spanning extensive periods of time, points converge to the true value. For shorter periods of collection, Garmin GPS accuracy is calculated by determining the standard distance in the X and Y directions, then by calculating the Circular Error Probable (CEP), and finally by calculating the 2DRMS (95%) radius (Coyle, 2012).
Standard distance is calculated as follows (Mitchell, 2005):
$\mathrm{SD}=\sqrt{\frac{\sum_{i=1}^{n}\left(x_i-\bar{X}\right)^2}{n}+\frac{\sum_{i=1}^{n}\left(y_i-\bar{Y}\right)^2}{n}}$
where $x_i$ and $y_i$ are the coordinates for feature $i$, $\bar{X}$ and $\bar{Y}$ are the mean centers, and $n$ is the total number of features.
Circular Error Probable is a statistical computation used in ballistics and GPS. For a target at coordinates (x,y), CEP is defined as the circle with center (x,y) and the smallest possible radius that contains all locations where there is a 50% probability of finding the true target. CEP is calculated as follows (Coyle, 2012):
$\mathrm{CEP}=0,59\cdot\left(\sigma_x+\sigma_y\right)=0,59\cdot\left(2\cdot\mathrm{SD}\right)$
Next, the 2DRMS statistic would normally be calculated, which combines the vertical and the horizontal probability. However, ArcMap calculates the standard distance based on the combined probabilities of the X and Y coordinates, so it is not necessary to include it.
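A sketch of these statistics (coordinates assumed to be in meters, e.g. already projected to UTM):

```python
import numpy as np

def standard_distance(x, y):
    """Standard distance of a 2D point scatter, as in the formula above."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return np.sqrt(np.mean((x - x.mean()) ** 2) + np.mean((y - y.mean()) ** 2))

def cep(x, y):
    """CEP = 0.59 * (sigma_x + sigma_y), approximated as 0.59 * 2 * SD per the text."""
    return 0.59 * 2.0 * standard_distance(x, y)
```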
### Analysis of the RSSI behavior
RSSI is an indicator based on the strength of the signal reflected by the tag. In order to conduct a pragmatic survey and directly calculate the signal levels, it is necessary to have a complete reference guide on the specifications of the equipment, which was not available for the specific model. Therefore, the first step is to take measurements of RSSI values for tags placed at known distances in an obstacle-free environment, to prevent signal degradation as much as possible.
The equation of propagation that links measured RSSI and distance was built based on the values in the table below.
Natural logarithms are converted to base-10 logarithms as follows:
$\log_b(x)=\frac{\log_d(x)}{\log_d(b)} \Leftrightarrow \ln(x)=\log_e(x)=\frac{\log_{10}(x)}{\log_{10}(e)}=\frac{1}{0,4343}\cdot\log(x)$

$\Rightarrow -9,582\,\ln(d)+202,06=-9,582\cdot\frac{1}{0,4343}\cdot\log(d)+202,06=-\left(22,0631\cdot\log(d)+(-202,06)\right)$
After this transformation, the developed equation matches the RSSI equation:
$\mathrm{RSSI}=-\left(10\cdot n\cdot\log_{10}d+A\right)$
for: $10n=22,0631 \Leftrightarrow n=2,20631$ and $A=-202,06$
The equation of propagation is solved for distance $d$:
$\mathrm{RSSI}=-9,582\,\ln(d)+202,06 \Leftrightarrow d=\exp\left(\frac{\mathrm{RSSI}-202,06}{-9,582}\right)$
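In code, the model reduces to a one-liner (constants taken from the fitted equation above):

```python
import math

def rssi_to_distance(rssi: float) -> float:
    """Distance from RSSI via the fitted model RSSI = -9.582*ln(d) + 202.06."""
    return math.exp((rssi - 202.06) / -9.582)
```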
### Trilateration
The reader is placed at an “unknown” position in a clear environment. In addition, tags are placed at relatively close positions (< 15 m) away from the reader. The positions of the tag locations are determined by the GPS. The reader, controlled by the portable computer it is connected to, is activated and RSSI value collection takes place against one tag location at a time. The RSSI values are statistically analyzed and the average value is used as input for the pre-calculated propagation formula, which yields a value of distance. Distances to all the tag locations are combined in trilateration calculations and a relative set of coordinates (x,y) is produced, which is converted to global coordinates.
For a 2-dimensional Cartesian system, let $N(x,y)$ be the unknown point, $P_1(x_1,y_1)$, $P_2(x_2,y_2)$, $P_3(x_3,y_3)$ points with known coordinates, and $d_1$, $d_2$, $d_3$ the distances between $N$ and $P_1$, $P_2$, $P_3$ respectively. The coordinates of the unknown point are the solutions of the system:
$f(x,y)=\begin{cases}\left(x-x_1\right)^2+\left(y-y_1\right)^2=d_1^2\\\left(x-x_2\right)^2+\left(y-y_2\right)^2=d_2^2\\\left(x-x_3\right)^2+\left(y-y_3\right)^2=d_3^2\end{cases}$
In the trilateration testing phase, tags were placed at fixed locations and their positions were measured using the GPS. For each of the points, the most accurate data collection session was used, and its mean center was regarded as the reference point. The coordinates had to be converted to UTM (projection WGS 1984 UTM Zone 33N) to correspond with the Cartesian system. Using the generated model, the measured RSSI values are translated to distances.
Ideally, the system $f(x,y)$ has a solution $(x,y)\in\mathbb{R}^2$. The system would ideally be represented by the figure below.
In practice, however, the system has its solution within an area encompassed by the arcs (ab, bc, ca).
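A common way to solve such an over-determined and generally inconsistent system is to linearize it by subtracting the first equation from the others and solve in the least-squares sense; a sketch:

```python
import numpy as np

def trilaterate(points, dists):
    """points: (n, 2) known tag coordinates; dists: (n,) estimated distances."""
    p = np.asarray(points, dtype=float)
    d = np.asarray(dists, dtype=float)
    # Subtracting the first circle equation from the rest gives a linear system:
    # 2*(p_i - p_0) . (x, y) = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2
    A = 2.0 * (p[1:] - p[0])
    b = d[0] ** 2 - d[1:] ** 2 + np.sum(p[1:] ** 2 - p[0] ** 2, axis=1)
    xy, *_ = np.linalg.lstsq(A, b, rcond=None)
    return xy  # estimated (x, y) in the same coordinate system as 'points'
```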
Causes of imperfect position estimation are:
• Locations of reference tags are GPS-based
• Combined (pooled) variance does not include model error
• Terrain anomalies
• Multipath, mainly from ground reflections
• Collision of tags
# Positioning using RFID (Part 1)
Radio-based identification systems were originally designed to monitor and identify objects, animals and people based on the proximity of a transponder to a reading device. After simple modifications, the same system architecture can be regarded as a tool for estimating the position of objects or people bearing transponders, since they are radio transmitting devices, when they are found within the operation range of the system. The location of the agent is then determined relative to a reference location (Bouet & dos Santos, 2008). RFID localization systems are more commonly found indoors, in buildings where the quality of satellite positioning service is dramatically weakened. Examples of applications are in construction sites and storerooms.
RFID positioning implementations face problems similar to the problems of other radiowave-based positioning systems, such as multipath, shadowing, interference. Especially for signal-strength ranging techniques, these systems are even more prone to the effects of the environment, however, a strong correlation between the strength of the signal emitted by a base station and its distance does exist. This correlation can be exploited for location estimation (Locher, Wattenhofer, & Zollinger, 2005).
The methods used for RFID positioning make use of the design and specifications of the technology. For indoor location systems (but can also apply to outdoors) that use range-based distance measurements, the methods can be classified into two categories: received signal strength (RSS) methods and time-of-fly (ToF) methods. The methods of first category function under the law of signal decay by distance, while those of the second category are based on the signal propagation physics. Methods such as time-of-arrival (ToA), time-difference-of-arrival (TDoA), angle-of-arrival (AoA), and phase-of-arrival (PoA) are examples of ToF methods. ToF methods are also used in satellite positioning systems, such as GPS (López, Gómez, Álvarez, & Andrés, 2011).
Each one of the methods mentioned above comes with its advantages and disadvantages. The general opinion expressed in the relevant literature is that RSS methods are not as accurate and reliable as the ToF methods, due to the fact that distance is only one of the numerous parameters that affect the RF signal strength. However, RSS methods are simple in their setup and do not require expensive equipment.

## The RFID technology
Radio frequency identification (RFID) systems consist of transponders (tags) and readers that communicate using electromagnetic energy at the radio spectrum, hence their name. RFID systems can be classified according to their energy source into three types: passive, active and semi-active. In a passive system, the tag draws all the energy it needs to function wirelessly from the reader. On the other hand, in the active type the sole purpose of the reader is to communicate with the tag and not to provide power, which is provided by another source, usually a non-replaceable internal battery. A semi-active RFID system uses tags with an internal battery which solely powers the internal circuitry for the tags internal functions, while the energy provided by the reader is only used for the activation (wake up) of the tag and the communication (transmission of data).
The technology behind identification using radio waves is far from new, as it dates back to the beginning of the century and especially in the time before the Second World War, when a simple implementation called IFF, acronym for Identification, Friend or Foe, was used for identifying approaching aircrafts and avoiding friendly fire. A custom antenna was mounted on the friendly aircrafts; it was designed to answer to interrogation electromagnetic signals emitted by ground stations. Precursor to the modern passive RFID systems, however, is the “Thing”, aka the “Great Seal Bug”; a small eavesdropping device invented by Léon Theremin that was used by the Soviet intelligence agencies to spy on the Americans. The device had a simple microphone connected to an antenna. The device was powered wirelessly by a remote source in a manner similar to the modern passive RFID systems.
In the industry, RFID is widely regarded as a replacement for barcodes. When compared to previous Auto-ID technologies, RFID has significant benefits and some drawbacks. Amongst the main advantages are the ability to communicate and track items without physical contact and out of line of sight, the increased automation in reading/writing, the fast and accurate data transfers, the larger amount of programmable data the tags can store, and the flexibility and robustness of the entire system. When used in a positioning application, these advantages are of particular importance. However, some disadvantages do exist, the most important of which are the potential incompatibilities between devices coming from different manufacturers, the increased implementation costs and the cost of the equipment, and the fact that the propagation of radio waves is affected by the environment and even blocked by some materials, such as water and metallic surfaces (Erande, 2008).
Standardization of RFID was proposed and has been developed by the International Organization for Standardization (ISO) and by the Auto-ID Center with their Electronic Product Code technologies.
## RFID system architecture
In its simplest form, a fully functional RFID system consists of two main components: the transponder (tag) and the reader (a.k.a. RFID scanner) (Finkenzeller, 2003, p. 7). The transponder is a device generally simple and cheap that is attached to the object or person being tracked/identified; the reader is a larger, more complex and more expensive unit that can power, read, and write the transponder without physical contact and when the latter is placed within the reader’s read range. The image below shows a typical RFID setup used in logistics. This setup is the basis for a positioning system, where the distance between the antenna and the objects bearing the tags is to be estimated.
### Transponder (tag)
The transponders used in RFID systems typically consist of three sections: a radio-frequency front end, an analog section, and a digital section. Not all sections are present in all kinds of tags; for example, the very simple 1-bit tag usually found in antitheft systems does not have a digital section. The role of each section is outlined below.
RF front end:
• Energy harvesting from the electromagnetic field
• Demodulation of the received signals
• Transmission of the outgoing signal
Analog section:
• Clock of the digital subsystem
• Powers the rest subsystems/components of the tag chip
• Stabilizes the voltage of the front end
• ‘Power on reset signal’
Digital section:
• Manages the power
• Data recovery
• Executes the protocol operations
### Reader

The reader constitutes the first communication layer between the tag and the rest of the RFID system. Readers consist of subsystems that enable the transmission of energy and data to and from the tag and the forwarding of the data to the next communication layer in the system; these subsystems are typically an RF unit, an external (or internal/embedded) antenna, an electronic control unit, and a communication interface. Readers communicate with the middleware, or another control device, through the communication interface (e.g. RS 232, RS 485, USB, UART).
Readers designed for passive systems typically feature:

• High-power (up to 4W) RF transmission for the activation of the passive tags
• High power consumption, in the order of Watts
• Maximum distance of communication in the orders of centimeters or meters (up to 20 m)
• Reading capacity of ~100 tags in a few seconds
Readers designed for active systems typically feature:

• Low-power RF transmission (10–20 mW)
• Power consumption requirements significantly lower (~mW), which facilitates the design of portable readers
• Reading capacity of hundreds of active tags in a few milliseconds
• Small antenna
### Middleware
The interface that connects the reader with the other units down the communication chain is referred to as “middleware”. Middleware units perform the following tasks:
• Filtering of the data incoming from the reader, so that the system is not overwhelmed
• Routing of the filtered data towards the proper software application
• Data logging
• Management of technical parameters, such as reading frequency, transmission energy levels, et al.
## Characteristics
The most meaningful categorization of RFID systems is based on the power requirements of the tags. Three categories can be recognized: passive, active and battery-assisted passive, aka “semi-passive”. Passive tags are powered entirely by the reader and constitute the simplest design of all three; active tags have an internal power source, usually a small, non-replaceable battery which provides energy not only for the transmission of the signal, but also for internal circuitry functions; battery-assisted tags embed a small battery which is used only for the internal functions of the tag, while communication with the reader requires passive-like energy harvesting. The capabilities of a tag are greatly affected by the energy source it uses. For example, the maximum reading distance of an active tag is significantly greater than the range of an entirely passive system, which can be from a few centimeters (NFC devices) up to 10–15 meters maximum for UHF systems. For comparison, the maximum reading distance of an active tag can exceed 100 m. In general, semi-passive tags are the middle ground between entirely active and entirely passive systems.
The number of RFID system types available in the market today is extremely large because of the different parameters and characteristics of the architecture of the technology. As with the personal computer market, where the buyer has to take into account the characteristics of a number of subsystems that identify a specific model like the CPU, the RAM the HDD etc., a number of ‘selection criteria’ (Finkenzeller, 2003, p. 25) are present in a RFID system. These criteria are:
• Range
• Operating frequency
• Memory
The criteria are not entirely independent; systems that operate at higher frequencies tend to have a higher range. For example, microwave systems operating at frequencies in the vicinity of 2.4 GHz can typically achieve read distances of tens of meters, while systems in the LF bands have range of only a few centimeters. The architecture of the systems is comprised of a number of properties that together define the system itself.
## Range
Of significant importance to a RF-based positioning system is the maximum distance of communication between the transponders and the agent. The range of RFID systems is dependent on its design and spans from a few centimeters to over 100 m. According to (Finkenzeller, 2003, p. 26), the key factors of an application that will define the range are the ‘positional accuracy of the transponder’, the presence and the number of transponders.
## Operating frequency
The operating frequency of an RFID system dictates many other parameters, such as the range and the environments it can be used in, mainly due to different permeability. The most established bands in the industry are four, but more can be defined. The four bands are: LF, HF, UHF, and SHF or microwave.
For an entirely passive RFID-based positioning system, the best choice seems to be a system operating in the UHF range, where the maximum read distance can be up to a few meters. For battery-assisted or active systems, one may opt for a microwave system offering significantly higher range, in the order of tens of meters. Passive LF systems were amongst the first that were used and became widely popular in animal tagging applications and in high-accuracy timing systems used in sports. These systems are low-power, the reading range is a few centimeters, and data rates are very low, usually under 8 kbps. LF requires large antennas because the coupling is inductive and the reading range is within centimeters. Similarly, HF systems are found in book tracking in libraries and in smart cards, and have a reading range of up to 1 m, which is insufficient for passive positioning.
## Memory
The demand for small size and low-cost RFID tags has an impact on the size of the memory they can have. In general, cheap and low-capacity memories store identification data only, while advanced (and more expensive) “smart” tags can afford higher capacity memory and circuity. Typical memory technologies used in RFID tags are:
Tag memory can be used for storing location data, such as coordinates, or semantic position data, such as cell ID. This can be useful in autonomous systems that do not have access to a spatial database from where to retrieve location data.
In sensitive RFID applications, for example where transaction of personal identification data takes place, security needs to be included.
## Passive communication between reader and tag
Data transfer between readers and transponders in an RFID system presupposes the establishment of a communication channel between the two devices. Two methods are described: inductive coupling, which is used for systems operating at LF and HF bands, and modulated backscatter coupling for UHF and higher bands.
In resonant inductive coupling, the reader antenna has a coil which is powered by alternating current generated by an internal oscillator. As the electric current passes through the coil, it generates an alternating magnetic field that serves as a power source for the tag. The latter’s antenna coil is energized by the electromagnetic field which subsequently charges a nearby capacitor and activates the tag’s integrated circuit. Data transfer takes place through the electromagnetic energy exchange, which occurs in pulses translating to data.
Backscatter coupling is used in UHF tags and requires a dipole antenna. The reader generates high- power electromagnetic signals that the tags modulate and reflect back to the reader. Some readers can sense the power levels of the reflected signal, which is the basis for return-signal-strength positioning.
### Sequential, Half-duplex, Full-duplex
RFID tags that could have any use in a positioning system must be able to harvest energy and transmit data that is stored in their small chip. Both of these actions occur through the only antenna system that the tag has, it is therefore necessary to define the timings for energizing, receiving data, and transmitting data. Three alternative procedures are used: sequential, full-duplex and half-duplex. All three procedures use three lanes of operation that “pass through” the single hardware interface, i.e. the antenna. The first lane is the energy transfer, the second lane is the downlink (data transfer from reader to tag), and the third lane is the uplink (from tag to reader).
The main characteristic of the sequential procedure is the intermittent energy transfer from the reader to the tag. Energy and data are transferred simultaneously in the first time slot, which is followed by uplink-only activity in the second time slot, followed by a simultaneous energy transfer/downlink activity, and the cycle continues until it is terminated with an uplink activity that is not followed by energy transfer.
Half- and full-duplex procedures utilize a constant energy transfer and non-constant data transfer. Of the two lanes for data transfer (uplink and downlink), only one is active at any given time slot in half- duplex procedures, while full-duplex procedures have them operating in parallel.
## RFID positioning approaches
Positioning approaches that implement the lateration technique on RFID systems include Phase of Arrival (PoA) and Phase Difference of Arrival (PDoA), where the phase of the signals are used for estimation of distances between the tags and the readers (Povalač & Šebesta, 2010) and are found in systems operating at the UHF range.
### Proximity-based (cell-of-origin) RFID methods
The extent of the interrogation zone of an RFID system defines the granularity of the proximity-based location methods, where the approximate location of the agent equipped with a tag is determined as they move into the zone. Proximity-based positioning might better be described by the term “tracking” (Song, Haas, & Caldas, 2007) rather than positioning because as a method, it does not necessarily disclose the geographical position, absolute or relative, of the tag, but it has a semantic meaning, e.g. “tag is found at gate B17”. The position of “gate B17” is therefore the position of the tag, to a certainty characterized by the granularity of the system.
When this technique is used for mobile phone tracking using the cell tower they are connected to, it is generally referred to as Cell-of-Origin (CoO), but this term could also be used in RFID.
Proximity-based methods are meant to be used in occasions where no information on the distance between the agent and a key location (in this case, a properly placed transponder of known coordinates) is available, such as strength of returned RF signal, propagation time for the signal, angle of arrival, etc. These methods offer the following benefits:
• Reduced cost of equipment. RFID readers and systems with the ability to return or extract some kind of information regarding the propagation parameters that can later be used for positioning estimation cost significantly more. For the present thesis, the purchased RFID reader model without the return signal strength data output would have cost approximately 30% less. The cost increases in more complicated systems based on time-of-fly or angle.
• Higher certainty in precision. Precision in PBPs is tightly linked to the interrogation range of the tag or the overlapping tags, and can be adjusted from the settings panel of the RFID system.
The drawbacks of PBP systems are the following:
• Number of tags. For a system to perform as described above, a large number of tags is required, since the precision of a PBP system is linked to the read range.
For a 2D implementation, the unknown Cartesian coordinates of an agent (x1, y1) that has interacted with a transponder of known coordinates (x0, y0) will always be found within the radial interrogation range r of the transponder/reader system. In other words, assuming that the signal is uniformly homogeneous and unobstructed, the following expression holds:
$r\ge\sqrt{\left(x_0-x_1\right)^2+\left(y_0-y_1\right)^2}$
In this example, as the agent moves into the interrogation range of the tag, i.e. as the distance between the agent and the tag becomes $d\le r$, the reading process is activated and the location of the agent is registered with high accuracy and with precision equal to $r$. Higher precision can be achieved by reducing the transmission power of the RFID reader, thus reducing $r$; note that the effective positioning range for that specific tag will also decrease.
Higher precision can also be achieved by placing several tags at close proximity with overlapping interrogation ranges. Estimating the location of the tag then comes down to selecting a point from the intersecting region, which should be smaller than the entire range of a single reader. However, this implementation must use anti-collision and anti-interference systems, otherwise quality readings will not be possible.
RF signals can be represented by the following units of measurement, which can be more or less converted from one another for measurements that are not close to the extremes and with a level of accuracy that varies (Bardwell, 2002):
• mW (milliwatts)
• dBm (decibel-milliwatts)
• % (percent)
Milliwatts (mW) and decibel-milliwatts (dBm) are measurements of the emitted RF energy, but mW is linear, while dBm is logarithmic. Conversion between the two is possible by taking the common logarithm (base 10) and multiplying by 10, as shown in the example that follows:
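$P_{\mathrm{dBm}}=10\cdot\log_{10}(P_{\mathrm{mW}})$, so, for instance, $1\,\mathrm{mW}=0\,\mathrm{dBm}$, $2\,\mathrm{mW}\approx 3\,\mathrm{dBm}$, and $100\,\mathrm{mW}=20\,\mathrm{dBm}$.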
Doubling the emitted RF energy increases the dBm values by approximately 3 dBm. In addition, while it is possible to have negative dBm values, this is not the case with the mW since the latter refers to emitted RF energy which does not make sense to be negative.
RSSI is usually expressed in the form of a single-byte value in the range 0–255, although many vendors of consumer products report it on a ‘percentage’ scale of 101 values, where 0 refers to minimum signal strength and 100 to maximum. This is mostly found in devices that are not related to RFID, such as wireless network (WiFi) devices, etc.
$\mathrm{RSSI}=-\left(10\cdot n\cdot\log_{10}d+A\right)$
where:
• A is the RSS at 1 m distance
• d is the distance
• n is the signal propagation exponent
Attenuation is expressed as ratio of change, in dB (decibels). For two power states P1 (measured) and P0 (reference), where 0 < P1 < P0, attenuation is calculated as (Couch, 1999, p. 212):
$\alpha_p=10\cdot\log\frac{P_1}{P_0}$
At this point, it should be pointed out that RSSI and distance are inversely related, i.e. a greater distance will give lower RSSI returns.
Signal-strength-based ranging systems have been used in robotics for adding depth (distance) data to the pixels of tagged objects as they are seen by cameras attached to the robot (Deyle, Nguyen, Reynolds, & Kemp, 2009), in locating systems used for tracking of people in hospitals, livestock, workers in construction sites (Choi, Lee, Elmasri, & Engels, 2009). In fact, RSS-based positioning in construction sites has been the focus in quite a few projects, as the special requirements of these environments require study of interference and filtering of the signals prior to the actual location finding algorithms (Ibrahim & Moselhi, 2015).
In practice, the strength of the returned signal is not stable, experiencing variations that are caused by several factors. These factors have been found to be physical distances between the transponders, obstacles, orientation of the antennas, and interference (Chapre, Mohapatra, Jha, & Seneviratne, 2013). These fluctuations, however, could be used for building of spectral maps in positioning using the fingerprinting method, were each cell must be assigned to a unique set of attributes in its spectral signature.
### Fingerprinting in RFID
With fingerprinting, location determination takes place over an area that has been analyzed beforehand. It is a two-step setup that requires a preparatory phase (calibration) and the actualization phase, where real-time localization occurs.
In the first phase (calibration), the entire area is divided into cells and RSS value samples are taken in each cell, with coordinates (xi,yj). The data of the most representative values is stored in a database (lookup table, location fingerprint map or radio map) and remains available for the next phase. By “most representative values”, it is meant that many RSS values are measured but only the average values are kept (Kaemarungsi & Krishnamurthy, 2004). The survey area can be monitored by several RFID readers (Ting, Kwok, Tsang, & Ho, 2011). The level of granularity in fingerprinting is characterized by the cell size which is defined by the distance between each set of coordinates (x1,y1), (x2,y2), …
In the second phase (localisation), the reader scans the environment for tags and receives RSSI values for each tag, which are communicated to a pattern-matching algorithm. Location is determined by querying the database for a matching cell record (xi,yj). However, it might not be possible to match the measured RSSI values to an exact cell, therefore an algorithm of Nearest Neighbors is used (Guvenc, Abdallah, Jordan, & Dedeoglu, 2003) which matches the unmatched measurements to the database records where the Euclidian distance is minimum. It should be noted that it is important that the conditions during the mapping phase and the positioning phase are exactly the same in order to limit the variations in the signal strength measurements.
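A minimal matching sketch (the radio-map layout is an assumption for illustration):

```python
import numpy as np

# radio_map: {(x_i, y_j): [avg_rssi_tag1, avg_rssi_tag2, ...]} built in the
# calibration phase; 'measured' is the live RSSI vector in the same tag order.
def locate(measured, radio_map):
    measured = np.asarray(measured, dtype=float)
    best_cell, best_dist = None, float("inf")
    for cell, fingerprint in radio_map.items():
        dist = np.linalg.norm(measured - np.asarray(fingerprint, dtype=float))
        if dist < best_dist:
            best_cell, best_dist = cell, dist
    return best_cell  # cell with minimum Euclidean distance in signal space
```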
## Technical considerations
### Signal ranges
As the signal leaves the source, it propagates through the medium (or the free space) surrounding the source, which is generally filled with background noise, towards the receiver. Based on the reaction of the receiving unit with regards to the signal, three zones can be defined:
• Transmission zone, where communication between the transmitter and the receiver is possible without any errors (or with an insignificant error rate)
• Detection zone, where the signal is detected but the error rate is too high for actual communication, and
• Interference zone, which is the space past detection range, where background noise “covers” the signal rendering practically undetectable by the receiver.
### Multipath effects
Multipath particularly affects UHF tags because they communicate via backscatter, i.e. by reflecting the reader’s interrogation signals. In addition, RFID is popular in tagging applications, meaning that tags are expected to be found in confined places, such as indoors, and in places with many obstacles, such as storerooms. Multipath can be filtered out using a statistical profile for each ID (Wang & Katabi, 2013).
### Anti-collision systems
Collision of tag communication is caused by the presence of many tags in a confined space. In such cases, the reader is prevented from communicating properly and signals cannot be registered; a problem that appears in logistics but can also appear in a GIS application. Each communication path opened between the reader and one tag acts as interference to the path between the (same) reader and the nearby tags. If less dense placement of the tags is not feasible, then anti-collision systems (protocols) address these mishaps using the following techniques:
• by forcing the reader to time-manage the readings, locking the communication channel with one tag at a time and blocking all the others,
• by blocking all other tags and opening an alternative communication channel,
• by using time-sharing so that each tag has a time slot to communicate,
• by installing on the tags a subsystem that sorts them in the reading queue according to their distance from the reader,
• (for moving tags) by installing on the tags a subsystem that sorts them in the reading queue according to their relative speed in relation to the reader.
Anti-collision protocols in RFID is a field of ongoing study and new techniques emerge at times.
When designing a model for a system that uses RFID for positioning, the following two aspects need to be taken into account:
• False-negative readings, meaning that a tag is not detected even though it lies within the reader’s read range. According to (Hähnel, Burgard, Fox, Fishkin, & Philipose, 2004), false-negative readings are frequent in these RFID model scenarios. In positioning applications, false negatives will affect the accuracy of the system by denying the third required distance measurement (for 2D systems). The effects of false negatives can be mitigated if the agent is moving in space covered by the read ranges of multiple tags, by implementing correction algorithms (probabilistic distance-aware models) which function under the principle that there is a minimum travel time between the nodes and a minimum number of tags a traveler must encounter, effectively assuming the presence of a node if it has been too long since the last tag was encountered (Baba, Lu, Pedersen, & Xie, 2013).
• False-positive readings, where the reader detects a tag located at a distance greater than its maximum read distance, as specified by its manufacturer.
### Noise
Data obtained by a wireless positioning device, such as a GPS or, as in this case, the RFID reader, contains noise that needs to be removed prior to feeding the data to the model. This task is often carried out algorithmically by the Kalman filter (Kálmán, 1960), a mathematical tool that is used in cases where the input data contains statistically independent noise that needs to be removed. Kalman filters are widely used in signal processing (Welch & Bishop, 2001) and function under a “predict-correct” algorithm. In the predict phase, the state of the system and its error covariance are projected one step ahead, and in the correct phase, the estimation corrector is applied to the real-time measurement. As a result, the data feed is smoothed, as shown in the diagram below:
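A scalar predict-correct sketch of the same loop (the noise values r and q are illustrative assumptions, not tuned for any particular receiver):

```python
# Scalar predict-correct Kalman filter for smoothing a noisy measurement stream.
def kalman_smooth(measurements, r=4.0, q=0.01):
    x, p = measurements[0], 1.0      # initial state estimate and covariance
    smoothed = []
    for z in measurements:
        p = p + q                    # predict: project covariance one step ahead
        k = p / (p + r)              # correct: compute the Kalman gain
        x = x + k * (z - x)          # correct: update estimate with measurement z
        p = (1.0 - k) * p            # correct: update the error covariance
        smoothed.append(x)
    return smoothed
```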
As previously explained, the beneficiary’s location is estimated by the positioning system in relation to a reference position and later extrapolated to a common system of coordinates. In the case of RFID, two components of the system can play either role: that of the reference position, or that of the unknown position. These two components are the tags/transponders and the readers; either the position of the tag is estimated relative to the position of the reader or vice versa. Each of these implementations has strengths and weaknesses. The final choice will be based on the following considerations:
• Physical properties of the beneficiary of the positioning service. For example, if the objective is to increase the accuracy of GPS in driverless cars moving in dense urban environments, then it is possible for the cars to carry a somewhat heavy and bulky long-range UHF reader and perform ranging to fixed passive tags. On the other hand, for tracking of livestock in a farm, it would make more sense to install the readers on fixed positions and tag the animals with the much lighter passive tags.
• Power. Readers need to be connected to a sufficient power source, while tags are passive. As an example, a typical 9V battery can power a long-range UHF RFID reader like the one used in the present thesis for up to one hour of continuous operation.
• Cost. Passive tags are inexpensive (indicative price from USD 0,20 per piece), while long-range readers are significantly more expensive (indicative price USD 220,00 per unit). Installation of readers incurs additional costs as well.
# RFID in positioning applications
In this study, the concepts of positioning and location sensing are explored with a focus on RFID technology as applied in urban environments. RFID positioning is a simple and cheap way to identify the location of an object or a person, indoors and outdoors, alone or in combination with different positioning methods, such as satellite systems. The implementations are numerous and an asset to location-based services and GIS.
## 1. Theoretical foundation
### 1.1. RFID technology and principles
Radio Frequency Identification, generally abbreviated as RFID, finds its origin in military applications decades ago, when backscatter radiation was used to identify hostile aircrafts during World War II. The technology has evolved since then, enabling more complicated implementations, such as supply chain monitoring, product labeling, and item tracking. Currently deployed RFID systems are generally focused on the real-time tracing, tracking and monitoring, with numerous implementations in logistics, inventory management, supply chain and access control, to name a few. The key features of the technology have helped to find its place in industrial and urban settings: being contactless, no line of sight communication, and automation that does not require any human intervention (Goshey, 2008). RFID uses wireless radio frequency (RF) communication technology to interlink remote devices, usually with a unique ID (UID), and transfer data among them for further processing.
The following four components are found in a basic RFID system:
• Transponder, or tag
• Antenna, which is attached to the reader
• Reader, which energizes and communicates with the tags through the antenna
• Reader interface layer, which takes the role of middleware and runs tasks like the filtering the readings, keeping logs and forwarding the read data to the next level of analysis.
RFID tags are found in the following three types: passive, active and semi-active. This categorization refers to the energy requirements and the method of energy consumption of the tag. Active tags are powered entirely by a battery, while passive tags are powered exclusively by the electromagnetic waves emitted by the reader. The third category of semi-active tags includes transponders that use an internal power source (battery) only for their internal functions, like storing data to memory and microprocessing; the communication between tag and reader is done by consuming energy provided by the reader, similarly to the entirely passive tags. Passive tags can be very small and cheap; however, the maximum read distance is shorter than that of the active type. On the other hand, active tags can transmit to the reader from a longer distance, but the battery has a limited life (Shikada, Shiraishi, & Takeuchi, 2012).
The passive communication method generally involves three main phases of operation, as described below:
• Charging phase: energy is transmitted by the reader towards the tag, which in turn collects it and temporarily stores it in its circuitry.
• Reading/communication phase: the tag uses the stored energy to transmit data back to the reader.
• Discharging: this phase is necessary in some systems where not all of the absorbed energy is consumed after the reading operation. After the discharge, the tag is reset and ready for a new reading. Discharging phase may require significantly more time to complete than the reading phase, which must be taken into account when designing a RFID-based positioning system.
Tags and readers communicate with each other mainly by either inductive coupling or by electromagnetic backscatter coupling (electromagnetic waves). Inductive coupling is used for RFID systems operating at frequency bands LF and HF (Table 1), while electromagnetic backscatter is used for higher frequencies.
The frequencies that RFID systems use to communicate range from the Low Frequency (LF) part of the spectrum (<135 kHz) to microwaves (2.45–5.8 GHz). Water and other nonconductive substances significantly absorb frequencies around 1 GHz, so this frequency range is not used (Finkenzeller, 2010). Several properties are dependent upon the operation frequency, such as the read range, the penetration of the signal and the data transfer rate. Table 1 gives an overview of different frequency band characteristics.
Higher operation frequencies are generally related to higher data transfer rates. Applications of tracking objects moving at a high speed, such as a speeding car or a train wagon, require relatively high data rates, otherwise the tag will not be at a reading distance from the transceiver long enough. However, data rate is not the only parameter affecting the maximum tracking speed, since there are other factors, such as the necessary charging time for the passive transmission methods. In practice, the communication can take nine times longer than the theoretical limit of the data transfer, according to (Chon, Jun, Jung, & An, 2004), who showed that a reader moving at a speed of 165 km/h can read 128 bits from a road tag as long as the read range is greater than or equal to 81 cm.
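As a back-of-the-envelope check of these figures (our arithmetic, not from the cited paper, assuming the tag is readable while within 81 cm of the reader on either side): at 165 km/h ≈ 45,8 m/s, the tag stays in range for roughly $2\cdot 0,81\,\mathrm{m}\,/\,45,8\,\mathrm{m/s}\approx 35\,\mathrm{ms}$, so transferring 128 bits requires an effective rate of at least $128\,\mathrm{bits}\,/\,0,035\,\mathrm{s}\approx 3,7\,\mathrm{kbps}$; allowing for the ninefold practical overhead, a nominal rate of roughly 33 kbps.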
Of particular importance to a GIS-based implementation of an RFID system are the characteristics regarding memory and data transfer of the communication path tag-reader. RFID tags come in a variety of memory sizes, from 16 bytes to 64 kB, using ferroelectric random access memory (FRAM) (Fujitsu, 2014).
### 1.2. Location-based services
In the context of the present project and geography in general, positioning refers to the act of a localization system and its relation to a location-based service (LBS), which is a product of combination of contemporary informatics, the Internet, and geography (Figueiras & Frattasi, 2010). An LBS is an informatics service that serves its user in a manner relevant to their location. A number of definitions have been given to the LBS, all relating to a, chiefly mobile, user and their interaction with technologies of informatics.
Examples of LBS are (Kolodziej & Hjelm, 2006):
• Intelligent information management in Wi-Fi hotspots, where connected users are presented with (or are blocked from accessing) information relevant to their location, such as ads in airports and blocking of certain websites, like social media and other social information sharing sites, in military bases.
• Notifications to visitors of exhibitions, according to the kiosk they are closer to.
• Tracking of newborns in maternity clinics using small RFID tags, to eliminate baby stealing.
• Emergency crews share their positions so than an unfortunate event can be handled more effectively.
In all of the examples above, the key input for the LBS is of course the location of the users, which is determined by various means. Of particular importance is the meaning of the term “location”, which generally refers to a physical place but it can also be associated with non-tangible concepts. Physical locations, relevant to LBS implementations, are subcategorized by (Küpper, 2005) as follows:
• Descriptive locations, which are referenced to with the help of elements of nature (e.g. “next to that rock”)
• Spatial locations, which are the positions in space defined with the help of a system of coordinates (Euclidean, polar, etc.)
• Network locations, ‘virtual’ places in a network topology, as defined by a unique address (IP address, etc.)
A distinction is made between physical and virtual locations, the latter being locations in a virtual system, like a video game, a chat room, an instant messaging (IM) application (Küpper, 2005). However, the aforementioned examples generally still point to a specific network location: a website is stored on a server with a specific IP address, translated by a domain name system (DNS) to a human-friendly Uniform Resource Locator (URL).
### 1.3. Fundamentals of positioning
#### 1.3.1. Cell ID
The term refers to cell identification. In this case, the position is identified according to the location of the cell the object of interest is 'connected' to. This approach is generally used in combination with others; for example, in an RFID positioning system the reader's ID number would be the Cell ID, whereas in a WiFi network the MAC address of the wireless module would be the Cell ID. In theory, this method is used for object tracking with an RFID system in symbolic space, where no coordinate system is used per se; tracking takes place in the form of true-false logical functions, for example "if the tag T is in the area of coverage of reader R at time t, then the function F is true, otherwise (else) false" (Kang, Kim, & Li, 2010). A minimal sketch of such a test is shown below.
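A minimal sketch of that true/false containment test; all identifiers are illustrative and not taken from any cited system:

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

// Symbolic-space tracking: no coordinates, only "tag T is inside the
// coverage cell of reader R" style predicates.
using TagId    = std::string;
using ReaderId = std::string;

// Current snapshot of which tags each reader detects.
std::map<ReaderId, std::set<TagId>> detections;

// The predicate F from the text: true iff reader R sees tag T right now.
bool isTagInCell(const ReaderId& r, const TagId& t)
{
    const auto it = detections.find(r);
    return it != detections.end() && it->second.count(t) > 0;
}

int main()
{
    detections["reader-12"] = {"tag-A", "tag-B"};
    std::cout << std::boolalpha
              << isTagInCell("reader-12", "tag-A") << "\n"   // true
              << isTagInCell("reader-12", "tag-C") << "\n";  // false
    return 0;
}
```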
#### 1.3.2. Signal strength
Radio signals propagate in space but, due to fading caused by destructive propagation effects such as signal attenuation, shadowing, scattering and diffraction, they lose strength. The loss of power is a function of the distance between the transmitter and the receiver; it is therefore possible to calculate how far the tracked object is from the transmitter based on the Received Signal Strength (RSS), expressed in dB, or the Received Signal Strength Index (RSSI). However, signal-strength-based positioning can be very inaccurate compared to the techniques below, because it is very difficult to calibrate and because the signal strength is affected by many factors, distance being only one of them. A sketch of the distance inversion follows.
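One common way to invert RSS into a distance is the log-distance path-loss model; the reference power and the path-loss exponent below are illustrative calibration values, not values from any cited system:

```cpp
#include <cmath>
#include <cstdio>

// Log-distance path-loss model:
//   RSS(d) = RSS(d0) - 10 * n * log10(d / d0)
// Inverting it gives a (noisy) distance estimate from a measured RSS.
double distanceFromRss(double rssDbm, double rssAtRefDbm,
                       double refDistM, double pathLossExp)
{
    return refDistM * std::pow(10.0, (rssAtRefDbm - rssDbm) / (10.0 * pathLossExp));
}

int main()
{
    const double rssAtRef = -40.0;  // dBm measured at 1 m -- assumed calibration value
    const double n        = 2.5;    // path-loss exponent, indoor-ish -- assumed
    std::printf("RSS -60 dBm -> %.2f m\n",
                distanceFromRss(-60.0, rssAtRef, 1.0, n));  // ~6.3 m
    return 0;
}
```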
#### 1.3.3. Time of arrival
The principle of this method is the fact that electromagnetic radiation propagates in space at a finite rate, equal to the speed of light c. Very accurate clocks must be used, which set timestamps when a signal is received at the tracked object. By knowing the propagation speed and the time it took the signal to reach the destination (receiver), it is possible to calculate the distance. In this method, however, the issue of clock granularity arises, which creates the need for special mechanisms to correct errors caused by the discrete operation of the clock, and therefore of the timestamp operation, which advances in ticks at a specific frequency.
The position of the tracked object is estimated by triangulating the distance measurements through the technique of lateration. In a two-dimensional space, three distance measurements are required, while in three dimensions four measurements are required. The location of the receiver is at the point where the three circles (2D) or four spheres (3D) intersect, as illustrated in Image 1.
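A sketch of 2D lateration from three range measurements, linearized by subtracting the first circle equation from the other two; the anchor coordinates and ranges are made-up values:

```cpp
#include <cstdio>

struct Point { double x, y; };

// 2D lateration: intersecting three range circles. Subtracting the first
// circle equation from the other two yields a 2x2 linear system, solved
// here with Cramer's rule.
Point trilaterate(const Point a[3], const double d[3])
{
    double A[2][2], b[2];
    for (int i = 1; i <= 2; ++i)
    {
        A[i-1][0] = 2.0 * (a[i].x - a[0].x);
        A[i-1][1] = 2.0 * (a[i].y - a[0].y);
        b[i-1]    = d[0]*d[0] - d[i]*d[i]
                  + a[i].x*a[i].x - a[0].x*a[0].x
                  + a[i].y*a[i].y - a[0].y*a[0].y;
    }
    const double det = A[0][0]*A[1][1] - A[0][1]*A[1][0];  // 0 if anchors are collinear
    return { (b[0]*A[1][1] - b[1]*A[0][1]) / det,
             (A[0][0]*b[1] - A[1][0]*b[0]) / det };
}

int main()
{
    const Point  anchors[3] = { {0.0, 0.0}, {10.0, 0.0}, {0.0, 10.0} };  // made-up anchors
    const double ranges[3]  = { 5.0, 8.0623, 6.7082 };  // ranges to the true point (3, 4)
    const Point  p = trilaterate(anchors, ranges);
    std::printf("estimated position: (%.2f, %.2f)\n", p.x, p.y);  // ~(3.00, 4.00)
    return 0;
}
```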
#### 1.3.4. Angle of arrival
The angle at which the signal leaves the transmitter and reaches the receiver is the key to this method. However, such a system requires high-cost directional antennas and is generally more complicated than the Time of Arrival method. The measurements of this method are used in the process of angulation, a kind of triangulation that calculates the position of the tracked object based on the angles of at least two signals transmitted from two different sources.
#### 1.3.5. Phase of Arrival (PoA)
The principle of this method is based on the difference in phase between the signal emitted by the reader and the signal backscattered by the tag being read, which is related to the distance to the backscattering device. This distance estimation technique is often used in combination with other techniques, such as AOA. It is possible to design a system with multiple frequency pairs that can estimate the location with higher accuracy (Zhang, Li, & Amin, 2010).
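In its simplest single-frequency form, the backscattered phase encodes the round-trip distance modulo a half-wavelength ambiguity; this is the standard relation, and resolving the integer $k$ is exactly what the multiple frequency pairs mentioned above are for:

$$\Delta\varphi = \frac{4\pi f}{c}\,d \pmod{2\pi} \quad\Rightarrow\quad d = \frac{c\,\Delta\varphi}{4\pi f} + k\,\frac{c}{2f},\qquad k \in \mathbb{Z}$$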
## 2. Positioning systems
The term positioning refers to a technique that can determine and share the location of living and inanimate objects continuously and in real-time. According to (Esri, 2006), positioning can be either static (determining a position on the earth by averaging the readings taken by a stationary antenna over a period of time) or kinematic (determining the position of an antenna on a moving object).
An LBS can deliver services and exchange data among static and moving users. Advanced location sensing mechanisms need to be deployed, such as indexing of moving objects that uses techniques to "exploit the volatility of the data values being indexed" (Jensen, Lin, & Ooi, 2008), which is a characteristic of objects that move into and out of the area covered by the LBS. In the example of a $B^x$-tree indexing of moving objects, a positioning system needs to be able to capture a position vector $\vec{x}$, a velocity vector $\vec{y}$ and a specific time value $t_u$ at which these inputs are valid.
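The point of capturing $(\vec{x}, \vec{y}, t_u)$ is that the index can extrapolate a linear motion model instead of storing a fix for every instant; a minimal sketch, with illustrative field names:

```cpp
#include <cstdio>

// Moving-object record as captured by a Bx-tree-style index:
// position and velocity vectors valid at reference time tu.
struct MovingObject2D
{
    double x, y;    // position at time tu
    double vx, vy;  // velocity at time tu
    double tu;      // timestamp at which the values above were valid
};

// Predicted position at query time t, assuming linear motion since tu.
void predict(const MovingObject2D& o, double t, double& px, double& py)
{
    px = o.x + o.vx * (t - o.tu);
    py = o.y + o.vy * (t - o.tu);
}

int main()
{
    const MovingObject2D obj = { 10.0, 20.0, 1.5, -0.5, 100.0 };
    double px, py;
    predict(obj, 104.0, px, py);
    std::printf("predicted at t=104: (%.1f, %.1f)\n", px, py);  // (16.0, 18.0)
    return 0;
}
```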
The best-known positioning technique is the one used by satellite navigation systems, where the satellite positions are known and the time the signal needs to propagate to a target object is measured. Assuming that the electromagnetic waves travel at the speed of light, the distance between the satellite and the target object is calculated by multiplying the propagation time by the speed of light. In reality, however, electromagnetic radiation experiences an atmospheric delay, defined as the reduction of its propagation speed when it passes through the ionosphere and the troposphere.
### 2.1. Outdoor location sensing
The technologies dominating outdoor positioning are mainly based on satellite systems, such as the Global Navigation Satellite Systems (GNSS). Such systems have now reached maturity and have a number of advantages: high precision, continuity, the ability to function regardless of weather conditions, near-real-time observation, and increased reliability. Owing to these advantages, GNSS are widely used in applications other than positioning and navigation as well, such as remote sensing (Shuanggen, Estel, & Feiqin, 2013). The most widely used satellite systems for global navigation and positioning are:
• GPS (USA)
• GLONASS (USSR, now Russia)
• Galileo (EU)
• Compass, aka BEIDOU (China), launched in 2007
Global Positioning System (GPS) finds its roots in the early 1960s, when a number of U.S. governmental organizations joined forces to develop a system that would provide navigation and positioning services primarily for military and secondarily for civilian use. At present, the American positioning system consists of a constellation of 24 operational satellites at an orbital radius of approximately 26 600 km, and has been used not only for positioning and navigation, but also for delivering precise timing.
The former USSR developed its own satellite positioning system, known as the Global Navigation Satellite System (GLONASS). The constellation of its satellites was completed in 1995, four years after the collapse of the Soviet Union in 1991. The project's operation was suspended for the rest of the 1990s, but the system is now operational with 24 satellites (Russian Federal Space Agency, 2016).
The European Space Agency's (ESA) ongoing GNSS project Galileo, named after the Italian astronomer Galileo Galilei, is a €5 billion project intended for civilian and commercial use. The system will become fully operational by 2020, but the initial services will be made available in late 2016. In full deployment, the system will consist of 24 operational satellites at 23 222 km altitude; it will provide basic, relatively low-accuracy services to everyone and advanced, high-precision services to paying customers (European Space Agency, 2015).
The fourth main GNSS is the Chinese BeiDou/COMPASS, scheduled to deploy by 2020. The system’s constellation will have 35 satellites, five of which in geostationary orbit, and will be offering basic services for civilian use and higher accuracy services for special uses, similarly to the systems mentioned earlier (BeiDou Navigation Satellite System, 2015).
### 2.2. Indoor positioning
Satellite technologies have well met the needs for determining the location in outdoor environments. However, they are not well suited for indoor areas because of the poor reception of satellite signals, which are greatly degraded inside of buildings. The task of positioning in such environments has been addressed by several techniques based on wireless technologies. Examples of such systems include sonars (audio waves), radio signal triangulation and beacons (electromagnetic waves) (Fernandes, Filipea, Costa, & Barroso, 2014), as well as infrared and physical contact methods (Youssef, 2008).
Indoor positioning faces significant challenges that have kept it far less widespread than the 'conventional' GNSSs. These challenges are (Gubi, et al., 2010):
• Lack of suitable indoor maps
• Limitations of the available technology
Indoor positioning implementations can be found in office buildings, warehouses, and factories. Examples of indoor location sensing technologies include infrared positioning, indoor GPS-based systems, Ultra Wide Band (UWB), Wireless Local Area Network (WLAN), and of course RFID positioning. The dominant RF-based methods, however, are RFID and WLAN. In an indoor environment, localization of an object is possible through tracking in two or three dimensions. If tracking takes place in a two-dimensional space but on multiple planes, like tracking people in a building with many floors, then the term 2.5-dimension (2.5D) can be used. The processing of the input data and the determination of the location can be implemented on a mobile device carried by the object or person being tracked (client device), or on a central unit (server), the location of which can be, theoretically, anywhere in the world. The position can be reported either symbolically or in coordinates, which in turn can be absolute (e.g. longitude, latitude, etc.) or relative (e.g. distance from a nearby point of reference).
### 2.3. Hybrid positioning systems
Hybrid positioning systems usually combine satellite location technology with ground-based systems in order to either increase the accuracy served by the satellite system or to provide location information for those areas where satellites cannot reach, such as indoors. The assisting technologies usually are mobile phone cell tower signals (GSM, LTE), WiFi, WiMAX, Bluetooth and others (AlterGeo, 2015).
RFID systems can be used to obtain indoor location information in such a way that, when combined with GNSS, they create a system of seamless positioning. The areas not covered by GNSS are not necessarily indoor areas; outdoor obstacles such as canopies can negatively affect satellite signal quality as well (Shikada, Shiraishi, & Takeuchi, 2012). An example of such a seamless system output is shown in Image 4. The light green points are obtained from the passive RFID system, while the red points come from GPS. The image, taken from (Shikada, Shiraishi, & Takeuchi, 2012), shows a problem of overlapping data for the positions where both the RFID system and the GPS feed the system with location data.
Hybrid positioning techniques can also be combining RFID and another technology (Bai, Wu, Wu, & Zhang, 2012), not necessarily aimed at indoor implementations only (Wen, 2010). Some examples of non-GNSS technologies that have been used to enhance the capabilities of RFID-based systems include ZigBee (a protocol used for Wireless Sensor Networks – WSN), Wi-Fi/WLAN, ultra wide band (UWB), infrared (IR), and ultrasonic. Such combinations can provide the user with increased accuracy and reliability.
## 3. RFID positioning
There are generally two ways of implementing RFID-based location sensing: fixed tags–moving reader, which implies that the object or person whose position is to be estimated carries a reader, and fixed reader(s)–moving tags, where the setup is the opposite. In addition, there are systems where readers and reference tags are stationary and a non-fixed tag is being tracked.
### 3.1. Fixed tags of known location – moving reader of unknown location
A two-dimensional space can be interpreted as a Cartesian coordinate system, where each point that belongs to the said plane can be specified by a pair of numbers, usually (x, y). The implementation described below mainly consists of a grid of passive RFID tags equally distributed over the area where the positioning takes place and a portable RFID reader which is carried by the person/object being tracked. A system like the one illustrated below can achieve a positioning accuracy of 50 cm if the tag spacing is 50 cm, the antenna is linearly polarized and the RFID transmission power is 18 dBm (Shiraishi, Komuro, Ueda, Kasai, & Tsuboi, 2008).
Each one of the installed tags has a unique ID number and its spatial location is known with a high accuracy. The spatial location and ID data are stored in a database; therefore, the memory requirement for the tags is low, as it only stores the ID number. The reader moves along the plane of tags at a distance, along the z axis.
As illustrated in Image 5, the reader moves over the (x,y) plane at a distance d. The interrogation field of the reader reaches the tags on the floor. As it moves, several tags are read. Experimental applications have verified that the closer the reader is to the tag plane, the better the location estimation. In addition, the reader detects significantly fewer tags if it is placed closer to the plane than the wavelength of the RFID system. For example, for tags operating at 950 MHz, the wavelength, and therefore the minimum recommended distance from the tag plane, is about 0.30 m (Shiraishi, Komuro, Ueda, Kasai, & Tsuboi, 2008).
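A quick check of that figure: $\lambda = c/f = (3\times10^{8}~\mathrm{m/s})/(950\times10^{6}~\mathrm{Hz}) \approx 0.316~\mathrm{m}$, which is the roughly 0.30 m minimum distance quoted above.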
After receiving the IDs of the detected tags, the position of the reader is estimated computationally. The easiest approach is to calculate the center of gravity of all the detected tags, but this method might not be the most appropriate because the reader might detect tags that are far off. Thus, a clustering approach can be more suitable: it can locate clusters of tags and disregard outliers. For this purpose, GIS software that can perform spatial statistics operations can be used; in particular, the Hot Spot Analysis (Getis-Ord Gi*) can locate the clustering (Esri). There is no reason to run global statistics first to identify whether clustering exists, since it is taken for granted that there is a cluster of detected tags. A sketch of the centroid estimate follows.
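A minimal sketch of the centroid estimate with a crude two-pass outlier filter (a stand-in for the Getis-Ord hot-spot route described above, not an implementation of it):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Tag { double x, y; };  // known grid coordinates of a detected tag

Tag centroid(const std::vector<Tag>& tags)
{
    Tag c{0.0, 0.0};
    for (const Tag& t : tags) { c.x += t.x; c.y += t.y; }
    c.x /= tags.size();
    c.y /= tags.size();
    return c;
}

// Two-pass estimate: compute a centroid, discard detections farther than
// maxDist from it (spurious far-off reads), then recompute the centroid.
Tag estimateReaderPosition(const std::vector<Tag>& detected, double maxDist)
{
    const Tag first = centroid(detected);
    std::vector<Tag> kept;
    for (const Tag& t : detected)
        if (std::hypot(t.x - first.x, t.y - first.y) <= maxDist)
            kept.push_back(t);
    return kept.empty() ? first : centroid(kept);
}

int main()
{
    // Four nearby tags on a 0.5 m grid plus one spurious far-off read.
    const std::vector<Tag> detected = {
        {1.0, 1.0}, {1.5, 1.0}, {1.0, 1.5}, {1.5, 1.5}, {6.0, 6.0} };
    const Tag p = estimateReaderPosition(detected, 2.0);
    std::printf("estimated reader position: (%.2f, %.2f)\n", p.x, p.y);  // (1.25, 1.25)
    return 0;
}
```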
The reader in an implementation of a fixed grid of passive RFID tags can be small enough to be carried by blind people. An example is a prototype called SmartVision (Fernandes, Filipea, Costa, & Barroso, 2014), aimed at providing location-based services for the blind. The prototype is based on the same principle as the example above: there is a base layer of RFID tags which marks the area where people can move, and several "layers" of points of interest (POI). The tags used still store only their ID information; whether a specific tag belongs to a POI layer is recorded in the GIS database. This way, people can navigate (or be informed about their position) on many levels, e.g. a position marked as 'safe' (not wet, for example, since the database can be updated in real time) and marked as 'room 5'.
It is possible, however, to use an RFID-based positioning system as above for linear location sensing; along a path, for example. Active RFID systems operating at high frequencies (microwave) are already being used for electronic toll collection on motorways. Though they can be used (and are used) for tracking of vehicles and road traffic monitoring, they do not provide 'position' information in the sense of the exact location of a vehicle on the road. Contrary to indoor positioning, the main challenge here is the moving speed of the tracked object (the car) and the selection of an RFID system with transponders able to communicate whilst moving at high speed. In addition, the reader and the middleware installed on the car must be able to handle a high data rate, given that many tags are read in a short time frame. Even if the tags only communicate their ID codes, the data adds up. In the experimental setting of (Chon, Jun, Jung, & An, 2004), a reader moving at 165 km/h needs 81 cm of travel distance to read 128 bits of data from a stationary tag; therefore, its read range has to be at least 81 cm.
### 3.2. Fixed reader(s) & ref. tags of known location – moving tags of unknown location
The implementation opposite of §3.1 is to set readers to a fixed and known position and estimate the position of RFID tags within their interrogation field. The techniques used for this purpose are the ones presented in §1.3.
## 4. Conclusions & further study
Implementing an RFID-based system for positioning, especially indoor positioning, can have some advantages over competitive systems. RFID generally is a simple, flexible, portable and low-cost system that can provide identification, in a supply chain for example, and location information at the same time. However, the communication is one-way when dealing with passive tags and undesirable multipath effects are observed (Bai, Wu, Wu, & Zhang, 2012).
While the positioning techniques mentioned above are mainly focused on either 2D or 2.5D, the possibilities offered by a 3D positioning system are significant, especially on work sites. An example of such a location sensing system is given by (Ko, 2013) and provides users with an alternative to the RFID-based symbolic tracking methods, such as the ones in a supply chain.
## 5. References
• Ahmed, A. (2015, Jan-Apr). Role of GIS, RFID and handheld computers in emergency management: an exploratory case study analysis. JISTEM – Journal of Information Systems and Technology Management, 12(1), 3-28.
• AlterGeo. (2015). AlterGeo – About Us. Retrieved January 11, 2016, from AlterGeo: http://platform.altergeo.ru/index.php?mode=about
• Bai, Y. B., Wu, S., Wu, H., & Zhang, K. (2012). Overview of RFID-Based Indoor Positioning Technology. Review, RMIT University, School of Mathematical and Geospatial Sciences, Melbourne, Australia.
• BeiDou Navigation Satellite System. (2015). BeiDou Navigation Satellite System. Retrieved January 2016, from http://en.beidou.gov.cn/introduction.html
• Chon, H. D., Jun, S., Jung, H., & An, S. W. (2004). Using RFID for Accurate Positioning. Journal of Global Positioning Systems, 3(1-2), 32-39.
• Esri. (2006). A to Z GIS: An Illustrated Dictionary of Geographic Information Systems.
• Esri. (n.d.). How Hot Spot Analysis: Getis-Ord Gi* (Spatial Statistics) works. Retrieved 2016, from Esri help: http://resources.esri.com/help/9.3/arcgisengine/java/gp_toolref/spatial_statistics_tools/how_hot_spot_analysis_colon_getis_ord_gi_star_spatial_statistics_works.htm
• European Space Agency. (2015, December 18). What is Galileo? Retrieved January 4, 2016, from The European Space Agency – Galileo Navigation: http://www.esa.int/Our_Activities/Navigation/The_future_-_Galileo/What_is_Galileo
• Fernandes, H., Filipea, V., Costa, P., & Barroso, J. (2014). Location based services for the blind supported by RFID technology. Procedia Computer Science. 27, pp. 2-8. Elsevier.
• Figueiras, J., & Frattasi, S. (2010). Mobile Positioning and Tracking: From Conventional to Cooperative Techniques. Chichester, West Sussex, UK: John Wiley & Sons Ltd.
• Finkenzeller, K. (2010). RFID Handbook (Third Edition ed.). (D. Müller, Trans.) Chichester, West Sussex, UK: Wiley.
• Fujitsu. (2014). Datasheet – Fujitsu RFID and Sensor Solutions 64Kbyte FRAM Metal Mount RFID Tag. Retrieved January 2016, from Fujitsu: https://www.fujitsu.com/jp/group/frontech/documents/en/solutions/business-technology/intelligent-society/rfid/ait64k/brochure-ait64k.pdf
• Goshey, M. (2008). Radio Frequency Identification (RFID). In H. X. Shashi Shekhar (Ed.), Encyclopedia of GIS (pp. 943-949). Springer US.
• Gubi, K., Wasinger, R., Fry, M., Kay, J., Kuflik, T., & Kummerfeld, B. (2010). Towards a Generic Platform for Indoor Positioning using Existing Infrastructure and Symbolic Maps. User Modelling, Adaptation & Personalisation (UMAP 2010): Workshop on Architectures and Building Blocks of Web-Based User-Adaptive Systems, (pp. 1-6).
• Jensen, C. S., Lin, D., & Ooi, B. C. (2008). Indexing of Moving Objects, Bx-Tree. In Encyclopedia of GIS (pp. 512-517). Springer US.
• Küpper, A. (2005). Location-Based Services: Fundamentals and Operation. Wiley.
• Kang, H.-Y., Kim, J.-S., & Li, K.-J. (2010). sTrack: Tracking in Indoor Symbolic Space with RFID Sensors. Pusan National University, Department of Computer Science. Pusan, South Korea: Pusan National University.
• Ko, C.-H. (2013). 3D-Web-GIS RFID Location Sensing System for Construction Objects. The Scientific World Journal, 2013, 8.
• Kolodziej, K. W., & Hjelm, J. (2006). Local Positioning Systems: LBS Applications and Services. CRC Press, Taylor & Francis Group.
• Manzoon, F. (2010). Passive RFID-based indoor positioning system, an algorithmic approach. Program for the IEEE International Conference on RFID-Technology and Applications, 17-19 June 2010 (pp. 112-117). Guangzhou, China: IEEE.
• Russian Federal Space Agency. (2016, January 5). GLONASS constellation status. Korolyov, Moscow Oblast, Russia: Federal Space Agency, Information-analytical Centre.
• Shikada, M., Shiraishi, S., & Takeuchi, S. (2012). The method to obtain position using GNSS and RFID for realization of indoor and outdoor seamless positioning. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. XXXIX-B4, pp. 45-50. Melbourne: XXII ISPRS Congress, 25 August – 01 September 2012, Melbourne, Australia.
• Shiraishi, T., Komuro, N., Ueda, H., Kasai, H., & Tsuboi, T. (2008). Indoor Location Estimation Technique using UHF band RFID. Information Networking, 2008. ICOIN 2008. International Conference on 23-25 Jan. 2008 (pp. 1-5). Tokyo: IEEE.
• Shuanggen, J., Estel, C., & Feiqin, X. (2013). GNSS Remote Sensing: Theory, Methods and Applications. Springer.
• Wen, W. (2010). An intelligent traffic management expert system with RFID technology. Expert Systems with Applications, 37, 3024-3035.
• Youssef, M. (2008). Indoor localization. In S. Shekhar, & H. Xiong (Eds.), Encyclopedia of GIS (pp. 547-552).
• Yu, K., Sharp, I., & Guo, Y. J. (2009). Ground-Based Wireless Positioning. John Wiley & Sons, Ltd.
• Zhang, Y., Li, X., & Amin, M. (2010). Principles and Techniques of RFID Positioning (ch 15). In M. Bolic, D. Simplot-Ryl, I. Stojmenovic, M. Bolic, D. Simplot-Ryl, & I. Stojmenovic (Eds.), RFID Systems, Research Trends and Challenges. John Wiley.
Categories: Localization Tags: Tags: , | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 34, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40864691138267517, "perplexity": 1731.537765335199}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141189030.27/warc/CC-MAIN-20201126230216-20201127020216-00295.warc.gz"} |
http://www.erisian.com.au/wordpress/2004/04/18 | ## Archive for 2004/04/18
### Ripped Off
The Advanced Algorithms class I took never looked like this. I wanna be a lawyer. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9815629124641418, "perplexity": 6420.379952226305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164069141/warc/CC-MAIN-20131204133429-00022-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://homework.cpm.org/category/CCI_CT/textbook/calc/chapter/8/lesson/8.1.6/problem/8-59 | ### Home > CALC > Chapter 8 > Lesson 8.1.6 > Problem8-59
8-59.
An 'ideal' situation would have infinitely many dolls with infinitely small thicknesses. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9255682229995728, "perplexity": 19316.545996172852}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402101163.62/warc/CC-MAIN-20200930013009-20200930043009-00308.warc.gz"} |
https://www.physicsforums.com/threads/hamiltonian-in-landau-gauge.848816/ | # Homework Help: Hamiltonian in Landau gauge
1. Dec 18, 2015
### shinobi20
1. The problem statement, all variables and given/known data
Define $n=(x + iy)/(\sqrt{2}\,L)$ and $\tilde{n}=(x - iy)/(\sqrt{2}\,L)$.
Also, $\partial_n = L(\partial_x - i\,\partial_y)/\sqrt{2}$ and $\partial_{\tilde{n}} = L(\partial_x + i\,\partial_y)/\sqrt{2}$,
with $\partial_n=\partial/\partial n$, $\partial_x=\partial/\partial x$, $\partial_y=\partial/\partial y$, and $L$ being the magnetic length.
$a=\frac{1}{2}\tilde{n}+\partial_n$ and $a^{\dagger}=\frac{1}{2}n-\partial_{\tilde{n}}$
$a$ and $a^{\dagger}$ are the lowering and raising operators of quantum mechanics.
Show that $H=\hbar\omega_c\,(a^{\dagger}a + \frac{1}{2})$
2. Relevant equations
$L=\sqrt{\hbar c/eB}$ (magnetic length), $\omega_c=eB/mc$ (cyclotron frequency), $e$ for the charge of the electron
$H = P_x^2/2m + (P_y^2 + eBx/c)^2/2m$
3. The attempt at a solution
I have tried to find $x, y, \partial_x, \partial_y$ in terms of $n, \tilde{n}, \partial_n, \partial_{\tilde{n}}$. But I ended up getting only some of the right terms, not all. Is my first step wrong? Any suggestions?
2. Dec 19, 2015
### blue_leaf77
Should the exponent "2" of $P_y$ be there?
3. Dec 19, 2015
### shinobi20
Sorry, it was a typo. Do you have any suggestions?
4. Dec 19, 2015
### blue_leaf77
You should post your initial attempt before we can discuss further. In particular, how do the old variables look in terms of the new ones?
5. Dec 19, 2015
### shinobi20
This is what I've done so far. My problem is that everything is there except for the ½. I wrote $\partial$ for $\partial_n$ and $\bar{\partial}$ for $\partial_{\tilde{n}}$.
Attached: IMG20151220132914.jpg (45.1 KB), a photo of the attempt.
6. Dec 20, 2015
### blue_leaf77
According to this link https://en.wikipedia.org/wiki/Landau_quantization, the gauge you should be using is the symmetric gauge, and hence the original Hamiltonian should be different from the one you are using. For instance, in the Landau gauge, the operator $y$ is not present.
7. Dec 20, 2015
### shinobi20
Why can't I show it using the Landau gauge? The choice is just for simplification of computation right?
8. Dec 20, 2015
### blue_leaf77
$x$ and $y$ appear symmetrically in the gauge transformation, but they do not in the original Hamiltonian.
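For reference, a sketch of where the symmetric gauge leads, assuming the convention $\vec{A} = \frac{B}{2}(-y, x, 0)$ for electron charge $-e$, with $L^2=\hbar c/eB$:

$$H=\frac{1}{2m}\left[\left(P_x-\frac{eB}{2c}\,y\right)^2+\left(P_y+\frac{eB}{2c}\,x\right)^2\right]=\frac{P_x^2+P_y^2}{2m}+\frac{e^2B^2}{8mc^2}\left(x^2+y^2\right)+\frac{eB}{2mc}L_z$$

Using $[\partial_n,n]=[\partial_{\tilde{n}},\tilde{n}]=1$ one finds $a^{\dagger}a=\frac{1}{4}n\tilde{n}+\frac{1}{2}\left(n\partial_n-\tilde{n}\partial_{\tilde{n}}\right)-\partial_{\tilde{n}}\partial_n-\frac{1}{2}$, and since $n\tilde{n}=(x^2+y^2)/2L^2$, $n\partial_n-\tilde{n}\partial_{\tilde{n}}=L_z/\hbar$ and $\partial_{\tilde{n}}\partial_n=\frac{L^2}{2}\nabla^2$, the three terms of $\hbar\omega_c\left(a^{\dagger}a+\frac{1}{2}\right)$ reproduce the three terms above.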
9. Dec 20, 2015
### shinobi20
Oh I see that, then I'll try it again using the symmetric gauge. Thanks! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9143356084823608, "perplexity": 3290.9098596867198}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676588972.37/warc/CC-MAIN-20180715203335-20180715223335-00368.warc.gz"} |
https://waseda.pure.elsevier.com/en/publications/comparative-study-on-the-thermal-behavior-of-structural-concretes | # Comparative study on the thermal behavior of structural concretes of sodium-cooled fast reactor: Siliceous concretes
Shin Kikuchi*, Nobuyoshi Koga, Atsushi Yamazaki
*Corresponding author for this work
Research output: Contribution to journal › Article › peer-review
9 Citations (Scopus)
## Abstract
Thermal behaviors of two different siliceous concretes used in a sodium-cooled fast reactor were comparatively investigated in a temperature range from room temperature to 1900 K for obtaining fundamental information required for establishing a plant simulation system for safety assessment under a postulated accidental condition. Silica crystals and Portland cement were identified as the major component of the aggregate and cement portions of the concrete samples, respectively. The thermal decomposition of the cement portion exhibited partially overlapping multistep reaction comprising the thermal dehydration, thermal decomposition processes of Ca(OH)2 and carbonate compounds including CaCO3. TG–DTG curves recorded for the multistep thermal decomposition process of the cement portion were analyzed using the kinetic deconvolution analysis, and the contributions and kinetic parameters of each reaction step were determined. The kinetics of comparable reaction steps between two samples were practically identical, while the difference between the samples was found in the content ratio of Ca(OH)2/CaCO3. The melting behavior of the siliceous concretes was revealed by the complementary interpretation of TG–DTA curves and the morphological observation of the sample heated to different temperatures. The softening and melting behaviors of the siliceous concretes initially occurred in the thermal decomposition product of the cement portion at a temperature range of 1400–1600 K. The subsequent melting behavior of the aggregate portion that occurs at a higher temperature was different between the samples, owing to the different compositions of the aggregates and the possible interaction of the aggregate with the molten cement portion.
Original language: English
Pages (from-to): 1211-1224
Number of pages: 14
Journal: Journal of Thermal Analysis and Calorimetry
Volume: 137
Issue number: 4
DOI: https://doi.org/10.1007/s10973-019-08045-7
Publication status: Published - 2019 Aug 30
## Keywords
• Melting
• Siliceous concrete
• Thermal analysis
• Thermal behavior
• Thermal decomposition
## ASJC Scopus subject areas
• Condensed Matter Physics
• Physical and Theoretical Chemistry
## Fingerprint
Dive into the research topics of 'Comparative study on the thermal behavior of structural concretes of sodium-cooled fast reactor: Siliceous concretes'. Together they form a unique fingerprint. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.834916889667511, "perplexity": 7079.73094629115}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949355.52/warc/CC-MAIN-20230330163823-20230330193823-00025.warc.gz"} |
https://mrpt.ual.es/reference/1.0.2/_c_range_bearing_k_f_s_l_a_m2_d_8h_source.html | Main MRPT website > C++ reference
CRangeBearingKFSLAM2D.h
/* +---------------------------------------------------------------------------+
   | The Mobile Robot Programming Toolkit (MRPT)                                 |
   |                                                                             |
   | http://www.mrpt.org/                                                        |
   |                                                                             |
   | Copyright (c) 2005-2013, Individual contributors, see AUTHORS file          |
   | Copyright (c) 2005-2013, MAPIR group, University of Malaga                  |
   | Copyright (c) 2012-2013, University of Almeria                              |
   | All rights reserved.                                                        |
   |                                                                             |
   | Redistribution and use in source and binary forms, with or without          |
   | modification, are permitted provided that the following conditions are      |
   | met:                                                                        |
   |    * Redistributions of source code must retain the above copyright         |
   |      notice, this list of conditions and the following disclaimer.          |
   |    * Redistributions in binary form must reproduce the above copyright      |
   |      notice, this list of conditions and the following disclaimer in the    |
   |      documentation and/or other materials provided with the distribution.   |
   |    * Neither the name of the copyright holders nor the                      |
   |      names of its contributors may be used to endorse or promote products   |
   |      derived from this software without specific prior written permission.  |
   |                                                                             |
   | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS         |
   | 'AS IS' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED   |
   | TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR  |
   | PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE   |
   | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL  |
   | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR  |
   | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)          |
   | HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,         |
   | STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN    |
   | ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE             |
   | POSSIBILITY OF SUCH DAMAGE.                                                 |
   +---------------------------------------------------------------------------+ */
#ifndef CRangeBearingKFSLAM2D_H
#define CRangeBearingKFSLAM2D_H

#include <mrpt/opengl.h>

#include <mrpt/utils/bimap.h>

#include <mrpt/poses/CPoint2D.h>
#include <mrpt/slam/CLandmark.h>
#include <mrpt/slam/CSimpleMap.h>

namespace mrpt
{
namespace slam
{
	using namespace mrpt::bayes;
	using namespace mrpt::poses;

	/** An implementation of EKF-based SLAM with range-bearing sensors, odometry, and a 2D (+heading) robot pose, and 2D landmarks.
	  *  The main method is "processActionObservation" which processes pairs of action/observation.
	  *
	  *  The following pages describe front-end applications based on this class:
	  *   - http://www.mrpt.org/Application:2d-slam-demo
	  *   - http://www.mrpt.org/Application:kf-slam
	  *
	  * \sa CRangeBearingKFSLAM  \ingroup metric_slam_grp
	  */
	class CRangeBearingKFSLAM2D :
		public bayes::CKalmanFilterCapable<3 /* x y yaw */, 2 /* range yaw */, 2 /* x y */, 3 /* Ax Ay Ayaw */>
		// <size_t VEH_SIZE, size_t OBS_SIZE, size_t FEAT_SIZE, size_t ACT_SIZE, typename kftype = double>
	{
	public:
		CRangeBearingKFSLAM2D();           //!< Default constructor
		virtual ~CRangeBearingKFSLAM2D();  //!< Destructor
		void reset();  //!< Reset the state of the SLAM filter: The map is emptied and the robot put back to (0,0,0).

		/** Process one new action and observations to update the map and robot pose estimate. See the description of the class at the top of this page.
		  * \param action May contain odometry
		  * \param SF The set of observations, must contain at least one CObservationBearingRange
		  */
		void processActionObservation(
			CActionCollectionPtr &action,
			CSensoryFramePtr     &SF );

		/** Returns the complete mean and cov.
		  * \param out_robotPose The mean & 3x3 covariance matrix of the robot 2D pose
		  * \param out_landmarksPositions One entry for each of the M landmark positions (2D).
		  * \param out_landmarkIDs Each element[index] (for indices of out_landmarksPositions) gives the corresponding landmark ID.
		  * \param out_fullState The complete state vector (3+2M).
		  * \param out_fullCovariance The full (3+2M)x(3+2M) covariance matrix of the filter.
		  * \sa getCurrentRobotPose
		  */
		void getCurrentState(
			CPosePDFGaussian      &out_robotPose,
			std::vector<TPoint2D> &out_landmarksPositions,
			std::map<unsigned int,CLandmark::TLandmarkID> &out_landmarkIDs,
			CVectorDouble         &out_fullState,
			CMatrixDouble         &out_fullCovariance
			) const;

		/** Returns the mean & 3x3 covariance matrix of the robot 2D pose.
		  * \sa getCurrentState
		  */
		void getCurrentRobotPose( CPosePDFGaussian &out_robotPose ) const;

		/** Returns a 3D representation of the landmarks in the map and the robot 3D position according to the current filter state.
		  * \param out_objects
		  */
		void getAs3DObject( mrpt::opengl::CSetOfObjectsPtr &outObj ) const;

		/** Load options from a ini-like file/text
		  */
		void loadOptions( const mrpt::utils::CConfigFileBase &ini );

		/** The options for the algorithm
		  */
		struct TOptions : public utils::CLoadableOptions
		{
			/** Default values
			  */
			TOptions();

			/** Load from a config file/text
			  */
			void loadFromConfigFile(
				const mrpt::utils::CConfigFileBase &source,
				const std::string &section);

			/** This method must display clearly all the contents of the structure in textual form, sending it to a CStream.
			  */
			void dumpToTextStream(CStream &out) const;

			vector_float stds_Q_no_odo;  //!< A 3-length vector with the std. deviation of the transition model in (x,y,phi) used only when there is no odometry (if there is odo, its uncertainty values will be used instead); x y: In meters, phi: radians (but in degrees when loading from a configuration ini-file!)
			float std_sensor_range, std_sensor_yaw;  //!< The std. deviation of the sensor (for the matrix R in the kalman filters), in meters and radians.
			float quantiles_3D_representation;  //!< Default = 3
			bool  create_simplemap;  //!< Whether to fill m_SFs (default=false)

			// Data association:
			TDataAssociationMethod data_assoc_method;
			TDataAssociationMetric data_assoc_metric;
			double data_assoc_IC_chi2_thres;  //!< Threshold in [0,1] for the chi2square test for individual compatibility between predictions and observations (default: 0.99)
			TDataAssociationMetric data_assoc_IC_metric;  //!< Whether to use mahalanobis (->chi2 criterion) vs. Matching likelihood.
			double data_assoc_IC_ml_threshold;  //!< Only if data_assoc_IC_metric==ML, the log-ML threshold (Default=0.0)

		};

		TOptions options;  //!< The options for the algorithm


		/** Save the current state of the filter (robot pose & map) to a MATLAB script which displays all the elements in 2D
		  */
		void saveMapAndPath2DRepresentationAsMATLABFile(
			const std::string &fil,
			float stdCount = 3.0f,
			const std::string &styleLandmarks = std::string("b"),
			const std::string &stylePath      = std::string("r"),
			const std::string &styleRobot     = std::string("r") ) const;


		/** Information for data-association:
		  * \sa getLastDataAssociation
		  */
		struct TDataAssocInfo
		{
			TDataAssocInfo() :
				Y_pred_means(0,0),
				Y_pred_covs(0,0)
			{
			}

			void clear() {
				results.clear();
				predictions_IDs.clear();
				newly_inserted_landmarks.clear();
			}

			// Predictions from the map:
			CMatrixTemplateNumeric<kftype> Y_pred_means, Y_pred_covs;
			vector_size_t predictions_IDs;

			/** Map from the 0-based index within the last observation and the landmark 0-based index in the map (the robot-map state vector)
			    Only used for stats and so. */
			std::map<size_t,size_t> newly_inserted_landmarks;

			// DA results:
			TDataAssociationResults results;
		};

		/** Returns a read-only reference to the information on the last data-association */
		const TDataAssocInfo & getLastDataAssociation() const {
			return m_last_data_association;
		}

	protected:

		/** @name Virtual methods for Kalman Filter implementation
			@{
		 */

		/** Must return the action vector u.
		  * \param out_u The action vector which will be passed to OnTransitionModel
		  */
		void OnGetAction( KFArray_ACT &out_u ) const;

		/** Implements the transition model \f$ \hat{x}_{k|k-1} = f( \hat{x}_{k-1|k-1}, u_k ) \f$
		  * \param in_u The vector returned by OnGetAction.
		  * \param inout_x At input has \f[ \hat{x}_{k-1|k-1} \f], at output must have \f$ \hat{x}_{k|k-1} \f$.
		  * \param out_skip Set this to true if for some reason you want to skip the prediction step (to do not modify either the vector or the covariance). Default:false
		  */
		void OnTransitionModel(
			const KFArray_ACT &in_u,
			KFArray_VEH       &inout_x,
			bool              &out_skipPrediction
			) const;

		/** Implements the transition Jacobian \f$ \frac{\partial f}{\partial x} \f$
		  * \param out_F Must return the Jacobian.
		  *  The returned matrix must be \f$ V \times V \f$ with V being either the size of the whole state vector (for non-SLAM problems) or VEH_SIZE (for SLAM problems).
		  */
		void OnTransitionJacobian( KFMatrix_VxV &out_F ) const;

		/** Only called if using a numeric approximation of the transition Jacobian, this method must return the increments in each dimension of the vehicle state vector while estimating the Jacobian.
		  */
		void OnTransitionJacobianNumericGetIncrements(KFArray_VEH &out_increments) const;


		/** Implements the transition noise covariance \f$ Q_k \f$
		  * \param out_Q Must return the covariance matrix.
		  *  The returned matrix must be of the same size than the jacobian from OnTransitionJacobian
		  */
		void OnTransitionNoise( KFMatrix_VxV &out_Q ) const;

		/** This is called between the KF prediction step and the update step, and the application must return the observations and, when applicable, the data association between these observations and the current map.
		  *
		  * \param out_z N vectors, each for one "observation" of length OBS_SIZE, N being the number of "observations": how many observed landmarks for a map, or just one if not applicable.
		  * \param out_data_association An empty vector or, where applicable, a vector where the i'th element corresponds to the position of the observation in the i'th row of out_z within the system state vector (in the range [0,getNumberOfLandmarksInTheMap()-1]), or -1 if it is a new map element and we want to insert it at the end of this KF iteration.
		  * \param in_S The full covariance matrix of the observation predictions (i.e. the "innovation covariance matrix"). This is a M·O x M·O matrix with M=length of "in_lm_indices_in_S".
		  * \param in_lm_indices_in_S The indices of the map landmarks (range [0,getNumberOfLandmarksInTheMap()-1]) that can be found in the matrix in_S.
		  *
		  *  This method will be called just once for each complete KF iteration.
		  * \note It is assumed that the observations are independent, i.e. there are NO cross-covariances between them.
		  */
		void OnGetObservationsAndDataAssociation(
			vector_KFArray_OBS       &out_z,
			vector_int               &out_data_association,
			const vector_KFArray_OBS &in_all_predictions,
			const KFMatrix           &in_S,
			const vector_size_t      &in_lm_indices_in_S,
			const KFMatrix_OxO       &in_R
			);

		void OnObservationModel(
			const vector_size_t &idx_landmarks_to_predict,
			vector_KFArray_OBS  &out_predictions
			) const;

		/** Implements the observation Jacobians \f$ \frac{\partial h_i}{\partial x} \f$ and (when applicable) \f$ \frac{\partial h_i}{\partial y_i} \f$.
		  * \param idx_landmark_to_predict The index of the landmark in the map whose prediction is expected as output. For non SLAM-like problems, this will be zero and the expected output is for the whole state vector.
		  * \param Hx The output Jacobian \f$ \frac{\partial h_i}{\partial x} \f$.
		  * \param Hy The output Jacobian \f$ \frac{\partial h_i}{\partial y_i} \f$.
		  */
		void OnObservationJacobians(
			const size_t &idx_landmark_to_predict,
			KFMatrix_OxV &Hx,
			KFMatrix_OxF &Hy
			) const;

		/** Only called if using a numeric approximation of the observation Jacobians, this method must return the increments in each dimension of the vehicle state vector while estimating the Jacobian.
		  */
		void OnObservationJacobiansNumericGetIncrements(
			KFArray_VEH  &out_veh_increments,
			KFArray_FEAT &out_feat_increments ) const;


		/** Computes A=A-B, which may need to be re-implemented depending on the topology of the individual scalar components (eg, angles).
		  */
		void OnSubstractObservationVectors(KFArray_OBS &A, const KFArray_OBS &B) const;

		/** Return the observation NOISE covariance matrix, that is, the model of the Gaussian additive noise of the sensor.
		  * \param out_R The noise covariance matrix. It might be non diagonal, but it'll usually be.
		  */
		void OnGetObservationNoise(KFMatrix_OxO &out_R) const;

		/** This will be called before OnGetObservationsAndDataAssociation to allow the application to reduce the number of covariance landmark predictions to be made.
		  *  For example, features which are known to be "out of sight" shouldn't be added to the output list to speed up the calculations.
		  * \param in_all_prediction_means The mean of each landmark predictions; the computation or not of the corresponding covariances is what we're trying to determine with this method.
		  * \param out_LM_indices_to_predict The list of landmark indices in the map [0,getNumberOfLandmarksInTheMap()-1] that should be predicted.
		  * \note This is not a pure virtual method, so it should be implemented only if desired. The default implementation returns a vector with all the landmarks in the map.
		  * \sa OnGetObservations, OnDataAssociation
		  */
		void OnPreComputingPredictions(
			const vector_KFArray_OBS &in_all_prediction_means,
			vector_size_t            &out_LM_indices_to_predict ) const;

		/** If applicable to the given problem, this method implements the inverse observation model needed to extend the "map" with a new "element".
		  * \param in_z The observation vector whose inverse sensor model is to be computed. This is actually one of the vector<> returned by OnGetObservations().
		  * \param out_yn The F-length vector with the inverse observation model \f$ y_n=y(x,z_n) \f$.
		  * \param out_dyn_dxv The \f$ F \times V \f$ Jacobian of the inv. sensor model wrt the robot pose \f$ \frac{\partial y_n}{\partial x_v} \f$.
		  * \param out_dyn_dhn The \f$ F \times O \f$ Jacobian of the inv. sensor model wrt the observation vector \f$ \frac{\partial y_n}{\partial h_n} \f$.
		  *
		  *  - O: OBS_SIZE
		  *  - V: VEH_SIZE
		  *  - F: FEAT_SIZE
		  *
		  * \note OnNewLandmarkAddedToMap will be also called after calling this method if a landmark is actually being added to the map.
		  */
		void OnInverseObservationModel(
			const KFArray_OBS &in_z,
			KFArray_FEAT      &out_yn,
			KFMatrix_FxV      &out_dyn_dxv,
			KFMatrix_FxO      &out_dyn_dhn ) const;

		/** If applicable to the given problem, do here any special handling of adding a new landmark to the map.
		  * \param in_obsIndex The index of the observation whose inverse sensor is to be computed. It corresponds to the row in in_z where the observation can be found.
		  * \param in_idxNewFeat The index that this new feature will have in the state vector (0:just after the vehicle state, 1: after that,...). Save this number so data association can be done according to these indices.
		  * \sa OnInverseObservationModel
		  */
		void OnNewLandmarkAddedToMap(
			const size_t in_obsIdx,
			const size_t in_idxNewFeat );


		/** This method is called after the prediction and after the update, to give the user an opportunity to normalize the state vector (eg, keep angles within -pi,pi range) if the application requires it.
		  */
		void OnNormalizeStateVector();

		/** @}
		 */

		void getLandmarkIDsFromIndexInStateVector(std::map<unsigned int,CLandmark::TLandmarkID> &out_id2index) const
		{
			out_id2index = m_IDs.getInverseMap();
		}

	protected:

		/** Set up by processActionObservation
		  */
		CActionCollectionPtr m_action;

		/** Set up by processActionObservation
		  */
		CSensoryFramePtr m_SF;

		/** The mapping between landmark IDs and indexes in the Pkk cov. matrix:
		  */
		mrpt::utils::bimap<CLandmark::TLandmarkID,unsigned int> m_IDs;

		/** The sequence of all the observations and the robot path (kept for debugging, statistics, etc)
		  */
		CSimpleMap m_SFs;

		TDataAssocInfo m_last_data_association;  //!< Last data association


	}; // end class
} // End of namespace
} // End of namespace

#endif
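A minimal usage sketch of the class declared above. The SLAM calls (loadOptions, processActionObservation, getCurrentRobotPose) are taken from the header itself; the config/rawlog file names are placeholders, and feeding the filter via CFileGZInputStream plus CRawlog::readActionObservationPair follows the pattern of the bundled kf-slam application, stated here as an assumption about MRPT 1.0.x rather than a verified snippet:

```cpp
#include <mrpt/slam/CRangeBearingKFSLAM2D.h>
#include <mrpt/slam/CRawlog.h>
#include <mrpt/utils/CConfigFile.h>
#include <mrpt/utils/CFileGZInputStream.h>
#include <iostream>

using namespace mrpt::slam;
using namespace mrpt::utils;
using namespace mrpt::poses;

int main()
{
	CRangeBearingKFSLAM2D mapping;
	mapping.loadOptions( CConfigFile("kf-slam_demo.ini") );  // placeholder config file

	CFileGZInputStream rawlog("dataset.rawlog");  // placeholder dataset
	CActionCollectionPtr action;
	CSensoryFramePtr     observations;
	size_t               rawlogEntry = 0;

	// Each rawlog entry is an (odometry action, sensor frame) pair:
	while (CRawlog::readActionObservationPair(rawlog, action, observations, rawlogEntry))
	{
		// One full EKF iteration: prediction from odometry + update from
		// the range-bearing observations in the sensory frame.
		mapping.processActionObservation(action, observations);

		CPosePDFGaussian robotPose;
		mapping.getCurrentRobotPose(robotPose);
		std::cout << "Robot pose mean: " << robotPose.mean << std::endl;
	}
	return 0;
}
```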
Page generated by Doxygen 1.8.14 for MRPT 1.0.2 SVN: at lun oct 28 00:52:41 CET 2019 Hosted on: | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7378978133201599, "perplexity": 10573.017949181796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00166.warc.gz"} |
https://arxiv.org/abs/nucl-ex/0211009 | nucl-ex
Title: Cross Section Measurement of Charged Pion Photoproduction from Hydrogen and Deuterium
Authors: L.Y. Zhu (for the Jefferson Lab Hall A Collaboration)
Abstract: We have measured the differential cross section for the gamma n --> pi- p and gamma p --> pi+ n reactions at a center of mass angle of 90 degrees in the photon energy range from 1.1 to 5.5 GeV at Jefferson Lab (JLab). The data at photon energies greater than 3.3 GeV exhibit a global scaling behavior for both pi- and pi+ photoproduction, consistent with the constituent counting rule and the existing pi+ photoproduction data. Possible oscillations around the scaling value are suggested by these new data. The data show enhancement in the scaled cross section at a center-of-mass energy near 2.2 GeV. The cross section ratio of exclusive pi- to pi+ photoproduction at high energy is consistent with the prediction based on one-hard-gluon-exchange diagrams.
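For context, the constituent counting rule predicts $d\sigma/dt \propto s^{-(n-2)}$ at fixed center-of-mass angle for an exclusive two-body reaction, where $n$ is the total number of elementary fields; for $\gamma N \to \pi N$, $n = 1 + 3 + 2 + 3 = 9$, giving the $s^{-7}$ scaling against which these cross sections are compared.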
Subjects: Nuclear Experiment (nucl-ex)
Journal reference: Phys. Rev. Lett. 91:022003, 2003
DOI: 10.1103/PhysRevLett.91.022003
Cite as: arXiv:nucl-ex/0211009 (or arXiv:nucl-ex/0211009v2 for this version)
Submission history
From: Lingyan Zhu [view email]
[v1] Fri, 8 Nov 2002 22:26:45 GMT (89kb)
[v2] Thu, 30 Jan 2003 21:34:00 GMT (51kb) | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9657498002052307, "perplexity": 5065.550669664231}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219692.98/warc/CC-MAIN-20180822065454-20180822085454-00054.warc.gz"} |
http://phabletkeyboards.com/sources-of/speed-of-sound-in-air-lab-sources-of-error.php | # Speed Of Sound In Air Lab Sources Of Error
You will need this to calculate the actual speed of sound at that temperature. Set up a data table to record your observations. Data table: tuning fork frequency f in Hz (printed on the fork); length L from the water level to the top of the tube; diameter of the tube d; wavelength $\lambda = 4(L + 0.3d)$; experimental speed of sound. Hold the vibrating tuning fork so that the tines are horizontally aligned near the top of the tube, but not touching the tube.
## Sources Of Error In Resonance Tube Experiment
What do they show/prove? For the results, see the above table. All these errors could be reduced by measuring with greater precision the length of the resonant tube and the temperature in the room. Place the PVC tube into the water.
So far I have: one possible source of error could be in the recording of data and in the plotting of the graph.
There is no way to prevent all the measuring errors.
We want to measure the speed of sound without having water vapor as a possible source of error. Of course, other errors can be taken into account, such as the error in measuring the room temperature, which is at least 0.5 degrees Celsius. The experiment could also be affected by other sounds around the room and outside the room.
## Speed Of Sound Error Analysis
The room temperature was higher than zero, thus changing the results. Show your calculations! Answer: The tube has a length of 0.350 m. The shortest length of tube that resonates at a given frequency is a quarter of the wavelength, or $L = \lambda/4$.
This experiment shows that it is possible to measure the speed of sound with good precision by knowing its frequency and by deducing its wavelength from resonance measurements. A higher temperature would make the particles in the air move at a faster speed. Hence the longest wavelength of the sound in the tube can be $\lambda = 4(0.35 + 0.3 \times 0.053) = 1.464$ m. With your other hand, move the tube slowly up and down in the water until it resonates at the point of maximum sound intensity.
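As a minimal sketch of the whole calculation (the tube length, diameter and frequency below are the values quoted in this thread; the room temperature and the 331.4 + 0.6T m/s rule of thumb for air are assumptions added only for comparison):

```python
# Sketch: speed of sound from a resonance-tube measurement.
# L, d and f are the values quoted in this thread; T is an assumed room temperature.
f = 235.8   # Hz, lowest resonant frequency of the tube
L = 0.350   # m, water level to top of tube at resonance
d = 0.053   # m, inner diameter of the tube

wavelength = 4 * (L + 0.3 * d)   # quarter-wave resonance with end correction
v_measured = f * wavelength      # experimental speed of sound, m/s

T = 23.0                         # assumed room temperature, degrees Celsius
v_expected = 331.4 + 0.6 * T     # rule-of-thumb speed of sound in air, m/s

percent_error = abs(v_measured - v_expected) / v_expected * 100
print(f"measured v = {v_measured:.1f} m/s")
print(f"expected v = {v_expected:.1f} m/s")
print(f"error = {percent_error:.2f} %")
```

This also makes the error analysis concrete: a 0.5 degree uncertainty in T shifts the expected value by only 0.3 m/s, while a single millimetre of error in L shifts the measured value by about 0.9 m/s, so the length measurement dominates.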
Speed of Sound - Errors and Enhancements May 12, 2011 #1 ChromoZoneX: I need sources of error (4), and some enhancements for accuracy (4) for a lab experiment involving speed of sound. Why is this lab important?
They come from the position of the resonant fork on the top of the tube (it must be centered and not touch the top of the tube, but not too high).
A PC oscilloscope and storage is a better method. The lowest theoretical frequency that can be measured is therefore $f = v/\lambda = 345.2/1.464 = 235.8$ Hz. Results: What were your results? I got all the calculations and stuff done :) I just can't figure out those... although I have some rather vague theories such as Errors, 1. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5508484840393066, "perplexity": 1959.2056972023968}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221208676.20/warc/CC-MAIN-20180814062251-20180814082251-00128.warc.gz"}
http://sixthform.info/steve/wordpress/?p=59 | This site is a showcase for using LatexRender for mathematics in WordPress
# Using LaTeX in WordPress
## Tuesday 24th April 2007
Filed under: — Steve @ 11:38 am
A more recent version of this post appears here.
Updated 15 July 2011
A PDF version of this post is available here
A. Images
Unfortunately not every host offers LaTeX but there are sites that can help generate the images that can be downloaded.
1. LaTeX Equation Editor. This innovative editor has a symbol table for those who are not sure of the LaTeX code as well as allowing the code to be typed directly. It uses Ajax so that the page does not need to be refreshed to see the rendered image. The source code uses LatexRender. Update: see CodeCogs Equation Editor.
Hamline University Physics Department Latex Equation Editor is based on the same code, with some innovative additions. Editor Online de Ecuaciones Latex is a Spanish version. Online LaTeX Equation Editor has modified the code to use mimeTeX.
(thanks to umustbe and thornahawk for the links).
2. Bruno Gonçalves – Latex Rendering also has a symbol table and direct typing. It uses mimeTeX for a very useful instant preview and LatexRender for the final high-quality image. Discussion is at Professional looking equations by rendering LaTeX online.
3. MimeTeX parses a LaTeX maths expression and immediately emits the corresponding gif image such as this $c=\sqrt{a^2+b^2}$
4. Troy Henderson’s LaTeX Previewer makes it clear what is in the preamble when rendering the image.
MetaPost Previewer allows you to preview and download images made by MetaPost in a number of formats. You can see the log for any errors.
5. MathTran is a new project by the Open University that intends to “provide translation of mathematical content, from TeX to MathML and vice-versa, and to graphics formats, as a web service”. At the moment only Plain TeX (both text and mathematics) can be converted to an image.
MathTran instant preview is a web-based TeX system, complete with a built-in help. It compiles Plain TeX code in real time. The source is available at mathtran-javascript.
Enso TeX Anywhere makes use of MathTran to convert TeX to images in some Windows programs.
6. Roger’s Online Equation Editor offers a choice of image formats, background and text colours, resolution, transparency and anti-alias
7. Sitmo LaTeX Equation Editor uses realtime rendering. It is a Google gadget so can be added to your website.
8. Texify uses mimeTeX to generate the image. It can also be used to generate links such as http://texify.com/$E=mc^2$ to the image in text based systems such as email.
9. mathTeX is as simple to use as mimeTeX but uses LaTeX to generate a higher quality image like this $c=\sqrt{a^2+b^2}$. If you have LaTeX installed you may prefer to install mathTeX as a cgi program on your system. See also mathTeX Helper, Embedding math with replacemath.js and How to Install Latex On Blogger/Blogspot.
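As a small illustration, here is a sketch of generating such an image tag programmatically (the mathtex.cgi URL pattern follows John Forkosh's example in comment 12 below; the Python helper and the URL-encoding step are my own additions):

```python
from urllib.parse import quote

def mathtex_img(latex, base="http://www.forkosh.dreamhost.com/mathtex.cgi"):
    # Build an <img> tag whose src asks the mathTeX CGI to render the LaTeX string.
    return '<img src="{0}?{1}" alt="{2}">'.format(base, quote(latex), latex)

print(mathtex_img(r"c=\sqrt{a^2+b^2}"))
```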
10. HotEqn is an AWT-based Java applet to view and display mathematical equations. Subtitled The IMGless Equation Viewer Applet, it cannot be used to create images.
11. Tex2Im is a JavaScript equation editor written by Sergej Zerr; it is a web interface to tex2im, the LaTeX-to-pixmap converter on the server, and creates a JPEG image.
12. jsTeXrender is a small JavaScript program which will convert LaTeX code inside pre tags to images. This makes it easy to embed mathematics in any HTML page or in PHP programs such as phpBB or WordPress. It is based on CodeCogs and LatexRender. An example page can be found at Online JavaScript TeX/LaTeX equation render and documentation at YourEquations.com.
13. mathurl is a mathematical version of TinyURL.com. It allows you to reference LaTeXed mathematical expressions using a short url. For example, http://mathurl.com/?5v4pjw will show the rendered expression, which you can then edit. More details on mathurl's help page
14. LaTeX for Blogger is a JavaScript add-on for Firefox with Greasemonkey that enables the use of mimeTeX in Blogger posts.
15. MathBin.net allows you to quickly post mathematics or physics problems in a forum for others to view and reply to and discuss. The posts are long-lived but not permanent.
16. LaTeX word count provides a word count for complete LaTeX documents or for code fragments, with a number of options for parts of the document and whether or not to include mathematics.
17. Equations 1.2.1 is an add-on for the email program Thunderbird, which converts LaTeX mathematics into graphics via a Convert button.
18. Quick LaTeX is a Google gadget that can be added to the iGoogle homepage. It uses mimeTeX to produce the image but it would be quite simple to change the code to use mathTeX or CodeCogs.
19. Latex2png converts LaTeX code into various image formats PNG, GIF, EPS, or JPEG. It includes a menu for inserting code.
20. QuickLaTeX.com is a free service which converts LaTeX code to a URL of an image along with meaningful error messages. More detail, support plus a WordPress plugin can be found at the author’s blog.
21. jsMath is entitled A Method of Including Mathematics in Web Pages and uses native fonts, which can be resized, rather than using images. It works best (but not exclusively) with TeX fonts.
WordPress and jsMath has instructions for using jsMath in WordPress blogs.
Math support in Sphinx can use both image rendering and jsMath for its document generator.
(thanks to Andreas Maier for the links).
22. LaTeX Composer is an experimental add-on for Firefox. It allows you to see a preview image of LaTeX code before copying the code to a LaTeX-enabled site (the image cannot be copied). Strictly speaking, it isn’t an online application as it installs mimeTeX to the Firefox directory but does offer a simple way of using mimeTeX offline. The program is run inside Firefox and is started using a small icon in the status bar.
23. MathJax is an open source, Ajax-based math display solution which can display MathML or TeX code or a mix of both in the same page. It allows for MathML to be viewed in browsers such as Internet Explorer which don’t have native support and normally require a plug-in. It works with both HTML and XHTML pages. Previews can be found here.
24. Google Chart Tools will also display LaTeX code. For example, $\int_{-\infty}^{\infty}e^{-x^{2}}\;dx=\sqrt{\pi}$ is given by the URL http://chart.apis.google.com/chart?cht=tx&chl=\displaystyle\int_{-\infty}^{\infty}e^{-x^{2}}\;dx=\sqrt{\pi} Various image properties can also be set.
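A quick sketch of building that URL programmatically (the cht=tx and chl parameter names are taken from the example URL above; the rest is ordinary URL encoding):

```python
from urllib.parse import urlencode

def chart_formula_url(latex):
    # Build a Google Chart Tools URL that renders the given LaTeX formula.
    return "http://chart.apis.google.com/chart?" + urlencode({"cht": "tx", "chl": latex})

print(chart_formula_url(r"\displaystyle\int_{-\infty}^{\infty}e^{-x^{2}}\;dx=\sqrt{\pi}"))
```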
25. Latex in Word provides macros for use in Microsoft Word which renders LaTeX code into images inserted into a document. The images can be rendered on the author’s remote server or on a local server.
B. Complete Documents
Complete LaTeX documents can also be compiled online. Here are a few sites I have come across:
1. LaTeX-Online-Compiler will compile LaTeX documents to postscript, PDF or DVI formats and will generate references. There’s a German language version of the page.
2. ScienceSoft.at can compile a document to various image formats as well as PDF. The resolution can be adjusted and there are a number of templates. There’s a Flash applet version. Again there’s a German language version.
3. LaTeX to PDF uses MiKTeX to convert LaTeX documents to PDF and allows uploading of classes, style files and images.
4. TeX on Web converts LaTeX and plain TeX documents to Postscript and PDF. It has Czech language support. The instructions are in Czech but the site is still easy to use by non-Czech speakers.
5. MonkeyTeX allows you to upload, store and convert LaTeX documents to PDF in your own account. This makes it possible to collaborate on documents and you can opt to make your PDF documents public and searchable. You can also upload style, bibtex and other files which you can use when compiling LaTeX documents.
6. LaTeX Lab is an online version giving a full text editor and compiler complete with menus and toolbars.
“On the Live environment an installation of MikTeX provides the LaTeX processes and packages. A simple C# class library provides an API for interfacing with the MikTeX tools (for tasks such as TeX-to-PDF conversion) as well as the LaTexLab application database which stores users and corresponding file systems. The C# class library is in turn exposed to the Web as an ASP.NET web service which is consumed via AJAX from the LaTexLab application.”
The project is very much under development and the project site is at Google Code.
7. ScribTeX is a free online collaborative LaTeX editor.
“ScribTeX allows you to work on LaTeX documents from anywhere with internet access and share them with your friends and colleagues easily. Some of the many features of ScribTeX include:
• Create and edit LaTeX documents and automatically render them to PDFs;
• Full revision control of all your documents;
• The choice to keep your documents private, allow people of your choosing to view or edit them, or publish them to world. A fine grained permissions system allows for flexible access control.”
8. Verbosus is an online LaTeX editor which can
Create and manage your latex projects and generate .pdf files online, directly in your browser, with syntax highlighting.
Registration is required and the service is free for ‘small projects’ using a maximum of 4 resources, though I am uncertain as to the definition of resources.
VerbTeX allows you to use Verbosus from an Android device.
9. Tex Touch is an app for the iPad. It will edit tex documents and then compile them online using the TeX Cloud online compiling service.
C. Other sites
1. LaTeX word count uses a Perl script to count text words in a LaTeX document and has some options to control the count. I’m not sure how useful a word count is for a typical LaTeX document, but this is an easy way to do so.
2. WordPress.com offers free hosted blogs with LaTeX facilities similar to LatexRender. See Can I put Math or Equations in my Posts? for details.
3. LaTeX Symbols Converter will convert accented characters and HTML and XML characters into LaTeX code. For example,
Schrödinger & his cat and Schrödinger &#38; his cat
will both be converted to Schr\"{o}dinger \& his cat.
4. Detexify tries to work out the LaTeX code for any symbol that is mouse-drawn in a box. Thus drawing a summation sign should give a list of symbols including \sum, \Sigma and \Upsigma. The mode or possible packages required are also listed. The drawing box is not available in Internet Explorer.
Please let me know about similar sites that are worth including here.
## 45 Comments »
1. The site “The Art of Problem Solving” has a page called “teXer”, which allows one to enter LaTeX code and immediately generate images for one’s website.
Site: http://www.artofproblemsolving.com/LaTeX/AoPS_L_TeXer.php
Comment by TG — Friday 18th May 2007 1:40 am #
2. http://www.bgoncalves.com/online/latex/
is a cross between mimitex and latexrender
Comment by umustbe — Saturday 19th May 2007 3:34 pm #
3. http://www.hamline.edu/~arundquist/equationeditor/
is another.
Comment by umustbe — Saturday 19th May 2007 4:21 pm #
4. http://www.sitmo.com/latex/
is another with realtime rendering
Comment by thijs — Friday 15th June 2007 10:41 am #
5. Very nice! I’ve added it to the list above.
Comment by Steve — Friday 15th June 2007 11:07 am #
6. Hi, this is another mimeTeX based online LaTeX renderer: texify.com. I found it useful to generate quick formulas for email messages and bbs.
Comment by Yoko — Monday 30th July 2007 3:09 am #
7. Thanks. I’ll add it to the list.
Comment by Steve — Monday 30th July 2007 9:31 am #
8. Check out TeX THE WORLD – http://thewe.net/tex
It’s an add-on to firefox that automatically replaces all text within [; and ;] with an image of that TeX formula.
For example, if you install it you will see this nicely: [;e^{\pi i} + 1 = 0;]
Comment by Avital Oliver — Friday 17th August 2007 3:27 am #
9. I found this one: http://www.rinconmatematico.com/latexrender/
which is also based on the CodeCogs equation editor. The site is in Spanish, but I believe usage should be straightforward. 🙂
Comment by thornahawk — Sunday 2nd September 2007 4:58 pm #
10. I modified the LaTeX equation editor that you have given as your first link to use mimeTeX instead of LaTeXRender. 🙂
The equation editor I have can be found here. There is a download link to the source in that page. I’m still actively modifying the equation editor to further extend its functionality, so don’t fret if it becomes intermittently unavailable. 😉
Comment by thornahawk — Monday 3rd September 2007 10:07 am #
11. THanks – I’ll add a link to your page.
Comment by Steve — Monday 3rd September 2007 10:18 am #
12. I’ve removed the sporadic “advertisements” from the public latex rendering service at http://www.forkosh.dreamhost.com/mathtex.cgi
So a tag like
<img src="http://www.forkosh.dreamhost.com/mathtex.cgi?c=\sqrt{a^2+b^2}">
will always render $c=\sqrt{a^2+b^2}$
and never render
$\advertisement c=\sqrt{a^2+b^2}$
Comment by John Forkosh — Wednesday 24th October 2007 11:33 pm #
13. Great! I have removed the reference to adverts in the posting.
Comment by Steve — Thursday 25th October 2007 9:31 am #
14. Writing equations and formulae is a snap with LaTeX, but really hard on a website. No longer. This plugin combines the power of LaTeX and the simplicity of WordPress to give you the ultimate in math blogging platforms.
Comment by estetik — Wednesday 2nd January 2008 10:39 am #
15. estetik, you forgot the link 🙂
Comment by Sergej — Monday 3rd March 2008 5:55 pm #
16. As all these links do not allow automation on the server side, they cannot be integrated into forums etc. Are there some free (with source) Java solutions available for rendering formulas? I found one, however there is no source code and it is only possible to use it together with an applet (so it cannot be used for creating images). http://www.atp.ruhr-uni-bochum.de/VCLab/software/HotEqn/HotEqn.html
BTW, I also created a JavaScript editor with the integrated applet from above http://out.l3s.uni-hannover.de:9080/Equation/ Comments are welcome; if you wish to use the JavaScript interface on your page – just do it. 🙂
Comment by sergejzr — Monday 3rd March 2008 6:05 pm #
17. “As all these links do not allow automation on the server side, they cannot be integrated into forums etc.”
LatexRender can be, and is, used in forums.
Thanks for the two links which I will add to the lists in this post.
Comment by Steve — Monday 3rd March 2008 6:10 pm #
18. Hi,
Although the LaTeX to image converters are very useful, I would love to see a LaTeX to MathML seeing some recent browsers nowadays are capable of rendering it. I have been looking for a way to do it, but can’t seem to find one already done. Any ideas if it already exists?
Cheers
Comment by ArTourter — Wednesday 16th July 2008 12:21 am #
19. mathML versus LaTeXRender discusses changing to mathML and also links to mathML and work ahead which uses itex2MML to convert LaTeX to MathML.
Comment by Steve — Wednesday 16th July 2008 10:17 am #
20. Thanks for the links Steve!
Comment by ArTourter — Saturday 19th July 2008 2:20 pm #
21. I just learned about itex2MML also. Would be great to have such a plugin for WordPress. Actually what would be even greater would be something that could spit out conditional HTML with the equivalent of mathML gobbledy gook image of equation
Comment by baxissimo — Wednesday 30th July 2008 8:56 am #
22. Dan, I should have known that pseudo-html tags would get stripped.
That should have said
<if mathML Supported> MathML gobbledy gook <else> image of equation </if>
Hey, that live preview is pretty nice… didn’t notice that either. 🙂
Comment by baxissimo — Wednesday 30th July 2008 8:59 am #
23. ScribTeX (http://www.scribtex.com) is another site like MonkeyTeX which lets you create, edit and collaborate on LaTeX documents and render them to PDFs
Comment by James — Tuesday 13th January 2009 3:44 pm #
24. Thanks for that. I have added ScribTeX to the list.
Comment by Steve — Tuesday 13th January 2009 4:02 pm #
25. Hey, I just wanted to let you know that mathURL has been updated with interactive rendering and some improved input features.
Comment by mathURL — Wednesday 29th July 2009 5:42 am #
26. Another browser based latex editor can be found at http://www.verbosus.com which also allows to generate and preview pdf documents directly in the browser. Maybe an alternative to monkeytex?
Comment by Jason — Thursday 8th October 2009 1:29 pm #
27. Thank you for that. I will add verbosus to the list.
Comment by Steve — Thursday 8th October 2009 1:40 pm #
28. an update to the verbosus site: It is now possible to use a max. of 4 resources per type which allows bigger and more projects.
Comment by verbosus — Thursday 22nd October 2009 9:57 pm #
29. Thanks for letting me know. I have updated the post.
Comment by Steve — Thursday 22nd October 2009 10:35 pm #
30. Asciimathml can be invoked via the skin or theme of web 2 apps to provide display of math notation and svg. Examples at http://EdTech.alaskapolicy.net
Comment by Marc — Wednesday 28th October 2009 6:30 pm #
31. http://www.sitmo.com/latex/
another one
Comment by Sushi — Tuesday 3rd November 2009 5:36 pm #
32. Thanks. In fact it’s already there but I haven’t made it clear that it belongs to Sitmo so I’ll add their name above.
Comment by Steve — Tuesday 3rd November 2009 6:07 pm #
33. I think an important one is missing:
It seems to work for wordpress
http://stacyprowell.com/blog/2009/04/20/wordpress-and-jsmath/
and can be used in sphinx:
http://sphinx.pocoo.org/latest/ext/math.html
Comment by Andreas Maier — Friday 13th November 2009 6:59 pm #
34. Thanks for that. I have added the links.
Comment by Steve — Friday 13th November 2009 9:23 pm #
35. Asciimathml can be invoked via the skin or theme of web 2 apps to provide display of math notation and svg. Examples at
Comment by huzurevi — Tuesday 5th January 2010 1:24 pm #
36. an update to the verbosus site: Syntax highlighting is now available!
Comment by verbosus — Tuesday 16th February 2010 8:15 am #
37. an update to the verbosus site: HTTPS is now supported!
Comment by verbosus — Tuesday 23rd February 2010 3:11 pm #
38. The ‘Detextify’ site is really amazing. I just tried it and it works great. Really cool.
Comment by verbosus — Thursday 25th February 2010 8:45 am #
39. an update to the verbosus site: Code completion is now supported as well 😉
Comment by verbosus — Wednesday 17th March 2010 6:16 am #
40. From the same developer as jsmath, there is the newer MathJax
http://www.mathjax.org/
Comment by Michael — Friday 21st May 2010 12:14 pm #
41. There exists a LaTeX Editor for Android devices called ‘VerbTeX’. It uses the LaTeX service available at http://www.verbosus.com to generate PDFs
Comment by verbtex — Monday 31st January 2011 3:07 pm #
42. I only want to thank you for this page.
In my work I had to give a proposal to use a online latex editor in an institutional CMS, and the site was very helpful
Comment by Diego — Friday 18th February 2011 9:56 pm #
43. Hi, unfortunately lots of the links are dead :-(, and lots are clones :-(, so please check them before updating this post and the PDF version.
Comment by Sandor — Friday 23rd December 2011 9:51 am #
44. Yes, I know I ought to do another post with everything updated – I need to find the time to do so. Thanks for reminding me.
Comment by Steve — Friday 23rd December 2011 9:57 am #
45. Finally got round to checking links and updating. I have put the results into a new post at Online LaTeX Update.
Comment by Steve — Friday 23rd December 2011 5:36 pm #
This site is a showcase for using LatexRender for mathematics in WordPress | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 1, "/images/math/codecogs": 0, "mathtex.cgi": 3, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.698845624923706, "perplexity": 3879.2974038751095}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027323221.23/warc/CC-MAIN-20190825062944-20190825084944-00402.warc.gz"} |
https://mathhelpboards.com/threads/fast-fourier-transform-and-its-inverse.6550/ | [SOLVED] Fast Fourier Transform and its inverse
dwsmith
Well-known member
Feb 1, 2012
1,673
Does every FFT have $$i$$ in it?
Given $$u_t = -(u_{xxx} + 6uu_x)$$.
$$f'''(x) = \mathcal{F}^{-1}\left[(ik)^3\mathcal{F}(f(x))\right]$$
$$f'(x) = \mathcal{F}^{-1}\left[(ik)\mathcal{F}(f(x))\right]$$
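For reference, here is a minimal NumPy sketch of how I am computing these spectral derivatives (the grid, domain and test function are just my own example, not part of the KdV problem itself):

```python
import numpy as np

# Sketch: spectral derivatives on a periodic grid via the FFT.
N = 128
L = 2 * np.pi
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)  # angular wavenumbers

u = np.sin(x)  # test function with known derivatives

u_x = np.fft.ifft(1j * k * np.fft.fft(u)).real           # f'(x)
u_xxx = np.fft.ifft((1j * k) ** 3 * np.fft.fft(u)).real  # f'''(x)

print(np.allclose(u_x, np.cos(x)))     # True: (sin x)' = cos x
print(np.allclose(u_xxx, -np.cos(x)))  # True: (sin x)''' = -cos x
```

The .real at the end only strips the negligible imaginary round-off that appears for real input.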
The only equation I have used the pseudo-spectral method on was the NLS which is
$$u_t = i(u_{xx} + |u|^2u)$$. In this case, I know I will have $$i$$ in the FFT.
Are my transforms for the KdV correct or do I need to remove $$i$$? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9106096625328064, "perplexity": 1067.2098982865625}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400249545.55/warc/CC-MAIN-20200926231818-20200927021818-00521.warc.gz"} |
https://zbmath.org/serials/?q=se%3A111 | ×
## International Journal of Theoretical Physics
Short Title: Int. J. Theor. Phys.
Publisher: Springer US, New York, NY
ISSN: 0020-7748; 1572-9575/e
Online: http://link.springer.com/journal/volumesAndIssues/10773
Documents Indexed: 9,025 Publications (since 1974)
References Indexed: 7,682 Publications with 226,160 References.
### Latest Issues
61, No. 8 (2022) 61, No. 7 (2022) 61, No. 6 (2022) 61, No. 5 (2022) 61, No. 4 (2022) 61, No. 3 (2022) 61, No. 2 (2022) 61, No. 1 (2022) 60, No. 11-12 (2021) 60, No. 10 (2021) 60, No. 9 (2021) 60, No. 8 (2021) 60, No. 7 (2021) 60, No. 6 (2021) 60, No. 5 (2021) 60, No. 4 (2021) 60, No. 3 (2021) 60, No. 2 (2021) 60, No. 1 (2021) 59, No. 12 (2020) 59, No. 11 (2020) 59, No. 10 (2020) 59, No. 9 (2020) 59, No. 8 (2020) 59, No. 7 (2020) 59, No. 6 (2020) 59, No. 5 (2020) 59, No. 4 (2020) 59, No. 3 (2020) 59, No. 2 (2020) 59, No. 1 (2020) 58, No. 12 (2019) 58, No. 11 (2019) 58, No. 10 (2019) 58, No. 9 (2019) 58, No. 8 (2019) 58, No. 7 (2019) 58, No. 6 (2019) 58, No. 5 (2019) 58, No. 4 (2019) 58, No. 3 (2019) 58, No. 2 (2019) 58, No. 1 (2019) 57, No. 12 (2018) 57, No. 11 (2018) 57, No. 10 (2018) 57, No. 9 (2018) 57, No. 8 (2018) 57, No. 7 (2018) 57, No. 6 (2018) 57, No. 5 (2018) 57, No. 4 (2018) 57, No. 3 (2018) 57, No. 2 (2018) 57, No. 1 (2018) 56, No. 12 (2017) 56, No. 11 (2017) 56, No. 10 (2017) 56, No. 9 (2017) 56, No. 8 (2017) 56, No. 7 (2017) 56, No. 6 (2017) 56, No. 5 (2017) 56, No. 4 (2017) 56, No. 3 (2017) 56, No. 2 (2017) 56, No. 1 (2017) 55, No. 12 (2016) 55, No. 11 (2016) 55, No. 10 (2016) 55, No. 9 (2016) 55, No. 8 (2016) 55, No. 7 (2016) 55, No. 6 (2016) 55, No. 5 (2016) 55, No. 4 (2016) 55, No. 3 (2016) 55, No. 2 (2016) 55, No. 1 (2016) 54, No. 12 (2015) 54, No. 11 (2015) 54, No. 10 (2015) 54, No. 9 (2015) 54, No. 8 (2015) 54, No. 7 (2015) 54, No. 6 (2015) 54, No. 5 (2015) 54, No. 4 (2015) 54, No. 3 (2015) 54, No. 2 (2015) 54, No. 1 (2015) 53, No. 12 (2014) 53, No. 11 (2014) 53, No. 10 (2014) 53, No. 9 (2014) 53, No. 8 (2014) 53, No. 7 (2014) 53, No. 6 (2014) 53, No. 5 (2014) 53, No. 4 (2014) ...and 333 more Volumes
### Authors
76 Fan, Hong-Yi 50 Dvurečenskij, Anatolij 49 Yu, Zhaoxian 45 Debnath, Ujjal 43 Guo, Ying 42 Wang, Jisuo 40 Fei, Shaoming 38 Chung, Won-Sang 38 Yang, Shuzheng 38 Zhou, Rigui 37 Gudder, Stanley P. 37 Sang, Minghuang 35 Li, Yuanhua 35 Nie, Yiyou 35 Roy Chowdhury, Asesh 34 Nagata, Koji 31 Pulmannová, Sylvia 31 Rahaman, Farook 30 Bainov, Drumi Dimitrov 30 Gong, Lihua 30 Jamil, Mubasher 30 Jiao, Zhiyong 29 Castagnino, Mario Alberto 29 Sadeghi, Jafar 28 Nishimura, Hirokazu 28 Saadat, Hassan 28 Setare, Mohammad Reza 28 Zecca, Antonio 27 Aerts, Diederik Emiel 27 Pourhassan, Behnam 27 Zha, Xinwei 27 Zhou, Nanrun 26 Hwang, Tzonelih 26 Li, Ziping 26 Meng, Xiangguo 26 Nakamura, Tadao 25 Fang, Maofa 25 Li, Songsong 25 Negi, O. P. S. 25 Obada, Abdel-Shafy Fahmy 25 Pykacz, Jarosław 25 Steeb, Willi-Hans 24 Adhav, Kishor S. 24 Yang, Guohui 23 Gunzig, Edgard 23 Koranga, Bipin Singh 23 Ren, Gang 22 Pradhan, Anirudh 22 Wu, Junde 22 Xie, Shucui 22 Zapatrin, Roman Romanovitz 21 Chakraborty, Subenoy 21 Lu, Daoming 21 Song, Heshan 21 Ye, Liu 21 Zhang, Shou 20 Chen, Xiu-Bo 20 Giuntini, Roberto 20 Huang, Guoqiang 20 Shen, Yougen 20 Zhang, Jianzhong 20 Zhao, Ren 20 Zhou, Ling 19 Finkelstein, David Ritz 19 Hu, Liyun 19 Liu, Wenbiao 19 Sidharth, Burra Gautam 19 Svozil, Karl 19 Wang, Hongfu 19 Xu, Xuexiang 19 Yang, Yuguang 19 Zhang, Lichun 19 Zhu, Hongfeng 18 Bali, Raj 18 Bisht, P. S. 18 Biswas, Anjan 18 Cao, Huaixin 18 Du, Jianming 18 Garola, Claudio 18 Hamhalter, Jan 18 Ji, Yinghua 18 Li, Dongfen 18 Lin, Kai 18 Luo, Mingxing 18 Pták, Pavel 18 Reddy, Dandala R. K. 18 Tao, Yuanhong 18 Xue, Kang 18 Yang, Yixian 18 Ye, Tianyu 17 Kulshreshtha, Usha 17 Liu, Tangkun 17 Mo, Zhiwen 17 Paseka, Jan 17 Payandeh, Farrin 17 Raptis, Ioannis A. 17 Riečanová, Zdenka 17 Sheykhi, Ahmad 17 Xu, Xinglei 17 Yuan, Hongchun ...and 8,072 more Authors
### Fields
6,147 Quantum theory (81-XX) 2,445 Relativity and gravitational theory (83-XX) 960 Information and communication theory, circuits (94-XX) 640 Differential geometry (53-XX) 629 Statistical mechanics, structure of matter (82-XX) 467 Mathematical logic and foundations (03-XX) 382 Partial differential equations (35-XX) 366 Computer science (68-XX) 323 Mechanics of particles and systems (70-XX) 298 Functional analysis (46-XX) 294 Order, lattices, ordered algebraic structures (06-XX) 268 Optics, electromagnetic theory (78-XX) 234 Classical thermodynamics, heat transfer (80-XX) 220 Astronomy and astrophysics (85-XX) 211 Global analysis, analysis on manifolds (58-XX) 192 Dynamical systems and ergodic theory (37-XX) 183 Probability theory and stochastic processes (60-XX) 163 Nonassociative rings and algebras (17-XX) 136 Topological groups, Lie groups (22-XX) 129 Operator theory (47-XX) 113 Ordinary differential equations (34-XX) 104 Fluid mechanics (76-XX) 91 Linear and multilinear algebra; matrix theory (15-XX) 91 Statistics (62-XX) 75 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 59 Numerical analysis (65-XX) 57 General and overarching topics; collections (00-XX) 53 Measure and integration (28-XX) 51 Special functions (33-XX) 49 Category theory; homological algebra (18-XX) 45 Number theory (11-XX) 43 Group theory and generalizations (20-XX) 43 Biology and other natural sciences (92-XX) 38 Manifolds and cell complexes (57-XX) 33 Combinatorics (05-XX) 31 Associative rings and algebras (16-XX) 30 Algebraic geometry (14-XX) 27 Mechanics of deformable solids (74-XX) 25 History and biography (01-XX) 24 Harmonic analysis on Euclidean spaces (42-XX) 24 Systems theory; control (93-XX) 22 Real functions (26-XX) 18 Calculus of variations and optimal control; optimization (49-XX) 18 Geometry (51-XX) 17 Algebraic topology (55-XX) 16 General topology (54-XX) 14 Operations research, mathematical programming (90-XX) 13 General algebraic systems (08-XX) 12 Several complex variables and analytic spaces (32-XX) 12 Difference and functional equations (39-XX) 10 Approximations and expansions (41-XX) 10 Convex and discrete geometry (52-XX) 8 Integral transforms, operational calculus (44-XX) 8 Geophysics (86-XX) 6 Functions of a complex variable (30-XX) 6 Integral equations (45-XX) 3 Commutative algebra (13-XX) 3 Abstract harmonic analysis (43-XX) 1 Field theory and polynomials (12-XX) 1 $$K$$-theory (19-XX) 1 Sequences, series, summability (40-XX)
### Citations contained in zbMATH Open
5,198 Publications have been cited 24,333 times in 13,668 Documents.
The large-$$N$$ limit of superconformal field theories and supergravity. Zbl 0969.81047
Maldacena, Juan
1999
Introduction to sh Lie algebras for physicists. Zbl 0824.17024
1993
Conservative logic. Zbl 0496.94015
Fredkin, Edward; Toffoli, Tommaso
1982
Relationship between symmetries and conservation laws. Zbl 0962.35009
Kara, A. H.; Mahomed, F. M.
2000
Relational quantum mechanics. Zbl 0885.94012
Rovelli, Carlo
1996
Fractional-order diffusion-wave equation. Zbl 0846.35001
El-Sayed, Ahmed M. A.
1996
Pseudoeffect algebras. I: Basic properties. Zbl 0994.81008
Dvurečenskij, Anatolij; Vetterlein, Thomas
2001
Toward a quantitative theory of self-generated complexity. Zbl 0605.94003
Grassberger, Peter
1986
Filters and supports in orthoalgebras. Zbl 0764.03026
Foulis, D. J.; Greechie, Richard J.; Rüttimann, G. T.
1992
A tutorial review on fractal spacetime and fractional calculus. Zbl 1312.83028
He, Ji-Huan
2014
Group field theory: an overview. Zbl 1100.83010
Freidel, L.
2005
Generalization of blocks for $$D$$-lattices and lattice-ordered effect algebras. Zbl 0968.81003
Riečanová, Zdenka
2000
Pseudoeffect algebras. II: Group representations. Zbl 0994.81009
Dvurečenskij, Anatolij; Vetterlein, Thomas
2001
Controlled dense coding between multi-parties. Zbl 1161.81323
Huang, Yi-Bin; Li, Song-Song; Nie, Yi-You
2009
Superstrings, knots, and noncommutative geometry in $${\mathcal E}^{(\infty)}$$ space. Zbl 0935.58005
El Naschie, M. S.
1998
Generalized uncertainty principle from quantum geometry. Zbl 0981.83021
Capozziello, S.; Lambiase, G.; Scarpetta, G.
2000
Dense coding with cluster state via local measurements. Zbl 1243.81051
Li, Song-Song
2012
Positive and negative hierarchies of integrable lattice models associated with a Hamiltonian pair. Zbl 1058.37055
Ma, Wenxiu; Xu, Xixiang
2004
Topos perspective on the Kochen-Specker theorem. I: Quantum states as generalized valuations. Zbl 0979.81018
Isham, C. J.; Butterfield, J.
1998
Realistic quantum probability. Zbl 0653.60004
Gudder, Stanley
1988
A protocol for the quantum private comparison of equality with $$\chi$$-type state. Zbl 1246.81047
Liu, Wen; Wang, Yong-Bin; Jiang, Zheng-Tao; Cao, Yi-Zhen
2012
Improvement on “quantum key agreement protocol with maximally entangled states”. Zbl 1225.81045
Chong, Song-Kong; Tsai, Chia-Wei; Hwang, Tzonelih
2011
A topos perspective on the Kochen-Specker theorem. II: Conceptual aspects and classical analogues. Zbl 1007.81009
Butterfield, J.; Isham, C. J.
1999
Non-Turing computations via Malament–Hogarth space-times. Zbl 0991.83030
Etesi, Gábor; Németi, István
2002
Non-maximally entangled controlled teleportation using four particles cluster states. Zbl 1169.81318
Nie, Yiyou; Hong, Zhihui; Huang, Yibin; Yi, Xiaojie; Li, Songsong
2009
Bidirectional controlled teleportation via six-qubit cluster state. Zbl 1282.81044
Yan, An
2013
High-capacity quantum secure direct communication with single photons in both polarization and spatial-mode degrees of freedom. Zbl 1261.81058
Liu, Dan; Chen, Jia-Liang; Jiang, Wei
2012
$${\mathcal{PT}}$$-symmetric periodic optical potentials. Zbl 1215.81036
Makris, K. G.; El-Ganainy, R.; Christodoulides, D. N.; Musslimani, Z. H.
2011
Quantum secure direct communication based on four-Qubit cluster states. Zbl 1264.81162
Zhang, Qin-nan; Li, Cui-cui; Li, Yuan-hua; Nie, Yi-you
2013
Bidirectional quantum controlled teleportation by using a genuine six-qubit entangled state. Zbl 1315.81030
Chen, Yan
2015
Energy-momentum complex in Møller’s tetrad theory of gravitation. Zbl 0785.53062
Mikhail, F. I.; Wanas, M. I.; Hindawi, Ahmed; Lashin, E. I.
1993
Quantum teleportation and quantum information splitting by using a genuinely entangled six-qubit state. Zbl 1201.81029
Li, Yuan-Hua; Liu, Jun-Chang; Nie, Yi-You
2010
Quantum Hilbert image scrambling. Zbl 1298.81048
Jiang, Nan; Wang, Luo; Wu, Wen-Ya
2014
LSB based quantum image steganography algorithm. Zbl 1335.81052
Jiang, Nan; Zhao, Na; Wang, Luo
2016
Finite quantum field theory in noncommutative geometry. Zbl 0846.58015
Grosse, H.; Klimčík, C.; Prešnajder, P.
1996
Bidirectional quantum controlled teleportation via a maximally seven-qubit entangled state. Zbl 1308.81045
Duan, Ya-Jun; Zha, Xin-Wei; Sun, Xin-Mei; Xia, Jia-Fan
2014
An invitation to quantum game theory. Zbl 1037.81020
2003
Quantum private comparison protocol based on cluster states. Zbl 1264.81153
Sun, Zhiwei; Long, Dongyang
2013
Quantum secure direct communication with two-photon four-qubit cluster states. Zbl 1251.81034
Sun, Zhi-Wei; Du, Rui-Gang; Long, Dong-Yang
2012
A quantum watermark protocol. Zbl 1264.81127
Zhang, Wei-Wei; Gao, Fei; Liu, Bin; Jia, Heng-Yue; Wen, Qiao-Yan; Chen, Hui
2013
New quantum private comparison protocol using $$\chi$$-type state. Zbl 1251.81033
Liu, Wen; Wang, Yong-Bin; Jiang, Zheng-Tao; Cao, Yi-Zhen; Cui, Wei
2012
Quantum teleportation via $$GHZ$$-like state. Zbl 1162.81358
Yang, Kan; Huang, Liusheng; Yang, Wei; Song, Fang
2009
A general approach for the exact solution of the Schrödinger equation. Zbl 1162.81369
Tezcan, Cevdet; Sever, Ramazan
2009
Quantum teleportation of three and four-qubit state using multi-qubit cluster states. Zbl 1338.81103
Li, Yuan-hua; Li, Xiao-lan; Nie, Li-ping; Sang, Ming-huang
2016
Space-times with covariant-constant energy-momentum tensor. Zbl 0855.53056
Chaki, M. C.; Ray, Sarbari
1996
Peristaltic motion of a particle-fluid suspension in a planar channel. Zbl 0974.76601
Mekheimer, Kh. S.; El Shehawey, Elsayed F.; Elaw, A. M.
1998
Variable-G cosmology and creation. Zbl 0609.53054
Beesham, Aroonkumar
1986
Quantum private comparison based on GHZ entangled states. Zbl 1262.81041
Liu, Wen; Wang, Yong-Bin
2012
The controlled teleportation of an arbitrary two-atom entangled state in driven cavity QED. Zbl 1168.81321
Shan, Chuan-Jia; Liu, Ji-Bing; Liu, Tang-Kun; Huang, Yan-Xia; Li, Hong
2009
A blind quantum signature scheme with $$\chi$$-type entangled states. Zbl 1247.81102
Yin, Xun-Ru; Ma, Wen-Ping; Liu, Wei-Yan
2012
Radiative corrections in the Boulatov-Ooguri tensor model: the 2-point function. Zbl 1236.81164
Geloun, Joseph Ben; Bonzom, Valentin
2011
Foundations of quantum physics: a general realistic and operational approach. Zbl 0963.81005
Aerts, Diederik
1999
Einstein frame or Jordan frame? Irreversibility and cosmology. Zbl 0937.83040
Faraoni, Valerio; Gunzig, Edgard
1999
Finarity substitute for continuous topology. Zbl 0733.54001
Sorkin, Rafael D.
1991
Subalgebras, intervals, and central elements of generalized effect algebras. Zbl 0963.03087
Riečanová, Zdenka
1999
Difference posets, effects, and quantum measurements. Zbl 0806.03040
Dvurečenskij, Anatolij; Pulmannová, Sylvia
1994
Dark energy models with variable equation of state parameter. Zbl 1210.83054
Yadav, Anil Kumar; Rahaman, Farook; Ray, Saibal
2011
Static universe in a modified Brans-Dicke cosmology. Zbl 0699.53090
Berman, Marcelo Samuel
1990
Multi-party quantum private comparison protocol using $$d$$-dimensional basis states without entanglement swapping. Zbl 1297.81064
Liu, Wen; Wang, Yong-Bin; Wang, Xiao-Mei
2014
Holonomy and path structures in general relativity and Yang-Mills theory. Zbl 0728.53055
Barrett, J. W.
1991
Information-theoretical aspects of quantum measurement. Zbl 0384.94006
Prugovecki, E.
1977
Undecidability and incompleteness in classical mechanics. Zbl 0850.70023
da Costa, N. C. A.; Doria, F. A.
1991
Quantum image encryption algorithm based on image correlation decomposition. Zbl 1312.81045
Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun
2015
Bidirectional quantum controlled teleportation by using a seven-qubit entangled state. Zbl 1335.81046
Sang, Ming-huang
2016
Topos perspective on the Kochen-Specker theorem. III: Von Neumann algebras as the base category. Zbl 1055.81004
Hamilton, J.; Isham, C. J.; Butterfield, J.
2000
Quantum computational logic. Zbl 1036.81003
Gudder, S.
2003
An efficient protocol for the secure multi-party quantum summation. Zbl 1203.81047
Chen, Xiu-Bo; Xu, Gang; Yang, Yi-Xian; Wen, Qiao-Yan
2010
Generalized octonion electrodynamics. Zbl 1194.81312
Chanyal, B. C.; Bisht, P. S.; Negi, O. P. S.
2010
Joint remote preparation of a multipartite GHZ-class state. Zbl 1171.81330
Hou, Kui; Wang, Jing; Lu, Yilin; Shi, Shouhua
2009
Bianchi type III anisotropic dark energy models with constant deceleration parameter. Zbl 1213.83053
2011
Dynamics of quantum Fisher information in two component Bose-Einstein condensate. Zbl 1257.82008
Li, Song-Song; Liu, Zhen-Ya; Xiao, Yong-Jun
2012
Forks in the road, on the way to quantum gravity. Zbl 0908.53058
Sorkin, Rafael D.
1997
A novel strategy for quantum image steganography based on Moiré pattern. Zbl 1328.81081
Jiang, Nan; Wang, Luo
2015
Bidirectional and asymmetric quantum controlled teleportation. Zbl 1327.81114
Zhang, Da; Zha, Xin-Wei; Duan, Ya-Jun
2015
Expansions of algebras and superalgebras and some applications. Zbl 1128.17004
de Azcárraga, J. A.; Izquierdo, J. M.; Picón, M.; Varela, O.
2007
Quantum private comparison using genuine four-particle entangled states. Zbl 1248.81020
Jia, Heng-Yue; Wen, Qiao-Yan; Li, Yan-Bing; Gao, Fei
2012
Teleportation of a pure EPR state via GHZ-like state. Zbl 1197.81082
Tsai, Chia-Wei; Hwang, Tzonelih
2010
Improved QSDC protocol over a collective-dephasing noise channel. Zbl 1260.81062
Yang, Chun-Wei; Hwang, Tzonelih
2012
Quantum-like tunnelling and levels of arbitrage. Zbl 1281.91130
Haven, Emmanuel; Khrennikov, Andrei
2013
Bidirectional quantum controlled teleportation via a six-qubit entangled state. Zbl 1307.81022
Duan, Ya-Jun; Zha, Xin-Wei
2014
Quantum teleportation of a two qubit state using GHZ-like state. Zbl 1300.81017
Nandi, Kaushik; Mazumdar, Chandan
2014
A quantum multi-proxy blind signature scheme based on genuine four-qubit entangled state. Zbl 1333.81129
Tian, Juan-Hong; Zhang, Jian-Zhong; Li, Yan-Ping
2016
Automata theory based on quantum logic. II. Zbl 1047.81007
Ying, Mingsheng
2000
Gödel’s theorem and information. Zbl 1016.03501
Chaitin, Gregory J.
1982
Interacting holographic viscous dark energy model. Zbl 1186.83123
Jamil, Mubasher; Farooq, M. Umar
2010
Fractional dynamics of relativistic particle. Zbl 1186.83027
Tarasov, Vasily E.
2010
Spacetimes with semisymmetric energy-momentum tensor. Zbl 1372.53021
De, U. C.; Velimirović, Ljubica
2015
The differential geometry of elementary point and line defects in Bravais crystals. Zbl 0709.53050
Kröner, E.
1990
Algebraic determination of the metric from the curvature in general relativity. Zbl 0523.53037
Hall, G. S.; McIntosh, C. B. G.
1983
Exact traveling-wave solutions to bidirectional wave equations. Zbl 1097.35115
Chen, Min
1998
Smearings of states defined on sharp elements onto effect algebras. Zbl 1016.81005
Riečanová, Zdenka
2002
Controlled teleportation of an arbitrary two-particle pure or mixed state by using a five-qubit cluster state. Zbl 1197.81077
Liu, Jun-Chang; Li, Yuan-Hua; Nie, Yi-You
2010
Verifiable quantum $$(k,n)$$-threshold secret key sharing. Zbl 1209.81142
Yang, Yu-Guang; Teng, Yi-Wei; Chai, Hai-Ping; Wen, Qiao-Yan
2011
Quantum Fisher information in the generalized one-axis twisting model. Zbl 1190.81010
Liu, Wan-Fang; Xiong, Heng-Na; Ma, Jian; Wang, Xiaoguang
2010
Exact solutions of the two-dimensional Schrödinger equation with certain central potentials. Zbl 0985.81028
Dong, Shi-Hai
2000
Total energy of the Bianchi type I universes. Zbl 0968.83051
Xulu, S. S.
2000
Compatibility in D-posets. Zbl 0843.03042
Kôpka, František
1995
Cosmological relativity: a new theory of cosmology. Zbl 0959.83040
Behar, Silvia; Carmeli, Moshe
2000
Lipschitzian quantum stochastic differential inclusions. Zbl 0766.58058
Ekhaguere, G. O. S.
1992
Few simple rules to fix the dynamics of classical systems using operators. Zbl 1261.81006
Bagarello, F.
2012
Asymmetric bidirectional controlled quantum teleportation using eight qubit cluster state. Zbl 1486.81045
2022
Multi-party quantum key agreement protocol based on G-like states and $$\chi^+$$ states. Zbl 1486.81075
Gao, Hang; Zhou, Ri-Gui
2022
Two-photon blockade with second-order nonlinearity in cavity systems. Zbl 1486.81184
Wu, Qi-Cheng; Zhang, Xing-Yuan; Wang, Yue-Ming; Liu, Tong; Zhou, Yan-Hui; Shen, Hong-Zhi; Yang, Chui-Ping
2022
Quantum identity authentication based on round robin differencial phase shift communication line. Zbl 1486.81082
Qian, Yi; Gui, Can; Liu, Bin; Huang, Wei; Xu, Bing-Jie
2022
Detector-device-independent quantum key agreement based on single-photon Bell state measurement. Zbl 1486.81088
Yang, Yu-Guang; Lv, Xin-Long; Gao, Shang; Zhou, Yi-Hua; Shi, Wei-Min
2022
On reconstructing parts of quantum theory from two related maximal conceptual variables. Zbl 1487.81009
Helland, Inge S.
2022
Catalytic transformations with CNOT gate. Zbl 07532983
Gao, Dong-Mei
2022
Probing the information-probabilistic description. Zbl 07542143
2022
Quantum key agreement protocols with GHZ states under collective noise channels. Zbl 1490.81057
Guo, Ji-hong; Yang, Zhen; Bai, Ming-Qiang; Mo, Zhi-Wen
2022
Quantum discord fraction. Zbl 1490.81030
Gao, Dong-Mei; Peng, Li
2022
Quantum key agreement protocol based on quantum search algorithm. Zbl 07420799
Huang, Xi; Zhang, Shi-Bin; Chang, Yan; Qiu, Chi; Liu, Dong-Mei; Hou, Min
2021
Hierarchical quantum teleportation of arbitrary single-qubit state by using four-qubit cluster state. Zbl 07420888
Li, Dongfen; Zheng, Yundan; Liu, Xiaofang; Liu, Mingzhe
2021
Authenticated semi-quantum secret sharing based on GHZ-type states. Zbl 1477.94069
Yin, Aihan; Chen, Tong
2021
Bidirectional quantum teleportation of an arbitrary number of qubits by using four qubit cluster state. Zbl 07420756
Kazemikhah, Payman; Aghababa, Hossein
2021
Improved quantum teleportation of ten-qubit state based on the cluster state quantum channel. Zbl 07420758
Verma, Vikram
2021
A framework for quantum-classical cryptographic translation. Zbl 07420796
Nimbe, Peter; Weyori, Benjamin Asubam; Yeng, Prosper Kandabongee
2021
A novel practical quantum secure direct communication protocol. Zbl 07420826
Lu, Yin-Ju
2021
Secure three-party semi-quantum summation using single photons. Zbl 07421017
Zhang, Cai; Huang, Qiong; Long, Yinxiang; Sun, Zhiwei
2021
Comment on “Quantum controlled teleportation of Bell state using seven-qubit entangled state”. Zbl 07420753
2021
On a topology and limits for inductive systems of $$C^\ast$$-algebras over partially ordered sets. Zbl 07420768
Gumerov, R. N.; Lipacheva, E. V.; Grigoryan, T. A.
2021
Studies on noncommutative measure theory in Kazan university (1968–2018). Zbl 1487.46069
Bikchentaev, Airat M.; Sherstnev, Anatolij N.
2021
Quantum secure multiparty summation based on the phase shifting operation of $$d$$-level quantum system and its application. Zbl 07420797
Ye, Tian-Yu; Hu, Jia-Li
2021
A novel quantum voting scheme based on BB84-state. Zbl 07420842
Liu, Bing-Xin; Jiang, Dong-Huan; Liang, Xiang-Qian; Zhang, Yong-Hua
2021
Efficient verifiable quantum secret sharing schemes via eight-quantum-entangled states. Zbl 1477.94067
Jiang, Shaohua; Liu, Zehong; Lou, Xiaoping; Fan, Zhou; Wang, Sheng; Shi, Jinjing
2021
Majorana representation for a composite system. Zbl 1483.81082
Yang, Jing; Zhang, Yong
2021
Improvement on quantum teleportation of three and four qubit states using multi-qubit cluster states. Zbl 1483.81037
Verma, Vikram; Singh, Nidhi; Singh, Ravi S.
2021
Verifiable quantum secret sharing scheme using $$d$$-dimensional GHZ state. Zbl 1483.81049
Bai, Chen-Ming; Zhang, Sujuan; Liu, Lu
2021
Two-way remote preparations of inequivalent quantum states under a common control. Zbl 07420726
An, Nguyen Ba; Choudhury, Binayak S.; Samanta, Soumen
2021
Cryptanalysis and improvement on authenticated semi-quantum direct communication protocol using Bell states. Zbl 1490.94041
Tsai, Chia-Wei; Yang, Chun-Wei
2021
An exact quantum algorithm for the 2-junta problem. Zbl 07420729
Chen, Chien-Yuan
2021
Can there be given any meaning to contextuality without incompatibility? Zbl 1473.81018
Khrennikov, Andrei
2021
Verifiable quantum key exchange with authentication. Zbl 07420742
Shi, Run-hua; Liu, Bai; Zhang, Mingwu
2021
New entanglement-assisted quantum MDS codes with maximal entanglement. Zbl 07420743
Sarı, Mustafa; Köroğlu, Mehmet E.
2021
Semi-honest three-party mutual authentication quantum key agreement protocol based on GHZ-like state. Zbl 1477.94070
Zhu, Hongfeng; Wang, Chaonan; Li, Zexi
2021
The dynamics of quantum-memory-assisted entropic uncertainty of two-qubit system in the XY spin chain environments with Dzyaloshinsky-Moriya interaction. Zbl 07420759
Zhang, Yanliang; Zhou, Qingping; Kang, Guodong; Fang, Maofa
2021
The logos categorical approach to quantum mechanics. I: Kochen-Specker contextuality and global intensive valuations. Zbl 1475.18003
de Ronde, C.; Massri, C.
2021
On $$\tau$$-essentially invertibility of $$\tau$$-measurable operators. Zbl 07420775
Bikchentaev, Airat M.
2021
Minimal time generation of density matrices for a two-level quantum system driven by coherent and incoherent controls. Zbl 07420776
Morzhin, Oleg V.; Pechen, Alexander N.
2021
Jordan invariants of von Neumann algebras given by abelian subalgebras and Choquet order on state spaces. Zbl 07420778
Turilova, Ekaterina; Hamhalter, Jan
2021
Quantum online streaming algorithms with logarithmic memory. Zbl 1491.68075
2021
Orthogonality spaces arising from infinite-dimensional complex Hilbert spaces. Zbl 07420790
Vetterlein, Thomas
2021
Switchable and enhanced absorption via qubit-mechanical nonlinear interaction in a hybrid optomechanical system. Zbl 07420791
Sohail, Amjad; Ahmed, Rizwan; Yu, Chang shui
2021
A one-round quantum mutual authenticated key agreement protocol with semi-honest server using three-particle entangled states. Zbl 07420807
Zhu, Hongfeng; Liu, Tianhua; Wang, Chaonan
2021
Quantum codes and entanglement-assisted quantum codes derived from one-generator quasi-twisted codes. Zbl 1484.81025
Yao, Yu; Ma, Yuena; Lv, Jingjie
2021
$$q$$-deformed coherent states for $$q$$-deformed photon by using the Tsallis’s $$q$$-deformed exponential function in the non-extensive thermodynamics. Zbl 07420822
Chung, Won Sang; Lütfüoğlu, Bekir Can; Hassanabadi, Hassan
2021
Optimizing quantum teleportation and dense coding via mixed noise under non-Markovian approximation. Zbl 07420832
Islam, Akbar; Wang, An Min; Abliz, Ahmad
2021
Quantum codes from repeated-root cyclic and negacyclic codes of length $$4p^s$$ over $$\mathbb{F}_{p^m}$$. Zbl 07420840
Rani, Saroj; Verma, Ram Krishna; Prakash, Om
2021
Tighter constraints of quantum correlations among multipartite systems. Zbl 07420851
Liu, Dan
2021
Multi-party quantum private comparison based on entanglement swapping of Bell entangled states within $$d$$-level quantum system. Zbl 1476.91224
Ye, Tian-Yu; Hu, Jia-Li
2021
Bosonic fields in causal set theory. Zbl 07420853
Sverdlov, Roman
2021
Cyclic remote state preparation. Zbl 07420862
Peng, Jia-yin; Lei, Hong-xuan
2021
Quantum matrix multiplier. Zbl 07420899
Li, Hong; Jiang, Nan; Wang, Zichen; Wang, Jian; Zhou, Rigui
2021
A novel quantum protocol for private set intersection. Zbl 07420903
Liu, Wen; Yin, Han-Wen
2021
The non-relativistic many-body quantum-mechanical Hamiltonian with diamagnetic current-current interaction. Zbl 07420916
2021
Quantum violation of the suppes-zanotti inequalities and “contextuality”. Zbl 07420921
Svozil, Karl
2021
Transformation of photon-added coherent states via conditional measurements on a beam splitter. Zbl 07420924
Ren, Gang
2021
Quantifying the quantumness of ensembles via generalized $$\alpha$$-$$z$$-relative Rényi entropy. Zbl 07420928
Huang, Huaijing; Wu, Zhaoqi; Zhu, Chuanxi; Fei, Shao-Ming
2021
Quantum private query using W state. Zbl 07420940
Zhou, Ri-Gui; Hua, Yun
2021
Some measurement-based characterizations of separability of bipartite states. Zbl 07420942
Cao, Huaixin; Zhang, Chengyang; Guo, Zhihua
2021
Bidirectional controlled quantum teleportation via two pairs of Bell states. Zbl 07420951
Wang, Mengting; Li, Hai-Sheng
2021
New quantum codes from skew constacyclic codes over a class of non-chain rings $$R_{e, q}$$. Zbl 07421005
Prakash, Om; Islam, Habibul; Patel, Shikha; Solé, Patrick
2021
Semi-quantum mutual identity authentication using Bell states. Zbl 07421006
Jiang, ShuQi; Zhou, Ri-Gui; Hu, WenWen
2021
Quantum private comparison using single Bell state. Zbl 1483.81054
Lang, Yan-Feng
2021
$$n$$-bit quantum secret sharing protocol using quantum secure direct communication. Zbl 1483.81057
2021
Efficient quantum private comparison based on entanglement swapping of Bell states. Zbl 1483.81052
Huang, Xi; Zhang, Shi-Bin; Chang, Yan; Hou, Min; Cheng, Wen
2021
Thermal entanglement in $$2 \times 3$$ Heisenberg chains via distance between states. Zbl 1483.81031
Silva, Saulo L. L.
2021
Flexible for multiple equations about GHZ states and A prototype case. Zbl 1483.81058
Wang, Chaonan; Li, Zexi; Zhu, Hongfeng
2021
Quantum and semi-quantum blind signature schemes based on entanglement swapping. Zbl 1490.81051
Chen, BingCai; Yan, LiLi
2021
Quantum voting scheme based on locally indistinguishable orthogonal product states. Zbl 1433.81032
Jiang, Dong-Huan; Wang, Juan; Liang, Xiang-Qian; Xu, Guang-Bao; Qi, Hong-Feng
2020
Quantum bidirectional teleportation $$2 \leftrightarrow 2$$ or $$2 \leftrightarrow 3$$ qubit teleportation protocol via 6-qubit entangled state. Zbl 1433.81047
Zhou, Ri-Gui; Li, Xin; Qian, Chen; Ian, Hou
2020
Quantum gate-based quantum private comparison. Zbl 1433.81033
Lang, Yan-Feng
2020
Efficient quantum secure direct communication protocol based on quantum channel compression. Zbl 1433.81062
Bebrov, Georgi; Dimova, Rozalina
2020
Three-party semi-quantum key agreement protocol. Zbl 1433.81075
Zhou, Nan-Run; Zhu, Kong-Ni; Wang, Yun-Qian
2020
Quantum blind signature scheme based on quantum walk. Zbl 1462.81050
Li, Xue-Yang; Chang, Yan; Zhang, Shi-Bin; Dai, Jin-Qiao; Zheng, Tao
2020
Multi-party quantum summation within a $$d$$-level quantum system. Zbl 1441.81035
Duan, Ming-Yi
2020
Semi-quantum secure direct communication using entanglement. Zbl 1441.81036
Rong, Zhenbang; Qiu, Daowen; Zou, Xiangfu
2020
A novel quantum identity authentication based on Bell states. Zbl 1433.81072
Zhang, Shun; Chen, Zhang-Kai; Shi, Run-Hua; Liang, Feng-Yu
2020
Some theoretically organized algorithm for quantum computers. Zbl 1436.68123
2020
A quantum proxy blind signature scheme based on superdense coding. Zbl 1435.81065
Niu, Xu-Feng; Ma, Wen-Ping; Chen, Bu-Qing; Liu, Ge; Wang, Qi-Zheng
2020
An encryption scheme based on discrete quantum map and continuous chaotic system. Zbl 1432.68125
Alghafis, Abdullah; Munir, Noor; Khan, Majid; Hussain, Iqtadar
2020
An evolutionary approach to optimizing teleportation cost in distributed quantum computation. Zbl 1432.68159
2020
Semi-quantum proxy signature scheme with quantum walk-based teleportation. Zbl 1458.94290
Zheng, Tao; Chang, Yan; Yan, Lili; Zhang, Shi-Bin
2020
Semi-quantum key distribution with single photons in both polarization and spatial-mode degrees of freedom. Zbl 1458.81015
Ye, Tian-Yu; Li, Hong-Kun; Hu, Jia-Li
2020
A novel protocol for bidirectional controlled quantum teleportation of two-qubit states via seven-qubit entangled state in noisy environment. Zbl 1433.81048
Zhou, Ri-Gui; Qian, Chen; Xu, Ruiqing
2020
Quantum codes derived from one-generator quasi-cyclic codes with Hermitian inner product. Zbl 1472.94097
Lv, Jingjie; Li, Ruihu; Wang, Junli
2020
Two forms schemes of deterministic remote state preparation for four-qubit cluster-type state. Zbl 1433.81046
Zha, Xin-Wei; Wang, Min-Rui; Jiang, Ruo-Xu
2020
Quantum private comparison protocol based on four-particle GHZ states. Zbl 1441.81081
Xu, Qiang-Da; Chen, Hua-Ying; Gong, Li-Hua; Zhou, Nan-Run
2020
Multi-user quantum private query protocol. Zbl 1480.81036
Ye, Tian-Yu; Li, Hong-Kun; Hu, Jia-Li
2020
Quantum private comparison without classical computation. Zbl 1458.81012
Lang, Yan-Feng
2020
Quantumness of bosonic field states. Zbl 1433.81173
Luo, Shunlong; Zhang, Yue
2020
Entanglement sudden death and birth effects in two qubits maximally entangled mixed states under quantum channels. Zbl 1439.81022
Sharma, Kapil K.; Gerdt, Vladimir P.
2020
Bidirectional quantum teleportation with GHZ states and EPR pairs via entanglement swapping. Zbl 1433.81040
Du, Zhenlong; Li, Xiaoli; Liu, Xuejun
2020
Randomized entangled mixed states from phase states. Zbl 1440.81020
Mansour, M.; Daoud, M.; Dahbi, Z.
2020
Quantum cyclic controlled teleportation of unknown states with arbitrary number of qubits by using seven-qubit entangled channel. Zbl 1435.81054
Sun, Shiya; Li, Lixin; Zhang, Huisheng
2020
Effect of quantum noise on teleportation of an arbitrary single-qubit state via a triparticle W state. Zbl 1435.81052
He, Liang-Ming; Wang, Nong; Zhou, Ping
2020
Tighter monogamy constraints in multi-qubit entanglement systems. Zbl 1435.81045
Liang, Yanying; Zhu, Chuan-Jie; Zheng, Zhu-Jun
2020
Improving the bidirectional quantum teleportation scheme via five-qubit cluster state. Zbl 1464.81015
Yuan, Hao; Pan, Guo-zhu
2020
A quantum dialogue protocol in discrete-time quantum walk based on hyperentangled states. Zbl 1462.81044
Liu, Fen; Zhang, Xin; Xu, Peng-Ao; He, Zhen-Xing; Ma, Hong-Yang
2020
Distribution of additive quantum resources. Zbl 1462.81031
Gao, Dong-Mei; Liu, Feng
2020
Quantum secret sharing protocol using maximally entangled multi-qudit states. Zbl 1462.81071
Mansour, M.; Dahbi, Z.
2020
...and 664 more Documents
### Cited by 14,049 Authors

By field (MSC):

- 7,551 Quantum theory (81-XX)
- 4,254 Relativity and gravitational theory (83-XX)
- 1,220 Information and communication theory, circuits (94-XX)
- 1,126 Differential geometry (53-XX)
- 956 Partial differential equations (35-XX)
- 884 Mathematical logic and foundations (03-XX)
- 697 Statistical mechanics, structure of matter (82-XX)
- 680 Computer science (68-XX)
- 657 Order, lattices, ordered algebraic structures (06-XX)
- 641 Mechanics of particles and systems (70-XX)
- 574 Dynamical systems and ergodic theory (37-XX)
- 512 Functional analysis (46-XX)
- 423 Ordinary differential equations (34-XX)
- 405 Global analysis, analysis on manifolds (58-XX)
- 385 Nonassociative rings and algebras (17-XX)
- 368 Astronomy and astrophysics (85-XX)
- 328 Optics, electromagnetic theory (78-XX)
- 325 Probability theory and stochastic processes (60-XX)
- 321 Classical thermodynamics, heat transfer (80-XX)
- 306 Fluid mechanics (76-XX)
- 301 Operator theory (47-XX)
- 265 Numerical analysis (65-XX)
- 209 Game theory, economics, finance, and other social and behavioral sciences (91-XX)
- 205 Linear and multilinear algebra; matrix theory (15-XX)
- 200 Statistics (62-XX)
- 179 Real functions (26-XX)
- 174 Topological groups, Lie groups (22-XX)
- 165 Category theory; homological algebra (18-XX)
- 157 Measure and integration (28-XX)
- 138 Biology and other natural sciences (92-XX)
- 132 General and overarching topics; collections (00-XX)
- 132 Special functions (33-XX)
- 125 Systems theory; control (93-XX)
- 117 Number theory (11-XX)
- 116 Associative rings and algebras (16-XX)
- 110 Mechanics of deformable solids (74-XX)
- 99 General topology (54-XX)
- 97 Algebraic geometry (14-XX)
- 84 Group theory and generalizations (20-XX)
- 80 Manifolds and cell complexes (57-XX)
- 75 Combinatorics (05-XX)
- 74 Algebraic topology (55-XX)
- 62 Calculus of variations and optimal control; optimization (49-XX)
- 57 Difference and functional equations (39-XX)
- 54 Harmonic analysis on Euclidean spaces (42-XX)
- 49 General algebraic systems (08-XX)
- 44 Geometry (51-XX)
- 39 Operations research, mathematical programming (90-XX)
- 38 Functions of a complex variable (30-XX)
- 37 Integral equations (45-XX)
- 34 History and biography (01-XX)
- 31 Several complex variables and analytic spaces (32-XX)
- 23 Geophysics (86-XX)
- 18 Commutative algebra (13-XX)
- 17 Integral transforms, operational calculus (44-XX)
- 14 Approximations and expansions (41-XX)
- 14 Convex and discrete geometry (52-XX)
- 11 Field theory and polynomials (12-XX)
- 10 Abstract harmonic analysis (43-XX)
- 7 Potential theory (31-XX)
- 7 Sequences, series, summability (40-XX)
- 2 $$K$$-theory (19-XX)
- 1 Mathematics education (97-XX)

| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5997117757797241, "perplexity": 11660.642914424241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337404.30/warc/CC-MAIN-20221003070342-20221003100342-00498.warc.gz"}
http://stsievert.com/blog/2014/05/15/Scientific-Python-tips-and-tricks/ | You want to pick up Python. But it’s hard and confusing to pick up a whole new framework. You want to try and switch, but it’s too much effort and takes too much time, so you stick with MATLAB. I essentially grew up on Python, meaning I can guide you to some solid resources and hand over tips and tricks I’ve learned.
This guide aims to ease that process a bit by showing tips and tricks within Python. This guide is not a full switch-to-Python guide. There are plenty of resources for that, including some wonderful SciPy lectures, detailed guides to the same material, and Python for MATLAB users. Those links are all useful and worth looking at.
For an intro to Python, including types, the scope, functions and optional keywords and syntax (string addition, etc), look at the Python docs.
But, that said, I'll share my most valuable tips and tricks I learned from looking at the resources above. These do not serve as a complete replacement for those resources! I want to emphasize that.
## Installation
I recommend you install Anaconda. Essentially, all this amounts to is running bash <downloaded file>, but complete instructions can be found on Anaconda’s website.
This would be easiest if you’re familiar with the command line. The basics involve using cd to navigate directories, bash <command> to run files and man <command> to find help, but more of the basics can be found with this tutorial.
### Interpreters
The land of Python has many interpreters, aligning with the Unix philosophy. But at first, it can seem confusing: you’re presented with the default python shell, bpython, IPython’s shell, notebook and QtConsole.
I recommend IPython most; it seems the most connected with scientific computing. But which one of IPython's shells should you use? They all have their pros and cons, but the QtConsole wins for plain interpreters. Spyder is an alternative out there (an IDE, which is why I haven't used it much) that tries to present a MATLAB-like GUI. I do know it's possible to have IPython's QtConsole in Spyder.
EDIT: Apparently Spyder includes IPython’s QtConsole by default.
### QtConsole
This is what I most highly recommend. It allows you to see plots inline. Let me repeat that: you can plot inline. To see what I mean, here’s an example:
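A session along these lines shows the idea (the sine curve is arbitrary; the %matplotlib magic is what turns the feature on):

```python
# In an IPython QtConsole session:
# %matplotlib inline      <- IPython magic: render figures inline

import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 2 * np.pi, 200)
plt.plot(t, np.sin(t))    # the figure appears right below the input
plt.title("sin(t)")
plt.show()
```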
I’ve only found one area where it’s lacking. The issue is so small, I won’t mention it.
### Notebook
Great for sharing results. Provides an interface friendly to reading code, LaTeX, markdown and images side-by-side. However, it’s not so great to develop code in.
### IPython magic
Normally in Python, you have to call something like execfile(filename) to run a script. If you use IPython, you have access to %run. I've found it most useful for inspecting global variables after the script has run. IPython even has other useful tools, including %debug (debug after an error occurred, acting like it just occurred), !<shell-command> and function?/function?? for help on a function. The docs on magics are handy.
### My personal setup
I typically have MacVim and IPython's QtConsole (using a special applescript to open; saves opening up iTerm.app) visible and open with an external monitor to look at documentation. A typical script looks like
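Something like the following can stand in for that script (the sinusoids and the name script.py are illustrative; the only thing the next paragraph relies on is that it defines a 2-D array z):

```python
# script.py -- an illustrative stand-in
from pylab import *

N = 256
t = linspace(0, 1, N)
z = zeros((4, N))
for i in range(4):
    z[i, :] = sin(2 * pi * (i + 1) * t)  # one sinusoid per row
```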
I can then run this script in IPython’s QtConsole with %run script.py (using a handy Keyboard Maestro shortcut to switch windows and enter %run) and then can query the program, meaning I can type z in the QtConsole and see what z is or even plot(z[0,:]). This is a simple script, but this functionality is priceless in larger and more complicated scripts.
## pylab
Pylab’s goal is to bring a MATLAB-like interface to Python. They largely succeed. With pylab, Python can almost serve as a drop-in replacement for MATLAB. You call x = ones(N) in MATLAB; you can do the same with pylab.
One area where it isn't a drop-in replacement is division. In Python 2, 1/2 == 0 through integer division, while in MATLAB (and the way it should be), 1/2 == 0.5. In Python, when integer division (int/int --> int) is what you actually want, you can write 1//2 explicitly.
To present a nearly drop-in MATLAB interface, use the following code
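```python
from pylab import *
```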
This from pylab import * is frowned upon. The Zen of Python says
Namespaces are a honking great idea – let’s use more of those!
meaning that from package import * shouldn't be used with any package. It's best to use import pylab as p but that's kinda annoying and gets messy in long lines with lots of function calls. I use from pylab import *; I'm guessing you'll do the same. If I'm wondering if a function exists, I try calling it and see what happens; oftentimes, I'm surprised.
## Parallelism
Parallelism is a big deal to the scientific community: the code we have takes hours to run and we want to speed it up. Since for-loops can be sped up a ton by parallelism if each iteration is independent, there are plenty of tools out there to parallelize code, including IPython's parallelization toolbox.
But, this is still slightly confusing and seems like a pain to execute. I recently stumbled across a method to parallelize a function in one line. Basically, all you do is the following:
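In rough outline, it looks like this (the squaring function is a placeholder for any per-item work whose iterations are independent of each other):

```python
from multiprocessing import Pool  # multiprocessing.dummy.Pool gives threads instead

def work(x):
    return x * x  # stand-in for an expensive, independent computation

if __name__ == "__main__":
    pool = Pool(4)                        # four worker processes
    results = pool.map(work, range(100))  # the parallel "one line"
    pool.close()
    pool.join()
    print(results[:5])
```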
The link above goes into more detail; I’ll omit most of it. IPython’s parallelization toolkit also includes a map() interface.
## SymPy (+LaTeX printing!)
SymPy serves as a replacement for Mathematica (or at least it’s a close race). With SymPy, you have access to symbolic variables and can perform almost any function on them: differentiation, integration, etc. They support matrices of these symbolic variables and functions on them; it seems like a complete library.
Perhaps most attractive, you can even pretty print LaTeX or ASCII.
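A small taste (the Gaussian integral here is my own pick, nothing from any particular tutorial):

```python
import sympy as sp

x = sp.symbols('x')
result = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))  # Gaussian integral

print(result)                         # sqrt(pi)
print(sp.latex(result))               # \sqrt{\pi}
sp.pprint(sp.diff(sp.sin(x) / x, x))  # ASCII pretty-printing of a derivative
```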
I haven’t used this library much, but I’m sure there are good tutorials out there.
## Indexing
When indexing a two-dimensional numpy array, you often use something like array[y, x] (reversed for good reason!). The first index y selects the row while the second selects the column.
For example,
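with a small array of arbitrary values:

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

print(x[0][1])  # 2 -- first row, then its second element
print(x[0, 1])  # 2 -- the same element, without the extra brackets
print(x[1, :])  # [4 5 6] -- the entire second row
```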
This makes sense because you'd normally use x[0][1] to select the element in the 1st row and 2nd column. x[0,1] does the same thing but drops the unnecessary brackets. This is because Python selects the first object with the first index. Looking at our array, the first object is another array: the first row.
In MATLAB, indexing is 1-based, and perhaps most confusingly, MATLAB's array(x,y) is array[y,x] in Python. MATLAB also has a feature that allows you to select an element by its position in the total number of elements in an array. This linear indexing is useful for the Kronecker product: MATLAB stacks the columns when doing this, which is exactly the ordering kron relies on. To get this column-major linear indexing in Python, I use x.T.flat[i].
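To see that this really reproduces MATLAB's column-stacked order (the 2-by-3 array below is arbitrary):

```python
import numpy as np

x = np.array([[1, 2, 3],
              [4, 5, 6]])

# MATLAB's x(:) walks down the columns: 1, 4, 2, 5, 3, 6.
# x.T.flat visits the entries in that same column-major order:
print([x.T.flat[i] for i in range(x.size)])  # [1, 4, 2, 5, 3, 6]
```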
## @: Dot product operator
In any Python version <= 3.4, there's no dot product operator, unlike MATLAB's *. It's easy to multiply arrays element-wise through * in Python (and .* in MATLAB). But coming in Python 3.5 is a new dot product operator! The choices behind @ and the rationale are detailed in this PEP.
Until the scientific community slowly progresses towards Python 3.5, we’ll be stuck on lowly Python 2.7. Instinct would tell you to call dot(dot(x,y), z) to perform the dot product of $X \cdot Y \cdot Z$. But instead, you can use x.dot(y).dot(z). Much easier and much cleaner.
## Version Control
This is not really related to the scientific programming process; it applies to any file, whether it be in a programming language or not (a good example: LaTeX files).
Stealing from this list, if you’ve ever
• made a change to code, realised it was a mistake and wanted to revert back?
• lost code or had a backup that was too old?
• had to maintain multiple versions of a product?
• wanted to see the difference between two (or more) versions of your code?
• wanted to prove that a particular change broke or fixed a piece of code?
• wanted to review the history of some code?
• wanted to submit a change to someone else’s code?
• wanted to share your code, or let other people work on your code?
• wanted to see how much work is being done, and where, when and by whom?
• wanted to experiment with a new feature without interfering with working code?
then you need version control. Personally, I can’t imagine doing anything significant without source control. Whenever I’m writing a paper and working on almost any programming project, I use git commit -am "description" all the time. Source control is perhaps my biggest piece of advice.
Version control is normally a bit of a pain: you typically have to be familiar with the command line, and (with CVS, etc.) it can be an even bigger pain. Git (and its brother Github) is considered the easiest versioning tool to use.
They have a GUI to make version control simple. It's simple to commit changes, roll back changes and even branch to work on different features. The Github client is available for Mac and Windows, and many more Git GUIs are available.
They even offer private licenses for users in academia, which gives you up to five free private code repositories online. That makes for easy collaboration and sharing (another plus: access to Github Pages). There's a list of useful guides to getting started with Git/Github.
## drawnow
(shameless plug) MATLAB has a great feature that allows you to call drawnow to have a figure update (after calling a series of plot commands). I searched high and low for a similar syntax in Python. I couldn't find anything but matplotlib's animation frameworks, which didn't jibe with the global-scope ease I wanted. After a long and arduous search, I did find clf() and draw(). This is simple once you know about it, but a pain to find.
So, I created python-drawnow to make this functionality easily accessible. It easily allows you to view the results of an iterative (aka for-loop) process.
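The bare-matplotlib version of that pattern looks roughly like this (plt.pause is one common way to let the window refresh between frames; the drifting sine wave is just filler):

```python
import numpy as np
import matplotlib.pyplot as plt

plt.ion()                 # interactive mode, so drawing doesn't block
x = np.linspace(0, 2 * np.pi, 200)
for k in range(20):       # some iterative computation
    plt.clf()             # clear the previous frame
    plt.plot(x, np.sin(x + 0.3 * k))
    plt.draw()            # the MATLAB-drawnow moment
    plt.pause(0.05)       # give the GUI event loop time to render
```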
## Conclusion
As I stressed in the introduction, this guide is not meant to be a full introduction to Python; there are plenty of other tools to do that. There are many other tutorials on learning Python. These all cover the basics: syntax, scope, function definitions, etc. And of course, the documentation is also a great place to look (NumPy, SciPy, matplotlib). Failing that, a Google/stackoverflow search will likely solve your problem. Perhaps the best part: if you find a problem in a package and fix it, you can commit your changes and make it accessible globally! | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24705174565315247, "perplexity": 1906.335570434185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889798.67/warc/CC-MAIN-20180121001412-20180121021412-00417.warc.gz"}
http://math.stackexchange.com/questions/571188/probability-density-function-of-reallocations-in-bucket-allocation-algorithm | # Probability density function of reallocations in bucket allocation algorithm
Say I have $n$ objects and 100 small buckets. I assign the objects to buckets at random.
Each small bucket can fit at most 5 objects. To accommodate a 6th, the small bucket must be switched out for a medium bucket that can fit 10 objects. To accommodate an 11th, this again must be switched out for a large bucket that can fit 15 objects. This switching out is repeated with increasingly large buckets whose capacities grow in multiples of 5.
What is the probability density function of the number of switches required to accommodate $n$ objects?
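(For a purely empirical look, one can simulate the process directly; the function below and its parameters are only an illustration of the rules stated above, with n = 700 chosen arbitrarily.)

```python
import random
from collections import Counter

def switches_for_one_assignment(n, buckets=100, step=5):
    """Assign n objects uniformly at random and count bucket switches.

    A bucket that ends up holding k objects was switched out
    floor((k - 1) / step) times: once when object step + 1 arrived,
    again at 2 * step + 1, and so on.
    """
    counts = [0] * buckets
    for _ in range(n):
        counts[random.randrange(buckets)] += 1
    return sum((k - 1) // step for k in counts if k > 0)

trials = [switches_for_one_assignment(700) for _ in range(10000)]
for s, freq in sorted(Counter(trials).items()):
    print(s, freq / len(trials))  # empirical probability mass function
```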
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7490609288215637, "perplexity": 1003.9195350949817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119646849.7/warc/CC-MAIN-20141024030046-00092-ip-10-16-133-185.ec2.internal.warc.gz"}
http://openstudy.com/updates/4f188fd0e4b00328e4c53d30 | ## anonymous (4 years ago)

A shadow of length L is created by an 850-foot building when the sun is theta degrees above the horizon.
(a) Write L as a function of $\theta$. (b) The angle measure increases in equal increments in the table. Does the length of the shadow change in equal increments? Explain.
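(For part (a), right-triangle trigonometry gives L = 850/tan(theta); a quick numeric check, with equal 10-degree steps chosen arbitrarily, suggests the answer to (b) is no:)

```python
import math

def shadow_length(theta_deg, height=850.0):
    """L(theta) = height / tan(theta), theta in degrees above the horizon."""
    return height / math.tan(math.radians(theta_deg))

prev = None
for theta in (10, 20, 30, 40, 50):  # equal angle increments
    L = shadow_length(theta)
    print(theta, round(L, 1), None if prev is None else round(prev - L, 1))
    prev = L
# The successive drops (~2485, 863, 459, 300 feet) are far from equal:
# tan is nonlinear, so equal angle steps give unequal changes in L.
```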
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931981563568115, "perplexity": 6290.215289524777}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722951.82/warc/CC-MAIN-20161020183842-00122-ip-10-171-6-4.ec2.internal.warc.gz"}
https://amathew.wordpress.com/2009/11/30/a-primer-on-harmonic-functions/ | The topic for the next few weeks will be Riemann surfaces. First, however, I need to briefly review harmonic functions because I will be talking about harmonic forms. I will have more to say about them later, and I actually won’t use most of today’s post even until then. But it’s fun.
Some of this material has also been covered by hilbertthm90 at A Mind for Madness.
Definition
A ${C^2}$ function ${f}$ on an open subset of ${\mathbb{R}^n}$, ${n >1}$, is called harmonic if it satisfies the Laplace equation $\displaystyle \Delta f = \sum \frac{\partial^2 f}{\partial x_i^2} = 0.$ For now, we are primarily interested in the case ${n=2}$, and we will identify ${\mathbb{R}^2}$ with ${\mathbb{C}}$. In this case, as is well-known, harmonic functions are locally the real parts of holomorphic functions.
The Poisson Integral
The following fact is well-known: given a continuous function ${f}$ on the circle ${C_1(0)}$, there is a unique continuous function on the closed unit disk ${\overline{U}}$ which is harmonic in the interior and coincides with ${f}$ on the boundary. The idea of the proof is that ${f}$ can be represented as a Fourier series,
$\displaystyle f(e^{it}) = \sum_{n \in \mathbb{Z}} c_n e^{int}$
where the ${c_n}$ are obtained through the orthogonality relations
$\displaystyle c_n = ( f, e^{-int} )$
where the inner product is the ${L^2}$ product taken with respect to the Haar measure on the circle group. This convergence holds in ${L^2}$, because the exponentials form an orthonormal basis for that space. Indeed, orthonormality can be checked by integration, and the Stone-Weierstrass theorem implies their linear combinations are dense in the space of continuous functions on the circle. It is even the case that convergence holds uniformly if ${f}$ is well-behaved (say, ${C^2}$). But this is only for motivational purposes, and I refer anyone interested to, say, Zygmund's book on trigonometric series for a whole lot of such results.
Now, it is clear that the functions $\displaystyle z \rightarrow r^n e^{int}, \ z \rightarrow r^n e^{-int}$
are harmonic (where ${t = Arg(z), r = |z|}$) as the real parts of ${z^n, \bar{z}^n}$.
It thus makes sense to define the extended function ${\tilde{f}}$ as $\displaystyle \tilde{f}(re^{it}) = \sum_n c_n r^{|n|} e^{int}.$
Thus, writing ${F(r,t) := \tilde{f}(re^{it})}$, we find
$\displaystyle F(r,t) = \frac{1}{2\pi} \int_0^{2\pi} \sum_n f(e^{ix}) e^{-inx} e^{int} r^{|n|} dx$ which implies
$\displaystyle \boxed{ F(r,t) = \frac{1}{2\pi} \int_0^{2\pi} f(e^{ix}) P_r(t-x) \, dx }$
where
$\displaystyle P_r(y) = \sum_n r^{|n|} e^{iny} = \frac{1-r^2}{1-2r\cos y + r^2}$
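(As a quick numerical sanity check of this identity, with arbitrary values of ${r}$ and ${y}$, the truncated series matches the closed form to machine precision:)

```python
import numpy as np

def poisson_series(r, y, N=200):
    """Truncated sum over |n| <= N of r^|n| e^{i n y}."""
    n = np.arange(-N, N + 1)
    return np.sum(r ** np.abs(n) * np.exp(1j * n * y)).real

def poisson_closed(r, y):
    return (1 - r**2) / (1 - 2 * r * np.cos(y) + r**2)

r, y = 0.7, 1.3
print(poisson_series(r, y), poisson_closed(r, y))
```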
Theorem 1 The function ${\tilde{f}}$ is continuous in ${\bar{U}}$, harmonic in the interior ${U}$, and equal to ${f}$ on the boundary.
I’m not going to actually fully prove the theorem; the basic idea is that the ${P_r}$ are all of norm ${1}$ (because they are nonnegative and of integral 1), so we have ${||\tilde{f}||_{\infty} \leq ||f||_{\infty}}$. Consequently the result follows from its counterpart on trigonometric polynomials, which is evident since the Fourier series is finite! The approximation result proves useful again.
The Poisson integral shows that it is possible to solve the Dirichlet problem for the disk: that is, one can extend a continuous function on the boundary to a harmonic function. This does not work if ${D}$ is the deleted disk, because there is no harmonic function ${f}$ vanishing on ${\{z: |z|=1\}}$ with ${f(0)=1}$. (Cf. the maximum principle below.)
It is in fact the case that ${\tilde{f}}$ is the only such harmonic function satisfying the conclusions of the theorem. This follows from the maximum principle: a nonconstant harmonic function has no local maxima, which in turn follows from the Laplace equation and the second derivative test. If ${f}$ is harmonic and has a local maximum at ${0}$, so does ${h:=f + \epsilon(x^2+y^2)}$ for ${\epsilon>0}$ small; however, ${\Delta h = \Delta f + 4\epsilon = 4\epsilon > 0}$, while the second derivative test forces ${\Delta h \leq 0}$ at an interior maximum, a contradiction.
In particular, in view of the expression for the Poisson kernel, a harmonic function is necessarily smooth on its domain; apparently this is more generally true for solutions to elliptic PDEs, but I haven't learned about them yet. This is probably one of the most important facts for us in the next few posts.
The mean value property
A harmonic function ${f}$ must satisfy on any disk ${D_r(a)}$ in its domain
$\displaystyle \boxed{f(a) = \frac{1}{2\pi} \int_0^{2\pi} f(a + re^{it}) dt.}$
This is in fact a corollary of the Poisson formula and uniqueness above. The mean value property is actually a sufficient condition for harmonicity (together with, say, continuity), but that is not necessary for us, and I refer anyone interested to Rudin’s Real and Complex Analysis.
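(To see the mean value property numerically: ${\mathrm{Re}(z^2)}$ is harmonic, and averaging it over an arbitrarily chosen circle reproduces the value at the center; this is a quick check, not part of any proof.)

```python
import numpy as np

f = lambda z: (z ** 2).real  # Re(z^2) = x^2 - y^2 is harmonic

a, r = 0.3 + 0.4j, 0.25
t = np.linspace(0, 2 * np.pi, 100000, endpoint=False)
print(f(a), np.mean(f(a + r * np.exp(1j * t))))  # both equal -0.07
```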
The Harnack principle
Positive harmonic functions take comparable values on compact subsets of their domains, according to the next result:
Theorem 2 (Harnack)
Let ${D_0}$ be a compact subset of the open region ${D}$. Then there is a positive constant ${c := c(D_0,D)}$ such that for any positive harmonic function ${f}$ on ${D}$ and ${z,z' \in D_0}$, $\displaystyle c^{-1} \leq \frac{f(z)}{f(z')} \leq c .$
If ${r<1}$ is sufficiently small, there is a positive constant ${c_0}$ such that $\displaystyle c_0^{-1} \leq P_{r'}(t) \leq c_0, \ \mathrm{when} \ r' \leq r , \forall t.$ This is evident, e.g., from the power series expression. (We actually could have said somewhat more.)
So let ${U}$ be the unit disk and ${U'}$ a small proper subdisk. If ${f}$ is a positive harmonic function on ${U}$, the above observation implies that the values of ${f}$ on ${U'}$ satisfy $\displaystyle d^{-1} \leq \frac{f(z)}{f(z')} \leq d$ for some ${d>0}$. In particular, we get Harnack's theorem locally. By covering ${D_0}$ with a finite number of overlapping disks, we get the theorem globally. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 67, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882336854934692, "perplexity": 167.42518880335666}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00127.warc.gz"}
http://www.scstatehouse.gov/sess119_2011-2012/sj11/20110726.htm | South Carolina General Assembly
119th Session, 2011-2012
Journal of the Senate
Tuesday, July 26, 2011
(Statewide Session)
Indicates Matter Stricken
Indicates New Matter
The Senate assembled at 2:00 P.M., the hour to which it stood adjourned, and was called to order by the PRESIDENT.
A quorum being present, the proceedings were opened with a devotion by the Chaplain as follows:
Almost hoping beyond hope, Hosea declared to his people:
Bind Your heart with me as we pray, friends:
Glorious Lord, these faithful servants and their aides have again returned to this Senate Chamber. Strengthen them all as they carry on with their work on behalf of this State. Moreover, give each Senator the sort of wisdom that You alone can provide; may their decisions enrich the life of every South Carolinian. And as always, dear Lord, be with our women and men in uniform who themselves serve in so many places around the globe, often in harm's way. Truly, may we all continue to honor You through everything we say and do in this place! In Your loving name we pray, O Lord.
Amen.
The PRESIDENT called for Petitions, Memorials, Presentments of Grand Juries and such like papers.
Leave of Absence
At 2:15 P.M., Senator BRIGHT requested a leave of absence for the balance of the day.
S. 274 (Word version) Sen. Cromer
INTRODUCTION OF BILLS AND RESOLUTIONS
The following were introduced:
S. 995 (Word version) -- Senator Peeler: A SENATE RESOLUTION TO RECOGNIZE THE TIMKEN GAFFNEY BEARING PLANT UPON ITS FORTIETH ANNIVERSARY AND TO WISH IT CONTINUED SUCCESS IN THE PALMETTO STATE.
l:\s-res\hsp\026timk.mrh.hsp.docx
S. 996 (Word version) -- Senator Leatherman: A SENATE RESOLUTION TO HONOR AND CONGRATULATE BREE BOYCE UPON BEING CROWNED MISS SOUTH CAROLINA 2011 AND TO WISH HER WELL IN ALL HER FUTURE ENDEAVORS.
l:\s-res\hkl\009boyc.mrh.hkl.docx
S. 997 (Word version) -- Senator Nicholson: A SENATE RESOLUTION TO RECOGNIZE OLD MOUNT ZION BAPTIST CHURCH OF THE EPWORTH COMMUNITY IN GREENWOOD COUNTY ON THE OCCASION OF ITS HISTORIC ONE HUNDRED FIFTIETH ANNIVERSARY AND TO COMMEND THE CHURCH FOR A CENTURY AND A HALF OF SERVICE TO THE COMMUNITY.
l:\council\bills\swb\6334cm11.docx
S. 998 (Word version) -- Senator Lourie: A SENATE RESOLUTION TO HONOR JAMES HOWARD "JIM" FOSTER FOR HIS TWO DECADES OF DEDICATED SERVICE AS SOUTH CAROLINA DEPARTMENT OF EDUCATION DIRECTOR OF COMMUNICATIONS, TO CONGRATULATE HIM ON HIS NEW POSITION AS DIRECTOR FOR SCHOOL AND COMMUNITY SERVICES WITH THE BEAUFORT COUNTY SCHOOL DISTRICT, AND TO WISH HIM MUCH SUCCESS IN ALL HIS FUTURE ENDEAVORS.
l:\council\bills\rm\1301ab11.docx
RATIFICATION OF ACTS
Pursuant to an invitation the Honorable Speaker and House of Representatives appeared in the Senate Chamber on July 26, 2011, at 2:15 P.M. and the following Acts were ratified:
L:\COUNCIL\ACTS\172DG11.DOCX
(R110, H. 3792 (Word version)) -- Rep. Rutherford: AN ACT TO AMEND SECTION 50-21-85, CODE OF LAWS OF SOUTH CAROLINA, 1976, RELATING TO THE CONDITIONS UPON WHICH A PERSON MAY OPERATE A VESSEL DISPLAYING, REFLECTING, OR FLASHING A BLUE LIGHT, SO AS TO REVISE THE CIRCUMSTANCES IN WHICH A PERSON MAY OPERATE A VESSEL WHILE DISPLAYING A BLUE LIGHT, AND TO REVISE THE PENALTY PROVISION.
L:\COUNCIL\ACTS\3792CM11.DOCX
RECESS
At 2:17 P.M., Senator McCONNELL moved that the Senate stand in recess for no more than thirty minutes or upon receipt of H. 3992 from the House of Representatives, whichever occurred first.
The motion to recede was adopted.
At 2:52 P.M., the Senate reconvened.
RECESS
At 2:53 P.M., on motion of Senator McCONNELL, the Senate receded from business not to exceed thirty minutes.
At 3:24 P.M., the Senate resumed.
On motion of Senator McCONNELL, the Senate agreed to waive the provisions of Rule 32A requiring H. 3992 to be printed on the Calendar.
The Bill was ordered placed in the category of Bills Returned from the House and would be taken up for consideration when that category was reached in the order of the day.
RECOMMITTED
S. 814 (Word version) -- Senators McConnell, Ford, L. Martin, Hutto, Malloy, Cleary and Shoopman: A BILL TO AMEND SECTION 1-1-715, CODE OF LAWS OF SOUTH CAROLINA, 1976, RELATING TO ADOPTION OF THE UNITED STATES CENSUS, SO AS TO ADOPT THE UNITED STATES CENSUS OF 2010 AS THE TRUE AND CORRECT ENUMERATION OF INHABITANTS OF THIS STATE; TO ADD SECTION 7-19-35, SO AS TO ESTABLISH SEVEN ELECTION DISTRICTS FROM WHICH MEMBERS OF CONGRESS FOR SOUTH CAROLINA ARE ELECTED COMMENCING WITH THE 2012 GENERAL ELECTION; TO REPEAL SECTION 7-19-40, AS AMENDED, RELATING TO CONGRESSIONAL DISTRICTS FROM WHICH SOUTH CAROLINA MEMBERS OF CONGRESS WERE FORMERLY ELECTED; AND TO JOINTLY DESIGNATE THE PRESIDENT PRO TEMPORE OF THE SENATE AND THE SPEAKER OF THE HOUSE OF REPRESENTATIVES AS THE APPROPRIATE OFFICIALS OF THE SUBMITTING AUTHORITY TO MAKE THE REQUIRED SUBMISSION OF THE CONGRESSIONAL REAPPORTIONMENT PLAN TO THE UNITED STATES DEPARTMENT OF JUSTICE UNDER THE VOTING RIGHTS ACT.
Senator McCONNELL asked unanimous consent to commit the Bill to the Committee on Judiciary.
There was no objection and the Bill was recommitted to the Committee on Judiciary.
CONCURRENCE
H. 3992 (Word version) -- Reps. Harrell, Lucas, Harrison, Clemmons, Barfield, Cooper, Hardwick, Owens, Sandifer, G.R. Smith, J.R. Smith, White, Bingham and Erickson: A BILL TO AMEND SECTION 1-1-715, CODE OF LAWS OF SOUTH CAROLINA, 1976, RELATING TO ADOPTION OF THE UNITED STATES CENSUS, SO AS TO ADOPT THE UNITED STATES CENSUS OF 2010 AS THE TRUE AND CORRECT ENUMERATION OF INHABITANTS OF THIS STATE; TO ADD SECTION 7-19-35, SO AS TO ESTABLISH SEVEN ELECTION DISTRICTS FROM WHICH MEMBERS OF CONGRESS FOR SOUTH CAROLINA ARE ELECTED COMMENCING WITH THE 2012 GENERAL ELECTION; TO REPEAL SECTION 7-19-40, AS AMENDED, RELATING TO CONGRESSIONAL DISTRICTS FROM WHICH SOUTH CAROLINA MEMBERS OF CONGRESS WERE FORMERLY ELECTED; TO JOINTLY DESIGNATE THE PRESIDENT PRO TEMPORE OF THE SENATE AND THE SPEAKER OF THE HOUSE OF REPRESENTATIVES AS THE APPROPRIATE OFFICIALS OF THE SUBMITTING AUTHORITY TO MAKE THE REQUIRED SUBMISSION OF THE CONGRESSIONAL REAPPORTIONMENT PLAN TO THE UNITED STATES DEPARTMENT OF JUSTICE UNDER THE VOTING RIGHTS ACT; AND TO PROVIDE THAT A MEMBER OF ANY BOARD, COMMISSION, OR COMMITTEE REPRESENTING A CONGRESSIONAL DISTRICT, WHOSE RESIDENCY IS TRANSFERRED TO ANOTHER DISTRICT BY THIS ACT, MAY SERVE, OR CONTINUE TO SERVE HIS TERM IN OFFICE; HOWEVER, THE APPOINTING OR ELECTING AUTHORITY MAY ADD AN ADDITIONAL MEMBER ON A BOARD, COMMISSION, OR COMMITTEE WHICH LOSES A RESIDENT MEMBER.
The House returned the Bill with amendments.
Senator McCONNELL explained the House amendments.
Senator MALLOY spoke on the House amendments.
Remarks by Senator MALLOY
Thank you, Mr. PRESIDENT. Gentlemen of the Senate, this is one of the few opportunities that many of us get in a lifetime. In almost five decades of my life now, in watching this Senate from my Senate chair for almost the last decade, we have seen a lot of congratulations to our State for what has been done as far as creating and having enough population to have a new Congressional seat. What happened in this Congressional plan is just totally different. And, I realized that there was a need for this Senate to come together. For many of us that are here now, I hope that we see this moment again as our State continues to move forward toward prosperity. Hopefully with this growth, South Carolina will become a larger player on the national scene and that it will signify progress and development in our State to our Congressional folks. We'll have another task at that particular time. But this task now will last a decade.
My daddy always told me that a generation was generally 18 years. I didn't realize that. I thought it would be longer. But for the next generation, a child born today will have to actually take the remnants -- be a beneficiary, and get the residual results of what we do here going forward. We have long known that it depends on where you live and what's going on around you. The Pee Dee area is vastly different than the coast. It just is. It's a manufacturing and farming area. The folks that grew up in my hometown of Chesterfield -- you all see it on the map -- it is a big area. The area is almost as large as Horry County in land mass and the Senator from Kershaw and Chesterfield represents them. They have only 40 some odd thousand people in the whole county. But Horry County has almost 270,000 people. A child going home from college may go over to McLeod Farms and get a job at the peach orchard. A child going home from college may go over to Horry County and may go to work on a golf course, or may work in a coastal area. The person with a summer job will end up doing what they can do in the manufacturing field, in a family business. It's just different in the areas where the area of Horry is 270,000. I submit to you as I go through the numbers and as I look at that area, it is larger than Marion, Dillon, Marlboro and Chesterfield Counties combined. Horry and Georgetown -- those two little areas on the coast -- will make up half of the population in a Congressional seat.
Many of you know that I've been fortunate to be able to practice law and be a lawyer. In the law, there's a lady that we pay a lot of attention to -- Lady Justice. What Lady Justice does is look at things with a blindfold on. You are sitting in a jury and they are the judges of the facts. So that way you don't have bias. You don't have prejudice. You take your ordinary experience and you are supposed to look at things objectively as they come from the witness chair. It's a great part of our system. This is the process. We have the Voting Rights Act in South Carolina because of our past behavior. What that means is they are simply looking over our shoulders to see what we do. And, I appreciate the communities of interest. I appreciate everybody watching the same television station or whatever. That's not what it's about. It's about whether or not we put these districts together on race and how we are counting. But the plan that just came back from the House of Representatives increased the numbers in the 6th District that's represented by Congressman Clyburn at this point in time to over 55%. There's nothing that we take a vote on in the positive that takes the number up to that amount. The numbers speak for themselves. It's some 55 odd percent. The earlier plan that we had, I think, was 52%. We'll end up having those figures because I want those to be part of the matters of the record that we end up having. The plan we have in the 6th District was 50.49% African American. The House amendment plan that just passed was 55.18%.
The Senate plan that came out had 29.7% African American. The House plan that just came back to us was 27.64% -- a full 2 percentage points less the 1st District's from 21.06%. And that's represented, don't forget, by a young African American who happens to be a Republican -- Congressman Scott. We look at the House plan that just passed and see it has 12 precincts that are divided and the Senate plan had only eight. As I look over the plan now, I'm curious as to the debate that we had. I'm interested in obviously the areas that I represent. But as I go forward and I look to what happened over in Edgefield and Aiken, it was very important during our debate that we had those together for several reasons. But now they are going to be divided. What happened to the fact that we never wanted large counties to dominate smaller counties? And the Bill that was passed by the Senate had Horry, Charleston and Georgetown, the coastal areas and they actually balanced one another. But Greenville was split. Spartanburg was split and Richland was also split, but we're going to make Horry whole. This process is the process that it is. I want to thank the Senator on the other side of the aisle that we did engage in a discussion that was meaningful, that we had the opportunity to bring the matter to a vote. We did vote for a plan that came out of the Senate that was totally adverse to the plan that came out of the House, and by a majority of the Senate. What's unusual is -- oh, what a difference a few days makes.
I have to somewhat apologize in advance. But I do have some editorials that have been published by members of this body. And it troubles me a bit, but I feel compelled to talk about them. There is an old adage in the practice of law that says, "When the facts are on your side, you argue the facts. And when the law is on your side, you argue the law. When neither is on your side, you bang the table and raise cane." There's been banging on the table and raising cane since we left the Senate the last time because I know that those that are voting for this plan cannot be happy. You cannot be happy for the same arguments that you made before we left. I can't presuppose that I would be able to represent any of the other counties because I don't know the constituents. But the numbers in the Greenville-Spartanburg area have changed considerably, where Greenville had a larger population mix in the Senate passed plan than what it has now.
I don't presuppose to be able to say what's best for your county, but we know where the population growth is going. The growth is not in the areas outside of Greenville and others. And so one would think that if you were looking into the trends, how are you going to have the growth in Greenville without having that area become more dominant in the 3rd District? It's growing faster than the rest of the counties. It is. And the votes are being changed. I guess we'll see. I have been through Laurens County. I enjoy it. I go through there sometimes whenever I'm making my way up to Clemson where I love to go and watch the football games. I know that the Congressman from there is a nice young man. I don't know him well. Greenville becomes a different animal because Greenville then grows in population. You watch it. You get a Congressman from Greenville. What's going on in Oconee and Pickens, in that area? Again, I'm just making the point. I don't live there. I am not trying to end up throwing the dagger at any of my colleagues. I have great respect for each of you. But I just asked the question. Edgefield and Aiken have already been touched. Beaufort is getting ready to be divided if we vote for this plan. There was a district under the Senate plan about communities of interest. I have to read from part of an article that has been authored by my good friend, the Senator from Berkeley. In a pertinent part it says the debate centers on this plan. The first plan drew the 7th District to include Horry County and much of the Pee Dee. However, the 6th District represented by Representative Jim Clyburn meandered from the farms of Richland County some 130 miles to the Charleston peninsula and more than 150 miles from the Sumter-Florence lines to the Georgia-South Carolina border at the Savannah River.
The House plan split Berkeley. It split Dorchester, Colleton and Beaufort Counties. It split at least six more counties dividing many along racial lines. In contrast, the plan that was sent by the Senate (I start back to quote), made Colleton, Beaufort, Jasper, Allendale and Barnwell Counties whole. They had a chance for a new 7th District. Coastal Georgetown remains in the 1st District while inland areas join Williamsburg in the 7th. And it goes on to say -- and I planned on obviously having this as a matter of the record that I think should be included in the Journal and end up having those posted -- so, that in the end we'll be able to end up doing just that. I want to go on and talk a little bit about what has actually happened.
I have to end up quoting from an article that was put in circulation with all due respect for all of the folks that are involved in this because I realize that we are honorable people here. I have great respect for each and every person that has put themselves here to be in this Senate. I quote from the article published on July 7th by my good friend, Senator DAVIS. Basically, it says -- and the article speaks for itself -- "One of the worst kept secrets in State politics is that Myrtle Beach Representative Alan Clemmons is running for the yet unrealized 7th Congressional district. Clemmons is chairman of the house subcommittee that drafted the plan and crafted himself exactly a district he could win." That's what Clemmons did. The House created a new district stretching from Myrtle Beach into the democratic Pee Dee area -- a district created for a more moderate Republican. That House approved plan was developed in conjunction with U.S. Representative Jim Clyburn and members of his staff. It chops Beaufort County into two pieces, gutting its political relevance. I don't have to go into the entire article, but I'm just saying that's what changed. This plan is vastly similar to what we got from the House plan. Horry County and Georgetown County are still dominant in this redistricting plan. So, I submit both of these matters. And there is no need for me to go through all of these line by line, because I realize sometimes attitudes change and issues change and facts change.
What I do want to tell you are a few things that absolutely did change. I want to make certain that these numbers are such that we don't understand the templates. We understand where many of the folks are. We had a couple of Congressional plans that came out of the Senate Judiciary. And I can't speak for what happened in the House of Representatives. But I will speak to what happened in the Senate Judiciary plan. In the Senate Judiciary plan -- and I'm going to focus primarily on the 6th and the 7th Districts -- the 7th District that was drafted initially, and these matters are part of the record that we have, the African American voting age population was 31.26% on this Senate staff plan number 1.
Senator SHEHEEN: Senator, what you are saying is -- our Republican colleagues have led the African American vote out of the 7th Congressional District under the amendments that occurred?
Senator MALLOY: I'm giving the numbers. It has gone down from my initial proposal. And basically we started out with the first proposal of 31 some odd percent and it's now down to 27.64%.
Senator SHEHEEN: Senator, I want you to look at this map with me. Look at Florence County there. Florence County is split. In the lower half of Calhoun County, Darlington County, Florence County -- do you know the demographic makeup of that county at all? Do you know why that was split like that?
Senator MALLOY: I do not know why it was split. But I know looking at the plans and looking at the resulting numbers, it appears that the numbers on the new plan have 27.64% where the earlier plans had a more African American population.
Senator SHEHEEN: And that would dilute African American voting in the 7th Congressional District?
Senator MALLOY: I think it would.
Senator SHEHEEN: If you look at Sumter County, do you see that split? Part of it is in the 7th and part of it in the 6th Congressional, is that right? You would agree with me the overall take of this plan is to dilute the African American voting strength in the 7th Congressional District versus even the plans originally presented, is that right? It appears from the plans the African American vote in the 7th District has decreased and the plan that had the African American vote in Number 6 has actually increased. Would you agree with me that if you look at this map, that generally the voting strength of small and rural counties is being diluted, except for perhaps in the 3rd Congressional District? In other words, under this plan the counties that get screwed are the rural and small counties? Is that fair to say?
Senator MALLOY: I think that's fair. I think that the rural areas are going to be eaten up by these larger districts. You're going to have a large district like Horry County, that is coupled with another county just like Charleston which both represent the coastal area. And the areas that you and I represent in the rural part of South Carolina, we just want representation.
Senator SHEHEEN: You understand that the coastal region of Horry County is vastly different than the more upcountry rural part of Horry County? And you understand as well that the coastal portions of Horry and Georgetown Counties have much in common with the coastal portions of the rest of the State? You would love to have a Pee Dee district, wouldn't you? That was truly Pee Dee where the small town and rural communities that really make up the Pee Dee had the ability to elect a Congressman?
Senator MALLOY: I think that's the only chance they will have to invigorate that area. And what is troubling is the fact that right in this district -- Horry and Georgetown comprise one-half of the district. And, in fact, they made a lot of arguments about NESA and the configuration of the counties as relates to NESA -- the North Eastern Strategic Alliance. The chair of NESA is from Williamsburg County who will lead the economic and prosperity of the Pee Dee and they put the chairman of NESA from Williamsburg County back into the 6th District. He's not even in the 7th District, which is in the Pee Dee area.
Senator SHEHEEN: Williamsburg County is part of the Pee Dee?
Senator MALLOY: That's correct.
Senator SHEHEEN: Williamsburg County has much in common with Darlington, Chesterfield, Marlboro, Dillon, Marion and even Florence Counties -- these areas that are Pee Dee areas -- rural, small-town communities, is that correct?
Senator MALLOY: Farming and manufacturing.
Senator SHEHEEN: But Williamsburg County also happens to have a large African American population which Florence and Sumter have likewise and are cut out of the 7th as well. It's pretty apparent what's going on here? What's going on is our Republican colleagues want to have six of the seven Congressional Districts likely electing Republicans, is that right?
Senator MALLOY: I don't know what their end game is, but I'll tell you that's what's likely to happen.
Senator SHEHEEN: That's what it appears to be from the map, is that right? It appears to be from the map the way to do that is to bleed out African American voting influence in the 7th Congressional District. And from the map, that appears to be what's occurred.
Senator MALLOY: It appears it has gone decreasingly lower under each plan that we have and what is growing is that the plan that we passed in the Senate by third quarters that we sent to the House of Representatives, the numbers in the districts have changed but the numbers are different. So it has been decreased. But the numbers in the only majority district that was there has actually increased. And I want to make a point. The issue is not to create, in my view, majority-minority districts. The ACLU drafted a plan that can't be ignored which created two majority-minority districts. So, what they showed us is that it can be done. The issue is what happens with what we're talking about -- these communities of interest. If you look at the landscape in South Carolina, and we look at what has happened to the electorate in South Carolina.
With two Congressional seats, the Democratic vote is well over 40% in this State. And so if you are well over 40%, why would a Congressional district then only yield one Democratic candidate that will win? That would have to be a majority district. One of the highlights I had in my life was whenever I ran for the Senate. The people in my area -- which was not a majority district -- proclaimed it differently. They elected a person like me from that district that was not a majority. They showed us that it could be done, which is critical in our situation because the Justice Department and the courts and others don't know what retrogressing is. But whenever it's seen as though we should be trying to have the fairness of "one person, one vote," and we met the criteria -- even at the 50% level or slightly above that -- was in the Senate plan. Now we go from 50.49% in the plan that we passed in the Senate up to 55.18%, another 5 percentage points.
I want to be careful when I tell you what 5 percentage points mean. It's well over 30-some odd thousand people. And so each percentage point, when you do the deviation down to a seat plus one, you're talking over 30-some odd thousand people with 5.5%. I mean it is almost a 5% deviation of 660,000 people. And so, that's what happened whenever you put those folks in a district that dilutes the voting strength in the adjoining district. I think that's the issue that we have to address. I want to make one point on this other district. In the Senate plan that we had -- Number Two -- the Senate Staff Judiciary Plan which we passed in District Number 4, the population at that time was 28.84% in the 7th District. That was in the other proposal. I have asked for the amendment that we had -- that we passed here that the Senator from Florence ended putting up because I wanted to see the African American population in that one. But we went from 28.84% under this plan to now 27.64%. Does it make a difference? You are talking about 1.2%. You are talking well over 7,000 people when that happens. That's what's happened. We have gradually gotten down to a lower African American population in the final vote that we are about to end up taking. My thanks to Mr. Terreni who has worked with us on this. I have known him 20 years and I appreciate his diligence in giving us the information. The plan that we had that we passed through an amendment was 28.12% African American in the 7th District. Now that plan has gone down to 27.64%. Again, the pattern is a decrease, diluting the voting strength in the population mix.
Senator McCONNELL: In the plans that we have adopted under that plan the population, the voting population in District 6, that's 27,000 new voters?
Senator MALLOY: I think it will be more than that. In the population it is probably a greater number. I'm not certain how many of the voting age, but I know that the population is at least right at 5%.
Senator McCONNELL: Tell me, is it also true that you and I served on the Senate Reapportionment Subcommittee?
Senator MALLOY: Yes.
Senator McCONNELL: During the testimony, for example, let's say in York County, in Spartanburg County, and even in Beaufort County, isn't it true that those residents of those counties thought that the new 7th District should be in their county and should be the flagship for the new Congressional district just like the people in Horry County thought it should be in Horry? Isn't it true we had least four counties bidding to have the new flagship district in their counties?
Senator MALLOY: Yes. We went through all of the hearings and it was a task, and they obviously wanted to end up having the anchor district in their home counties. But here is what we have in South Carolina, Senator. We have certain areas that are growing at the disproportionate rate as it compares to other areas. I'm very concerned about the rural areas. The rural areas in South Carolina are where we're losing the population. Do you know why? We don't have the development there. I have great concerns that in the area where I live, if it stays the way that it is now, we will have over half of the district being a part of the coastal county. That's 330,000 people.
Senator McCONNELL: I don't know whether or not you watched the news today, but did you know that in the past two years, African American household wealth has gone down by 52%? In other words, the wealth of the average African American household in South Carolina is $5,000, compared with our white counterparts at $89,000. The average income in Horry County is about $115,000. You are a trained lawyer -- a real astute lawyer. Can you tell me what they would have in common in Horry County, with a $115,000 average income, with Darlington, Chesterfield, Marlboro, and parts of Dillon and Marion Counties, and let's say about one-third of Florence County, where the average income is no more than $5,000?
Senator MALLOY: I've said it and over and over again -- the commonality in the areas of the Pee Dee is just not the same. I understand the arguments on the communities of interests, but with all due respect, it's what the courts said in 2000. In 2000, those communities were basically the same then and when the courts drew a plan which was in 2002, they drew Horry, Georgetown, and went up in Charleston, similar to the way that they came up into the Berkeley County area. But Horry and Georgetown Counties were there. What is odd is that that is a great contrast in the change that we have from that point in time until now. And so the Horry area then comes up so that they can dominate the Pee Dee as opposed to having competing areas in Charleston. With the Congressional seat from the 1st District now in Horry and Georgetown, well over 300,000 people are displaced from one district to another. And the question is -- is that politically expedient? I think that's the question that we have to end up posing.
You know, last time that we were in Session, the Senate did work. And I realize that there was some discussion as to where we should go, how we should end up dividing this district and how much time that it was going to end up taking us to do it. Basically, I begged then to please not to vote cloture on this matter because we need a chance to end up continuing debate. What happened then was there was a Bill and the Senate spoke at that point in time, which was a majority of the Senate that was present and voting. They voted for a plan that decreased the African American population in the 7th District, albeit in a different location. But they increased tremendously the vote in the protected district, which is a majority 6th District, and it went up 5 percentage points. So, that is the question that is presented to us. And, that is the question as to how well we wear it in our conscience? Are we actually voting like we would if we were Lady Justice and having blinders on and making certain that we were trying to be fair and making certain that we were trying to represent one party, one vote and making certain that we were not packing districts and making certain that we were not lightening districts? Are we going to end up voting with political expediency? The question is only what we answer in our own hearts.
Senator HUTTO: First of all, let me thank you for your efforts and what you have been doing on reapportionment. Back when we did this before, there were a couple of terms that we used that were relevant today to those efforts. We called those bleaching and stacking. So when you look at this plan, would you define it as a bleaching of districts, which means removing African Americans from it and then stacking them in certain districts?
Senator MALLOY: The point I have been laboriously going through is that from every proposal that we have termed as a 7th District has gotten progressively lesser in African American voting strength. We have culminated today to having at least 2 percentage points down from the 7th District as it was from the plan that we had passed. But even equally as important with the plans that have been reported out and the plans that we have looked at here in this body from the proposals that came out of Senate Judiciary to the amendment that was passed on the floor, the only one that was really passed, that actually got a majority vote and maybe one or two others, but the District 7, Congressional Seat 7 has gotten a progressively lesser African American vote. You would call that bleaching.
Senator HUTTO: It has increased in every one of those proposals and on the vote on the amendment. The increase is because of your good friend, my good friend, Congressman Clyburn, the one who represents what we called the 6th District, which is a protected district where the African American population has increased to the point that it was 55.18%. And that is almost 5 percentage points higher than the plan that we passed from the Senate. That's five percentage points -- over 30,000 thousand people. Would you not agree also under the Voting Rights Act, the African American community is the only protected class? Would you say they are better off under this plan or worse off?
Senator MALLOY: I think that's the ultimate question that we have to say overwhelmingly -- they are worse off under the plan that has just passed back from the House of Representatives than they were under the Senate plan that we had passed, and under every proposal that we had that would have reached a vote here in this body, particularly with the two proposals and the one amendment that we had, that we passed -- that came from the Senator from Florence.
Senator HUTTO: The other point I want to make is when you look at these districts, you look at retrogression. You look at African American majority districts. You also look at influential districts. When you look at this drawing, versus the current situation, the current 5th District was about what percentage under the current existing plan that Congressmen now get elected from? That's somewhere around 31 or 32%.
Senator MALLOY: Under the current court plan, the numbers that I have for voting age population is 29.41%. That's voting age population.
Senator HUTTO: So, if you take that as an influential district, are there any other districts that meet the criteria with 29% under this plan?
Senator MALLOY: There are not. Under the first plan -- and I think they are looking at voting age population in the 1st, it is 18.18%. Under the second plan it is 21.48%. That's the 2nd District. 3rd District is 17.93%. The 4th District is 18.23%. The 5th District is 24.46%. The 6th District is 55% in that protected district. And the 7th District is 27.64%. Nothing comes close to that number.
Senator HUTTO: So, in most of those districts there has been retrogression except we stacked the 6th Congressional District? In essence, if you were a minority in this State, from a political point of view, under this plan you would be worse off?
Senator MALLOY: It would be my view they are worse off under the plan that just came back from the House of Representatives than they were under the previous court-ordered plan and under the plans that we have previously passed in the Senate. So all we have to compare are the court-ordered plan we have from 2002 and the plan we have now. Numbers in the plan we have now are progressively less in African American population. I think the voting strength has been diluted. There has been an increase from the numbers that we have for every proposal as it relates to the 6th District, which is our protected district.
Senator HUTTO: Another thing on retrogression from a rural perspective, would you agree that the rural communities under this proposal are worse off than they were under the current proposal?
Senator MALLOY: I think certainly the rural district is losing the impact -- the voting strength -- particularly under the scenario that's been passed back from the House. I think that the rural interest for the one person has been diluted to end up for them to have a fair vote. They are worse off.
Senator HUTTO: Would you agree that if you wanted to diminish rural interest, you take all the small rural counties and put them where there is a predominantly Republican county? You really dilute their influence as a rural community?
Senator MALLOY: I think what we are seeing is that we are only growing in certain portions of the State. Let's take a look at the map that we have up there. Let's suppose that a Congressman comes out of Horry County. You look at one on the coast from Horry, one on the coast in Charleston, one up at the very top of the map almost into North Carolina, in the York County area. Then you come to the Richland and Lexington area where there are two. Then you go into the Laurens area where there is one. And then you would go right there in the Spartanburg area where there is another. So you could draw a line up there by the interstate with York and draw it all the way down, and you would get half of the State where you would not have a representative that will live there, but for the fact that there could be one from Horry County drawn around it. So, you have that area that would come in from Chesterfield, Kershaw, the Fairfield area, Lee County, Sumter, Williamsburg, and Dorchester -- all of that area with no representation. Suppose you have to live in a top part of Chesterfield or Marlboro County, where would you drive to see a Congressman? You have to go all the way down to the Horry area. Senator, the thing that's troublesome is simply this -- we are all in the political world. Let's suppose you have a Rotary meeting in one county with 250 people there, and you have a cattleman's association or a farm association where there are eight or ten people that want to come and talk to you. Which meeting will common sense and human behavior tell you to go to? You will go to the one with 200 or 300 people there. What happens is you are diluting their representation because those individuals will not be able to reach their Congressmen. Their Congressman is going to go to where those people are and that's the most people that get the most bang for the buck. That's how it's going to be. This is a sad day for rural South Carolina.
Senator HUTTO: It is very sad because if you are running for Congress, you can just campaign in Horry and Georgetown and forget the rest of it and still be elected? The influence factor when you take these small counties and lump them with the big county, then you negate any opportunity for them to have real influence in that district? All they are doing is making the numbers? So, in essence, we are moving backwards in terms of rural influence and backwards as far as minority influence under this plan? I think so. That's what makes it difficult for those who represent some rural interests, some poverty interests and some educational interests where they may not have those kinds of interests. Our voice will not be heard.
Senator MALLOY: I think we are going to have a very difficult time of our voices being heard, particularly in the rural areas in South Carolina. Because the population mix is just going to spill itself out. That's what's going to happen. I didn't hear these comments that were coming out during the last census, during the last time we had a court-ordered plan. All of a sudden we want to put the Pee Dee together. I grew up in the Pee Dee. I am from the Pee Dee. My parents are from the Pee Dee. My grandparents and every ancestor that I have ever heard about are from the Pee Dee area. Not once in my life did I end up understanding the joining of that area included the coastline down in Horry and Georgetown. I have a lot of respect for those individuals. They have done a good job and masterful job saying let's keep the Pee Dee together. But I will tell you, that is not the Pee Dee, and I grew up in the Pee Dee. It is simply not the Pee Dee. What I am having a hard time with is that it seems to be a time of convenience. A time of convenience is to lump it together because the big guys will swallow up the small guys once again. That's what's going to happen.
Senator HUTTO: Senator, you think this plan has national implications rather than in it being in the best interests of the citizens of this State?
Senator MALLOY: I am hesitant to speak to that. I can tell you it is not in the best interest for those in rural South Carolina; however, they have the motivation and whatever the cause was or what instigated it, they were not part of the Pee Dee. In the court-ordered plan in 2002, I didn't see the same arguments being made to keep the Pee Dee together and put us back in the 6th District where other guys came out. I wasn't serving then. I didn't hear that argument, but I hear it now. So that's what I have an issue with.
Senator HUTTO: Would you say it's not the argument of convenience, and it does not reflect what the facts are because this is really not a Pee Dee district, it is a coastal district?
Senator MALLOY: The 7th District is a coastal district?
Senator HUTTO: It is going to be. That's what the population mix says. The numbers spell it out. There are some 270,000 in Horry County. There are over 60,000 in Georgetown. Added together that's 330,000. The median number is 660,000. So the 330,000 is half of it. You have two counties that will have more population or equal population, at least, to Florence, Marion, Dillon, Darlington, Chesterfield, and Marlboro Counties all combined?
Senator MALLOY: That's just not right. When the children that are born today who are from the Pee Dee area -- I don't think we are doing them a service because what we have done is we have worked hard in this body to promote tourism. What we have done is that we have sent dollars there to try to create and promote tourism in this State. That's what happened with the growth. I heard arguments made that we have been in a bit of recession -- my goodness, wait until we come out. And so those numbers that we have now. Look at the Senate districts of the Senator from Marion, Senator WILLIAMS, who was 11,000 down in population, and any Senate district that is almost 18,000 in population, where the other areas over there are losing in population, the area that's growing is the Horry-Georgetown coast. They will continue to increase, which will continue to dilute the strength. That was the wisdom apparently in 2002, when the court came in and said they were going to put the coastal region together, which will be Horry, Georgetown and Charleston Counties. So they will be able to end up growing. That coastal area will be able to end up having the areas to bounce off of one another. That's the issue.
But now what happens is that Horry comes in and they are able to end up being the largest county there. They are not divided. Greenville is divided. Richland is divided. It looks like Charleston has a bit of a division as well. So the largest counties in our State are divided. And the one that's growing as fast as any of the other ones is not. How are we going to address the large double digit unemployment issues in Marion County? How are we going to end up creating an economic engine over in those areas so that those folks will be able to have a working and living wage and make certain that they are educating their children? We know how the money comes in these districts. We know what happens to our educational system. We know what you have worked on many years for the I-95 Corridor. I-95 comes directly through the Pee Dee. What are we going to do? We don't want to be a depository for I-73. I support I-73, but, whenever you bring in an interstate there without doing the infrastructure and those matters in the corresponding areas, then you will have lack of development, and we are going to end up initiating and putting some gunpowder on the unemployment aspects in that area. It will continue to grow because their Congressman from that area is from the largest area. How often is he going to sit down with the mom and pop shops to say we want to make you grow, we want to make it better for you? The votes are just simply not there.
Senator HUTTO: I want to make two points and then I will be through. I think you addressed one of them. The first one is that Horry and Georgetown now make up 50% of that proposed district based on current population. But based on projected growth, they will dominate that district substantially in the next two to three to five years. Do you know what the growth rate in Horry and Georgetown has been over the last ten years?
Senator MALLOY: I do not know it, but I know it surpasses the other rural counties we have in that area.
Senator HUTTO: If you look at the population of Marion County today versus what it was ten years ago, Marion lost population. So its influence in this district will continue to go down as its share of the total district population proportionately goes down? Their future is not nearly as good. Let me ask you two more questions. Does this plan kind of remind you, in terms of the Voting Rights Act, of the Voter ID Bill we just passed? You know, they didn't say you couldn't vote. They just shave off some points. They just make it more difficult for a senior to get an ID. They make it more difficult for a person if you are in college. They make it more difficult for working mothers. So, it is just shaving off points.
Senator MALLOY: I think that this plan is going to be exactly what we think it is. I think that it actually is a very opportunistic plan. The plan will not be reflective of the political landscape of the African American voting population in this State. It has gotten progressively higher in the 6th District, progressively lower in the new 7th. I think what we are doing now is it becomes more of a political plan. Our voting strength has been diluted. I think we are worse off as far as an African American voter.
Senator HUTTO: Let's talk about the 6th District a little bit in the same manner. One of the big things I see to get elected is the cost to get elected. If you look at the 6th Congressional District under this proposal versus the proposal we passed out in the Senate, a compact plan that reduced the number of counties, but it also did two unique things, under this proposed plan. There are about four different media markets you have to run in. And really, that's where your costs come out. You have to run in the Pee Dee market, Charleston, Columbia and Savannah. So it's the most expensive district out of all the districts. Who do you think more likely gets the opportunity to run -- an African American or a Caucasian?
Senator MALLOY: I will tell you what I understand about that point. I think that it is unfortunately expensive to end up running a race. I think that what we have is the attempt obviously to comply with the Voting Rights Act. I think we have gone further than what we needed to put more of the African American population in the plan that just came back from the House of Representatives than was necessary for compliance under the Voting Rights Act. The increased amount, which is 5 percentage points from what we passed in the Senate, represents the fact that there are more African Americans placed in the 6th District now than what we had under the Senate plan and the question I have is -- if we are opposed to vote for that, then what we are voting for? This body is voting to increase the African American population in the 6th District. That isn't the total issue because what it does is dilute the voting strength of African Americans' political value in the rest of the districts.
Senator HUTTO: So, if you are under 21 or 22%, really they ignore you. They can ignore the African American community in all these Congressional districts probably, except the 5th. The plan that has just come back, I don't think that it is as helpful as those that we had passed previously.
Senator MALLOY: Thank you. One of the things my friend, Senator McCONNELL, taught me was the Rules, and I've been a great admirer of his. During the process I know he has a Rule book back in his drawer and he has the precedents that are already set forth. And I understand the Rule as it relates to a reapportionment discussion. We have particular Rules, and I realize, gentlemen of the Senate, as you all may not think this is an important time. I speak to a half empty Chamber as to how critical this issue is. We know that even under the Rules generally that we would not be able to bring this matter to a close until we have two legislative days of discussion. I do not know if we are making our points, I'm not certain the folks are listening. So I'm going to talk for a while until I make certain that my colleagues pay the respect that we have for this plan. Because of the importance of what is going to happen over the next decade. So really two days to end up making the folks understand that what we're doing here is changing the district in a way that they are increasing the African American population in the 6th District.
As I went home after the fourth, after the last vote that we took, a family member asked me, "What are you doing as it relates to the Pee Dee under the plan that you have voted for?" And I tell you what -- it's a difficult question. It's perplexing. But I submit to you that even under that plan, that those in the Pee Dee get a chance to have more contact with their Congressman than they will under the plan with the 7th District. In every race that we have in the next five years, it's going to be very difficult for anyone outside of the Horry County area to become a Congressman. And that's sad. And under the plan that came out you had three or four Congressional seats that would reach into it, but how dare you make the argument as to when you are better off or why you are not. I was always told a piece of something is better than all of nothing. You are getting all of nothing because that person has to reach down deep in their hearts whenever they are part of a big fish in a small pond. They are the biggest guy on the block, the biggest person on the playground. Will then they be a bully or will they cease and make peace with the children if they are on the playground if you are the biggest guy there? That's what's going to happen.
So we get an opportunity to make a difference here and operate as if we have blinders on and to see what we would do. We are not Lady Justice, but we have an opportunity to fix it now as opposed to going into the hearts of man and then saying you may get a good person that will lead the Grand Strand and come up to Shiloh Community and Cash Community -- these little areas that you probably have never heard of in Chesterfield County and others. I know you all travel some, but what's going to make them come up there and see the people? There's nothing that's going to make them come. You've got a ready-made audience with convention centers and conferences. That's not what happens in the Pee Dee. NESA is working for development in the Pee Dee, but Williamsburg County is not in it. That's not the goal. If we're talking about economic and communities of interest, tell me what the communities of interest are in the coastline and Horry County? Oh, they watch the same television stations. That's not a community of interest. It's just simply not. As I labor here and they go back and forth.
Since I've been elected here, I can say that there are good times. I tell you this is a noble place and we have an awesome, awesome responsibility. I think this plan will eventually pass that came back from the House of Representatives. I think it's going to pass largely on party lines. I just ask you to search your hearts. Is it political expediency? Are we really doing what's right for South Carolina? Are we positioning ourselves so that we can end up making a run and we don't want to have the backlash of our party? Are we doing what's right for South Carolina? A chain is no stronger than the weakest link. We've got a weak link in the Pee Dee. We need jobs. We need representation. We need someone to hear our voice. And does this plan do that? I don't think that it does.
What happened to the arguments that were being made last week? In the Greenville area I'm surprised, almost shocked, that the arguments that came from this floor, arguing over percentage points as to what you would do with Greenville and Spartanburg and all of a sudden it's changed. The Greenville population is decreased, Spartanburg increased, and now where are we? We get those numbers right. I don't want to step down from here and say we are butchering the numbers. I want to be a bit specific. When we passed the Bill, it was 61.6% in Greenville and 38.04% in Spartanburg. Now it's 59.97% and 40.2%. Greenville, you lost under this plan. The population from the 3rd District, the Greenville population increased to 54,952 from 41,827, a full 13,000 people. In Spartanburg, you now have only 19,814 in the 5th, when under the earlier plan you had 32,938. The question is what changed? Will someone from Greenville soon represent a Congressional seat in Anderson, Pickens and Oconee? It was an issue a few weeks ago, but it's not an issue now. And I make the point because I can't look inside your heart. I can just only look at the numbers and see what they say and it appears that it's just politically expedient.
And so I look around and I think that, maybe you did not feel our passion. And again, I can't speak into the hearts of men and women, but I do know that something changed. And something changed from a few weeks ago because what I anticipate the vote to be at this point in time seems to be contrary to not what I believe you said as what I've seen on this Senate floor.
Part of what I wish could happen is that we could reach into the hearts of men and tell them more about Lady Justice so that they then will try and respond and see if they will look into the eyes of the children and the eyes of the future in South Carolina and say this is what's going to happen and this is what's better for my State, not necessarily what's better for my party. What the question becomes is -- if we can continue to talk to you and educate you on this issue for the next couple of days. Certainly you can. The Rules allow it. Gentlemen, I've got news for you today. With everything that we've said here -- and I realize that we may have stepped on some toes -- but it had to be done. It's not right. And only you can know whether or not it's just politics or whether it's something that you think is genuinely right for South Carolina. My father always told me that "two wrongs don't make a right." It doesn't make a right. And so you can hold that body hostage for not just today but tomorrow, and come back on Thursday and we can get a vote.
But as I stand here before you, a little bit disappointed at the response that it seems that we've given this debate and the cavalier approach that I believe that we are taking whenever we have some serious allegations that are being made as to what the numbers represent and we've got a cavalier approach, I want to see how we respond because the population did increase for African Americans in the 6th District and they did decrease for African Americans in the 7th District. The influential district is different than it was under the court's plan. I realize the arguments can be made -- but then speaking from a little old country boy from the Pee Dee area -- you would be surprised to know that, Senator, even though I always love to talk to the Senator from Cherokee because he and I are both friends and we both like to fight. You know, the fight in me says to raise cane because I fundamentally disagree with it. It's going to surprise some of you because I have too much respect for this body and this State that's already struggling to have to bring us back day after day to come in and take what will be the same vote.
But I want you to hear what I'm saying because I think we made a mistake. I heard the chatter in the room from somebody who gave me credit for talking 11½ hours not too long ago. My wife tells me I talk too much anyway and everybody says I do talk a lot, but most of the time I'm pretty quiet, and I think I should, but I am going to, believe it or not, sit down and give us a chance to vote today. I made the decision since I've been up here even though there have been some serious allegations being made, serious reasons that we are under the Voting Rights Act to have the United States government looking over our shoulders as to how we do things. It has been what I believe that something is on the increase. Does it meet a legal definition? I guess we don't know. I believe it does. And when the Bill comes back here, that however you cut it, the numbers will spell it out. And I submit to you that the only person that can really answer it is what's in your heart. But I also say that those that actually made the arguments last week for your district, the changes that were made, the diluting of the counties, the change in the population mix, the percentages of divided county, and that we're dividing some larger counties and not dividing others, and the fact that we have continued to digress in the African American population in the 7th District to I think was at the lowest point as the plan is now if you vote for concurrence.
When you cast your vote, if you are doing it for political expediency, I wouldn't care if one of my Republican friends from a rural area in a community of interest represented me. I really don't have any problem with that. I see the people in my area suffering. They can't find jobs. One of my friends asked me why when they had the catastrophe in New Orleans and some people didn't leave because they said they couldn't. Where were these folks going to go? They also had bus loads of people that they packed up to send down to Horry County from various counties to work in hotels as maids and stuff. I've seen it before. When will it ever happen that we will pack up and move to these rural areas and work there? If we don't envision that happening in our lifetime, then we're giving up hope on our area. And the rural areas are going to continue to suffer.
I had a few matters I needed to put in the record. As much as it pains me, I have to let this matter come to a close. I think we erred, folks. The only problem is that we may not get this opportunity at another time in our history as to whether we grow at a rate large enough in a ten-year census that will allow us to expand our Congressional seat. So we get that one opportunity in history, and the question is will we blow it?
So I would urge you, and I anticipate if one will stand up and say because my arguments that I made last week, they are with me this week and I'm going to stay with my arguments. I hope that you all will say that the Senator from Darlington told us the first day that what he was opposed to was having a large county like Horry be a dominant area in the Pee Dee. I submit to you that we are consistent. But I don't see the consistency therein, and so I ask you to search into your hearts and see if one person will stand up and say, "No, no, we're going to nonconcur. We're going to put this thing into conference and give it an opportunity and see if the Senate position can be upheld."
This is going to sound like a bit of a sour grapes thing, but the House has done it to you again. It beat us up. We had no cards going over there and they came in and gave us what they wanted us to have, and now they are bringing it down and shoving it down our throats once again. So, when are we going to tell them, "No?" And as I stand here now, the House of Representatives -- you know where they are? They are gone. They left a few hours ago. And I saw on the television that they will reconvene at the Call of the Speaker. They dumped it in your lap, and they said, "If one of you comes up and upsets the apple cart, woe be unto you. We are going to win the public relations game. And it's going to be your fault." That's right. They dare any of you that have to depend on them to say we're going to nonconcur. You can't stand it in the political world because they don't know what goes on generally in your heart in your local area because you are the king there. But they are daring you. You know, when we were children, I used to hate it when somebody said, "I dare you to cross that line." Senator from Cherokee, I would jump across it. They would draw a line with me and I would jump across it. But the House has dared us again. They have dared us again because they are sending it to us. I came up to one Senator earlier and I joked, "You know you guys are my friends. We work together. We just have some difference of opinions from time to time, but most of the time we work those out. Do you remember when you were young and your mom or whatever used to give you medicine and she told you to hold your nose, it won't taste so bad?" How many folks are going to have to hold their nose whenever they vote for this because they know it's not what they wanted and they know that it's not good, but it may be like that little child that may say, "But, maybe it's good for me." Well, here's the difference. You are not that little child now. You represent 80,000-90,000 people. Under our new areas we're going to have over 100,000 people. That's who you are representing. All of those children that come in that district that will grow up and be voters and all those that are there now will have to live with this for a period of time.
So, what is politically expedient to do what is right? I wish I could tell all of you to be like Lady Justice, to put your blinders on and look strictly at the facts and look strictly at the law and then say, "If all is well, then cast your vote then, as opposed to -- what's going to happen during the next time I get ready to run? Who is going to come after me because I went against what the House has said that we should end up doing?"
So with that, my leaders, I told all three of you all three weeks ago whenever we had this conversation, that I was on my heels, I knew you would be successful. Anticipating the vote, I want to congratulate you. You all did it. I don't think it is right, but I knew this day was going to happen. I don't think it's good for South Carolina, and I hope that I'm wrong.
On motion of Senator ANDERSON, with unanimous consent, the remarks of Senator MALLOY were ordered printed in the Journal.
On motion of Senator MALLOY, with unanimous consent, the following two articles regarding redistricting were ordered reprinted in the Journal:
Redistricting Plan Strikes Fair Balance
by Senator GROOMS
July 8, 2011
The General Assembly recessed last week, the Senate ending its deliberations on the once-per-decade question of redistricting. South Carolina's population growth means that we gain a new, seventh congressional seat.
The debate centered on two plans. The first plan drew the 7th District to include Horry County and much of the Pee Dee.
However, the 6th District, represented by Rep. Jim Clyburn, meandered from the farms of Blythewood in northern Richland County some 130 miles to the Charleston peninsula, and more than 150 miles from the Sumter-Florence line to the Georgia-South Carolina border at the Savannah River.
That plan split Charleston County. It split Berkeley. It split Dorchester, Colleton and Beaufort. It split at least six more counties, dividing many along racial lines.
In contrast, the plan I presented keeps all of Berkeley, Dorchester, Colleton, Beaufort, Jasper, Hampton, Allendale and Barnwell counties whole, within the 7th. Charleston and Horry remain anchors of the 1st District and are not split. Coastal Georgetown remains in the 1st while its inland areas join Williamsburg in the 7th.
Daniel Island, Goose Creek, Moncks Corner, Summerville, Walterboro, Ridgeville, St. Stephen, St. George -- these towns are growing. They can emerge from Charleston's shadow and have their own representative in Congress. Beaufort's sizeable population will have significant influence. And because of the size and significance of the Charleston metro area, which extends into Berkeley and Dorchester, Charleston effectively could have two voices in Congress.
Communities of interest -- where people live, work, shop, worship -- are kept whole wherever possible. County and city boundaries are generally protected.
Racial gerrymandering is avoided, while we are careful not to dilute minority voting strength. Common geography, transportation, and communication are accounted for to ensure more compact districts. Statewide just eight counties are split.
Sadly, the plan has been rebuked by some in my own party who seem to prefer racially fractured counties. Some even insinuate that the plan is part of a conspiracy designed to aid Democrats.
Why would I do that? I am one of the most consistently conservative Republicans in the General Assembly.
What it is, is a conservative, common-sense plan. It was carefully drawn, in part by a well-respected, nationally known Republican demographer. Democrats knew this, and initially balked at supporting it.
However, with a few changes, we were able to craft a plan that both sides could support. The plan has such broad support that not only did Democrats and Republicans back it, Senators from 44 of our 46 county delegations voted for it.
It's revealing that, with one or two exceptions, those who voted against the plan are moderate and liberal Republicans. They fought our common-sense plan because it brought to light the flaws in their gerrymandered, parochial plan.
These Senators, and many in the South Carolina House, will continue to fight our plan and hope to change it later this summer. They say that their plan stands a better chance in any court challenge.
The truth is the map we passed on June 29 is the better one. It needs only a vote of the House to become law.
A quick glance at the maps shows that ours is the common-sense plan.
It recognizes communities of interest, avoids racial gerrymandering, minimizes county splits, and has broad, bipartisan support.
Senate Congressional Plan Best for State, Beaufort County
by Senator DAVIS
July 7, 2011
Recently, a new congressional plan for South Carolina, pushed by the Myrtle Beach business community in general and by a Myrtle Beach state representative in particular, unraveled in the Senate. Much to their dismay, as reported by The (Columbia) State newspaper, "the state Senate approved a redistricting plan that creates a new 7th District that is centered in Beaufort County."
South Carolina once had a 7th Congressional District, but the 1930 census took it away. The 2010 census, however, showed our state's population had grown at a rate of 15.3 percent, greater than the country as a whole at 9.7 percent. So our state's 7th District was restored.
Wesley Donehue, director of the state Senate Republican Caucus, summarized what happened next: "One of the worst kept secrets in state politics is that (Myrtle Beach) Rep. Alan Clemmons is running for the yet-unrealized 7th Congressional District. Clemmons, as chairman of the (House) subcommittee drafting the plan, had the ability to craft himself a district that he could win."
And that's exactly what Clemmons did. The House adopted his plan to create a new district stretching from Myrtle Beach "into the Democratic Pee Dee area ... a district created for a more moderate Republican." (That House-approved plan was developed in conjunction with U.S. Rep. Jim Clyburn and members of his staff, and it chops Beaufort County into two pieces, gutting its political relevance; more on that later.)
Clemmons is an honorable man; however, drawing a new district to suit the desire of a particular politician is horrible public policy. The Senate Republican Caucus agreed, so it hired John Morgan, one of America's leading electoral demographers, to draw a congressional plan that reflected South Carolina's communities of interest, avoided gerrymandering and had the strongest chance of surviving the inevitable legal challenges in federal court.
Morgan objectively reviewed the data, applied federal Justice Department criteria and drew a plan that, among other things, happened to anchor the new 7th District in Beaufort County. That plan became the state Senate Republican Caucus plan, and attorneys specializing in redistricting law formally recommended it to the Senate's special redistricting subcommittee. That subcommittee then held a meeting to consider it, and that's when power politics reared its head again.
Unhappy that the new district might not be anchored in Myrtle Beach and include the Pee Dee, hundreds of people from that area went to the subcommittee meeting and demanded adoption of the Clemmons plan passed by the House. The subcommittee had no such plan -- none resembling it had even been recommended -- but one was hurriedly prepared that evening and quickly approved.
That hasty action was subsequently corrected by the full Senate, which voted 25 to 15 to approve the Senate Republican Caucus plan. Senators from all parts of the State -- except those from Myrtle Beach and the Pee Dee -- voted for the plan, for the same reason I did: It is the most logical plan for the State, the least gerrymandered and the one with the least number of county splits.
I also supported the Senate-passed plan because it recognizes Beaufort County's growing prominence. There is finally a chance for our county and its surrounding economic region (the counties of Jasper, Hampton and Colleton) to be the heart of a congressional district, rather than the forgotten tail-end appendages of metropolitan-dominated districts to the north (Lexington-Columbia) or the northeast (Charleston).
I did not support that plan for personal reasons. During the congressional redistricting debate, I publicly stated that if the new district ended up centered in Beaufort County, I would not run for the seat. I am making progress as a state senator on things important to me and my constituents, and right now, I can make more of a difference in Columbia than in Washington.
The General Assembly will reconvene July 26 to decide which chamber's plan will prevail. I am convinced the one approved by the House, based on the gerrymandering of raw politics, would be successfully challenged in federal court and result in judge-drawn district boundaries, a nightmare scenario that must be avoided. I will do everything in my power to keep that from happening.
* * *
The question then was concurrence with the House amendments.
The "ayes" and "nays" were demanded and taken, resulting as follows:
Ayes 24; Nays 16
AYES
Alexander Bryant Campbell
Campsen Cleary Courson
Cromer Davis Elliott
Fair Gregory Grooms
Knotts Leatherman Martin, Larry
Martin, Shane McConnell McGill
O'Dell Peeler Rankin
Rose Shoopman Verdin
Total--24
NAYS
Anderson Coleman Ford
Hutto Jackson Leventis
Lourie Malloy Massey
Matthews Nicholson Reese
Scott Setzler Sheheen
Thomas
Total--16
The Senate concurred in the House amendments and a message was sent to the House accordingly. Ordered that the title be changed to that of an Act and the Act enrolled for Ratification.
Expression of Personal Interest
Senator FORD rose for an Expression of Personal Interest.
RATIFICATION OF AN ACT
Pursuant to an invitation the Honorable Speaker and House of Representatives appeared in the Senate Chamber on July 26, 2011, at 5:30 P.M. and the following Act was ratified:
(R111, H. 3992 (Word version)) -- Reps. Harrell, Lucas, Harrison, Clemmons, Barfield, Cooper, Hardwick, Owens, Sandifer, G.R. Smith, J.R. Smith, White, Bingham and Erickson: AN ACT TO AMEND SECTION 1-1-715, CODE OF LAWS OF SOUTH CAROLINA, 1976, RELATING TO ADOPTION OF THE UNITED STATES CENSUS, SO AS TO ADOPT THE UNITED STATES CENSUS OF 2010 AS THE TRUE AND CORRECT ENUMERATION OF INHABITANTS OF THIS STATE; BY ADDING SECTION 7-19-35 SO AS TO ESTABLISH SEVEN ELECTION DISTRICTS FROM WHICH MEMBERS OF CONGRESS FOR SOUTH CAROLINA ARE ELECTED COMMENCING WITH THE 2012 GENERAL ELECTION; TO REPEAL SECTION 7-19-40 RELATING TO CONGRESSIONAL DISTRICTS FROM WHICH SOUTH CAROLINA MEMBERS OF CONGRESS WERE FORMERLY ELECTED; TO JOINTLY DESIGNATE THE PRESIDENT PRO TEMPORE OF THE SENATE AND THE SPEAKER OF THE HOUSE OF REPRESENTATIVES AS THE APPROPRIATE OFFICIALS OF THE SUBMITTING AUTHORITY WHO ARE RESPONSIBLE FOR OBTAINING PRECLEARANCE OF THE CONGRESSIONAL REAPPORTIONMENT PLAN UNDER THE VOTING RIGHTS ACT; AND TO PROVIDE THAT A MEMBER OF ANY BOARD, COMMISSION, OR COMMITTEE REPRESENTING A CONGRESSIONAL DISTRICT WHOSE RESIDENCY IS TRANSFERRED TO ANOTHER DISTRICT BY THIS ACT MAY CONTINUE TO SERVE HIS TERM IN OFFICE; HOWEVER, THE APPOINTING OR ELECTING AUTHORITY MAY ADD AN ADDITIONAL MEMBER ON A BOARD, COMMISSION, OR COMMITTEE WHICH LOSES A RESIDENT MEMBER.
L:\COUNCIL\ACTS\3992AHB11.DOCX
On motion of Senator ROSE, with unanimous consent, the Senate stood adjourned out of respect to the memory of Mrs. Emily Myers Millhouse of Summerville, S.C., beloved wife of Tillman Millhouse, Jr. and devoted mother of four.
and
On motion of Senators KNOTTS and SETZLER, with unanimous consent, the Senate stood adjourned out of respect to the memory of Mrs. Ruth J. Buzhardt of Cayce, S.C.
and
On motion of Senators MATTHEWS, PINCKNEY and GROOMS, with unanimous consent, the Senate stood adjourned out of respect to the memory of Mr. Floyd Buckner of Colleton, S.C., Colleton County Councilman.
and
https://wiki.kidzsearch.com/wiki/Euler%27s_identity
# Euler's identity
Euler's identity, sometimes called Euler's equation, is a simple equation. It links several important numbers (mathematical constants) in mathematics in an unexpected way. Euler's identity is named after the Swiss mathematician Leonhard Euler, though it is not clear that he did invent it.[1]
Euler's identity is the equation $e^{i\pi} + 1 = 0$.
The special numbers in Euler's Identity are:
• 0: zero, special because zero plus any number is still that same number
• 1: one, special because one times any number is still that same number
• $\pi$: pi, special because it is one of the most common numbers in mathematics, and the distance around the outside of a circle divided by the distance across the circle.
$\pi \approx 3.14159$
• $e$, Euler's Number. Euler's Number appears in calculus and is related to the area under the curve $y = {1 \over x}$: the area between that curve and the line $y = 0$, from $x = 1$ to $x = e$, is exactly 1.
$e \approx 2.71828$
• $i$, which is an imaginary number. The number $i = \sqrt{-1}$ has the property $i \times i = i^2 = -1$.
## Reputation
A reader poll done by Physics World in 2004 called Euler's identity the "greatest equation ever", together with Maxwell's equations. Richard Feynman called Euler's identity "the most beautiful equation". The Identity is well known for its mathematical beauty: It combines the fields of geometry and algebra, and yet does so using only 7 of the most common and important mathematical symbols.
## Mathematical proof using Euler's formula
Euler's Formula is the equation $e^{ix} = \cos(x) + i \sin(x)$. Our variable $x$ can be any real number, but for this proof $x = \pi$. Then $e^{i\pi} = \cos(\pi) + i \sin(\pi)$. Since $\cos(\pi) = -1$ and $\sin(\pi) = 0$, the equation can be changed to read $e^{i\pi} = -1$, which gives the identity $e^{i\pi} + 1 = 0$.
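The substitution above is easy to check numerically. Here is a minimal Python sketch using only the standard-library `cmath` module (the variable names are ours, chosen for illustration):

```python
import cmath

# Euler's formula e^(ix) = cos(x) + i*sin(x), evaluated at x = pi.
x = cmath.pi
lhs = cmath.exp(1j * x)                 # e^(i*pi)
rhs = cmath.cos(x) + 1j * cmath.sin(x)  # cos(pi) + i*sin(pi)

print(abs(lhs - rhs))  # ~0: both sides of Euler's formula agree
print(lhs + 1)         # ~1.22e-16j: e^(i*pi) + 1 = 0 up to rounding
```

The tiny leftover imaginary part is floating-point rounding error; the identity itself is exact.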
## References
1. Sandifer, C. Edward 2007. Euler's greatest hits. Mathematical Association of America, p. 4. ISBN 978-0-88385-563-8
https://plainmath.net/17136/figure-identify-vertical-sngles-adjacent-supplementary-complementa | Question
# Use the figure shown to identify each: a. vertical angles, b. adjacent angles, c. linear pairs, d. supplementary angles, e. complementary angles
Solid Geometry
Use the figure shown to identify each:
a. vertical angles
b. adjacent angles
c. linear pairs
d. supplementary angles
e. complementary angles
2021-06-23
a) $\angle 3$ and $\angle 5$
b) $\angle 1$ and $\angle 2$, $\angle 2$ and $\angle 3$, $\angle 3$ and $\angle 4$, $\angle 4$ and $\angle 5$, $\angle 1$ and $\angle 5$
c) $\angle 4$ and $\angle 5$, $\angle 3$ and $\angle 4$
d) $\angle 4$ and $\angle 5$, $\angle 3$ and $\angle 4$
e) $\angle 1$ and $\angle 2$
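Since the original figure is not reproduced on this page, the sum rules behind parts (d) and (e) can still be illustrated with a small Python sketch; the degree measures below are hypothetical values chosen for illustration, not taken from the figure:

```python
# A linear pair lies along a straight line, so its two angles are
# supplementary: they must sum to 180 degrees.
angle_4, angle_5 = 110, 70         # hypothetical measures
assert angle_4 + angle_5 == 180    # supplementary (and a linear pair)

# Complementary angles together form a right angle: they sum to 90 degrees.
angle_1, angle_2 = 35, 55          # hypothetical measures
assert angle_1 + angle_2 == 90     # complementary

print("linear pair:", angle_4 + angle_5, "degrees")
print("complementary pair:", angle_1 + angle_2, "degrees")
```

Vertical angles, by contrast, are not defined by a sum: they are the opposite angles formed by two intersecting lines and are always equal to each other.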
https://www.physicsforums.com/threads/is-this-equation-conservative-or-non-conservative.878399/ | # Is this equation conservative or non-conservative?
1. Jul 10, 2016
### humphreybogart
1. The problem statement, all variables and given/known data
This is the Navier-Stokes equation for compressible flow (the equation itself was attached as an image in the original post and is not reproduced here). $n_j$ is the unit normal vector to the surface $j$, and $n_i$ is the unit normal vector in the $i$ direction. Is this equation written for a control volume or a material volume?
2. Relevant equations
3. The attempt at a solution
I believe it's for a control volume, since it's in integral form and expressing fluxes out of a cube (taking advantage of conservation of momentum). However, I know that integral forms of non-conservative equations also exist, so I'm not sure.
2. Jul 11, 2016
### Staff: Mentor
Would the first term on the right hand side be present in the material volume form?
3. Jul 14, 2016
### humphreybogart
I'm tempted to say 'no', because no fluid enters or leaves a material volume. So the term would disappear. I'd like to see the integral and differential form for conservative, and the integral and differential form for non-conservative.
4. Jul 14, 2016
### Staff: Mentor
The integral form for material volume is the same as for control volume, except that the first term on the right hand side is absent. The differential forms for both are identical. See this link to see why the integral form of the material volume development reduces to the same differential form as the control volume development: https://en.wikipedia.org/wiki/Reynolds_transport_theorem
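For reference, the Reynolds transport theorem in that link is the bridge between the two descriptions. A standard statement -- for a field $f$ carried through a material volume $V(t)$ by a flow with velocity $\mathbf{u}$, with $\mathbf{n}$ the outward unit normal (notation ours) -- is

$$\frac{d}{dt}\int_{V(t)} f\, dV \;=\; \int_{V(t)} \frac{\partial f}{\partial t}\, dV \;+\; \int_{\partial V(t)} f\,(\mathbf{u}\cdot\mathbf{n})\, dA.$$

Taking $f$ to be the momentum density $\rho u_i$ turns the surface term into exactly the flux integral of the control-volume form, which is why that term has no separate counterpart in the material-volume statement.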
5. Jul 15, 2016
### humphreybogart
Thank you.
Great! I saw in another post a reference to Bird's Transport Phenomena book. Thanks.
http://cpr-hepph.blogspot.com/2013/07/13074037-s-jadach-et-al.html | ## KK MC 4.22: CEEX EW Corrections for $f\bar{f}\rightarrow f'\bar{f}'$ at LHC and Muon Colliders
S. Jadach, B. F. L. Ward, Z. Was
We present the upgrade of the coherent exclusive (CEEX) exponentiation realization of the YFS theory in the $\mathcal{KK}$ MC to the processes $f\bar{f}\rightarrow f'\bar{f}'$, with $f=\mu,\tau,q,\nu_\ell$, $f'=e,\mu,\tau,q,\nu_\ell$, $q=u,d,s,c,b,t$, $\ell=e,\mu,\tau$, and $f\ne f'$, with an eye toward the precision physics of the LHC and possible high energy muon colliders. We give a brief summary of the CEEX theory in comparison to the older exclusive exponentiation (EEX) theory and illustrate theoretical results relevant to the LHC and possible muon collider physics programs.
View original: http://arxiv.org/abs/1307.4037
https://www.arxiv-vanity.com/papers/1207.0553/ | # The maximum likelihood degree of a very affine variety
June Huh Department of Mathematics, University of Michigan
Ann Arbor, MI 48109
USA
###### Abstract
We show that the maximum likelihood degree of a smooth very affine variety is equal to the signed topological Euler characteristic. This generalizes Orlik and Terao’s solution to Varchenko’s conjecture on complements of hyperplane arrangements to smooth very affine varieties. For very affine varieties satisfying a genericity condition at infinity, the result is further strengthened to relate the variety of critical points to the Chern-Schwartz-MacPherson class. The strengthened version recovers the geometric deletion-restriction formula of Denham et al. for arrangement complements, and generalizes Kouchnirenko’s theorem on the Newton polytope for nondegenerate hypersurfaces.
maximum likelihood degree, logarithmic differential form, Chern-Schwartz-MacPherson class.
###### Mathematics Subject Classification:
14B05, 14C17, 52B40
## 1 Introduction
Maximum likelihood estimation in statistics leads to the problem of finding critical points of a product of powers of polynomials on an algebraic variety [PS05, Section 3.3]. When the polynomials and the variety are linear and defined over the real numbers, the number of critical points is the number of bounded regions in the corresponding arrangement of hyperplanes.
Studying Bethe vectors in statistical mechanics, Varchenko conjectured a combinatorial formula for the number of critical points for complex hyperplane arrangements [Var95]. Let $U$ be the complement of $n$ hyperplanes in $\mathbb{C}^r$ defined by the linear functions $f_1,\dots,f_n$. The master function $\varphi_u = f_1^{u_1}\cdots f_n^{u_n}$, where the exponents $u_1,\dots,u_n$ are integral parameters, is a holomorphic function on $U$. We assume that the affine hyperplane arrangement is essential, meaning that the lowest-dimensional intersections of the hyperplanes are isolated points.
###### Varchenko’s conjecture.
If the hyperplane arrangement is essential and the exponents $u_1,\dots,u_n$ are sufficiently general, then the following hold.
1. $\varphi_u$ has only finitely many critical points in $U$.
2. All critical points of $\varphi_u$ are nondegenerate.
3. The number of critical points is equal to the signed Euler characteristic $(-1)^r\chi(U)$.
The conjecture was proved by Varchenko in the case where the hyperplanes are defined over the real numbers [Var95], and by Orlik and Terao in general [OT95]. Subsequent works of Silvotti and Damon extended this result to some nonlinear arrangements [Dam99, Dam00, Sil96]. The assumption made on the arrangement is certainly necessary, for there are arrangements violating the inequality $(-1)^r\chi(U) \geq 0$.
The principal aim of this paper is to generalize the theorem of Orlik and Terao. The generalization is pursued in two directions. In Theorem 1, we obtain the same conclusion for a wider class of affine varieties than complements of essential arrangements; in Theorem 2, we recover the whole characteristic class from the critical points instead of the topological Euler characteristic. A connection to Kouchnirenko’s theorem on the relation between the Newton polytope and the Euler characteristic is pointed out in Section 4.
The above extensions are motivated by the problem of maximum likelihood estimation in algebraic statistics. Recall that an irreducible algebraic variety is said to be very affine if it is isomorphic to a closed subvariety of an algebraic torus. Very affine varieties have recently received considerable attention due to their central role in tropical geometry [EKL06, Spe05, Tev07]. The complement of an affine hyperplane arrangement is affine, and it is very affine if and only if the hyperplane arrangement is essential. Any complement of an affine hyperplane arrangement is of the form $U' \times \mathbb{C}^l$, where $U'$ is the complement of an essential arrangement.
In view of maximum likelihood estimation, very affine varieties are the natural class of objects generalizing complements of hyperplane arrangements. Consider the projective space $\mathbb{P}^{n-1}$ with the homogeneous coordinates $p_1,\dots,p_n$, where the coordinate $p_i$ represents the probability of the $i$-th event. An implicit statistical model is a closed subvariety $V \subseteq \mathbb{P}^{n-1}$. The data comes in the form of nonnegative integers $u_1,\dots,u_n$, where $u_i$ is the number of times the $i$-th event was observed.
In order to find the values of $p_1,\dots,p_n$ on $V$ which best explain the given data $u$, one finds critical points of the likelihood function
$$L(p_1,\dots,p_n) = p_1^{u_1}\cdots p_n^{u_n}\big/(p_1+\cdots+p_n)^{u_1+\cdots+u_n}.$$
Statistical computations are typically done in the affine chart defined by the nonvanishing of $p_1+\cdots+p_n$, where the sum can be set equal to $1$ and the denominator of $L$ can be ignored. The maximum likelihood degree of the model is defined to be the number of complex critical points of the restriction of $L$ to the projective variety $V$, where we only count critical points that are not poles or zeros of $L$, and the $u_i$ are assumed to be sufficiently general [HKS05]. In other words, the maximum likelihood degree is the number of critical points of the likelihood function on the very affine variety
$$U := \{x \in V \mid p_1\cdots p_n(p_1+\cdots+p_n) \neq 0\}.$$
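To see the critical-point problem in action, here is a small computational sketch (not from the paper; the conic model and the counts $u = (2,3,5)$ are invented for illustration):

```python
import sympy as sp

p1, p2, p3, lam, mu = sp.symbols('p1 p2 p3 lam mu')
u = (2, 3, 5)          # hypothetical data counts
g = p1*p3 - p2**2      # toy model: a conic inside the plane p1 + p2 + p3 = 1

# Lagrange conditions for log L = sum_i u_i log p_i on the model,
# cleared of denominators: u_i + p_i * (lam * dg/dp_i + mu) = 0
eqs = [u[i] + pi * (lam * sp.diff(g, pi) + mu)
       for i, pi in enumerate((p1, p2, p3))]
eqs += [g, p1 + p2 + p3 - 1]

sols = sp.solve(eqs, (p1, p2, p3, lam, mu), dict=True)
print(len(sols))       # for generic counts, this is the ML degree of the model
```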
### 1.1 Varchenko’s conjecture for very affine varieties
We extend the theorem of Orlik and Terao to smooth very affine varieties. Let $U$ be a smooth very affine variety of dimension $r$. Choose a closed embedding
$$f: U \longrightarrow (\mathbb{C}^*)^n, \qquad f = (f_1,\dots,f_n).$$
The master function $\varphi_u = f_1^{u_1}\cdots f_n^{u_n}$, where the exponents $u = (u_1,\dots,u_n)$ are integral parameters, is a holomorphic function on $U$. The maximum likelihood degree of $U$ is defined to be the number of critical points of the master function with sufficiently general exponents $u$.
###### Theorem 1.
If the exponents $u_1,\dots,u_n$ are sufficiently general, then the following hold.
1. $\varphi_u$ has only finitely many critical points in $U$.
2. All critical points of $\varphi_u$ are nondegenerate.
3. The number of critical points is equal to the signed Euler characteristic $(-1)^r\chi(U)$.
More precisely, there is a nonzero polynomial $h$ such that the assertions are valid for $u \in \mathbb{Z}^n$ with $h(u) \neq 0$.
Theorem 1 shows that, for instance, the conclusions of [CHKS06, Theorem 20] and [Dam99, Corollary 6] hold for smooth very affine varieties without further assumptions. This has a few immediate corollaries that might be of interest in algebraic geometry and algebraic statistics. First, the maximum likelihood degree does not depend on the embedding of $U$ into an algebraic torus. Second, the maximum likelihood degree satisfies the deletion-restriction formula as in the case of a linear model. Third, the sign of the Euler characteristic of a smooth very affine variety depends only on the parity of its dimension.
### 1.2 A geometric formula for the CSM class
The theorem of Orlik and Terao can be further generalized to very affine varieties which admit a good tropical compactification in the sense of Tevelev [Tev07]. See Definition 3.6 for schön very affine varieties. For example, the complement of an essential hyperplane arrangement is schön, and the hypersurface defined by a sufficiently general Laurent polynomial (with respect to its Newton polytope) is schön. The open subset of the Grassmannian given by nonvanishing of all Plücker coordinates is another schön very affine variety, which is of interest in algebraic statistics [SS04].
The generalization is formulated in terms of the variety of critical points of $U$, the totality of critical points of all possible (multivalued) master functions for $U$. More precisely, given a compactification $\overline{U}$ of $U$, the variety of critical points is defined to be the closure
$$X(U) = \overline{X^{\circ}(U)} \subseteq \overline{U}\times\mathbb{P}^{n-1} \quad\text{of}\quad X^{\circ}(U) = \Big\{\sum_{i=1}^{n} u_i\cdot\operatorname{dlog}(f_i)(x) = 0\Big\} \subseteq U\times\mathbb{P}^{n-1},$$
where $\mathbb{P}^{n-1}$ is the projective space with the homogeneous coordinates $u_1,\dots,u_n$. The variety of critical points has been studied previously in the context of hyperplane arrangements [CDFV11, DGS12]. See Section 2 for a detailed construction in the general setting.
We relate the variety of critical points to the Chern-Schwartz-MacPherson class of $U$ [Mac74]. Let $T_U$ be the intrinsic torus of $U$, an algebraic torus containing $U$ whose character lattice is the group of nonvanishing regular functions on $U$ modulo nonzero constants. We compactify the intrinsic torus by the projective space $\mathbb{P}^n$, where $n$ is the dimension of $T_U$.
###### Theorem 2.
Suppose that $U$ is an $r$-dimensional very affine variety which is not isomorphic to a torus. If $U$ is schön, then
$$[X(U)] = \sum_{i=0}^{r} v_i\,[\mathbb{P}^{r-i}\times\mathbb{P}^{n-1-r+i}] \in A_*(\mathbb{P}^n\times\mathbb{P}^{n-1}),$$
where
$$c_{\mathrm{SM}}(\mathbf{1}_U) = \sum_{i=0}^{r} (-1)^i v_i\,[\mathbb{P}^{r-i}] \in A_*(\mathbb{P}^n).$$
Theorem 1 is recovered by considering the number of points in a general fiber of the second projection from $X(U)$, which is the maximum likelihood degree
$$v_r = (-1)^r\int c_{\mathrm{SM}}(\mathbf{1}_U) = (-1)^r\chi(U).$$
When $U$ is the complement of an essential hyperplane arrangement and $\overline{U}$ is the usual compactification of $U$ defined by the ratios of homogeneous coordinates, Theorem 2 specializes to the geometric formula for the characteristic polynomial of Denham et al. [DGS12, Theorem 1.1]:
$$\chi_{\mathcal{A}}(q+1) = \sum_{i=0}^{r} (-1)^i v_i\, q^{r-i}.$$
The formula is used in [Huh] to verify Dawson's conjecture on the logarithmic concavity of the $h$-vector of a matroid complex, for matroids representable over a field of characteristic zero [Daw84]. Other implications of the geometric formula are collected in Remark 3.12.
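As a small illustration (my example, not from the paper): for the arrangement $\mathcal{A}$ of three general lines in $\mathbb{C}^2$ one has $\chi_{\mathcal{A}}(q) = q^2 - 3q + 3$, hence
$$\chi_{\mathcal{A}}(q+1) = q^2 - q + 1, \qquad (v_0, v_1, v_2) = (1, 1, 1),$$
and the maximum likelihood degree is $v_2 = 1$; in the real picture this is the single bounded region, the triangle.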
### 1.3 A generalization of Kouchnirenko’s theorem
A schön hypersurface in an algebraic torus is defined by a Laurent polynomial which is nondegenerate in the sense of Kouchnirenko [Kou76]. We generalize Kouchnirenko’s theorem equating the Euler characteristic with the signed volume of the Newton polytope, in the setting of Theorem 2. We hope that the approach of the present paper clarifies an analogy noted in [Var95, Remarks (e)], where Varchenko asks for a connection between Kouchnirenko’s theorem and the conjecture stated in the introduction.
Let $g$ be a Laurent polynomial in $n$ variables with the Newton polytope $\Delta_g$, and denote the corresponding hypersurface by
$$U = \{g = 0\} \subseteq (\mathbb{C}^*)^n.$$
Fix the open embedding $(\mathbb{C}^*)^n \subseteq \mathbb{P}^n$ defined by the ratios of homogeneous coordinates $z_1/z_0,\dots,z_n/z_0$.
We follow the convention of [CLO98, Chapter 7] and write $\mathrm{MV}_n$ for the $n$-dimensional mixed volume. For example, the $n$-dimensional standard simplex $\Delta$ in $\mathbb{R}^n$ has the unit volume $\mathrm{MV}_n(\Delta,\dots,\Delta) = 1$.
###### Theorem 3.
Let $g$ be a nonzero Laurent polynomial in $n$ variables with
$$c_{\mathrm{SM}}(\mathbf{1}_U) = \sum_{i=0}^{r}(-1)^i v_i\,[\mathbb{P}^{r-i}] \in A_*(\mathbb{P}^n).$$
If $g$ is nondegenerate, then
$$v_i = \mathrm{MV}_n(\underbrace{\Delta,\dots,\Delta}_{r-i},\ \underbrace{\Delta_g,\dots,\Delta_g}_{i+1}) \quad\text{for } i = 0,\dots,r.$$
In particular, the maximum likelihood degree of $U$ is equal to the normalized volume
$$v_r = (-1)^r\int c_{\mathrm{SM}}(\mathbf{1}_U) = \mathrm{Volume}(\Delta_g).$$
Theorem 3 has applications not covered by Kouchnirenko's theorem. In particular, we get an explicit formula for the degree of the gradient map of a homogeneous polynomial in terms of the Newton polytope; see Corollary 4.6. This shows that many delicate examples discovered in classical projective geometry have a rather simple combinatorial origin.
As an example, we find irreducible homaloidal projective hypersurfaces of given degree and ambient dimension, improving upon previous constructions in [CRS08, FM12]; see Example 4.9.
### 1.4 Organization
We provide a brief overview of the paper.
Section 2 is devoted to the proof of Theorem 1. Along the way we construct the variety of critical points and describe its basic properties.
Section 3 introduces the deletion-restriction for hyperplane arrangements, extending that to very affine varieties. A brief introduction to the Chern-Schwartz-MacPherson class is given, and Theorem 2 is proved.
Section 4 focuses on the maximum likelihood degree of nondegenerate hypersurfaces in algebraic tori. Applications of Theorem 3 to the geometry of projective hypersurfaces are given.
## 2 Proof of Theorem 1
### 2.1 The Gauss map of very affine varieties
An important role will be played by the Gauss map of a very affine variety in its intrinsic torus. Let $U$ be a smooth very affine variety of dimension $r$. Choose a closed embedding
$$f: U \longrightarrow (\mathbb{C}^*)^n, \qquad f = (f_1,\dots,f_n).$$
By a theorem of Samuel (see [Sam66]), the group $M_U$ of invertible regular functions on $U$ modulo nonzero constants is a finitely generated free abelian group. Therefore one may choose $f_1,\dots,f_n$ to form a basis of $M_U$. In this case, $f$ is a closed embedding of $U$ into the intrinsic torus $T_U$ with the character lattice $M_U$,
$$f: U \longrightarrow T_U.$$
Any morphism from $U$ to an algebraic torus is a composition of $f$ with a homomorphism of tori. The Gauss map of $U$ is defined by the pushforward of $f$ followed by left-translation to the identity of $T_U$.
In coordinates, the first map is represented by the Jacobian matrix
$$\Big(\frac{\partial f_i}{\partial x_j}\Big), \qquad 1\le i\le n,\ 1\le j\le r,$$
and the second map is represented by the diagonal matrix with diagonal entries $f_1^{-1},\dots,f_n^{-1}$. The composition of the two is the logarithmic Jacobian matrix
$$\Big(\frac{\partial \log f_i}{\partial x_j}\Big), \qquad 1\le i\le n,\ 1\le j\le r.$$
This defines the Gauss map from $U$ to the Grassmannian of $r$-dimensional subspaces of $T_1T_U$:
$$U \longrightarrow \mathrm{Gr}_r(T_1T_U), \qquad x \longmapsto T_xU \subseteq T_1T_U.$$
Let $\Omega^1_U$ be the sheaf of differential one-forms on $U$. Consider the complex vector space
$$W := M_U \otimes_{\mathbb{Z}} \mathbb{C}.$$
The dependence of $W$ on $U$ will often be omitted from the notation. We write $W_U$ for (the sheaf of sections of) the trivial vector bundle over $U$ with the fiber $W$. There is a vector bundle homomorphism $\Phi$, defined by the evaluation of the logarithmic differential forms as follows:
$$\Phi: W_U \longrightarrow \Omega^1_U, \qquad \Big(\prod_{i=1}^{n} f_i^{u_i},\, x\Big) \longmapsto \sum_{i=1}^{n} u_i\cdot\operatorname{dlog}(f_i)(x).$$
At a point $x \in U$, the linear map between the fibers is dual to the injective linear map $T_xU \to T_1T_U$ considered above.
Therefore $\Phi$ is surjective and $\ker\Phi$ is a vector bundle over $U$.
### 2.2 The variety of critical points
The inclusion of $\ker\Phi$ into $W_U$ defines a closed immersion between the projective bundles
$$X^{\circ}(U) := \operatorname{Proj}\big(\operatorname{Sym}(\ker\Phi^{\vee})\big) \longrightarrow \operatorname{Proj}\big(\operatorname{Sym}(W_U^{\vee})\big) \simeq U\times\mathbb{P}(W).$$
Note that the following conditions are equivalent:
1. $\Phi$ is injective.
2. $U$ is isomorphic to a torus.
3. $X^{\circ}(U)$ is empty.
If $X^{\circ}(U)$ is not empty, then it is a projective bundle over $U$ of dimension equal to that of $\mathbb{P}(W)$, defined by the equation
$$\sum_{i=1}^{n} u_i\cdot\operatorname{dlog}(f_i)(x) = 0$$
where $u_1,\dots,u_n$ are the homogeneous coordinates of $\mathbb{P}(W)$. In short, $X^{\circ}(U)$ is the set of critical points of all possible (multivalued) master functions.
###### Definition 2.1.
Given a compactification $V$ of $U$, the variety of critical points of $U$ is defined to be the closure
$$X_V(U) := \overline{X^{\circ}(U)} \subseteq V\times\mathbb{P}(W).$$
We denote the variety of critical points by $X(U)$ when there is no danger of confusion.
The variety of critical points is irreducible by construction. When $U$ is the complement of an essential arrangement of hyperplanes, $X(U)$ is the variety of critical points previously considered in the context of hyperplane arrangements [CDFV11, DGS12]. This variety has its origin in [OT95, Proposition 4.1].
We record here the following basic compatibility: If $V_1 \to V_2$ is a morphism between two compactifications of $U$ which is the identity on $U$, then the class of $X_{V_1}(U)$ maps to the class of $X_{V_2}(U)$ under the induced map between the Chow groups
$$A_*(V_1\times\mathbb{P}(W)) \longrightarrow A_*(V_2\times\mathbb{P}(W)), \qquad [X_{V_1}(U)] \longmapsto [X_{V_2}(U)].$$
###### Remark 2.2.
We point out a technical difference between Definition 2.1 and the variety of critical points in the cited literature. This remark is intended for readers familiar with [CDFV11, DGS12] and is independent of the rest of the paper.
Let $U$ be the complement of $\mathcal{A}$, an essential arrangement of $n$ hyperplanes in $\mathbb{C}^r$. The variety of critical points in [DGS12] is defined when $\mathcal{A}$ is a central arrangement, so we suppose that this is the case. In our notation, it is the quotient under the torus action
$$\widetilde{X}(U) := X_V(U)/(\mathbb{C}^*\times 1) \subseteq (\mathbb{C}^r\times\mathbb{P}^{n-1})/(\mathbb{C}^*\times 1) = \mathbb{P}^{r-1}\times\mathbb{P}^{n-1},$$
where $V$ is the partial compactification of $U$ by the affine space $\mathbb{C}^r$. Since $\mathcal{A}$ is central, by [OT95, Proposition 3.9] we have
$$\widetilde{X}(U) \subseteq \{u_1+\cdots+u_n = 0\} \simeq \mathbb{P}^{r-1}\times\mathbb{P}^{n-2} \subset \mathbb{P}^{r-1}\times\mathbb{P}^{n-1}.$$
The quotient variety is indeed the variety of critical points (of a closely related arrangement) in our sense. Consider a decone $\widetilde{\mathcal{A}}$ of $\mathcal{A}$, an affine arrangement obtained by declaring one of the hyperplanes in the projectivization of $\mathcal{A}$ to be the hyperplane at infinity. The number of hyperplanes and the rank of the decone are one less than the corresponding quantities for $\mathcal{A}$. More precisely, we have the following relation between the characteristic polynomials:
$$\chi_{\widetilde{\mathcal{A}}}(q) = \chi_{\mathcal{A}}(q)/(q-1).$$
Let $\widetilde{U}$ be the complement of the decone in $\mathbb{C}^{r-1}$, and take the obvious compactification $\mathbb{P}^{r-1}$ of $\mathbb{C}^{r-1}$. Then the variety of critical points of $\widetilde{U}$ is the subvariety considered above,
$$\widetilde{X}(U) \subseteq \mathbb{P}^{r-1}\times\mathbb{P}^{n-2}.$$
The reader is invited to compare the formula of Corollary 3.11 with its cohomology version [DGS12, Theorem 1.1] for essential central arrangements.
### 2.3 Nonvanishing at the boundary
Let $H_0, H_1,\dots,H_n$ be the torus-invariant hyperplanes in $\mathbb{P}^n$ defined by the homogeneous coordinates $z_0, z_1,\dots,z_n$. Fix the open embedding
$$\iota: (\mathbb{C}^*)^n \longrightarrow \mathbb{P}^n,$$
defined by the ratios $z_1/z_0,\dots,z_n/z_0$. Let $V$ be the closure of $U$ in $\mathbb{P}^n$, and choose a simple normal crossing resolution of singularities $\pi: \widetilde{V}\to V$,
where $\pi$ is an isomorphism over $U$, $\widetilde{V}$ is smooth, and $D = \widetilde{V}\setminus U$ is a simple normal crossing divisor with the irreducible components $D_j$. Our goal is to show that a sufficiently general differential form on $\widetilde{V}$ with logarithmic poles along $D$ has a zero scheme which is a finite set of reduced points in $U$.
Each $f_i$ defines a rational function on $\widetilde{V}$ which is regular on $U$. We have that
$$\operatorname{ord}_{D_j}(f_i)\ \text{is}\ \begin{cases}\text{positive} & \text{if } \pi(D_j)\nsubseteq H_0 \text{ and } \pi(D_j)\subseteq H_i,\\[2pt] \text{negative} & \text{if } \pi(D_j)\subseteq H_0 \text{ and } \pi(D_j)\nsubseteq H_i.\end{cases}$$
###### Lemma 2.3.
For each $j$, there is an $i$ such that $\operatorname{ord}_{D_j}(f_i)$ is nonzero.
###### Proof.
Since $D_j$ is irreducible, $\pi(D_j)$ is contained in some $H_i$. The assertion follows from the following set-theoretic reasoning:
1. If $\pi(D_j)\subseteq H_0$, then $\pi(D_j)\nsubseteq H_i$ for some $i$ because $H_0\cap H_1\cap\cdots\cap H_n = \emptyset$.
2. If $\pi(D_j)\nsubseteq H_0$, then $\pi(D_j)\subseteq H_i$ for some $i$ because $\pi(D_j)$ is contained in the boundary $H_0\cup H_1\cup\cdots\cup H_n$.
An integral vector $u \in \mathbb{Z}^n$ defines a rational function on $\widetilde{V}$
$$\varphi_u = \prod_{i=1}^{n} f_i^{u_i},$$
which is regular on $U$. Note that for each $j$,
$$\operatorname{ord}_{D_j}(\varphi_u) = \sum_{i=1}^{n} u_i\cdot\operatorname{ord}_{D_j}(f_i).$$
Combining this with Lemma 2.3, we have the following result.
###### Lemma 2.4.
For a sufficiently general $u$, $\operatorname{ord}_{D_j}(\varphi_u)$ is nonzero for all $j$.
More precisely, there are corank-one subgroups $M_j$ of $\mathbb{Z}^n$ such that $\operatorname{ord}_{D_j}(\varphi_u)$ is nonzero for every $j$ and every $u \notin \bigcup_j M_j$.
Consider the sheaf of logarithmic differential one-forms $\Omega^1_{\widetilde{V}}(\log D)$, where $D = \widetilde{V}\setminus U$. For the definition and the needed properties of the sheaf of logarithmic differential one-forms, we refer to [Del70, Sai80]. We note that $\Omega^1_{\widetilde{V}}(\log D)$ is a locally free sheaf of rank $r$, and the rational function $\varphi_u$ defines a global section
$$\operatorname{dlog}(\varphi_u) = \sum_{i=1}^{n} u_i\cdot\operatorname{dlog}(f_i) \in H^0\big(\widetilde{V},\, \Omega^1_{\widetilde{V}}(\log D)\big).$$
###### Lemma 2.5.
For a sufficiently general $u$, $\operatorname{dlog}(\varphi_u)$ does not vanish on $D$.
###### Proof.
Given a point $x \in D$, let $D_1,\dots,D_l$ be the irreducible components of $D$ containing $x$, and let $g_1,\dots,g_l$ be local defining equations on a small neighborhood $O$ of $x$. Clearly, $l$ is at least $1$. By replacing $O$ with a smaller neighborhood if necessary, we may assume that $\Omega^1_{\widetilde{V}}(\log D)$ trivializes over $O$, and that
$$\varphi_u = g_1^{o_1}\cdots g_l^{o_l}\, h \quad\text{where}\quad o_j = \operatorname{ord}_{D_j}(\varphi_u)$$
for some nonvanishing holomorphic function $h$ on $O$. Over the open set $O$, we have
$$\operatorname{dlog}(\varphi_u) = \Big(\sum_{j=1}^{l}\operatorname{ord}_{D_j}(\varphi_u)\cdot\operatorname{dlog}(g_j)\Big) + \psi,$$
where $\psi$ is a regular differential one-form. Since the $\operatorname{dlog}(g_j)$ form part of a free basis of a trivialization of $\Omega^1_{\widetilde{V}}(\log D)$ over $O$, it follows from Lemma 2.4 that $\operatorname{dlog}(\varphi_u)$ does not vanish on $O\cap D$ for a sufficiently general $u$. ∎
### 2.4 Proof of Theorem 1
Let $X(U)$ be the variety of critical points of $U$ in $\widetilde{V}\times\mathbb{P}(W)$, where $W$ is the complex vector space $M_U\otimes_{\mathbb{Z}}\mathbb{C}$ as before. Write $W_{\widetilde{V}}$ for (the sheaf of sections of) the trivial vector bundle over $\widetilde{V}$ with the fiber $W$, and consider the homomorphism defined by evaluation of the logarithmic differential forms
$$\Psi: W_{\widetilde{V}} \longrightarrow \Omega^1_{\widetilde{V}}(\log D), \qquad (x, u) \longmapsto \operatorname{dlog}(\varphi_u)(x) := \sum_{i=1}^{n} u_i\cdot\operatorname{dlog}(f_i)(x).$$
We do not attempt to give an individual meaning to the multivalued master function $\varphi_u$ when $u$ is not integral. Let $\mathrm{pr}_1$ and $\mathrm{pr}_2$ be the two projections from $\widetilde{V}\times\mathbb{P}(W)$, and define the incidence variety of the evaluation $\Psi$ as the locus $\{(x,u) \in \widetilde{V}\times\mathbb{P}(W) \mid \operatorname{dlog}(\varphi_u)(x) = 0\}$.
We drop the subscript from $X_{\widetilde{V}}(U)$ when there is no danger of confusion. By definition, $X(U)$ is the unique irreducible component of the incidence variety which dominates $\widetilde{V}$ under $\mathrm{pr}_1$.
Then, for a sufficiently general $u$,
1. $\mathrm{pr}_2^{-1}(u)$ is contained in $U\times\mathbb{P}(W)$, by Lemma 2.5, and
2. $\mathrm{pr}_2^{-1}(u)$ is a finite set of reduced points, by the Bertini theorem applied to $\mathrm{pr}_2$ on $X(U)$ [Jou83, Theoreme 6.10].
More precisely, there is a nonempty Zariski open subset of $\mathbb{P}(W)$ such that the two assertions are valid for any element of the infinite set of integral $u$ contained in it.
It follows that the zero scheme of the section $\operatorname{dlog}(\varphi_u)$,
$$\mathrm{pr}_1\big(\mathrm{pr}_2^{-1}(u)\big) = \{x\in\widetilde{V} \mid \operatorname{dlog}(\varphi_u)(x) = 0\},$$
is a finite set of reduced points in $U$. The smoothness of $\widetilde{V}$ implies that the section is regular, and the above set represents the homology class $c_r\big(\Omega^1_{\widetilde{V}}(\log D)\big)\cap[\widetilde{V}]$ [Ful98, Example 3.2.16]. Since all the $f_i$ are nonvanishing on $U$, we may identify the critical points of $\varphi_u$ with the zero scheme of $\operatorname{dlog}(\varphi_u)$.
Therefore all the critical points of $\varphi_u$ are nondegenerate, and the number of critical points is equal to the degree of the top Chern class of $\Omega^1_{\widetilde{V}}(\log D)$. Finally, from the logarithmic Poincaré-Hopf theorem [Kaw78, Nor78, Sil96], we have
$$\int_{\widetilde{V}} c_r\big(\Omega^1_{\widetilde{V}}(\log D)\big) = (-1)^r\int_{\widetilde{V}} c_r\big(\Omega^1_{\widetilde{V}}(\log D)^{\vee}\big) = (-1)^r\chi(U).$$
## 3 Deletion-restriction for very affine varieties
In this section we formulate the deletion-restriction for the characteristic polynomial of a hyperplane arrangement in the very affine setting. The role of the characteristic polynomial will be played by a characteristic class for very affine varieties.
This point of view is particularly satisfactory for very affine varieties satisfying a genericity condition at infinity, and gives new insights on the positivity of the coefficients of the characteristic polynomial. For complements of hyperplane arrangements, we recover the geometric formula for the characteristic polynomial of Denham et al. [DGS12, Theorem 1.1].
Let $U$ be a very affine variety, and let $U_0$ be the hypersurface of $U$ defined by the vanishing of a regular function. The complement $U_1 = U\setminus U_0$ is a very affine variety, being a principal affine open subset of a very affine variety.
###### Definition 3.1.
A triple of very affine varieties is a collection of the form $(U_1, U, U_0)$.
In the language of hyperplane arrangements, $U_1$ corresponds to the arrangement obtained by deleting the distinguished hyperplane from the arrangement corresponding to $U$. For this reason, we call $U_1$ the deletion of $U$ and call $U_0$ the restriction of $U$. As in the case of hyperplane arrangements, we have
$$\dim U_1 = \dim U \quad\text{and}\quad \dim U_0 = \dim U - 1.$$
By the set-theoretic additivity of the topological Euler characteristics of complex algebraic varieties [Ful93, Section 4.5], we have
$$\chi(U) = \chi(U_1) + \chi(U_0).$$
Therefore, by Theorem 1, the maximum likelihood degrees of a triple of smooth very affine varieties satisfy an additive formula. Write $\mathrm{ML}(U)$ for the maximum likelihood degree of a very affine variety $U$, i.e. the number of critical points of a master function of $U$ with sufficiently general exponents.
###### Corollary 3.2.
If $(U_1, U, U_0)$ is a triple of smooth very affine varieties, then
$$\mathrm{ML}(U) = \mathrm{ML}(U_1) - \mathrm{ML}(U_0).$$
It would be interesting to obtain a direct justification of Corollary 3.2.
###### Remark 3.3.
Let us call a very affine variety primitive if it does not have any deletion. For example, the complement of an essential hyperplane arrangement is primitive if and only if it is the complement of a Boolean arrangement. Note that this is the case exactly when the complement is isomorphic to its intrinsic torus. A distinguished feature of the deletion-restriction for very affine varieties, when compared to that for hyperplane arrangements, is that there are primitive very affine varieties which are not isomorphic to a torus. These very affine varieties are responsible for the noncombinatorial aspect of the extended theory.
When $(U_1, U, U_0)$ is a triple of hyperplane arrangement complements, Corollary 3.2 is the deletion-restriction formula for the Möbius invariant $(-1)^r\chi_{\mathcal{A}}(1)$, where $\chi_{\mathcal{A}}(q)$ is the characteristic polynomial of an affine hyperplane arrangement $\mathcal{A}$. The full deletion-restriction formula between the characteristic polynomials can be formulated in terms of the Chern-Schwartz-MacPherson (CSM) class [Mac74]. Below we give a brief description of the CSM class; Aluffi provides a gentle introduction in [Alu05].
Recall that the group of constructible functions of an algebraic variety $X$ is generated by functions of the form $\mathbf{1}_Z$, where $Z$ is a subvariety of $X$. If $f: X\to Y$ is a morphism between complex algebraic varieties, then the pushforward of constructible functions is defined by the homomorphism
$$f_*: C(X) \longrightarrow C(Y), \qquad \mathbf{1}_Z \longmapsto \big(p \mapsto \chi(f^{-1}(p)\cap Z)\big).$$
If $X$ is a compact complex manifold, then the characteristic class of $X$ is the Chern class of the tangent bundle $c(TX)\cap[X] \in A_*(X)$, where $A_*(X)$ is the Chow homology group of $X$ (see [Ful98]). A generalization is provided by the Chern-Schwartz-MacPherson class, whose existence was once a conjecture of Deligne and Grothendieck. For a construction with emphasis on smooth and possibly noncompact varieties, see [Alu06b].
Let $C$ be the functor of constructible functions from the category of complex algebraic varieties (with proper morphisms) to the category of abelian groups.
###### Definition 3.4.
The CSM class is the unique natural transformation
$$c_{\mathrm{SM}}: C \longrightarrow A_*$$
such that $c_{\mathrm{SM}}(\mathbf{1}_X) = c(TX)\cap[X]$ when $X$ is smooth and complete.
The uniqueness follows from the naturality, the resolution of singularities, and the normalization for smooth and complete varieties. The CSM class satisfies the inclusion-exclusion relation
$$c_{\mathrm{SM}}(\mathbf{1}_{U\cup U'}) = c_{\mathrm{SM}}(\mathbf{1}_{U}) + c_{\mathrm{SM}}(\mathbf{1}_{U'}) - c_{\mathrm{SM}}(\mathbf{1}_{U\cap U'})$$
and captures the Euler characteristic as its degree
$$\chi(U) = \int c_{\mathrm{SM}}(\mathbf{1}_U).$$
When $U$ is the complement of an arrangement $\mathcal{A}$ of hyperplanes in $\mathbb{C}^r$, the CSM class of $U$ is given by the characteristic polynomial of $\mathcal{A}$. For the definition of the characteristic polynomial of an affine arrangement, see [OT92, Definition 2.52].
###### Theorem 3.5.
Let $\mathbb{P}^r$ be the compactification of $\mathbb{C}^r$ defined by adding the hyperplane at infinity. Then
$$c_{\mathrm{SM}}(\mathbf{1}_U) = \sum_{i=0}^{r}(-1)^i v_i\,[\mathbb{P}^{r-i}] \in A_*(\mathbb{P}^r),$$
where
$$\chi_{\mathcal{A}}(q+1) = \sum_{i=0}^{r}(-1)^i v_i\, q^{r-i}.$$
This is because the recursive formula for a triple of arrangement complements
$$c_{\mathrm{SM}}(\mathbf{1}_{U_1}) = c_{\mathrm{SM}}(\mathbf{1}_{U} - \mathbf{1}_{U_0}) = c_{\mathrm{SM}}(\mathbf{1}_{U}) - c_{\mathrm{SM}}(\mathbf{1}_{U_0}),$$
agrees with the usual deletion-restriction formula
$$\chi_{\mathcal{A}_1}(q+1) = \chi_{\mathcal{A}}(q+1) - \chi_{\mathcal{A}_0}(q+1)$$
(see [OT92, Theorem 2.56]). The induction is on the dimension and on the number of hyperplanes. The case of no hyperplanes involves a direct computation of $c_{\mathrm{SM}}(\mathbf{1}_{\mathbb{C}^r})$ by the inclusion-exclusion formula, and the case of dimension zero is a special case of the equality
$$\chi(U) = \int c_{\mathrm{SM}}(\mathbf{1}_U) = \chi_{\mathcal{A}}(1).$$
See [Alu12, Theorem 1.2] and also [Huh12, Remark 26].
Our goal is to relate the variety of critical points to the CSM class. If we restrict our attention to the degree of the CSM class, then the relation recovers the conclusion stated in Varchenko’s conjecture for the Euler characteristic. We prove this for a class of very affine varieties satisfying a genericity condition at infinity.
The genericity condition is commonly expressed using the language of tropical compactifications. If $U$ is a subvariety of an algebraic torus $T$, then we consider the closures $\overline{U}$ of $U$ in various (not necessarily complete) normal toric varieties $X$ of $T$. The closure $\overline{U}$ is complete if and only if the support of the fan of $X$ contains the tropicalization of $U$ [Tev07, Proposition 2.3]. We say that $\overline{U}$ is a tropical compactification of $U$ if it is complete and the multiplication map
$$m: T\times\overline{U} \longrightarrow X, \qquad (t, x) \longmapsto tx$$
is flat and surjective. Tropical compactifications exist, and they are obtained from toric varieties defined by sufficiently fine fan structures on the tropicalization of $U$ [Tev07, Section 2].
###### Definition 3.6.
We say that $U$ is schön if the multiplication $m$ is smooth for some tropical compactification of $U$.
Equivalently, $U$ is schön if the multiplication $m$ is smooth for every tropical compactification of $U$ [Tev07, Theorem 1.4].
###### Remark 3.7.
There are two classes of schön very affine varieties that are of particular interest. The first is the class of complements of essential hyperplane arrangements, and the second is the class of nondegenerate hypersurfaces [Tev07]. What we need from the schön hypothesis is the existence of a simple normal crossings compactification which admits sufficiently many logarithmic differential one-forms. For arrangement complements, such a compactification is provided by the wonderful compactification of De Concini and Procesi [DP95]. For nondegenerate hypersurfaces, and more generally for nondegenerate complete intersections, the needed compactification has been constructed by Khovanskii [Hov77].
Let $U \subseteq T_U$ be a very affine variety of dimension $r$, where $T_U$ is the intrinsic torus of Section 2.1. Let $V$ be the closure of $U$ in $\mathbb{P}^n$, where $\mathbb{P}^n$ is a fixed toric compactification of $T_U$. We follow Section 2.2 and define the variety of critical points
$$X(U) \subseteq V\times\mathbb{P}(W) \subseteq \mathbb{P}^n\times\mathbb{P}^{n-1} \quad\text{where}\quad W = M_U\otimes_{\mathbb{Z}}\mathbb{C}.$$
###### Theorem 3.8.
Suppose that $U$ is schön and not isomorphic to a torus. Then
$$[X(U)] = \sum_{i=0}^{r} v_i\,[\mathbb{P}^{r-i}\times\mathbb{P}^{n-1-r+i}] \in A_*(\mathbb{P}^n\times\mathbb{P}^{n-1}),$$
where
$$c_{\mathrm{SM}}(\mathbf{1}_U) = \sum_{i=0}^{r}(-1)^i v_i\,[\mathbb{P}^{r-i}] \in A_*(\mathbb{P}^n).$$
###### Proof.
We prove a slightly more general statement. Let $V$ be a compactification of $U$ obtained by taking the closure in a toric variety of $T_U$. We will prove the equality
$$c_{\mathrm{SM}}(\mathbf{1}_U) = \sum_{i=0}^{r}(-1)^i\, \mathrm{pr}_{1*}\big[\mathrm{pr}_2^{-1}(\mathbb{P}^{r-i})\cap X(U)\big] \in A_*(V),$$
where $\mathrm{pr}_1$ and $\mathrm{pr}_2$ are the two projections from $V\times\mathbb{P}(W)$ and $\mathbb{P}^{r-i}$ is a sufficiently general linear subspace of $\mathbb{P}(W)$ of the indicated dimension. The projection formula shows that this implies the stated version when $V$ is the closure of $U$ in $\mathbb{P}^n$.
If $U$ is a schön very affine variety, then there is a tropical compactification of $U$ which has a simple normal crossings boundary divisor. More precisely, there is a smooth toric variety $X$ of $T_U$, obtained by taking a sufficiently fine fan structure on the tropicalization of $U$, such that the closure $\widetilde{V}$ of $U$ in $X$ is a smooth and complete variety with the simple normal crossings divisor $\widetilde{V}\setminus U$ [Hac08, Proof of Theorem 2.5].
By taking a further subdivision of the fan of $X$ if necessary, we may assume that there is a toric morphism $X\to\mathbb{P}^n$ preserving $T_U$. By the functoriality of the CSM class, we have
$$A_*(\widetilde{V}) \longrightarrow A_*(V), \qquad c_{\mathrm{SM}}(\mathbf{1}_U) \longmapsto c_{\mathrm{SM}}(\mathbf{1}_U).$$
Note also that the class $[X_{\widetilde{V}}(U)]$ maps to $[X_V(U)]$ under the induced map $A_*(\widetilde{V}\times\mathbb{P}(W)) \to A_*(V\times\mathbb{P}(W))$.
By the projection formula, the problem is reduced to the case where $V = \widetilde{V}$.
In this case, we have the following exact sequence induced by the restriction of logarithmic differential one-forms:
$$0 \longrightarrow N^{\vee}_{V/X} \longrightarrow \Omega^1_X(\log X\setminus T_U)\big|_V \longrightarrow \Omega^1_V(\log V\setminus U) \longrightarrow 0.$$
Note that the middle term is isomorphic to the trivial vector bundle over $V$ with the fiber $W$ [Ful93, Section 4.3]. Under this identification, the restriction of the differential one-forms is the evaluation map of Section 2.4,
$$\Psi: W_V \longrightarrow \Omega^1_V(\log V\setminus U).$$
It follows that the latter sequence is exact, and the projectivization of the kernel
$$I(U) = \{(x, u)\in V\times\mathbb{P}(W) \mid \operatorname{dlog}(\varphi_u)(x) = 0\}$$
coincides with the variety of critical points $X(U)$. Since the pullback of $\mathcal{O}_{\mathbb{P}(W)}(1)$ to $X(U)$ is the canonical line bundle, we have
$$\sum_{i=0}^{r}\mathrm{pr}_{1*}\big[\mathrm{pr}_2^{-1}(\mathbb{P}^{r-i})\cap X(U)\big] = \mathrm{pr}_{1*}\Big(\sum_{i=0}^{r} c_1\big(\mathrm{pr}_2^{*}\,\mathcal{O}_{\mathbb{P}(W)}(1)\big)^{n-1-r+i}\cap[X(U)]\Big) = s\big(N^{\vee}_{V/X}\big)\cap[V] = c\big(\Omega^1_V(\log V\setminus U)\big)\cap[V].$$
Here $n$ is the dimension of the toric variety $X$, and the last equality is the Whitney sum formula. Now the assertion follows from the fact that the CSM class of a smooth variety is the Chern class of the logarithmic tangent bundle; that is,
$$c_{\mathrm{SM}}(\mathbf{1}_U) = c\big(\Omega^1_V(\log V\setminus U)^{\vee}\big)\cap[V].$$
This follows from a construction of the CSM class which is most natural from the point of view of this paper [Alu06b, Section 4]. For precursors, see [Alu99, Theorem 1] and also [GP02, Proposition 15.3]. ∎
###### Remark 3.9.
In a refined form, the above proof shows that the equality
$$c_{\mathrm{SM}}(\mathbf{1}_U) = \sum_{i=0}^{r}(-1)^i\,\mathrm{pr}_{1*}\big[\mathrm{pr}_2^{-1}(\mathbb{P}^{r-i})\cap X(U)\big]$$
holds in the $T_U$-equivariant proChow group of $U$ [Alu06a, Alu06b]. This removes the dependence on the compactification from Theorem 2. It should not be expected that the equality holds in the ordinary proChow group of $U$.
###### Remark 3.10.
Theorem 3.8 may fail to hold for a smooth very affine variety. As an example, consider a smooth hypersurface $U$ in $(\mathbb{C}^*)^3$ whose closure in $\mathbb{P}^3$ has a node at a torus orbit of codimension $2$. A direct computation on a simple normal crossings compactification of $U$ shows that the incidence variety has a two-dimensional component other than $X(U)$, and hence the classes of the incidence variety and $X(U)$ are different in $A_*(\mathbb{P}^3\times\mathbb{P}^2)$.
For complements of hyperplane arrangements, Theorem 3.8 gives a geometric formula for the characteristic polynomial [DGS12, Theorem 1.1]. Let $U$ be the complement of an arrangement $\mathcal{A}$ of $n$ distinct hyperplanes
$$\mathcal{A} = \{f_1 = 0\}\cup\cdots\cup\{f_n = 0\} \subset \mathbb{C}^r.$$
Then $U$ is a very affine variety if and only if $\mathcal{A}$ is an essential arrangement, meaning that the lowest-dimensional intersections of the hyperplanes of $\mathcal{A}$ are isolated points. Indeed, taking one of the isolated points as the origin of $\mathbb{C}^r$ and choosing $r$ linearly independent hyperplanes intersecting at that point reveals $U$ to be a principal affine open subset of $(\mathbb{C}^*)^r$. For the converse, write $U$ as a product $U'\times\mathbb{C}^l$, where $U'$ is the complement of an essential arrangement, and note that the affine line $\mathbb{C}$ does not admit a closed embedding into an algebraic torus.
Suppose from now on that $\mathcal{A}$ is an essential arrangement. Then $\mathcal{A}$ is a Boolean arrangement if and only if $U$ is isomorphic to an algebraic torus. The equations of the hyperplanes define a closed embedding
$$f: U \longrightarrow (\mathbb{C}^*)^n \simeq T_U, \qquad f = (f_1,\dots,f_n).$$
The indicated isomorphism follows from the linear independence of the $f_i$ in $M_U$. We fix the open embedding $\iota: (\mathbb{C}^*)^n \to \mathbb{P}^n$ defined by the ratios of homogeneous coordinates $z_1/z_0,\dots,z_n/z_0$. Then the closure of $U$ in $\mathbb{P}^n$ is a linear subspace $V \simeq \mathbb{P}^r$. In this setting, combining Theorems 3.5 and 3.8 gives the following statement.
###### Corollary 3.11.
Suppose that $\mathcal{A}$ is not a Boolean arrangement. Then
$$[X(U)] = \sum_{i=0}^{r} v_i\,[\mathbb{P}^{r-i}\times\mathbb{P}^{n-1-r+i}] \in A_*(\mathbb{P}^r\times\mathbb{P}^{n-1}),$$
where
$$\chi_{\mathcal{A}}(q+1) = \sum_{i=0}^{r}(-1)^i v_i\, q^{r-i}.$$
###### Remark 3.12.
A sequence $e_0, e_1, \dots, e_k$ of integers is said to be log-concave if $e_{i-1}e_{i+1} \le e_i^2$ for all $i$, and it is said to have no internal zeros if the indices of the nonzero elements are consecutive integers. Write a homology class $\xi$ as the linear combination
$$\xi = \sum_i e_i\,[\mathbb{P}^{k-i}\times\mathbb{P}^{i}].$$
It can be shown that some multiple of $\xi$ is the fundamental class of an irreducible subvariety if and only if the $e_i$ form a log-concave sequence of nonnegative integers with no internal zeros [Huh12, Theorem 21].
Therefore, by Theorem 3.8, the $v_i$ of a schön very affine variety form a log-concave sequence of nonnegative integers with no internal zeros. In particular, the coefficients of $\chi_{\mathcal{A}}(q+1)$ form a sequence with the three properties. This strengthens the previous result that the coefficients of $\chi_{\mathcal{A}}(q)$ form a log-concave sequence [Huh12, Theorem 3], and answers several questions on sequences associated to a matroid, for matroids representable over a field of characteristic zero:
1. Read's conjecture predicts that the coefficients of the chromatic polynomial of a graph form a unimodal sequence [Rea68]. This follows from the log-concavity of the coefficients when $\mathcal{A}$ is the graphic arrangement of a given graph [Huh12].
2. Hoggar's conjecture predicts that the coefficients of the chromatic polynomial of a graph form a strictly log-concave sequence [Hog74]. This follows from the log-concavity of the coefficients when $\mathcal{A}$ is the graphic arrangement of a given graph [Huh].
3. Welsh's conjecture predicts that the $f$-vector of a matroid complex forms a unimodal sequence [Wel69]. This follows from the log-concavity of the $f$-vector when $\mathcal{A}$ is an arrangement corresponding to the cofree extension of a given matroid [Len].
4. Dawson's conjecture predicts that the $h$-vector of a matroid complex forms a log-concave sequence [Daw84]. This follows from the log-concavity of the $h$-vector when $\mathcal{A}$ is an arrangement corresponding to the cofree extension of a given matroid [Huh].
For details on the derivation of the above variations, see [Huh].
###### Remark 3.13.
The characteristic class approach to Varchenko’s conjecture and the generalized deletion-restriction have been pioneered by Damon for nonlinear arrangements on smooth complete intersections [Dam99, Dam00]. In fact, it can be shown that Damon’s higher multiplicities are the degrees of the CSM class of the arrangement complement. See [Huh12] for the connection between the two, for nonlinear arrangements on a projective space.
The CSM point of view successfully deals with several problems considered by Damon in [Dam99, Dam00]. In particular, an affirmative answer to the conjecture in [Dam00, Remark 2.6] follows from the product formula
$$c_{\mathrm{SM}}(\mathbf{1}_{U_1}\otimes\mathbf{1}_{U_2}) = c_{\mathrm{SM}}(\mathbf{1}_{U_1})\otimes c_{\mathrm{SM}}(\mathbf{1}_{U_2}) \in A_*(\overline{U}_1\times\overline{U}_2).$$
The above is a refinement of the product formula of Kwieciński in [Kwi92], and can be viewed as a generalization of the product formula for the Euler characteristic,
$$\chi(U_1\times U_2) = \chi(U_1)\,\chi(U_2);$$
see [Alu06a, Théorème 4.1].
## 4 Nondegenerate hypersurfaces
A nondegenerate Laurent polynomial defines a hypersurface in an algebraic torus which admits a tropical compactification with a simple normal crossings boundary divisor [Hov77, Section 2]. A sufficiently general Laurent polynomial with the given Newton polytope is nondegenerate, and the corresponding hypersurface is schön.
We show that the variety of critical points of a hypersurface defined by a nondegenerate Laurent polynomial is controlled by the Newton polytope. This gives a formula for the CSM class in terms of the Newton polytope, which specializes to Kouchnirenko’s theorem equating the Euler characteristic with the signed volume of the Newton polytope [Kou76].
Let $g$ be a nonzero Laurent polynomial in $n$ variables
$$g = \sum_u c_u x^u \in \mathbb{C}[x_1^{\pm 1},\dots,x_n^{\pm 1}].$$
We are interested in the CSM class of the very affine variety
$$U = \{g = 0\} \subseteq (\mathbb{C}^*)^n.$$
The Newton polytope of $g$, denoted by $\Delta_g$, is the convex hull of exponents $u$ with nonzero coefficient $c_u$. Write $g_F$ for the Laurent polynomial made up of those terms of $g$ which lie in a face $F$ of the Newton polytope. We say that $g$ is nondegenerate if $dg_F$ is nonvanishing on $\{g_F = 0\} \subseteq (\mathbb{C}^*)^n$ for every face $F$ of its Newton polytope.
We follow the convention of [CLO98, Chapter 7] and write $\mathrm{MV}_n$ for the $n$-dimensional mixed volume. For example, the $n$-dimensional standard simplex $\Delta$ in $\mathbb{R}^n$ has normalized volume $1$.
In view of later applications to projective hypersurfaces, we state our result for a fixed compactification $(\mathbb{C}^*)^n \subseteq \mathbb{P}^n$, where the open embedding is defined by the ratios of homogeneous coordinates $z_1/z_0,\dots,z_n/z_0$. The formulation for other toric compactifications and the extension to complete intersections are left to the interested reader.
###### Theorem 4.1.
Let $g$ be a nonzero Laurent polynomial in $n$ variables with
$$c_{\mathrm{SM}}(\mathbf{1}_U) = \sum_{i=0}^{r}(-1)^i v_i\,[\mathbb{P}^{r-i}] \in A_*(\mathbb{P}^n).$$
If $g$ is nondegenerate, then
$$v_i = \mathrm{MV}_n(\underbrace{\Delta,\dots,\Delta}_{r-i},\ \underbrace{\Delta_g,\dots,\Delta_g}_{i+1}) \quad\text{for } i = 0,\dots,r.$$
In particular, the maximum likelihood degree of $U$ is equal to the normalized volume
$$v_r = (-1)^r\int c_{\mathrm{SM}}(\mathbf{1}_U) = \mathrm{Volume}(\Delta_g).$$
Since the degree of $c_{\mathrm{SM}}(\mathbf{1}_U)$ is the Euler characteristic $\chi(U)$, this recovers Kouchnirenko's theorem [Kou76, Théorème IV].
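For a concrete instance (my example, not from the paper): if $g$ is a general polynomial of degree $d$ in $n$ variables, then $\Delta_g = d\Delta$, so the maximum likelihood degree of $U$ is
$$v_r = \mathrm{Volume}(d\Delta) = d^{\,n}.$$
For $n = 2$ this matches a direct count: a general degree-$d$ curve in $(\mathbb{C}^*)^2$ has genus $(d-1)(d-2)/2$ and $3d$ punctures, hence $\chi(U) = 2 - (d-1)(d-2) - 3d = -d^2$.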
###### Proof.
In any case, $c_{\mathrm{SM}}(\mathbf{1}_{(\mathbb{C}^*)^n})$ is the fundamental class of the ambient smooth and complete toric variety. Therefore we may assume that $U$ is nonempty and compute the CSM class of $\mathbf{1}_{(\mathbb{C}^*)^n} - \mathbf{1}_U$ instead.
Fix a sufficiently fine subdivision of the fan of $\mathbb{P}^n$ on which the support function of $\Delta_g$ is piecewise linear. We may assume that the corresponding toric variety $X$ is smooth and the closure $V$ of $U$ in $X$ has simple normal crossings with the toric boundary divisor $D$ [Hov77, Section 2]. Note in this case that
$$c_{\mathrm{SM}}(\mathbf{1}_{(\mathbb{C}^*)^n}) - c_{\mathrm{SM}}(\mathbf{1}_U) = c\big(\Omega^1_X(\log D\cup V)^{\vee}\big)\cap[X] \in A_*(X).$$
In order to compute the right-hand side, we use the Poincaré-Leray residue map
$$\mathrm{rés}: \Omega^1_X(\log D\cup V) \longrightarrow \mathcal{O}_V, \qquad \eta\cdot\operatorname{dlog}(z) + \psi \longmapsto \eta|_V,$$
where $z$ is a local defining equation for $V$ and $\psi$ is a rational differential one-form which does not have poles along $V$. The restriction of $\eta$ on $V$ is uniquely and globally determined, and in particular it does not depend on the choice of $z$. Note that the residue map fits into the exact sequence
$$0 \longrightarrow \Omega^1_X(\log D) \longrightarrow \Omega^1_X(\log D\cup V) \longrightarrow \mathcal{O}_V \longrightarrow 0.$$
Since $\Omega^1_X(\log D)$ is a trivial vector bundle, the Whitney sum formula shows that
$$c\big(\Omega^1_X(\log D\cup V)^{\vee}\big)\cap[X] = \sum_{i=0}^{n}(-1)^i c_1\big(\mathcal{O}_X(V)\big)^i\cap[X].$$
Therefore, by the projection formula applied to the birational map $X \to \mathbb{P}^n$, we have
$$c_{\mathrm{SM}}(\mathbf{1}_{(\mathbb{C}^*)^n}) - c_{\mathrm{SM}}(\mathbf{1}_U) = \sum_{i=0}^{n}(-1)^i\,(H^{n-i}\cdot V^i)\,[\mathbb{P}^{n-i}] \in A_*(\mathbb{P}^n).$$
https://brilliant.org/problems/finding-polynomial/ | # Finding Polynomial
Algebra Level 4
Let $P (x) = x^{3}-3x+1$. Find the polynomial $Q$ whose roots are the fifth powers of the roots of $P$.
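One mechanical way to get $Q$ (a sketch, not an official solution) is a resultant that eliminates $x$ between $P(x)=0$ and $y=x^5$:

```python
import sympy as sp

x, y = sp.symbols('x y')
P = x**3 - 3*x + 1
# Q(y) = Res_x(P(x), y - x^5) vanishes exactly at y = a^5 for each root a of P
Q = sp.resultant(P, y - x**5, x)
print(sp.expand(Q))  # a cubic in y; by Newton's identities it should come out,
                     # up to sign, as y**3 + 15*y**2 - 198*y + 1
```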
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 7, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.529354453086853, "perplexity": 2224.346714096981}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251678287.60/warc/CC-MAIN-20200125161753-20200125190753-00174.warc.gz"}
https://leons.im/posts/the-formula-of-annual-leave-days/ | # The Formula of Annual Leave Days
Company D recently changed the policy of annual leave days, which confused many people. So I make up these mathematical formulas to make it more clear.
Assume Amy’s entry date is year $$x_0$$, month $$y_0$$, day $$z_0$$, then $$F(x)$$, the annual leave days for year $$x$$ is
$$F(x) = \dfrac{\left\lfloor{2\left(f(x)+\dfrac{1}{4}\right)}\right\rfloor}{2}$$
$$f(x) = \begin{cases} \dfrac{(x - x_0 + 9)(y_0 - 1) + (x - x_0 + 10)(12 - y_0 + 1)}{12}, &\text{if x > x_0;}\\ \dfrac{10(13 - y_0)}{12}, &\text{if x = x_0 , z_0 <= 15;}\\ \dfrac{10(12 - y_0)}{12}, &\text{if x = x_0, z_0 > 15;}\\ \end{cases}$$
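A quick Python sketch of these two definitions (my reading of the notation, so treat names and edge cases as assumptions; the rounding $\lfloor 2(f+\tfrac{1}{4})\rfloor/2$ is just rounding to the nearest half day):

```python
import math

def f(x, x0, y0, z0):
    """Raw (unrounded) annual leave for year x, given entry date (x0, y0, z0)."""
    if x > x0:
        return ((x - x0 + 9) * (y0 - 1) + (x - x0 + 10) * (13 - y0)) / 12
    if z0 <= 15:                      # joined in the first half of the month
        return 10 * (13 - y0) / 12
    return 10 * (12 - y0) / 12

def F(x, x0, y0, z0):
    """Annual leave days for year x, rounded to the nearest half day."""
    return math.floor(2 * (f(x, x0, y0, z0) + 0.25)) / 2
```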
The current available annual leave days $$G(x, y, z)$$, for year $$x$$, month $$y$$, day $$z$$, is
$$G(x, y, z) = \dfrac{\left\lfloor2\left(g(x, y, z) + \dfrac{1}{4}\right)\right\rfloor}{2} + M(x, y, z) - N$$
$$g(x, y, z) = \begin{cases} \dfrac{f(x)(y - 1)}{12}, &\text{if x > x_0, z is not the last day of month y;}\\ \dfrac{f(x)y}{12}, &\text{if x > x_0, z is the last day of month y;}\\ \max(0, \dfrac{10(y - y_0 - 1)}{12}), &\text{if x = x_0, z_0 > 15, z is not the last day of month y;}\\ \dfrac{10(y - y_0)}{12}, &\text{if x = x_0, z_0 <= 15, z is not the last day of month y;}\\ \dfrac{10(y - y_0)}{12}, &\text{if x = x_0, z_0 > 15, z is the last day of month y;}\\ \dfrac{10(y - y_0 + 1)}{12}, &\text{if x = x_0, z_0 <= 15, z is the last day of month y;}\\ \end{cases}$$
$$M(x, y, z) = \begin{cases} G(x - 1, 12, 31), &\text{if (y, z) <= (6, 30);}\\ 0, &\text{if (y, z) > (6, 30);}\\ \end{cases}$$
$$N = \text{annual leave days already used this year}$$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5499334931373596, "perplexity": 4957.445383687501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948551162.54/warc/CC-MAIN-20171214222204-20171215002204-00739.warc.gz"}
https://www.physicsforums.com/threads/rotational-kinematics-of-a-turntable.727715/ | # Rotational Kinematics of a turntable
1. Dec 10, 2013
### ziggo
Good afternoon! I've been mulling over this question for a bit and I can't seem to understand what it is asking. This is a question for an introductory calculus-based physics university course.
1. The Problem:
A uniform disk, such as a record turntable, turns 8.0 rev/s around a frictionless spindle. A non-rotating rod of the same mass as the disk is dropped onto the freely spinning disk so that both turn around the spindle. Determine the angular velocity of the combination in rev/s.
2. Equations used:
I interpreted this as a conservation of angular momentum problem where the radius remains constant:
m r^2 ω = m(disc and rod) r^2 ω(final)
3. The solution:
Since the radius remains constant and the mass doubles, both the mass and radius^2 can be removed from both sides leaving:
ω(initial) = 2ω(final)
and since the initial angular velocity was 16π rad/s, the final angular velocity would be 8π rad/s.
Am I in the ballpark here assuming that this question is concerning the conservation of angular momentum? I don't see any other way to incorporate mass other than using Newton's laws, but I'm not sure on that.
Last edited: Dec 10, 2013
2. Dec 10, 2013
### MostlyHarmless
Are the radii of the rod and the disc necessarily equal?
3. Dec 10, 2013
### ziggo
The problem doesn't state it unfortunately.
4. Dec 10, 2013
### ryandaly
The first thing that comes to my mind is to try solving it with energy, since the moments of inertia of a rod and a disc are different. Have you covered rotational kinetic energy yet?
5. Dec 10, 2013
### Mentz114
Conservation of angular momentum gives
$m_dr_d^2\omega_0= \omega_1(m_dr_d^2+m_r r_r^2)$ so $\frac{\omega_0}{\omega_1}=\frac{I_d+I_r}{I_d}$. The subscripts are 'r' for the rod and 'd' for the disc and $I$ is a moment of inertia. I'm assuming the rod and the disc have $I=mr^2/2$.
6. Dec 10, 2013
### ziggo
We have, but I'm not sure how to apply it in this case without any information concerning the rod other than that it has the same mass as the disc and it is now a part of the system.
7. Dec 10, 2013
### ziggo
This is a very good analysis of it, and this is what I would break it down as. I simply solved it for final angular velocity or "omega 1". | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9846811294555664, "perplexity": 778.7468037013451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828189.71/warc/CC-MAIN-20171024071819-20171024091819-00660.warc.gz"}
https://www.lessonplanet.com/search?keyterm_ids%5B%5D=22101 | We found 9 resources with the keyterm lowest common denominator
Lesson Planet
Ordering Fractions and Addition of Fractions with Unlike Denominators
For Teachers 5th - 7th
Students order fractions with unlike denominators using grid paper by identifying the least common denominator and finding fractional equivalents. They convert improper fractions to mixed numbers.
Lesson Planet
Adding Fractions: The Numbers Tell LCD
For Teachers 4th - 6th
Students add fractions with different denominators, by finding the lowest common denominator by using prime factors.
Lesson Planet
Fractions Lesson 4
For Teachers 4th - 5th
As this slide show explains, simplifying a fraction is very similar to finding an equivalent fraction. The process to simplify fractions is fully demonstrated and clearly explained, 15 practice problems are included.
Lesson Planet
Adding and Subtracting Fractions (With Unlike Denominators)
For Students 5th - 6th
In this math worksheet, students subtract and add fractions with unlike denominators. Students solve the problem in steps. They are given examples of how to solve the problems.
Lesson Planet
Graphing Calculator Activity for finding common denominators
For Students 4th - 7th
In common denominator worksheet, students solve four multi-step common denominator problems using their graphing calculators. An example problem giving key strokes and steps is provided.
Lesson Planet
Fractions
For Students 8th - 12th
For this LCD worksheet, students find the lowest common denominator in fraction problems that are also division problems. Students complete 3 problems.
Lesson Planet
For Students 9th - 11th
In this Algebra I/Algebra II worksheet, students add and subtract algebraic fractions and reduce the answer to lowest terms. The two page worksheet contains a combination of eleven multiple choice and free response questions. Answers...
Lesson Planet
Lowest Common Denominator Worksheet
For Students 5th - 6th
In this fractions worksheet, students follow the steps to finding the lowest common denominator, filling in the answers to the prompts for each step.
Lesson Planet
Mighty Multiples
For Students 5th - 6th
In this math worksheet, students list the first 8 multiples of 8 numbers and identify the least common multiple for 5 pairs of factors. Students also find the lowest common denominator for 15 sets of fractions. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9597706198692322, "perplexity": 6004.106890731951}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141164142.1/warc/CC-MAIN-20201123182720-20201123212720-00713.warc.gz"} |
https://stats.stackexchange.com/questions/62896/jaccard-similarity-from-data-mining-book-homework-problem/88296#88296 | # Jaccard Similarity - From Data Mining book - Homework problem
Exercise 3.1.3 : Suppose we have a universal set U of n elements, and we choose two subsets S and T at random, each with m of the n elements.
What is the expected value of the Jaccard similarity of S and T ?
I am reading the book http://infolab.stanford.edu/~ullman/mmds/ch3.pdf
Each item in T has an $\frac{m}{n}$ chance of also being in S. The expected number of items common to S & T is therefore $\frac{m^2}{n}$.
Exp. $\text{Jaccard Similarity} = \dfrac{\text{No. of common items}}{\text{Size of T} + \text{Size of S} - \text{Number of common items}} = \dfrac{m}{2n - m}$ (after simplification.)
• The expected number of items common to S & T is therefore m^2 / n How?+ Dec 16 '13 at 11:41
• Expected value is calculated as Sigma x.p(x), summing over each of the m elements in set T. Each common item you find adds 1 to the common item total, so in our case, x = 1 and p(x) is m / n. Dec 16 '13 at 12:16
• According to the Example 3.2 in the book, the size of the union should always the sum of the sizes of the two sets. So I think the denominator should be 2m. Numerator is m^2/n. So SIM= m/2n. Dec 21 '15 at 0:19
• In my opinion, this is wrong or at least, not sufficiently explained. Reasoning that E[X/Y] = E[X] / E[Y] is not valid in general. Mar 25 '16 at 15:50
• Incorrect. Counter example: m=2, n=3. You can compute by hand that the correct answer is 5/9, this formula yields 1/2. As pointed out by Antoine, the E[X/Y] = E[X] / E[Y] derivation is incorrect. Apr 9 '19 at 19:15
The above answer assumes that an element in $T$ may be repeated several times in $S$ (i.e. $S$ and $T$ are not sets but multisets); else the probability will not be $m/n$ uniformly.
I expect the answer should be more along the following lines:-
Let the number of common elements between $S$ and $T$ be $k$. Then, as mentioned by ack_inc in the comment to his answer, Jaccard similarity $Sim(S,T)=k/(2m-k)$.
Now, $Pr(Sim(S,T)=k/(2m-k))$ will be $\dfrac{{m\choose {k}} {n-m\choose m-k}}{n\choose m}$ since there are $n$ total elements, of which $m$ are in $S$ and $k$ are common. So the number of ways we can choose $m$ elements for $T$ is given by ${m \choose k}$ (choosing the $k$ common elements from $S$) times ${n-m\choose m-k}$ (choosing remaining $m-k$ elements).
Thus, $E(Sim(S,T))=\sum_{k=0}^{m} \dfrac{k}{2m-k} \dfrac{{m\choose {k}} {n-m\choose m-k}}{n\choose m}$.
However, simplifying the above expression is beyond my limited knowledge of combinatorial identities. If anyone can do so, kindly update the answer.
• When size of $T$ is same as the size of $U$ (each of size n), then why not $S\subset T$?
– CKM
Apr 6 '20 at 6:18
I'm posting an alternative solution.
Jaccard similarity of two sets $S$ and $T$ is defined as the fraction of elements these two sets have in common, i.e. $\text{sim}(S,T)=|S\cap T|/|S\cup T|$. Suppose we chose $m$-element subsets $S$ and $T$ uniformly at random from an $n$-element set. What is the expected Jaccard similarity of these two sets? Suppose the $|S\cap T|=k$ for some $0\le k\le m$. Notice that for the first set, $S$, we have $\binom{n}{m}$ choices, while for $T$ we have $\binom{m}{k}\binom{n-m}{m-k}$ choices, because $k$ elements must be from $S$ and $m-k$ elements must not be from $S$. This gives us $$\Pr[|S\cap T|=k]=\frac{\binom{m}{k}\binom{n-m}{m-k}}{\binom{n}{m}},$$ meaning that $$\text{E}[\text{sim}(S,T)]=\sum_{k=0}^m\frac{\binom{m}{k}\binom{n-m}{m-k}}{\binom{n}{m}}\frac{k}{2m-k}.$$ Even though $\text{E}[|S\cap T|/|S\cup T|]\neq\text{E}[|S\cap T|]/\text{E}[|S\cup T|]=m/(2n-m)$, this expression seems to give good approximation.
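A short numeric check (my snippet, not part of the answer) reproduces the counterexample from the comments above and compares it with the $m/(2n-m)$ approximation:

```python
from math import comb

def expected_jaccard(n, m):
    """Exact E[sim(S, T)] for two uniformly random m-subsets of an n-set."""
    return sum(
        comb(m, k) * comb(n - m, m - k) / comb(n, m) * k / (2 * m - k)
        for k in range(max(0, 2 * m - n), m + 1)
    )

print(expected_jaccard(3, 2))  # 0.5555... = 5/9, the exact value for n=3, m=2
print(2 / (2 * 3 - 2))         # 0.5, the m/(2n - m) approximation
```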
Thanks to Mitja Trampus for pointing out an alternate solution, with $$\Pr[|S\cap T|=k]=\binom{m}{k}\frac{\binom{m}{k}}{\binom{n}{k}}\frac{\binom{n-m}{m-k}}{\binom{n}{m}},$$ giving the following expression: $$\text{E}[\text{sim}(S,T)]=\sum_{k=0}^m\binom{m}{k}\frac{\binom{m}{k}}{\binom{n}{k}}\frac{\binom{n-m}{m-k}}{\binom{n}{m}}\frac{k}{2m-k}.$$
(The above expressions are, of course, equivalent.)
EDIT: Regarding the simplification, perhaps applying the following identity (from Aigner's book, page 13) could work: $$\binom{n}{m}\binom{m}{k}=\binom{n}{k}\binom{n-k}{m-k}.$$
• There is a problem. What exactly do you do with Pr [S union T]? How do you go from E [ S int. T / S union T] to the second equation? Apr 21 '14 at 8:23
• Um, which equation exactly? Once I know the size of the intersection is $k$, I know the size of the union must be $2m-k$. So I view the size of the intersection $|S\cap T|$ as a random variable. Once I compute $\Pr[|S\cap T|=k]$, I just use the definition of the expectation: $E[sim(S,T)]=\Pr[|S\cap T|=1]/(2m-1)+\ldots+\Pr[|S\cap T|=m]$. Does this answer your question? Apr 21 '14 at 9:32
I agree with blazs answer - just want to add a small correction (credit to another guy in the course who pointed it out).
The summation does not start at 0. (you'll see it if you make n=100 and m=99)
$$\text{E}[\text{sim}(S,T)]=\sum_{k=max(0, 2m-n)}^m\binom{m}{k}\frac{\binom{m}{k}}{\binom{n}{k}}\frac{\binom{n-m}{m-k}}{\binom{n}{m}}\frac{k}{2m-k}.$$
I just want to add the following:
As pointed out, ack_inc's answer is not correct and can serve only as an approximation. Also, the lower bound should be $$\max\{0, 2m - n\}$$ instead of $$0$$, as GM1313 mentions.
I needed to compute the similarities for $$1\leq m\leq n$$ where $$n = 5000$$ or even bigger, so computing all the probabilities $$P_{n, m}[|S\cap T| = k]$$ (blindly following the definition)
• takes a lot of time,
• gives wrong results due to floating-point arithmetics, e.g., $$p = 0.0$$.
Therefore, I used the fact that $${a \choose b + 1} = \frac{a - b}{b + 1}{a \choose b}$$ to speed up the process, and compute the probabilities recursively, starting with the biggest one.
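Another route to the same stability (a sketch of one alternative, not the author's exact code) is to work with log-binomials via lgamma:

```python
from math import lgamma, exp

def lcomb(a, b):
    # log C(a, b) computed with log-gamma; cheap and stable for large a
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

def expected_jaccard(n, m):
    lo = max(0, 2 * m - n)
    return sum(
        exp(lcomb(m, k) + lcomb(n - m, m - k) - lcomb(n, m)) * k / (2 * m - k)
        for k in range(lo, m + 1)
    )

print(expected_jaccard(5000, 2500))  # handles n = 5000 without huge exact integers
```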
I also used the approximation $$m / (2n - m)$$ and actually, it is really good, especially for larger values of $$n$$ (curves for $$n\in\{20, 100, 5000\}$$): | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9098352789878845, "perplexity": 350.6562092072229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323586043.75/warc/CC-MAIN-20211024142824-20211024172824-00038.warc.gz"} |
https://socratic.org/questions/how-do-you-factor-completely-a-2-4b-2-4a-4
# How do you factor completely a^2 - 4b^2 - 4a + 4?
Sep 2, 2016
$a^2 - 4b^2 - 4a + 4 = (a - 2b - 2)(a + 2b - 2)$
#### Explanation:
The difference of squares identity can be written:
$A^2 - B^2 = (A - B)(A + B)$

We use this with $A = (a-2)$ and $B = 2b$ as follows:

$\begin{aligned} a^2 - 4b^2 - 4a + 4 &= (a^2 - 4a + 4) - 4b^2 \\ &= (a-2)^2 - (2b)^2 \\ &= ((a-2) - 2b)((a-2) + 2b) \\ &= (a - 2b - 2)(a + 2b - 2) \end{aligned}$
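A quick numerical sanity check of the identity at random points (my addition, in R):

a <- runif( 5, -10, 10 ); b <- runif( 5, -10, 10 )
all.equal( a^2 - 4*b^2 - 4*a + 4, ( a - 2*b - 2 )*( a + 2*b - 2 ) )   # TRUE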
https://www.physicsforums.com/threads/calculus-problems-where-to-begin.6859/

# Calculus problems, where to begin?
1. Oct 7, 2003
### Jeebus
Hello! I have a math problem (mostly proofs) that I'm stuck on, partly because I do not know where to begin and partly because I believe I don't even fully understand the problem. I was wondering if any of you would be kind enough to show me what to do? Thank you.
1. Suppose f(x,y) is differentiable for all (x,y), f(x,y)=17 on the unit circle x^2+y^2=1, and grad f is never zero on the unit circle. For any real number k, find a unit vector parallel to grad f(cos(k),sin(k)). (grad f stands for the gradient of f.) But isn't it contradicting what it's saying? It says f(x,y)=17 on the unit circle x^2+y^2=1. How the...?
I'm just supposing f(x,y) is differentiable for all (x,y), f(x,y)=17 on the unit circle x^2+y^2=1 and grad(f) is never zero on the unit circle(?) So you just find a unit vector parallel to grad f(cos(k),sin(k)), for k real, right?
PS- Do level curves apply to this problem?
2. Oct 8, 2003
### dhris
Hello Jeebus,
You're being asked to find the direction of ∇f on the unit circle (k is just an angle). I think it's easier if we use polar coordinates (r,θ), and their corresponding unit vectors r and θ (don't know how to make the little hats yet). If we look at the gradient on the circle, and dot it with θ: θ dot ∇f, we get the rate of change of f in the θ direction. But what is the rate of change of f in the θ-direction on the unit circle? And what can you conclude about the direction of ∇f from this?
Hope this helps,
dhris
3. Oct 8, 2003
### HallsofIvy
dhris was giving hints that should help you but I got the impression that you really had no idea what was going on (and so need more than hints).
You ask "Do level curves apply to this problem?" Well, yes, of course. You are GIVEN that f(x,y)= 17 on the unit on the unit circle. The point is that f(x,y) is CONSTANT on that circle. The unit circle IS a level curve. Now, what is the relationship between level curves of a function and the gradient of the function at points on a level curve?
http://www.ganitcharcha.com/view-article-A-Population-Growth-Model-Including-Hermaphrodites.html

# A Population Growth Model Including Hermaphrodites
1. Introduction: Biomathematics, or the mathematical biosciences, is concerned with the application of mathematical techniques to gain insight into problems of the biosciences. It can be interpreted in a similar way as engineering, physical or social science mathematics. However, situations in the life sciences are quite complex and complicated, and we therefore have to study a situation carefully before a mathematical model is constructed. Once a model is formed, its consequences can be deduced by using mathematical tools and the results so obtained can be compared with observations. The discrepancies between theoretical results and observations suggest further improvement of the model. The process is repeated till the model becomes a realistic one.
Biomathematics includes both mathematical modeling in biology and medicine and gives useful information to enlighten complex biological situations. Some of the disciplines included in the subject are: botany, zoology, ecology, population dynamics, genetics, epidemiology, pharmacokinetics, physiology, environmental science and so on. The extent to which mathematics has penetrated into the different disciplines varies in each instance, but each is a field in its own right with an exponentially growing literature. The techniques used in biomathematics are: classical, probabilistic and statistical, computer and simulation, operations research etc.
In the present article, it is proposed to present a mathematical model in population dynamics (also called demography). The field entails the study of population growth, population dispersal, effects of immigration, emigration and mixing of populations, effect of age structure on population sizes etc.
The subject is based on the populations of microorganisms too tiny in size to be seen by the naked eye. But they play an important role in (i) fermentation technology, e.g. production of alcohol, beverages, vinegar, biogas etc, (ii) mining technology, like leaching out undesirable elements from ores, (iii) sanitary engineering, e.g. removal of pollutants from waste water, (iv) bioconversion of solar energy and so on. Growth differences between individuals have a vital role in demography; they occur due to spatial location, genetic properties, age and size variations etc., of which size difference plays a significant role due to its wide applications in fishery and forestry.
Another important consideration of population growth is marriage in society. Kendall [1] proposed a mathematical model on this by taking the marriage rate to be constant. But the rate increases throughout the world and there exists a curvilinear relationship between the marriage rate and time. On the basis of this conception, Mishra [2] and Ojha and Pandey [3] modified Kendall's model by taking the marriage rate to be a linear and a quadratic function of time, respectively. However, in all these problems, the number of hermaphrodites (although a few in number) has been neglected. We, therefore, propose a model by introducing the number of hermaphrodites and taking the marriage rate to be a linear function of time.
2. The Model and Basic Equations:
Suppose M, F, H and Z denote respectively the number of unmarried males, females, hermaphrodites and married couples at any time t. $u_{1}$, $u_{2}$, $u_{3}$ are the male, female and hermaphrodite birth rates per married couple per unit time. $a_{1}$, $a_{2}$, $a_{3}$ are the death rates of unmarried males, females and hermaphrodites per unit time; $a_{4}$, $a_{5}$, $a_{6}$ are the male, female and hermaphrodite death rates per married couple per unit time. We also suppose that the marriage rate is a linear function of time t, given by $A_{0} + A_{1}t$, where $A_{0}$, $A_{1}$ are non-negative constants. Then the population model runs as follows:
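The equations of the model are not displayed above. As a hedged reconstruction (my own sketch in R with deSolve, not the paper's system), one plausible Kendall-type form consistent with the rates just defined, where the placement of the marriage term $A_{0}+A_{1}t$ is an assumption on my part, can be integrated numerically:

library( deSolve )
# assumed equations (my reconstruction, NOT the original system):
#   dM/dt = u1*Z - a1*M - (A0 + A1*t)
#   dF/dt = u2*Z - a2*F - (A0 + A1*t)
#   dH/dt = u3*Z - a3*H
#   dZ/dt = (A0 + A1*t) - (a4 + a5)*Z
rhs <- function( t, y, p ) with( as.list( c( y, p ) ), {
  mar <- A0 + A1 * t                  # marriage rate, linear in time
  list( c( u1*Z - a1*M - mar,
           u2*Z - a2*F - mar,
           u3*Z - a3*H,
           mar - (a4 + a5)*Z ) )
})
y0 <- c( M= 100, F= 100, H= 2, Z= 50 )            # illustrative initial values
p  <- c( u1= 0.02, u2= 0.02, u3= 0.001, a1= 0.01, a2= 0.01, a3= 0.01,
         a4= 0.005, a5= 0.005, A0= 1, A1= 0.05 )  # illustrative rates
out <- ode( y0, times= seq( 0, 50, by= 1 ), func= rhs, parms= p )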
3. Solutions:
Integrating equation (4) w.r.t. $t$, we get
References:
[1] Kendall, D. G. : Stochastic Model and Population Growth, Demography, Springer Verlag(1977)
[2] Mishra, P. : Progress of Mathematics, Vol. 22(1988). P 20.
[3] Ojha, V.P. and Pandey, H: Jour. Nat. Acad. Math., Vol. 7(1989) p. 99
Dr D. C. Sanyal {Email: dcs_klyuniv[at]yahoo.com} is a retired professor of mathematics of the University of Kalyani, India. Prof. Sanyal has published many research papers in several national and international journals and conferences of repute. His areas of expertise include Solid Mechanics, Hydrodynamics, Geophysics and Geodynamics, and Biomathematics.
http://logfc.wordpress.com/

Pathway analysis in R and BioConductor.
There are many options to do pathway analysis with R and BioConductor.
First, it is useful to get the KEGG pathways:
library( gage )
kg.hsa <- kegg.gsets( "hsa" )
kegg.gs2 <- kg.hsa$kg.sets[ kg.hsa$sigmet.idx ]
Of course, "hsa" stands for Homo sapiens, "mmu" would stand for Mus musculus etc.
Incidentally, we can immediately make an analysis using gage. However, gage is tricky; note that by default, it makes a pairwise comparison between samples in the reference and treatment group. Also, you just have the two groups — no complex contrasts like in limma.
res <- gage( E, gsets= kegg.gs2,
ref= which( group == "Control" ),
samp= which( group == "Treatment" ),
compare= "unpaired", same.dir= FALSE )
Now, some filthy details about the parameters for gage.
• E is the matrix with expression data: columns are arrays and rows are genes. If you use a limma EList object to store your data, this is just the E member of the object (rg$E for example). However, and this is important, gage (and KEGG and others) are driven by the Entrez gene identifiers, and this is not what you usually have when you start the analysis. To get the correct array, you need to
  – select only the genes with ENTREZ IDs,
  – make sure that there are no duplicates,
  – change the row names of E to ENTREZ IDs.

• gsets is just a list of character vectors; the list names are the pathways / gene sets, and the character vectors must correspond to the row names of E.

• ref and samp are the indices for the “reference” and “sample” (treatment) groups. These cannot be logical vectors. Only two groups can be compared at the same time (so, for example, you cannot test for interaction).

• compare — by default, gage makes a paired comparison between the “reference” and “sample” sets, which of course requires having exactly the same number of samples in both sets. Use “unpaired” for most of your needs.

• same.dir — if FALSE, then absolute fold changes are considered; if TRUE, then up- and down-regulated genes are considered separately.

To visualise the changes on the pathway diagram from KEGG, one can use the package pathview. However, there are a few quirks when working with this package. First, the package requires a vector or a matrix with, respectively, names or rownames that are ENTREZ IDs. By the way, if I want to visualise say the logFC from topTable, I can create a named numeric vector in one go: setNames( tt$logFC, tt$EID )

Another useful package is SPIA; SPIA only uses fold changes and predefined sets of differentially expressed genes, but it also takes the pathway topology into account.

New editor in WordPress and how to circumvent it

I have problems like everyone else with the new editor, except that my main issue is editing existing posts.

1. Why I don’t want to use the new editor, in order of importance:

– it doesn’t take up all of the screen estate, only a narrow column in the middle of one of the two 24″ monitors I use to work with text.

– it meddles with my HTML code. I have many “tl;dr” (very long) posts (not here, on another blog) which I like to format with empty lines in order to facilitate editing.

– it is buggy (read: non-functional) in my browser of choice (experimental Opera running on Ubuntu).

Now, I don’t ask WordPress to adapt to my needs; but if everything else works fine with that browser, why would I want to switch to something I don’t like just because one beta feature on one site doesn’t work? The first two problems make the editor unusable; the third can be circumvented (I could work with WordPress and WordPress only in Firefox, for example).

2. How to circumvent the new editor

Of course, clicking on the little pencil icon on your blog post takes you to the new editor. To use the old editor, the only way is to go to “Dashboard” -> “Posts”, search for the post you would like to edit, and click on “Edit”.

3. A better solution

Another, better solution would be to add a link rewrite plugin. The link to the old style editor looks like this:

https://yourblogname.wordpress.com/wp-admin/post.php?post=1234&action=edit

Since the link to the new editor looks like this:

https://wordpress.com/post/BlogIDNumber/1234

I think that a regular expression rewriting your links (for example, a simple bookmarklet) should do the trick.
Essentially, you would catch the regular expression “https://wordpress.com/post/BlogIDNumber/([0-9]*)” and replace the link by “https://yourblogname.wordpress.com/wp-admin/post.php?post=$1&action=edit”.
The code below is untested and you should use it on your own responsibility
javascript:(function(){
  // match the new-style editor links
  var m = /wordpress\.com\/post\/YourBlogID\/([0-9]*)/ ;
  // old-style editor URL; $1 is the post number captured by m
  var r = "https://yourblogname.wordpress.com/wp-admin/post.php?post=$1&action=edit";
  // all links on the current page
  var links = document.links;
  for(var i = 0, l; l = links[i]; i++) {
    if (l.href)
      l.href = l.href.replace(m,r);
  }
})();
Replace “yourblogname” by your blog name (e.g. “logfc” in case of my blog); replace “YourBlogID” by your blog ID. Add a bookmark that contains, as reference, the above code. Clicking on the bookmark should replace all “new style links” by “old style links”.
Here is an example screenshot from Google Chrome. You see that in the field where you would normally put your URL (http…), you enter the above code (except that you replace “YourBlogID” by the actual ID and “yourblogname” by the actual name):
The downside is that you will need a bookmark for each blog that you are writing.
Sloppy Science
Last week, Science published a paper by Rodriguez and Laio on a density-based clustering algorithm. As a non-expert, I found the results actually quite good compared to the standard tools that I am using in my everyday work. I even implemented the algorithm as an R package (soon to be published on CRAN, look out for "fsf").
However, there are problems with the paper. More than one.
1. The authors claim that the density for each sample is determined with a simple formula which is essentially the number of other samples within a certain diameter. This does not add up, since then the density would always have to be a whole number. It is obvious from the figures that this is not the case. When you look up the original matlab code in the supplementary material, you see that the authors actually use a Gaussian kernel function for density calculation.
2. If you use the simple density count as described in the paper, the algorithm will not and cannot work. Imagine a relatively simple case with two distinct clusters. Imagine that in one cluster, there is a sample A with density 25, and in the other cluster, there are two samples, B and C, with identical densities 24. This is actually quite likely to happen. The algorithm now determines, for each sample, $\delta$, that is the distance to the next sample with higher density. The whole idea of the algorithm is that for putative cluster centres, this distance will be very high, because it will point to the center of another cluster.
However, with ties, we have the following problem. If we choose the approach described by the authors, then both of the samples B and C (which have identical density 24) will be assigned a large $\delta$ value and will become cluster center candidates. If we choose to use a weak inequality, then B will point to C, and C to B, and both of them will have a small $\delta$.
Therefore, we either have both B and C as equivalent cluster candidates, or none of them. No wonder that the authors never used this approach!
3. The authors explicitly claim that their algorithm can “automatically find the correct number of clusters.” This does not seem to be true, at least there is nothing in the original paper that warrants this statement. If you study their matlab code, you will find that the selection of cluster centers is done manually by a user selecting a rectangle on the screen. Frankly, I cannot even comment on that, this is outrageous.
I think that Science might have done a great disservice to the authors — everyone will hate them for having a sloppy, half-baked paper that others would get rejected in PLoS ONE published in Science. I know I do :-)
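To make points 1 and 2 concrete, here is a minimal R sketch of the rho/delta computation (my own, using the Gaussian kernel actually found in the authors' matlab code and a strict inequality for delta):

x <- rbind( matrix( rnorm( 100, mean= 0 ), ncol= 2 ),
            matrix( rnorm( 100, mean= 4 ), ncol= 2 ) )  # two toy clusters
d  <- as.matrix( dist( x ) )
dc <- 1                                      # cutoff distance
rho <- rowSums( exp( -( d / dc )^2 ) ) - 1   # Gaussian kernel density, minus self
delta <- sapply( seq_len( nrow( d ) ), function( i ) {
  higher <- which( rho > rho[ i ] )          # strict inequality; ties stay small
  if( length( higher ) == 0 ) max( d[ i, ] ) else min( d[ i, higher ] )
} )
plot( rho, delta )   # putative centers: large rho AND large delta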
Copy text from images
All the elements were there. Algorithms freely available; implementations ready to download, for free. It just took one clever person to FINALLY make it: a seamless way to copy text from images. At least in Chrome, for now, but since it is Open Source, I guess it is just a matter of time and we will see it as a built-in feature in Firefox and other browsers. For now, use the Project Naptha Chrome browser extension. The main project page is also an interesting read.
Words on the web exist in two forms: there’s the text of articles, emails, tweets, chats and blogs— which can be copied, searched, translated, edited and selected— and then there’s the text which is shackled to images, found in comics, document scans, photographs, posters, charts, diagrams, screenshots and memes. Interaction with this second type of text has always been a second class experience, the only way to search or copy a sentence from an image would be to do as the ancient monks did, manually transcribing regions of interest.
(from the Naptha web site)
It works really nice, assuming that you select the “English tesseract” option from the “Languages” sub-menu — the off-line javascript implementation of the OCRad is not very effective, in contrast to the cloud-based tesseract service.
Rotating movies with RGL
Learned another thing today: it is very simple to create animated GIFs in the rgl package with the built-in functions spin3d, play3d and movie3d.
library( pca3d )
library( rgl )                       # spin3d and movie3d come from rgl
data(metabo)
pca <- prcomp( metabo[,-1], scale.= TRUE )
pca3d( pca, group= metabo[,1] )      # interactive 3D PCA plot
rot <- spin3d( axis= c( 0, 1, 0 ) )  # rotation around the vertical axis
movie3d( rot, duration= 12 )         # renders the spin and writes an animated GIF
Here is the result:
Check credit card numbers using the Luhn algorithm
You can find a nice infographic explaining how to understand the credit card number here.
Credit card numbers include, as the last digit, a simple checksum calculated using the Luhn algorithm. In essence, we add up the digits of the card number in a certain way, and if the resulting sum is not divisible by 10 (i.e., the remainder of dividing by 10 is not zero), then the number is invalid (the reverse is not true; a number whose sum is divisible by 10 can still be invalid).
This can be easily computed in R, although I suspect that there might be an easier way:
checkLuhn <- function( ccnum ) {
  # helper function: digit sum of each element of x
  sumdigits <- function( x )
    sapply( x, function( xx )
      sum( as.numeric( unlist( strsplit( as.character( xx ), "" ))))
    )
  # split the digits
  ccnum <- as.character( ccnum )
  v <- as.numeric( unlist( strsplit( ccnum, "" ) ) )
  v <- rev( v )
  # indices of every second digit, counting from the check digit
  i2 <- seq( 2, length( v ), by= 2 )
  v[i2] <- v[i2] * 2
  v[ v > 9 ] <- sumdigits( v[ v > 9 ] )
  if( sum( v ) %% 10 == 0 ) return( TRUE )
  else return( FALSE )
}
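A quick test with the standard example number from the Luhn literature (my addition; the second call just flips the check digit):

checkLuhn( "79927398713" )   # TRUE
checkLuhn( "79927398714" )   # FALSE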
Sample size / power calculations for Kaplan-Meier survival curves
The problem is simple: we have two groups of animals, treated and controls. Around 20% of the untreated animals will die during the course of the experiment, and we would like to be able to detect effect such that instead of 20%, 80% of animals will die in the treated group, with power 0.8 and alpha=0.05. Group sizes are equal and no other parameters are given.
What is the necessary group size?
I used the ssizeCT.default function from the powerSurvEpi R package. Based on the explanation in the package manual, this calculates (in my simple case) the required sample size in a group as follows:
$n = \frac{m}{p_E + p_C}$
where $p_E$ and $p_C$ are, respectively, probabilities of failure in the E(xpermiental) and C(ontrol) groups. I assume that in my case I should use 0.8 and 0.2, respectively, so $n=m$. The formulas here are simplified in comparison with the manual page of ssizeCT.default, simply because the group sizes are identical.
$m$ is calculated as
$m=\big(\frac{RR+1}{RR-1}\big)^2(z_{1-\alpha/2}+z_{1-\beta})^2$
RR is the minimal effect size that we would like to be able to observe with power 0.8 and at alpha 0.05. That means, if the real effect size is RR or greater, we have 80% chance of getting a p-value smaller than 0.05 if the group sizes are equal to $m$. To calculate RR, I first calculate $\theta$, the hazard ratio, and for this I use the same approximate, expected mortality rates (20% and 80%):
$\theta = \log(\frac{\log(0.8)}{\log(0.2)}) = -1.98$
Since $RR=exp(\theta)=0.139$; thus $m=18.3$. This seems reasonable (based on previous experience).
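For reference, the corresponding call (my sketch; the argument names follow my reading of the powerSurvEpi manual, so double-check them against your installed version):

library( powerSurvEpi )
theta <- log( log( 0.8 ) / log( 0.2 ) )  # ~ -1.98
RR    <- exp( theta )                    # ~ 0.139
ssizeCT.default( power= 0.8, k= 1, pE= 0.8, pC= 0.2, RR= RR, alpha= 0.05 )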
http://www.scholarpedia.org/article/Expansive_system

# Expansive Systems
Curator: Jorge Lewowicz
A discrete invertible (the case we shall mainly refer to) expansive system is a dynamical system such that every point of the underlying space has a distinctive behaviour. A homeomorphism $$f$$ from the compact metric space $$M$$ onto $$M$$ is expansive if there exists $$\alpha >0\ ,$$ (called expansivity constant of $$f$$) such that if $$x,y\in M$$ and $$dist(f^{n}(x),f^{n}(y))\leq \alpha$$ for every $$n\in Z$$ then, $$x=y\ .$$ Thus, if $$x\neq y\ ,$$ then for some $$n,$$ $$dist(f^{n}(x),f^{n}(y))>\alpha .$$
Expansive systems are then wholly sensitive to initial conditions and therefore, in this sense, chaotic.
Assume the dynamics of $$f$$ is observed with a precision that permits to distinguish points at a distance larger than $$\alpha ,$$ meanwhile, points at a distance less than $$\delta >0,$$ $$\delta <<\alpha,$$ are not distinguished. Then, a $$\delta$$-small neighbourhood of, say, $$x\in M$$ with infinite points, will be seen -at present- as only one point. However, for some $$n\in Z\ ,$$ the $$n$$-iterate through $$f$$ of this point, will show many of them, since points at a distance larger than $$\alpha$$ are distinguished by the observer (see [B]).
Since $$M$$ is compact, on account of the expansiveness of $$f,$$ it is not difficult to show that given a $$\delta$$ like in the preceding paragraph, $$\delta <\alpha /2\ ,$$ there is a $$C^{0}$$-neighbourhood $$N$$ of $$f\ ,$$ such that if $$g\in N\ ,$$ and $$dist(g^{n}(x),g^{n}(y))\leq \alpha$$ for every $$n\in Z,$$ then $$dist(g^{n}(x),g^{n}(y))\leq \delta$$ for all $$n\in Z.$$ Therefore the relation $$\mathcal{R}_{\delta }=\{(x,y)\in M\times M:dist(g^{n}(x),g^{n}(y))\leq \delta ,\;\;n\in Z\}$$ is an equivalence relation on $$M\ .$$ The canonical projection $$\pi:M\to M/\mathcal{R}_{\delta }$$ is closed and consequently $$M/\mathcal{R}_{\delta }$$ is a Hausdorff compact topological space; therefore $$M/\mathcal{R}_{\delta }$$ is a compact metrizable space, and $$g^{\ast }: M /\mathcal{R}_{\delta }\rightarrow M /\mathcal{R}_{\delta }$$ defined by $$g^{\ast }(\pi (x))=\pi (g(x))\ ,$$ is an expansive homeomorphism of $$M/ \mathcal{R}_{\delta }.$$
Again, an observer that can not distinguish points at a distance less than $$\delta$$ will see the motion as taking place in $$M/ \mathcal{R}_{\delta }$$ (instead of $$M$$) under the action of $$g^{\ast }\ .$$
Clearly, homeomorphisms conjugate to an expansive one, are also expansive.
## Examples
### The shift
Consider $$2^{Z}=\left\{ \left( a_{n}\right) :a_{n}=0\text{ or }1,n\in Z\right\}$$ and the distance $$dist(( a_{n}) ,( b_{n}) )=\sum_{-\infty }^{\infty }| a_{n}-b_{n}| 2^{-| n| }.$$
With this metric, which induces the product topology, $$2^{Z}$$ is compact. Let $$\sigma :2^{Z}\rightarrow 2^{Z}$$ be defined by $$\sigma (a_{n})=(b_{n}),$$ where $$b_{n}=a_{n-1}\ .$$ $$\sigma$$ is the usual shift homeomorphism. If $$(a_{n}) \neq ( b_{n})$$ then, for some $$K\in Z,a_{K}\neq b_{K}\ ,$$ and, therefore
$$dist(\sigma ^{K}(( a_{n})) ,\sigma ^{K} ((b_{n})) )\geq 1,$$
showing that the shift is an expansive homeomorphism.
### The Denjoy map
Take a rotation of $$S^{1}$$ by an angle $$2\pi \alpha\ ,$$ where $$\alpha$$ is irrational, and replace the points of a dense orbit, say $$\left\{ x_{n},n\in Z\right\}\ ,$$ with arcs of diameter decreasing with $$| n|$$ in order to get a new space also homeomorphic to $$S^{1}\ .$$ The Denjoy map, $$f\ ,$$ may be defined by assigning to each point not on the added arcs the former image under the rotation, and mapping (length) linearly the arc replacing $$x_{n}$$ onto the one replacing $$x_{n+1},n\in Z\ .$$ It is easy to see that this map is a homeomorphism of $$S^{1}\ ,$$ and that the set $$D$$ of points not lying in the interior of the added arcs is compact and invariant under the Denjoy map (in fact this set is homeomorphic to the Cantor set). A non-trivial arc whose end points lie on this set contains some of the added arcs, and, consequently, some iterate of this arc will include the one replacing $$x_{0}$$ of diameter, say $$d\ .$$ Thus, $$d$$ will be an expansivity constant for the restriction of $$f$$ to $$D\ .$$
### Anosov and quasi-Anosov diffeomorphisms.
Let $$f$$ be a diffeomorphism of a compact, Riemannian, smooth manifold $$M$$ onto itself; $$f$$ is Anosov if there exist $$L>0,0<\lambda <1\ ,$$ and continuous non-trivial $$Tf$$-invariant sub-bundles $$S,U$$ of $$TM\ ,$$ such that $$S\oplus U=TM\ ,$$ $$\left\| Tf^{n}(s)\right\| \leq L\lambda ^{n}$$ for $$s\in S,n\geq 0\ ,$$ and $$\| Tf^{-n}(u)\| \leq L\lambda ^{n}$$ for $$u\in U,n\geq 0\ .$$
If $$A$$ is a compact $$f$$-invariant subset of $$M$$ and the above decomposition holds on $$A\ ,$$ $$A$$ is called a hyperbolic set. The restriction $$f|A\ ,$$ of $$f$$ to a hyperbolic set $$A$$ is also expansive. Anosov diffeomorphisms may also be characterized in a different way (see [L1]). Let $$B:TM\rightarrow R$$ be a continuous quadratic form, i.e., $$B_{x}=B|T_{x}M$$ is a quadratic form on the vector space $$T_{x}M$$ that depends continuously on $$x\in M\ .$$ A diffeomorphism $$f:M\rightarrow M$$ is quasi-Anosov if there exists such a $$B$$ with the property $$B_{f(x)}((Tf)_{x}(v))-B_{x}(v)>0,$$ for every $$x\in M\ ,$$ and each $$v\in T_{x}M,\| v\| \neq 0\ .$$ A diffeomorphism $$f$$ is Anosov if and only if it is quasi-Anosov and $$B_{x}$$ is non-degenerate for all $$x\in M\ .$$ There are quasi-Anosov diffeomorphisms that fail to be Anosov (see [FR]; the examples in this paper have a strange attractor and a strange repeller [M], and the motion of most points evolves to the attractor and comes from the repeller). This characterization of quasi-Anosov (Anosov) diffeomorphisms permits us to conclude the existence of a $$C^{1}$$ neighbourhood of $$f$$ such that any finite composition of diffeomorphisms in that neighbourhood is also quasi-Anosov (Anosov). We shall see below that Anosov and quasi-Anosov diffeomorphisms (and hyperbolic sets) are expansive.
### Pseudo-Anosov homeomorphisms.
Figure 1: A singular point x of a pseudo-Anosov map f and nearby orbits.
Let $$f$$ be a homeomorphism of an oriented compact surface $$M$$ of genus larger than 1 onto itself. The map $$f$$ is pseudo-Anosov if there exist two $$f$$-invariant, transversal foliations with singularities (see figure 1) $$W^{S},W^{U},$$ and also two transversal measures $$\mu _{S},\mu _{U}$$ (defined on the space of (stable, unstable) leaves of $$W^{S},$$ respectively $$W^{U}$$) and $$\lambda >1$$ such that $$f^{\ast }(\mu _{U})=\lambda \mu _{U}$$ and $$f^{\ast }(\mu _{S})=\lambda ^{-1}\mu _{S}\ .$$ The existence and expansivity of these homeomorphisms is proved in [T], [FLP].
### Another example.
Let $$f:T^{2}\rightarrow T^{2}$$ be defined by
$$\tag{1} f(x,y)=\left(2x+y-\frac{1}{2\pi} c\sin (2\pi x),\;x+y-\frac{1}{2\pi} c\sin (2\pi x)\right).$$
For $$0\leq c<1,$$ $$f$$ is Anosov (for $$c=0$$ $$f$$ is linear), but for $$c=1\ ,$$ $$f$$ is expansive but is neither Anosov nor quasi-Anosov since $$Tf_{0}$$ has no non-trivial invariant subspaces.
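A quick numerical illustration of the expansive separation (my own sketch in R, not from the article), iterating (1) with $$c=1$$ for two nearby points on the torus:

f <- function( p, cc= 1 ) {
  s <- cc * sin( 2*pi*p[1] ) / ( 2*pi )
  c( 2*p[1] + p[2] - s, p[1] + p[2] - s ) %% 1   # reduce mod 1: stay on T^2
}
tdist <- function( p, q ) {                      # distance on the torus
  d <- abs( p - q )
  sqrt( sum( pmin( d, 1 - d )^2 ) )
}
p <- c( 0.3, 0.7 ); q <- p + 1e-8
for( i in 1:40 ) { p <- f( p ); q <- f( q ) }
tdist( p, q )    # initially 1e-8 apart, now separated at macroscopic distance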
## General Properties.
### Question: Why not define expansivity only for the future?
Theorem [U]. Let $$M$$ be a compact metric space and $$f:M\rightarrow M$$ be an homeomorphism such that there is $$\alpha >0$$ with the property that for $$x,y\in M,x\neq y,$$ $$dist(f^{n}(x),f^{n}(y))>\alpha$$ for some $$n\geq 0\ .$$ Then, $$M$$ is finite.
### Stable (unstable) sets
Let $$f:M\rightarrow M$$ be a homeomorphism; for $$x\in M\ ,$$ the stable set of $$x$$ is
$$W^{S}(x)=\{ y\in M:dist(f^{n}(x),f^{n}(y))\rightarrow 0 \mbox{ if } n\rightarrow +\infty \} ,$$ and the unstable set is
$$W^{U}(x)=\{ y\in M:dist(f^{n}(x),f^{n}(y))\rightarrow 0 \mbox{ if } n\rightarrow -\infty \} .$$
The local stable (unstable) sets of $$x,$$ are defined as follows: given $$\varepsilon >0,$$ $$W_{\varepsilon }^{S}(x)=\left\{ y\in M:dist(f^{n}(x),f^{n}(y))\leq\varepsilon ,n\geq 0\right\}$$ $$W_{\varepsilon }^{U}(x)=\left\{ y\in M:dist(f^{n}(x),f^{n}(y))\leq \varepsilon ,n\leq 0\right\}$$
Let now $$f$$ be expansive. May the stable set contain a neighbourhood of $$x$$ for every $$\varepsilon >0\ ?$$ In other words : may $$x$$ be Lyapunov stable in the future? The answer is yes; it is easy to find a shift invariant subset of $$2^{Z}$$ for which $$0$$ is Lyapunov stable in the future. Nevertheless,
Theorem [L2]. If $$M$$ is locally connected there are no stable points (either in the future or in the past).
Corollary. If $$M$$ is locally connected, for every $$\varepsilon >0,$$ there is $$r>0\ ,$$ such that for every $$x\in M\ ,$$ $$W_{\varepsilon }^{S}(x)$$ and $$W_{\varepsilon }^{U}(x)$$ contain a compact connected set of diameter larger than $$r\ .$$
(Compare with the Denjoy map $$f|D\ ;$$ for points not lying on the added arcs the local stable (unstable) sets are trivial.)
Application. There are no expansive homeomorphisms of $$S^{1}.$$
Proof: Assume by contradiction that there exists an expansive homeomorphism on $$S^1\ .$$ Then by the previous Corollary there are non-trivial stable open sets (a connected set of $$S^1$$ contains an open arc) and every point of such a set is a stable point, in contradiction with the above Theorem.
### Expansiveness and Lyapunov Functions.
Theorem [L1]. Let $$f$$ be a homeomophism of $$M\ ,$$ then $$f$$ is expansive if and only if there exist a neighbourhood $$N$$ of the diagonal in $$M\times M$$ and a real continuous function $$V$$ (Lyapunov) defined on $$N\ ,$$ vanishing on the diagonal and such that for $$(x,y)\in N,x\neq y,$$ $$V(f(x),f(y))-V(x,y)>0.$$
In order to prove expansivity for Anosov and quasi-Anosov diffeomorphisms, the quadratic form $$B$$ mentioned in the section Anosov and quasi-Anosov diffeomorphisms can be used to construct a Lyapunov function. In fact, for $$y$$ close to $$x\ ,$$ the Lyapunov function is $$V(x,y)=B_{x}(u)\ ,$$ where $$\exp _{x}(u)=y\ .$$ The expansivity of pseudo-Anosov maps may also be shown using Lyapunov functions [L2]. For the example (1) above, choose
$$V(x,y)=V((x_{1},x_{2}),(y_{1},y_{2}))=(y_{1}-y_{2})((x_{1}-y_{1})-(x_{2}-y_{2})).$$
## On Surfaces.
Classification Theorem ([Hi], [L3]). Let $$f$$ be an expansive homeomorphism of a compact connected oriented boundaryless surface $$M\ .$$ Then,
• $$S^{2}$$ does not support such a homeomorphism,
• if $$M=T^{2},$$ $$f$$ is conjugate to an Anosov diffeomorphism
• if the genus of $$M$$ is larger than 1, then $$f$$ is conjugate to a pseudo-Anosov homeomorphism.
($$T^{2}$$ is the unique surface that supports Anosov diffeomorphisms.)
Those properties are consequences of the description of the local stable (unstable) sets of $$f\ .$$
Usually, the study of local stable (unstable) sets is made on the basis of strong assumptions on the dynamics of $$Tf\ ,$$ as for Anosov diffeomorphisms, hyperbolic sets, etc. In our case, even for expansive diffeomorphisms, we only have the dialogue between the topology of $$M$$ and the dynamics of $$f\ .$$ Nevertheless, after showing the local connectedness of the connected component containing $$x$$ of $$W_{\varepsilon}^{S}(x)(W_{\varepsilon }^{U}(x))$$ the following theorem is proved.
Theorem. For $$x\in M,\; W_{\varepsilon }^{S}(x)(W_{\varepsilon }^{U}(x))$$ is the union of a finite number $$r$$ of arcs, $$(r\geq 2)$$ that meet only at $$x\ .$$ Stable (unstable) sectors (the sets limited by two consecutive stable (unstable) arcs) are separated by unstable (resp. stable) arcs. If at $$x \in M\ ,$$ $$r\geq 3\ ,$$ $$x$$ is called a singular point; the set of singular points is finite.
When $$r=2\ ,$$ as is always the case for Anosov diffeomorphisms, $$x$$ has a neighbourhood $$N$$ such that if $$y$$ and $$z$$ belong to $$N\ ,$$ $$W_{\varepsilon }^{S}(y)\cap W_{\varepsilon }^{U}(z)$$ is not void. This is not the case for singular points (see figure 2).
Now a very brief mention of some steps of the proof of the Classification Theorem is given. For $$r\geq2\ ,$$ if $$y$$ and $$z$$ lie in a sector then $$W_{\varepsilon }^{S}(y)$$ and $$W_{\varepsilon }^{U}(z)$$ meet only once. The set of these intersections includes, by the Theorem of invariance of domain, a neighbourhood of $$x$$ in the sector (local product structure). This implies that singular points cannot accumulate and then, their number is finite. Let now $$M^{\ast }$$ be the universal cover of $$M\ .$$ It is not difficult to show that the lifting to $$M^{\ast }$$ of a stable or an unstable set is closed and that the union of the lifting of a stable arc and an unstable one cannot be homeomorphic to $$S^{1}\ .$$
If $$S^{2}$$ supported an expansive homeomorphism, and $$W^{S}(x)$$ does not contain singular points, it is homeomorphic to $$S^{1}\ ,$$ and this, in turn, implies the existence of stable points; a contradiction.
Figure 2: At the singular point x the stable manifold through y does not intersect the unstable one through z.
That expansive homeomorphisms $$f$$ of surfaces of genus $$\geq 1$$ are conjugate to Anosov or to pseudo-Anosov maps follows from the following two Lemmas.
Lemma. An expansive homeomorphism $$f$$ on a surface $$M$$ of genus $$\geq 1$$ is isotopic to an Anosov diffeomorphism (if $$M=T^2$$) or to a pseudo-Anosov map (genus larger than 1).
Proof. It follows from [L3] on account of Thurston's results [T].
Definition. Let $$f,g$$ be homeomorphisms of the compact metric space $$M\ ;$$ $$f$$ is semi-conjugate to g if there exists $$h:M\rightarrow M$$ continuous and surjective, such that $$h\circ f=g\circ h\ .$$
Lemma. If the expansive homeomorphism $$f$$ of the surface $$M$$ is isotopic to an Anosov diffeomorphism, or to a pseudo-Anosov homeomorphism $$g\ ,$$ then $$f$$ is semi-conjugate to $$g\ .$$
Proof. See [F], [L3].
In both cases, $$h^*:M^*\to M^*\ ,$$ a lifting of the semi-conjugacy $$h$$ is a proper map, and this fact is an essential tool to prove that the semi-conjugacy is, actually, a conjugacy.
## Higher Dimension.
Consider now expansive homeomorphisms $$f$$ defined on compact boundaryless manifolds $$M$$ of dimension larger than 2. In the case of surfaces, it follows from the Classification theorem that periodic points are dense on the surface, and , moreover, that on an open and dense set, $$r=2\ .$$ Thus for points $$x$$ in that set, $$W_{\varepsilon }^{S}(x)$$ includes a topological 1-dimensional manifold and $$W_{\varepsilon }^{U}(x)$$ another such manifold, topologically transversal to the first one at $$x\ .$$ The results concerning $$dim M\geq 3$$ assume the existence of a dense set of periodic points $$p$$ such that $$W_{\varepsilon }^{S}(p)$$ contains a topological manifold of dimension $$d,\;1\leq d<\dim M\ ,$$ and $$W_{\varepsilon }^{U}(p)$$ a manifold of complementary dimension, topologically transversal to $$W_{\varepsilon }^{S}(p)$$ at $$p\ .$$ Points $$x$$ with such a behaviour of $$W_{\varepsilon }^{S}(x)$$ and $$W_{\varepsilon }^{U}(x)$$ are called topologically hyperbolic.(This is the case for Anosov diffeomorphisms at every $$x\in M$$).
Theorem ([ABP], [V1], [V2]). Let $$f$$ be an expansive homeomorphism of $$M$$ with a dense set of topologically hyperbolic periodic points. Then there is an open and dense set with local product structure. Furthermore if $$\dim M\geq 3,$$ and for some topologically hyperbolic periodic point $$p\ ,$$ either $$W_{\varepsilon }^{S}(p)$$ or $$W_{\varepsilon }^{U}(p)$$ is one-dimensional, $$M$$ is a torus and $$f$$ is conjugate to a linear Anosov diffeomorphism.
Therefore, in this case, in contrast with what happens for surfaces, there are no singularities. This is, essentially, a consequence of the fact that if $$\dim M\geq 3\ ,$$ say, $$W_{\varepsilon }^{S}(p)$$ separates small balls centered at $$p\ ,$$ meanwhile $$W_{\varepsilon }^{U}(p)$$ does not. Of course, if we do not assume that one of these dimensions is one, the result is false: take the product of two pseudo-Anosov maps.
## $$C^{0}$$-perturbations of expansive systems.
Let $$f$$ be a homeomorphism of a compact metric space $$M$$ onto itself.
### a) Persistence.
$$f$$ is persistent if for any $$\varepsilon >0$$ there exists a $$C^{0}$$-neighbourhood $$N$$ of $$f$$ such that for $$g\in N$$ and $$x\in M\ ,$$ there exists $$y\in M$$ with the following property: $$dist(f^{n}(x),g^{n}(y))\leq \varepsilon ,\;\;n\in Z$$
### b) Topological Stability
$$f$$ is topologically stable if for $$\varepsilon >0\ ,$$ there exists $$N\ ,$$ a $$C^{0}$$-neighbourhood of $$f\ ,$$ such that any $$g\in N$$ is semi-conjugate to $$f$$ (see the definition of semi-conjugacy below) and $$dist(x,h(x))<\varepsilon\ .$$
### c) The pseudo-orbit tracing property

A $$\delta$$ pseudo-orbit for $$f$$ is a sequence $$\{ x_{n}:n\in Z\}$$ such that $$dist(f(x_{n}),x_{n+1})<\delta\ ,$$ $$n\in Z\ .$$ Such a pseudo-orbit is $$\varepsilon$$ shadowed if there is $$y\in M$$ such that $$dist(x_{n},f^{n}(y))\leq \varepsilon ,n\in Z.$$
Clearly b) implies a) since the semi-conjugacy $$h$$ is surjective, but a) does not imply b). All three properties are invariant under conjugacy. Anosov diffeomorphisms satisfy b) ([W1]) and, since because of the classification theorem, every expansive homeomorphism of $$T^{2}$$ is conjugate to an Anosov, then all expansive homeomorphisms of $$T^{2}$$ sastisfy b). A pseudo-Anosov homeomorphism $$f$$ satisfies a) (see [H]) but not b); because, according to [W2], for expansive systems b) is equivalent to c) and figure 3 shows an $$f$$ pseudo-orbit shadowed by no $$f$$-trajectory; thus $$f$$ does not satisfy c).
The quasi-Anosov diffeomorphisms are not even persistent. However, each semi-trajectory is persistent: given $$x\in M\ ,$$ and $$\varepsilon >0$$ there is $$N_{x}\ ,$$ a $$C^{0}$$-neighbourhood of $$f$$ such that for any $$g\in N_x,$$ there is $$y\in M\ ,$$ with the property $$dist(f^{n}(x),g^{n}(y))\leq \varepsilon,\;\;n\geq 0.$$
Figure 3: The pseudo orbit consisting of the $$f$$ past of $$x_k$$ and the $$f$$-future of $$x_{k+1}$$ $$(x_{k+1}$$ very close to $$x_{k})$$ is not shadowed by an $$f$$-orbit (see fig. 1)
This is the $$f$$ persistence of $$x$$ in the future. We define similarly persistence in the past. A point $$x$$ could be $$f$$ persistent in the future and in the past without being persistent on both sides. This is the case of many points in a quasi-Anosov diffeomorphism. An open question is: are all the semi-trajectories of an expansive system persistent?
## Links with the tangent map.
Let $$M$$ be a compact boundaryless smooth manifold, and let $$E$$ be the set of all expansive diffeomorphisms of $$M\ .$$
Theorem [Ma]. The $$C^{1}$$-interior of $$E$$ is the set of quasi-Anosov diffeomorphisms of $$M\ .$$
On surfaces, quasi-Anosov diffeomorphisms are Anosov, and since, in case $$M$$ has genus larger than 1, $$M$$ does not support Anosov diffeomorphisms, the interior mentioned in the theorem is, in this case, void. Thus, there are expansive diffeomorphisms which are not approximated by Anosov ones. Consider now the case $$M=T^{2}\ ,$$ where we do have Anosov diffeomorphisms. Since every expansive homeomorphism $$f$$ is conjugate to a linear Anosov diffeomorphism $$l\ ;$$ $$f=hlh^{-1}$$ and, according to [Mu], $$h$$ may be $$C^{0}$$-approximated by a diffeomorphism $$g\ ,$$ it follows easily, as $$glg^{-1}$$ is Anosov, that $$f$$ has arbitrarily $$C^{0}$$-close Anosov diffeomorphisms. However, it is not known whether the $$C^{1}$$-closure of the $$C^{1}$$-interior of the expansive diffeomorphisms of $$T^{2}$$ includes all the expansive diffeomorphisms of the 2-torus. In other words: is every expansive diffeomorphism the $$C^{1}$$-limit of Anosov diffeomorphisms? On the other hand, according to the results in [K], it is possible to conclude that such an expansive diffeomorphism has a dense set of periodic hyperbolic points.
## Expansive flows
We consider flows with no equilibrium points. Such a flow $$\varphi _{t}:M\rightarrow M,t\in R,$$ is expansive if there exist $$\alpha ,\sigma >0,$$ such that if $$x,y\in M,$$ and $$dist(\varphi _{t}(x),\varphi _{\tau (t)}(y))\leq \alpha$$ for every $$t\in R,$$ then $$y=\varphi _{t_{0}}(x)$$ for some $$t_{0},0\leq \left| t_{0}\right| \leq \sigma\ .$$ Here $$\tau :R\rightarrow R$$ is a re-parametrization of the flow through $$y\ ,$$ i.e, a surjective homeomorphism with $$\tau (0)=0\ .$$ This definition is somewhat more complicated than the one for discrete expansive systems as a consequence of the fact that we ask for geometric (instead of kinematic) separation. Important examples of expansive flows are geodesic flows on compact smooth Riemannian manifolds of negative curvature.
We mention below a short list of papers concerning expansive flows:
• R. Bowen, P. Walters. On expansive one-parameter flows. J. Diff Eq. 12(1972) 180-193
• M. Brunella. Expansive flows on Seifert manifolds and on Torus bundles.Bol. Soc. Brasil. Mat. (N.S.) 24(1993),89-104
• M. Brunella. Surfaces of section for expansive flows on three-manifolds.J.Math.Soc.Japan 47(1995), 491-501
• K. Moriyasu, K. Sakai, W. Sun. $$C^{1}-$$stably expansive flows. J. Differential Equations 213(2005) 352-367.
• J. Lewowicz. Lyapunov functions and Stability of Geodesic Flows. Springer Lecture Notes in Math. 1007(1981),463-480.
• M. Paternain. Expansive flows and the fundamental group. Bol.Soc.Brasil. Mat.(N.S.)24(1993), 179-199
• M. Paternain. Expansive geodesic flows on surfaces. Ergodic Theory Dynam. Systems 13(1993),153-165
• R. Ruggiero, V. Rosas. On the Pesin set of expansive geodesic flows in manifolds with no conjugate points Bol.Soc. Brasil. Mat. (N.S.)34(2003), 263-274
• R. Ruggiero. The accessibility property of expansive geodesic flows without conjugate points. Ergodic Theory Dynam. Systems 28(2008), 229-244.
## Non-invertible expansive maps
This section refers to continuous maps $$f$$ of a compact metric space $$M$$ to itself that are not necessarily one-to-one. For those maps, a natural analogue to the notion of expansiveness is positive expansiveness.
A map $$f$$ is positively expansive if $$dist(f^{n}(x),f^{n}(y))\leq \alpha$$ for all $$n\geq 0$$ implies $$x=y\ .$$ A simple example of such a map is $$f:S^{1}\rightarrow S^{1}\ ,$$ $$f(z)=z^{n},$$ $$n>1,$$ where $$S^{1}$$ is the set of complex numbers $$z$$ of modulus 1.
As in the preceding section we mention a short list of papers concerning, mainly, positively expansive maps.
• E.Coven and W. Reddy. Positively expansive maps on compact manifolds. Lecture notes in Math 819, Springer Verlag, 1980, 96-110
• K. Hiraide. Positively expansive open maps of Peano spaces, Topology and its Appl. 37 (1990), 213-220
• K. Hiraide. Nonexistence of positively expansive maps on compact connected manifolds with boundary, Proc. Amer. Math.Soc. 110 (1990), 565-568
• M.Nasu. Endomorphisms of Expansive systems on compact metric spaces and the pseudo-orbit tracing property. Trans. of the Am. Math Soc. 352(2000),10, 4731-4757
• W. Reddy. Expanding maps on compact metric spaces. Toplogy and its Appl. 13 (1982) 327-334
• D. Richeson and J. Wiseman. Positively expansive dynamical systems. Topology and its Appl. 154(3), (2007), 604-613
• M.Shub. Endomorphisms of compact differentiable manifolds. Amer. J. Math 91 (1969), 175-199.
## References
[ABP] A. Artigue, J. Brum, R. Potrie. Local product structure for expansive homeomorphisms. Toplogy and its Applications, (2008) (To appear).
[B] J.L. Borges. Tigres azules. Obras Completas (3). Emece Editores (1989), 381-388
[FLP] A. Fathi, F. Laudenbach, V. Poenaru. Travaux de Thurston sur les surfaces. Astérisque 66-67 (1979).
[F] J. Franks. Anosov Diffeomorphisms.Proceedings of the Symposium in pure mathematics. 14(1970), 61-94
[FR] J. Franks, C. Robinson. A quasi-Anosov diffeomorphism that is not Anosov. Trans. Am. Math. Soc. 283 (1976), 267-278.
[H] M. Handel. Global Shadowing of pseudo-Anosov diffeomorphisms. Ergodic Theory Dynam. Systems 5(1985)373.377
[Hi] K. Hiraide. Expansive diffeomorphisms of surfaces are pseudo-Anosov. Osaka J. Math. 27 (1990), 117-162.
[K] A. Katok. Lyapunov exponents, entropy and periodic orbits of diffeomorphisms. Publ. Marh. IHES 51 (1980).
[L1] J. Lewowicz. Lyapunov functions and Topological Stability. Journal of Diff. Equations. 38(1980) 192-209.
[L2] J. Lewowicz. Persistence in expansive systems. Ergodic Theory Dynam. Systems 3(1983), 567-578.
[L3] J. Lewowicz. Expansive Homeomorphisms of surfaces.Bol. Soc. Bras. Math. 20(1989), 113-133.
[Ma] R. Mañe. Expansive Diffeomorphisms. Lecture Notes in Math.468 (1975), 162-174
[Mu] J. Munkres. Obstructions to the smoothing of piece-wise differentiable homeomorphisms. Ann. of Math. 72(3)(1960), 521-554
[T] W. Thurston. On the geometry and dynamics of diffeomorphisms of surfaces. Bull. Am. Math. Soc. 19 (1988), 417-431
[U] W. Utz. Unstable homeomorphisms. Proc. Am. Math. Soc. 1(1950), 769-774
[V1] J. Vieitez.Three Dimensional expansive homeomorphisms. Pitman Research Notes in Math.285(1993),299-323.
[V2] J. Vieitez. Expansive homeomorphisms and hyperbolic diffeomorphisms on three manifolds. Ergodic Theory Dynam. Systems 16(1996), 591-622.
[W1] P. Walters. Anosov diffeomorphisms are topologically stable.Topology 9(1970), 71-78
[W2] P. Walters. On the pseudo-orbit tracing property and its relation to stability. Lecture Notes in Math. 668 (1978), 231-244.
### Internal Reference
[M] J. Milnor. Attractor. Scholarpedia 1(11):1815 (2006),1-9.
• A. Katok, B. Hasselblatt. Introduction to the Modern Theory of Dynamical Systems. Encyclopedia of Mathematics and its Applications, 54. Cambridge University Press, Cambridge, 1995. ISBN: 0-521-34187-6
http://mathhelpforum.com/algebra/69545-linear-word-problems.html

1. ## Linear word problems
A biathlon event involves running and cycling. Kim can cycle 30 km/h faster than she can run. If Kim spends 48 minutes running and a third as much time again cycling in an event that covers a total distance of 60 km, how fast can she run?
Formula: distance= speed * time
2. Originally Posted by delicate_tears
A biathlon event involves running and cycling. Kim can cycle 30 km/h faster than she can run. If Kim spends 48 minutes running and a third as much time again cycling in an event that covers a total distance of 60 km, how fast can she run?
Formula: distance= speed * time
don't know if this is right for sure, let me know x-48 1/3 =30
3. Originally Posted by Leona_Marie
don't know if this is right for sure, let me know x-48 1/3 =30
what I meant was x-48 1/3 /2 = 30
4. Originally Posted by delicate_tears
A biathlon event involves running and cycling. Kim can cycle 30 km/h faster than she can run. If Kim spends 48 minutes running and a third as much time again cycling in an event that covers a total distance of 60 km, how fast can she run?
Formula: distance= speed * time
Hi delicate_tears,
Let's see if I'm interpreting this problem correctly.
Let x = speed in km/h running
Let x + 30 = speed in km/hr cycling
48 minutes running = 48/60 = 4/5 hours
one-third as much again cycling seems like 48 + 1/3(48) = 64 minutes = 64/60 = 16/15 hours.
d = rate X time
d = 60
The distance traveled while running would be $\frac{4}{5}x$
The distance traveled while cycling would be $\frac{16}{15}(x+30)$
The two distances together would equal 60 km.
The linear equation would then be:
$\frac{4}{5}x+\frac{16}{15}(x+30)=60$
5. Hey Leona_Marie, thanks for replying first to my thread! Unfortunately I don't quite understand the equation you wrote as it contained too many slashes that I got mixed up.
Originally Posted by masters
Hi delicate_tears,
Let's see if I'm interpreting this problem correctly.
Let x = speed in km/h running
Let x + 30 = speed in km/hr cycling
48 minutes running = 48/60 = 4/5 hours
one-third as much again cycling seems like 48 + 1/3(48) = 64 minutes = 64/60 = 16/15 hours.
d = rate X time
d = 60
The distance traveled while running would be $\frac{4}{5}x$
The distance traveled while cycling would be $\frac{16}{15}(x+30)$
The two distances together would equal 60 km.
The linear equation would then be:
$\frac{4}{5}x+\frac{16}{15}(x+30)=60$
Hi masters, my interpretation of the problem is different to yours but your interpretation is actually the correct one.
Where the word problem says 'a third as much time again cycling', I read it as 1/3(48) = 16 minutes = 16/60 hr = 4/15 hr.
Thank you so much
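For completeness, a quick numerical check of masters' equation (my addition, in R):

f <- function( x ) ( 4/5 )*x + ( 16/15 )*( x + 30 ) - 60
uniroot( f, c( 0, 60 ) )$root   # 15, so Kim runs at 15 km/h (and cycles at 45 km/h)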
http://images.planetmath.org/node/87512

# Noetherian ring of infinite Krull dimension
Hi,
In Eisenbud’s Commutative Algebra with a View Toward Algebraic Geometry there is an example of such a ring. I have a question concerning one of the steps to find one (section 9.2, exercise 9.6, page 229). I’ll do my best to explain myself if you don’t have the book:
Let $k$ be an algebraic closed field and $R=k[x_{1},x_{2},...,x_{r},...]$ be a polynomial ring in infinitely many variables over $k$, and let $P_{1}=(x_{1},...,x_{{d(1)}})$, $P_{2}=(x_{{d(1)+1}},...,x_{{d(2)}})$, …, $P_{m}=(x_{{d(m-1)+1}},...,x_{{d(m)}}),...$ be an infinite collection of prime ideals made from disjoint subsets of the variables.
Let $U=R-\bigcup_{{m=1}}^{{\infty}}P_{m}$. As we know, this is a multiplicative set in $R$ and thus we can form the localization of $R$ at $U$, denoted by $S=R[U^{{-1}}]$.
The idea is to prove that $S$ has infinite Krull dimension. To achieve this, the book uses an exercise: If $I\subset R$ is an ideal and $I\subset\bigcup_{{m=1}}^{{\infty}}P_{m}$, then $I\subset P_{n}$ for some $n$. The problem is that this seems false. Here an example:
$I=(x_{{d(1)-1}},x_{{d(1)+1}})\subset P_{1}\cup P_{2}$ but $I$ is not contained in any of the $P_{i}$. Is there something I am missing?
https://www.sawaal.com/simplification-questions-and-answers/to-fill-a-tank-25-buckets-of-water-is-required-nbsp-how-many-buckets-of-water-will-be-required-to-fi_2047
Q:
# To fill a tank, 25 buckets of water are required. How many buckets of water will be required to fill the same tank if the capacity of the bucket is reduced to two-fifths of its present capacity?
A) 52.5 B) 62.5 C) 72.5 D) 82.5
Explanation:
Let the capacity of 1 bucket = x.
Then, capacity of tank = 25x.
New capacity of bucket = (2/5)x
∴ Required number of buckets = 25x / (2x/5) = 62.5
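As a quick sanity check, here is a minimal Python sketch (the bucket capacity x is arbitrary; any positive value gives the same answer):

```python
from fractions import Fraction

x = Fraction(1)                  # capacity of the original bucket (arbitrary)
tank = 25 * x                    # the tank holds 25 original buckets
new_bucket = Fraction(2, 5) * x  # capacity reduced to two-fifths
print(tank / new_bucket)         # -> 125/2, i.e. 62.5 buckets
```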
Q:
If
A) 30° B) 90° C) 60° D) 45°
Q:
The value of
A) 5 B) 0 C) 2-22 D) 2
Q:
Solve the following
113 × 87 =?
A) 9831 B) 10026 C) 10169 D) 10000
Q:
In ΔABC measure of angle B is 90 deg. If tanA = 12/5, and AB = 1cm, then what is the length (in cm) of side BC?
A) 2.6 B) 2.4 C) 1.5 D) 2
Q:
ΔXYZ is right angled at Y. If m∠X = 60 deg, then find the value of (cotZ + 2)
A) (2√2+1)/2 B) √3+2 C) (√6+1)/√3 D) (2√2+√3)/2
Q:
If ,then which of the following is correct?
A) a=2197 B) a>2197 C) a<2197 D) a<1728
Q:
Find the number of solutions the pair of linear equations (4x - 9y + 13 = 0 and 2x + 3y - 13 = 0) has.
A) 1 B) 2 C) 10 D) infinite
Q:
Find the area (in sq units) of the triangle formed by lines x - 3y = 0, x – y = 4 and x + y = 4.
A) 1 B) 2 C) 3 D) 4
https://mathoverflow.net/questions/288259/how-to-show-that-the-following-function-isnt-a-polynomial-over-q/288288 | # How to show that the following function isn't a polynomial over Q?
Enumerate the rationals as $b_1,b_2,\dots$ and define the (set) function: $$f(x) = (x-b_1)^2 + (x-b_1)^2(x-b_2)^2 + \dots.$$ At any particular $x$, only finitely many terms are non zero so this is perfectly well defined as a (set) function but surely, it is not equal to any polynomial! (or is it?) How do I show that there is no polynomial $p(t) \in \mathbb Q[t]$ such that $p(x) = f(x)$ for all $x \in \mathbb Q$?
If $f(x)$ were defined as $(x-b_1) + (x-b_1)(x-b_2) + \dots$, then this question is not so hard. If $p(x)$ has degree $n$, then testing on $b_1,\dots,b_n$ would show that $p(x)$ is necessarily $(x-b_1) + (x-b_1)(x-b_2) + \dots + (x-b_1)\dots(x-b_n)$, but then $x=b_{n+1}$ yields a contradiction.
I don't know how to adapt this approach. Trying to guess the polynomial seems hard even if we think $p(x)$ is degree $1$.
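To make the "only finitely many terms are nonzero" observation concrete, here is a small Python sketch; the enumeration prefix bs is mine and purely illustrative (any enumeration of $\mathbb Q$ behaves the same way):

```python
from fractions import Fraction

# A hypothetical initial segment of an enumeration of the rationals.
bs = [Fraction(0), Fraction(1), Fraction(-1), Fraction(1, 2), Fraction(-1, 2)]

def f(x, enumeration):
    """f(x) = sum_k prod_{j<=k} (x - b_j)^2, evaluated exactly.
    If x equals some b_k in the prefix, every term from index k on
    contains the factor (x - b_k)^2 = 0, so the sum is finite."""
    total, prod = Fraction(0), Fraction(1)
    for b in enumeration:
        prod *= (x - b) ** 2
        if prod == 0:          # all remaining terms vanish
            break
        total += prod
    return total

for b in bs:
    print(b, f(b, bs))         # e.g. f(b_1) = 0, f(b_2) = (b_2 - b_1)^2, ...
```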
I posted a follow up to this question here: (Variation of an old question) Are these functions polynomials?.
• It is even conceivable that the question of whether $f(x),\mathcal{O}$ (where $\mathcal{O}$ represents some ordering in the enumeration of the rationals,) is expressible as a polynomial, depends on the choice taken for $\mathcal{O}$. So a good question is whether there is any $\mathcal{O}$ and $p(t)\in\Bbb Q[t]$ for which $p(x) = f(x)$ for all rational $x$. Dec 11 '17 at 19:27
• It's easy to see that then $f_1=f/(x-b_1)^2$ would have to be a polynomial too, but of lower degree. Then again $f_2=(f_1-1)/(x-b_2)^2$ would also be a polynomial of lower degree yet... and so on. Dec 11 '17 at 21:08
• There may be a flaw in my scheme: I'm no longer sure that it's obvious that $b_1$ is a zero of $f/(x-b_1)$ and thus that $f/(x-b_1)^2$ is indeed a polynomial... Dec 11 '17 at 21:37
• @ChristianRemling, but (modulo my comment) this doesn't seem to show that the polynomial $p/(x - b_1)$ has a root. That is, we can divide $f$ by $x - b_1$ 'canonically' to obtain a function $g$ (EDIT: oops, sorry, not your $g$), and we can divide $p$ by $x - b_1$ canonically to obtain a polynomial $q$, but it isn't a priori clear to me that $g = q$ (in other words, that these two kinds of division preserve equality). Dec 11 '17 at 23:52
• I agree that $f/(x - b_1)$ is not always well defined for a function vanishing at $b_1$. In this case, though, each term in the defining sum for $f$ is a polynomial divisible by $x - b_1$, and so there is a natural sum of polynomials that deserves to be called $f/(x - b_1)$. Dec 12 '17 at 1:15
For each positive integer $n$ and any rational $x$, we have
$$f(x)\geq (x-b_1)^2(x-b_2)^2\cdots(x-b_n)^2.$$ For large $x$, the right-hand side grows like $x^{2n}$, which implies that if $f$ were a polynomial, it would have to have degree $\geq 2n$. Since $n$ is arbitrary, $f$ cannot be a polynomial.
• Just curious: Do you see a way to prove something similar if we replace $\mathbb Q$ by $\overline{ \mathbb F}_p$ ? Dec 11 '17 at 19:50
You can adapt the same approach as follows: Say $p_n(x)$ is an $n$-th degree polynomial matching $f(x)$ at all rational points; then in particular, $$p_n(b_1) = 0\\ p_n(b_2) = (b_2-b_1)^2 \\ \vdots\\ p_n(b_k) = \sum_{i\in\Bbb Z^+,\, i<k}\ \prod_{j\in\Bbb Z^+,\, j\leq i} (b_k-b_j)^2$$ For a given fixed sequencing of the rationals as $b_1, b_2, \cdots$, and for any given $k$, the latter expression is just some fixed rational number.
So $p_n$ is fixed by its values at $b_1, \ldots, b_n$; now consider $f(b_{n+1}) - p_n(b_{n+1})$. Since all the terms past the $(n+1)$-st term in $f(b_{n+1})$ are zero, $$f(b_{n+1}) = p_n(b_{n+1}) + \prod_{i\leq n}(b_{n+1}-b_i)^2 > p_n(b_{n+1}),$$ which contradicts the statement that $f$ matches $p_n$ at all rationals.
• why do $n$ values determine a degree $n$ polynomial? Don't you need $n+1$ terms? Dec 11 '17 at 20:20
• Do you really mean to consider $-f(b_{n+1} - p_n(b_{n+1}))$ and not $-f(b_{n+1}) - p_n(b_{n+1})$? Is the idea to use that $f$ is always positive? How do you show that the difference between f(b_{n+1}) and $p_n(b_{n+1})$ is that expression? Dec 11 '17 at 20:29
"If $f(x)$ were defined as $(x−b_1) + (x−b_1)(x−b_2) + …,$ then this question is not so hard."
Does taking the derivative of the given function get you to this simpler case?
"At any particular $x$, only finitely many terms are non zero"
But at irrational values of $x$, none of the terms are zero. Now, you may respond "But I'm talking about it over $\mathbb Q.$" But unless you're going to claim that $f$ isn't continuous, if you take a sequence of irrationals approaching a rational, the function evaluated at that rational must be the limit of the function evaluated at those irrational numbers. You could also construct a sequence of rationals approaching $b_1$ such that there exists some $\epsilon > 0$ such that for all $x$ in the sequence, $f(x) > \epsilon$. If $f(x)$ is continuous, $f(b_1)$ must be $> 0$, but clearly by the definition of $f$, $f(b_1) = 0$. I suppose you'll still have to argue that $f$ must be continuous, but that should follow from it being a polynomial, even with the restriction to $\mathbb Q$.
• It is not clear that the function is continuous, much less continuously extendible to $\mathbb R$, or differentiable. (Also, the termwise derivative doesn't match this easier function.) Even if the function extended continuously, your argument only shows that the same formula doesn't naïvely define the extension. Dec 11 '17 at 23:48
• It's a proof by contradiction. I'm certainly not claiming that it is continuous, merely that if it were defined by a polynomial over Q, then it would be continuous. Dec 12 '17 at 0:04
http://physics.stackexchange.com/questions/54299/semi-conductor-band-gap-and-deformation-potential | # Semi-conductor band-gap and deformation potential
Subjecting a semiconductor to stress leads to a deformation of the energy bands, roughly described by: $$H_{ij} = {\cal{D}}_{ij}^{\alpha\beta}\;\epsilon_{\alpha\beta}$$ $\epsilon$ being the strain (linked to the stress by Hooke's law), $H$ the perturbation Hamiltonian added to the Hamiltonian describing a stress-free semiconductor, and $i,j$ indexing the energy levels of the unperturbed "free" Hamiltonian. Still, I lack intuition about the appearance of this term: it seems that compression enlarges the band gap whereas dilation tightens it. Why do we have this behaviour? I have tried electrostatic arguments: since the potential decreases as $r^{-1}$, energies increase under $r \rightarrow \alpha r$ for $\alpha < 1$. I have also tried seeing the dilatation as a renormalization-group transformation, basically going to a coarser grain (although there are probably other pertinent length scales in an atomic lattice (spreading of the electronic orbitals, ...) which would make this argument wobbly). Cutting to the point, what is your hand-waved way of seeing it? Books on the subject of deformation potential don't really seem to offer intuition on it, more numerical values for specific materials.
Isn't that the only term you can write down which is linear in the strain? The assumption would be that higher order terms $\sim\epsilon^2$ are negligible unless for some reason $\mathcal{D}^{\alpha\beta}_{ij}$ vanished. Just a guess. – Michael Brown Feb 18 '13 at 14:48
## 1 Answer
Well, possibly the simplest argument (I have no idea if this is correct) is that compressing the material reduces the configuration space available to the electrons, and increasing confinement means increased energy (differences) in quantum mechanics. Consider just the old particle-in-a-box problem: the energy levels scale as $E_n \sim \frac{n^2}{L^2}$, where $L$ is the size of the box, so a smaller box means bigger spacings between energy levels.
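To put rough numbers on this hand-waving, a minimal Python sketch in natural units where $\hbar^2\pi^2/2m = 1$ (the box lengths are illustrative, not material parameters):

```python
# Particle-in-a-box levels E_n ~ n^2 / L^2 (units with hbar^2 pi^2 / 2m = 1).
# Shrinking L (compression) widens the spacing between levels, the crude
# analogue of a band gap opening; stretching L does the opposite.

def level(n, L):
    return n ** 2 / L ** 2

for L in (1.0, 0.9, 0.8):              # ~10-20 % "compression" of the box
    gap = level(2, L) - level(1, L)    # spacing of the two lowest levels
    print(f"L = {L:.1f}: E2 - E1 = {gap:.2f}")
```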
Valid quantitative argument, although you're not taking into account that our electrons are not free in this box, and you're sweeping under the rug all the details of the interactions (another way of expressing my doubts about "renormalization group" arguments). – Learning is a mess Feb 18 '13 at 15:52
The bound electrons aren't free, but the conduction electrons are. – KDN Feb 18 '13 at 16:30
@Learningisamess, I guess you mean "qualitative"? It's certainly not a quantitative argument. The particle in a box is not meant to even be an approximation to the problem you are interested in. I am simply trying to illustrate a completely general feature of quantum mechanics that may be relevant to the problem at hand: greater confinement implies greater energy (differences). Weak interactions won't change this behaviour much; you can imagine making a mean-field approx. where increasing the electron density just increases the single-particle effective potential due to Coulombic repulsion. – Mark Mitchison Feb 18 '13 at 16:37
https://tex.stackexchange.com/questions/454801/copy-tabularx-x-column-as-new-vertically-centered-column | # Copy tabularx X column as new vertically centered column
My question is very similar to Vertical alignment in tabularx X column type, and there seem to be many similar questions, so hopefully I haven't missed the answer somewhere.
However, I would like to create a new column type that is a copy of the tabularx X column, the difference being that the new column Y is an m column rather than a p column (which X is by default, as described in the docs).
\tabularxcolumn - The default definition of X is p{#1}.
\def\tabularxcolumn#1{p{#1}}
So I would like to define a new column type like
\newcolumntype{Y}{>{\centering\arraybackslash}X}
and then convert the Y column type to m instead of the p it would currently be. That way I can leave X with its original definition. If I use
\renewcommand{\tabularxcolumn}[1]{>{\small}m{#1}}
Then X is changed to an m column, which is not desired. I would need something like
\renewcommand{Y}[1]{>{\small}m{#1}}
But that doesn't work.
Thanks for any help,
• Do you want/need both X and Y columns in a single table?
– Werner
Oct 11, 2018 at 18:00
• Potentially, I would like to be able to have a p type X column and a m type X column if possible. Oct 11, 2018 at 18:02
You need to patch in a second X-like column type. This just duplicates the definition of X, so that you can separately specify \tabularxcolumn for X and \tabularxycolumn for Y.
\documentclass[a4paper]{article}
\usepackage{etoolbox,tabularx}
\tracingtabularx
\makeatletter
% declare Y so it is legal in a tabularx preamble; its real meaning is patched in below
\newcolumntype{Y}{}
% Y analogue of \tabularxcolumn: Y columns become m{<computed width>}
\def\tabularxycolumn#1{m{#1}}
\def\TX@newycol{\newcol@{Y}[0]}
% once tabularx has computed the column width, define Y as well as X
\patchcmd\TX@endtabularx
{\expandafter\TX@newcol}%
{\expandafter\TX@newycol\expandafter{\tabularxycolumn{\TX@col@width}}%
\expandafter\TX@newcol}
{}
{}
% let the preamble parser rewrite Y exactly like X, so Y counts as a flexible column
\patchcmd\TX@endtabularx
{\def\NC@rewrite@X}%
{\def\NC@rewrite@Y{\NC@rewrite@X}%
\def\NC@rewrite@X}
{}
{}
\makeatother
\begin{document}
\begin{tabularx}{6cm}{XXc}
aa aaa aaa aaa aaa aaa&
bb bb bb bb bb bb bb bb bb bb bb bb bb bb b &
aa aaa
\end{tabularx}
\begin{tabularx}{6cm}{XYc}
aa aaa aaa aaa aaa aaa&
bb bb bb bb bb bb bb bb bb bb bb bb bb bb b &
aa aaa
\end{tabularx}
\begin{tabularx}{6cm}{YYc}
aa aaa aaa aaa aaa aaa&
bb bb bb bb bb bb bb bb bb bb bb bb bb bb b &
aa aaa
\end{tabularx}
\end{document}
• I thought it worked, but I was only looking at the PDF (which compiled fine); there were actually errors. I am using pdfLaTeX Version 3.14159265-2.6-1.40.18 (MiKTeX 2.9.6500 64-bit). My original use was in knitr, but it failed there, and then I went to TeXstudio, but it failed there as well. I started with a blank tex document and copied and pasted your code. Any thoughts? Thanks! Oct 19, 2018 at 23:02
• @Prevost sorry I had omitted another place to patch in the Y version of X columns, the original wasn't counting Y columns in its count of the number of active X columns. code and result image updated Oct 20, 2018 at 0:02
• That's great compiles no errors! Could that code be theoretically created from just your docs? Oct 20, 2018 at 0:19
• @Prevost not from the user level documentation, it's definitely an extension of the design not just "using tabularx". Perhaps it could be derived from the documented code, although actually I only looked at the docstripped code with comments removed, perhaps that's why I missed one place the first time:-) Oct 20, 2018 at 8:12
http://link.springer.com/article/10.1007%2Fs12544-012-0082-9 | Open Access
Original Paper
European Transport Research Review, Volume 4, Issue 4, pp 217–233
# Spatial association techniques for analysing trip distribution in an urban area
## Authors
• Gabriella Mazzulla, Department of Land Use Planning, University of Calabria
• Carmen Forciniti, Department of Land Use Planning, University of Calabria
DOI: 10.1007/s12544-012-0082-9
## Abstract
### Purpose
Urban processes and transportation issues are intrinsically spatial and space dependent. For analysing the spatial pattern of urban and transportation features, spatial statistics techniques can be applied. This paper presents spatial association statistics for mobility data, in particular the daily trips made by people from home to work and study places (commuter trips).
### Methods
In the last few years, urban analysis has been supported by the adoption of Geographic Information Systems (GIS). Using GIS, statistics of global autocorrelation (Getis-Ord General G and Global Moran's Index I) and statistics of local autocorrelation (Gi* and Local Moran's I) were computed.
### Results
The application of spatial association statistics made it possible to find clusters and to identify possible hot spots in the mobility data set. The results showed that the spatial distribution of trips among the census parcels displays spatial dependence in the data set.
### Conclusions
This work provided interesting results about the spatial distribution of commuter trips, showing spatial autocorrelation of the daily trips variable.
### Keywords
Spatial association · Daily commuter trips · GIS
## 1 Introduction
Urban processes and transportation issues are intrinsically spatial and space dependent. An urban spatial structure is the spatial arrangement of a city resulting from the interaction between land markets, topography, infrastructure, taxation, regulations and urban policy over time [1]. Railways, road networks, civil and industrial buildings, and other constructions are built on the territory to fit people's needs. In particular, transport demand is influenced by the location of dwellings and economic activities; therefore, it is strongly dependent on their spatial distribution [2].
To uncover the processes behind spatial distributions, it is necessary to manipulate a large amount of spatial data about urban areas using spatial analysis techniques. The notion of spatial analysis can include any operation performed on geographical data. Spatial analysis techniques allow the shape of spatial aggregation of variables and their spatial relationships to be studied. They make it possible to draw objective conclusions about spatial patterns: whether a spatial pattern is random or represents a definite aggregation; what the causes of a spatial distribution are; whether the observed values are sufficient for analysing a spatial phenomenon; and how heterogeneous the areas in the region of study are [3].
Over the last few years, the adoption of Geographic Information Systems (GIS) has supported urban analysis. A GIS allows the spatial relationships among the variables to be studied, because it integrates common tasks performed on the database, such as statistical analysis, with the advantages of graphical representation of data and geographic analysis offered by maps. Using GIS, researchers can manipulate a large amount of data and visualize urban affairs [4].
This paper presents the application of spatial association techniques to mobility data of the Cosenza-Rende urban area. The aim is to understand the spatial distribution of mobility data and to identify possible spatial patterns.
The paper is organized as follows: in the next section some spatial association techniques are described; Section 3 presents a brief literature review about spatial association; in Section 4 the case study is described; Section 5 presents the outcomes of the application of global and local techniques of spatial association and concluding remarks are contained in Section 6.
## 2 Spatial association techniques
Spatial statistics comprises a set of techniques for describing and modelling spatial data. Unlike traditional (non-spatial) statistical techniques, spatial statistical techniques actually use space (area, length, proximity, orientation, or spatial relationships) directly in their mathematics [5].
There are some technical issues in spatial statistics. Among these, spatial association or spatial autocorrelation is the tendency of variables to display some degree of systematic spatial variation. In urban studies, this often means that data from locations near to each other are usually more similar than data from locations far away from each other. Spatial association may be caused by a variety of spatial processes, including interaction, exchange and transfer, and diffusion and dispersion. It can also result from missing variables and unobservable measurement errors in multivariate analysis [6]. The advantages of the study of spatial autocorrelation are manifold [7]:

• it provides tests of model misspecification;
• it determines the strength of the spatial effects on the variables in the model;
• it allows for tests on assumptions of spatial stationarity and heterogeneity;
• it finds the possible dependent relationship that a realization of a variable may have on other realizations;
• it identifies the role that distance decay or spatial interaction might have in any spatial autoregressive model;
• it helps to recognize the influence that the geometry of the spatial units under study might have on the realizations of a variable;
• it allows the strength of associations among realizations of a variable between spatial units to be identified;
• it gives the means to test hypotheses about spatial relationships;
• it gives the opportunity to weigh the importance of temporal effects;
• it provides a focus on a spatial unit to better understand the effect that it might have on other units and vice versa ('local spatial autocorrelation');
• it helps in the study of outliers.
Spatial association can be modelled by a specific kind of regression model, known as spatially autoregressive models. These models have been developed in geography, a field often concerned with the analysis of areal units (e.g., census parcels) or network data (e.g., nodes in a network), and have recently found substantial application in urban analysis. In these models, the spatial dependence is taken into account by the addition to the regression model of a new term in the form of a spatial relation for the dependent variable. Formally, this is expressed as [8]:
$$Y = \rho WY + X\beta + \varepsilon; \qquad \varepsilon = \lambda W\varepsilon + \mu$$
(1)
The elements of the model are: a vector Y (n×1) of objective variable observations; a matrix X (n×K) of independent observations including the usual constant; a vector β (K×1) of parameters corresponding to the K independent variables. Scalars ρ and λ are parameters of spatial association corresponding to the objective variable and the error term ε, respectively, while μ are independent and possibly homogeneous error terms [6]. W is the spatial lag operator, a matrix (n×n) containing weights $w_{ij}$ describing the degree of spatial relationship (contiguity, proximity and connectivity) between units of analysis $i$ and $j$. Considering physical contiguity, in the matrix W a weight of 1 is assigned to pairs of zones sharing a border and 0 otherwise. Connectivity can be given in terms of travel between pairs of origins and destinations. Alternatively, proximity can be defined in terms of distance or various accessibility measures, such as travel time or generalized costs.
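As a purely illustrative aside (the four-zone layout below is hypothetical, not taken from the paper), a binary contiguity matrix can be built and row-standardised in a few lines of Python:

```python
import numpy as np

# Four zones in a row: zone i shares a border only with zones i-1 and i+1.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Row-standardisation: each row sums to 1, so W @ x averages over neighbours.
W = W / W.sum(axis=1, keepdims=True)
print(W)
```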
In general, the modelling process is preceded by exploratory spatial data analysis (ESDA), a phase associated with the visual presentation of the data in the form of graphs and maps, which leads to the identification of spatial dependency patterns in the phenomenon under study. ESDA is a collection of techniques to visualize spatial distributions, identify atypical locations or spatial outliers, discover patterns of spatial association, clusters or hot spots, and suggest spatial regimes or other forms of spatial heterogeneity.
In ESDA, the predominant approach to assess the degree of spatial association is based on global statistics. Among the most familiar tests for global spatial autocorrelation is Moran's I. This statistic is essentially a cross-product correlation measure that incorporates "space" by means of a spatial weights matrix W [9]. Moran's global index I can be expressed as follows:
$$I = \frac{\sum\nolimits_{i = 1}^n \sum\nolimits_{j = 1}^n w_{ij}\left(x_i - \overline{x}\right)\left(x_j - \overline{x}\right)}{\sum\nolimits_{i = 1}^n \left(x_i - \overline{x}\right)^2}$$
(2)
where $n$ is the number of areas, $x_i$ is the value of the attribute considered in area $i$, $\overline{x}$ is the mean value of the attribute in the region of study, and $w_{ij}$ are the elements of the spatial lag operator W. Generally, Moran's I serves as a test whose null hypothesis is spatial independence (under which its value would be zero). Positive values (between 0 and 1) indicate a direct correlation, and negative values (between −1 and 0) indicate an inverse correlation. To estimate the significance of the index, it is necessary to associate it with a statistical distribution, which is usually the normal distribution.
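A minimal NumPy sketch of Eq. (2), with toy weights and attribute values of our own (assuming a row-standardised W):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I as in Eq. (2): weighted cross-products of deviations
    from the mean, over the total squared deviation."""
    z = x - x.mean()
    return (z @ W @ z) / (z ** 2).sum()

# Toy example: 4 zones in a row, row-standardised contiguity weights.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)

print(morans_i(np.array([1.0, 2.0, 3.0, 4.0]), W))   # smooth surface -> I > 0
print(morans_i(np.array([1.0, 4.0, 1.0, 4.0]), W))   # alternating    -> I < 0
```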
In the study of local pattern association, several statistics of spatial association allow places with unusual concentrations of high or low values ('hot' or 'cold' spots) to be detected. In the last few years, two statistics have been used in many applications: the $G_i(d)$ statistic [10–12] and Local Indicators of Spatial Association (LISA) such as Local Moran's I [13].
The $G_i(d)$ statistic is distance-based and measures the proportion of a variable found within a given radius of a point, relative to the total sum of the variable in the study region. The statistic for a location $i$ is defined as:
$$G_i(d) = \frac{\sum\nolimits_{j = 1}^n w_{ij}(d)\, x_j}{\sum\nolimits_{i = 1}^n x_i}$$
(3)
where $x_j$ is the value of the observation at $j$, $w_{ij}(d)$ is the $ij$ element of a binary W matrix ($w_{ij} = 1$ if the site is within distance $d$, $w_{ij} = 0$ elsewhere) and $n$ is the number of observations. The mean and the variance of this statistic can be obtained from a randomization process and used to derive a standardized statistic. When the value of the standardized statistic is greater than the cut-off value at a prespecified level of significance, positive or negative spatial association exists. Positive values represent a spatial agglomeration of relatively high values, while negative values represent relatively low values clustered together [6].
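A corresponding sketch of Eq. (3), again with hypothetical data:

```python
import numpy as np

def getis_ord_g_i(x, Wd):
    """G_i(d) of Eq. (3): attribute total within distance d of each zone
    (rows of the binary matrix Wd), relative to the regional total."""
    return (Wd @ x) / x.sum()

# Binary distance weights for 4 zones on a line (neighbours within one step).
Wd = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)

x = np.array([10.0, 80.0, 90.0, 5.0])   # a "hot" pair in the middle
print(getis_ord_g_i(x, Wd))             # the largest value flags the hot cluster
```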
The LISA allows for the decomposition of global indicators, such as Moran's I, into the contribution of each individual observation. LISA statistics must satisfy two requirements: the LISA for each observation gives an indication of the extent of significant spatial clustering of similar values around that observation; and the sum of the LISAs for all observations is proportional to a global indicator of spatial association [13]. In general terms, a LISA for a variable $x_i$, observed at location $i$, can be expressed as a statistic $L_i$:
$$L_i = f\left(x_i, x_j\right)$$
(4)
where $f$ is a function and $x_j$ are the values observed in the neighbourhood $J_i$ of $i$.
The local version of Moran’s I is given by the following expression [13]:
$$I_i = x_i \sum\nolimits_j w_{ij} x_j$$
(5)
where the terms are analogous to those of the global Moran's I. It is possible to derive the mean and the variance of $I_i$ based on a randomization procedure, and inference can be carried out by obtaining a normalized statistic.
Interpretation of the Local Moran's I is less intuitive than interpretation of the $G_i(d)$ statistic. In general, there are four patterns of local spatial association:
1. High-high association: the value of $x_i$ is above the mean and the values of $x_j$ at 'neighbouring' zones are generally above the mean; the statistic is positive.
2. Low-low association: both values are below the mean; the statistic is positive.
3. High-low association: the value at $i$ is above the mean and the values at neighbouring zones are, in general, below the mean; this gives a negative statistic.
4. Low-high association: the value at $i$ is below the mean and the weighted average is above the mean; $I_i$ is negative.
These can be read off a Moran scatterplot; a small computational sketch follows. The combination of LISA and a Moran scatterplot provides information on different types of spatial association at the local level.
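The following Python sketch (toy weights and values of our own; the randomisation-based significance test is omitted) reproduces the four patterns using Eq. (5) on mean-deviated values:

```python
import numpy as np

W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = W / W.sum(axis=1, keepdims=True)     # row-standardised weights

x = np.array([10.0, 12.0, 1.0, 2.0])
z = x - x.mean()                         # deviations from the regional mean
lag = W @ z                              # average deviation of the neighbours
I_local = z * lag                        # Eq. (5) on deviations

for i in range(len(x)):
    quadrant = ("high" if z[i] > 0 else "low") + "-" + \
               ("high" if lag[i] > 0 else "low")
    print(f"zone {i}: I_i = {I_local[i]:+.2f} ({quadrant})")
```

With these toy values all four quadrants (high-high, high-low, low-high, low-low) appear, one per zone.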
## 3 Literature review
In the literature, many studies deal with the application of spatial analysis but in different fields. For example, Anselin [13] applied measures of spatial association to investigate the spatial patterns of conflict in Africa, whereas a study by Anselin et al. [9] established the utility of exploratory spatial data analysis in uncovering interesting patterns of child risk, considering rates for infant mortality, low birth weight and prenatal care as social indicators. In both cases, the exploration of spatial patterns clearly demonstrated the presence of significant spatial clusters of high and low values, as well as some interesting spatial outliers.
Spatial association has been studied also to analyse land-use data, which tend to be spatially autocorrelated, as land-use changes in one area tend to propagate to neighbouring regions. Aguiar et al. [14] built spatial regression models to assess the determining factors of deforestation, pasture, and temporary and permanent agriculture in the Amazon. The goal of that paper is to explore intra-regional differences in land-use determining factors.
Over the last decades, there has been considerable interest in the analysis of urban spatial structures using spatial analysis techniques to describe and explain the distribution of population, land values, employment and other structural variables in a city. Some studies concern exploratory spatial data analysis. Among these, Páez et al. [15] applied ESDA techniques to analyse land price data in Sendai City, a middle-sized Japanese city with a population of about 1 million. The application of global statistics such as Moran's Index I showed that all variables present a high degree of positive spatial autocorrelation, meaning that observations with similar values tend to form clusters. To complement the global analysis, the authors resorted to the use of local spatial association statistics. Localised exploratory data analysis shows that the distribution of land prices in Sendai City follows an essentially monocentric pattern, with only two spatial regimes: the CBD area and the periphery. In Baumont et al. [16] ESDA was used to analyse the intraurban spatial distributions of population and employment in the agglomeration of Dijon (regional capital of Burgundy, France). The aim was to study whether this agglomeration has followed the general tendency of job decentralization observed in most urban areas or whether it is still characterized by a monocentric pattern.
In other studies the spatial association techniques were applied to analyse housing prices. Tse [17] suggested a stochastic approach which is able to correct autocorrelation bias in the hedonic house price function due to spatial dependence. The model, using data from Hong Kong, incorporates adjustments reflecting net floor area ratio, age, floor level, views, transport accessibility and amenities such as availability of recreational facilities.
Spatial autoregressive models (SAR) were used to estimate the impact of locational elements (as propinquity to a shopping facility or a recreational amenity) on the price of residential properties sold during 1995 in the Greater Toronto Area [18]. The first step was to estimate Moran’s I to determine the effects of spatial autocorrelation that existed in housing values. This research discovered that SAR models offered a better fit than non-spatial models, because in the presence of other explanatory variables, locational and transportation factors were not strong determinants of housing values.
The analysis of spatial association is beginning to be applied to model transportation processes and land use and transportation interaction. Bolduc et al. [19] analysed travel flows and modal split using a regression model of spatial association. In this model an error components specification with spatial error autocorrelation was introduced. Application of the model to a case study shows that the spatial model gives a better fit to the data compared to non-spatial models.
Berglund and Karlström [20] used $G_i$ statistics (local spatial association) for applications with flow data, and demonstrated their usefulness in two applications. They explored non-stationarities and identified underlying geographical patterns. The authors concluded that localised statistics allow one to examine how relationships between variables vary over space.
A study proposed by Shaw and Xin [21] implements a temporal GIS, coupled with an exploratory analysis approach, to allow a systematic and interactive way of analysing land use and transportation interaction among various data sets and at user-selected spatial and temporal scales. Although the identified interaction patterns do not necessarily lead to rules that can be applied to different geographic areas, the results of exploratory analysis provide useful information for transportation modellers to re-evaluate the current model structure and to validate the existing model parameters.
Another application of spatial association is in traffic safety [22]. That paper aims at identifying accident hot spots by means of a local indicator of spatial association (LISA), in particular Moran's I. For applications in traffic safety, Moran's I was adapted because road accidents occur on a network. The authors indicated that an incorrect use of the underlying distribution would lead to false results.
Analysis of the literature showed that spatial analysis techniques were initially applied to the study of socio-economic and demographic variables. Only more recently have these techniques been applied in the analysis of urban areas, and there are still few applications in the field of transport and mobility. Researchers in the field of transportation, however, have shown a growing interest in applying these techniques to the analysis of mobility, because there is a strong spatial component in the processes of trip generation and distribution.
This work therefore sets out to investigate the presence of spatial autocorrelation in data on the trip distribution in an urban area.
## 4 The case study
The case study focuses on the urban area of Cosenza, located in the Calabria region (Southern Italy). Cosenza, the provincial capital in the north of the Calabria region, forms a single urban area together with Rende, its neighbour to the north.
This urban area is the most important centre of attraction for all the towns of the province because it performs several administrative functions and offers different services and job opportunities. Furthermore, Rende is home to the University of Calabria (UniCal). The campus has affected the mobility characteristics of all the urban centres of the province. Nowadays the University represents one of the major centres of attraction of the urban area; over 33,000 students and about 2,800 members of staff attend the campus. Thanks to the university, Rende has changed considerably in recent decades, with the construction of new residential areas and new infrastructure.
Concerning mobility and transport facilities, the analysed area represents one of the main junctions of the Calabrian railway and road system. The motorway A3 Salerno-Reggio Calabria, the SS107 Paola-Crotone state road, and the state roads n.19 and n.19bis cross the urban area. Furthermore, the urban area is crossed by the railway lines Sibari-Cosenza and Paola-Cosenza, which provide the rail link between the Tyrrhenian and Ionian rail corridors. Finally, the regional railway lines to Catanzaro and Sila, which have a narrow gauge and are managed by "Ferrovie della Calabria", converge in the urban area of Cosenza (Fig. 1).
To provide a preliminary characterization of the cities analysed in this work, it is necessary to report some information about population and economic activities [23].
Concerning population and housing (Table 1), more than 70,000 people are resident in the city of Cosenza; the city of Rende, on the other hand, has a resident population of about half that of Cosenza. It is necessary to specify that Cosenza and Rende feel the effects of the presence of the University of Calabria; so, in addition to residents, many other people (university students) live in the urban area, especially in the city of Rende.
Table 1 Population and housing data

|                                           | Cosenza | Rende  | Urban area |
|-------------------------------------------|---------|--------|------------|
| Total population (inh.)                   | 72,998  | 34,421 | 107,419    |
| Male population (inh.)                    | 34,689  | 16,948 | 51,637     |
| Female population (inh.)                  | 38,309  | 17,473 | 55,782     |
| Population younger than 15 years (inh.)   | 9,432   | 5,351  | 14,783     |
| Population between 15 and 65 years (inh.) | 48,387  | 24,989 | 73,376     |
| Population older than 65 years (inh.)     | 15,179  | 4,081  | 19,260     |
| Families (nr.)                            | 27,476  | 12,090 | 39,566     |
| Families with 1 member (nr.)              | 7,561   | 2,636  | 10,197     |
| Families with 2 members (nr.)             | 6,635   | 2,560  | 9,195      |
| Families with 3 members (nr.)             | 5,186   | 2,502  | 7,688      |
| Families with 4 members (nr.)             | 5,516   | 3,185  | 8,701      |
| Families with 5 members (nr.)             | 1,984   | 971    | 2,955      |
| Families with 6 or more members (nr.)     | 594     | 236    | 830        |
| Surface area (km²)                        | 36.82   | 44.72  | 81.54      |
| Total housing (nr.)                       | 31,129  | 15,727 | 46,856     |
| Empty housing (nr.)                       | 3,224   | 1,706  | 4,930      |
| Buildings (nr.)                           | 6,432   | 5,303  | 11,735     |
| Population density (inh./km²)             | 1,982   | 770    | 1,317      |
| Housing density (nr. hous./km²)           | 845     | 352    | 575        |
The population of the urban area is equally spread between males (48 %) and females (52 %). About 68 % of the urban area population belongs to the intermediate age class (between 15 and 65 years old), which represents persons of working age; about 18 % of people are older than 65 years and about 14 % younger than 15 years. The city of Rende is characterized by a younger population than Cosenza: in fact, only 12 % of people living in Rende are older than 65 years, against 20 % for the city of Cosenza; in addition, 15 % of people living in Rende are younger than 15 years, against 13 % for the city of Cosenza. This result can be confirmed by calculating the old-age dependency ratio, which is the ratio of the number of elderly persons of an age when they are generally economically inactive (over 65 in this case) to the number of persons of working age (conventionally 15-65 years old). Specifically, the ratio has a value of 0.26 for the urban area and 0.31 for the city of Cosenza; on the other hand, the value for the city of Rende (0.16) is about half of that for Cosenza.
In the urban area there are about 40,000 families; 70 % of these families lives in Cosenza. A large part of families living in the urban area (about 26 %) have one member; about 23 % of families have two members; more than 40 % are families with three or four components; finally, only 10 % of families have five or more members.
The urban area covers a surface of about 82 km², of which about 55 % belongs to the city of Rende. Comparing the population and surface area values of the two cities, Rende is larger than Cosenza but less populated. This is confirmed by the population density, the ratio of the population of a territory to its total size: one square kilometre of Cosenza is populated by about 2,000 inhabitants, against about 800 for Rende. The urban area offers about 47,000 housing units, of which about 66 % are in the city of Cosenza. Comparing the number of housing units and the surface areas of the two cities, Rende offers less housing than Cosenza. This is confirmed by the housing density (Fig. 2), the ratio of the housing stock of a territory to its total size: 1 km² of Cosenza offers more than 800 housing units, while about 350 housing units are found on 1 km² of Rende.
As Fig. 2 shows, the old town and the city centre of Cosenza are characterized by the highest values of the ratio of housing surface area to total surface (between 40 % and 80 %); in the suburbs of Cosenza and the town centre of Rende the surface area occupied by housing is between 10 % and 40 %; finally, in the most marginal areas of Cosenza and Rende the housing density is at most 10 %. In the urban area there are about 12,000 buildings, of which about 55 % are in the city of Cosenza. Table 2 shows some data regarding the levels of resident employment and resident employment by sector in the analysed area. The urban area labour force amounts to about 42,000 persons, of which about 66 % belongs to the city of Cosenza and the remaining 34 % to the city of Rende. In the urban area there are about 33,000 resident employed persons, of which about 22,000 in Cosenza (65 %).
Table 2 Resident employment data

|                                           | Cosenza | Rende  | Urban area |
|-------------------------------------------|---------|--------|------------|
| Resident labour force                     | 27,831  | 14,477 | 42,308     |
| Resident employed persons                 | 21,529  | 11,844 | 33,373     |
| Resident persons employed in agriculture  | 419     | 224    | 643        |
| Resident persons employed in industry     | 2,898   | 1,660  | 4,558      |
| Resident persons employed in services     | 18,212  | 9,960  | 28,172     |
| Resident employees                        | 16,577  | 8,905  | 25,482     |
Obviously, these percentages are correlated to the population size. In fact, in order to compare the employment data of the two analysed cities and to give more specific information about the levels of employment, some rates can be calculated.
As an example, the regional employment rate gives an idea of the levels of employment by considering employed persons as a percentage of the population. In this case study, the employment rate is equal to 31 % for the urban area, 29 % for the city of Cosenza, and 34 % for Rende; therefore, Rende has a higher share of employed persons relative to its total population than Cosenza. Analogously, the regional unemployment rate can be calculated by considering unemployed persons as a percentage of the economically active population (labour force). The urban area presents an unemployment rate of about 21 %, Cosenza of about 23 %, while Rende has the lowest value, equal to 18 %. Analysing the data on employment by sector in the studied area, persons employed in the services represent 84 % of the total employed persons, about 14 % of resident persons work in industry, and only 2 % in agriculture. Finally, 76 % of employed persons are employees.
Table 3 shows some data regarding the employment in the analysed area. ISTAT provides the data regarding economic activities through the decennial census of industrial and service activities [24]. These data show that in the urban area there are predominantly enterprises operating in the service sector; specifically, there are 9,789 private and public enterprises, with 45,415 persons employed (72 % in Cosenza and 28 % in Rende). The enterprises are generally small, with an average staff of 4.4 employees. While in Cosenza most people are employed in the public services sector, the enterprises located in Rende belong prevalently to business activities. About 6 % of the 45,000 persons employed work in the agriculture sector, about 13 % in industry, and about 81 % in services.
Table 3 Number of persons employed in the private and public enterprises

|                                           | Cosenza | Rende  | Urban area |
|-------------------------------------------|---------|--------|------------|
| Employed persons                          | 32,751  | 12,664 | 45,415     |
| Persons employed in agriculture           | 2,852   | 25     | 2,877      |
| Persons employed in industry              | 3,261   | 2,701  | 5,962      |
| Persons employed in services              | 26,638  | 9,938  | 36,576     |
|                                           | 7,262   | 3,794  | 11,056     |
| Persons employed in other private services| 6,074   | 2,844  | 8,918      |
| Persons employed in public services       | 13,302  | 3,300  | 16,602     |
### 4.1 Daily trips characteristics
Census data of the population [23] also provide data on the daily trips made by people from home to work and study places (commuter trips). The trips are distinguished into trips with destination in the place of residence (internal trips) and trips with destination outside the place of residence (external trips).
However, it is necessary to observe that among the trips from Cosenza some have their destination in Rende and vice versa; these are therefore internal trips for the urban area. In order to quantify them, some information collected by previous surveys is taken into account, specifically a survey carried out on the occasion of the drafting of the urban traffic plan of Cosenza [25]. The survey, carried out in May 2000, was addressed to 649 households (2,014 members) out of 28,499 resident households [26]. From the survey data it follows that there are 32,852 trips per day made (for all purposes) by persons resident in the city with destination in other places, but a relevant part of these (17,924 trips) had their destination in Rende (54.6 %). This percentage can be used for estimating the number of commuter trips with origin in Cosenza and destination in the urban area.
Analogously, from the survey carried out on the occasion of the drafting of the urban traffic plan of Rende [27], a number of 7,293 trips per day made (for all purposes) by persons resident in Rende with destination in other places was estimated. Also in this case, a relevant part of the trips (5,272) had their destination in Cosenza (72.3 %). This percentage can be used for estimating the number of commuter trips with origin in Rende and destination in the urban area.
Table 4 shows that the percentage of the trips produced by the residents with destination inside the urban area is high for both Cosenza and Rende (about 90 % of total trips).
Table 4 Daily trips for work and study purposes

|         | Internal trips | Trips with destination in Cosenza | Trips with destination in Rende | External trips | Total  |
|---------|----------------|-----------------------------------|---------------------------------|----------------|--------|
| Cosenza | 22,157         |                                   | 4,138                           | 3,441          | 29,736 |
| Rende   | 11,462         | 4,535                             |                                 | 1,738          | 17,735 |
| Total   | 33,619         | 4,535                             | 4,138                           | 5,179          | 47,471 |
The trips with destination in Cosenza and those in Rende have been considered as internal trips. As shown in Fig. 3, the internal trips vary from 0 to about 450 per census parcel. The highest values are concentrated in the urbanized parcels: in Rende these lie along the state roads n.19 and n.19bis and in the western region, whereas in Cosenza they are in the northern area. Furthermore, some parcels have numerous daily trips but also a large area. The other census parcels have fewer internal trips and are localized in the suburban areas, which have low values of population and housing.
Figure 4, showing the external trips, has a configuration similar to Fig. 3, but the values per census parcel are lower, varying from 0 to about 80 daily trips.
However, it is necessary to point out that census data refer to the trips made for work and study purposes only, while a relevant part of the daily trips is made for other purposes. As an example, from the same survey carried out for the urban traffic plan of Cosenza it emerges that out of 5,075 home-based trips made by a sample of residents in Cosenza, 1,924 (38 %) are trips made for work and study purposes, while 3,151 (62 %) are trips made for other purposes. Therefore, we can assume that the 47,471 commuter trips registered by the census represent only 38 % of the total trips made in a day. Taking into account the complementary percentage (62 %), a realistic value of the daily home-based trips amounts to 124,924. This value could be further increased in order to take into account the non-home-based trips.
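As a back-of-the-envelope check of this scaling in Python (using the rounded 38 % share quoted above):

```python
# Census commuter trips are taken to be 38 % of all daily home-based trips
# (1,924 of 5,075 surveyed home-based trips were for work or study).
commuter_trips = 47_471
work_study_share = 0.38
print(round(commuter_trips / work_study_share))   # -> 124924
```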
## 5 Spatial techniques application
Clustering techniques have emerged as a potential approach for analysing complex spatial data in order to determine whether or not inherent geographically based relationships exist. The measures of global and local spatial autocorrelation defined in Section 2 were applied and implemented in a GIS environment for analysing the spatial association of the internal and external daily trips made in the urban area of interest. The computer program ArcGIS contains methods that are most appropriate for understanding broad spatial patterns and trends.
### 5.1 Global statistics of spatial association
The purpose of the application of global techniques is to understand the spatial distribution of trips among the census parcels in the entire urban area. The tools used for calculating global statistics in ArcGIS are High/Low Clustering and Spatial Autocorrelation.
High/Low Clustering measures the degree of clustering for either high values or low values. It calculates the Getis-Ord General G statistic and an associated Z score, which is a measure of statistical significance. The null hypothesis to reject is "there is no spatial clustering". When the absolute value of the Z score is large, the null hypothesis can be rejected: the higher (or lower) the Z score, the more intense the clustering. A Z score near zero indicates no apparent clustering within the study area, whereas a positive and a negative Z score indicate clustering of high and low values, respectively. This statistic is very useful to understand the pattern of daily trips in the urban area of Cosenza and Rende.
Regarding the internal trips, the outcomes (Table 5) indicate that the Z score value is negative and high in absolute value; therefore, the null hypothesis can be rejected and there is less than 1 % likelihood that the clustering of low values could be the result of random chance (Fig. 5).
Table 5 General G summary for daily internal trips

| Statistic          | Value     |
|--------------------|-----------|
| Observed General G | 0.000348  |
| Expected General G | 0.000449  |
| Variance           | 0.000000  |
| Z score            | −3.584739 |
| p-value            | 0.000337  |
In the case of the application to the external trips, the outcomes (Table 6) indicate that the Z score value is negative but its absolute value is lower; therefore, the null hypothesis cannot be rejected.
Table 6 General G summary for daily external trips

| Statistic          | Value     |
|--------------------|-----------|
| Observed General G | 0.000413  |
| Expected General G | 0.000449  |
| Variance           | 0.000000  |
| Z score            | −1.180129 |
| p-value            | 0.237949  |
Figure 6 reports the graphic output, which shows that even if there is some clustering, the pattern may be due to random chance. This result is probably caused by the data set, which for external trips contains low values with respect to the internal trips.
Spatial Autocorrelation measures the Global Moran’s I which evaluates whether the analysed pattern is clustered, dispersed, or random. A Moran’s I value near +1.0 indicates clustering whereas a value near −1.0 indicates dispersion. The Global Moran’s I function also calculates a Z score value that indicates whether or not to reject the null hypothesis: “there is no spatial clustering”. To determine if the Z score is statistically significant, it is compared to the range of values for a particular confidence level. When the p value is small and the absolute value of the Z score is large enough to fall outside of the desired confidence level, the null hypothesis can be rejected.
Analysing the spatial distribution of the internal trips, it is evident that the Z score value is high and the null hypothesis can be rejected (Table 7).
Table 7 Global Moran's I summary for daily internal trips

| Statistic      | Value     |
|----------------|-----------|
| Moran's Index  | 0.153467  |
| Expected Index | −0.001198 |
| Variance       | 0.000021  |
| Z score        | 33.541291 |
| p-value        | 0.000000  |
As represented in Fig. 7, the data are clustered and there is less than 1 % likelihood that the clustered pattern could be the result of random chance.
The results of the spatial autocorrelation applied to the external trips follow the same trend as the previous ones, as shown in Table 8.
Table 8 Global Moran's I summary for daily external trips

| Statistic      | Value     |
|----------------|-----------|
| Moran's Index  | 0.163209  |
| Expected Index | −0.001198 |
| Variance       | 0.000021  |
| Z score        | 35.724162 |
| p-value        | 0.000000  |
Therefore, the null hypothesis can be rejected and there is a clustered pattern of the data (Fig. 8).
The application of the Getis-Ord General G and of Moran's Index I gives similar results for the analysis of internal trips but dissimilar ones for external trips. In fact, for internal trips, the first statistic establishes that there is clustering of low values, and the second one confirms the presence of spatial patterns. Instead, for external trips, the General G statistic indicates that the distribution of data is random, whereas Moran's I shows that there is a clustered pattern.
### 5.2 Local statistics of spatial association
The global measures of spatial association refer to the entire area and do not give indications about where the clusters are localized. The local statistics of spatial association are useful in detecting places with unusual concentrations of values (hot spots). The tools of ArcGIS used in this work for applying the local statistics are Hot Spot Analysis and Cluster and Outlier Analysis.
Hot Spot Analysis calculates the Getis-Ord $G_i^*$ statistic. The output of the $G_i^*$ function is a Z score, which represents the statistical significance of clustering for a specified distance and must be compared to the range of values for a particular confidence level. A high Z score for a feature indicates its neighbours have high attribute values, and vice versa. A Z score near zero indicates no apparent concentration.
The Getis-Ord $G_i^*$ statistic applied to internal trips can be displayed graphically by the Z score (Fig. 9). The concentration of "hot" spots (in this case, census parcels with a high number of daily trips with destination in the urban area) is represented in red, whereas the concentration of "cold" spots (census parcels with a low number of daily internal trips) is in blue. The parcels with high values are localized on the boundary between Cosenza and Rende: in fact, this zone forms a single urban structure with similar characteristics, as noted in Section 4. Instead, the parcels with low values are localized in the old town of Cosenza and in areas with low population.
Similarly, the Getis-Ord $G_i^*$ statistic applied to external trips (Fig. 10) presents concentrations of high or low values in the same zones of the urban area.
Cluster and Outlier Analysis measures the Anselin Local Moran’s I and identifies clusters of points with values similar in magnitude and clusters of points with very heterogeneous values.
A positive value for I indicates that the feature is surrounded by features with similar values. A negative value for I indicates that the feature is surrounded by features with dissimilar values. The tool also provides a Z score value for each observation. A group of adjacent features having high Z scores indicates a cluster of similarly high or low values. A low negative Z score for a feature indicates the feature is surrounded by dissimilar values. Finally, the tool provides a distinction between a statistically significant (0.05 level) cluster of high values (HH), cluster of low values (LL), outlier in which a high value is surrounded primarily by low values (HL), and outlier in which a low value is surrounded primarily by high values (LH). The Anselin Local Moran's I output can be displayed by the visualization of these four patterns of spatial association.
In Figs. 11 and 12, the patterns are represented for internal and external trips respectively. There is evident agreement between the two representations: the areas of the corresponding patterns are localized in the same places, even if their extents and shapes differ.
Comparing the output of Hot Spot Analysis and Cluster and Outlier Analysis, a certain similarity emerges. In fact, both the statistics give an indication about the localization of the hot and cold spot, which is approximately the same.
The application of the spatial association statistic to commuting trip data introduced new aspects which merit further consideration, as said in [20]. Moreover, the used measures can improve understanding of the strengths and weaknesses of the estimated models in terms of a spatial analysis. This understanding can be incorporated into improved and more comprehensive models.
## 6 Conclusions
The purpose of this paper is to investigate spatial association patterns in the distribution of daily trips made by people from home to work and study places (commuter trips). The trips have been distinguished into trips with destination in the place of residence (internal trips) and trips with destination outside the place of residence (external trips). Exploratory spatial data analysis was conducted applying both global and local techniques of spatial association. The main contribution of the ESDA is to highlight potentially interesting features in the data and to guide the modelling process.
The statistics were computed using GIS, which allows the outcomes to be estimated with automated procedures; this facilitates the application of the techniques to large data sets. In fact, the application of spatial analysis has become easier with recent advancements in computing and GIS, which have revolutionized the development of planning support systems to study and simulate the future of travel demand in urban areas.
The results showed that the spatial distribution of trips among the census parcels displays clusters of similar values and that there is spatial dependence in the data set. This means that, to model the phenomenon, it is necessary to use spatial regression models, because applying non-spatial regression models can lead to wrong results.
The work presented in this paper is a step towards a wider work regarding the case study of Cosenza-Rende. Future developments will concern the analysis of the interaction between land use and transportation systems and the development of spatial regression models, and will also comprise the supply transportation system, the localization of dwellings and economic activities, and the territorial features. Further developments will also check whether the results can be generalized to urban contexts with characteristics similar to those studied. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5834047198295593, "perplexity": 1470.1935232885382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116886.38/warc/CC-MAIN-20160428161516-00124-ip-10-239-7-51.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/164526/closed-form-expression-of-the-following-double-integral | # Closed-form expression of the following double integral
How can I find a closed-form expression for the following double integral $$\int_{0}^{\pi/4}\int_{0}^{\infty}{{\rm d}r\,{\rm d}\phi \over u_{1}^{2} + u_{2}^{2}\,r + 2\,u_{1}u_{2}\,\sqrt{r\,}\,\cos\left(\phi\right)}\ {\large ?}$$ Please help me as soon as you can.
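A quick sanity check (an added sketch, echoing the comments below): for fixed $\phi \in [0,\pi/4]$ the cosine term is bounded, so as $r \to \infty$ the integrand behaves like $\frac{1}{u_2^{2}\,r}$, and $$\int^{\infty}\frac{{\rm d}r}{u_2^{2}\,r}$$ diverges logarithmically. Hence for $u_2 \neq 0$ the inner integral, and therefore the double integral, is infinite.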
Are you sure it is convergent? The integral with respect to $r$ diverges at infinity. – Sasha Jun 29 '12 at 12:44
Are you missing a factor of like $1/r$ or $1/r^2$? – KennyTM Jun 29 '12 at 17:49
Hence the answer is: there is a closed-form expression, which is $+\infty$. – Did Jul 1 '12 at 20:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9791617393493652, "perplexity": 542.2392666299686}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115862207.44/warc/CC-MAIN-20150124161102-00186-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://cstheory.stackexchange.com/questions/38183/can-one-prove-the-discovery-of-a-p-versus-np-solution-without-actually-revealing | # Can one prove the discovery of a P versus NP solution without actually revealing it?
Suppose a person has proved that P≠NP. He wants to let the world know that he has solved the P versus NP problem but does not want to reveal that he has proved P≠NP as opposed to P=NP.
Is there any purely theoretical way to do so?
Also, is there any practical evidence he can show to back his claim? (I'm not sure if this part is on-topic.)
• She or he will have enough trouble convincing the world without trying to hide this information. – Thomas May 10 '17 at 17:29
• @Thomas Really? I thought a properly written proof shouldn't be too hard to accept. – ghosts_in_the_code May 11 '17 at 9:34
• A straightforward application of zero-knowledge proofs should do the trick. – Or Meir May 11 '17 at 22:33
• @OrMeir So what exactly will the zero knowledge proof be; that's what I'm asking – ghosts_in_the_code May 12 '17 at 11:09
• The NP statement would be "there exists a proof for P = NP or a proof for P ≠ NP". A witness for this statement can be verified in polynomial time, so it is indeed in NP. Now apply a zero-knowledge proof to this statement. – Or Meir May 12 '17 at 17:10
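The commitment half of that recipe is easy to make concrete. Here is a minimal commit-and-reveal sketch (hypothetical filename; this is only the commitment, not the zero-knowledge layer discussed above):

```python
import hashlib

def commit(proof_text: str) -> str:
    """Publish this digest now; reveal proof_text later to open the commitment."""
    return hashlib.sha256(proof_text.encode("utf-8")).hexdigest()

# Hypothetical: the prover commits to whichever proof they actually have.
proof = open("my_p_vs_np_proof.txt").read()
digest = commit(proof)
print("Public commitment:", digest)

# Later, anyone can check the revealed proof against the commitment:
assert commit(proof) == digest
```

Note that a bare hash by itself does not hide which of the two statements was proved once the proof is revealed; that hiding is exactly what the zero-knowledge wrapper is for.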
Or Meir’s comment is almost but not quite right, since it would be satisfied by a proof that P vs. NP is not independent even if the prover didn’t know which. A corrected version is “X is either the hash of a proof that P = NP or the hash of a proof that P $\ne$ NP”, where hash is SHA256, say. Running that statement through a zero knowledge proof system gives the desired evidence. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5613430738449097, "perplexity": 668.2303002617567}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413406.70/warc/CC-MAIN-20200531120339-20200531150339-00147.warc.gz"} |
http://swmath.org/software/15946 | SLDAssay
R package SLDAssay. Calculates maximum likelihood estimate, exact and asymptotic confidence intervals, and exact and asymptotic goodness of fit p-values for infectious units per million (IUPM) from serial limiting dilution assays. This package uses the likelihood equation, exact PGOF, and exact confidence intervals described in Meyers et al. (1994) <http://jcm.asm.org/content/32/3/732.full.pdf>.
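As a rough illustration of what such an estimate involves, here is a sketch of the IUPM maximum-likelihood computation under the standard single-hit Poisson model for serial dilutions. This is an independent re-derivation with made-up assay data, not the SLDAssay API:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Made-up assay: cells per well at each dilution, wells plated, wells positive
cells = np.array([1e6, 2e5, 4e4])
wells = np.array([2, 4, 6])
pos   = np.array([2, 3, 1])

def neg_log_lik(iupm):
    """Single-hit Poisson model: P(well negative) = exp(-iupm * cells / 1e6)."""
    rate = iupm * cells / 1e6
    p_pos = 1.0 - np.exp(-rate)
    return -np.sum(pos * np.log(p_pos) + (wells - pos) * (-rate))

res = minimize_scalar(neg_log_lik, bounds=(1e-6, 1e3), method="bounded")
print("IUPM MLE:", res.x)
```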
| {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8586686253547668, "perplexity": 5430.003797792509}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463608416.96/warc/CC-MAIN-20170525195400-20170525215400-00409.warc.gz"}
https://www.allaboutcircuits.com/technical-articles/bjts-after-biasing-the-small-signal-model/ | Technical Article
# BJTs after Biasing: Analyzing BJTs with a Small-Signal Model
April 10, 2018 by Robert Keim
## This article presents two circuits that can be used to analyze the small-signal behavior of a bipolar junction transistor.
We frequently use BJTs as a straightforward electrical switch (as described in my previous article on rapid analysis of BJT switch/driver circuits). These applications focus on the “large-signal” conditions of the transistor, meaning the DC currents and voltages that determine the transistor’s operating mode and the total current flowing into or out of its base, collector, and emitter.
BJTs are also capable of amplifying small-amplitude signals, and amplifier applications such as these lead us into the “small-signal” realm. This realm does not replace the large-signal conditions; rather, small-signal operation is superimposed on large-signal operation. We use large-signal conditions to bias the transistor, and the biasing conditions imposed by a given circuit influence the BJT’s small-signal behavior.
### Small-Signal Models
After the BJT has been biased, we can focus on small-signal operation, and small-signal analysis is easier when we replace the BJT with simpler circuit elements that produce functionality equivalent to that of the transistor. Just remember that these models are relevant only to small-signal operation, and furthermore, you can’t use the models until you have established the large-signal bias conditions.
#### The Hybrid-π Model
The first small-signal model that we’ll discuss is called the hybrid-π model, and it looks like this (for an NPN transistor):
As you can see, it has three terminals corresponding to the BJT’s base, collector, and emitter. The current flowing into the base is determined by the base-to-emitter voltage (VBE) and Rπ, and the collector current is generated by a current-controlled current source. Just as with a large-signal NPN, the collector current flows into the collector, the base current flows into the base, and the emitter current flows out of the emitter and is the sum of the base current and the collector current.
The collector current is equal to β times IB, which is not surprising. IB is determined by VBE and Rπ, and this is where the biasing conditions come into play:
$$R_{\pi}=\frac{\beta}{g_m}$$
$$g_m=\text{transconductance}=\frac{I_{C_{BIAS}}}{V_T}$$
So we need IB to determine IC, and we need Rπ to determine IB, and we need gm to determine Rπ, and we need ICBIAS (i.e., the large-signal collector current) to determine gm.
It is possible to reformulate the hybrid-π model so that you calculate directly from VBE to IC. If you replace β with gmRπ, you have IC = IBgmRπ = gmVBE.
#### The T Model
In some cases you might prefer to use the following alternative to the hybrid-π model:
This is called the T model. It looks quite different from the hybrid-π model, but they are both valid in all cases and will produce equal results (as long as you get the math right). With the T model, you again need to know the large-signal collector current (to calculate gm), because the resistance RE is calculated as follows:
$$R_E=\frac{\alpha}{g_m}$$
You can use the following formula to calculate the parameter α:
$$\alpha=\frac{\beta}{\beta+1}$$
As with the hybrid-π model, the T model can use either a voltage or a current as the variable that controls the current source. In the T model, the current source’s expression is either gmVBE (as shown above) or αIE:
### Using the Models
The BJT small-signal models are drop-in replacements for the BJT symbol in a circuit diagram. Once you have determined the bias conditions, you remove the BJT, insert the small-signal model, and connect the previous base, collector, and emitter nodes to the model’s base, collector, and emitter terminals.
The next step is not so obvious: you need to replace each DC voltage source with a short circuit and each DC current source with an open circuit, because this corresponds to their behavior in the context of small-signal operation. Note that a “voltage rail” (e.g., VCC, VDD) that appears in the schematic as simply a supply voltage becomes a ground connection, because the rail is actually a shorthand way of drawing a normal voltage source that has one terminal connected to ground.
At this point you have converted the circuit from large signal to small signal, and you’re ready to proceed with standard circuit-analysis procedures.
### Accounting for the Early Effect
I have an article that serves as an introduction to the Early effect if you'd like a more thorough explanation. To make a long story short, however, the Early effect refers to a phenomenon that occurs inside a BJT and causes the active-mode collector current to be affected by the collector voltage. More specifically, an increase in the collector-to-emitter voltage results in an increase in the collector current.
If you ponder the small-signal models shown above, you can see that they don’t incorporate the Early effect: the only small-signal variable that affects the collector current is the base current, the emitter current, or the base-to-emitter voltage. If we want the small-signal models to be more accurate, we need to account for the Early effect.
Fortunately, this is easily done. All we need is a resistor connected between the collector and the emitter.
This resistor represents the small-signal output resistance, which is calculated as follows:
$$R_{O_{SS}}=\frac{V_A+V_{CE_{BIAS}}}{I_{C_{BIAS}}}$$
The Early voltage (VA) will often be significantly larger than the collector-to-emitter voltage, so you can simplify this as follows:
$$R_{O_{SS}}=\frac{V_A}{I_{C_{BIAS}}}$$
The addition of this resistor makes intuitive sense: the Early effect tells us that a higher collector-to-emitter voltage will result in higher collector current, and by adding this resistor we are opening an additional current path between collector and emitter that is directly influenced by the collector-to-emitter voltage.
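As a numeric illustration (my own sketch, with made-up bias values), all of the small-signal parameters discussed above fall out of the bias point in a few lines:

```python
# Hypothetical bias point and device constants
IC_BIAS = 1e-3   # collector bias current, amps
BETA    = 100.0  # current gain
VA      = 80.0   # Early voltage, volts
VT      = 0.026  # thermal voltage at room temperature, volts

gm    = IC_BIAS / VT            # transconductance
r_pi  = BETA / gm               # hybrid-pi base resistance
alpha = BETA / (BETA + 1)
r_e   = alpha / gm              # T-model emitter resistance
r_o   = VA / IC_BIAS            # small-signal output resistance (Early effect)

print(f"gm = {gm*1e3:.1f} mS, r_pi = {r_pi:.0f} ohm, "
      f"r_e = {r_e:.1f} ohm, r_o = {r_o/1e3:.0f} kohm")
```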
### Conclusion
We briefly covered the concept of separating large-signal conditions from small-signal behavior in the context of amplifier analysis, and we looked at two circuit structures (the hybrid-π model and the T model) that correspond to the small-signal functionality of a bipolar junction transistor. After a quick explanation of how to incorporate these models into BJT circuit analysis, we discussed improved versions that use a collector-to-emitter resistor to account for the Early effect.
### Comments

Gow (April 14, 2018): Articles are simply superb, easy to understand, and have been very helpful for a beginner like me. Please keep writing.

Glenniem (April 30, 2018): Very informative, but the majority of BJT data sheets no longer provide many of these parameters. What to do then?

RK37 (April 30, 2018): The only parameter you really need is β, which is given in datasheets as hFE. If you want to incorporate small-signal output resistance, you need the Early voltage. I'll write an article on how to determine the Early voltage (it's kind of a long story). In the meantime you might be able to find an approximate value by Googling "early voltage [part number]" or something along those lines. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.743736743927002, "perplexity": 1421.59721699349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541307813.73/warc/CC-MAIN-20191215094447-20191215122447-00513.warc.gz"}
https://pballew.blogspot.com/2010/12/more-almost-binomial-distributions.html | ## Thursday, 23 December 2010
### More "Almost Binomial" Distributions
In my recent post I illustrated the extension of the binomial to a Multinomial Distribution. In a similar way, the geometric distribution and the Pascal (aka the Negative Binomial) Distribution are very much like special cases of the binomial.
I will illustrate each with a simple probability example. In a limited version of the game of "greedy pig" you roll a die as many times as you wish each turn and you add the points on the top of the die to your score for that turn...but... if you roll a one, your turn ends and you lose all the points you have earned for that round. One might inquire, what is the probability that you could roll the die n times without rolling a one. Since the probability on each roll is the same, we could handle this using the binomial (or multinomial) distribution with n trials, p=5/6, and the number of successes also equal to n. For n=5, for example, we get (using the notation established in that blog) $\binom{5}{5}\left(\tfrac{5}{6}\right)^{5}\left(\tfrac{1}{6}\right)^{0}$, which simplifies to just $(5/6)^5$.
But a slightly different question might be, what is the probability that our first failure (rolling a one) would occur on the sixth roll. This is asking for the probability that the first five rolls succeed, and then the final roll is a failure. This is the general model for a geometric distribution. The reason it is called a geometric distribution is clear if you calculate the probability of the first failure happening on the first, second, etc rolls.
| Roll | 1 | 2 | 3 | 4 |
| --- | --- | --- | --- | --- |
| Prob | $1/6$ | $5/36$ | $25/216$ | $5^3/6^4$ |
notice that each probability is the previous probability multiplied by a constant ratio of 5/6. The terms for a geometric sequence (which must sum to one to be a probability distribution......check)
In general, if the probability of failure is $q = 1-p$, then the probability of the first failure occurring on the $n$th trial is given by $p^{n-1}q$.
It often surprises students that the mean for such a distribution is 1/q where q is the probability of a failure. OK before I confuse someone.. the geometric distribution is sometimes described as the number of trials to the first success, so you may see the expected or mean value as 1/p. In any event, if the probability of an event happening (whether you call it success or failure) is p, the expected number of trials before it happens is 1/p.
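A one-line justification of that mean (added here for completeness): if $E$ is the expected number of trials until an event of per-trial probability $q$, then conditioning on the first trial gives
$$E = q\cdot 1 + p\,(1+E) = 1 + pE \quad\Longrightarrow\quad E = \frac{1}{1-p} = \frac{1}{q},$$
since with probability $q$ the event happens immediately, and with probability $p$ we start over having used one trial.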
Now if you are really clever you can figure out how to do the next problem without me, but let's walk through it anyway, (hey...it's MY blog).
Suppose instead, you could keep rolling until you had three rolls of one..... sort of "three strikes and you're out." Now what is the probability that the third strike comes on the tenth roll.
The idea, of course: a collection of nine rolls with two failures anywhere in the string, and then a third failure on the tenth roll. Getting the probability of all the possible ways to have 7 successes and 2 failures in the first nine rolls is a straight binomial (multinomial) probability problem: $\binom{9}{2}\left(\tfrac{5}{6}\right)^{7}\left(\tfrac{1}{6}\right)^{2}$.
We just multiply this by a failure on the tenth roll and we have the probability we seek. Since we already have a couple of "failures" in that $(1/6)^2$, we might as well just up it to a three and be done. The final probability is $$\binom{9}{2}\left(\tfrac{5}{6}\right)^{7}\left(\tfrac{1}{6}\right)^{3}.$$
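A quick numeric check of that value (my own addition, using only the standard library):

```python
from math import comb

p_success = 5 / 6  # probability of not rolling a one
p_fail = 1 / 6     # probability of rolling a one

# Two ones somewhere in the first nine rolls, then a third one on roll ten
prob = comb(9, 2) * p_success**7 * p_fail**3
print(prob)  # about 0.0465
```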
If you would like to experiment with these distributions, I came across a nice experimental applet here
This experiment uses the trials to k successes instead of failures, and so p and q are switched here (and it seems I could only adjust these in .05 increments). This is a nice routine and you can simulate trials by clicking on the "step" button to see how many trials it took to get three successes.
This applet is part of a nice virtual laboratory created by Kyle Siegrist of the Department of Mathematical Sciences at the University of Alabama in Huntsville. There is lots of nice stuff. See the home page here.
When we deal with integer numbers of failures this is called a Pascal Distribution, after Blaise Pascal. It can be extended to any real value of the parameter and is then called a Polya Distribution, after George Polya. This has applications for events which are very rare but related to each other, such as hurricanes. Both are special cases of the general Negative Binomial Distribution.
https://www.logicmatters.net/tyl/booknotes/shapirov/ | # Shapiro Varieties of Logic
## Shapiro: Varieties of Logic
Stewart Shapiro’s very readable short book Varieties of Logic (OUP, 2014) exhibits the author’s characteristic virtues of great clarity and a lot of learning carried lightly. I found it, though, to be uncharacteristically disappointing.
Perhaps that’s because for me, in some key respects, he was preaching to the converted. For a start, I learnt long ago from Timothy Smiley that the notion of consequence embraces a cluster of ideas. As Smiley puts it, the notion “comes with a history attached to it, and those who blithely appeal to an ‘intuitive’ or ‘pre-theoretic’ idea of consequence are likely to have got hold of just one strand in a string of diverse theories.” Debates, then, about which is the One True Notion of consequence are likely to be quite misplaced: for different purposes, in different contexts, we’ll want to emphasize and develop different strands, leading to different research programmes. As Shapiro puts it, the notion(s) of consequence can be sharpened in different ways — and taking that point seriously, he suggests, is already potentially enough to deflate some of the grand debates in the literature (e.g. about whether second-order logic is really logic).
And I’m still Quinean enough to find another of Shapiro’s themes congenial. Do we say, for example, that ‘or’ or ‘not’ mean the same for the intuitionist and the classical mathematician? Or is there a meaning-shift between the two? Shapiro argues that for certain purposes, in certain contexts, with certain interests in play, yes, we can say (if we like) that there is meaning shift; given other purposes/contexts/interests we won’t say that. The notion of meaning is maybe too useful to do without in all kinds of situations; but it is also itself too shifting, too contextually pliable, to ground any grand debate here.
Put it this way, then. I’m pretty sympathetic with Shapiro’s claims that some large-scale grand debates are actually not very interesting because not well-posed. What that means, I take it, is that we’ll in fact find the interesting stuff going on a level or two down, below the topmost heights of cloudy generality, in areas where enough pre-processing has gone on to sharpen up ideas so that questions can be well-posed.
Here’s the sort of thing I mean. Take the very interesting debate between those like Prawitz, Dummett and Tennant who see a certain conception of inference and the logical enterprise as grounding only intuitionistic logic (leaving excluded middle as a non-logical extra, whose application to a domain is to be justified, if at all, on metaphysical grounds), and those like Smiley and Rumfitt who argue that that line of thought depends on failing to treat assertion and rejection on a par as we ought to do. This debate is prosecuted between parties who have agreed (at least for present purposes) on how to sharpen up certain ideas about logic, consequence, the role of connectives, etc., but still have an argument about how the research programme should proceed.
Shapiro doesn’t mention that particular debate. Absolutely fair enough (I just plucked out something that happens to interest me!). The complaint, though, is that he doesn’t supply us with much by way of other illustrations of investigations of varieties of logic at a level or two below the most arm-waving grand debates — i.e. at the levels where, by his own account, the real action must be taking place. Hence, I suppose, my general disappointment.
Shapiro does however mention a number of times one interesting example to provide grist to our mills, namely smooth infinitesimal analysis. This, if you don’t know it, is a deviant form of infinitesimal analysis — deviant, at any rate, from the mathematical mainstream. (If you look at Nader Vakil’s recent heavy volume Real Analysis Through Modern Infinitesimals in the CUP series Encyclopedia of Mathematics and Its Applications, then you’ll find smooth analysis gets the most cursory of mentions in one footnote.) The key idea is that there are nil-potent infinitesimals — at a rough, motivational level, quantities so small their square is indeed zero, even though they are not assumed to be zero. More carefully, we have quantities $\delta$ such that $\delta^2 = 0$ and $\neg\neg(\delta = 0)$, but — because the logic is intuitionistic — we can’t assert $\delta = 0$. And then, the key assumption: it is required that for any function $f$ and number $x$, there is a unique number $f'(x)$ such that for any nil-potent $\delta$, $f(x + \delta) = f(x) + f'(x)\delta$. So looked at down at the infinitesimal level, $f$ is linear, and $f'(x)$ gives its slope at $x$ — so $f'(x)$ is the derivative of $f$.
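To see the axiom in action, here is the standard worked micro-example (added for illustration): take $f(x) = x^2$. For nilpotent $\delta$,
$$f(x+\delta) = (x+\delta)^2 = x^2 + 2x\delta + \delta^2 = x^2 + 2x\delta,$$
since $\delta^2 = 0$; comparing with $f(x) + f'(x)\delta$ and invoking the uniqueness of $f'(x)$ gives $f'(x) = 2x$, with no limits taken anywhere.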
Now that is indeed interesting. But — and here’s the rub — the internal intuitionistic logic is absolutely crucial. The usual complaint by the intuitionist is that adding the law of excluded middle unjustifiably collapses important distinctions (in particular the distinction between $latex \neg\neg P$ and $latex P$). But in the case of smooth analysis, add the law of excluded middle and the theory doesn’t just collapse (by making all the nil-potent infinitesimals identically zero) but becomes inconsistent. What are we to make of this? In particular, what can the defender of classical logic make of this?
I guess there is quite a lot to be said here. It is a nice question, for example, how much sense we can make of all this outside the topos-theoretic context where the Kock-Lawvere theory of smooth analysis had its original home. To be sure, as in John Bell’s A Primer of Infinitesimal Analysis, we can write down various axioms and principles and grind through deductions: but how much understanding ‘from the inside’ does that engender? Shapiro says just enough to pique a reader’s interest (for someone who hasn’t already come across smooth analysis), but not enough to leave them feeling they have much grip on what is going on, or to help out those who are already puzzling about the theory. And that’s a real disappointment.
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7979499697685242, "perplexity": 1309.2833591224194}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710941.43/warc/CC-MAIN-20221203212026-20221204002026-00343.warc.gz"}
https://www.ncbi.nlm.nih.gov/pubmed/11310415?dopt=Abstract | Format
Proc Nutr Soc. 2001 Feb;60(1):107-13.
# Physical activity and cancer risk.
### Author information
1. Human Muscle Metabolism Research Group, Department of Physical Education, Sports Science and Recreation Management, Loughborough University, Leicestershire, UK. [email protected]
### Abstract
Evidence is accumulating that high levels of physical activity are associated with a reduced risk of some cancers. This evidence is most consistent for colon cancer, which is reduced by 40-50% among the most active individuals, compared with the least active. The effect is evident in men and women, and appears to be independent of important confounding factors. However, there may be important interactions with body fatness; a high BMI has been reported to be associated with an increased risk of colon cancer in sedentary men but not in physically-active men. Whilst the evidence on breast cancer is less consistent, case-control studies typically suggest a reduction of 25-30% among the most active women, although several studies have found no effect. Potential mechanisms include systemic influences and others relevant only to site-specific cancers. One unifying hypothesis is that physical inactivity reduces insulin sensitivity, leading to a growth-promotional environment which may facilitate neoplasia. The non-specific immune system may be improved by physical activity, possibly through the summative effects of repeated exercise bouts. Regular exercise, even at a recreational level, probably reduces exposure to oestrogen and thus decreases the risk of breast cancer. Increased colonic peristalsis, and thus reduced bowel transit time, might partly explain the lower risk of colon cancer in active people. Physical activity emerges as one of the few modifiable risk factors for some cancers and, as such, justifies further study.
PMID: 11310415
[Indexed for MEDLINE] | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9088327288627625, "perplexity": 5256.757238583252}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00299.warc.gz"} |
http://mathhelpforum.com/pre-calculus/129558-vectors.html | 1. ## vectors
I have some questions on a paper that I am typing up, and I've looked through my text book and I cannot find this, or I just don't know where to look.
-Explain how to write a vector in terms of its magnitude and direction.
2. ## Magnitude and direction of a vector
Hello Chinnie15
Originally Posted by Chinnie15
I have some questions on a paper that I am typing up, and I've looked through my text book and I cannot find this, or I just don't know where to look.
-Explain how to write a vector in terms of its magnitude and direction.
I'm not sure that there is any particular way to write a vector in terms of its magnitude and direction.
You can describe a vector in terms of its magnitude and direction by giving several examples; for instance:
1. The displacement vector of a point $B$ from a point $A$ can be written as $\vec{AB}$ and would be described in terms of the distance $AB$ (in suitable units) and the angle that the line segment $AB$ makes, measured from a fixed direction. E.g. $AB = 5$ units in a direction making an angle of $30^\circ$ with the direction of the $x$-axis.
2. The velocity vector of a moving body can be described in terms of the speed of the body and the direction of its motion at a particular instant. E.g. a shell is fired in a north-easterly direction at $300$ m/sec at an angle of $45^\circ$ above the horizontal.
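3. (A worked conversion, added for illustration.) A displacement of magnitude $5$ at $30^\circ$ to the $x$-axis can equally be written in component form: $$\left\langle 5\cos 30^\circ,\ 5\sin 30^\circ\right\rangle = \left\langle \tfrac{5\sqrt{3}}{2},\ \tfrac{5}{2}\right\rangle,$$ so the magnitude-and-direction description and the component description carry the same information.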
... and so on. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8471003174781799, "perplexity": 129.61544527386482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171900.13/warc/CC-MAIN-20170219104611-00412-ip-10-171-10-108.ec2.internal.warc.gz"} |
http://proceedings.mlr.press/v37/yi15.html | Binary Embedding: Fundamental Limits and Fast Algorithm
Xinyang Yi, Constantine Caramanis, Eric Price;
Proceedings of the 32nd International Conference on Machine Learning, PMLR 37:2162-2170, 2015.
Abstract
Binary embedding is a nonlinear dimension reduction methodology where high dimensional data are embedded into the Hamming cube while preserving the structure of the original space. Specifically, for an arbitrary $N$ distinct points in $\mathbb{S}^{p-1}$, our goal is to encode each point using $m$-dimensional binary strings such that we can reconstruct their geodesic distance up to $\delta$ uniform distortion. Existing binary embedding algorithms either lack theoretical guarantees or suffer from running time $O(mp)$. We make three contributions: (1) we establish a lower bound that shows any binary embedding oblivious to the set of points requires $m = \Omega(\frac{1}{\delta^2}\log N)$ bits and a similar lower bound for non-oblivious embeddings into Hamming distance; (2) we propose a novel fast binary embedding algorithm with provably optimal bit complexity $m = O(\frac{1}{\delta^2}\log N)$ and near linear running time $O(p \log p)$ whenever $\log N \ll \delta\sqrt{p}$, with a slightly worse running time for larger $\log N$; (3) we also provide an analytic result about embedding a general set of points $K \subseteq \mathbb{S}^{p-1}$ with even infinite size. Our theoretical findings are supported through experiments on both synthetic and real data sets.
https://stats.stackexchange.com/questions/23143/remove-duplicates-from-training-set-for-classification?noredirect=1 | # Remove duplicates from training set for classification
Let us say I have a bunch of rows for a classification problem:
$$X_1, \ldots, X_N, Y$$
Where $X_1, ..., X_N$ are the features/predictors and $Y$ is the class the row’s feature combination belongs to.
Many feature combination and their classes are repeated in the dataset, which I am using to fit a classifier. I am just wondering if it is acceptable to remove duplicates (I basically perform a group by X1 ... XN Y in SQL)? Thanks.
PS:
This is for a binary presence only dataset where the class priors are quite skewed
No, it is not acceptable. The repetitions are what provide the weight of the evidence.
If you remove your duplicates, a four-leaf clover is as significant as a regular, three-leaf clover, since each will occur once, whereas in real life there is a four-leaf clover for every 10,000 regular clovers.
Even if your priors are "quite skewed", as you say, the purpose of the training set is to accumulate real-life experience, which you will not achieve if you lose the frequency information.
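One practical compromise, sketched here under the assumption of a pandas/scikit-learn workflow: collapse exact duplicates but keep their multiplicity as a sample weight, so no frequency information is lost.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# df holds the feature columns X1..XN plus the label column Y
df = pd.DataFrame({"X1": [0, 0, 1, 1, 1], "X2": [1, 1, 0, 0, 1],
                   "Y":  [0, 0, 1, 1, 0]})

# Collapse duplicate rows, recording how often each combination occurred
grouped = df.groupby(list(df.columns)).size().reset_index(name="count")

X = grouped[["X1", "X2"]]
y = grouped["Y"]
clf = LogisticRegression().fit(X, y, sample_weight=grouped["count"])
```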
I agree with the previous answer, but here are my reservations. It is advisable to remove duplicates when splitting samples into training and test sets for specific classifiers such as decision trees. Say 20% of your data belongs to a particular class and a quarter of those rows seep into the test set; then algorithms such as decision trees will create gateways to that class through the duplicated samples. This can give misleading results on the test set, because there is essentially a very specific gateway to the correct output.
When you deploy that classifier on completely new data, it could perform astonishingly poorly if there are no samples similar to the aforementioned 20% of samples.
Argument: One may argue that this situation points to a flawed dataset but I think this is true to real life applications.
Removing duplicates for neural networks, Bayesian models, etc., is not acceptable.
• Another feasible solution could be to weight the duplicates lower based on their frequency of occurrence. – Rakshit Kothari Aug 27 '18 at 22:26 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.26707878708839417, "perplexity": 1025.5825255524242}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540515344.59/warc/CC-MAIN-20191208230118-20191209014118-00365.warc.gz"} |
http://blog.jpolak.org/?tag=invariant-theory | # An Example Using Chevalley Restriction
Here is a classic problem of geometric invariant theory: let $G$ be a reductive linear algebraic group such as $\mathrm{GL}_n$ and let $\mathfrak{g}$ be its Lie algebra. Determine the invariant functions $k[\mathfrak{g}]^G$, where $G$ acts on $\mathfrak{g}$ via the adjoint action. This problem is motivated by the search for quotients: What is the quotient $\mathfrak{g}/G$? Here, the action of $G$ on $\mathfrak{g}$ is given by the adjoint action. More explicitly, an element $g\in G$ acts via the differentiation of $\mathrm{Int}_g$, where $\mathrm{Int}_g$ is conjugation by $g$ on $G$.
For simplicity, we will stay in the realm of varieties over an algebraically closed field $k$ of characteristic zero.
First, we should ask:
What should $\mathfrak{g}/G$ even mean? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9679450392723083, "perplexity": 71.51256843617902}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267155413.17/warc/CC-MAIN-20180918130631-20180918150631-00483.warc.gz"} |
http://openstudy.com/updates/5115a81fe4b09e16c5c82afc | ## Got Homework?
### Connect with other students for help. It's a free community.
• across
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
55 members online
• 0 viewing
## ksaimouli: differentiate

**ksaimouli:** $\frac{ dy }{ dx }=2x-y$

**PeterPan:** @ksaimouli $\frac{dy}{dx}+y=2x$

**ksaimouli:** i tried to do this [drawing]

**ksaimouli:** $\int\limits_{}^{}dy+\int\limits_{}^{}y= \int\limits_{}^{}2x dx$

**ksaimouli:** u mean this

**PeterPan:** Can't do that, the y part has no dy in it, it won't make sense.

**ksaimouli:** so how to do this

**PeterPan:** Well, start with $\frac{dy}{dx}+y=2x$ and multiply everything by $e^x$

**PeterPan:** $\large e^x\frac{dy}{dx}+e^xy=2xe^x$

**PeterPan:** Now, question... what's $\large \frac{d}{dx}ye^x$ ?

**PeterPan:** $\large \frac{d}{dx}ye^x = e^x\frac{dy}{dx} + ye^{x}$

**ksaimouli:** i did not understand how u got that ^

**PeterPan:** Implicit differentiation

**ksaimouli:** [drawing]

**ksaimouli:** hmm can u use implicit differentiation of a function which is already differentiated

**PeterPan:** You can use it on any function, as far as I know :)

**PeterPan:** So, we end up with $\large \frac{d(ye^x)}{dx}=2xe^x$

**PeterPan:** so, just bring the dx on the other side... $\large d(ye^x)=2xe^xdx$ And integrate both sides... $\large ye^x = \int\limits_{}^{}2xe^x dx$ And you're good to go. :)

**ksaimouli:** thx

**agent0smith:** @PeterPan I haven't done these in a while, but is it incorrect to just differentiate this to: $\frac{ dy }{ dx }=2x-y$, $\frac{ d^2y }{ dx^2 }=2-\frac{ dy }{ dx }$, $\frac{ d^2y }{ dx^2 } + \frac{ dy }{ dx } =2$? All the original question said was: Differentiate: $\frac{ dy }{ dx }=2x-y$

**PeterPan:** I was wondering about that, but then again, ksaimouli put in "I tried to do this", with a drawing that shows an attempt to solve it as a differential equation, so.... yeah
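For completeness (an added note, not part of the original thread): carrying out the remaining integration by parts gives
$$ye^x = \int 2xe^x\,dx = 2xe^x - 2e^x + C,$$
so the general solution is $y = 2x - 2 + Ce^{-x}$, which indeed satisfies $\frac{dy}{dx} = 2x - y$.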
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9997071027755737, "perplexity": 15606.321600452831}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931004988.25/warc/CC-MAIN-20141125155644-00088-ip-10-235-23-156.ec2.internal.warc.gz"}
http://blog.decatech.com/how-to-xgek/plus-minus-latex-cfa35c | # How To Write Plus Minus Symbol In LaTeX

The plus-or-minus sign states that a mathematical quantity of a given magnitude can take either a positive or a negative value; it indicates a choice between using the plus sign and the minus sign, with two unique solutions. It commonly appears with square roots, as in $\sqrt{625} = \pm 25$.

The usual plus-or-minus symbol is \pm, which renders as ±. If you want it with the minus on top, meaning minus-plus, it's \mp, which renders as ∓:

| Symbol | Command | Name |
| --- | --- | --- |
| ± | \pm | plus or minus |
| ∓ | \mp | minus or plus |

The minus-plus sign is used alongside the plus-minus sign to show that a negative value is to be taken where the positive value is indicated by the plus-minus sign, and vice versa: $(x \pm 1)/(x \mp 2)$ means $(x + 1)/(x - 2)$ and $(x - 1)/(x + 2)$.

Note that -- and --- are not math symbols in LaTeX; these are dashes and should be used in text. In math mode there is just one type of minus. LaTeX deals with the + and − signs in two possible ways: when two maths elements appear on either side of the sign, it is assumed to be a binary operator and some space is allocated on either side; otherwise it is treated as unary. LaTeX decides whether the sign is unary or binary and adds the proper spacing to distinguish these variants.

To type the ± character directly: hold down the ALT key and type 0177 on the keypad (Windows), or hold down the Shift and Option keys and press = (Mac). If you cannot remember a command, detexify is an applet for looking up LaTeX symbols by drawing them.

For a quantity with asymmetric uncertainties, amsmath's \substack can be used:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
$1.23\substack{+0.4 \\ -0.5}$
\end{document}
```

The words plus and minus also appear in LaTeX lengths. Many LaTeX commands take a length as an argument; all LaTeX units are two-letter abbreviations, the point is the default unit, and 1pt is the default length. A rigid length such as 10pt does not contain a plus or minus component, whereas a rubber length such as 1cm plus0.05cm minus0.01cm can contain either or both of those components.

A few related questions that come up. In Matlab plots, the symbol can be placed with the LaTeX interpreter, e.g. text(0.5,0.5, '$\pm$', 'interpreter', 'latex'); a related question asks how to place plus or minus signs as X tick labels. In GeoGebra, entering \text{3+2} into a textbox may preview as a plus but display as a minus after clicking OK; spaces can be added manually via \, and \!. For the set-difference slash, the command \setminus renders like \backslash but with a little more space in front and behind, akin to \mathbin{\backslash}.

On tooling: Overleaf seems to be the best collaborative online LaTeX editor on the market, with autocompletion, highlighting, and hundreds of templates, though it is cloud-hosted, which can be a concern for confidential documents, and collaboration requires a paid license. A self-hosted Git(lab) instance provides a robust environment for versioning and collaborative work with branching and merging, even if it lacks Overleaf's online editing tools. SharePoint, typical to Microsoft users, seems tailored to a casual user (mouse interaction) rather than a power user. Markdown (.md), a very simple language that allows you to write HTML in a shortened way, is another option for documentation; it is used on sites like Stack Overflow and GitHub.

(From the German blog Latex Kurs, Sunday, 16 September 2012: the plus-minus sign $\pm$ also exists as the minus-plus sign $\mp$, with the output: plus-minus \pm, minus-plus \mp.)
Converted to the point by a fixed ratio.Here are some less common units, plus!, \ default unit and 1pt is the command \section { } marks the plus minus latex... Very good in this context to my taste and collaborate es auch als Minusplus $! Sign in LaTeX Accepted Answer: Walter Roberson 1 Comment my name email! Latex plusminus Das plusminus Zeichen$ \pm $gibt es auch als Minusplus Zeichen$ \mp $I Comment able... Literally called the plus or minus ∓ \mp: minus or plus... detexify applet! Minus at the top, i.e., minus plus is \ pm mouse interaction than! Were self–hosted, I think I would like to obtain plus-minus sign ( ± ) values preceding a the! Or LaTeX, docs here and here: \pm is used for Plus/minus \pm gibt... To place +/- plus minus operator in text to place +/- plus sign. Used in text common units it were self–hosted, I 've only been using LaTeX for a weeks. These are dashes and should be used in text annotation of plot ( ggplot2?! 'S easy to use it to be with a minus at the top, minus-plus! ( last 30 days ) amanina abdul rahman on 15 Oct 2015 Accepted Answer: Walter Roberson on 17 2015... Approved revisions of this page, so it may not have been reviewed can used. Minus operator in text annotation of plot ( ggplot2 ) ggplot2 ), e.g plus-or-minus in! Not very good in this browser for the next time I Comment were self–hosted I... Sign is quite literally called the plus and minus are not touching What the. Only been using LaTeX for a few weeks symbol where the plus or minus sign minus-plus, ’! > FAQ > LaTeX > FAQ > LaTeX - FAQ > LaTeX > FAQ > plus!, on the market the the minus-plus sign, on the other hand, represented. A commission is billed by the home on each bet reason to a... And Spip by Nadir Soualem @ mathlinux 10pt does not contain a plus or sign. Say that -- and -- - are not touching png, jpg gif! Html in a dataframe sign in L a T E X is represented using \pm ; looks! Means a quantity of same magnitude can have either a positive or negative ( on..., I think I would like to obtain plus-minus sign minus symbol Zeichen \mp. Via \, \ unique solutions looking up LaTeX symbols by drawing them from the T X! Roberson on 17 Oct 2015 Accepted Answer: Walter Roberson 1 Comment math. ( essentially on GitHub ) LaTeX for a few weeks buy a monthly license to tailored. Called the plus or minus component any of them in one session in one session ] { }... Numerical and numerical data together in the same dataframe home > LaTeX > >... S \mp 's easy to use currently, I 'm using these lines of code I! Mean something like tools is here on Stackexchange: Git ( lab ) is my tool. Sign in LaTeX is represented using \\pm ; it looks like this: ± best collaborative tool on market. Share with note system do you mean something like square roots such as does! Square roots such as 10pt does not contain a plus or minus type of minus with the + and signs... A coffee in GeoGebra via \, \ I think I would definitively like my company provide... Usual plus or plus minus latex symbol in normal text mode installation, real-time collaboration, version control, hundreds of literally... Manually in GeoGebra via \, \ as a standard tool autocompletion, and. And numerical data together in the same dataframe and adds proper spacing to distinguish these variants 10pt does contain. Typical to Microsoft users, sharepoint seems to be with a minus at the top, i.e., plus... Next time I Comment commonly found in square roots such as 10pt does not a! And Spip by Nadir Soualem @ mathlinux Accounting symbol for plus or.... 
Christmas in July and how did it start r - Combine non numerical and numerical data in. An obvious question, I 've only been using LaTeX for a few weeks (... Simple language that allows you to write HTML in a dataframe − signs in two possible ways, e.g type. Of a new section, inside the braces is set the title them in one session package listed. Positive or negative der Ausgabe: plusminus: \pm is used for Plus/minus you,! Page, so it may not plus minus latex been reviewed LaTeX - FAQ LaTeX. Able to use it to be with a minus at the top, minus-plus... As$ \pm $gibt es auch als Minusplus Zeichen$ \mp.! Overleaf: seems to be with a minus at the top, meaning minus-plus, it ’ a! Common units barplot one column based on another column from the T E X package are listed below and. The plus or minus listed below either plus or minus character is \ pm in two ways... Two unique solutions would you consider a Manlet height range would you consider a Manlet my taste marks beginning... User ( mouse interaction ) than a power user... or LaTeX, docs here here! Or negative upload and collaborate LaTeX - FAQ > LaTeX - FAQ > LaTeX plus or character. In GeoGebra via \, \ but I would like to obtain plus-minus sign ±... This browser for the plus-minus sign - Combine non numerical and numerical data together in the same in... A mathematical quantity is either positive or a negative value ( lab ) my! Plusminus Das plusminus Zeichen $\pm$ gibt es auch als Minusplus Zeichen $\mp$ -- and -! You mean something like self–hosted online tools is here on Stackexchange: Git ( lab ) is my tool... It will give me the energy and motivation to continue this development a plusminus symbol where the plus or! An obvious question, I 've only been using LaTeX for a few weeks as an argument such \$... Continue this development and should be used on some websites like Stack Overflow or to write HTML in shortened! Consider a Manlet were self–hosted, I 'm using these lines of code I... Math symbols in LaTeX is represented using \\mp ; it looks like this:.. Template built with Bootstrap and Spip by Nadir Soualem @ mathlinux non numerical and numerical data together in same. 17 Oct 2015 Accepted Answer: Walter Roberson 1 Comment I 've only been using for... Minus component and collaborate +/- plus minus sign infront of square root marks beginning. Roberson on 17 Oct 2015 of a new section, inside the braces set. Plots, how do I place plus or minus component type a plusminus symbol where the plus minus... Type of minus HTML in a Matlab plot fixed ratio.Here are some less common.., gif, svg, pdf ) and save & share with note system do you mean something like symbol! To change the font of math operators, e.g plus-or-minus sign in LaTeX is using... Days ) amanina abdul rahman on 15 Oct 2015 Accepted Answer: Walter Roberson on 17 Oct..
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.860044538974762, "perplexity": 2855.996731734082}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154457.66/warc/CC-MAIN-20210803092648-20210803122648-00269.warc.gz"}
https://rosettacommons.org/comment/9554 | # ligand docking, how to combine some silent.out files into a silent_all.out file
ligand docking, how to combine some silent.out files into a silent_all.out file
#1
Hi, everyone!
I ran ligand docking many times and obtained several silent.out files for the same protein-ligand case. I want to combine these silent.out files into a single silent.out to make the next step of the analysis convenient. I tried to use "extract_atomtree_diffs" to generate PDBs from the several silent.out files and then combine the PDBs into a new silent.out that includes all of them. This failed because it is not supported by the next analysis. I also tried to use "combine_silent" to combine them, but again without success. So could you tell me how I can do it? I would be glad if you could help me with this. Thank you in advance!
Post Situation:
Fri, 2013-12-06 01:53
Ryhon Wang
The "simple" way of doing it is just to try concatenating the atom tree diff files together. (e.g. "cat silent1.out silent2.out silent3.out > silent_all.out" on the commandline.) If I recall correctly, this should work for atom tree diff files.
The other option is to convert the atom tree diff format files into "regular" binary silent files individually, and then combine the binary silent files. If you're running Rosetta 3.3 or later all of the (compiled) ligand docking related applications should be able to take binary silent files as input. To do the conversion (again, assuming Rosetta 3.3 or later) you can simply run the score_jd2 application with "-in:file:atom_tree_diff {inname}" and "-out:file:silent {outname} -out:file:silent_struct_type binary" flags.
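For concreteness, the conversion described above would look something like this on the command line (a sketch only: the executable suffix depends on your platform and build, and the file names here are placeholders):
score_jd2.default.linuxgccrelease -in:file:atom_tree_diff my_docking_silent.out -out:file:silent my_docking_binary.out -out:file:silent_struct_type binary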
The atom tree diff format, while compact, has deficiencies as a representation, and its use is deprecated in favor of binary silent files or collections of PDBs.
Fri, 2013-12-06 08:58
rmoretti
Thanks a lot, rmoretti!!!
I am glad to see your reply!
Thu, 2014-01-16 00:12
Ryhon Wang
Hello,
I am not sure if this is the correct topic or not for my problem, but I did not want to start a new topic for one that already existed.
Anyway, I am having trouble combining silent files. I have 1000 silent files, each containing 25 decoys.
So this is the problem:
when I extract a PDB from one of the silent files the structure is perfect, no problems.
When I use the combine_silent.default.linuxgccrelease executable to combine all the silent files into one large silent file then extract the same structure it becomes corrupted.
Is this a bug? Does it have a fix? I need all the decoys in one silent file so I can comfortably search through them.
I tried concatenating all silent files into one file, which works, and the extracted PDB is fine, BUT I get file_0001 repeated 1000 times and Rosetta will only extract the first iteration of the name. Therefore I need all my decoys to have unique names within the silent file (does that make sense)?
Please tell me how I can fix this.
Thank you
Mon, 2017-01-09 05:21
ac.research
What do you mean by "corrupted"? The combine_silent approach should work.
The issue with concatenation is, as you mention, that the structure numbers repeat. I believe (but have not tested) that the structure renumbering that occurs with combine_silent will also occur on read-in, so you should be able to use the multiple-name rename (e.g. "file_0001_2") to get the subsequent files. Another alternative is simply to do something like a silent-to-silent minimal protocol to force the renumbering (e.g. something like score_jd2 with -out:file:silent) - though that should really be what combine_silent is doing, so I'm not sure how well that would work.
The other alternative, when you do re-runs, is to use the -out:prefix or -out:suffix options to use different labels on each run, such that the structures have distinct names on the output.
Mon, 2017-01-09 09:52
rmoretti
Hi rmoretti,
Thank you for your reply.
Yes, it is strange, because I have used the combine_silent.default.linuxgccrelease executable several times, and this is the first time this issue occurs.
I have 1000 silent files; they are in binary format. I think that when I combine them all into one large silent file they get converted from binary format into the normal silent file format. I am guessing something happens during this conversion that causes the extracted PDB file to look bad (by that I mean you find impossible configurations, such as a strand going through a helix).
Is there a way to combine binary silent files and keep them in binary format?
Mon, 2017-01-09 12:54
ac.research
I think that Rosetta should automatically recognize that the particular structure can't be represented with the protein silent file format and automatically choose the binary format, but if you want to force the issue it should be possible to add -out:file:silent_struct_type binary to the combine_silent commandline to force binary format on output.
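So, putting it together (again a sketch: the file names are placeholders, and the input flag is the standard -in:file:silent):
combine_silent.default.linuxgccrelease -in:file:silent part_*.out -out:file:silent combined.out -out:file:silent_struct_type binary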
Wed, 2017-01-11 12:53
rmoretti | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5033858418464661, "perplexity": 2753.9058312399898}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103035636.10/warc/CC-MAIN-20220625125944-20220625155944-00254.warc.gz"} |
https://repository.lboro.ac.uk/articles/Effects_of_bandwidth_limitations_on_the_localized_state_distribution_calculated_from_transient_photoconductivity_data/9560429 | ## Effects of bandwidth limitations on the localized state distribution calculated from transient photoconductivity data
2009-01-16T11:11:16Z (GMT)
The possible effects of experimental bandwidth limitation on the accuracy of the energy distribution of the density of localized states (DOS) calculated from transient photoconductivity data by the Fourier transform method is examined. An argument concerning the size of missing contributions to the numerical Fourier integrals is developed. It is shown that the degree of distortion is not necessarily large even for relatively small experimental bandwidths. The density of states calculated from transient photodecay measurements in amorphous arsenic triselenide is validated by comparing with modulated photocurrent data. It is pointed out that DOS distributions calculated from transient photoconductivity data at a high photoexcitation density are valid under certain conditions. This argument is used to probe the conduction band tail in undoped a-Si:H to energies shallower than 0.1 eV below the mobility edge. It is concluded that there is a deviation in the DOS from exponential at about 0.15 eV below the mobility edge. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9420344829559326, "perplexity": 1086.402806064851}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439739046.14/warc/CC-MAIN-20200813132415-20200813162415-00562.warc.gz"} |
https://twiki.cern.ch/twiki/bin/view/Main/JetV0s?rev=20 | JetV0s (revision 20)
## Motivation and results
The first plot shows the L/K ratio, i.e. the spectra without efficiency, G3/Fluka and feed-down corrections; we find a bump at 4-8 GeV/c at 7 TeV.
The second plot shows the raw Lambda distribution; we find a bump at 4-8 GeV/c at 7 TeV with the leading-track cut, but no bump at 8 TeV.
This is the plot of the L/K ratio at 7 TeV and 8 TeV.
## Overlap Study of V0 daughters and jet track by DCA
From the former study there is a bump in the L/K ratio at 7 TeV; one possible reason is an overlap between the V0 daughters and the jet tracks.
Select the hybrid tracks and V0 candidates:
if (fTracks) { // fTracks is the list of hybrid tracks
  const Int_t Ntracks = fTracks->GetEntries();
  Int_t jetID = 0;
  for (Int_t iTracks = 0; iTracks < Ntracks; ++iTracks) {
    AliAODTrack *t = static_cast<AliAODTrack*>(fTracks->At(iTracks));
    if (!t) continue; // skip empty slots (the original "if (t) continue;" was a bug: it skipped every valid track)
    jetID = t->GetID();
    if (t->IsGlobalConstrained()) { // constrained tracks have a changed ID
      jetID = -1 - jetID;
    }
    // keep only hybrid tracks that are also a daughter of this V0
    // (pDauPos and pDauNeg are the AliAODTrack daughters of the V0)
    if (jetID != pDauPos->GetID() && jetID != pDauNeg->GetID()) continue;
    Double_t d0z0[2], covd0z0[3];
    // DCA of the track to the primary vertex
    t->PropagateToDCA(fprimVertexAOD, fBzkG, kVeryBig, d0z0, covd0z0);
    hMatchedPtDCA->Fill(t->Pt(), TMath::Abs(d0z0[0]));    // overlap vs hybrid-track pT
    hMatchedPtDCA1->Fill(pV0->Pt(), TMath::Abs(d0z0[0])); // overlap vs V0 pT
  }
} else {
  cout << "no hybrid track found" << endl;
  return;
}
The first plot shows the overlap fraction as a function of the hybrid-track pT, and the second one shows the overlap fraction as a function of the V0 candidate pT.
This plot shows the overlap fraction for the different kinds of V0s; the V0s are fully selected, meaning they pass all selection cuts.
## Efficiency comparison of 7 and 8 TeV
Jet pT > 8 $GeV/c$
## Spectra comparison
The following plots show the inclusive V0 production compared to the jet-matched V0s, with spectra normalized to the number of MB events.
From these plots we find a gap between the inclusive and the matched V0s; the next step is to figure out why the high transverse momentum V0s do not come from a jet.
Hint:
The Eta acceptance
The V0s have an eta cut of 0.75 => inclusive V0 acceptance.
In this analysis the jets have an eta cut of TPC acceptance (0.9) - jet radius (0.4) = 0.5.
For matched V0s the jets have another eta cut of V0 acceptance - jet radius = 0.35.
The eta acceptance therefore has an influence on the matched V0s.
The Single V0s
Suppose there is a V0 located at eta = 0.6 with pT = 10 GeV/c, and a jet is reconstructed at eta = 0.6; due to the eta acceptance cut on the jet we will lose this V0.
## Improved Spectra comparison
Since the jet is reconstructed only from charged tracks, applying a jet pT cut will lose some V0s from jets. The procedure is therefore:
1. After finding jets, associate every V0 with the closest highest-pT jet that is still within R, and use the sum of the jet pT and the V0 pT as the jet pT.
2. Apply the new jet pT cut (see the sketch below).
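A sketch of the matching logic in steps 1-2, in the style of the analysis snippet above (my own illustration: fJets, fJetPtCut and the DeltaR helper are hypothetical names, not taken from this analysis code):
// for each V0: find the highest-pT jet within R = 0.4 of it,
// then apply the jet-pT cut to pT(jet) + pT(V0)
AliEmcalJet *best = 0;
for (Int_t iJet = 0; iJet < fJets->GetEntries(); ++iJet) {
  AliEmcalJet *jet = static_cast<AliEmcalJet*>(fJets->At(iJet));
  if (!jet) continue;
  if (DeltaR(jet->Eta(), jet->Phi(), pV0->Eta(), pV0->Phi()) > 0.4) continue;
  if (!best || jet->Pt() > best->Pt()) best = jet; // keep the highest-pT jet in range
}
if (best && best->Pt() + pV0->Pt() > fJetPtCut) {
  // count pV0 as a matched V0, with best->Pt() + pV0->Pt() as the jet pT
}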
The following plots show the new and the old matched V0s, with spectra normalized to MB0 events.
Two remarks: the 7 TeV spectrum is higher than the 8 TeV one, and there is a strange point at 7 TeV near 6 GeV/c.
The plots show that this method contributes about 10% to the matched V0 spectra.
## Inclusive compared to no pT cut on jet
Select V0s in the eta = 0.75 acceptance and add the V0 pT to the highest-pT jet;
set the jet eta cut to 0.35;
select V0s within 0.75 that are in the jet range, with jet pT > 0.15 GeV/c.
The matching ratio is a little different between Kshort and Lambda.
## 7 GeV/c compared to no jet cut
Select V0s in the eta = 0.75 acceptance and add the V0 pT to the highest-pT jet;
set the jet eta cut to 0.35;
select V0s within 0.75 that are in the jet range, with jet pT > 0.15 GeV/c and > 7 GeV/c.
In these plots we find that the ratio between the 7 GeV/c and the 0.15 GeV/c cuts is nearly 1 when pT is higher than 7 GeV/c.
Comparing the 0.15 GeV/c cut to the inclusive spectra, the spectrum is a little lower than the inclusive one, since the
## New Analysis without fiducial cut on jet
Rerun the data sample without the fiducial cut on jets:
1. Nearly 2.3\% of the $\Vzeros$ come from an event without a jet.
2. Actually, the jet-cluster area per event only takes nearly 36.9% of the acceptance, since the multiplicity in pp collisions is small (after requiring a minimum jet $\pT$ of 0.15 $\GeVc$ to exclude ghost jets). There is a large fraction of vacant area (no jet), and a $\Vzero$ located in this area has a large probability to be a Single $\Vzero$ -- no matched jet in acceptance < 0.9.
3. Applying fiducial and $\pT$ cuts to the combined clusters introduces an acceptance cut on the $\Vzeros$; the $\Vzeros$ located at large $\eta$ have a large probability to be a Single $\Vzero$ -- a matched jet in acceptance < 0.9 but no matched jet in acceptance < 0.35.
The first case is explained by the small fraction of hard-scattering processes in those events, the second case by the small multiplicity in pp collisions, and the third one by the fiducial cut.
Discussion :
Expectation:
-- Lower $\pT$ $\Vzeros$: case one has its main effect on the lower $\pT$ part, since $\Vzeros$ generated by soft processes contribute a large fraction at lower $\pT$; the vacant area has a uniform effect on the lower part, and the vacant area gives a hint that the particles inside it are not generated by hard processes.
-- High $\pT$ $\Vzeros$: case one should have a small effect on the high $\pT$ part; the vacant area has an unknown effect at high $\pT$, but it should be less than at lower $\pT$. There are situations where a $\Vzero$ with $\pT$ higher than 7-8 $\GeVc$ should come from a jet, but 1) the $\Vzero$ takes a large fraction of the jet and the residual particles carry only a very small fraction of the $\pT$, so during the jet reconstruction step this jet has been lost, or 2) the $\Vzero$ takes a small fraction of the jet, and the jet is reconstructed at a long distance from the $\Vzero$.
\mrk{The third case will have the same effect on lower $\pT$ and higher $\pT$ if the $\Vzeros$ production has no dependence on the $\eta$ acceptance}
This plot shows the jet area fraction in each event and is used to show that there is some vacant area in each event; the average jet area fraction per event is nearly 0.369.
These three plots show the Single V0 and Matched V0 distributions without any fiducial or pT cut. They show that the Single V0 probability is large at low pT, nearly 50%, and becomes lower as the V0 pT increases.
These plots show that the fiducial cut (eta cut) on the jet causes a loss of nearly 50%, and the fourth plot shows where that 50% is.
Last is the jet pT cut: the red line uses the reconstructed jet pT as the threshold, the blue line uses the jet plus V0 pT as the threshold.
The eta dependence between inclusive and Single V0s:
from the ratio between Single V0s and inclusive V0s, there is a slope in the high-eta range.
The counts for Lambda in 4-8 GeV/c (the bump region) seem smooth.
The Lambda count at pT 10-12 GeV/c goes down to 20 counts, which is quite small, and we already know that due to the code bug we overestimate the number of Good Events by 22%.
PS (code bug): when a "bad" event was found, the previous Good Event was copied and overwrote it; this affected nearly 28% of the events, and 28% / (1+28%) = 22%.
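(A cross-check of that arithmetic, my own and not from the original page: if the bug inflates the number of recorded good events by 28%, i.e. $N_{rec} = 1.28\,N_{true}$, then the spurious fraction of the recorded sample is $0.28/1.28 \approx 0.22$, i.e. the quoted 22%.)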
After I fixed the bug, it still shows an abnormal bump in 4-8 GeV/c; I think maybe I suppress the Kshort production in jets.
Checked the bump using LHC10d and LHC10e: the bump exists.
Changed the jet radius between R = 0.4 and R = 0.2: the bump exists.
Changed the V0 eta acceptance: the bump exists.
Leading-track pT cut: introduces a bias, so it is not applied.
The jet area has a small effect on the bump.
Selected no-pileup V0s: the bump exists.
Increased the jet pT: the bump exists and the statistics become lower, so don't trust these results.
Maybe: 1. the Kshort production in jets is suppressed;
2. the V0 identification in 5-7 GeV/c has some mixing (excluding the missed V0s) => checked, the bump still exists.
I also found that the phi of the V0s in jets is not uniform.
After that I checked the ratio of Single V0s to Matched V0s; I found that the matched Kshort and Lambda differ slightly at high pT, 85% and 94%, which will increase the ratio by 10%.
The eta cut removes nearly the same fraction of V0s,
but the jet pT cut removes a different fraction of V0s, keeping 50% for Kshort and 70% for Lambda, which will increase the ratio by 40%.
So how do I deal with it?
## Jet reconstruction with jet and V0 vectors
These two plots show the inclusive and Single V0s, and also include the matched V0s with eta < 0.35 and jet pT > 7 GeV/c, and the new method with eta < 0.35 and the pT of the new jet (jet and V0s as input vectors) above 0.15 GeV/c.
The plots show that the new method with pT > 0 GeV is consistent with the inclusive V0s, and when a pT > 15 GeV/c cut is applied the result is consistent with the 7 GeV/c one.
These two plots were asked for by my supervisor: one is the signal-to-background ratio for inclusive V0s (checked against the note from LF, it is consistent),
and the other one shows the counts of the invariant mass in each sigma bin, especially for N sigma in 3-4 and 6-7.
-- YonghongZhang - 2015-06-29
Topic revision: r20 - 2015-11-13 - YonghongZhang | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8411284685134888, "perplexity": 15019.941174344813}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875145534.11/warc/CC-MAIN-20200221172509-20200221202509-00052.warc.gz"}
https://ask.sagemath.org/question/44575/a-routine-for-testing-a-conjecture/?sort=latest | # A routine for testing a conjecture
The ec numbers are defined as follows:
ec(k) = (2^k-1)*10^d + 2^(k-1) - 1
where d is the number of decimal digits of 2^(k-1) - 1 . In other words these numbers are formed by the base 10 concatenation of two consecutive Mersenne numbers, for example: 157, 12763, 40952047...
For some values of k, ec(k) is a probable prime. I found that up to k = 565,000 there is no probable prime of the form (2^k-1)*10^d + 2^(k-1) - 1 that is congruent to 6 mod 7. So I conjectured that there is no probable prime of this form congruent to 6 mod 7. Does somebody have an efficient program for Sage to test this conjecture further?
I guess you can just program it to create these numbers (and probably can use a string concatenation to do so more efficiently than multiplication?). But without some extra theory helping reduce the primality testing like we have for Mersenne numbers, it might be hard to make a test that was "efficient" in the sense you probably mean.
( 2018-12-06 17:47:39 +0100 )edit
( 2018-12-07 19:36:31 +0100 )edit
I tried the following:
sage: R = Zmod(7)
sage: for k in [2..500]:
....: a = 2^k-1
....: b = 2^(k-1)-1
....: N = ZZ('{}{}'.format(a, b))
....: if R(N) != R(6):
....: continue
....: print( "k=%s Is ec(k) prime? %s. Factorization follows:\nec(k) = %s\n"
....: % (k, N.is_prime(), N.factor()) )
....:
k=10 Is ec(k) prime? False. Factorization follows:
ec(k) = 19 * 103 * 523
k=11 Is ec(k) prime? False. Factorization follows:
ec(k) = 479 * 42737
k=14 Is ec(k) prime? False. Factorization follows:
ec(k) = 11 * 593 * 25117
k=28 Is ec(k) prime? False. Factorization follows:
ec(k) = 233 * 1607 * 716915680417
k=32 Is ec(k) prime? False. Factorization follows:
ec(k) = 131 * 4463 * 21601 * 44623 * 76213
k=49 Is ec(k) prime? False. Factorization follows:
ec(k) = 5 * 757 * 16333 * 1225015921 * 7433549000531
k=53 Is ec(k) prime? False. Factorization follows:
ec(k) = 5 * 337 * 53455188455436151040711945027
k=70 Is ec(k) prime? False. Factorization follows:
ec(k) = 109 * 839 * 75046613 * 241028036131 * 713694876516226387
k=71 Is ec(k) prime? False. Factorization follows:
ec(k) = 23 * 15737 * 65234886529801619745410789282584431073
k=74 Is ec(k) prime? False. Factorization follows:
ec(k) = 11 * 19 * 269 * 9532513 * 352463140718866450408093341421867
k=88 Is ec(k) prime? False. Factorization follows:
ec(k) = 31 * 73875972467027 * 135137137017690741456718218482342349371
k=92 Is ec(k) prime? False. Factorization follows:
ec(k) = 730315371175567 * 39625364799966331 * 1711101949753493724071011
k=109 Is ec(k) prime? False. Factorization follows:
ec(k) = 5 * 653 * 1606053961 * 10568312139584431 * 11711717200756188938696404879503826537
k=113 Is ec(k) prime? False. Factorization follows:
ec(k) = 5 * 193 * 149270993 * 24073195224569 * 29946980304751014175703201587995219695454299
k=130 Is ec(k) prime? False. Factorization follows:
ec(k) = 1129 * 959806273091 * 7986419296370382549203 * 157278660923445868781899626742322195007583
k=131 Is ec(k) prime? False. Factorization follows:
ec(k) = 51162479 * 2784303036149 * 567204394305177089 * 336916099985640327995882896303775632213517
k=148 Is ec(k) prime? False. Factorization follows:
ec(k) = 76292370683 * 46974096157407024149 * 60514961739327090714406687 * 1645269521635269788991753843968263
k=152 Is ec(k) prime? False. Factorization follows:
ec(k) = 31 * 107 * 677 * 2131 * 145361 * 173087 * 452931678706211 * 6576742625936687 * 159179030364736283121060312673245829294207
k=169 Is ec(k) prime? False. Factorization follows:
ec(k) = 5 * 67360183384144337 * 1317991479336685050851 * 1685712081595174413704015925364267420341119063462133679767171233
k=173 Is ec(k) prime? False. Factorization follows:
ec(k) = 5 * 1543 * 11287 * 1374911658072068607645715891596827336333789380835457971697269244791317892283158474011048315274579
k=190 Is ec(k) prime? False. Factorization follows:
ec(k) = 19 * 82593443886666852155734071357995610738188887427158348853883401985101228162919972298836892542211199706871473911269
and I had to stop here. (Since I did not see any sense in finding prime numbers of this "concatenated shape". This is non-structural mathematics for me.)
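Since the comments below ask specifically about probable primes and about skipping the factorization, here is a minimal sketch of my own (not part of the answer above) that uses Sage's is_pseudoprime() probable-prime test, does no factoring, and only reports would-be counterexamples to the conjecture, i.e. probable primes ec(k) congruent to 6 mod 7:
sage: def ec(k):
....:     a = 2^k - 1
....:     b = 2^(k-1) - 1
....:     return ZZ(str(a) + str(b))
....:
sage: for k in [2..1000]:  # extend the range as patience allows
....:     N = ec(k)
....:     if N % 7 == 6 and N.is_pseudoprime():
....:         print("counterexample candidate: k =", k)
....:
The cheap residue check N % 7 == 6 is done first, so the expensive pseudoprimality test only runs on the candidates that could actually violate the conjecture.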
@dan_fulea does the program check for primes or probable primes? I am looking for probable primes.
( 2018-12-11 16:11:22 +0100 )edit
@dan_fulea and what if I want to cancel the factorization?
( 2018-12-11 18:18:21 +0100 )edit | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24587471783161163, "perplexity": 6276.390444398518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178367183.21/warc/CC-MAIN-20210303165500-20210303195500-00181.warc.gz"} |
http://slideplayer.com/slide/1424008/ | # THE ATMOSPHERE.
## Presentation on theme: "THE ATMOSPHERE."— Presentation transcript:
THE ATMOSPHERE
Earth’s Atmosphere The Earth’s atmosphere is a thin layer of air that forms a protective covering around the planet. This layer of gas maintains a balance between the heat absorbed from the sun and the amount released back into space.
Earth’s Atmosphere Earth’s atmosphere is made up of a mixture of gases: 78% nitrogen, 21% oxygen, and up to 4% water vapor (variable). Other gases include argon and carbon dioxide.
Earth’s Atmosphere Earth’s atmosphere has 5 layers. Lower layers:
Troposphere and Stratosphere. Upper layers: Mesosphere, Thermosphere, Exosphere.
Troposphere This is the lowest layer of the atmosphere.
This is where we are. It contains 99% of the water vapor and 75% of all atmospheric gases.
Troposphere This begins at the surface of Earth and extends up to 10 km. All weather occurs in the troposphere.
Stratosphere This is found directly above the troposphere.
It extends from 10 km to about 50 km above Earth’s surface. Most importantly, the Ozone Layer is found in the Stratosphere.
Ozone Layer Ozone is made up of 3 atoms of oxygen (O3)
It is found at 19 km - 43 km. The ozone layer shields humans from the sun’s harmful ultraviolet radiation. UV radiation can cause skin damage and lead to skin cancer.
Ozone Layer There has been damage done to the Ozone Layer.
Chlorofluorocarbons (CFC) are compounds found in refrigerators, air conditioners, and aerosol sprays that can destroy ozone layers. This can allow more of the sun’s UV rays to reach Earth.
Mesosphere The Mesosphere extends from 50 km to 85 km.
“Meso” means middle. It is the third of five layers. Meteors can be seen when they reach the mesosphere.
Thermosphere This is the largest layer of the atmosphere.
It reaches from 85 km to 500 km. It gets its name from the high temperatures that can be found there.
Ionosphere Through the Mesosphere and Thermosphere is the Ionosphere.
This is a layer of electrically charged particles. This allows radio waves to travel across the country.
Exosphere Beyond 500 km is the Exosphere.
This is where space shuttles orbit. However, there are so few molecules that wings do not provide any guidance. Beyond the exosphere is outer space.
Atmospheric Pressure Air pressure - the measure of the force with which the air molecules push on a surface. Air pressure changes throughout the atmosphere. The atmosphere is held by a planet’s gravity.
Temperature Altitude- the height of an object above the Earth’s surface. Air temperature also changes as you increase altitude.
Temperature and Heat Temperature - a measure of the average energy of particles in motion. A high temperature means that the particles are moving fast. Heat - transfer of energy between objects at different temperatures.
Energy in the Atmosphere
Radiation - the transfer of energy as electromagnetic waves (sunlight). The radiation absorbed by land, water, and atmosphere is changed into thermal (heat) energy. Conduction - the transfer of thermal energy from one material to another by direct contact.
Water Cycle
Energy in the Atmosphere
Convection - the transfer of thermal energy by the circulation or movement of a liquid or gas. The continual process of warm air rising and cool air sinking creates a circular movement of air called a convection current.
Greenhouse effect 50% of the radiation that enters the Earth’s atmosphere is absorbed by the Earth’s surface. The Earth’s heating process, in which the gases in the atmosphere trap thermal energy, is known as the greenhouse effect. A rise in average global temperature is called global warming.
Atmospheric Pressure and Winds
Wind is moving air Wind is created by differences in air pressure. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8043323159217834, "perplexity": 1082.3403858198292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256778.29/warc/CC-MAIN-20190522083227-20190522105227-00037.warc.gz"} |
https://physics.stackexchange.com/questions/514603/work-done-while-compressing-an-ideal-gas-the-physical-significance-of-int-ma | # Work done while compressing an ideal gas (the physical significance of $\int \mathrm dp\,\mathrm dV$)
Today in our chemistry class we derived the pressure-volume work done on an ideal gas. Our assumption was that $$p_\mathrm{ext}=p_\mathrm{int}+\mathrm dp$$ so that all the time the system remains (approximately) in equilibrium with the surrounding and the process occurs very slowly (it's a reversible process). Now \begin {align} W_\mathrm{ext}&=\int p_\mathrm{ext}\,\mathrm dV\\ \Rightarrow W_\mathrm{ext}&=\int (p_\mathrm{int}+\mathrm dp)\,\mathrm dV\\ W_\mathrm{ext}&=\int p_\mathrm{int}\,\mathrm dV \end{align} (Since $$\mathrm dp\,\mathrm dV$$ is very small $$\Rightarrow \int \mathrm dp\,\mathrm dV =0$$, though it is an approximation I guess.)
Now, the question is:
• In the case of (say) pushing a book the force on the book and that on the pusher form action reaction pair hence their work shows the same energy transfer but such isn't the case here and hence their work done does not represent the same energy transfer. So what does it represent? As in non-approximate case $$W_\mathrm{ext}-W_\mathrm{int}=\int \mathrm dp\,\mathrm dV$$. What does $$\int \mathrm dp\,\mathrm dV$$ mean physically?
[Note that I am not equating the case of the book with that of the gas, but giving it as a kind of analogy with respect to which I want the answerer to compare/contrast the compressing situation]
EDIT
I posted a similar on Maths SE to realize the mathematical significance of the term $$\int \mathrm dp\,\mathrm dV$$. I got this answer over there. Though it mostly satisfies what I wanted to know but states that
The last term (I believe is referring to $$\int \Delta p\,\mathrm dV$$) is then the energy “lost” e.g. by friction, that is, it is not reversible.
Now I'm wondering how this external pressure term incorporates the frictional force.
• The product of two differentials is considered insignificant. – Chet Miller Nov 18 '19 at 12:14
• @Aaron What I was trying to( say )is that when we apply some force say $2N$ on a book to move it some distance (say) $3m$ then work done by the push force on book is $6J$ whereas the push force on us by the book (via $3^{rd}$ law) is $-6J$. Here both represent the same energy transfer (i.e., $6J$ from the pusher to the book). But in case of ideal gas this isn't so, so what does that represent? – user238497 Nov 18 '19 at 16:44
• A book is a solid. – Gaurav Nov 19 '19 at 3:17
• @Shreyansh I wasn't equalizing both the case but giving (a kind of analogy or something) with respect to which I want the answerer to compare/contrast the compressing situation. – user238497 Nov 19 '19 at 6:14
...$$W_{ext}-W_{int}=\int dPdV$$. What does $$\int dP dV$$ mean physically?
Note that in the "non-approximate" case, we have assumed that $$P_{ext}\neq P_{int}$$. More precisely $$P_{ext}-P_{int}=dP$$. Now let's assume that the ideal gas is stored in a container with a movable piston(of a finite mass $$m$$, but ignore gravity) of area $$A$$ on top. For now, let's assume that there is no friction. So to do external work, you(or rather, surroundings) are applying a pressure $$P_{ext}$$(which corresponds to a force $$F_1=P_{ext}A$$) and the gas is doing internal work by applying a pressure $$P_{int}$$(which corresponds to a force $$F_2=P_{int}A$$).
Now let's analyze the forces on the piston. The piston has an upward force of $$F_2$$ (applied by the gas) and a downward force $$F_1$$ applied by the surroundings. So in this case the net force in the downward direction is,
$$dF_{net}=m(da_{net})=F_1-F_2=P_{ext}A-P_{int}A=(P_{ext}-P_{int})A=dP×A$$
$$\therefore dK = Fds=dP(Ads)=dPdV$$
where $$dK$$ is the infinitesimal change in the kinetic energy of the piston, and $$dV=Ads$$ is the infinitesimal change in the volume.
There you have it. You see, there is an infinitesimally small(yet non-zero) net force on the piston which gives an infinitesimally small(yet non-zero) acceleration to the piston. And this infinitesimal acceleration increases the speed of the piston from $$0$$ to some infinitesimally small velocity. And thus the piston gains an infinitesimal amount of kinetic energy. And the $$\int dPdV$$ term accounts for this change in kinetic energy.
I know the last paragraph is heavily populated with "infinitesimals", but it is just to show you the insignificance of the motion of the piston. Now, what if friction had been present? In that case, the piston won't move in the first place. But if we also assume that the force due to friction is infinitesimally small, then yes, the piston would move. But this time it would have a lower value of that infinitesimal acceleration. And, also, it will lose some of its kinetic energy in the form of heat (due to frictional losses).
Summary :- The $$\int dPdV$$ term accounts for the infinitesimal change in the kinetic energy of the piston.
I hope this is what you meant by "physical interpretation".
• So have you ever handled such quantities mathematically? – user238497 Dec 9 '19 at 11:30
I have never come across any problem which requires you to compute the kinetic energy of the piston. However, there are many problems on irreversible processes where you discover the fact that $W_{int}\neq W_{ext}$. I suspect there might be problems where you would have to use this difference to compute the kinetic energy of the piston. – user243267 Dec 9 '19 at 11:34
• And neither have I pondered about this(theoretically) deeply before you asked this question. The only places where I encoutered this was while solving questions. – user243267 Dec 9 '19 at 11:35
• Do you think this might involve multi variable calculus? – user238497 Dec 9 '19 at 11:37
• @TheLastAirbender No, you will only be asked to calculate $\int \Delta P dV$ where $\Delta P$ is a finite pressure difference between the system and the surroundings and since $\int \Delta PdV$ is just a single integral, you really won't need any knowledge about multivariable calculus. The hard part is to express $\Delta P$ as a function $V$. And then obviously, you also should have luck so that the integral formed can be easily integrable. – user243267 Dec 9 '19 at 11:45
Let me try to convince you that $$\int dP\,dV$$ is almost negligible. As you have said, $$P_{ext} = P_{int} + dP$$, but what is $$dP$$ really? Well, I think it is better to regard $$dP$$ as a very small number; adding it to $$P_{int}$$ gives a value slightly bigger than $$P_{int}$$ at any moment, whatever $$P_{int}$$ is. So, in this sense, $$dP$$ is just acting as a constant. Let's see where this way of thinking about $$dP$$ leads: $$W_{ext} = \int_{V_i}^{V_f} (P_{int}+dP)dV$$ $$W_{ext} = \int_{V_i}^{V_f} P_{int}dV + \int_{V_i}^{V_f}dP\,dV$$ Now, let's just focus on the $$dP$$ part: $$X= dP\int_{V_i}^{V_f}dV$$ as $$dP$$ is constant, so $$X= dP (V_f - V_i)$$ We agreed that $$dP$$ is a very small number, and hence if we multiply it by anything finite the result will be very, very small; therefore $$X$$ is a very small number. $$W_{ext} = \int_{V_i}^{V_f} P_{int}dV + X$$ Now we can neglect $$X$$ and hence write $$W_{ext} = \int_{V_i}^{V_f} P_{int}dV = W_{int}$$. Your argument that $$\int dP\,dV$$ is negligible is quite sloppy as stated, because an integral adds many, many small pieces ($$f(x)dx$$ is a very small number since $$dx$$ is very, very small, but adding many, many of them can produce a finite result).
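To make the smallness of $$X$$ quantitative (my own remark, in the same notation): since $$|dP|$$ is bounded along the process, $$\left|\int_{V_i}^{V_f} dP\,dV\right| \le \max|dP|\,|V_f-V_i|,$$ and in the quasi-static limit $$\max|dP|\to 0$$ while $$|V_f-V_i|$$ stays fixed, so the bound, and with it $$X$$, vanishes even though the integral adds infinitely many pieces.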
Even in mechanics, when we calculate gravitational potential energy we take the applied force to be just a little more than $$mg$$ and hence calculate the work done by simply plugging in $$mg$$, even though the actual force is more than that.
I said that your argument was sloppy because it is a matter of hyperreal numbers when we can and cannot consider something negligible; your argument is quite all right if we just accept the rules of differentials.
Your exact question is what is the physical interpretation of $$\int_{V_i}^{V_f}dPdV$$
I will try to explain this without using mathematics. Suppose you have a cylinder with a piston at one end which is free to move, and the cylinder is filled with compressed gas at pressure $$P_{int}$$.
Your task here is to keep the piston stationary. You will have to apply exactly the same pressure at its other end to keep it stationary, hence maintaining the thermodynamic state of the gas at its initial condition.
$$dP$$, work done by you on the gas and work done by the gas on you are all zero in this case. This is the equilibrium state.
However, if your task was to slowly push the piston inwards further compressing the gas, you will have to increase the pressure applied by you on piston. The piston's acceleration will be dictated by how much you increased external pressure. Let's assume external pressure is increased by an amount $$\delta$$.
One assumption in your derivation is that the process occurs very slowly meaning the piston's acceleration is almost zero. Even if we assume that piston is not accelerating at all, we still need to increase the external pressure. Why? Because there is friction between piston and cylinder wall in real life scenario and additional pressure $$\delta$$ is used to overcome this friction.
And the work done by $$\delta$$ is $$\int_{V_i}^{V_f}\delta\,dV$$, and $$\delta$$ is your $$dP$$.
Therefore, $$\int_{V_i}^{V_f}dP\,dV$$ represents nothing but the energy lost to any dissipative force present in the system due to the irreversibility of the process.
Since one of the assumptions in your derivation is that the process is reversible, there is no friction, and hence $$\int_{V_i}^{V_f}\delta\,dV$$ is zero. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 63, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9596875905990601, "perplexity": 286.34350309772157}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735916.91/warc/CC-MAIN-20200805065524-20200805095524-00536.warc.gz"}
http://mathoverflow.net/questions/101474/system-of-local-coefficients-on-x-locally-constant-sheaves-and-orientation-shea | # system of local coefficients on X, locally constant sheaves and orientation sheaves
Hi,
I am trying to understand orientation sheaves. When searching for this on Google, I come across new areas such as local coefficient systems and locally constant sheaves. I realize that any system of local coefficients on X is a locally constant sheaf. But what is the relation with orientation sheaves? Which references are there to read about this?
-
Could someone retag, please? Say, some "sheaf-theory" and "at.algebraic-topology". – Anton Fonarev Jul 6 '12 at 11:00
## 1 Answer
These are purely topological notions and have nothing to do with algebraic geometry in particular.
Let $M$ for simplicity be a topological manifold of dimension $n$. Then the orientation sheaf $\mathcal{L}_{or}(M)$ is the sheafification of the presheaf $U\mapsto H_n(M,M-U;\mathbb{Z})$. It's always a locally constant sheaf with stalks equal to $\mathbb{Z}$. One immediately checks that $\mathcal{L}_{or}$ is trivial if and only if $M$ is orientable. This definition can be generalized.
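For concreteness, here is the stalk computation written out (a sketch of the standard excision argument): for $x \in M$,

$$\mathcal{L}_{or}(M)_x \;\cong\; \varinjlim_{U \ni x} H_n(M, M-U;\mathbb{Z}) \;\cong\; H_n(M, M-x;\mathbb{Z}) \;\cong\; H_n(\mathbb{R}^n, \mathbb{R}^n \setminus \{0\};\mathbb{Z}) \;\cong\; \widetilde{H}_{n-1}(S^{n-1};\mathbb{Z}) \;\cong\; \mathbb{Z}.$$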
As for the references, I'd suggest checking A. Dimca, Sheaves in Topology, or B. Iversen, Cohomology of Sheaves.
-
To be more explicit, one could say that the stalks of $L_{or}(M)$ depend only on the presheaf you defined. To see what the stalks are, we can reduce to $\mathbb{R}^n$ as $M$ is a manifold. There we can take a direct system of concentric balls. The relative homology then becomes the $n$-th homology of the sphere $S^n$, which is $\mathbb{Z}$. Hence $L_{or}(M)$ is indeed locally constant. – Yosemite Sam Jul 6 '12 at 15:09
what does the trivial orientation sheaf mean? – zatilokum Aug 1 '12 at 22:59
@zatilokum It means that this is a constant sheaf. – Anton Fonarev Aug 2 '12 at 12:04
https://www.physicsforums.com/threads/whats-1-1-equal-to.527169/ | # What's (±1)(±1) equal to?
1. Sep 4, 2011
### GreenPrint
What does (±1)(±1) equal? Is it just positive 1?
2. Sep 4, 2011
### ArcanaNoir
wouldn't it be (1)(1) = 1
(1)(-1)= -1
(-1)(1) = -1
(-1)(-1) = 1?
3. Sep 4, 2011
### ArcanaNoir
so.. plus or minus 1?
4. Sep 4, 2011
### GreenPrint
I thought it would have to be just positive one, because I thought that (±1)(±1) = (1)(1) or (-1)(-1), which both equal 1. I thought they both had to be either positive or negative at the same time, in which case (±1)(±1) = 1?
5. Sep 4, 2011
### ArcanaNoir
I don't see why they should have to be positive or negative at the same time, unless the reality of the problem dictates it.
6. Sep 4, 2011
### GreenPrint
I thought that if one wanted to distinguish them being either positive or negative at different times, you would put (±1)(-+1)
-+ is supposed to be ± rotated 180 degrees?
7. Sep 4, 2011
### gb7nash
That's the way I interpret it. Sometimes you'll see things like:
$$(5 \pm 5) \mp 10$$
where you get 0 or 10. The rule is to take the top operation to get one answer, then take the bottom operation to obtain the second answer. In the OP's problem, both answers would be 1.
8. Sep 4, 2011
### eumyang
You mean this?
$(\pm 1)(\mp 1)$
I've seen the "minus-plus" symbol before. The cosine of a sum and difference can be written in one formula like so:
$\cos (a \pm b) = \cos a \cos b \mp \sin a \sin b$
... indicating that the symbol on the RHS is different from the one on the LHS.
EDIT: gb7nash beat me to it. ;)
9. Sep 4, 2011
### ArcanaNoir
well you've lost me. forget I said anything :P
10. Sep 4, 2011
### uart
I'd say both interpretations were possible, depending on the context.
Say X was a two state random variable that could be either +1 or -1. Similarly Y is an independent two state random variable. The product XY is $(\pm 1)(\pm 1)$, but it's certainly not always +1 in this case.
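A quick enumeration makes the point (a hypothetical snippet, not part of the original thread):

```python
from itertools import product

# X and Y independent, each +1 or -1: the set of possible products is {-1, +1}.
print(sorted({x * y for x, y in product((+1, -1), repeat=2)}))  # [-1, 1]
```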
11. Sep 4, 2011
### Staff: Mentor
This would be $\pm 1$. The first factor could be either positive or negative, and so could the second factor. You can't assume (and shouldn't) that if the first factor is positive, so is the second.
12. Sep 4, 2011
### flyingpig
what about $$\mp 1$$?
13. Sep 5, 2011
### uart
I could accept the idea of "coupled" plus or minuses as a shorthand notation in some specific circumstances, eg:
$$\cos(a \pm b) = \cos a \cos b \mp \sin a \sin b$$
$$\sin(a \pm b) = \sin a \cos b \pm \cos a \sin b$$
In general however, without any specific context as in the OP, I would never consider all the ($\pm$)'s in an equation (or set of equations) to be coupled in this way.
14. Sep 5, 2011
### Staff: Mentor
In what context did you encounter this?
15. Sep 5, 2011
### Mentallic
When I used to deal with equations that involved $\pm$ that were both dependent and independent of others, I would label them with numbers such as $\pm_1, \pm_2$ for example. I think later on when I saw them being used in formal writing, they were denoted by dashes, such as what you see when dealing with derivatives, $\pm', \pm''$ etc.
http://eprint.iacr.org/2009/125/20090320:140026 | ## Cryptology ePrint Archive: Report 2009/125
A Full Key Recovery Attack on HMAC-AURORA-512
Yu Sasaki
Abstract: In this note, we present a full key recovery attack on HMAC-AURORA-512 when 512-bit secret keys are used and the MAC length is 512-bit long. Our attack requires $2^{257}$ queries and the off-line complexity is $2^{259}$ AURORA-512 operations, which is significantly less than the complexity of the exhaustive search for a 512-bit key. The attack can be carried out with a negligible amount of memory. Our attack can also recover the inner-key of HMAC-AURORA-384 with almost the same complexity as in HMAC-AURORA-512. This attack does not recover the outer-key of HMAC-AURORA-384, but universal forgery is possible by combining the inner-key recovery and 2nd-preimage attacks. Our attack exploits some weaknesses in the mode of operation.
Category / Keywords: secret-key cryptography / AURORA, DMMD, HMAC, Key recovery attack
Date: received 16 Mar 2009
Contact author: sasaki yu at lab ntt co jp
Available format(s): PDF | BibTeX Citation
Short URL: ia.cr/2009/125
https://llvm.org/doxygen/lib_2Support_2Unix_2README_8txt.html | LLVM 14.0.0git
## Variables
llvm lib Support Unix the directory structure underneath this directory could look like this
llvm lib Support Unix the directory structure underneath this directory could look like only those directories actually needing to be created should be created Also
llvm lib Support Unix the directory structure underneath this directory could look like only those directories actually needing to be created should be created further subdirectories could be created to reflect versions of the various standards For example
llvm lib Support Unix the directory structure underneath this directory could look like only those directories actually needing to be created should be created further subdirectories could be created to reflect versions of the various standards For under SUS there could be v1
llvm lib Support Unix the directory structure underneath this directory could look like only those directories actually needing to be created should be created further subdirectories could be created to reflect versions of the various standards For under SUS there could be v2
## ◆ Also
We currently generate a but we really shouldn eax ecx xorl edx divl ecx eax divl ecx movl eax ret A similar code sequence works for division We currently compile i32 v2 eax eax jo LBB1_2 atomic and others It is also currently not done for read modify write instructions It is also current not done if the OF or CF flags are needed The shift operators have the complication that when the shift count is EFLAGS is not so they can only subsume a test instruction if the shift count is known to be non zero Also
Definition at line 14 of file README.txt.
## ◆ example
This currently compiles esp xmm0 movsd esp eax eax esp ret We should use not the dag combiner This is because dagcombine2 needs to be able to see through the X86ISD::Wrapper which DAGCombine can t really do The code for turning x load into a single vector load is target independent and should be moved to the dag combiner The code for turning x load into a vector load can only handle a direct load from a global or a direct load from the stack It should be generalized to handle any load from where P can be anything The alignment inference code cannot handle loads from globals in static non mode because it doesn t look through the extra dyld stub load If you try vec_align ll without relocation you ll see what I mean We should lower which eliminates a constant pool load For example
Definition at line 15 of file README.txt.
Initial value:
===========================
This directory provides implementations of the lib/System classes that
are common to two or more variants of UNIX. For example
Definition at line 2 of file README.txt.
## ◆ this
llvm lib Support Unix the directory structure underneath this directory could look like this
Definition at line 13 of file README.txt.
## ◆ v1
llvm lib Support Unix the directory structure underneath this directory could look like only those directories actually needing to be created should be created further subdirectories could be created to reflect versions of the various standards For under SUS there could be v1
Definition at line 15 of file README.txt.
Referenced by llvm::PatternMatch::m_Shuffle().
## ◆ v2
A predicate compare being used in a select_cc should have the same peephole applied to it as a predicate compare used by a br_cc There should be no mfcr oris r5 li li lvx r4 lvx r3 vcmpeqfp v2
Definition at line 15 of file README.txt.
Referenced by llvm::PatternMatch::m_Shuffle().
to
Should compile to
that
we should consider alternate ways to model stack dependencies Lots of things could be done in WebAssemblyTargetTransformInfo cpp there are numerous optimization related hooks that can be overridden in WebAssemblyTargetLowering Instead of the OptimizeReturned which should consider preserving the returned attribute through to MachineInstrs and extending the MemIntrinsicResults pass to do this optimization on calls too That would also let the WebAssemblyPeephole pass clean up dead defs for such as it does for stores Consider implementing and or getMachineCombinerPatterns Find a clean way to fix the problem which leads to the Shrink Wrapping pass being run after the WebAssembly PEI pass When setting multiple variables to the same we currently get code like const It could be done with a smaller encoding like local tee $pop5 local$pop6 WebAssembly registers are implicitly initialized to zero Explicit zeroing is therefore often redundant and could be optimized away Small indices may use smaller encodings than large indices WebAssemblyRegColoring and or WebAssemblyRegRenumbering should sort registers according to their usage frequency to maximize the usage of smaller encodings Many cases of irreducible control flow could be transformed more optimally than via the transform in WebAssemblyFixIrreducibleControlFlow cpp It may also be worthwhile to do transforms before register particularly when duplicating to allow register coloring to be aware of the duplication WebAssemblyRegStackify could use AliasAnalysis to reorder loads and stores more aggressively WebAssemblyRegStackify is currently a greedy algorithm This means that
example
llvm lib Support Unix the directory structure underneath this directory could look like only those directories actually needing to be created should be created further subdirectories could be created to reflect versions of the various standards For example | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3500775396823883, "perplexity": 4371.398115898006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057830.70/warc/CC-MAIN-20210926053229-20210926083229-00034.warc.gz"} |
https://admetmesh.scbdd.com/explanation/index | ###### Molecular Weight
Molecular weight, hydrogen atoms included. Optimal: 100~600, based on Drug-Like Soft rule.
###### Volume
Van der Waals volume.
###### Density
Density = MW / Volume
###### nHA
Number of hydrogen bond acceptors. Sum of all O and N. Optimal: 0~12, based on Drug-Like Soft rule.
###### nHD
Number of hydrogen bond donors. Sum of all OHs and NHs. Optimal: 0~7, based on Drug-Like Soft rule.
###### nRot
Number of rotatable bonds. In some situations, amide C-N bonds are not considered because of their high rotational energy barrier. Optimal: 0~11, based on Drug-Like Soft rule.
###### nRing
Number of rings. Smallest set of smallest rings. Optimal: 0~6, based on Drug-Like Soft rule.
###### MaxRing
Number of atoms in the biggest ring. Number of atoms involved in the biggest system ring. Optimal: 0~18, based on Drug-Like Soft rule.
###### nHet
Number of heteroatoms. Number of non-carbon atoms (hydrogens included). Optimal: 1~15, based on Drug-Like Soft rule.
###### fChar
Formal charge. Optimal: -4~4, based on Drug-Like Soft rule.
###### nRig
Number of rigid bonds. Number of non-flexible bonds, in contrast to rotatable bonds. Optimal: 0~30, based on Drug-Like Soft rule.
###### Flexibility
Flexibility = nRot / nRig
###### Stereo Centers
Number of stereocenters. Optimal: ≤ 2, based on Lead-Like Soft rule.
###### TPSA
Topological polar surface area. Sum of tabulated surface contributions of polar fragments. Optimal: 0~140, based on Veber rule.
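Most of the simple descriptors above can be reproduced, at least approximately, with RDKit. A minimal sketch, assuming RDKit is installed; RDKit's implementations may differ in detail from the ones used by this service:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as a sample input

print("MW   ", Descriptors.MolWt(mol))                       # molecular weight
print("nHA  ", rdMolDescriptors.CalcNumHBA(mol))             # H-bond acceptors
print("nHD  ", rdMolDescriptors.CalcNumHBD(mol))             # H-bond donors
print("nRot ", rdMolDescriptors.CalcNumRotatableBonds(mol))  # rotatable bonds
print("nRing", rdMolDescriptors.CalcNumRings(mol))           # ring count (SSSR)
print("TPSA ", rdMolDescriptors.CalcTPSA(mol))               # polar surface area
```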
###### logS
• The logarithm of aqueous solubility value. The first step in the drug absorption process is the disintegration of the tablet or capsule, followed by the dissolution of the active drug. Low solubility is detrimental to good and complete oral absorption, and early measurement of this property is of great importance in drug discovery.
• Results interpretation: The predicted solubility of a compound is given as the logarithm of the molar concentration (log mol/L). Compounds in the range from -4 to 0.5 log mol/L will be considered proper.
###### logP
• The logarithm of the n-octanol/water distribution coefficient. log P possess a leading position with considerable impact on both membrane permeability and hydrophobic binding to macromolecules, including the target receptor as well as other proteins like plasma proteins, transporters, or metabolizing enzymes.
• Results interpretation: The predicted logP is the base-10 logarithm of the n-octanol/water partition coefficient and is dimensionless. Compounds in the range from 0 to 3 will be considered proper.
###### logD7.4
• The logarithm of the n-octanol/water distribution coefficients at pH=7.4. To exert a therapeutic effect, one drug must enter the blood circulation and then reach the site of action. Thus, an eligible drug usually needs to keep a balance between lipophilicity and hydrophilicity to dissolve in the body fluid and penetrate the biomembrane effectively. Therefore, it is important to estimate the n-octanol/water distribution coefficients at physiological pH (logD7.4) values for candidate compounds in the early stage of drug discovery.
• Results interpretation: The predicted logD7.4 is the base-10 logarithm of the n-octanol/water distribution coefficient at pH 7.4 and is dimensionless. Compounds in the range from 1 to 3 will be considered proper.
###### QED [1]
• A measure of drug-likeness based on the concept of desirability. QED is calculated by integrating the outputs of the desirability functions based on eight drug-likeness related properties, including MW, log P, NHBA, NHBD, PSA, Nrotb, the number of aromatic rings (NAr), and the number of alerts for undesirable functional groups. Here, average descriptor weights were used in the calculation of QED. The QED score is calculated by taking the geometric mean of the individual desirability functions, given by $QED=\exp\left(\frac{1}{n} \sum_{i=1}^{n} \ln d_{i}\right)$, where $d_i$ indicates the $i$-th desirability function and $n = 8$ is the number of drug-likeness related properties.
• Results interpretation: The mean QED is 0.67 for the attractive compounds, 0.49 for the unattractive compounds and 0.34 for the unattractive compounds considered too complex.
• Empirical decision: > 0.67: excellent (green); ≤ 0.67: poor (red)
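A minimal sketch using RDKit's QED implementation, which follows Bickerton et al. [1]; whether its weights match this service's exact settings is an assumption:

```python
from rdkit import Chem
from rdkit.Chem import QED

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as a sample input
print(QED.qed(mol))  # geometric mean of the eight desirability functions
```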
###### SAscore [2]
• Synthetic accessibility score is designed to estimate ease of synthesis of drug-like molecules, based on a combination of fragment contributions and a complexity penalty. The score is between 1 (easy to make) and 10 (very difficult to make). The synthetic accessibility score (SAscore) is calculated as a combination of two components: $\text{SAscore} = \text{fragmentScore} - \text{complexityPenalty}$
• Results interpretation: high SAscore: ≥ 6, difficult to synthesize; low SAscore: < 6, easy to synthesize
• Empirical decision: ≤ 6: excellent (green); > 6: poor (red)
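A sketch, assuming a standard RDKit install: the Ertl-Schuffenhauer SAscore [2] ships in RDKit's Contrib area rather than the main package, so it must be added to the path manually:

```python
import os
import sys

from rdkit import Chem, RDConfig

sys.path.append(os.path.join(RDConfig.RDContribDir, "SA_Score"))
import sascorer  # Contrib module, importable only after the path tweak above

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
print(sascorer.calculateScore(mol))  # 1 (easy to make) .. 10 (hard to make)
```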
###### Fsp3[3]
• Fsp3, the number of sp3 hybridized carbons/total carbon count, is used to determine the carbon saturation of molecules and characterize the complexity of the spatial structure of molecules. It has been demonstrated that the increased saturation measured by Fsp3 and the number of chiral centers in the molecule increase the clinical success rate, which might be related to the increased solubility, or the fact that the enhanced 3D features allow small molecules to occupy more target space.
• Results interpretation: Fsp3 ≥ 0.42 is considered a suitable value.
• Empirical decision: ≥ 0.42: excellent (green); < 0.42: poor (red)
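Fsp3 as defined above (sp3 carbons divided by total carbons) is available directly in RDKit:

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
print(Descriptors.FractionCSP3(mol))  # >= 0.42 counts as "excellent" above
```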
###### MCE-18 [4]
• MCE-18 stands for medicinal chemistry evolution in 2018, and this measure can effectively score molecules by novelty in terms of their cumulative sp3 complexity. It can effectively score structures by their novelty and current lead potential, in contrast to the simple (and in many cases false-positive) sp3 index, and is given by the following equation: $$MCE18=\left(AR+NAR+CHIRAL+SPIRO+\frac{sp^{3}+Cyc-Acyc}{1+sp^{3}}\right) \times Q^{1}$$ where AR is the presence of an aromatic or heteroaromatic ring (0 or 1), NAR is the presence of an aliphatic or a heteroaliphatic ring (0 or 1), CHIRAL is the presence of a chiral center (0 or 1), SPIRO is the presence of a spiro point (0 or 1), sp3 is the portion of sp3-hybridized carbon atoms (from 0 to 1), Cyc is the portion of cyclic carbons that are sp3 hybridized (from 0 to 1), Acyc is the portion of acyclic carbon atoms that are sp3 hybridized (from 0 to 1), and $Q^{1}$ is the normalized quadratic index.
• Results interpretation: < 45: uninteresting, trivial, old scaffolds, low degree of 3D complexity and novelty; 45~63: sufficient novelty, basically follow the trends of currently observed in medicinal chemistry; 63~78: high structural similarity to the compounds disclosed in patent records; >78: need to be inspected visually to assess their target profile and drug-likeness.
• Empirical decision: ≥ 45:excellent (green); <45: poor (red)
###### NPscore [5]
• The Natural Product-likeness score is a useful measure which can help to guide the design of new molecules toward interesting regions of chemical space which have been identified as “bioactive regions” by natural evolution. The calculation consists of molecule fragmentation, table lookup, and summation of fragment contributions.
• Results interpretation: The calculated score is typically in the range from −5 to 5. The higher the score is, the higher the probability is that the molecule is a NP.
###### Lipinski Rule [6]
• Content: MW≤500; logP≤5; Hacc≤10; Hdon≤5
• Results interpretation: If two or more properties are out of range, poor absorption or permeability is possible; one violation is acceptable.
• Empirical decision: < 2 violations: excellent (green); ≥ 2 violations: poor (red)
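A sketch of the rule as stated, using RDKit's Crippen logP as a stand-in for the predicted logP (an assumption; the service uses its own model):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

def lipinski_violations(mol):
    """Count violations of MW<=500, logP<=5, Hacc<=10, Hdon<=5."""
    return sum([
        Descriptors.MolWt(mol) > 500,
        Descriptors.MolLogP(mol) > 5,
        rdMolDescriptors.CalcNumHBA(mol) > 10,
        rdMolDescriptors.CalcNumHBD(mol) > 5,
    ])

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
print("excellent" if lipinski_violations(mol) < 2 else "poor")
```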
###### Pfizer Rule [7]
• Content: logP > 3; TPSA < 75
• Results interpretation: Compounds with a high log P (>3) and low TPSA (<75) are likely to be toxic.
• Empirical decision: two conditions satisfied: poor (red); otherwise: excellent (green)
###### GSK Rule [8]
• Content: MW ≤ 400; logP ≤ 4
• Results interpretation: Compounds satisfying the GSK rule may have a more favorable ADMET profile.
• Empirical decision: 0 violations: excellent (green); otherwise: poor (red)
###### Golden Triangle [9]
• Content: 200 ≤ MW ≤ 500; -2 ≤ logD ≤ 5
• Results interpretation: Compounds satisfying the Golden Triangle rule may have a more favourable ADMET profile.
• Empirical decision: 0 violations: excellent (green); otherwise: poor (red)
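The Pfizer, GSK, and Golden Triangle checks are simple thresholds once MW, logP, and logD are known. A sketch, again substituting RDKit's Crippen logP for both the predicted logP and logD7.4 (an approximation; they are not the same quantity):

```python
from rdkit import Chem
from rdkit.Chem import Descriptors

def pfizer_flag(logp, tpsa):
    """True means 'likely toxic' under the Pfizer rule (logP > 3 and TPSA < 75)."""
    return logp > 3 and tpsa < 75

def gsk_ok(mw, logp):
    return mw <= 400 and logp <= 4                 # GSK rule

def golden_triangle_ok(mw, logd):
    return 200 <= mw <= 500 and -2 <= logd <= 5    # Golden Triangle

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
mw, logp, tpsa = Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)
print(pfizer_flag(logp, tpsa), gsk_ok(mw, logp), golden_triangle_ok(mw, logp))
```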
###### PAINS [10]
• Pan Assay Interference Compounds (PAINS) is one of the most famous frequent-hitter filters; it comprises 480 substructures derived from the analysis of frequent hitters identified in six target-based HTS assays. By applying these filters, it is easier to screen out false-positive hits and to flag suspicious compounds in screening databases. One of the most authoritative medicinal chemistry journals, the Journal of Medicinal Chemistry, even requires authors to provide PAINS screening results for active compounds when submitting manuscripts.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
###### ALARM NMR Rule [11]
• Thiol reactive compounds. There are 75 substructures in this endpoint.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
###### BMS Rule [12]
• Undesirable, reactive compounds. There are 176 substructures in this endpoint.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
###### Chelator Rule [13]
• Chelating compounds. There are 55 substructures in this endpoint.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
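RDKit bundles the PAINS filters [10] as a FilterCatalog; a minimal sketch (the ALARM NMR, BMS, and chelator alert sets above are not assumed to ship with RDKit):

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

mol = Chem.MolFromSmiles("O=C(Nc1ccccc1)c1ccc(N=Nc2ccccc2)cc1")  # sample azo compound
if catalog.HasMatch(mol):
    print(catalog.GetFirstMatch(mol).GetDescription())  # name of the alert hit
else:
    print("no PAINS alerts")
```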
## References
• [1] Bickerton G R, Paolini G V, Besnard J, et al. Quantifying the chemical beauty of drugs[J]. Nat Chem, 2012, 4(2): 90-8.
• [2] Ertl P, Schuffenhauer A. Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions[J]. J Cheminform, 2009, 1(1): 8.
• [3] Lovering F, Bikker J, Humblet C. Escape from flatland: increasing saturation as an approach to improving clinical success[J]. J Med Chem, 2009, 52(21): 6752-6.
• [4] Ivanenkov Y A, Zagribelnyy B A, Aladinskiy V A. Are We Opening the Door to a New Era of Medicinal Chemistry or Being Collapsed to a Chemical Singularity?[J]. J Med Chem, 2019, 62(22): 10026-10043.
• [5] Ertl P, Roggo S, Schuffenhauer A. Natural product-likeness score and its application for prioritization of compound libraries[J]. J Chem Inf Model, 2008, 48(1): 68-74.
• [6] Lipinski C A, Lombardo F, Dominy B W, et al. Experimental and computational approaches to estimate solubility and permeability in drug discovery and development settings[J]. Adv Drug Deliv Rev, 2001, 46(1-3): 3-26.
• [7] Hughes J D, Blagg J, Price D A, et al. Physiochemical drug properties associated with in vivo toxicological outcomes[J]. Bioorg Med Chem Lett, 2008, 18(17): 4872-5.
• [8] Gleeson M P. Generation of a set of simple, interpretable ADMET rules of thumb[J]. J Med Chem, 2008, 51(4): 817-34.
• [9] Johnson T W, Dress K R, Edwards M. Using the Golden Triangle to optimize clearance and oral absorption[J]. Bioorg Med Chem Lett, 2009, 19(19): 5560-4.
• [10] Baell J B, Holloway G A. New substructure filters for removal of pan assay interference compounds (PAINS) from screening libraries and for their exclusion in bioassays[J]. J Med Chem, 2010, 53(7): 2719-40.
• [11] Huth J R, Mendoza R, Olejniczak E T, et al. ALARM NMR: a rapid and robust experimental method to detect reactive false positives in biochemical screens[J]. J Am Chem Soc, 2005, 127(1): 217-24.
• [12] Pearce B C, Sofia M J, Good A C, et al. An empirical process for the design of high-throughput screening deck filters[J]. J Chem Inf Model, 2006, 46(3): 1060-8.
• [13] Agrawal A, Johnson S L, Jacobsen J A, et al. Chelator fragment libraries for targeting metalloproteinases[J]. ChemMedChem, 2010, 5(2): 195-9.
###### Caco-2 Permeability
• Before an oral drug reaches the systemic circulation, it must pass through intestinal cell membranes via passive diffusion, carrier-mediated uptake or active transport processes. The human colon adenocarcinoma cell line (Caco-2), as a surrogate for the human intestinal epithelium, has been commonly used to estimate in vivo drug permeability due to its morphological and functional similarities. Thus, Caco-2 cell permeability has also been an important index for an eligible candidate drug compound.
• Results interpretation: The predicted Caco-2 permeability of a given compound is given in log cm/s. A compound is considered to have proper Caco-2 permeability if its predicted value is > -5.15 log cm/s.
• Empirical decision: > -5.15: excellent (green); otherwise: poor (red)
###### MDCK Permeability
• Madin−Darby Canine Kidney cells (MDCK) have been developed as an in vitro model for permeability screening. Its apparent permeability coefficient, Papp, is widely considered to be the in vitro gold standard for assessing the uptake efficiency of chemicals into the body. Papp values of MDCK cell lines are also used to estimate the effect of the blood-brain barrier (BBB).
• Results interpretation: The unit of predicted MDCK permeability is cm/s. A compound is considered to have high passive MDCK permeability for Papp > 20 x 10^-6 cm/s, medium permeability for 2-20 x 10^-6 cm/s, and low permeability for < 2 x 10^-6 cm/s.
• Empirical decision: > 2 x 10^-6 cm/s: excellent (green); otherwise: poor (red)
###### Pgp-inhibitor
• The inhibitor of P-glycoprotein. P-glycoprotein, also known as MDR1 or ABCB1, is a membrane protein member of the ATP-binding cassette (ABC) transporter superfamily. It is probably the most promiscuous efflux transporter, since it recognizes a number of structurally different and apparently unrelated xenobiotics; notably, many of them are also CYP3A4 substrates.
• Results interpretation: Category 0: Non-inhibitor; Category 1: Inhibitor. The output value is the probability of being Pgp-inhibitor, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
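Since this 0-0.3 / 0.3-0.7 / 0.7-1.0 color scheme recurs for most classification endpoints below, here it is as a tiny helper (illustrative only; the handling of values exactly at 0.3 and 0.7 is an assumption, since the source only gives the ranges):

```python
def traffic_light(probability):
    """Map a predicted class probability to the empirical decision colors."""
    if probability < 0.3:
        return "excellent (green)"
    if probability <= 0.7:
        return "medium (yellow)"
    return "poor (red)"
```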
###### Pgp-substrate
• As described in the Pgp-inhibitor section, modulation of P-glycoprotein mediated transport has significant pharmacokinetic implications for Pgp substrates, which may either be exploited for specific therapeutic advantages or result in contraindications.
• Results interpretation: Category 0: Non-substrate; Category 1: substrate. The output value is the probability of being Pgp-substrate, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### HIA
• Human intestinal absorption. As described above, the human intestinal absorption of an oral drug is an essential prerequisite for its apparent efficacy. What's more, the close relationship between oral bioavailability and intestinal absorption has been proven, and HIA can be seen as an alternative indicator for oral bioavailability to some extent.
• Result interpretation: A molecule with less than 30% absorption is considered to be poorly absorbed. Accordingly, molecules with a HIA > 30% were classified as HIA- (Category 0), while molecules with a HIA < 30% were classified as HIA+ (Category 1). The output value is the probability of being HIA+, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### F20%
• The human oral bioavailability 20%. For any drug administrated by the oral route, oral bioavailability is undoubtedly one of the most important pharmacokinetic parameters because it is the indicator of the efficiency of the drug delivery to the systemic circulation.
• Result interpretation: Molecules with a bioavailability ≥ 20% were classified as F20%- (Category 0), while molecules with a bioavailability < 20% were classified as F20%+ (Category 1). The output value is the probability of being F20%+, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### F30%
• The human oral bioavailability 30%. For any drug administrated by the oral route, oral bioavailability is undoubtedly one of the most important pharmacokinetic parameters because it is the indicator of the efficiency of the drug delivery to the systemic circulation.
• Result interpretation: Molecules with a bioavailability ≥ 30% were classified as F30%- (Category 0), while molecules with a bioavailability < 30% were classified as F30%+ (Category 1). The output value is the probability of being F30%+, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### PPB
• Plasma protein binding. One of the major mechanisms of drug uptake and distribution is through PPB, thus the binding of a drug to proteins in plasma has a strong influence on its pharmacodynamic behavior. PPB can directly influence the oral bioavailability because the free concentration of the drug is at stake when a drug binds to serum proteins in this process.
• Result interpretation: A compound is considered to have proper PPB if its predicted value is < 90%; drugs with high protein binding may have a low therapeutic index.
• Empirical decision: ≤ 90%: excellent (green); otherwise: poor (red).
###### VD
• Volume of distribution. The VD is a theoretical concept that connects the administered dose with the actual initial concentration present in the circulation, and it is an important parameter for describing the in vivo distribution of drugs. In practice, we can infer the distribution character of an unknown compound from its VD value, such as its degree of plasma protein binding, its distribution in body fluids, and its uptake in tissues.
• Result interpretation: The unit of predicted VD is L/kg. A compound is considered to have a proper VD if its predicted VD is in the range of 0.04-20 L/kg.
• Empirical decision: 0.04-20: excellent (green); otherwise: poor (red)
###### BBB Penetration
• Drugs that act in the CNS need to cross the blood–brain barrier (BBB) to reach their molecular target. By contrast, for drugs with a peripheral target, little or no BBB penetration might be required in order to avoid CNS side effects.
• Result interpretation: BBB penetration is assessed via logBB, the logarithm of the brain-to-blood concentration ratio. Molecules with logBB > -1 were classified as BBB+ (Category 1), while molecules with logBB ≤ -1 were classified as BBB- (Category 0). The output value is the probability of being BBB+, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### Fu
• The fraction unbound in plasma. Most drugs in plasma exist in equilibrium between an unbound state and a state bound to serum proteins. The efficacy of a given drug may be affected by the degree to which it binds proteins within blood, as the more that is bound, the less efficiently it can traverse cellular membranes or diffuse.
• Result interpretation: > 20%: high Fu; 5-20%: medium Fu; < 5%: low Fu.
• Empirical decision: ≥ 5%: excellent (green); < 5%: poor (red).
###### CYP 1A2 / 2C19 / 2C9 / 2D6 / 3A4 inhibitor / substrate
• Based on the chemical nature of biotransformation, the process of drug metabolism reactions can be divided into two broad categories: phase I (oxidative reactions) and phase II (conjugative reactions). The human cytochrome P450 family (phase I enzymes) contains 57 isozymes, and these isozymes metabolize approximately two-thirds of known drugs in humans, with 80% of this attributed to five isozymes: 1A2, 3A4, 2C9, 2C19 and 2D6. Most of the CYPs responsible for phase I reactions are concentrated in the liver.
• Result interpretation: Category 0: Non-substrate / Non-inhibitor; Category 1: substrate / inhibitor. The output value is the probability of being substrate / inhibitor, within the range of 0 to 1.
###### CL
• The clearance of a drug. Clearance is an important pharmacokinetic parameter that defines, together with the volume of distribution, the half-life, and thus the frequency of dosing of a drug.
• Result interpretation: The unit of predicted CL is ml/min/kg. > 15 ml/min/kg: high clearance; 5-15 ml/min/kg: moderate clearance; < 5 ml/min/kg: low clearance.
• Empirical decision: ≥ 5: excellent (green); < 5: poor (red).
###### T1/2
• The half-life of a drug is a hybrid concept that involves clearance and volume of distribution, and it is arguably more appropriate to have reliable estimates of these two properties instead.
• Result interpretation: Molecules with T1/2 > 3 were classified as T1/2 - (Category 0), while molecules with T1/2 ≤ 3 were classified as T1/2 + (Category 1). The output value is the probability of being T1/2+, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### hERG Blockers
• The human ether-a-go-go related gene. During cardiac depolarization and repolarization, a voltage-gated potassium channel encoded by hERG plays a major role in regulating the exchange between the cardiac action potential and the resting potential. hERG blockade may cause long QT syndrome (LQTS), arrhythmia, and Torsade de Pointes (TdP), which lead to palpitations, fainting, or even sudden death.
• Result interpretation: Molecules with IC50 more than 10 μM or less than 50% inhibition at 10 μM were classified as hERG - (Category 0), while molecules with IC50 less than 10 μM or more than 50% inhibition at 10 μM were classified as hERG+ (Category 1). The output value is the probability of being hERG+, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### H-HT
• The human hepatotoxicity. Drug-induced liver injury is of great concern for patient safety and a major cause of drug withdrawal from the market. Adverse hepatic effects in clinical trials often lead to a late and costly termination of drug development programs.
• Result interpretation: Category 0: H-HT negative(-); Category 1: H-HT positive(+). The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### DILI
• Drug-induced liver injury (DILI) has become the most common cause of safety-related drug withdrawal from the market over the past 50 years.
• Result interpretation: Category 0: DILI negative(-); Category 1: DILI positive(+). The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### AMES Toxicity
• The Ames test for mutagenicity. The mutagenic effect has a close relationship with the carcinogenicity, and it is the most widely used assay for testing the mutagenicity of compounds.
• Result interpretation: Category 0: AMES negative(-); Category 1: AMES positive(+). The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### Rat Oral Acute Toxicity
• Determination of acute toxicity in mammals (e.g. rats or mice) is one of the most important tasks for the safety evaluation of drug candidates.
• Result interpretation: Category 0: low-toxicity, > 500 mg/kg; Category 1: high-toxicity; < 500 mg/kg. The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### FDAMDD
• The maximum recommended daily dose provides an estimate of the toxic dose threshold of chemicals in humans.
• Result interpretation: Category 1: FDAMDD positive(+), ≤ 0.011 mmol/kg-bw/day; Category 0: FDAMDD negative(-), > 0.011 mmol/kg-bw/day. The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### Skin Sensitization
• Skin sensitization is a potential adverse effect for dermally applied products. The evaluation of whether a compound, that may encounter the skin, can induce allergic contact dermatitis is an important safety concern.
• Result interpretation: Category 1: Sensitizer; Category 0: Non-sensitizer. The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### Carcinogenicity
• Among various toxicological endpoints of chemical substances, carcinogenicity is of great concern because of its serious effects on human health. The carcinogenic mechanism of chemicals may be due to their ability to damage the genome or disrupt cellular metabolic processes. Many approved drugs have been identified as carcinogens in humans or animals and have been withdrawn from the market.
• Result interpretation: Category 1: carcinogens; Category 0: non-carcinogens. Chemicals are labelled as active (carcinogens) or inactive (non-carcinogens) according to their TD50 values. The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### Eye Corrosion / Irritation
• Assessing the eye irritation/corrosion (EI/EC) potential of a chemical is a necessary component of risk assessment. Cornea and conjunctiva tissues comprise the anterior surface of the eye, and hence cornea and conjunctiva tissues are directly exposed to the air and easily suffer injury by chemicals. There are several substances, such as chemicals used in manufacturing, agriculture and warfare, ocular pharmaceuticals, cosmetic products, and household products, that can cause EI or EC.
• Result interpretation: Category 1: corrosives / irritants chemicals; Category 0: non-corrosives / non-irritants chemicals. The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### Respiratory Toxicity
• Among these safety issues, respiratory toxicity has become the main cause of drug withdrawal. Drug-induced respiratory toxicity is usually underdiagnosed because it may not have distinct early signs or symptoms with commonly used medications, and it can occur with significant morbidity and mortality. Therefore, careful surveillance and treatment of respiratory toxicity is of great importance.
• Result interpretation: Category 1: respiratory toxicants; Category 0: non-respiratory toxicants. The output value is the probability of being toxic, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### Bioconcentration Factor
The bioconcentration factor BCF is defined as the ratio of the chemical concentration in biota as a result of absorption via the respiratory surface to that in water at steady state. It is used for considering secondary poisoning potential and assessing risks to human health via the food chain. The unit of BCF is log10(L/kg).
###### IGC50
48 hour Tetrahymena pyriformis IGC50 (concentration of the test chemical in water in mg/L that causes 50% growth inhibition to Tetrahymena pyriformis after 48 hours). The unit of IGC50 is −log10[(mg/L)/(1000*MW)].
###### LC50FM
96 hour fathead minnow LC50 (concentration of the test chemical in water in mg/L that causes 50% of fathead minnow to die after 96 hours). The unit of LC50FM is −log10[(mg/L)/(1000*MW)].
###### LC50DM
48 hour Daphnia magna LC50 (concentration of the test chemical in water in mg/L that causes 50% of Daphnia magna to die after 48 hours). The unit of LC50DM is −log10[(mg/L)/(1000*MW)].
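The −log10[(mg/L)/(1000*MW)] unit used for the IGC50 and LC50 endpoints above is just the negative base-10 logarithm of the molar concentration; a small converter sketch (hypothetical helper, not part of the service):

```python
import math

def neg_log10_molar(conc_mg_per_L, mol_weight_g_per_mol):
    """Convert an LC50/IGC50 in mg/L to -log10(mol/L), the unit used above."""
    molar = conc_mg_per_L / (1000.0 * mol_weight_g_per_mol)  # mg/L -> mol/L
    return -math.log10(molar)

print(neg_log10_molar(10.0, 180.16))  # e.g. 10 mg/L for a MW-180 compound -> ~4.26
```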
###### NR-AR
• Androgen receptor (AR), a nuclear hormone receptor, plays a critical role in AR-dependent prostate cancer and other androgen related diseases. Endocrine disrupting chemicals (EDCs) and their interactions with steroid hormone receptors like AR may cause disruption of normal endocrine function as well as interfere with metabolic homeostasis, reproduction, developmental and behavioral functions.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being AR agonists, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### NR-AR-LBD
• Androgen receptor (AR), a nuclear hormone receptor, plays a critical role in AR-dependent prostate cancer and other androgen related diseases. Endocrine disrupting chemicals (EDCs) and their interactions with steroid hormone receptors like AR may cause disruption of normal endocrine function as well as interfere with metabolic homeostasis, reproduction, developmental and behavioral functions.
• Result interpretation: Category 1: actives ; Category 0: inactives. Molecules that labeled 1 in this bioassay may bind to the LBD of androgen receptor. The output value is the probability of being actives, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### NR-AhR
• The Aryl hydrocarbon Receptor (AhR), a member of the family of basic helix-loop-helix transcription factors, is crucial to adaptive responses to environmental changes. AhR mediates cellular responses to environmental pollutants such as aromatic hydrocarbons through induction of phase I and II enzymes but also interacts with other nuclear receptor signaling pathways.
• Result interpretation: Category 1: actives ; Category 0: inactives. Molecules that labeled 1 may activate the aryl hydrocarbon receptor signaling pathway. The output value is the probability of being actives, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### NR-Aromatase
• Endocrine disrupting chemicals (EDCs) interfere with the biosynthesis and normal functions of steroid hormones including estrogen and androgen in the body. Aromatase catalyzes the conversion of androgen to estrogen and plays a key role in maintaining the androgen and estrogen balance in many of the EDC-sensitive organs.
• Result interpretation: Category 1: actives ; Category 0: inactives. Molecules that labeled 1 are regarded as aromatase inhibitors that could affect the balance between androgen and estrogen. The output value is the probability of being actives, within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### NR-ER
• Estrogen receptor (ER), a nuclear hormone receptor, plays an important role in development, metabolic homeostasis and reproduction. Endocrine disrupting chemicals (EDCs) and their interactions with steroid hormone receptors like ER causes disruption of normal endocrine function. Therefore, it is important to understand the effect of environmental chemicals on the ER signaling pathway.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being actives within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### NR-ER-LBD
• Estrogen receptor (ER), a nuclear hormone receptor, plays an important role in development, metabolic homeostasis and reproduction. Two subtypes of ER, ER-alpha and ER-beta have similar expression patterns with some uniqueness in both types. Endocrine disrupting chemicals (EDCs) and their interactions with steroid hormone receptors like ER causes disruption of normal endocrine function.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being actives within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### NR-PPAR-gamma
• The peroxisome proliferator-activated receptors (PPARs) are lipid-activated transcription factors of the nuclear receptor superfamily with three distinct subtypes namely PPAR alpha, PPAR delta (also called PPAR beta) and PPAR gamma (PPARg). All these subtypes heterodimerize with Retinoid X receptor (RXR) and these heterodimers regulate transcription of various genes. PPAR-gamma receptor (glitazone receptor) is involved in the regulation of glucose and lipid metabolism.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being actives within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### SR-ARE
• Oxidative stress has been implicated in the pathogenesis of a variety of diseases ranging from cancer to neurodegeneration. The antioxidant response element (ARE) signaling pathway plays an important role in the amelioration of oxidative stress. The CellSensor ARE-bla HepG2 cell line (Invitrogen) can be used for analyzing the Nrf2/antioxidant response signaling pathway. Nrf2 (NF-E2-related factor 2) and Nrf1 are transcription factors that bind to AREs and activate these genes.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being actives within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### SR-ATAD5
• ATPase family AAA domain-containing protein 5 (ATAD5). Cancer cells divide rapidly, and during every cell division they need to duplicate their genome by DNA replication. Failure to do so results in cancer cell death. Based on this concept, many chemotherapeutic agents were developed, but they have limitations such as low efficacy and severe side effects. Enhanced Level of Genome Instability Gene 1 (ELG1; human ATAD5) protein levels increase in response to various types of DNA damage.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being actives within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### SR-HSE
• Heat shock factor response element. Various chemicals, environmental and physiological stress conditions may lead to the activation of heat shock response/ unfolded protein response (HSR/UPR). There are three heat shock transcription factors (HSFs) (HSF-1, -2, and -4) mediating transcriptional regulation of the human HSR.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being actives within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### SR-MMP
• Mitochondrial membrane potential (MMP), one of the parameters for mitochondrial function, is generated by mitochondrial electron transport chain that creates an electrochemical gradient by a series of redox reactions. This gradient drives the synthesis of ATP, a crucial molecule for various cellular processes. Measuring MMP in living cells is commonly used to assess the effect of chemicals on mitochondrial function; decreases in MMP can be detected using lipophilic cationic fluorescent dyes.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being actives within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### SR-p53
• p53, a tumor suppressor protein, is activated following cellular insult, including DNA damage and other cellular stresses. The activation of p53 regulates cell fate by inducing DNA repair, cell cycle arrest, apoptosis, or cellular senescence. The activation of p53, therefore, is a good indicator of DNA damage and other cellular stresses.
• Result interpretation: Category 1: actives ; Category 0: inactives. The output value is the probability of being actives within the range of 0 to 1.
• Empirical decision: 0-0.3: excellent (green); 0.3-0.7: medium (yellow); 0.7-1.0(++): poor (red)
###### Acute Toxicity Rule
• Molecules containing these substructures may cause acute toxicity during oral administration. There are 20 substructures in this endpoint.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
###### Genotoxic Carcinogenicity Rule
• Molecules containing these substructures may cause carcinogenicity or mutagenicity through genotoxic mechanisms.There are 117 substructures in this endpoint.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
###### NonGenotoxic Carcinogenicity Rule
• Molecules containing these substructures may cause carcinogenicity through nongenotoxic mechanisms. There are 23 substructures in this endpoint.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
###### Skin Sensitization Rule
• Molecules containing these substructures may cause skin irritation. There are 155 substructures in this endpoint.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
###### Aquatic Toxicity Rule
• Molecules containing these substructures may cause toxicity to liquid(water). There are 99 substructures in this endpoint.
• Results interpretation: If the number of alerts is not zero, the users could check the substructures by the DETAIL button.
http://www.cl.cam.ac.uk/research/hvg/Isabelle/dist/library/HOL/HOL-Multivariate_Analysis/Topology_Euclidean_Space.html | # Theory Topology_Euclidean_Space
theory Topology_Euclidean_Space
imports Countable_Set Glbs FuncSet Linear_Algebra Norm_Arith
(*  title:      HOL/Library/Topology_Euclidian_Space.thy
    Author:     Amine Chaieb, University of Cambridge
    Author:     Robert Himmelmann, TU Muenchen
    Author:     Brian Huffman, Portland State University
*)

header {* Elementary topology in Euclidean space. *}

theory Topology_Euclidean_Space
imports
  Complex_Main
  "~~/src/HOL/Library/Countable_Set"
  "~~/src/HOL/Library/Glbs"
  "~~/src/HOL/Library/FuncSet"
  Linear_Algebra
  Norm_Arith
begin

lemma dist_0_norm:
  fixes x :: "'a::real_normed_vector"
  shows "dist 0 x = norm x"
  unfolding dist_norm by simp

lemma dist_double: "dist x y < d / 2 ==> dist x z < d / 2 ==> dist y z < d"
  using dist_triangle[of y z x] by (simp add: dist_commute)

(* LEGACY *)
lemma lim_subseq: "subseq r ==> s ----> l ==> (s o r) ----> l"
  by (rule LIMSEQ_subseq_LIMSEQ)

lemmas real_isGlb_unique = isGlb_unique[where 'a=real]

lemma countable_PiE:
  "finite I ==> (!!i. i ∈ I ==> countable (F i)) ==> countable (PiE I F)"
  by (induct I arbitrary: F rule: finite_induct) (auto simp: PiE_insert_eq)

lemma Lim_within_open:
  fixes f :: "'a::topological_space => 'b::topological_space"
  shows "a ∈ S ==> open S ==> (f ---> l)(at a within S) <-> (f ---> l)(at a)"
  by (fact tendsto_within_open)

lemma continuous_on_union:
  "closed s ==> closed t ==> continuous_on s f ==> continuous_on t f ==>
    continuous_on (s ∪ t) f"
  by (fact continuous_on_closed_Un)

lemma continuous_on_cases:
  "closed s ==> closed t ==> continuous_on s f ==> continuous_on t g ==>
    ∀x. (x∈s ∧ ¬ P x) ∨ (x ∈ t ∧ P x) --> f x = g x ==>
    continuous_on (s ∪ t) (λx. if P x then f x else g x)"
  by (rule continuous_on_If) auto


subsection {* Topological Basis *}

context topological_space
begin

definition "topological_basis B <->
  (∀b∈B. open b) ∧ (∀x. open x --> (∃B'. B' ⊆ B ∧ \<Union>B' = x))"

lemma topological_basis:
  "topological_basis B <-> (∀x. open x <-> (∃B'. B' ⊆ B ∧ \<Union>B' = x))"
  unfolding topological_basis_def
  apply safe
  apply fastforce
  apply fastforce
  apply (erule_tac x="x" in allE)
  apply simp
  apply (rule_tac x="{x}" in exI)
  apply auto
  done

lemma topological_basis_iff:
  assumes "!!B'. B' ∈ B ==> open B'"
  shows "topological_basis B <-> (∀O'. open O' --> (∀x∈O'. ∃B'∈B. x ∈ B' ∧ B' ⊆ O'))"
    (is "_ <-> ?rhs")
proof safe
  fix O' and x::'a
  assume H: "topological_basis B" "open O'" "x ∈ O'"
  then have "(∃B'⊆B. \<Union>B' = O')" by (simp add: topological_basis_def)
  then obtain B' where "B' ⊆ B" "O' = \<Union>B'" by auto
  then show "∃B'∈B. x ∈ B' ∧ B' ⊆ O'" using H by auto
next
  assume H: ?rhs
  show "topological_basis B"
    using assms unfolding topological_basis_def
  proof safe
    fix O' :: "'a set"
    assume "open O'"
    with H obtain f where "∀x∈O'. f x ∈ B ∧ x ∈ f x ∧ f x ⊆ O'"
      by (force intro: bchoice simp: Bex_def)
    then show "∃B'⊆B. \<Union>B' = O'"
      by (auto intro: exI[where x="{f x |x. x ∈ O'}"])
  qed
qed

lemma topological_basisI:
  assumes "!!B'. B' ∈ B ==> open B'"
    and "!!O' x. open O' ==> x ∈ O' ==> ∃B'∈B. x ∈ B' ∧ B' ⊆ O'"
  shows "topological_basis B"
  using assms by (subst topological_basis_iff) auto

lemma topological_basisE:
  fixes O'
  assumes "topological_basis B"
    and "open O'"
    and "x ∈ O'"
  obtains B' where "B' ∈ B" "x ∈ B'" "B' ⊆ O'"
proof atomize_elim
  from assms have "!!B'. B'∈B ==> open B'"
    by (simp add: topological_basis_def)
  with topological_basis_iff assms
  show "∃B'. B' ∈ B ∧ x ∈ B' ∧ B' ⊆ O'"
    using assms by (simp add: Bex_def)
qed

lemma topological_basis_open:
  assumes "topological_basis B"
    and "X ∈ B"
  shows "open X"
  using assms by (simp add: topological_basis_def)

lemma topological_basis_imp_subbasis:
  assumes B: "topological_basis B"
  shows "open = generate_topology B"
proof (intro ext iffI)
  fix S :: "'a set"
  assume "open S"
  with B obtain B' where "B' ⊆ B" "S = \<Union>B'"
    unfolding topological_basis_def by blast
  then show "generate_topology B S"
    by (auto intro: generate_topology.intros dest: topological_basis_open)
next
  fix S :: "'a set"
  assume "generate_topology B S"
  then show "open S"
    by induct (auto dest: topological_basis_open[OF B])
qed

lemma basis_dense:
  fixes B :: "'a set set"
    and f :: "'a set => 'a"
  assumes "topological_basis B"
    and choosefrom_basis: "!!B'. B' ≠ {} ==> f B' ∈ B'"
  shows "(∀X. open X --> X ≠ {} --> (∃B' ∈ B. f B' ∈ X))"
proof (intro allI impI)
  fix X :: "'a set"
  assume "open X" and "X ≠ {}"
  from topological_basisE[OF `topological_basis B` `open X` choosefrom_basis[OF `X ≠ {}`]]
  guess B' . note B' = this
  then show "∃B'∈B. f B' ∈ X"
    by (auto intro!: choosefrom_basis)
qed

end

lemma topological_basis_prod:
  assumes A: "topological_basis A"
    and B: "topological_basis B"
  shows "topological_basis ((λ(a, b). a × b) ` (A × B))"
  unfolding topological_basis_def
proof (safe, simp_all del: ex_simps add: subset_image_iff ex_simps(1)[symmetric])
  fix S :: "('a × 'b) set"
  assume "open S"
  then show "∃X⊆A × B. (\<Union>(a,b)∈X. a × b) = S"
  proof (safe intro!: exI[of _ "{x∈A × B. fst x × snd x ⊆ S}"])
    fix x y
    assume "(x, y) ∈ S"
    from open_prod_elim[OF `open S` this]
    obtain a b where a: "open a" "x ∈ a" and b: "open b" "y ∈ b" and "a × b ⊆ S"
      by (metis mem_Sigma_iff)
    moreover from topological_basisE[OF A a] guess A0 .
    moreover from topological_basisE[OF B b] guess B0 .
    ultimately show "(x, y) ∈ (\<Union>(a, b)∈{X ∈ A × B. fst X × snd X ⊆ S}. a × b)"
      by (intro UN_I[of "(A0, B0)"]) auto
  qed auto
qed (metis A B topological_basis_open open_Times)


subsection {* Countable Basis *}

locale countable_basis =
  fixes B :: "'a::topological_space set set"
  assumes is_basis: "topological_basis B"
    and countable_basis: "countable B"
begin

lemma open_countable_basis_ex:
  assumes "open X"
  shows "∃B' ⊆ B. X = Union B'"
  using assms countable_basis is_basis
  unfolding topological_basis_def by blast

lemma open_countable_basisE:
  assumes "open X"
  obtains B' where "B' ⊆ B" "X = Union B'"
  using assms open_countable_basis_ex
  by (atomize_elim) simp

lemma countable_dense_exists:
  "∃D::'a set. countable D ∧ (∀X. open X --> X ≠ {} --> (∃d ∈ D. d ∈ X))"
proof -
  let ?f = "(λB'. SOME x. x ∈ B')"
  have "countable (?f ` B)" using countable_basis by simp
  with basis_dense[OF is_basis, of ?f] show ?thesis
    by (intro exI[where x="?f ` B"]) (metis (mono_tags) all_not_in_conv imageI someI)
qed

lemma countable_dense_setE:
  obtains D :: "'a set"
  where "countable D" "!!X. open X ==> X ≠ {} ==> ∃d ∈ D. d ∈ X"
  using countable_dense_exists by blast

end

lemma (in first_countable_topology) first_countable_basisE:
  obtains A where "countable A" "!!a. a ∈ A ==> x ∈ a" "!!a. a ∈ A ==> open a"
    "!!S. open S ==> x ∈ S ==> (∃a∈A. a ⊆ S)"
  using first_countable_basis[of x]
  apply atomize_elim
  apply (elim exE)
  apply (rule_tac x="range A" in exI)
  apply auto
  done

lemma (in first_countable_topology) first_countable_basis_Int_stableE:
  obtains A where "countable A" "!!a. a ∈ A ==> x ∈ a" "!!a. a ∈ A ==> open a"
    "!!S. open S ==> x ∈ S ==> (∃a∈A. a ⊆ S)"
    "!!a b. a ∈ A ==> b ∈ A ==> a ∩ b ∈ A"
proof atomize_elim
  from first_countable_basisE[of x] guess A' .
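  (* Editorial gloss, not part of the original proof script: the family A
     defined next collects all intersections of finitely many members of the
     countable neighbourhood basis A'.  There are only countably many finite
     sets of naturals, so A stays countable, and the intersection of two
     finite intersections is again one -- which is exactly the Int-stability
     this lemma adds over first_countable_basisE. *)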
  note A' = this
  def A ≡ "(λN. \<Inter>((λn. from_nat_into A' n) ` N)) ` (Collect finite::nat set set)"
  then show "∃A. countable A ∧ (∀a. a ∈ A --> x ∈ a) ∧ (∀a. a ∈ A --> open a) ∧
    (∀S. open S --> x ∈ S --> (∃a∈A. a ⊆ S)) ∧ (∀a b. a ∈ A --> b ∈ A --> a ∩ b ∈ A)"
  proof (safe intro!: exI[where x=A])
    show "countable A"
      unfolding A_def by (intro countable_image countable_Collect_finite)
    fix a
    assume "a ∈ A"
    then show "x ∈ a" "open a"
      using A'(4)[OF open_UNIV] by (auto simp: A_def intro: A' from_nat_into)
  next
    let ?int = "λN. \<Inter>(from_nat_into A' ` N)"
    fix a b
    assume "a ∈ A" "b ∈ A"
    then obtain N M where "a = ?int N" "b = ?int M" "finite (N ∪ M)"
      by (auto simp: A_def)
    then show "a ∩ b ∈ A"
      by (auto simp: A_def intro!: image_eqI[where x="N ∪ M"])
  next
    fix S
    assume "open S" "x ∈ S"
    then obtain a where a: "a∈A'" "a ⊆ S" using A' by blast
    then show "∃a∈A. a ⊆ S" using a A'
      by (intro bexI[where x=a])
         (auto simp: A_def intro: image_eqI[where x="{to_nat_on A' a}"])
  qed
qed

lemma (in topological_space) first_countableI:
  assumes "countable A"
    and 1: "!!a. a ∈ A ==> x ∈ a" "!!a. a ∈ A ==> open a"
    and 2: "!!S. open S ==> x ∈ S ==> ∃a∈A. a ⊆ S"
  shows "∃A::nat => 'a set. (∀i. x ∈ A i ∧ open (A i)) ∧ (∀S. open S ∧ x ∈ S --> (∃i. A i ⊆ S))"
proof (safe intro!: exI[of _ "from_nat_into A"])
  fix i
  have "A ≠ {}" using 2[of UNIV] by auto
  show "x ∈ from_nat_into A i" "open (from_nat_into A i)"
    using range_from_nat_into_subset[OF `A ≠ {}`] 1 by auto
next
  fix S
  assume "open S" "x∈S"
  from 2[OF this] show "∃i. from_nat_into A i ⊆ S"
    using subset_range_from_nat_into[OF `countable A`] by auto
qed

instance prod :: (first_countable_topology, first_countable_topology) first_countable_topology
proof
  fix x :: "'a × 'b"
  from first_countable_basisE[of "fst x"] guess A :: "'a set set" . note A = this
  from first_countable_basisE[of "snd x"] guess B :: "'b set set" . note B = this
  show "∃A::nat => ('a × 'b) set. (∀i. x ∈ A i ∧ open (A i)) ∧ (∀S. open S ∧ x ∈ S --> (∃i. A i ⊆ S))"
  proof (rule first_countableI[of "(λ(a, b). a × b) ` (A × B)"], safe)
    fix a b
    assume x: "a ∈ A" "b ∈ B"
    with A(2, 3)[of a] B(2, 3)[of b] show "x ∈ a × b" and "open (a × b)"
      unfolding mem_Times_iff by (auto intro: open_Times)
  next
    fix S
    assume "open S" "x ∈ S"
    from open_prod_elim[OF this] guess a' b' . note a'b' = this
    moreover from a'b' A(4)[of a'] B(4)[of b']
    obtain a b where "a ∈ A" "a ⊆ a'" "b ∈ B" "b ⊆ b'" by auto
    ultimately show "∃a∈(λ(a, b). a × b) ` (A × B). a ⊆ S"
      by (auto intro!: bexI[of _ "a × b"] bexI[of _ a] bexI[of _ b])
  qed (simp add: A B)
qed

class second_countable_topology = topological_space +
  assumes ex_countable_subbasis: "∃B::'a::topological_space set set. countable B ∧ open = generate_topology B"
begin

lemma ex_countable_basis: "∃B::'a set set. countable B ∧ topological_basis B"
proof -
  from ex_countable_subbasis obtain B where B: "countable B" "open = generate_topology B"
    by blast
  let ?B = "Inter ` {b. finite b ∧ b ⊆ B}"
  show ?thesis
  proof (intro exI conjI)
    show "countable ?B"
      by (intro countable_image countable_Collect_finite_subset B)
    {
      fix S
      assume "open S"
      then have "∃B'⊆{b. finite b ∧ b ⊆ B}. (\<Union>b∈B'. \<Inter>b) = S"
        unfolding B
      proof induct
        case UNIV
        show ?case by (intro exI[of _ "{{}}"]) simp
      next
        case (Int a b)
        then obtain x y where x: "a = UNION x Inter" "!!i. i ∈ x ==> finite i ∧ i ⊆ B"
          and y: "b = UNION y Inter" "!!i. i ∈ y ==> finite i ∧ i ⊆ B"
          by blast
        show ?case
          unfolding x y Int_UN_distrib2
          by (intro exI[of _ "{i ∪ j| i j. i ∈ x ∧ j ∈ y}"]) (auto dest: x(2) y(2))
      next
        case (UN K)
        then have "∀k∈K. ∃B'⊆{b. finite b ∧ b ⊆ B}. UNION B' Inter = k" by auto
        then guess k unfolding bchoice_iff ..
        then show "∃B'⊆{b. finite b ∧ b ⊆ B}. UNION B' Inter = \<Union>K"
          by (intro exI[of _ "UNION K k"]) auto
      next
        case (Basis S)
        then show ?case
          by (intro exI[of _ "{{S}}"]) auto
      qed
      then have "(∃B'⊆Inter ` {b. finite b ∧ b ⊆ B}. \<Union>B' = S)"
        unfolding subset_image_iff by blast
    }
    then show "topological_basis ?B"
      unfolding topological_space_class.topological_basis_def
      by (safe intro!: topological_space_class.open_Inter)
         (simp_all add: B generate_topology.Basis subset_eq)
  qed
qed

end

sublocale second_countable_topology <
  countable_basis "SOME B. countable B ∧ topological_basis B"
  using someI_ex[OF ex_countable_basis]
  by unfold_locales safe

instance prod :: (second_countable_topology, second_countable_topology) second_countable_topology
proof
  obtain A :: "'a set set" where "countable A" "topological_basis A"
    using ex_countable_basis by auto
  moreover
  obtain B :: "'b set set" where "countable B" "topological_basis B"
    using ex_countable_basis by auto
  ultimately show "∃B::('a × 'b) set set. countable B ∧ open = generate_topology B"
    by (auto intro!: exI[of _ "(λ(a, b). a × b) ` (A × B)"] topological_basis_prod
      topological_basis_imp_subbasis)
qed

instance second_countable_topology ⊆ first_countable_topology
proof
  fix x :: 'a
  def B ≡ "SOME B::'a set set. countable B ∧ topological_basis B"
  then have B: "countable B" "topological_basis B"
    using countable_basis is_basis
    by (auto simp: countable_basis is_basis)
  then show "∃A::nat => 'a set. (∀i. x ∈ A i ∧ open (A i)) ∧ (∀S. open S ∧ x ∈ S --> (∃i. A i ⊆ S))"
    by (intro first_countableI[of "{b∈B. x ∈ b}"])
       (fastforce simp: topological_space_class.topological_basis_def)+
qed


subsection {* Polish spaces *}

text {* Textbooks define Polish spaces as completely metrizable.
  We assume the topology to be complete for a given metric. *}

class polish_space = complete_space + second_countable_topology


subsection {* General notion of a topology as a value *}

definition "istopology L <->
  L {} ∧ (∀S T. L S --> L T --> L (S ∩ T)) ∧ (∀K. Ball K L --> L (\<Union> K))"

typedef 'a topology = "{L::('a set) => bool. istopology L}"
  morphisms "openin" "topology"
  unfolding istopology_def by blast

lemma istopology_open_in[intro]: "istopology(openin U)"
  using openin[of U] by blast

lemma topology_inverse': "istopology U ==> openin (topology U) = U"
  using topology_inverse[unfolded mem_Collect_eq] .

lemma topology_inverse_iff: "istopology U <-> openin (topology U) = U"
  using topology_inverse[of U] istopology_open_in[of "topology U"] by auto

lemma topology_eq: "T1 = T2 <-> (∀S. openin T1 S <-> openin T2 S)"
proof
  assume "T1 = T2"
  then show "∀S. openin T1 S <-> openin T2 S" by simp
next
  assume H: "∀S. openin T1 S <-> openin T2 S"
  then have "openin T1 = openin T2" by (simp add: fun_eq_iff)
  then have "topology (openin T1) = topology (openin T2)" by simp
  then show "T1 = T2" unfolding openin_inverse .
qed

text{* Infer the "universe" from union of all sets in the topology. *}

definition "topspace T = \<Union>{S. openin T S}"


subsubsection {* Main properties of open sets *}

lemma openin_clauses:
  fixes U :: "'a topology"
  shows
    "openin U {}"
    "!!S T. openin U S ==> openin U T ==> openin U (S∩T)"
    "!!K. (∀S ∈ K. openin U S) ==> openin U (\<Union>K)"
  using openin[of U] unfolding istopology_def mem_Collect_eq by fast+

lemma openin_subset[intro]: "openin U S ==> S ⊆ topspace U"
  unfolding topspace_def by blast

lemma openin_empty[simp]: "openin U {}"
  by (simp add: openin_clauses)

lemma openin_Int[intro]: "openin U S ==> openin U T ==> openin U (S ∩ T)"
  using openin_clauses by simp

lemma openin_Union[intro]: "(∀S ∈K. openin U S) ==> openin U (\<Union> K)"
  using openin_clauses by simp

lemma openin_Un[intro]: "openin U S ==> openin U T ==> openin U (S ∪ T)"
  using openin_Union[of "{S,T}" U] by auto

lemma openin_topspace[intro, simp]: "openin U (topspace U)"
  by (simp add: openin_Union topspace_def)

lemma openin_subopen: "openin U S <-> (∀x ∈ S. ∃T. openin U T ∧ x ∈ T ∧ T ⊆ S)"
  (is "?lhs <-> ?rhs")
proof
  assume ?lhs
  then show ?rhs by auto
next
  assume H: ?rhs
  let ?t = "\<Union>{T. openin U T ∧ T ⊆ S}"
  have "openin U ?t" by (simp add: openin_Union)
  also have "?t = S" using H by auto
  finally show "openin U S" .
qed


subsubsection {* Closed sets *}

definition "closedin U S <-> S ⊆ topspace U ∧ openin U (topspace U - S)"

lemma closedin_subset: "closedin U S ==> S ⊆ topspace U"
  by (metis closedin_def)

lemma closedin_empty[simp]: "closedin U {}"
  by (simp add: closedin_def)

lemma closedin_topspace[intro, simp]: "closedin U (topspace U)"
  by (simp add: closedin_def)

lemma closedin_Un[intro]: "closedin U S ==> closedin U T ==> closedin U (S ∪ T)"
  by (auto simp add: Diff_Un closedin_def)

lemma Diff_Inter[intro]: "A - \<Inter>S = \<Union> {A - s|s. s∈S}"
  by auto

lemma closedin_Inter[intro]:
  assumes Ke: "K ≠ {}"
    and Kc: "∀S ∈K. closedin U S"
  shows "closedin U (\<Inter> K)"
  using Ke Kc unfolding closedin_def Diff_Inter by auto

lemma closedin_Int[intro]: "closedin U S ==> closedin U T ==> closedin U (S ∩ T)"
  using closedin_Inter[of "{S,T}" U] by auto

lemma Diff_Diff_Int: "A - (A - B) = A ∩ B"
  by blast

lemma openin_closedin_eq: "openin U S <-> S ⊆ topspace U ∧ closedin U (topspace U - S)"
  apply (auto simp add: closedin_def Diff_Diff_Int inf_absorb2)
  apply (metis openin_subset subset_eq)
  done

lemma openin_closedin: "S ⊆ topspace U ==> (openin U S <-> closedin U (topspace U - S))"
  by (simp add: openin_closedin_eq)

lemma openin_diff[intro]:
  assumes oS: "openin U S"
    and cT: "closedin U T"
  shows "openin U (S - T)"
proof -
  have "S - T = S ∩ (topspace U - T)"
    using openin_subset[of U S] oS cT
    by (auto simp add: topspace_def openin_subset)
  then show ?thesis
    using oS cT by (auto simp add: closedin_def)
qed

lemma closedin_diff[intro]:
  assumes oS: "closedin U S"
    and cT: "openin U T"
  shows "closedin U (S - T)"
proof -
  have "S - T = S ∩ (topspace U - T)"
    using closedin_subset[of U S] oS cT by (auto simp add: topspace_def)
  then show ?thesis
    using oS cT by (auto simp add: openin_closedin_eq)
qed


subsubsection {* Subspace topology *}

definition "subtopology U V = topology (λT. ∃S. T = S ∩ V ∧ openin U S)"

lemma istopology_subtopology: "istopology (λT. ∃S. T = S ∩ V ∧ openin U S)"
  (is "istopology ?L")
proof -
  have "?L {}" by blast
  {
    fix A B
    assume A: "?L A" and B: "?L B"
    from A B obtain Sa and Sb where Sa: "openin U Sa" "A = Sa ∩ V"
      and Sb: "openin U Sb" "B = Sb ∩ V"
      by blast
    have "A ∩ B = (Sa ∩ Sb) ∩ V" "openin U (Sa ∩ Sb)"
      using Sa Sb by blast+
    then have "?L (A ∩ B)" by blast
  }
  moreover
  {
    fix K
    assume K: "K ⊆ Collect ?L"
    have th0: "Collect ?L = (λS. S ∩ V) ` Collect (openin U)"
      apply (rule set_eqI)
      apply (simp add: Ball_def image_iff)
      apply metis
      done
    from K[unfolded th0 subset_image_iff]
    obtain Sk where Sk: "Sk ⊆ Collect (openin U)" "K = (λS. S ∩ V) ` Sk"
      by blast
    have "\<Union>K = (\<Union>Sk) ∩ V"
      using Sk by auto
    moreover have "openin U (\<Union> Sk)"
      using Sk by (auto simp add: subset_eq)
    ultimately have "?L (\<Union>K)" by blast
  }
  ultimately show ?thesis
    unfolding subset_eq mem_Collect_eq istopology_def by blast
qed

lemma openin_subtopology: "openin (subtopology U V) S <-> (∃T. openin U T ∧ S = T ∩ V)"
  unfolding subtopology_def topology_inverse'[OF istopology_subtopology]
  by auto

lemma topspace_subtopology: "topspace (subtopology U V) = topspace U ∩ V"
  by (auto simp add: topspace_def openin_subtopology)

lemma closedin_subtopology: "closedin (subtopology U V) S <-> (∃T. closedin U T ∧ S = T ∩ V)"
  unfolding closedin_def topspace_subtopology
  apply (simp add: openin_subtopology)
  apply (rule iffI)
  apply clarify
  apply (rule_tac x="topspace U - T" in exI)
  apply auto
  done

lemma openin_subtopology_refl: "openin (subtopology U V) V <-> V ⊆ topspace U"
  unfolding openin_subtopology
  apply (rule iffI, clarify)
  apply (frule openin_subset[of U])
  apply blast
  apply (rule exI[where x="topspace U"])
  apply auto
  done

lemma subtopology_superset:
  assumes UV: "topspace U ⊆ V"
  shows "subtopology U V = U"
proof -
  {
    fix S
    {
      fix T
      assume T: "openin U T" "S = T ∩ V"
      from T openin_subset[OF T(1)] UV have eq: "S = T"
        by blast
      have "openin U S"
        unfolding eq using T by blast
    }
    moreover
    {
      assume S: "openin U S"
      then have "∃T. openin U T ∧ S = T ∩ V"
        using openin_subset[OF S] UV by auto
    }
    ultimately have "(∃T. openin U T ∧ S = T ∩ V) <-> openin U S"
      by blast
  }
  then show ?thesis
    unfolding topology_eq openin_subtopology by blast
qed

lemma subtopology_topspace[simp]: "subtopology U (topspace U) = U"
  by (simp add: subtopology_superset)

lemma subtopology_UNIV[simp]: "subtopology U UNIV = U"
  by (simp add: subtopology_superset)


subsubsection {* The standard Euclidean topology *}

definition euclidean :: "'a::topological_space topology"
  where "euclidean = topology open"

lemma open_openin: "open S <-> openin euclidean S"
  unfolding euclidean_def
  apply (rule cong[where x=S and y=S])
  apply (rule topology_inverse[symmetric])
  apply (auto simp add: istopology_def)
  done

lemma topspace_euclidean: "topspace euclidean = UNIV"
  apply (simp add: topspace_def)
  apply (rule set_eqI)
  apply (auto simp add: open_openin[symmetric])
  done

lemma topspace_euclidean_subtopology[simp]: "topspace (subtopology euclidean S) = S"
  by (simp add: topspace_euclidean topspace_subtopology)

lemma closed_closedin: "closed S <-> closedin euclidean S"
  by (simp add: closed_def closedin_def topspace_euclidean open_openin Compl_eq_Diff_UNIV)

lemma open_subopen: "open S <-> (∀x∈S. ∃T. open T ∧ x ∈ T ∧ T ⊆ S)"
  by (simp add: open_openin openin_subopen[symmetric])

text {* Basic "localization" results are handy for connectedness. *}

lemma openin_open: "openin (subtopology euclidean U) S <-> (∃T. open T ∧ (S = U ∩ T))"
  by (auto simp add: openin_subtopology open_openin[symmetric])

lemma openin_open_Int[intro]: "open S ==> openin (subtopology euclidean U) (U ∩ S)"
  by (auto simp add: openin_open)

lemma open_openin_trans[trans]:
  "open S ==> open T ==> T ⊆ S ==> openin (subtopology euclidean S) T"
  by (metis Int_absorb1 openin_open_Int)

lemma open_subset: "S ⊆ T ==> open S ==> openin (subtopology euclidean T) S"
  by (auto simp add: openin_open)

lemma closedin_closed: "closedin (subtopology euclidean U) S <-> (∃T. closed T ∧ S = U ∩ T)"
  by (simp add: closedin_subtopology closed_closedin Int_ac)

lemma closedin_closed_Int: "closed S ==> closedin (subtopology euclidean U) (U ∩ S)"
  by (metis closedin_closed)

lemma closed_closedin_trans:
  "closed S ==> closed T ==> T ⊆ S ==> closedin (subtopology euclidean S) T"
  apply (subgoal_tac "S ∩ T = T")
  apply auto
  apply (frule closedin_closed_Int[of T S])
  apply simp
  done

lemma closed_subset: "S ⊆ T ==> closed S ==> closedin (subtopology euclidean T) S"
  by (auto simp add: closedin_closed)

lemma openin_euclidean_subtopology_iff:
  fixes S U :: "'a::metric_space set"
  shows "openin (subtopology euclidean U) S <->
    S ⊆ U ∧ (∀x∈S. ∃e>0. ∀x'∈U. dist x' x < e --> x' ∈ S)"
  (is "?lhs <-> ?rhs")
proof
  assume ?lhs
  then show ?rhs
    unfolding openin_open open_dist by blast
next
  def T ≡ "{x. ∃a∈S. ∃d>0. (∀y∈U. dist y a < d --> y ∈ S) ∧ dist x a < d}"
  have 1: "∀x∈T. ∃e>0. ∀y. dist y x < e --> y ∈ T"
    unfolding T_def
    apply clarsimp
    apply (rule_tac x="d - dist x a" in exI)
    apply (clarsimp simp add: less_diff_eq)
    apply (erule rev_bexI)
    apply (rule_tac x=d in exI, clarify)
    apply (erule le_less_trans [OF dist_triangle])
    done
  assume ?rhs
  then have 2: "S = U ∩ T"
    unfolding T_def
    apply auto
    apply (drule (1) bspec, erule rev_bexI)
    apply auto
    done
  from 1 2 show ?lhs
    unfolding openin_open open_dist by fast
qed

text {* These "transitivity" results are handy too *}

lemma openin_trans[trans]:
  "openin (subtopology euclidean T) S ==> openin (subtopology euclidean U) T ==>
    openin (subtopology euclidean U) S"
  unfolding open_openin openin_open by blast

lemma openin_open_trans: "openin (subtopology euclidean T) S ==> open T ==> open S"
  by (auto simp add: openin_open intro: openin_trans)

lemma closedin_trans[trans]:
  "closedin (subtopology euclidean T) S ==> closedin (subtopology euclidean U) T ==>
    closedin (subtopology euclidean U) S"
  by (auto simp add: closedin_closed closed_closedin closed_Inter Int_assoc)

lemma closedin_closed_trans: "closedin (subtopology euclidean T) S ==> closed T ==> closed S"
  by (auto simp add: closedin_closed intro: closedin_trans)


subsection {* Open and closed balls *}

definition ball :: "'a::metric_space => real => 'a set"
  where "ball x e = {y. dist x y < e}"

definition cball :: "'a::metric_space => real => 'a set"
  where "cball x e = {y. dist x y ≤ e}"

lemma mem_ball [simp]: "y ∈ ball x e <-> dist x y < e"
  by (simp add: ball_def)

lemma mem_cball [simp]: "y ∈ cball x e <-> dist x y ≤ e"
  by (simp add: cball_def)

lemma mem_ball_0:
  fixes x :: "'a::real_normed_vector"
  shows "x ∈ ball 0 e <-> norm x < e"
  by (simp add: dist_norm)

lemma mem_cball_0:
  fixes x :: "'a::real_normed_vector"
  shows "x ∈ cball 0 e <-> norm x ≤ e"
  by (simp add: dist_norm)

lemma centre_in_ball: "x ∈ ball x e <-> 0 < e"
  by simp

lemma centre_in_cball: "x ∈ cball x e <-> 0 ≤ e"
  by simp

lemma ball_subset_cball[simp,intro]: "ball x e ⊆ cball x e"
  by (simp add: subset_eq)

lemma subset_ball[intro]: "d ≤ e ==> ball x d ⊆ ball x e"
  by (simp add: subset_eq)

lemma subset_cball[intro]: "d ≤ e ==> cball x d ⊆ cball x e"
  by (simp add: subset_eq)

lemma ball_max_Un: "ball a (max r s) = ball a r ∪ ball a s"
  by (simp add: set_eq_iff) arith

lemma ball_min_Int: "ball a (min r s) = ball a r ∩ ball a s"
  by (simp add: set_eq_iff)

lemma diff_less_iff:
  "(a::real) - b > 0 <-> a > b"
  "(a::real) - b < 0 <-> a < b"
  "a - b < c <-> a < c + b"
  "a - b > c <-> a > c + b"
  by arith+

lemma diff_le_iff:
  "(a::real) - b ≥ 0 <-> a ≥ b"
  "(a::real) - b ≤ 0 <-> a ≤ b"
  "a - b ≤ c <-> a ≤ c + b"
  "a - b ≥ c <-> a ≥ c + b"
  by arith+

lemma open_ball[intro, simp]: "open (ball x e)"
  unfolding open_dist ball_def mem_Collect_eq Ball_def
  unfolding dist_commute
  apply clarify
  apply (rule_tac x="e - dist xa x" in exI)
  using dist_triangle_alt[where z=x]
  apply (clarsimp simp add: diff_less_iff)
  apply atomize
  apply (erule_tac x="y" in allE)
  apply (erule_tac x="xa" in allE)
  apply arith
  done

lemma open_contains_ball: "open S <-> (∀x∈S. ∃e>0. ball x e ⊆ S)"
  unfolding open_dist subset_eq mem_ball Ball_def dist_commute ..

lemma openE[elim?]:
  assumes "open S" "x∈S"
  obtains e where "e>0" "ball x e ⊆ S"
  using assms unfolding open_contains_ball by auto

lemma open_contains_ball_eq: "open S ==> ∀x. x∈S <-> (∃e>0. ball x e ⊆ S)"
  by (metis open_contains_ball subset_eq centre_in_ball)

lemma ball_eq_empty[simp]: "ball x e = {} <-> e ≤ 0"
  unfolding mem_ball set_eq_iff
  apply (simp add: not_less)
  apply (metis zero_le_dist order_trans dist_self)
  done

lemma ball_empty[intro]: "e ≤ 0 ==> ball x e = {}"
  by simp

lemma euclidean_dist_l2:
  fixes x y :: "'a :: euclidean_space"
  shows "dist x y = setL2 (λi. dist (x • i) (y • i)) Basis"
  unfolding dist_norm norm_eq_sqrt_inner setL2_def
  by (subst euclidean_inner) (simp add: power2_eq_square inner_diff_left)

definition "box a b = {x. ∀i∈Basis. a • i < x • i ∧ x • i < b • i}"

lemma rational_boxes:
  fixes x :: "'a::euclidean_space"
  assumes "e > 0"
  shows "∃a b. (∀i∈Basis. a • i ∈ \<rat> ∧ b • i ∈ \<rat>) ∧ x ∈ box a b ∧ box a b ⊆ ball x e"
proof -
  def e' ≡ "e / (2 * sqrt (real (DIM ('a))))"
  then have e: "e' > 0"
    using assms by (auto intro!: divide_pos_pos simp: DIM_positive)
  have "∀i. ∃y. y ∈ \<rat> ∧ y < x • i ∧ x • i - y < e'" (is "∀i. ?th i")
  proof
    fix i
    from Rats_dense_in_real[of "x • i - e'" "x • i"] e
    show "?th i" by auto
  qed
  from choice[OF this] guess a .. note a = this
  have "∀i. ∃y. y ∈ \<rat> ∧ x • i < y ∧ y - x • i < e'" (is "∀i. ?th i")
  proof
    fix i
    from Rats_dense_in_real[of "x • i" "x • i + e'"] e
    show "?th i" by auto
  qed
  from choice[OF this] guess b .. note b = this
  let ?a = "∑i∈Basis. a i *⇩R i" and ?b = "∑i∈Basis. b i *⇩R i"
  show ?thesis
  proof (rule exI[of _ ?a], rule exI[of _ ?b], safe)
    fix y :: 'a
    assume *: "y ∈ box ?a ?b"
    have "dist x y = sqrt (∑i∈Basis. (dist (x • i) (y • i))⇧2)"
      unfolding setL2_def[symmetric] by (rule euclidean_dist_l2)
    also have "… < sqrt (∑(i::'a)∈Basis. e⇧2 / real (DIM('a)))"
    proof (rule real_sqrt_less_mono, rule setsum_strict_mono)
      fix i :: "'a"
      assume i: "i ∈ Basis"
      have "a i < y•i ∧ y•i < b i"
        using * i by (auto simp: box_def)
      moreover have "a i < x•i" "x•i - a i < e'"
        using a by auto
      moreover have "x•i < b i" "b i - x•i < e'"
        using b by auto
      ultimately have "¦x•i - y•i¦ < 2 * e'"
        by auto
      then have "dist (x • i) (y • i) < e/sqrt (real (DIM('a)))"
        unfolding e'_def by (auto simp: dist_real_def)
      then have "(dist (x • i) (y • i))⇧2 < (e/sqrt (real (DIM('a))))⇧2"
        by (rule power_strict_mono) auto
      then show "(dist (x • i) (y • i))⇧2 < e⇧2 / real DIM('a)"
        by (simp add: power_divide)
    qed auto
    also have "… = e"
      using `0 < e` by (simp add: real_eq_of_nat)
    finally show "y ∈ ball x e"
      by (auto simp: ball_def)
  qed (insert a b, auto simp: box_def)
qed

lemma open_UNION_box:
  fixes M :: "'a::euclidean_space set"
  assumes "open M"
  defines "a' ≡ λf :: 'a => real × real. (∑(i::'a)∈Basis. fst (f i) *⇩R i)"
  defines "b' ≡ λf :: 'a => real × real. (∑(i::'a)∈Basis. snd (f i) *⇩R i)"
  defines "I ≡ {f∈Basis ->⇩E \<rat> × \<rat>. box (a' f) (b' f) ⊆ M}"
  shows "M = (\<Union>f∈I. box (a' f) (b' f))"
proof -
  {
    fix x
    assume "x ∈ M"
    obtain e where e: "e > 0" "ball x e ⊆ M"
      using openE[OF `open M` `x ∈ M`] by auto
    moreover obtain a b where ab: "x ∈ box a b" "∀i ∈ Basis. a • i ∈ \<rat>"
      "∀i∈Basis. b • i ∈ \<rat>" "box a b ⊆ ball x e"
      using rational_boxes[OF e(1)] by metis
    ultimately have "x ∈ (\<Union>f∈I. box (a' f) (b' f))"
      by (intro UN_I[of "λi∈Basis. (a • i, b • i)"])
         (auto simp: euclidean_representation I_def a'_def b'_def)
  }
  then show ?thesis by (auto simp: I_def)
qed


subsection{* Connectedness *}

lemma connected_local:
  "connected S <-> ¬ (∃e1 e2.
    openin (subtopology euclidean S) e1 ∧
    openin (subtopology euclidean S) e2 ∧
    S ⊆ e1 ∪ e2 ∧ e1 ∩ e2 = {} ∧ e1 ≠ {} ∧ e2 ≠ {})"
  unfolding connected_def openin_open
  apply safe
  apply blast+
  done

lemma exists_diff:
  fixes P :: "'a set => bool"
  shows "(∃S. P(- S)) <-> (∃S. P S)"
    (is "?lhs <-> ?rhs")
proof -
  {
    assume "?lhs"
    then have ?rhs by blast
  }
  moreover
  {
    fix S
    assume H: "P S"
    have "S = - (- S)" by auto
    with H have "P (- (- S))" by metis
  }
  ultimately show ?thesis by metis
qed

lemma connected_clopen: "connected S <->
  (∀T. openin (subtopology euclidean S) T ∧
     closedin (subtopology euclidean S) T --> T = {} ∨ T = S)" (is "?lhs <-> ?rhs")
proof -
  have "¬ connected S <->
    (∃e1 e2. open e1 ∧ open (- e2) ∧ S ⊆ e1 ∪ (- e2) ∧ e1 ∩ (- e2) ∩ S = {} ∧ e1 ∩ S ≠ {} ∧ (- e2) ∩ S ≠ {})"
    unfolding connected_def openin_open closedin_closed
    apply (subst exists_diff)
    apply blast
    done
  then have th0: "connected S <->
    ¬ (∃e2 e1. closed e2 ∧ open e1 ∧ S ⊆ e1 ∪ (- e2) ∧ e1 ∩ (- e2) ∩ S = {} ∧ e1 ∩ S ≠ {} ∧ (- e2) ∩ S ≠ {})"
    (is " _ <-> ¬ (∃e2 e1. ?P e2 e1)")
    apply (simp add: closed_def)
    apply metis
    done
  have th1: "?rhs <-> ¬ (∃t' t. closed t'∧t = S∩t' ∧ t≠{} ∧ t≠S ∧ (∃t'. open t' ∧ t = S ∩ t'))"
    (is "_ <-> ¬ (∃t' t. ?Q t' t)")
    unfolding connected_def openin_open closedin_closed by auto
  {
    fix e2
    {
      fix e1
      have "?P e2 e1 <-> (∃t. closed e2 ∧ t = S∩e2 ∧ open e1 ∧ t = S∩e1 ∧ t≠{} ∧ t ≠ S)"
        by auto
    }
    then have "(∃e1. ?P e2 e1) <-> (∃t. ?Q e2 t)"
      by metis
  }
  then have "∀e2. (∃e1. ?P e2 e1) <-> (∃t. ?Q e2 t)"
    by blast
  then show ?thesis
    unfolding th0 th1 by simp
qed


subsection{* Limit points *}

definition (in topological_space) islimpt:: "'a => 'a set => bool"  (infixr "islimpt" 60)
  where "x islimpt S <-> (∀T. x∈T --> open T --> (∃y∈S. y∈T ∧ y≠x))"

lemma islimptI:
  assumes "!!T. x ∈ T ==> open T ==> ∃y∈S. y ∈ T ∧ y ≠ x"
  shows "x islimpt S"
  using assms unfolding islimpt_def by auto

lemma islimptE:
  assumes "x islimpt S"
    and "x ∈ T"
    and "open T"
  obtains y where "y ∈ S" and "y ∈ T" and "y ≠ x"
  using assms unfolding islimpt_def by auto

lemma islimpt_iff_eventually: "x islimpt S <-> ¬ eventually (λy. y ∉ S) (at x)"
  unfolding islimpt_def eventually_at_topological by auto

lemma islimpt_subset: "x islimpt S ==> S ⊆ T ==> x islimpt T"
  unfolding islimpt_def by fast

lemma islimpt_approachable:
  fixes x :: "'a::metric_space"
  shows "x islimpt S <-> (∀e>0. ∃x'∈S. x' ≠ x ∧ dist x' x < e)"
  unfolding islimpt_iff_eventually eventually_at by fast

lemma islimpt_approachable_le:
  fixes x :: "'a::metric_space"
  shows "x islimpt S <-> (∀e>0. ∃x'∈ S. x' ≠ x ∧ dist x' x ≤ e)"
  unfolding islimpt_approachable
  using approachable_lt_le [where f="λy. dist y x" and P="λy. y ∉ S ∨ y = x",
    THEN arg_cong [where f=Not]]
  by (simp add: Bex_def conj_commute conj_left_commute)

lemma islimpt_UNIV_iff: "x islimpt UNIV <-> ¬ open {x}"
  unfolding islimpt_def by (safe, fast, case_tac "T = {x}", fast, fast)

lemma islimpt_punctured: "x islimpt S = x islimpt (S-{x})"
  unfolding islimpt_def by blast

text {* A perfect space has no isolated points. *}

lemma islimpt_UNIV [simp, intro]: "(x::'a::perfect_space) islimpt UNIV"
  unfolding islimpt_UNIV_iff by (rule not_open_singleton)

lemma perfect_choose_dist:
  fixes x :: "'a::{perfect_space, metric_space}"
  shows "0 < r ==> ∃a. a ≠ x ∧ dist a x < r"
  using islimpt_UNIV [of x] by (simp add: islimpt_approachable)

lemma closed_limpt: "closed S <-> (∀x. x islimpt S --> x ∈ S)"
  unfolding closed_def
  apply (subst open_subopen)
  apply (simp add: islimpt_def subset_eq)
  apply (metis ComplE ComplI)
  done

lemma islimpt_EMPTY[simp]: "¬ x islimpt {}"
  unfolding islimpt_def by auto

lemma finite_set_avoid:
  fixes a :: "'a::metric_space"
  assumes fS: "finite S"
  shows "∃d>0. ∀x∈S. x ≠ a --> d ≤ dist a x"
proof (induct rule: finite_induct[OF fS])
  case 1
  then show ?case by (auto intro: zero_less_one)
next
  case (2 x F)
  from 2 obtain d where d: "d >0" "∀x∈F. x≠a --> d ≤ dist a x"
    by blast
  show ?case
  proof (cases "x = a")
    case True
    then show ?thesis using d by auto
  next
    case False
    let ?d = "min d (dist a x)"
    have dp: "?d > 0"
      using False d(1) using dist_nz by auto
    from d have d': "∀x∈F. x≠a --> ?d ≤ dist a x"
      by auto
    with dp False show ?thesis
      by (auto intro!: exI[where x="?d"])
  qed
qed

lemma islimpt_Un: "x islimpt (S ∪ T) <-> x islimpt S ∨ x islimpt T"
  by (simp add: islimpt_iff_eventually eventually_conj_iff)

lemma discrete_imp_closed:
  fixes S :: "'a::metric_space set"
  assumes e: "0 < e"
    and d: "∀x ∈ S. ∀y ∈ S. dist y x < e --> y = x"
  shows "closed S"
proof -
  {
    fix x
    assume C: "∀e>0. ∃x'∈S. x' ≠ x ∧ dist x' x < e"
    from e have e2: "e/2 > 0" by arith
    from C[rule_format, OF e2] obtain y where y: "y ∈ S" "y ≠ x" "dist y x < e/2"
      by blast
    let ?m = "min (e/2) (dist x y) "
    from e2 y(2) have mp: "?m > 0"
      by (simp add: dist_nz[symmetric])
    from C[rule_format, OF mp] obtain z where z: "z ∈ S" "z ≠ x" "dist z x < ?m"
      by blast
    have th: "dist z y < e" using z y
      by (intro dist_triangle_lt [where z=x], simp)
    from d[rule_format, OF y(1) z(1) th] y z
    have False by (auto simp add: dist_commute)
  }
  then show ?thesis
    by (metis islimpt_approachable closed_limpt [where 'a='a])
qed


subsection {* Interior of a Set *}

definition "interior S = \<Union>{T. open T ∧ T ⊆ S}"

lemma interiorI [intro?]:
  assumes "open T" and "x ∈ T" and "T ⊆ S"
  shows "x ∈ interior S"
  using assms unfolding interior_def by fast

lemma interiorE [elim?]:
  assumes "x ∈ interior S"
  obtains T where "open T" and "x ∈ T" and "T ⊆ S"
  using assms unfolding interior_def by fast

lemma open_interior [simp, intro]: "open (interior S)"
  by (simp add: interior_def open_Union)

lemma interior_subset: "interior S ⊆ S"
  by (auto simp add: interior_def)

lemma interior_maximal: "T ⊆ S ==> open T ==> T ⊆ interior S"
  by (auto simp add: interior_def)

lemma interior_open: "open S ==> interior S = S"
  by (intro equalityI interior_subset interior_maximal subset_refl)

lemma interior_eq: "interior S = S <-> open S"
  by (metis open_interior interior_open)

lemma open_subset_interior: "open S ==> S ⊆ interior T <-> S ⊆ T"
  by (metis interior_maximal interior_subset subset_trans)

lemma interior_empty [simp]: "interior {} = {}"
  using open_empty by (rule interior_open)

lemma interior_UNIV [simp]: "interior UNIV = UNIV"
  using open_UNIV by (rule interior_open)

lemma interior_interior [simp]: "interior (interior S) = interior S"
  using open_interior by (rule interior_open)

lemma interior_mono: "S ⊆ T ==> interior S ⊆ interior T"
  by (auto simp add: interior_def)

lemma interior_unique:
  assumes "T ⊆ S" and "open T"
  assumes "!!T'. T' ⊆ S ==> open T' ==> T' ⊆ T"
  shows "interior S = T"
  by (intro equalityI assms interior_subset open_interior interior_maximal)

lemma interior_inter [simp]: "interior (S ∩ T) = interior S ∩ interior T"
  by (intro equalityI Int_mono Int_greatest interior_mono Int_lower1
    Int_lower2 interior_maximal interior_subset open_Int open_interior)

lemma mem_interior: "x ∈ interior S <-> (∃e>0. ball x e ⊆ S)"
  using open_contains_ball_eq [where S="interior S"]
  by (simp add: open_subset_interior)

lemma interior_limit_point [intro]:
  fixes x :: "'a::perfect_space"
  assumes x: "x ∈ interior S"
  shows "x islimpt S"
  using x islimpt_UNIV [of x]
  unfolding interior_def islimpt_def
  apply (clarsimp, rename_tac T T')
  apply (drule_tac x="T ∩ T'" in spec)
  apply (auto simp add: open_Int)
  done

lemma interior_closed_Un_empty_interior:
  assumes cS: "closed S"
    and iT: "interior T = {}"
  shows "interior (S ∪ T) = interior S"
proof
  show "interior S ⊆ interior (S ∪ T)"
    by (rule interior_mono) (rule Un_upper1)
  show "interior (S ∪ T) ⊆ interior S"
  proof
    fix x
    assume "x ∈ interior (S ∪ T)"
    then obtain R where "open R" "x ∈ R" "R ⊆ S ∪ T" ..
    show "x ∈ interior S"
    proof (rule ccontr)
      assume "x ∉ interior S"
      with `x ∈ R` `open R` obtain y where "y ∈ R - S"
        unfolding interior_def by fast
      from `open R` `closed S` have "open (R - S)"
        by (rule open_Diff)
      from `R ⊆ S ∪ T` have "R - S ⊆ T"
        by fast
      from `y ∈ R - S` `open (R - S)` `R - S ⊆ T` `interior T = {}` show False
        unfolding interior_def by fast
    qed
  qed
qed

lemma interior_Times: "interior (A × B) = interior A × interior B"
proof (rule interior_unique)
  show "interior A × interior B ⊆ A × B"
    by (intro Sigma_mono interior_subset)
  show "open (interior A × interior B)"
    by (intro open_Times open_interior)
  fix T
  assume "T ⊆ A × B" and "open T"
  then show "T ⊆ interior A × interior B"
  proof safe
    fix x y
    assume "(x, y) ∈ T"
    then obtain C D where "open C" "open D" "C × D ⊆ T" "x ∈ C" "y ∈ D"
      using `open T` unfolding open_prod_def by fast
    then have "open C" "open D" "C ⊆ A" "D ⊆ B" "x ∈ C" "y ∈ D"
      using `T ⊆ A × B` by auto
    then show "x ∈ interior A" and "y ∈ interior B"
      by (auto intro: interiorI)
  qed
qed


subsection {* Closure of a Set *}

definition "closure S = S ∪ {x | x. x islimpt S}"

lemma interior_closure: "interior S = - (closure (- S))"
  unfolding interior_def closure_def islimpt_def by auto

lemma closure_interior: "closure S = - interior (- S)"
  unfolding interior_closure by simp

lemma closed_closure[simp, intro]: "closed (closure S)"
  unfolding closure_interior by (simp add: closed_Compl)

lemma closure_subset: "S ⊆ closure S"
  unfolding closure_def by simp

lemma closure_hull: "closure S = closed hull S"
  unfolding hull_def closure_interior interior_def by auto

lemma closure_eq: "closure S = S <-> closed S"
  unfolding closure_hull using closed_Inter by (rule hull_eq)

lemma closure_closed [simp]: "closed S ==> closure S = S"
  unfolding closure_eq .

lemma closure_closure [simp]: "closure (closure S) = closure S"
  unfolding closure_hull by (rule hull_hull)

lemma closure_mono: "S ⊆ T ==> closure S ⊆ closure T"
  unfolding closure_hull by (rule hull_mono)

lemma closure_minimal: "S ⊆ T ==> closed T ==> closure S ⊆ T"
  unfolding closure_hull by (rule hull_minimal)

lemma closure_unique:
  assumes "S ⊆ T"
    and "closed T"
    and "!!T'. S ⊆ T' ==> closed T' ==> T ⊆ T'"
  shows "closure S = T"
  using assms unfolding closure_hull by (rule hull_unique)

lemma closure_empty [simp]: "closure {} = {}"
  using closed_empty by (rule closure_closed)

lemma closure_UNIV [simp]: "closure UNIV = UNIV"
  using closed_UNIV by (rule closure_closed)

lemma closure_union [simp]: "closure (S ∪ T) = closure S ∪ closure T"
  unfolding closure_interior by simp

lemma closure_eq_empty: "closure S = {} <-> S = {}"
  using closure_empty closure_subset[of S]
  by blast

lemma closure_subset_eq: "closure S ⊆ S <-> closed S"
  using closure_eq[of S] closure_subset[of S]
  by simp

lemma open_inter_closure_eq_empty:
  "open S ==> (S ∩ closure T) = {} <-> S ∩ T = {}"
  using open_subset_interior[of S "- T"]
  using interior_subset[of "- T"]
  unfolding closure_interior
  by auto

lemma open_inter_closure_subset:
  "open S ==> (S ∩ (closure T)) ⊆ closure(S ∩ T)"
proof
  fix x
  assume as: "open S" "x ∈ S ∩ closure T"
  {
    assume *: "x islimpt T"
    have "x islimpt (S ∩ T)"
    proof (rule islimptI)
      fix A
      assume "x ∈ A" "open A"
      with as have "x ∈ A ∩ S" "open (A ∩ S)"
        by (simp_all add: open_Int)
      with * obtain y where "y ∈ T" "y ∈ A ∩ S" "y ≠ x"
        by (rule islimptE)
      then have "y ∈ S ∩ T" "y ∈ A ∧ y ≠ x"
        by simp_all
      then show "∃y∈(S ∩ T). y ∈ A ∧ y ≠ x" ..
qed } then show "x ∈ closure (S ∩ T)" using as unfolding closure_def by blastqedlemma closure_complement: "closure (- S) = - interior S" unfolding closure_interior by simplemma interior_complement: "interior (- S) = - closure S" unfolding closure_interior by simplemma closure_Times: "closure (A × B) = closure A × closure B"proof (rule closure_unique) show "A × B ⊆ closure A × closure B" by (intro Sigma_mono closure_subset) show "closed (closure A × closure B)" by (intro closed_Times closed_closure) fix T assume "A × B ⊆ T" and "closed T" then show "closure A × closure B ⊆ T" apply (simp add: closed_def open_prod_def, clarify) apply (rule ccontr) apply (drule_tac x="(a, b)" in bspec, simp, clarify, rename_tac C D) apply (simp add: closure_interior interior_def) apply (drule_tac x=C in spec) apply (drule_tac x=D in spec) apply auto doneqedlemma islimpt_in_closure: "(x islimpt S) = (x:closure(S-{x}))" unfolding closure_def using islimpt_punctured by blastsubsection {* Frontier (aka boundary) *}definition "frontier S = closure S - interior S"lemma frontier_closed: "closed (frontier S)" by (simp add: frontier_def closed_Diff)lemma frontier_closures: "frontier S = (closure S) ∩ (closure(- S))" by (auto simp add: frontier_def interior_closure)lemma frontier_straddle: fixes a :: "'a::metric_space" shows "a ∈ frontier S <-> (∀e>0. (∃x∈S. dist a x < e) ∧ (∃x. x ∉ S ∧ dist a x < e))" unfolding frontier_def closure_interior by (auto simp add: mem_interior subset_eq ball_def)lemma frontier_subset_closed: "closed S ==> frontier S ⊆ S" by (metis frontier_def closure_closed Diff_subset)lemma frontier_empty[simp]: "frontier {} = {}" by (simp add: frontier_def)lemma frontier_subset_eq: "frontier S ⊆ S <-> closed S"proof- { assume "frontier S ⊆ S" then have "closure S ⊆ S" using interior_subset unfolding frontier_def by auto then have "closed S" using closure_subset_eq by auto } then show ?thesis using frontier_subset_closed[of S] ..qedlemma frontier_complement: "frontier(- S) = frontier S" by (auto simp add: frontier_def closure_complement interior_complement)lemma frontier_disjoint_eq: "frontier S ∩ S = {} <-> open S" using frontier_complement frontier_subset_eq[of "- S"] unfolding open_closed by autosubsection {* Filters and the eventually true'' quantifier *}definition indirection :: "'a::real_normed_vector => 'a => 'a filter" (infixr "indirection" 70) where "a indirection v = at a within {b. ∃c≥0. b - a = scaleR c v}"text {* Identify Trivial limits, where we can't approach arbitrarily closely. *}lemma trivial_limit_within: "trivial_limit (at a within S) <-> ¬ a islimpt S"proof assume "trivial_limit (at a within S)" then show "¬ a islimpt S" unfolding trivial_limit_def unfolding eventually_at_topological unfolding islimpt_def apply (clarsimp simp add: set_eq_iff) apply (rename_tac T, rule_tac x=T in exI) apply (clarsimp, drule_tac x=y in bspec, simp_all) donenext assume "¬ a islimpt S" then show "trivial_limit (at a within S)" unfolding trivial_limit_def unfolding eventually_at_topological unfolding islimpt_def apply clarsimp apply (rule_tac x=T in exI) apply auto doneqedlemma trivial_limit_at_iff: "trivial_limit (at a) <-> ¬ a islimpt UNIV" using trivial_limit_within [of a UNIV] by simplemma trivial_limit_at: fixes a :: "'a::perfect_space" shows "¬ trivial_limit (at a)" by (rule at_neq_bot)lemma trivial_limit_at_infinity: "¬ trivial_limit (at_infinity :: ('a::{real_normed_vector,perfect_space}) filter)" unfolding trivial_limit_def eventually_at_infinity apply clarsimp apply (subgoal_tac "∃x::'a. 
x ≠ 0", clarify) apply (rule_tac x="scaleR (b / norm x) x" in exI, simp) apply (cut_tac islimpt_UNIV [of "0::'a", unfolded islimpt_def]) apply (drule_tac x=UNIV in spec, simp) donelemma not_trivial_limit_within: "¬ trivial_limit (at x within S) = (x ∈ closure (S - {x}))" using islimpt_in_closure by (metis trivial_limit_within)text {* Some property holds "sufficiently close" to the limit point. *}lemma eventually_at2: "eventually P (at a) <-> (∃d>0. ∀x. 0 < dist x a ∧ dist x a < d --> P x)" unfolding eventually_at dist_nz by autolemma eventually_happens: "eventually P net ==> trivial_limit net ∨ (∃x. P x)" unfolding trivial_limit_def by (auto elim: eventually_rev_mp)lemma trivial_limit_eventually: "trivial_limit net ==> eventually P net" by simplemma trivial_limit_eq: "trivial_limit net <-> (∀P. eventually P net)" by (simp add: filter_eq_iff)text{* Combining theorems for "eventually" *}lemma eventually_rev_mono: "eventually P net ==> (∀x. P x --> Q x) ==> eventually Q net" using eventually_mono [of P Q] by fastlemma not_eventually: "(∀x. ¬ P x ) ==> ¬ trivial_limit net ==> ¬ eventually (λx. P x) net" by (simp add: eventually_False)subsection {* Limits *}lemma Lim: "(f ---> l) net <-> trivial_limit net ∨ (∀e>0. eventually (λx. dist (f x) l < e) net)" unfolding tendsto_iff trivial_limit_eq by autotext{* Show that they yield usual definitions in the various cases. *}lemma Lim_within_le: "(f ---> l)(at a within S) <-> (∀e>0. ∃d>0. ∀x∈S. 0 < dist x a ∧ dist x a ≤ d --> dist (f x) l < e)" by (auto simp add: tendsto_iff eventually_at_le dist_nz)lemma Lim_within: "(f ---> l) (at a within S) <-> (∀e >0. ∃d>0. ∀x ∈ S. 0 < dist x a ∧ dist x a < d --> dist (f x) l < e)" by (auto simp add: tendsto_iff eventually_at dist_nz)lemma Lim_at: "(f ---> l) (at a) <-> (∀e >0. ∃d>0. ∀x. 0 < dist x a ∧ dist x a < d --> dist (f x) l < e)" by (auto simp add: tendsto_iff eventually_at2)lemma Lim_at_infinity: "(f ---> l) at_infinity <-> (∀e>0. ∃b. ∀x. norm x ≥ b --> dist (f x) l < e)" by (auto simp add: tendsto_iff eventually_at_infinity)lemma Lim_eventually: "eventually (λx. f x = l) net ==> (f ---> l) net" by (rule topological_tendstoI, auto elim: eventually_rev_mono)text{* The expected monotonicity property. *}lemma Lim_Un: assumes "(f ---> l) (at x within S)" "(f ---> l) (at x within T)" shows "(f ---> l) (at x within (S ∪ T))" using assms unfolding at_within_union by (rule filterlim_sup)lemma Lim_Un_univ: "(f ---> l) (at x within S) ==> (f ---> l) (at x within T) ==> S ∪ T = UNIV ==> (f ---> l) (at x)" by (metis Lim_Un)text{* Interrelations between restricted and unrestricted limits. *}lemma Lim_at_within: (* FIXME: rename *) "(f ---> l) (at x) ==> (f ---> l) (at x within S)" by (metis order_refl filterlim_mono subset_UNIV at_le)lemma eventually_within_interior: assumes "x ∈ interior S" shows "eventually P (at x within S) <-> eventually P (at x)" (is "?lhs = ?rhs")proof from assms obtain T where T: "open T" "x ∈ T" "T ⊆ S" .. { assume "?lhs" then obtain A where "open A" and "x ∈ A" and "∀y∈A. y ≠ x --> y ∈ S --> P y" unfolding eventually_at_topological by auto with T have "open (A ∩ T)" and "x ∈ A ∩ T" and "∀y ∈ A ∩ T. y ≠ x --> P y" by auto then show "?rhs" unfolding eventually_at_topological by auto next assume "?rhs" then show "?lhs" by (auto elim: eventually_elim1 simp: eventually_at_filter) }qedlemma at_within_interior: "x ∈ interior S ==> at x within S = at x" unfolding filter_eq_iff by (intro allI eventually_within_interior)lemma Lim_within_LIMSEQ: fixes a :: "'a::first_countable_topology" assumes "∀S. 
(∀n. S n ≠ a ∧ S n ∈ T) ∧ S ----> a --> (λn. X (S n)) ----> L" shows "(X ---> L) (at a within T)" using assms unfolding tendsto_def [where l=L] by (simp add: sequentially_imp_eventually_within)lemma Lim_right_bound: fixes f :: "'a :: {linorder_topology, conditionally_complete_linorder, no_top} => 'b::{linorder_topology, conditionally_complete_linorder}" assumes mono: "!!a b. a ∈ I ==> b ∈ I ==> x < a ==> a ≤ b ==> f a ≤ f b" and bnd: "!!a. a ∈ I ==> x < a ==> K ≤ f a" shows "(f ---> Inf (f ({x<..} ∩ I))) (at x within ({x<..} ∩ I))"proof (cases "{x<..} ∩ I = {}") case True then show ?thesis by simpnext case False show ?thesis proof (rule order_tendstoI) fix a assume a: "a < Inf (f ({x<..} ∩ I))" { fix y assume "y ∈ {x<..} ∩ I" with False bnd have "Inf (f ({x<..} ∩ I)) ≤ f y" by (auto intro: cInf_lower) with a have "a < f y" by (blast intro: less_le_trans) } then show "eventually (λx. a < f x) (at x within ({x<..} ∩ I))" by (auto simp: eventually_at_filter intro: exI[of _ 1] zero_less_one) next fix a assume "Inf (f ({x<..} ∩ I)) < a" from cInf_lessD[OF _ this] False obtain y where y: "x < y" "y ∈ I" "f y < a" by auto then have "eventually (λx. x ∈ I --> f x < a) (at_right x)" unfolding eventually_at_right by (metis less_imp_le le_less_trans mono) then show "eventually (λx. f x < a) (at x within ({x<..} ∩ I))" unfolding eventually_at_filter by eventually_elim simp qedqedtext{* Another limit point characterization. *}lemma islimpt_sequential: fixes x :: "'a::first_countable_topology" shows "x islimpt S <-> (∃f. (∀n::nat. f n ∈ S - {x}) ∧ (f ---> x) sequentially)" (is "?lhs = ?rhs")proof assume ?lhs from countable_basis_at_decseq[of x] guess A . note A = this def f ≡ "λn. SOME y. y ∈ S ∧ y ∈ A n ∧ x ≠ y" { fix n from ?lhs have "∃y. y ∈ S ∧ y ∈ A n ∧ x ≠ y" unfolding islimpt_def using A(1,2)[of n] by auto then have "f n ∈ S ∧ f n ∈ A n ∧ x ≠ f n" unfolding f_def by (rule someI_ex) then have "f n ∈ S" "f n ∈ A n" "x ≠ f n" by auto } then have "∀n. f n ∈ S - {x}" by auto moreover have "(λn. f n) ----> x" proof (rule topological_tendstoI) fix S assume "open S" "x ∈ S" from A(3)[OF this] !!n. f n ∈ A n show "eventually (λx. f x ∈ S) sequentially" by (auto elim!: eventually_elim1) qed ultimately show ?rhs by fastnext assume ?rhs then obtain f :: "nat => 'a" where f: "!!n. f n ∈ S - {x}" and lim: "f ----> x" by auto show ?lhs unfolding islimpt_def proof safe fix T assume "open T" "x ∈ T" from lim[THEN topological_tendstoD, OF this] f show "∃y∈S. y ∈ T ∧ y ≠ x" unfolding eventually_sequentially by auto qedqedlemma Lim_null: fixes f :: "'a => 'b::real_normed_vector" shows "(f ---> l) net <-> ((λx. f(x) - l) ---> 0) net" by (simp add: Lim dist_norm)lemma Lim_null_comparison: fixes f :: "'a => 'b::real_normed_vector" assumes "eventually (λx. norm (f x) ≤ g x) net" "(g ---> 0) net" shows "(f ---> 0) net" using assms(2)proof (rule metric_tendsto_imp_tendsto) show "eventually (λx. dist (f x) 0 ≤ dist (g x) 0) net" using assms(1) by (rule eventually_elim1) (simp add: dist_norm)qedlemma Lim_transform_bound: fixes f :: "'a => 'b::real_normed_vector" and g :: "'a => 'c::real_normed_vector" assumes "eventually (λn. norm (f n) ≤ norm (g n)) net" and "(g ---> 0) net" shows "(f ---> 0) net" using assms(1) tendsto_norm_zero [OF assms(2)] by (rule Lim_null_comparison)text{* Deducing things about the limit from the elements. *}lemma Lim_in_closed_set: assumes "closed S" and "eventually (λx. 
f(x) ∈ S) net" and "¬ trivial_limit net" "(f ---> l) net" shows "l ∈ S"proof (rule ccontr) assume "l ∉ S" with closed S have "open (- S)" "l ∈ - S" by (simp_all add: open_Compl) with assms(4) have "eventually (λx. f x ∈ - S) net" by (rule topological_tendstoD) with assms(2) have "eventually (λx. False) net" by (rule eventually_elim2) simp with assms(3) show "False" by (simp add: eventually_False)qedtext{* Need to prove closed(cball(x,e)) before deducing this as a corollary. *}lemma Lim_dist_ubound: assumes "¬(trivial_limit net)" and "(f ---> l) net" and "eventually (λx. dist a (f x) ≤ e) net" shows "dist a l ≤ e"proof - have "dist a l ∈ {..e}" proof (rule Lim_in_closed_set) show "closed {..e}" by simp show "eventually (λx. dist a (f x) ∈ {..e}) net" by (simp add: assms) show "¬ trivial_limit net" by fact show "((λx. dist a (f x)) ---> dist a l) net" by (intro tendsto_intros assms) qed then show ?thesis by simpqedlemma Lim_norm_ubound: fixes f :: "'a => 'b::real_normed_vector" assumes "¬(trivial_limit net)" "(f ---> l) net" "eventually (λx. norm(f x) ≤ e) net" shows "norm(l) ≤ e"proof - have "norm l ∈ {..e}" proof (rule Lim_in_closed_set) show "closed {..e}" by simp show "eventually (λx. norm (f x) ∈ {..e}) net" by (simp add: assms) show "¬ trivial_limit net" by fact show "((λx. norm (f x)) ---> norm l) net" by (intro tendsto_intros assms) qed then show ?thesis by simpqedlemma Lim_norm_lbound: fixes f :: "'a => 'b::real_normed_vector" assumes "¬ trivial_limit net" and "(f ---> l) net" and "eventually (λx. e ≤ norm (f x)) net" shows "e ≤ norm l"proof - have "norm l ∈ {e..}" proof (rule Lim_in_closed_set) show "closed {e..}" by simp show "eventually (λx. norm (f x) ∈ {e..}) net" by (simp add: assms) show "¬ trivial_limit net" by fact show "((λx. norm (f x)) ---> norm l) net" by (intro tendsto_intros assms) qed then show ?thesis by simpqedtext{* Limit under bilinear function *}lemma Lim_bilinear: assumes "(f ---> l) net" and "(g ---> m) net" and "bounded_bilinear h" shows "((λx. h (f x) (g x)) ---> (h l m)) net" using bounded_bilinear h (f ---> l) net (g ---> m) net by (rule bounded_bilinear.tendsto)text{* These are special for limits out of the same vector space. *}lemma Lim_within_id: "(id ---> a) (at a within s)" unfolding id_def by (rule tendsto_ident_at)lemma Lim_at_id: "(id ---> a) (at a)" unfolding id_def by (rule tendsto_ident_at)lemma Lim_at_zero: fixes a :: "'a::real_normed_vector" and l :: "'b::topological_space" shows "(f ---> l) (at a) <-> ((λx. f(a + x)) ---> l) (at 0)" using LIM_offset_zero LIM_offset_zero_cancel ..text{* It's also sometimes useful to extract the limit point from the filter. *}abbreviation netlimit :: "'a::t2_space filter => 'a" where "netlimit F ≡ Lim F (λx. x)"lemma netlimit_within: "¬ trivial_limit (at a within S) ==> netlimit (at a within S) = a" by (rule tendsto_Lim) (auto intro: tendsto_intros)lemma netlimit_at: fixes a :: "'a::{perfect_space,t2_space}" shows "netlimit (at a) = a" using netlimit_within [of a UNIV] by simplemma lim_within_interior: "x ∈ interior S ==> (f ---> l) (at x within S) <-> (f ---> l) (at x)" by (metis at_within_interior)lemma netlimit_within_interior: fixes x :: "'a::{t2_space,perfect_space}" assumes "x ∈ interior S" shows "netlimit (at x within S) = x" using assms by (metis at_within_interior netlimit_at)text{* Transformation of limit. *}lemma Lim_transform: fixes f g :: "'a::type => 'b::real_normed_vector" assumes "((λx. 
f x - g x) ---> 0) net" "(f ---> l) net" shows "(g ---> l) net" using tendsto_diff [OF assms(2) assms(1)] by simplemma Lim_transform_eventually: "eventually (λx. f x = g x) net ==> (f ---> l) net ==> (g ---> l) net" apply (rule topological_tendstoI) apply (drule (2) topological_tendstoD) apply (erule (1) eventually_elim2, simp) donelemma Lim_transform_within: assumes "0 < d" and "∀x'∈S. 0 < dist x' x ∧ dist x' x < d --> f x' = g x'" and "(f ---> l) (at x within S)" shows "(g ---> l) (at x within S)"proof (rule Lim_transform_eventually) show "eventually (λx. f x = g x) (at x within S)" using assms(1,2) by (auto simp: dist_nz eventually_at) show "(f ---> l) (at x within S)" by factqedlemma Lim_transform_at: assumes "0 < d" and "∀x'. 0 < dist x' x ∧ dist x' x < d --> f x' = g x'" and "(f ---> l) (at x)" shows "(g ---> l) (at x)" using _ assms(3)proof (rule Lim_transform_eventually) show "eventually (λx. f x = g x) (at x)" unfolding eventually_at2 using assms(1,2) by autoqedtext{* Common case assuming being away from some crucial point like 0. *}lemma Lim_transform_away_within: fixes a b :: "'a::t1_space" assumes "a ≠ b" and "∀x∈S. x ≠ a ∧ x ≠ b --> f x = g x" and "(f ---> l) (at a within S)" shows "(g ---> l) (at a within S)"proof (rule Lim_transform_eventually) show "(f ---> l) (at a within S)" by fact show "eventually (λx. f x = g x) (at a within S)" unfolding eventually_at_topological by (rule exI [where x="- {b}"], simp add: open_Compl assms)qedlemma Lim_transform_away_at: fixes a b :: "'a::t1_space" assumes ab: "a≠b" and fg: "∀x. x ≠ a ∧ x ≠ b --> f x = g x" and fl: "(f ---> l) (at a)" shows "(g ---> l) (at a)" using Lim_transform_away_within[OF ab, of UNIV f g l] fg fl by simptext{* Alternatively, within an open set. *}lemma Lim_transform_within_open: assumes "open S" and "a ∈ S" and "∀x∈S. x ≠ a --> f x = g x" and "(f ---> l) (at a)" shows "(g ---> l) (at a)"proof (rule Lim_transform_eventually) show "eventually (λx. f x = g x) (at a)" unfolding eventually_at_topological using assms(1,2,3) by auto show "(f ---> l) (at a)" by factqedtext{* A congruence rule allowing us to transform limits assuming not at point. *}(* FIXME: Only one congruence rule for tendsto can be used at a time! *)lemma Lim_cong_within(*[cong add]*): assumes "a = b" and "x = y" and "S = T" and "!!x. x ≠ b ==> x ∈ T ==> f x = g x" shows "(f ---> x) (at a within S) <-> (g ---> y) (at b within T)" unfolding tendsto_def eventually_at_topological using assms by simplemma Lim_cong_at(*[cong add]*): assumes "a = b" "x = y" and "!!x. x ≠ a ==> f x = g x" shows "((λx. f x) ---> x) (at a) <-> ((g ---> y) (at a))" unfolding tendsto_def eventually_at_topological using assms by simptext{* Useful lemmas on closure and set of possible sequential limits.*}lemma closure_sequential: fixes l :: "'a::first_countable_topology" shows "l ∈ closure S <-> (∃x. (∀n. x n ∈ S) ∧ (x ---> l) sequentially)" (is "?lhs = ?rhs")proof assume "?lhs" moreover { assume "l ∈ S" then have "?rhs" using tendsto_const[of l sequentially] by auto } moreover { assume "l islimpt S" then have "?rhs" unfolding islimpt_sequential by auto } ultimately show "?rhs" unfolding closure_def by autonext assume "?rhs" then show "?lhs" unfolding closure_def islimpt_sequential by autoqedlemma closed_sequential_limits: fixes S :: "'a::first_countable_topology set" shows "closed S <-> (∀x l. (∀n. 
x n ∈ S) ∧ (x ---> l) sequentially --> l ∈ S)" unfolding closed_limpt using closure_sequential [where 'a='a] closure_closed [where 'a='a] closed_limpt [where 'a='a] islimpt_sequential [where 'a='a] mem_delete [where 'a='a] by metislemma closure_approachable: fixes S :: "'a::metric_space set" shows "x ∈ closure S <-> (∀e>0. ∃y∈S. dist y x < e)" apply (auto simp add: closure_def islimpt_approachable) apply (metis dist_self) donelemma closed_approachable: fixes S :: "'a::metric_space set" shows "closed S ==> (∀e>0. ∃y∈S. dist y x < e) <-> x ∈ S" by (metis closure_closed closure_approachable)lemma closure_contains_Inf: fixes S :: "real set" assumes "S ≠ {}" "∀x∈S. B ≤ x" shows "Inf S ∈ closure S"proof - have *: "∀x∈S. Inf S ≤ x" using cInf_lower_EX[of _ S] assms by metis { fix e :: real assume "e > 0" then have "Inf S < Inf S + e" by simp with assms obtain x where "x ∈ S" "x < Inf S + e" by (subst (asm) cInf_less_iff[of _ B]) auto with * have "∃x∈S. dist x (Inf S) < e" by (intro bexI[of _ x]) (auto simp add: dist_real_def) } then show ?thesis unfolding closure_approachable by autoqedlemma closed_contains_Inf: fixes S :: "real set" assumes "S ≠ {}" "∀x∈S. B ≤ x" and "closed S" shows "Inf S ∈ S" by (metis closure_contains_Inf closure_closed assms)lemma not_trivial_limit_within_ball: "¬ trivial_limit (at x within S) <-> (∀e>0. S ∩ ball x e - {x} ≠ {})" (is "?lhs = ?rhs")proof - { assume "?lhs" { fix e :: real assume "e > 0" then obtain y where "y ∈ S - {x}" and "dist y x < e" using ?lhs not_trivial_limit_within[of x S] closure_approachable[of x "S - {x}"] by auto then have "y ∈ S ∩ ball x e - {x}" unfolding ball_def by (simp add: dist_commute) then have "S ∩ ball x e - {x} ≠ {}" by blast } then have "?rhs" by auto } moreover { assume "?rhs" { fix e :: real assume "e > 0" then obtain y where "y ∈ S ∩ ball x e - {x}" using ?rhs by blast then have "y ∈ S - {x}" and "dist y x < e" unfolding ball_def by (simp_all add: dist_commute) then have "∃y ∈ S - {x}. dist y x < e" by auto } then have "?lhs" using not_trivial_limit_within[of x S] closure_approachable[of x "S - {x}"] by auto } ultimately show ?thesis by autoqedsubsection {* Infimum Distance *}definition "infdist x A = (if A = {} then 0 else Inf {dist x a|a. a ∈ A})"lemma infdist_notempty: "A ≠ {} ==> infdist x A = Inf {dist x a|a. a ∈ A}" by (simp add: infdist_def)lemma infdist_nonneg: "0 ≤ infdist x A" by (auto simp add: infdist_def intro: cInf_greatest)lemma infdist_le: assumes "a ∈ A" and "d = dist x a" shows "infdist x A ≤ d" using assms by (auto intro!: cInf_lower[where z=0] simp add: infdist_def)lemma infdist_zero[simp]: assumes "a ∈ A" shows "infdist a A = 0"proof - from infdist_le[OF assms, of "dist a a"] have "infdist a A ≤ 0" by auto with infdist_nonneg[of a A] assms show "infdist a A = 0" by autoqedlemma infdist_triangle: "infdist x A ≤ infdist y A + dist x y"proof (cases "A = {}") case True then show ?thesis by (simp add: infdist_def)next case False then obtain a where "a ∈ A" by auto have "infdist x A ≤ Inf {dist x y + dist y a |a. a ∈ A}" proof (rule cInf_greatest) from A ≠ {} show "{dist x y + dist y a |a. a ∈ A} ≠ {}" by simp fix d assume "d ∈ {dist x y + dist y a |a. a ∈ A}" then obtain a where d: "d = dist x y + dist y a" "a ∈ A" by auto show "infdist x A ≤ d" unfolding infdist_notempty[OF A ≠ {}] proof (rule cInf_lower2) show "dist x a ∈ {dist x a |a. a ∈ A}" using a ∈ A by auto show "dist x a ≤ d" unfolding d by (rule dist_triangle) fix d assume "d ∈ {dist x a |a. 
a ∈ A}" then obtain a where "a ∈ A" "d = dist x a" by auto then show "infdist x A ≤ d" by (rule infdist_le) qed qed also have "… = dist x y + infdist y A" proof (rule cInf_eq, safe) fix a assume "a ∈ A" then show "dist x y + infdist y A ≤ dist x y + dist y a" by (auto intro: infdist_le) next fix i assume inf: "!!d. d ∈ {dist x y + dist y a |a. a ∈ A} ==> i ≤ d" then have "i - dist x y ≤ infdist y A" unfolding infdist_notempty[OF A ≠ {}] using a ∈ A by (intro cInf_greatest) (auto simp: field_simps) then show "i ≤ dist x y + infdist y A" by simp qed finally show ?thesis by simpqedlemma in_closure_iff_infdist_zero: assumes "A ≠ {}" shows "x ∈ closure A <-> infdist x A = 0"proof assume "x ∈ closure A" show "infdist x A = 0" proof (rule ccontr) assume "infdist x A ≠ 0" with infdist_nonneg[of x A] have "infdist x A > 0" by auto then have "ball x (infdist x A) ∩ closure A = {}" apply auto apply (metis 0 < infdist x A x ∈ closure A closure_approachable dist_commute eucl_less_not_refl euclidean_trans(2) infdist_le) done then have "x ∉ closure A" by (metis 0 < infdist x A centre_in_ball disjoint_iff_not_equal) then show False using x ∈ closure A by simp qednext assume x: "infdist x A = 0" then obtain a where "a ∈ A" by atomize_elim (metis all_not_in_conv assms) show "x ∈ closure A" unfolding closure_approachable apply safe proof (rule ccontr) fix e :: real assume "e > 0" assume "¬ (∃y∈A. dist y x < e)" then have "infdist x A ≥ e" using a ∈ A unfolding infdist_def by (force simp: dist_commute intro: cInf_greatest) with x e > 0 show False by auto qedqedlemma in_closed_iff_infdist_zero: assumes "closed A" "A ≠ {}" shows "x ∈ A <-> infdist x A = 0"proof - have "x ∈ closure A <-> infdist x A = 0" by (rule in_closure_iff_infdist_zero) fact with assms show ?thesis by simpqedlemma tendsto_infdist [tendsto_intros]: assumes f: "(f ---> l) F" shows "((λx. infdist (f x) A) ---> infdist l A) F"proof (rule tendstoI) fix e ::real assume "e > 0" from tendstoD[OF f this] show "eventually (λx. dist (infdist (f x) A) (infdist l A) < e) F" proof (eventually_elim) fix x from infdist_triangle[of l A "f x"] infdist_triangle[of "f x" A l] have "dist (infdist (f x) A) (infdist l A) ≤ dist (f x) l" by (simp add: dist_commute dist_real_def) also assume "dist (f x) l < e" finally show "dist (infdist (f x) A) (infdist l A) < e" . qedqedtext{* Some other lemmas about sequences. *}lemma sequentially_offset: (* TODO: move to Topological_Spaces.thy *) assumes "eventually (λi. P i) sequentially" shows "eventually (λi. P (i + k)) sequentially" using assms by (rule eventually_sequentially_seg [THEN iffD2])lemma seq_offset_neg: (* TODO: move to Topological_Spaces.thy *) "(f ---> l) sequentially ==> ((λi. f(i - k)) ---> l) sequentially" apply (erule filterlim_compose) apply (simp add: filterlim_def le_sequentially eventually_filtermap eventually_sequentially) apply arith donelemma seq_harmonic: "((λn. inverse (real n)) ---> 0) sequentially" using LIMSEQ_inverse_real_of_nat by (rule LIMSEQ_imp_Suc) (* TODO: move to Limits.thy *)subsection {* More properties of closed balls *}lemma closed_cball: "closed (cball x e)" unfolding cball_def closed_def unfolding Collect_neg_eq [symmetric] not_le apply (clarsimp simp add: open_dist, rename_tac y) apply (rule_tac x="dist x y - e" in exI, clarsimp) apply (rename_tac x') apply (cut_tac x=x and y=x' and z=y in dist_triangle) apply simp donelemma open_contains_cball: "open S <-> (∀x∈S. ∃e>0. cball x e ⊆ S)"proof - { fix x and e::real assume "x∈S" "e>0" "ball x e ⊆ S" then have "∃d>0. 
cball x d ⊆ S" unfolding subset_eq by (rule_tac x="e/2" in exI, auto) } moreover { fix x and e::real assume "x∈S" "e>0" "cball x e ⊆ S" then have "∃d>0. ball x d ⊆ S" unfolding subset_eq apply(rule_tac x="e/2" in exI) apply auto done } ultimately show ?thesis unfolding open_contains_ball by autoqedlemma open_contains_cball_eq: "open S ==> (∀x. x ∈ S <-> (∃e>0. cball x e ⊆ S))" by (metis open_contains_cball subset_eq order_less_imp_le centre_in_cball)lemma mem_interior_cball: "x ∈ interior S <-> (∃e>0. cball x e ⊆ S)" apply (simp add: interior_def, safe) apply (force simp add: open_contains_cball) apply (rule_tac x="ball x e" in exI) apply (simp add: subset_trans [OF ball_subset_cball]) donelemma islimpt_ball: fixes x y :: "'a::{real_normed_vector,perfect_space}" shows "y islimpt ball x e <-> 0 < e ∧ y ∈ cball x e" (is "?lhs = ?rhs")proof assume "?lhs" { assume "e ≤ 0" then have *:"ball x e = {}" using ball_eq_empty[of x e] by auto have False using ?lhs unfolding * using islimpt_EMPTY[of y] by auto } then have "e > 0" by (metis not_less) moreover have "y ∈ cball x e" using closed_cball[of x e] islimpt_subset[of y "ball x e" "cball x e"] ball_subset_cball[of x e] ?lhs unfolding closed_limpt by auto ultimately show "?rhs" by autonext assume "?rhs" then have "e > 0" by auto { fix d :: real assume "d > 0" have "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d" proof (cases "d ≤ dist x y") case True then show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d" proof (cases "x = y") case True then have False using d ≤ dist x y d>0 by auto then show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d" by auto next case False have "dist x (y - (d / (2 * dist y x)) *⇩R (y - x)) = norm (x - y + (d / (2 * norm (y - x))) *⇩R (y - x))" unfolding mem_cball mem_ball dist_norm diff_diff_eq2 diff_add_eq[symmetric] by auto also have "… = ¦- 1 + d / (2 * norm (x - y))¦ * norm (x - y)" using scaleR_left_distrib[of "- 1" "d / (2 * norm (y - x))", symmetric, of "y - x"] unfolding scaleR_minus_left scaleR_one by (auto simp add: norm_minus_commute) also have "… = ¦- norm (x - y) + d / 2¦" unfolding abs_mult_pos[of "norm (x - y)", OF norm_ge_zero[of "x - y"]] unfolding distrib_right using x≠y[unfolded dist_nz, unfolded dist_norm] by auto also have "… ≤ e - d/2" using d ≤ dist x y and d>0 and ?rhs by (auto simp add: dist_norm) finally have "y - (d / (2 * dist y x)) *⇩R (y - x) ∈ ball x e" using d>0 by auto moreover have "(d / (2*dist y x)) *⇩R (y - x) ≠ 0" using x≠y[unfolded dist_nz] d>0 unfolding scaleR_eq_0_iff by (auto simp add: dist_commute) moreover have "dist (y - (d / (2 * dist y x)) *⇩R (y - x)) y < d" unfolding dist_norm apply simp unfolding norm_minus_cancel using d > 0 x≠y[unfolded dist_nz] dist_commute[of x y] unfolding dist_norm apply auto done ultimately show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d" apply (rule_tac x = "y - (d / (2*dist y x)) *⇩R (y - x)" in bexI) apply auto done qed next case False then have "d > dist x y" by auto show "∃x' ∈ ball x e. x' ≠ y ∧ dist x' y < d" proof (cases "x = y") case True obtain z where **: "z ≠ y" "dist z y < min e d" using perfect_choose_dist[of "min e d" y] using d > 0 e>0 by auto show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d" unfolding x = y using z ≠ y ** apply (rule_tac x=z in bexI) apply (auto simp add: dist_commute) done next case False then show "∃x'∈ball x e. 
lemma islimpt_ball:
  fixes x y :: "'a::{real_normed_vector,perfect_space}"
  shows "y islimpt ball x e <-> 0 < e ∧ y ∈ cball x e"
  (is "?lhs = ?rhs")
proof
  assume "?lhs"
  { assume "e ≤ 0"
    then have *: "ball x e = {}"
      using ball_eq_empty[of x e] by auto
    have False using ?lhs
      unfolding * using islimpt_EMPTY[of y] by auto }
  then have "e > 0" by (metis not_less)
  moreover
  have "y ∈ cball x e"
    using closed_cball[of x e] islimpt_subset[of y "ball x e" "cball x e"]
      ball_subset_cball[of x e] ?lhs
    unfolding closed_limpt by auto
  ultimately show "?rhs" by auto
next
  assume "?rhs"
  then have "e > 0" by auto
  { fix d :: real
    assume "d > 0"
    have "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d"
    proof (cases "d ≤ dist x y")
      case True
      then show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d"
      proof (cases "x = y")
        case True
        then have False using d ≤ dist x y d>0 by auto
        then show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d" by auto
      next
        case False
        have "dist x (y - (d / (2 * dist y x)) *⇩R (y - x)) =
          norm (x - y + (d / (2 * norm (y - x))) *⇩R (y - x))"
          unfolding mem_cball mem_ball dist_norm diff_diff_eq2 diff_add_eq[symmetric]
          by auto
        also have "… = ¦- 1 + d / (2 * norm (x - y))¦ * norm (x - y)"
          using scaleR_left_distrib[of "- 1" "d / (2 * norm (y - x))", symmetric, of "y - x"]
          unfolding scaleR_minus_left scaleR_one
          by (auto simp add: norm_minus_commute)
        also have "… = ¦- norm (x - y) + d / 2¦"
          unfolding abs_mult_pos[of "norm (x - y)", OF norm_ge_zero[of "x - y"]]
          unfolding distrib_right using x≠y[unfolded dist_nz, unfolded dist_norm] by auto
        also have "… ≤ e - d/2" using d ≤ dist x y and d>0 and ?rhs
          by (auto simp add: dist_norm)
        finally have "y - (d / (2 * dist y x)) *⇩R (y - x) ∈ ball x e" using d>0 by auto
        moreover
        have "(d / (2*dist y x)) *⇩R (y - x) ≠ 0"
          using x≠y[unfolded dist_nz] d>0
          unfolding scaleR_eq_0_iff by (auto simp add: dist_commute)
        moreover
        have "dist (y - (d / (2 * dist y x)) *⇩R (y - x)) y < d"
          unfolding dist_norm
          apply simp
          unfolding norm_minus_cancel
          using d > 0 x≠y[unfolded dist_nz] dist_commute[of x y]
          unfolding dist_norm
          apply auto
          done
        ultimately show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d"
          apply (rule_tac x = "y - (d / (2*dist y x)) *⇩R (y - x)" in bexI)
          apply auto
          done
      qed
    next
      case False
      then have "d > dist x y" by auto
      show "∃x' ∈ ball x e. x' ≠ y ∧ dist x' y < d"
      proof (cases "x = y")
        case True
        obtain z where **: "z ≠ y" "dist z y < min e d"
          using perfect_choose_dist[of "min e d" y]
          using d > 0 e>0 by auto
        show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d"
          unfolding x = y
          using z ≠ y **
          apply (rule_tac x=z in bexI)
          apply (auto simp add: dist_commute)
          done
      next
        case False
        then show "∃x'∈ball x e. x' ≠ y ∧ dist x' y < d"
          using d>0 d > dist x y ?rhs
          apply (rule_tac x=x in bexI)
          apply auto
          done
      qed
    qed }
  then show "?lhs"
    unfolding mem_cball islimpt_approachable mem_ball by auto
qed

lemma closure_ball_lemma:
  fixes x y :: "'a::real_normed_vector"
  assumes "x ≠ y"
  shows "y islimpt ball x (dist x y)"
proof (rule islimptI)
  fix T
  assume "y ∈ T" "open T"
  then obtain r where "0 < r" "∀z. dist z y < r --> z ∈ T"
    unfolding open_dist by fast
  (* choose point between x and y, within distance r of y. *)
  def k ≡ "min 1 (r / (2 * dist x y))"
  def z ≡ "y + scaleR k (x - y)"
  have z_def2: "z = x + scaleR (1 - k) (y - x)"
    unfolding z_def by (simp add: algebra_simps)
  have "dist z y < r"
    unfolding z_def k_def using 0 < r
    by (simp add: dist_norm min_def)
  then have "z ∈ T" using ∀z. dist z y < r --> z ∈ T by simp
  have "dist x z < dist x y"
    unfolding z_def2 dist_norm
    apply (simp add: norm_minus_commute)
    apply (simp only: dist_norm [symmetric])
    apply (subgoal_tac "¦1 - k¦ * dist x y < 1 * dist x y", simp)
    apply (rule mult_strict_right_mono)
    apply (simp add: k_def divide_pos_pos zero_less_dist_iff 0 < r x ≠ y)
    apply (simp add: zero_less_dist_iff x ≠ y)
    done
  then have "z ∈ ball x (dist x y)" by simp
  have "z ≠ y"
    unfolding z_def k_def using x ≠ y 0 < r
    by (simp add: min_def)
  show "∃z∈ball x (dist x y). z ∈ T ∧ z ≠ y"
    using z ∈ ball x (dist x y) z ∈ T z ≠ y by fast
qed

lemma closure_ball:
  fixes x :: "'a::real_normed_vector"
  shows "0 < e ==> closure (ball x e) = cball x e"
  apply (rule equalityI)
  apply (rule closure_minimal)
  apply (rule ball_subset_cball)
  apply (rule closed_cball)
  apply (rule subsetI, rename_tac y)
  apply (simp add: le_less [where 'a=real])
  apply (erule disjE)
  apply (rule subsetD [OF closure_subset], simp)
  apply (simp add: closure_def)
  apply clarify
  apply (rule closure_ball_lemma)
  apply (simp add: zero_less_dist_iff)
  done
(* In a trivial vector space, this fails for e = 0. *)
lemma interior_cball:
  fixes x :: "'a::{real_normed_vector, perfect_space}"
  shows "interior (cball x e) = ball x e"
proof (cases "e ≥ 0")
  case False note cs = this
  from cs have "ball x e = {}"
    using ball_empty[of e x] by auto
  moreover
  { fix y
    assume "y ∈ cball x e"
    then have False
      unfolding mem_cball using dist_nz[of x y] cs by auto }
  then have "cball x e = {}" by auto
  then have "interior (cball x e) = {}"
    using interior_empty by auto
  ultimately show ?thesis by blast
next
  case True note cs = this
  have "ball x e ⊆ cball x e"
    using ball_subset_cball by auto
  moreover
  { fix S y
    assume as: "S ⊆ cball x e" "open S" "y∈S"
    then obtain d where "d>0" and d: "∀x'. dist x' y < d --> x' ∈ S"
      unfolding open_dist by blast
    then obtain xa where xa_y: "xa ≠ y" and xa: "dist xa y < d"
      using perfect_choose_dist [of d] by auto
    have "xa ∈ S"
      using d[THEN spec[where x = xa]]
      using xa by (auto simp add: dist_commute)
    then have xa_cball: "xa ∈ cball x e"
      using as(1) by auto
    then have "y ∈ ball x e"
    proof (cases "x = y")
      case True
      then have "e > 0"
        using xa_y[unfolded dist_nz] xa_cball[unfolded mem_cball]
        by (auto simp add: dist_commute)
      then show "y ∈ ball x e" using x = y by simp
    next
      case False
      have "dist (y + (d / 2 / dist y x) *⇩R (y - x)) y < d"
        unfolding dist_norm
        using d>0 norm_ge_zero[of "y - x"] x ≠ y by auto
      then have *: "y + (d / 2 / dist y x) *⇩R (y - x) ∈ cball x e"
        using d as(1)[unfolded subset_eq] by blast
      have "y - x ≠ 0" using x ≠ y by auto
      then have **: "d / (2 * norm (y - x)) > 0"
        unfolding zero_less_norm_iff[symmetric]
        using d>0 divide_pos_pos[of d "2*norm (y - x)"] by auto
      have "dist (y + (d / 2 / dist y x) *⇩R (y - x)) x =
        norm (y + (d / (2 * norm (y - x))) *⇩R y - (d / (2 * norm (y - x))) *⇩R x - x)"
        by (auto simp add: dist_norm algebra_simps)
      also have "… = norm ((1 + d / (2 * norm (y - x))) *⇩R (y - x))"
        by (auto simp add: algebra_simps)
      also have "… = ¦1 + d / (2 * norm (y - x))¦ * norm (y - x)"
        using ** by auto
      also have "… = (dist y x) + d/2"
        using ** by (auto simp add: distrib_right dist_norm)
      finally have "e ≥ dist x y +d/2"
        using *[unfolded mem_cball] by (auto simp add: dist_commute)
      then show "y ∈ ball x e"
        unfolding mem_ball using d>0 by auto
    qed }
  then have "∀S ⊆ cball x e. open S --> S ⊆ ball x e" by auto
  ultimately show ?thesis
    using interior_unique[of "ball x e" "cball x e"]
    using open_ball[of x e] by auto
qed

lemma frontier_ball:
  fixes a :: "'a::real_normed_vector"
  shows "0 < e ==> frontier(ball a e) = {x. dist a x = e}"
  apply (simp add: frontier_def closure_ball interior_open order_less_imp_le)
  apply (simp add: set_eq_iff)
  apply arith
  done

lemma frontier_cball:
  fixes a :: "'a::{real_normed_vector, perfect_space}"
  shows "frontier (cball a e) = {x. dist a x = e}"
  apply (simp add: frontier_def interior_cball closed_cball order_less_imp_le)
  apply (simp add: set_eq_iff)
  apply arith
  done

lemma cball_eq_empty: "cball x e = {} <-> e < 0"
  apply (simp add: set_eq_iff not_le)
  apply (metis zero_le_dist dist_self order_less_le_trans)
  done

lemma cball_empty: "e < 0 ==> cball x e = {}"
  by (simp add: cball_eq_empty)

lemma cball_eq_sing:
  fixes x :: "'a::{metric_space,perfect_space}"
  shows "cball x e = {x} <-> e = 0"
proof (rule linorder_cases)
  assume e: "0 < e"
  obtain a where "a ≠ x" "dist a x < e"
    using perfect_choose_dist [OF e] by auto
  then have "a ≠ x" "dist x a ≤ e"
    by (auto simp add: dist_commute)
  with e show ?thesis by (auto simp add: set_eq_iff)
qed auto

lemma cball_sing:
  fixes x :: "'a::metric_space"
  shows "e = 0 ==> cball x e = {x}"
  by (auto simp add: set_eq_iff)
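
text {* Note the degenerate cases just recorded: cball x e is empty for
  negative e (cball_eq_empty) and collapses to the singleton {x} for e = 0
  (cball_sing); in a perfect space the converse also holds (cball_eq_sing).
  Boundedness, introduced next, can be rephrased via such closed balls
  (bounded_subset_cball below). *}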
norm x ≤ a)" unfolding bounded_any_center [where a=0] by (simp add: dist_norm)lemma bounded_realI: assumes "∀x∈s. abs (x::real) ≤ B" shows "bounded s" unfolding bounded_def dist_real_def apply (rule_tac x=0 in exI) using assms apply auto donelemma bounded_empty [simp]: "bounded {}" by (simp add: bounded_def)lemma bounded_subset: "bounded T ==> S ⊆ T ==> bounded S" by (metis bounded_def subset_eq)lemma bounded_interior[intro]: "bounded S ==> bounded(interior S)" by (metis bounded_subset interior_subset)lemma bounded_closure[intro]: assumes "bounded S" shows "bounded (closure S)"proof - from assms obtain x and a where a: "∀y∈S. dist x y ≤ a" unfolding bounded_def by auto { fix y assume "y ∈ closure S" then obtain f where f: "∀n. f n ∈ S" "(f ---> y) sequentially" unfolding closure_sequential by auto have "∀n. f n ∈ S --> dist x (f n) ≤ a" using a by simp then have "eventually (λn. dist x (f n) ≤ a) sequentially" by (rule eventually_mono, simp add: f(1)) have "dist x y ≤ a" apply (rule Lim_dist_ubound [of sequentially f]) apply (rule trivial_limit_sequentially) apply (rule f(2)) apply fact done } then show ?thesis unfolding bounded_def by autoqedlemma bounded_cball[simp,intro]: "bounded (cball x e)" apply (simp add: bounded_def) apply (rule_tac x=x in exI) apply (rule_tac x=e in exI) apply auto donelemma bounded_ball[simp,intro]: "bounded (ball x e)" by (metis ball_subset_cball bounded_cball bounded_subset)lemma bounded_Un[simp]: "bounded (S ∪ T) <-> bounded S ∧ bounded T" apply (auto simp add: bounded_def) apply (rename_tac x y r s) apply (rule_tac x=x in exI) apply (rule_tac x="max r (dist x y + s)" in exI) apply (rule ballI) apply safe apply (drule (1) bspec) apply simp apply (drule (1) bspec) apply (rule min_max.le_supI2) apply (erule order_trans [OF dist_triangle add_left_mono]) donelemma bounded_Union[intro]: "finite F ==> ∀S∈F. bounded S ==> bounded (\<Union>F)" by (induct rule: finite_induct[of F]) autolemma bounded_UN [intro]: "finite A ==> ∀x∈A. bounded (B x) ==> bounded (\<Union>x∈A. B x)" by (induct set: finite) autolemma bounded_insert [simp]: "bounded (insert x S) <-> bounded S"proof - have "∀y∈{x}. dist x y ≤ 0" by simp then have "bounded {x}" unfolding bounded_def by fast then show ?thesis by (metis insert_is_Un bounded_Un)qedlemma finite_imp_bounded [intro]: "finite S ==> bounded S" by (induct set: finite) simp_alllemma bounded_pos: "bounded S <-> (∃b>0. ∀x∈ S. norm x ≤ b)" apply (simp add: bounded_iff) apply (subgoal_tac "!!x (y::real). 0 < 1 + abs y ∧ (x ≤ y --> x ≤ 1 + abs y)") apply metis apply arith donelemma Bseq_eq_bounded: fixes f :: "nat => 'a::real_normed_vector" shows "Bseq f <-> bounded (range f)" unfolding Bseq_def bounded_pos by autolemma bounded_Int[intro]: "bounded S ∨ bounded T ==> bounded (S ∩ T)" by (metis Int_lower1 Int_lower2 bounded_subset)lemma bounded_diff[intro]: "bounded S ==> bounded (S - T)" by (metis Diff_subset bounded_subset)lemma not_bounded_UNIV[simp, intro]: "¬ bounded (UNIV :: 'a::{real_normed_vector, perfect_space} set)"proof (auto simp add: bounded_pos not_le) obtain x :: 'a where "x ≠ 0" using perfect_choose_dist [OF zero_less_one] by fast fix b :: real assume b: "b >0" have b1: "b +1 ≥ 0" using b by simp with x ≠ 0 have "b < norm (scaleR (b + 1) (sgn x))" by (simp add: norm_sgn) then show "∃x::'a. b < norm x" ..qedlemma bounded_linear_image: assumes "bounded S" and "bounded_linear f" shows "bounded (f S)"proof - from assms(1) obtain b where b: "b > 0" "∀x∈S. 
norm x ≤ b" unfolding bounded_pos by auto from assms(2) obtain B where B: "B > 0" "∀x. norm (f x) ≤ B * norm x" using bounded_linear.pos_bounded by (auto simp add: mult_ac) { fix x assume "x ∈ S" then have "norm x ≤ b" using b by auto then have "norm (f x) ≤ B * b" using B(2) apply (erule_tac x=x in allE) apply (metis B(1) B(2) order_trans mult_le_cancel_left_pos) done } then show ?thesis unfolding bounded_pos apply (rule_tac x="b*B" in exI) using b B mult_pos_pos [of b B] apply (auto simp add: mult_commute) doneqedlemma bounded_scaling: fixes S :: "'a::real_normed_vector set" shows "bounded S ==> bounded ((λx. c *⇩R x) S)" apply (rule bounded_linear_image) apply assumption apply (rule bounded_linear_scaleR_right) donelemma bounded_translation: fixes S :: "'a::real_normed_vector set" assumes "bounded S" shows "bounded ((λx. a + x) S)"proof - from assms obtain b where b: "b > 0" "∀x∈S. norm x ≤ b" unfolding bounded_pos by auto { fix x assume "x ∈ S" then have "norm (a + x) ≤ b + norm a" using norm_triangle_ineq[of a x] b by auto } then show ?thesis unfolding bounded_pos using norm_ge_zero[of a] b(1) and add_strict_increasing[of b 0 "norm a"] by (auto intro!: exI[of _ "b + norm a"])qedtext{* Some theorems on sups and infs using the notion "bounded". *}lemma bounded_real: fixes S :: "real set" shows "bounded S <-> (∃a. ∀x∈S. abs x ≤ a)" by (simp add: bounded_iff)lemma bounded_has_Sup: fixes S :: "real set" assumes "bounded S" and "S ≠ {}" shows "∀x∈S. x ≤ Sup S" and "∀b. (∀x∈S. x ≤ b) --> Sup S ≤ b"proof fix x assume "x∈S" then show "x ≤ Sup S" by (metis cSup_upper abs_le_D1 assms(1) bounded_real)next show "∀b. (∀x∈S. x ≤ b) --> Sup S ≤ b" using assms by (metis cSup_least)qedlemma Sup_insert: fixes S :: "real set" shows "bounded S ==> Sup (insert x S) = (if S = {} then x else max x (Sup S))" apply (subst cSup_insert_If) apply (rule bounded_has_Sup(1)[of S, rule_format]) apply (auto simp: sup_max) donelemma Sup_insert_finite: fixes S :: "real set" shows "finite S ==> Sup (insert x S) = (if S = {} then x else max x (Sup S))" apply (rule Sup_insert) apply (rule finite_imp_bounded) apply simp donelemma bounded_has_Inf: fixes S :: "real set" assumes "bounded S" and "S ≠ {}" shows "∀x∈S. x ≥ Inf S" and "∀b. (∀x∈S. x ≥ b) --> Inf S ≥ b"proof fix x assume "x ∈ S" from assms(1) obtain a where a: "∀x∈S. ¦x¦ ≤ a" unfolding bounded_real by auto then show "x ≥ Inf S" using x ∈ S by (metis cInf_lower_EX abs_le_D2 minus_le_iff)next show "∀b. (∀x∈S. x ≥ b) --> Inf S ≥ b" using assms by (metis cInf_greatest)qedlemma Inf_insert: fixes S :: "real set" shows "bounded S ==> Inf (insert x S) = (if S = {} then x else min x (Inf S))" apply (subst cInf_insert_if) apply (rule bounded_has_Inf(1)[of S, rule_format]) apply (auto simp: inf_min) donelemma Inf_insert_finite: fixes S :: "real set" shows "finite S ==> Inf (insert x S) = (if S = {} then x else min x (Inf S))" apply (rule Inf_insert) apply (rule finite_imp_bounded) apply simp donesubsection {* Compactness *}subsubsection {* Bolzano-Weierstrass property *}lemma heine_borel_imp_bolzano_weierstrass: assumes "compact s" and "infinite t" and "t ⊆ s" shows "∃x ∈ s. x islimpt t"proof (rule ccontr) assume "¬ (∃x ∈ s. x islimpt t)" then obtain f where f: "∀x∈s. x ∈ f x ∧ open (f x) ∧ (∀y∈t. y ∈ f x --> y = x)" unfolding islimpt_def using bchoice[of s "λ x T. x ∈ T ∧ open T ∧ (∀y∈t. y ∈ T --> y = x)"] by auto obtain g where g: "g ⊆ {t. ∃x. x ∈ s ∧ t = f x}" "finite g" "s ⊆ \<Union>g" using assms(1)[unfolded compact_eq_heine_borel, THEN spec[where x="{t. ∃x. 
x∈s ∧ t = f x}"]] using f by auto from g(1,3) have g':"∀x∈g. ∃xa ∈ s. x = f xa" by auto { fix x y assume "x ∈ t" "y ∈ t" "f x = f y" then have "x ∈ f x" "y ∈ f x --> y = x" using f[THEN bspec[where x=x]] and t ⊆ s by auto then have "x = y" using f x = f y and f[THEN bspec[where x=y]] and y ∈ t and t ⊆ s by auto } then have "inj_on f t" unfolding inj_on_def by simp then have "infinite (f t)" using assms(2) using finite_imageD by auto moreover { fix x assume "x ∈ t" "f x ∉ g" from g(3) assms(3) x ∈ t obtain h where "h ∈ g" and "x ∈ h" by auto then obtain y where "y ∈ s" "h = f y" using g'[THEN bspec[where x=h]] by auto then have "y = x" using f[THEN bspec[where x=y]] and x∈t and x∈h[unfolded h = f y] by auto then have False using f x ∉ g h ∈ g unfolding h = f y by auto } then have "f t ⊆ g" by auto ultimately show False using g(2) using finite_subset by autoqedlemma acc_point_range_imp_convergent_subsequence: fixes l :: "'a :: first_countable_topology" assumes l: "∀U. l∈U --> open U --> infinite (U ∩ range f)" shows "∃r. subseq r ∧ (f o r) ----> l"proof - from countable_basis_at_decseq[of l] guess A . note A = this def s ≡ "λn i. SOME j. i < j ∧ f j ∈ A (Suc n)" { fix n i have "infinite (A (Suc n) ∩ range f - f{.. i})" using l A by auto then have "∃x. x ∈ A (Suc n) ∩ range f - f{.. i}" unfolding ex_in_conv by (intro notI) simp then have "∃j. f j ∈ A (Suc n) ∧ j ∉ {.. i}" by auto then have "∃a. i < a ∧ f a ∈ A (Suc n)" by (auto simp: not_le) then have "i < s n i" "f (s n i) ∈ A (Suc n)" unfolding s_def by (auto intro: someI2_ex) } note s = this def r ≡ "nat_rec (s 0 0) s" have "subseq r" by (auto simp: r_def s subseq_Suc_iff) moreover have "(λn. f (r n)) ----> l" proof (rule topological_tendstoI) fix S assume "open S" "l ∈ S" with A(3) have "eventually (λi. A i ⊆ S) sequentially" by auto moreover { fix i assume "Suc 0 ≤ i" then have "f (r i) ∈ A i" by (cases i) (simp_all add: r_def s) } then have "eventually (λi. f (r i) ∈ A i) sequentially" by (auto simp: eventually_sequentially) ultimately show "eventually (λi. f (r i) ∈ S) sequentially" by eventually_elim auto qed ultimately show "∃r. subseq r ∧ (f o r) ----> l" by (auto simp: convergent_def comp_def)qedlemma sequence_infinite_lemma: fixes f :: "nat => 'a::t1_space" assumes "∀n. f n ≠ l" and "(f ---> l) sequentially" shows "infinite (range f)"proof assume "finite (range f)" then have "closed (range f)" by (rule finite_imp_closed) then have "open (- range f)" by (rule open_Compl) from assms(1) have "l ∈ - range f" by auto from assms(2) have "eventually (λn. f n ∈ - range f) sequentially" using open (- range f) l ∈ - range f by (rule topological_tendstoD) then show False unfolding eventually_sequentially by autoqedlemma closure_insert: fixes x :: "'a::t1_space" shows "closure (insert x s) = insert x (closure s)" apply (rule closure_unique) apply (rule insert_mono [OF closure_subset]) apply (rule closed_insert [OF closed_closure]) apply (simp add: closure_minimal) donelemma islimpt_insert: fixes x :: "'a::t1_space" shows "x islimpt (insert a s) <-> x islimpt s"proof assume *: "x islimpt (insert a s)" show "x islimpt s" proof (rule islimptI) fix t assume t: "x ∈ t" "open t" show "∃y∈s. 
y ∈ t ∧ y ≠ x" proof (cases "x = a") case True obtain y where "y ∈ insert a s" "y ∈ t" "y ≠ x" using * t by (rule islimptE) with x = a show ?thesis by auto next case False with t have t': "x ∈ t - {a}" "open (t - {a})" by (simp_all add: open_Diff) obtain y where "y ∈ insert a s" "y ∈ t - {a}" "y ≠ x" using * t' by (rule islimptE) then show ?thesis by auto qed qednext assume "x islimpt s" then show "x islimpt (insert a s)" by (rule islimpt_subset) autoqedlemma islimpt_finite: fixes x :: "'a::t1_space" shows "finite s ==> ¬ x islimpt s" by (induct set: finite) (simp_all add: islimpt_insert)lemma islimpt_union_finite: fixes x :: "'a::t1_space" shows "finite s ==> x islimpt (s ∪ t) <-> x islimpt t" by (simp add: islimpt_Un islimpt_finite)lemma islimpt_eq_acc_point: fixes l :: "'a :: t1_space" shows "l islimpt S <-> (∀U. l∈U --> open U --> infinite (U ∩ S))"proof (safe intro!: islimptI) fix U assume "l islimpt S" "l ∈ U" "open U" "finite (U ∩ S)" then have "l islimpt S" "l ∈ (U - (U ∩ S - {l}))" "open (U - (U ∩ S - {l}))" by (auto intro: finite_imp_closed) then show False by (rule islimptE) autonext fix T assume *: "∀U. l∈U --> open U --> infinite (U ∩ S)" "l ∈ T" "open T" then have "infinite (T ∩ S - {l})" by auto then have "∃x. x ∈ (T ∩ S - {l})" unfolding ex_in_conv by (intro notI) simp then show "∃y∈S. y ∈ T ∧ y ≠ l" by autoqedlemma islimpt_range_imp_convergent_subsequence: fixes l :: "'a :: {t1_space, first_countable_topology}" assumes l: "l islimpt (range f)" shows "∃r. subseq r ∧ (f o r) ----> l" using l unfolding islimpt_eq_acc_point by (rule acc_point_range_imp_convergent_subsequence)lemma sequence_unique_limpt: fixes f :: "nat => 'a::t2_space" assumes "(f ---> l) sequentially" and "l' islimpt (range f)" shows "l' = l"proof (rule ccontr) assume "l' ≠ l" obtain s t where "open s" "open t" "l' ∈ s" "l ∈ t" "s ∩ t = {}" using hausdorff [OF l' ≠ l] by auto have "eventually (λn. f n ∈ t) sequentially" using assms(1) open t l ∈ t by (rule topological_tendstoD) then obtain N where "∀n≥N. f n ∈ t" unfolding eventually_sequentially by auto have "UNIV = {..<N} ∪ {N..}" by auto then have "l' islimpt (f ({..<N} ∪ {N..}))" using assms(2) by simp then have "l' islimpt (f {..<N} ∪ f {N..})" by (simp add: image_Un) then have "l' islimpt (f {N..})" by (simp add: islimpt_union_finite) then obtain y where "y ∈ f {N..}" "y ∈ s" "y ≠ l'" using l' ∈ s open s by (rule islimptE) then obtain n where "N ≤ n" "f n ∈ s" "f n ≠ l'" by auto with ∀n≥N. f n ∈ t have "f n ∈ s ∩ t" by simp with s ∩ t = {} show False by simpqedlemma bolzano_weierstrass_imp_closed: fixes s :: "'a::{first_countable_topology,t2_space} set" assumes "∀t. infinite t ∧ t ⊆ s --> (∃x ∈ s. x islimpt t)" shows "closed s"proof - { fix x l assume as: "∀n::nat. x n ∈ s" "(x ---> l) sequentially" then have "l ∈ s" proof (cases "∀n. x n ≠ l") case False then show "l∈s" using as(1) by auto next case True note cas = this with as(2) have "infinite (range x)" using sequence_infinite_lemma[of x l] by auto then obtain l' where "l'∈s" "l' islimpt (range x)" using assms[THEN spec[where x="range x"]] as(1) by auto then show "l∈s" using sequence_unique_limpt[of x l l'] using as cas by auto qed } then show ?thesis unfolding closed_sequential_limits by fastqedlemma compact_imp_bounded: assumes "compact U" shows "bounded U"proof - have "compact U" "∀x∈U. open (ball x 1)" "U ⊆ (\<Union>x∈U. ball x 1)" using assms by auto then obtain D where D: "D ⊆ U" "finite D" "U ⊆ (\<Union>x∈D. ball x 1)" by (rule compactE_image) from finite D have "bounded (\<Union>x∈D. 
ball x 1)" by (simp add: bounded_UN) then show "bounded U" using U ⊆ (\<Union>x∈D. ball x 1) by (rule bounded_subset)qedtext{* In particular, some common special cases. *}lemma compact_union [intro]: assumes "compact s" and "compact t" shows " compact (s ∪ t)"proof (rule compactI) fix f assume *: "Ball f open" "s ∪ t ⊆ \<Union>f" from * compact s obtain s' where "s' ⊆ f ∧ finite s' ∧ s ⊆ \<Union>s'" unfolding compact_eq_heine_borel by (auto elim!: allE[of _ f]) metis moreover from * compact t obtain t' where "t' ⊆ f ∧ finite t' ∧ t ⊆ \<Union>t'" unfolding compact_eq_heine_borel by (auto elim!: allE[of _ f]) metis ultimately show "∃f'⊆f. finite f' ∧ s ∪ t ⊆ \<Union>f'" by (auto intro!: exI[of _ "s' ∪ t'"])qedlemma compact_Union [intro]: "finite S ==> (!!T. T ∈ S ==> compact T) ==> compact (\<Union>S)" by (induct set: finite) autolemma compact_UN [intro]: "finite A ==> (!!x. x ∈ A ==> compact (B x)) ==> compact (\<Union>x∈A. B x)" unfolding SUP_def by (rule compact_Union) autolemma closed_inter_compact [intro]: assumes "closed s" and "compact t" shows "compact (s ∩ t)" using compact_inter_closed [of t s] assms by (simp add: Int_commute)lemma compact_inter [intro]: fixes s t :: "'a :: t2_space set" assumes "compact s" and "compact t" shows "compact (s ∩ t)" using assms by (intro compact_inter_closed compact_imp_closed)lemma compact_sing [simp]: "compact {a}" unfolding compact_eq_heine_borel by autolemma compact_insert [simp]: assumes "compact s" shows "compact (insert x s)"proof - have "compact ({x} ∪ s)" using compact_sing assms by (rule compact_union) then show ?thesis by simpqedlemma finite_imp_compact: "finite s ==> compact s" by (induct set: finite) simp_alllemma open_delete: fixes s :: "'a::t1_space set" shows "open s ==> open (s - {x})" by (simp add: open_Diff)text{* Finite intersection property *}lemma inj_setminus: "inj_on uminus (A::'a set set)" by (auto simp: inj_on_def)lemma compact_fip: "compact U <-> (∀A. (∀a∈A. closed a) --> (∀B ⊆ A. finite B --> U ∩ \<Inter>B ≠ {}) --> U ∩ \<Inter>A ≠ {})" (is "_ <-> ?R")proof (safe intro!: compact_eq_heine_borel[THEN iffD2]) fix A assume "compact U" and A: "∀a∈A. closed a" "U ∩ \<Inter>A = {}" and fi: "∀B ⊆ A. finite B --> U ∩ \<Inter>B ≠ {}" from A have "(∀a∈uminusA. open a) ∧ U ⊆ \<Union>(uminusA)" by auto with compact U obtain B where "B ⊆ A" "finite (uminusB)" "U ⊆ \<Union>(uminusB)" unfolding compact_eq_heine_borel by (metis subset_image_iff) with fi[THEN spec, of B] show False by (auto dest: finite_imageD intro: inj_setminus)next fix A assume ?R assume "∀a∈A. open a" "U ⊆ \<Union>A" then have "U ∩ \<Inter>(uminusA) = {}" "∀a∈uminusA. closed a" by auto with ?R obtain B where "B ⊆ A" "finite (uminusB)" "U ∩ \<Inter>(uminusB) = {}" by (metis subset_image_iff) then show "∃T⊆A. finite T ∧ U ⊆ \<Union>T" by (auto intro!: exI[of _ B] inj_setminus dest: finite_imageD)qedlemma compact_imp_fip: "compact s ==> ∀t ∈ f. closed t ==> ∀f'. finite f' ∧ f' ⊆ f --> (s ∩ (\<Inter> f') ≠ {}) ==> s ∩ (\<Inter> f) ≠ {}" unfolding compact_fip by autotext{*Compactness expressed with filters*}definition "filter_from_subbase B = Abs_filter (λP. ∃X ⊆ B. finite X ∧ Inf X ≤ P)"lemma eventually_filter_from_subbase: "eventually P (filter_from_subbase B) <-> (∃X ⊆ B. finite X ∧ Inf X ≤ P)" (is "_ <-> ?R P") unfolding filter_from_subbase_defproof (rule eventually_Abs_filter is_filter.intro)+ show "?R (λx. True)" by (rule exI[of _ "{}"]) (simp add: le_fun_def)next fix P Q assume "?R P" then guess X .. moreover assume "?R Q" then guess Y .. ultimately show "?R (λx. 
P x ∧ Q x)" by (intro exI[of _ "X ∪ Y"]) autonext fix P Q assume "?R P" then guess X .. moreover assume "∀x. P x --> Q x" ultimately show "?R Q" by (intro exI[of _ X]) autoqedlemma eventually_filter_from_subbaseI: "P ∈ B ==> eventually P (filter_from_subbase B)" by (subst eventually_filter_from_subbase) (auto intro!: exI[of _ "{P}"])lemma filter_from_subbase_not_bot: "∀X ⊆ B. finite X --> Inf X ≠ bot ==> filter_from_subbase B ≠ bot" unfolding trivial_limit_def eventually_filter_from_subbase by autolemma closure_iff_nhds_not_empty: "x ∈ closure X <-> (∀A. ∀S⊆A. open S --> x ∈ S --> X ∩ A ≠ {})"proof safe assume x: "x ∈ closure X" fix S A assume "open S" "x ∈ S" "X ∩ A = {}" "S ⊆ A" then have "x ∉ closure (-S)" by (auto simp: closure_complement subset_eq[symmetric] intro: interiorI) with x have "x ∈ closure X - closure (-S)" by auto also have "… ⊆ closure (X ∩ S)" using open S open_inter_closure_subset[of S X] by (simp add: closed_Compl ac_simps) finally have "X ∩ S ≠ {}" by auto then show False using X ∩ A = {} S ⊆ A by autonext assume "∀A S. S ⊆ A --> open S --> x ∈ S --> X ∩ A ≠ {}" from this[THEN spec, of "- X", THEN spec, of "- closure X"] show "x ∈ closure X" by (simp add: closure_subset open_Compl)qedlemma compact_filter: "compact U <-> (∀F. F ≠ bot --> eventually (λx. x ∈ U) F --> (∃x∈U. inf (nhds x) F ≠ bot))"proof (intro allI iffI impI compact_fip[THEN iffD2] notI) fix F assume "compact U" assume F: "F ≠ bot" "eventually (λx. x ∈ U) F" then have "U ≠ {}" by (auto simp: eventually_False) def Z ≡ "closure {A. eventually (λx. x ∈ A) F}" then have "∀z∈Z. closed z" by auto moreover have ev_Z: "!!z. z ∈ Z ==> eventually (λx. x ∈ z) F" unfolding Z_def by (auto elim: eventually_elim1 intro: set_mp[OF closure_subset]) have "(∀B ⊆ Z. finite B --> U ∩ \<Inter>B ≠ {})" proof (intro allI impI) fix B assume "finite B" "B ⊆ Z" with finite B ev_Z have "eventually (λx. ∀b∈B. x ∈ b) F" by (auto intro!: eventually_Ball_finite) with F(2) have "eventually (λx. x ∈ U ∩ (\<Inter>B)) F" by eventually_elim auto with F show "U ∩ \<Inter>B ≠ {}" by (intro notI) (simp add: eventually_False) qed ultimately have "U ∩ \<Inter>Z ≠ {}" using compact U unfolding compact_fip by blast then obtain x where "x ∈ U" and x: "!!z. z ∈ Z ==> x ∈ z" by auto have "!!P. eventually P (inf (nhds x) F) ==> P ≠ bot" unfolding eventually_inf eventually_nhds proof safe fix P Q R S assume "eventually R F" "open S" "x ∈ S" with open_inter_closure_eq_empty[of S "{x. R x}"] x[of "closure {x. R x}"] have "S ∩ {x. R x} ≠ {}" by (auto simp: Z_def) moreover assume "Ball S Q" "∀x. Q x ∧ R x --> bot x" ultimately show False by (auto simp: set_eq_iff) qed with x ∈ U show "∃x∈U. inf (nhds x) F ≠ bot" by (metis eventually_bot)next fix A assume A: "∀a∈A. closed a" "∀B⊆A. finite B --> U ∩ \<Inter>B ≠ {}" "U ∩ \<Inter>A = {}" def P' ≡ "(λa (x::'a). x ∈ a)" then have inj_P': "!!A. inj_on P' A" by (auto intro!: inj_onI simp: fun_eq_iff) def F ≡ "filter_from_subbase (P' insert U A)" have "F ≠ bot" unfolding F_def proof (safe intro!: filter_from_subbase_not_bot) fix X assume "X ⊆ P' insert U A" "finite X" "Inf X = bot" then obtain B where "B ⊆ insert U A" "finite B" and B: "Inf (P' B) = bot" unfolding subset_image_iff by (auto intro: inj_P' finite_imageD) with A(2)[THEN spec, of "B - {U}"] have "U ∩ \<Inter>(B - {U}) ≠ {}" by auto with B show False by (auto simp: P'_def fun_eq_iff) qed moreover have "eventually (λx. x ∈ U) F" unfolding F_def by (rule eventually_filter_from_subbaseI) (auto simp: P'_def) moreover assume "∀F. F ≠ bot --> eventually (λx. 
lemma compact_filter:
  "compact U <-> (∀F. F ≠ bot --> eventually (λx. x ∈ U) F --> (∃x∈U. inf (nhds x) F ≠ bot))"
proof (intro allI iffI impI compact_fip[THEN iffD2] notI)
  fix F
  assume "compact U"
  assume F: "F ≠ bot" "eventually (λx. x ∈ U) F"
  then have "U ≠ {}"
    by (auto simp: eventually_False)
  def Z ≡ "closure ` {A. eventually (λx. x ∈ A) F}"
  then have "∀z∈Z. closed z"
    by auto
  moreover
  have ev_Z: "!!z. z ∈ Z ==> eventually (λx. x ∈ z) F"
    unfolding Z_def by (auto elim: eventually_elim1 intro: set_mp[OF closure_subset])
  have "(∀B ⊆ Z. finite B --> U ∩ \<Inter>B ≠ {})"
  proof (intro allI impI)
    fix B assume "finite B" "B ⊆ Z"
    with finite B ev_Z have "eventually (λx. ∀b∈B. x ∈ b) F"
      by (auto intro!: eventually_Ball_finite)
    with F(2) have "eventually (λx. x ∈ U ∩ (\<Inter>B)) F"
      by eventually_elim auto
    with F show "U ∩ \<Inter>B ≠ {}"
      by (intro notI) (simp add: eventually_False)
  qed
  ultimately have "U ∩ \<Inter>Z ≠ {}"
    using compact U unfolding compact_fip by blast
  then obtain x where "x ∈ U" and x: "!!z. z ∈ Z ==> x ∈ z"
    by auto
  have "!!P. eventually P (inf (nhds x) F) ==> P ≠ bot"
    unfolding eventually_inf eventually_nhds
  proof safe
    fix P Q R S
    assume "eventually R F" "open S" "x ∈ S"
    with open_inter_closure_eq_empty[of S "{x. R x}"] x[of "closure {x. R x}"]
    have "S ∩ {x. R x} ≠ {}" by (auto simp: Z_def)
    moreover assume "Ball S Q" "∀x. Q x ∧ R x --> bot x"
    ultimately show False by (auto simp: set_eq_iff)
  qed
  with x ∈ U show "∃x∈U. inf (nhds x) F ≠ bot"
    by (metis eventually_bot)
next
  fix A
  assume A: "∀a∈A. closed a" "∀B⊆A. finite B --> U ∩ \<Inter>B ≠ {}" "U ∩ \<Inter>A = {}"
  def P' ≡ "(λa (x::'a). x ∈ a)"
  then have inj_P': "!!A. inj_on P' A"
    by (auto intro!: inj_onI simp: fun_eq_iff)
  def F ≡ "filter_from_subbase (P' ` insert U A)"
  have "F ≠ bot"
    unfolding F_def
  proof (safe intro!: filter_from_subbase_not_bot)
    fix X
    assume "X ⊆ P' ` insert U A" "finite X" "Inf X = bot"
    then obtain B where "B ⊆ insert U A" "finite B" and B: "Inf (P' ` B) = bot"
      unfolding subset_image_iff by (auto intro: inj_P' finite_imageD)
    with A(2)[THEN spec, of "B - {U}"] have "U ∩ \<Inter>(B - {U}) ≠ {}"
      by auto
    with B show False
      by (auto simp: P'_def fun_eq_iff)
  qed
  moreover
  have "eventually (λx. x ∈ U) F"
    unfolding F_def by (rule eventually_filter_from_subbaseI) (auto simp: P'_def)
  moreover
  assume "∀F. F ≠ bot --> eventually (λx. x ∈ U) F --> (∃x∈U. inf (nhds x) F ≠ bot)"
  ultimately obtain x where "x ∈ U" and x: "inf (nhds x) F ≠ bot"
    by auto
  { fix V
    assume "V ∈ A"
    then have V: "eventually (λx. x ∈ V) F"
      by (auto simp add: F_def image_iff P'_def intro!: eventually_filter_from_subbaseI)
    have "x ∈ closure V"
      unfolding closure_iff_nhds_not_empty
    proof (intro impI allI)
      fix S A
      assume "open S" "x ∈ S" "S ⊆ A"
      then have "eventually (λx. x ∈ A) (nhds x)"
        by (auto simp: eventually_nhds)
      with V have "eventually (λx. x ∈ V ∩ A) (inf (nhds x) F)"
        by (auto simp: eventually_inf)
      with x show "V ∩ A ≠ {}"
        by (auto simp del: Int_iff simp add: trivial_limit_def)
    qed
    then have "x ∈ V" using V ∈ A A(1) by simp }
  with x∈U have "x ∈ U ∩ \<Inter>A" by auto
  with U ∩ \<Inter>A = {} show False by auto
qed

definition "countably_compact U <->
  (∀A. countable A --> (∀a∈A. open a) --> U ⊆ \<Union>A --> (∃T⊆A. finite T ∧ U ⊆ \<Union>T))"

lemma countably_compactE:
  assumes "countably_compact s" and "∀t∈C. open t" and "s ⊆ \<Union>C" "countable C"
  obtains C' where "C' ⊆ C" and "finite C'" and "s ⊆ \<Union>C'"
  using assms unfolding countably_compact_def by metis

lemma countably_compactI:
  assumes "!!C. ∀t∈C. open t ==> s ⊆ \<Union>C ==> countable C ==> (∃C'⊆C. finite C' ∧ s ⊆ \<Union>C')"
  shows "countably_compact s"
  using assms unfolding countably_compact_def by metis

lemma compact_imp_countably_compact: "compact U ==> countably_compact U"
  by (auto simp: compact_eq_heine_borel countably_compact_def)

lemma countably_compact_imp_compact:
  assumes "countably_compact U"
    and ccover: "countable B" "∀b∈B. open b"
    and basis: "!!T x. open T ==> x ∈ T ==> x ∈ U ==> ∃b∈B. x ∈ b ∧ b ∩ U ⊆ T"
  shows "compact U"
  using countably_compact U
  unfolding compact_eq_heine_borel countably_compact_def
proof safe
  fix A
  assume A: "∀a∈A. open a" "U ⊆ \<Union>A"
  assume *: "∀A. countable A --> (∀a∈A. open a) --> U ⊆ \<Union>A --> (∃T⊆A. finite T ∧ U ⊆ \<Union>T)"
  moreover def C ≡ "{b∈B. ∃a∈A. b ∩ U ⊆ a}"
  ultimately have "countable C" "∀a∈C. open a"
    unfolding C_def using ccover by auto
  moreover
  have "\<Union>A ∩ U ⊆ \<Union>C"
  proof safe
    fix x a
    assume "x ∈ U" "x ∈ a" "a ∈ A"
    with basis[of a x] A obtain b where "b ∈ B" "x ∈ b" "b ∩ U ⊆ a"
      by blast
    with a ∈ A show "x ∈ \<Union>C"
      unfolding C_def by auto
  qed
  then have "U ⊆ \<Union>C" using U ⊆ \<Union>A by auto
  ultimately obtain T where T: "T⊆C" "finite T" "U ⊆ \<Union>T"
    using * by metis
  then have "∀t∈T. ∃a∈A. t ∩ U ⊆ a"
    by (auto simp: C_def)
  then guess f unfolding bchoice_iff Bex_def ..
  with T show "∃T⊆A. finite T ∧ U ⊆ \<Union>T"
    unfolding C_def by (intro exI[of _ "f ` T"]) fastforce
qed

lemma countably_compact_imp_compact_second_countable:
  "countably_compact U ==> compact (U :: 'a :: second_countable_topology set)"
proof (rule countably_compact_imp_compact)
  fix T and x :: 'a
  assume "open T" "x ∈ T"
  from topological_basisE[OF is_basis this] guess b .
  then show "∃b∈SOME B. countable B ∧ topological_basis B. x ∈ b ∧ b ∩ U ⊆ T"
    by auto
qed (insert countable_basis topological_basis_open[OF is_basis], auto)

lemma countably_compact_eq_compact:
  "countably_compact U <-> compact (U :: 'a :: second_countable_topology set)"
  using countably_compact_imp_compact_second_countable compact_imp_countably_compact
  by blast
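
text {* So over second countable spaces countable compactness and compactness
  agree. Sequential compactness, defined next, is the corresponding notion for
  sequences; over first countable spaces it coincides with countable
  compactness (seq_compact_eq_countably_compact below). *}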
subsubsection{* Sequential compactness *}

definition seq_compact :: "'a::topological_space set => bool"
  where "seq_compact S <->
    (∀f. (∀n. f n ∈ S) --> (∃l∈S. ∃r. subseq r ∧ ((f o r) ---> l) sequentially))"

lemma seq_compact_imp_countably_compact:
  fixes U :: "'a :: first_countable_topology set"
  assumes "seq_compact U"
  shows "countably_compact U"
proof (safe intro!: countably_compactI)
  fix A
  assume A: "∀a∈A. open a" "U ⊆ \<Union>A" "countable A"
  have subseq: "!!X. range X ⊆ U ==> ∃r x. x ∈ U ∧ subseq r ∧ (X o r) ----> x"
    using seq_compact U by (fastforce simp: seq_compact_def subset_eq)
  show "∃T⊆A. finite T ∧ U ⊆ \<Union>T"
  proof cases
    assume "finite A"
    with A show ?thesis by auto
  next
    assume "infinite A"
    then have "A ≠ {}" by auto
    show ?thesis
    proof (rule ccontr)
      assume "¬ (∃T⊆A. finite T ∧ U ⊆ \<Union>T)"
      then have "∀T. ∃x. T ⊆ A ∧ finite T --> (x ∈ U - \<Union>T)"
        by auto
      then obtain X' where T: "!!T. T ⊆ A ==> finite T ==> X' T ∈ U - \<Union>T"
        by metis
      def X ≡ "λn. X' (from_nat_into A ` {.. n})"
      have X: "!!n. X n ∈ U - (\<Union>i≤n. from_nat_into A i)"
        using A ≠ {} unfolding X_def SUP_def by (intro T) (auto intro: from_nat_into)
      then have "range X ⊆ U" by auto
      with subseq[of X] obtain r x where "x ∈ U" and r: "subseq r" "(X o r) ----> x"
        by auto
      from x∈U U ⊆ \<Union>A from_nat_into_surj[OF countable A]
      obtain n where "x ∈ from_nat_into A n" by auto
      with r(2) A(1) from_nat_into[OF A ≠ {}, of n]
      have "eventually (λi. X (r i) ∈ from_nat_into A n) sequentially"
        unfolding tendsto_def by (auto simp: comp_def)
      then obtain N where "!!i. N ≤ i ==> X (r i) ∈ from_nat_into A n"
        by (auto simp: eventually_sequentially)
      moreover
      from X have "!!i. n ≤ r i ==> X (r i) ∉ from_nat_into A n"
        by auto
      moreover
      from subseq r[THEN seq_suble, of "max n N"] have "∃i. n ≤ r i ∧ N ≤ i"
        by (auto intro!: exI[of _ "max n N"])
      ultimately show False
        by auto
    qed
  qed
qed
lemma compact_imp_seq_compact:
  fixes U :: "'a :: first_countable_topology set"
  assumes "compact U"
  shows "seq_compact U"
  unfolding seq_compact_def
proof safe
  fix X :: "nat => 'a"
  assume "∀n. X n ∈ U"
  then have "eventually (λx. x ∈ U) (filtermap X sequentially)"
    by (auto simp: eventually_filtermap)
  moreover
  have "filtermap X sequentially ≠ bot"
    by (simp add: trivial_limit_def eventually_filtermap)
  ultimately
  obtain x where "x ∈ U" and x: "inf (nhds x) (filtermap X sequentially) ≠ bot" (is "?F ≠ _")
    using compact U by (auto simp: compact_filter)
  from countable_basis_at_decseq[of x] guess A . note A = this
  def s ≡ "λn i. SOME j. i < j ∧ X j ∈ A (Suc n)"
  { fix n i
    have "∃a. i < a ∧ X a ∈ A (Suc n)"
    proof (rule ccontr)
      assume "¬ (∃a>i. X a ∈ A (Suc n))"
      then have "!!a. Suc i ≤ a ==> X a ∉ A (Suc n)" by auto
      then have "eventually (λx. x ∉ A (Suc n)) (filtermap X sequentially)"
        by (auto simp: eventually_filtermap eventually_sequentially)
      moreover have "eventually (λx. x ∈ A (Suc n)) (nhds x)"
        using A(1,2)[of "Suc n"] by (auto simp: eventually_nhds)
      ultimately have "eventually (λx. False) ?F"
        by (auto simp add: eventually_inf)
      with x show False
        by (simp add: eventually_False)
    qed
    then have "i < s n i" "X (s n i) ∈ A (Suc n)"
      unfolding s_def by (auto intro: someI2_ex) }
  note s = this
  def r ≡ "nat_rec (s 0 0) s"
  have "subseq r"
    by (auto simp: r_def s subseq_Suc_iff)
  moreover
  have "(λn. X (r n)) ----> x"
  proof (rule topological_tendstoI)
    fix S
    assume "open S" "x ∈ S"
    with A(3) have "eventually (λi. A i ⊆ S) sequentially" by auto
    moreover
    { fix i
      assume "Suc 0 ≤ i"
      then have "X (r i) ∈ A i"
        by (cases i) (simp_all add: r_def s) }
    then have "eventually (λi. X (r i) ∈ A i) sequentially"
      by (auto simp: eventually_sequentially)
    ultimately show "eventually (λi. X (r i) ∈ S) sequentially"
      by eventually_elim auto
  qed
  ultimately show "∃x ∈ U. ∃r. subseq r ∧ (X o r) ----> x"
    using x ∈ U by (auto simp: convergent_def comp_def)
qed

lemma seq_compactI:
  assumes "!!f. ∀n. f n ∈ S ==> ∃l∈S. ∃r. subseq r ∧ ((f o r) ---> l) sequentially"
  shows "seq_compact S"
  unfolding seq_compact_def using assms by fast

lemma seq_compactE:
  assumes "seq_compact S" "∀n. f n ∈ S"
  obtains l r where "l ∈ S" "subseq r" "((f o r) ---> l) sequentially"
  using assms unfolding seq_compact_def by fast

lemma countably_compact_imp_acc_point:
  assumes "countably_compact s"
    and "countable t"
    and "infinite t"
    and "t ⊆ s"
  shows "∃x∈s. ∀U. x∈U ∧ open U --> infinite (U ∩ t)"
proof (rule ccontr)
  def C ≡ "(λF. interior (F ∪ (- t))) ` {F. finite F ∧ F ⊆ t }"
  note countably_compact s
  moreover have "∀t∈C. open t"
    by (auto simp: C_def)
  moreover
  assume "¬ (∃x∈s. ∀U. x∈U ∧ open U --> infinite (U ∩ t))"
  then have s: "!!x. x ∈ s ==> ∃U. x∈U ∧ open U ∧ finite (U ∩ t)" by metis
  have "s ⊆ \<Union>C"
    using t ⊆ s
    unfolding C_def Union_image_eq
    apply (safe dest!: s)
    apply (rule_tac a="U ∩ t" in UN_I)
    apply (auto intro!: interiorI simp add: finite_subset)
    done
  moreover
  from countable t have "countable C"
    unfolding C_def by (auto intro: countable_Collect_finite_subset)
  ultimately guess D by (rule countably_compactE)
  then obtain E where E: "E ⊆ {F. finite F ∧ F ⊆ t }" "finite E"
    and s: "s ⊆ (\<Union>F∈E. interior (F ∪ (- t)))"
    by (metis (lifting) Union_image_eq finite_subset_image C_def)
  from s t ⊆ s have "t ⊆ \<Union>E"
    using interior_subset by blast
  moreover have "finite (\<Union>E)"
    using E by auto
  ultimately show False using infinite t
    by (auto simp: finite_subset)
qed

lemma countable_acc_point_imp_seq_compact:
  fixes s :: "'a::first_countable_topology set"
  assumes "∀t. infinite t ∧ countable t ∧ t ⊆ s -->
    (∃x∈s. ∀U. x∈U ∧ open U --> infinite (U ∩ t))"
  shows "seq_compact s"
proof -
  { fix f :: "nat => 'a"
    assume f: "∀n. f n ∈ s"
    have "∃l∈s. ∃r. subseq r ∧ ((f o r) ---> l) sequentially"
    proof (cases "finite (range f)")
      case True
      obtain l where "infinite {n. f n = f l}"
        using pigeonhole_infinite[OF _ True] by auto
      then obtain r where "subseq r" and fr: "∀n. f (r n) = f l"
        using infinite_enumerate by blast
      then have "subseq r ∧ (f o r) ----> f l"
        by (simp add: fr tendsto_const o_def)
      with f show "∃l∈s. ∃r. subseq r ∧ (f o r) ----> l"
        by auto
    next
      case False
      with f assms have "∃x∈s. ∀U. x∈U ∧ open U --> infinite (U ∩ range f)"
        by auto
      then obtain l where "l ∈ s" "∀U. l∈U ∧ open U --> infinite (U ∩ range f)" ..
      from this(2) have "∃r. subseq r ∧ ((f o r) ---> l) sequentially"
        using acc_point_range_imp_convergent_subsequence[of l f] by auto
      with l ∈ s show "∃l∈s. ∃r. subseq r ∧ ((f o r) ---> l) sequentially" ..
    qed }
  then show ?thesis
    unfolding seq_compact_def by auto
qed

lemma seq_compact_eq_countably_compact:
  fixes U :: "'a :: first_countable_topology set"
  shows "seq_compact U <-> countably_compact U"
  using countable_acc_point_imp_seq_compact countably_compact_imp_acc_point
    seq_compact_imp_countably_compact
  by metis
lemma seq_compact_eq_acc_point:
  fixes s :: "'a :: first_countable_topology set"
  shows "seq_compact s <->
    (∀t. infinite t ∧ countable t ∧ t ⊆ s --> (∃x∈s. ∀U. x∈U ∧ open U --> infinite (U ∩ t)))"
  using countable_acc_point_imp_seq_compact[of s] countably_compact_imp_acc_point[of s]
    seq_compact_imp_countably_compact[of s]
  by metis

lemma seq_compact_eq_compact:
  fixes U :: "'a :: second_countable_topology set"
  shows "seq_compact U <-> compact U"
  using seq_compact_eq_countably_compact countably_compact_eq_compact by blast

lemma bolzano_weierstrass_imp_seq_compact:
  fixes s :: "'a::{t1_space, first_countable_topology} set"
  shows "∀t. infinite t ∧ t ⊆ s --> (∃x ∈ s. x islimpt t) ==> seq_compact s"
  by (rule countable_acc_point_imp_seq_compact) (metis islimpt_eq_acc_point)

subsubsection{* Total boundedness *}

lemma cauchy_def: "Cauchy s <-> (∀e>0. ∃N. ∀m n. m ≥ N ∧ n ≥ N --> dist(s m)(s n) < e)"
  unfolding Cauchy_def by metis

fun helper_1 :: "('a::metric_space set) => real => nat => 'a"
where
  "helper_1 s e n = (SOME y::'a. y ∈ s ∧ (∀m<n. ¬ (dist (helper_1 s e m) y < e)))"
declare helper_1.simps[simp del]

lemma seq_compact_imp_totally_bounded:
  assumes "seq_compact s"
  shows "∀e>0. ∃k. finite k ∧ k ⊆ s ∧ s ⊆ (\<Union>((λx. ball x e) ` k))"
proof (rule, rule, rule ccontr)
  fix e::real
  assume "e > 0"
  assume assm: "¬ (∃k. finite k ∧ k ⊆ s ∧ s ⊆ \<Union>((λx. ball x e) ` k))"
  def x ≡ "helper_1 s e"
  { fix n
    have "x n ∈ s ∧ (∀m<n. ¬ dist (x m) (x n) < e)"
    proof (induct n rule: nat_less_induct)
      fix n
      def Q ≡ "(λy. y ∈ s ∧ (∀m<n. ¬ dist (x m) y < e))"
      assume as: "∀m<n. x m ∈ s ∧ (∀ma<m. ¬ dist (x ma) (x m) < e)"
      have "¬ s ⊆ (\<Union>x∈x ` {0..<n}. ball x e)"
        using assm
        apply simp
        apply (erule_tac x="x ` {0 ..< n}" in allE)
        using as
        apply auto
        done
      then obtain z where z: "z∈s" "z ∉ (\<Union>x∈x ` {0..<n}. ball x e)"
        unfolding subset_eq by auto
      have "Q (x n)"
        unfolding x_def and helper_1.simps[of s e n]
        apply (rule someI2[where a=z])
        unfolding x_def[symmetric] and Q_def
        using z
        apply auto
        done
      then show "x n ∈ s ∧ (∀m<n. ¬ dist (x m) (x n) < e)"
        unfolding Q_def by auto
    qed }
  then have "∀n::nat. x n ∈ s" and x: "∀n. ∀m < n. ¬ (dist (x m) (x n) < e)"
    by blast+
  then obtain l r where "l∈s" and r: "subseq r" and "((x o r) ---> l) sequentially"
    using assms(1)[unfolded seq_compact_def, THEN spec[where x=x]] by auto
  from this(3) have "Cauchy (x o r)"
    using LIMSEQ_imp_Cauchy by auto
  then obtain N::nat where N: "∀m n. N ≤ m ∧ N ≤ n --> dist ((x o r) m) ((x o r) n) < e"
    unfolding cauchy_def using e>0 by auto
  show False
    using N[THEN spec[where x=N], THEN spec[where x="N+1"]]
    using r[unfolded subseq_def, THEN spec[where x=N], THEN spec[where x="N+1"]]
    using x[THEN spec[where x="r (N+1)"], THEN spec[where x="r (N)"]]
    by auto
qed
open b" by (auto simp: K_def) next fix T x assume T: "open T" "x ∈ T" and x: "x ∈ s" from openE[OF T] obtain e where "0 < e" "ball x e ⊆ T" by auto then have "0 < e / 2" "ball x (e / 2) ⊆ T" by auto from Rats_dense_in_real[OF 0 < e / 2] obtain r where "r ∈ \<rat>" "0 < r" "r < e / 2" by auto from f[rule_format, of r] 0 < r x ∈ s obtain k where "k ∈ f r" "x ∈ ball k r" unfolding Union_image_eq by auto from r ∈ \<rat> 0 < r k ∈ f r have "ball k r ∈ K" by (auto simp: K_def) then show "∃b∈K. x ∈ b ∧ b ∩ s ⊆ T" proof (rule bexI[rotated], safe) fix y assume "y ∈ ball k r" with r < e / 2 x ∈ ball k r have "dist x y < e" by (intro dist_double[where x = k and d=e]) (auto simp: dist_commute) with ball x e ⊆ T show "y ∈ T" by auto next show "x ∈ ball k r" by fact qed qedqedlemma compact_eq_seq_compact_metric: "compact (s :: 'a::metric_space set) <-> seq_compact s" using compact_imp_seq_compact seq_compact_imp_heine_borel by blastlemma compact_def: "compact (S :: 'a::metric_space set) <-> (∀f. (∀n. f n ∈ S) --> (∃l∈S. ∃r. subseq r ∧ (f o r) ----> l))" unfolding compact_eq_seq_compact_metric seq_compact_def by autosubsubsection {* Complete the chain of compactness variants *}lemma compact_eq_bolzano_weierstrass: fixes s :: "'a::metric_space set" shows "compact s <-> (∀t. infinite t ∧ t ⊆ s --> (∃x ∈ s. x islimpt t))" (is "?lhs = ?rhs")proof assume ?lhs then show ?rhs using heine_borel_imp_bolzano_weierstrass[of s] by autonext assume ?rhs then show ?lhs unfolding compact_eq_seq_compact_metric by (rule bolzano_weierstrass_imp_seq_compact)qedlemma bolzano_weierstrass_imp_bounded: "∀t. infinite t ∧ t ⊆ s --> (∃x ∈ s. x islimpt t) ==> bounded s" using compact_imp_bounded unfolding compact_eq_bolzano_weierstrass .text {* A metric space (or topological vector space) is said to have the Heine-Borel property if every closed and bounded subset is compact.*}class heine_borel = metric_space + assumes bounded_imp_convergent_subsequence: "bounded (range f) ==> ∃l r. subseq r ∧ ((f o r) ---> l) sequentially"lemma bounded_closed_imp_seq_compact: fixes s::"'a::heine_borel set" assumes "bounded s" and "closed s" shows "seq_compact s"proof (unfold seq_compact_def, clarify) fix f :: "nat => 'a" assume f: "∀n. f n ∈ s" with bounded s have "bounded (range f)" by (auto intro: bounded_subset) obtain l r where r: "subseq r" and l: "((f o r) ---> l) sequentially" using bounded_imp_convergent_subsequence [OF bounded (range f)] by auto from f have fr: "∀n. (f o r) n ∈ s" by simp have "l ∈ s" using closed s fr l unfolding closed_sequential_limits by blast show "∃l∈s. ∃r. subseq r ∧ ((f o r) ---> l) sequentially" using l ∈ s r l by blastqedlemma compact_eq_bounded_closed: fixes s :: "'a::heine_borel set" shows "compact s <-> bounded s ∧ closed s" (is "?lhs = ?rhs")proof assume ?lhs then show ?rhs using compact_imp_closed compact_imp_bounded by blastnext assume ?rhs then show ?lhs using bounded_closed_imp_seq_compact[of s] unfolding compact_eq_seq_compact_metric by autoqed(* TODO: is this lemma necessary? *)lemma bounded_increasing_convergent: fixes s :: "nat => real" shows "bounded {s n| n. True} ==> ∀n. s n ≤ s (Suc n) ==> ∃l. 
s ----> l" using Bseq_mono_convergent[of s] incseq_Suc_iff[of s] by (auto simp: image_def Bseq_eq_bounded convergent_def incseq_def)instance real :: heine_borelproof fix f :: "nat => real" assume f: "bounded (range f)" obtain r where r: "subseq r" "monoseq (f o r)" unfolding comp_def by (metis seq_monosub) then have "Bseq (f o r)" unfolding Bseq_eq_bounded using f by (auto intro: bounded_subset) with r show "∃l r. subseq r ∧ (f o r) ----> l" using Bseq_monoseq_convergent[of "f o r"] by (auto simp: convergent_def)qedlemma compact_lemma: fixes f :: "nat => 'a::euclidean_space" assumes "bounded (range f)" shows "∀d⊆Basis. ∃l::'a. ∃ r. subseq r ∧ (∀e>0. eventually (λn. ∀i∈d. dist (f (r n) • i) (l • i) < e) sequentially)"proof safe fix d :: "'a set" assume d: "d ⊆ Basis" with finite_Basis have "finite d" by (blast intro: finite_subset) from this d show "∃l::'a. ∃r. subseq r ∧ (∀e>0. eventually (λn. ∀i∈d. dist (f (r n) • i) (l • i) < e) sequentially)" proof (induct d) case empty then show ?case unfolding subseq_def by auto next case (insert k d) have k[intro]: "k ∈ Basis" using insert by auto have s': "bounded ((λx. x • k) range f)" using bounded (range f) by (auto intro!: bounded_linear_image bounded_linear_inner_left) obtain l1::"'a" and r1 where r1: "subseq r1" and lr1: "∀e > 0. eventually (λn. ∀i∈d. dist (f (r1 n) • i) (l1 • i) < e) sequentially" using insert(3) using insert(4) by auto have f': "∀n. f (r1 n) • k ∈ (λx. x • k) range f" by simp have "bounded (range (λi. f (r1 i) • k))" by (metis (lifting) bounded_subset f' image_subsetI s') then obtain l2 r2 where r2:"subseq r2" and lr2:"((λi. f (r1 (r2 i)) • k) ---> l2) sequentially" using bounded_imp_convergent_subsequence[of "λi. f (r1 i) • k"] by (auto simp: o_def) def r ≡ "r1 o r2" have r:"subseq r" using r1 and r2 unfolding r_def o_def subseq_def by auto moreover def l ≡ "(∑i∈Basis. (if i = k then l2 else l1•i) *⇩R i)::'a" { fix e::real assume "e > 0" from lr1 e > 0 have N1: "eventually (λn. ∀i∈d. dist (f (r1 n) • i) (l1 • i) < e) sequentially" by blast from lr2 e > 0 have N2:"eventually (λn. dist (f (r1 (r2 n)) • k) l2 < e) sequentially" by (rule tendstoD) from r2 N1 have N1': "eventually (λn. ∀i∈d. dist (f (r1 (r2 n)) • i) (l1 • i) < e) sequentially" by (rule eventually_subseq) have "eventually (λn. ∀i∈(insert k d). dist (f (r n) • i) (l • i) < e) sequentially" using N1' N2 by eventually_elim (insert insert.prems, auto simp: l_def r_def o_def) } ultimately show ?case by auto qedqedinstance euclidean_space ⊆ heine_borelproof fix f :: "nat => 'a" assume f: "bounded (range f)" then obtain l::'a and r where r: "subseq r" and l: "∀e>0. eventually (λn. ∀i∈Basis. dist (f (r n) • i) (l • i) < e) sequentially" using compact_lemma [OF f] by blast { fix e::real assume "e > 0" then have "e / real_of_nat DIM('a) > 0" by (auto intro!: divide_pos_pos DIM_positive) with l have "eventually (λn. ∀i∈Basis. dist (f (r n) • i) (l • i) < e / (real_of_nat DIM('a))) sequentially" by simp moreover { fix n assume n: "∀i∈Basis. dist (f (r n) • i) (l • i) < e / (real_of_nat DIM('a))" have "dist (f (r n)) l ≤ (∑i∈Basis. dist (f (r n) • i) (l • i))" apply (subst euclidean_dist_l2) using zero_le_dist apply (rule setL2_le_setsum) done also have "… < (∑i∈(Basis::'a set). e / (real_of_nat DIM('a)))" apply (rule setsum_strict_mono) using n apply auto done finally have "dist (f (r n)) l < e" by auto } ultimately have "eventually (λn. 
instance euclidean_space ⊆ heine_borel
proof
  fix f :: "nat => 'a"
  assume f: "bounded (range f)"
  then obtain l::'a and r where r: "subseq r"
    and l: "∀e>0. eventually (λn. ∀i∈Basis. dist (f (r n) • i) (l • i) < e) sequentially"
    using compact_lemma [OF f] by blast
  { fix e::real
    assume "e > 0"
    then have "e / real_of_nat DIM('a) > 0"
      by (auto intro!: divide_pos_pos DIM_positive)
    with l have "eventually (λn. ∀i∈Basis. dist (f (r n) • i) (l • i) < e / (real_of_nat DIM('a))) sequentially"
      by simp
    moreover
    { fix n
      assume n: "∀i∈Basis. dist (f (r n) • i) (l • i) < e / (real_of_nat DIM('a))"
      have "dist (f (r n)) l ≤ (∑i∈Basis. dist (f (r n) • i) (l • i))"
        apply (subst euclidean_dist_l2)
        using zero_le_dist
        apply (rule setL2_le_setsum)
        done
      also have "… < (∑i∈(Basis::'a set). e / (real_of_nat DIM('a)))"
        apply (rule setsum_strict_mono)
        using n
        apply auto
        done
      finally have "dist (f (r n)) l < e"
        by auto }
    ultimately have "eventually (λn. dist (f (r n)) l < e) sequentially"
      by (rule eventually_elim1) }
  then have *: "((f o r) ---> l) sequentially"
    unfolding o_def tendsto_iff by simp
  with r show "∃l r. subseq r ∧ ((f o r) ---> l) sequentially"
    by auto
qed

lemma bounded_fst: "bounded s ==> bounded (fst ` s)"
  unfolding bounded_def
  apply clarify
  apply (rule_tac x="a" in exI)
  apply (rule_tac x="e" in exI)
  apply clarsimp
  apply (drule (1) bspec)
  apply (simp add: dist_Pair_Pair)
  apply (erule order_trans [OF real_sqrt_sum_squares_ge1])
  done

lemma bounded_snd: "bounded s ==> bounded (snd ` s)"
  unfolding bounded_def
  apply clarify
  apply (rule_tac x="b" in exI)
  apply (rule_tac x="e" in exI)
  apply clarsimp
  apply (drule (1) bspec)
  apply (simp add: dist_Pair_Pair)
  apply (erule order_trans [OF real_sqrt_sum_squares_ge2])
  done

instance prod :: (heine_borel, heine_borel) heine_borel
proof
  fix f :: "nat => 'a × 'b"
  assume f: "bounded (range f)"
  from f have s1: "bounded (range (fst o f))"
    unfolding image_comp by (rule bounded_fst)
  obtain l1 r1 where r1: "subseq r1" and l1: "(λn. fst (f (r1 n))) ----> l1"
    using bounded_imp_convergent_subsequence [OF s1] unfolding o_def by fast
  from f have s2: "bounded (range (snd o f o r1))"
    by (auto simp add: image_comp intro: bounded_snd bounded_subset)
  obtain l2 r2 where r2: "subseq r2" and l2: "((λn. snd (f (r1 (r2 n)))) ---> l2) sequentially"
    using bounded_imp_convergent_subsequence [OF s2] unfolding o_def by fast
  have l1': "((λn. fst (f (r1 (r2 n)))) ---> l1) sequentially"
    using LIMSEQ_subseq_LIMSEQ [OF l1 r2] unfolding o_def .
  have l: "((f o (r1 o r2)) ---> (l1, l2)) sequentially"
    using tendsto_Pair [OF l1' l2] unfolding o_def by simp
  have r: "subseq (r1 o r2)"
    using r1 r2 unfolding subseq_def by simp
  show "∃l r. subseq r ∧ ((f o r) ---> l) sequentially"
    using l r by fast
qed

subsubsection{* Completeness *}

definition complete :: "'a::metric_space set => bool"
  where "complete s <-> (∀f. (∀n. f n ∈ s) ∧ Cauchy f --> (∃l∈s. f ----> l))"

lemma compact_imp_complete:
  assumes "compact s"
  shows "complete s"
proof -
  { fix f
    assume as: "(∀n::nat. f n ∈ s)" "Cauchy f"
    from as(1) obtain l r where lr: "l∈s" "subseq r" "(f o r) ----> l"
      using assms unfolding compact_def by blast
    note lr' = seq_suble [OF lr(2)]
    { fix e :: real
      assume "e > 0"
      from as(2) obtain N where N: "∀m n. N ≤ m ∧ N ≤ n --> dist (f m) (f n) < e/2"
        unfolding cauchy_def
        using e > 0
        apply (erule_tac x="e/2" in allE)
        apply auto
        done
      from lr(3)[unfolded LIMSEQ_def, THEN spec[where x="e/2"]]
      obtain M where M: "∀n≥M. dist ((f o r) n) l < e/2"
        using e > 0 by auto
      { fix n :: nat
        assume n: "n ≥ max N M"
        have "dist ((f o r) n) l < e/2"
          using n M by auto
        moreover have "r n ≥ N"
          using lr'[of n] n by auto
        then have "dist (f n) ((f o r) n) < e / 2"
          using N and n by auto
        ultimately have "dist (f n) l < e"
          using dist_triangle_half_r[of "f (r n)" "f n" e l]
          by (auto simp add: dist_commute) }
      then have "∃N. ∀n≥N. dist (f n) l < e" by blast }
    then have "∃l∈s. (f ---> l) sequentially" using l∈s
      unfolding LIMSEQ_def by auto }
  then show ?thesis unfolding complete_def by auto
qed

lemma nat_approx_posE:
  fixes e::real
  assumes "0 < e"
  obtains n :: nat where "1 / (Suc n) < e"
proof atomize_elim
  have " 1 / real (Suc (nat (ceiling (1/e)))) < 1 / (ceiling (1/e))"
    by (rule divide_strict_left_mono) (auto intro!: mult_pos_pos simp: 0 < e)
  also have "1 / (ceiling (1/e)) ≤ 1 / (1/e)"
    by (rule divide_left_mono) (auto intro!: divide_pos_pos simp: 0 < e)
  also have "… = e" by simp
  finally show  "∃n. 1 / real (Suc n) < e" ..
qed
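
text {* Completeness combines with total boundedness to characterise
  compactness in metric spaces; this is proved next as
  compact_eq_totally_bounded. *}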
lemma compact_eq_totally_bounded:
  "compact s <-> complete s ∧ (∀e>0. ∃k. finite k ∧ s ⊆ (\<Union>((λx. ball x e) ` k)))"
    (is "_ <-> ?rhs")
proof
  assume assms: "?rhs"
  then obtain k where k: "!!e. 0 < e ==> finite (k e)" "!!e. 0 < e ==> s ⊆ (\<Union>x∈k e. ball x e)"
    by (auto simp: choice_iff')
  show "compact s"
  proof cases
    assume "s = {}"
    then show "compact s" by (simp add: compact_def)
  next
    assume "s ≠ {}"
    show ?thesis
      unfolding compact_def
    proof safe
      fix f :: "nat => 'a"
      assume f: "∀n. f n ∈ s"
      def e ≡ "λn. 1 / (2 * Suc n)"
      then have [simp]: "!!n. 0 < e n" by auto
      def B ≡ "λn U. SOME b. infinite {n. f n ∈ b} ∧ (∃x. b ⊆ ball x (e n) ∩ U)"
      { fix n U
        assume "infinite {n. f n ∈ U}"
        then have "∃b∈k (e n). infinite {i∈{n. f n ∈ U}. f i ∈ ball b (e n)}"
          using k f by (intro pigeonhole_infinite_rel) (auto simp: subset_eq)
        then guess a ..
        then have "∃b. infinite {i. f i ∈ b} ∧ (∃x. b ⊆ ball x (e n) ∩ U)"
          by (intro exI[of _ "ball a (e n) ∩ U"] exI[of _ a]) (auto simp: ac_simps)
        from someI_ex[OF this]
        have "infinite {i. f i ∈ B n U}" "∃x. B n U ⊆ ball x (e n) ∩ U"
          unfolding B_def by auto }
      note B = this
      def F ≡ "nat_rec (B 0 UNIV) B"
      { fix n
        have "infinite {i. f i ∈ F n}"
          by (induct n) (auto simp: F_def B) }
      then have F: "!!n. ∃x. F (Suc n) ⊆ ball x (e n) ∩ F n"
        using B by (simp add: F_def)
      then have F_dec: "!!m n. m ≤ n ==> F n ⊆ F m"
        using decseq_SucI[of F] by (auto simp: decseq_def)
      obtain sel where sel: "!!k i. i < sel k i" "!!k i. f (sel k i) ∈ F k"
      proof (atomize_elim, unfold all_conj_distrib[symmetric], intro choice allI)
        fix k i
        have "infinite ({n. f n ∈ F k} - {.. i})"
          using infinite {n. f n ∈ F k} by auto
        from infinite_imp_nonempty[OF this]
        show "∃x>i. f x ∈ F k"
          by (simp add: set_eq_iff not_le conj_commute)
      qed
      def t ≡ "nat_rec (sel 0 0) (λn i. sel (Suc n) i)"
      have "subseq t"
        unfolding subseq_Suc_iff by (simp add: t_def sel)
      moreover
      have "∀i. (f o t) i ∈ s"
        using f by auto
      moreover
      { fix n
        have "(f o t) n ∈ F n"
          by (cases n) (simp_all add: t_def sel) }
      note t = this
      have "Cauchy (f o t)"
      proof (safe intro!: metric_CauchyI exI elim!: nat_approx_posE)
        fix r :: real and N n m
        assume "1 / Suc N < r" "Suc N ≤ n" "Suc N ≤ m"
        then have "(f o t) n ∈ F (Suc N)" "(f o t) m ∈ F (Suc N)" "2 * e N < r"
          using F_dec t by (auto simp: e_def field_simps real_of_nat_Suc)
        with F[of N] obtain x where "dist x ((f o t) n) < e N" "dist x ((f o t) m) < e N"
          by (auto simp: subset_eq)
        with dist_triangle[of "(f o t) m" "(f o t) n" x] 2 * e N < r
        show "dist ((f o t) m) ((f o t) n) < r"
          by (simp add: dist_commute)
      qed
      ultimately show "∃l∈s. ∃r. subseq r ∧ (f o r) ----> l"
        using assms unfolding complete_def by blast
    qed
  qed
qed (metis compact_imp_complete compact_imp_seq_compact seq_compact_imp_totally_bounded)

lemma cauchy: "Cauchy s <-> (∀e>0.∃ N::nat. ∀n≥N. dist(s n)(s N) < e)" (is "?lhs = ?rhs")
proof -
  { assume ?rhs
    { fix e::real
      assume "e>0"
      with ?rhs obtain N where N: "∀n≥N. dist (s n) (s N) < e/2"
        by (erule_tac x="e/2" in allE) auto
      { fix n m
        assume nm: "N ≤ m ∧ N ≤ n"
        then have "dist (s m) (s n) < e" using N
          using dist_triangle_half_l[of "s m" "s N" "e" "s n"]
          by blast }
      then have "∃N. ∀m n. N ≤ m ∧ N ≤ n --> dist (s m) (s n) < e"
        by blast }
    then have ?lhs
      unfolding cauchy_def by blast }
  then show ?thesis
    unfolding cauchy_def
    using dist_triangle_half_l
    by blast
qed
lemma cauchy_imp_bounded:
  assumes "Cauchy s"
  shows "bounded (range s)"
proof -
  from assms obtain N :: nat where "∀m n. N ≤ m ∧ N ≤ n --> dist (s m) (s n) < 1"
    unfolding cauchy_def
    apply (erule_tac x= 1 in allE)
    apply auto
    done
  then have N:"∀n. N ≤ n --> dist (s N) (s n) < 1" by auto
  moreover
  have "bounded (s ` {0..N})"
    using finite_imp_bounded[of "s ` {1..N}"] by auto
  then obtain a where a:"∀x∈s ` {0..N}. dist (s N) x ≤ a"
    unfolding bounded_any_center [where a="s N"] by auto
  ultimately show "?thesis"
    unfolding bounded_any_center [where a="s N"]
    apply (rule_tac x="max a 1" in exI)
    apply auto
    apply (erule_tac x=y in allE)
    apply (erule_tac x=y in ballE)
    apply auto
    done
qed

instance heine_borel < complete_space
proof
  fix f :: "nat => 'a" assume "Cauchy f"
  then have "bounded (range f)" by (rule cauchy_imp_bounded)
  then have "compact (closure (range f))"
    unfolding compact_eq_bounded_closed by auto
  then have "complete (closure (range f))"
    by (rule compact_imp_complete)
  moreover have "∀n. f n ∈ closure (range f)"
    using closure_subset [of "range f"] by auto
  ultimately have "∃l∈closure (range f). (f ---> l) sequentially"
    using `Cauchy f` unfolding complete_def by auto
  then show "convergent f" unfolding convergent_def by auto
qed

instance euclidean_space ⊆ banach ..

lemma complete_univ: "complete (UNIV :: 'a::complete_space set)"
proof (simp add: complete_def, rule, rule)
  fix f :: "nat => 'a" assume "Cauchy f"
  then have "convergent f" by (rule Cauchy_convergent)
  then show "∃l. f ----> l" unfolding convergent_def .
qed

lemma complete_imp_closed:
  assumes "complete s"
  shows "closed s"
proof -
  { fix x assume "x islimpt s"
    then obtain f where f: "∀n. f n ∈ s - {x}" "(f ---> x) sequentially"
      unfolding islimpt_sequential by auto
    then obtain l where l: "l∈s" "(f ---> l) sequentially"
      using `complete s`[unfolded complete_def] using LIMSEQ_imp_Cauchy[of f x] by auto
    then have "x ∈ s"
      using tendsto_unique[of sequentially f l x]
      trivial_limit_sequentially f(2) by auto }
  then show "closed s" unfolding closed_limpt by auto
qed

lemma complete_eq_closed:
  fixes s :: "'a::complete_space set"
  shows "complete s <-> closed s" (is "?lhs = ?rhs")
proof
  assume ?lhs then show ?rhs by (rule complete_imp_closed)
next
  assume ?rhs
  { fix f assume as:"∀n::nat. f n ∈ s" "Cauchy f"
    then obtain l where "(f ---> l) sequentially"
      using complete_univ[unfolded complete_def, THEN spec[where x=f]] by auto
    then have "∃l∈s. (f ---> l) sequentially"
      using `?rhs`[unfolded closed_sequential_limits,
        THEN spec[where x=f], THEN spec[where x=l]]
      using as(1) by auto }
  then show ?lhs unfolding complete_def by auto
qed

lemma convergent_eq_cauchy:
  fixes s :: "nat => 'a::complete_space"
  shows "(∃l. (s ---> l) sequentially) <-> Cauchy s"
  unfolding Cauchy_convergent_iff convergent_def ..

lemma convergent_imp_bounded:
  fixes s :: "nat => 'a::metric_space"
  shows "(s ---> l) sequentially ==> bounded (range s)"
  by (intro cauchy_imp_bounded LIMSEQ_imp_Cauchy)

lemma compact_cball[simp]:
  fixes x :: "'a::heine_borel"
  shows "compact(cball x e)"
  using compact_eq_bounded_closed bounded_cball closed_cball by blast

lemma compact_frontier_bounded[intro]:
  fixes s :: "'a::heine_borel set"
  shows "bounded s ==> compact(frontier s)"
  unfolding frontier_def
  using compact_eq_bounded_closed by blast

lemma compact_frontier[intro]:
  fixes s :: "'a::heine_borel set"
  shows "compact s ==> compact (frontier s)"
  using compact_eq_bounded_closed compact_frontier_bounded by blast

lemma frontier_subset_compact:
  fixes s :: "'a::heine_borel set"
  shows "compact s ==> frontier s ⊆ s"
  using frontier_subset_closed compact_eq_bounded_closed by blast

subsection {* Bounded closed nest property (proof does not use Heine-Borel) *}

lemma bounded_closed_nest:
  assumes "∀n. closed(s n)"
    and "∀n. (s n ≠ {})"
    and "(∀m n. m ≤ n --> s n ⊆ s m)"
    and "bounded(s 0)"
  shows "∃a::'a::heine_borel. ∀n::nat. a ∈ s(n)"
proof -
  from assms(2) obtain x where x:"∀n::nat. x n ∈ s n"
    using choice[of "λn x. x∈ s n"] by auto
  from assms(4,1) have *:"seq_compact (s 0)"
    using bounded_closed_imp_seq_compact[of "s 0"] by auto
  then obtain l r where lr:"l∈s 0" "subseq r" "((x o r) ---> l) sequentially"
    unfolding seq_compact_def
    apply (erule_tac x=x in allE)
    using x using assms(3)
    apply blast
    done
  { fix n :: nat
    { fix e :: real assume "e>0"
      with lr(3) obtain N where N:"∀m≥N. dist ((x o r) m) l < e"
        unfolding LIMSEQ_def by auto
      then have "dist ((x o r) (max N n)) l < e" by auto
      moreover
      have "r (max N n) ≥ n" using lr(2) using seq_suble[of r "max N n"] by auto
      then have "(x o r) (max N n) ∈ s n"
        using x apply (erule_tac x=n in allE)
        using x apply (erule_tac x="r (max N n)" in allE)
        using assms(3) apply (erule_tac x=n in allE)
        apply (erule_tac x="r (max N n)" in allE)
        apply auto
        done
      ultimately have "∃y∈s n. dist y l < e" by auto }
    then have "l ∈ s n"
      using closed_approachable[of "s n" l] assms(1) by blast }
  then show ?thesis by auto
qed

text {* Decreasing case does not even need compactness, just completeness. *}

lemma decreasing_closed_nest:
  assumes "∀n. closed(s n)"
    "∀n. (s n ≠ {})"
    "∀m n. m ≤ n --> s n ⊆ s m"
    "∀e>0. ∃n. ∀x ∈ (s n). ∀ y ∈ (s n). dist x y < e"
  shows "∃a::'a::complete_space. ∀n::nat. a ∈ s n"
proof-
  have "∀n. ∃ x. x∈s n" using assms(2) by auto
  then have "∃t. ∀n. t n ∈ s n" using choice[of "λ n x. x ∈ s n"] by auto
  then obtain t where t: "∀n. t n ∈ s n" by auto
  { fix e :: real assume "e > 0"
    then obtain N where N:"∀x∈s N. ∀y∈s N. dist x y < e" using assms(4) by auto
    { fix m n :: nat assume "N ≤ m ∧ N ≤ n"
      then have "t m ∈ s N" "t n ∈ s N"
        using assms(3) t unfolding subset_eq t by blast+
      then have "dist (t m) (t n) < e" using N by auto }
    then have "∃N. ∀m n. N ≤ m ∧ N ≤ n --> dist (t m) (t n) < e" by auto }
  then have "Cauchy t" unfolding cauchy_def by auto
  then obtain l where l:"(t ---> l) sequentially"
    using complete_univ unfolding complete_def by auto
  { fix n :: nat
    { fix e :: real assume "e > 0"
      then obtain N :: nat where N: "∀n≥N. dist (t n) l < e"
        using l[unfolded LIMSEQ_def] by auto
      have "t (max n N) ∈ s n"
        using assms(3) unfolding subset_eq
        apply (erule_tac x=n in allE)
        apply (erule_tac x="max n N" in allE)
        using t apply auto
        done
      then have "∃y∈s n. dist y l < e"
        apply (rule_tac x="t (max n N)" in bexI)
        using N apply auto
        done }
    then have "l ∈ s n"
      using closed_approachable[of "s n" l] assms(1) by auto }
  then show ?thesis by auto
qed

text {* Strengthen it to the intersection actually being a singleton. *}

lemma decreasing_closed_nest_sing:
  fixes s :: "nat => 'a::complete_space set"
  assumes "∀n. closed(s n)"
    "∀n. s n ≠ {}"
    "∀m n. m ≤ n --> s n ⊆ s m"
    "∀e>0. ∃n. ∀x ∈ (s n). ∀ y∈(s n). dist x y < e"
  shows "∃a. \<Inter>(range s) = {a}"
proof -
  obtain a where a: "∀n. a ∈ s n"
    using decreasing_closed_nest[of s] using assms by auto
  { fix b assume b: "b ∈ \<Inter>(range s)"
    { fix e :: real assume "e > 0"
      then have "dist a b < e" using assms(4) and b and a by blast }
    then have "dist a b = 0" by (metis dist_eq_0_iff dist_nz less_le) }
  with a have "\<Inter>(range s) = {a}" unfolding image_def by auto
  then show ?thesis ..
qed
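text {* Two quick sanity checks, added here for illustration; each is an
  immediate instance of a lemma proved above: the real line is complete, and
  closed balls in a Heine-Borel space are compact. *}

lemma "complete (UNIV :: real set)"
  by (rule complete_univ)

lemma "compact (cball (0::real) 1)"
  by (rule compact_cball)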
text{* Cauchy-type criteria for uniform convergence. *}

lemma uniformly_convergent_eq_cauchy:
  fixes s::"nat => 'b => 'a::complete_space"
  shows "(∃l. ∀e>0. ∃N. ∀n x. N ≤ n ∧ P x --> dist(s n x)(l x) < e) <->
    (∀e>0. ∃N. ∀m n x. N ≤ m ∧ N ≤ n ∧ P x --> dist (s m x) (s n x) < e)"
  (is "?lhs = ?rhs")
proof
  assume ?lhs
  then obtain l where l:"∀e>0. ∃N. ∀n x. N ≤ n ∧ P x --> dist (s n x) (l x) < e"
    by auto
  { fix e :: real assume "e > 0"
    then obtain N :: nat where N: "∀n x. N ≤ n ∧ P x --> dist (s n x) (l x) < e / 2"
      using l[THEN spec[where x="e/2"]] by auto
    { fix n m :: nat and x :: "'b" assume "N ≤ m ∧ N ≤ n ∧ P x"
      then have "dist (s m x) (s n x) < e"
        using N[THEN spec[where x=m], THEN spec[where x=x]]
        using N[THEN spec[where x=n], THEN spec[where x=x]]
        using dist_triangle_half_l[of "s m x" "l x" e "s n x"] by auto }
    then have "∃N. ∀m n x. N ≤ m ∧ N ≤ n ∧ P x --> dist (s m x) (s n x) < e" by auto }
  then show ?rhs by auto
next
  assume ?rhs
  then have "∀x. P x --> Cauchy (λn. s n x)"
    unfolding cauchy_def
    apply auto
    apply (erule_tac x=e in allE)
    apply auto
    done
  then obtain l where l: "∀x. P x --> ((λn. s n x) ---> l x) sequentially"
    unfolding convergent_eq_cauchy[symmetric]
    using choice[of "λx l. P x --> ((λn. s n x) ---> l) sequentially"] by auto
  { fix e :: real assume "e > 0"
    then obtain N where N:"∀m n x. N ≤ m ∧ N ≤ n ∧ P x --> dist (s m x) (s n x) < e/2"
      using `?rhs`[THEN spec[where x="e/2"]] by auto
    { fix x assume "P x"
      then obtain M where M:"∀n≥M. dist (s n x) (l x) < e/2"
        using l[THEN spec[where x=x], unfolded LIMSEQ_def] and `e > 0`
        by (auto elim!: allE[where x="e/2"])
      fix n :: nat assume "n ≥ N"
      then have "dist(s n x)(l x) < e"
        using `P x` and
        N[THEN spec[where x=n], THEN spec[where x="N+M"], THEN spec[where x=x]]
        using M[THEN spec[where x="N+M"]] and
        dist_triangle_half_l[of "s n x" "s (N+M) x" e "l x"]
        by (auto simp add: dist_commute) }
    then have "∃N. ∀n x. N ≤ n ∧ P x --> dist(s n x)(l x) < e" by auto }
  then show ?lhs by auto
qed

lemma uniformly_cauchy_imp_uniformly_convergent:
  fixes s :: "nat => 'a => 'b::complete_space"
  assumes "∀e>0.∃N. ∀m (n::nat) x. N ≤ m ∧ N ≤ n ∧ P x --> dist(s m x)(s n x) < e"
    and "∀x. P x --> (∀e>0. ∃N. ∀n. N ≤ n --> dist(s n x)(l x) < e)"
  shows "∀e>0. ∃N. ∀n x. N ≤ n ∧ P x --> dist(s n x)(l x) < e"
proof -
  obtain l' where l:"∀e>0. ∃N. ∀n x. N ≤ n ∧ P x --> dist (s n x) (l' x) < e"
    using assms(1) unfolding uniformly_convergent_eq_cauchy[symmetric] by auto
  moreover
  { fix x assume "P x"
    then have "l x = l' x"
      using tendsto_unique[OF trivial_limit_sequentially, of "λn. s n x" "l x" "l' x"]
      using l and assms(2) unfolding LIMSEQ_def by blast }
  ultimately show ?thesis by auto
qed

subsection {* Continuity *}

text{* Derive the epsilon-delta forms, which we often use as "definitions" *}

lemma continuous_within_eps_delta:
  "continuous (at x within s) f <->
    (∀e>0. ∃d>0. ∀x'∈ s. dist x' x < d --> dist (f x') (f x) < e)"
  unfolding continuous_within and Lim_within
  apply auto
  unfolding dist_nz[symmetric]
  apply (auto del: allE elim!:allE)
  apply(rule_tac x=d in exI)
  apply auto
  done

lemma continuous_at_eps_delta:
  "continuous (at x) f <-> (∀e > 0. ∃d > 0. ∀x'. dist x' x < d --> dist (f x') (f x) < e)"
  using continuous_within_eps_delta [of x UNIV f] by simp

text{* Versions in terms of open balls. *}

lemma continuous_within_ball:
  "continuous (at x within s) f <->
    (∀e > 0. ∃d > 0. f ` (ball x d ∩ s) ⊆ ball (f x) e)"
  (is "?lhs = ?rhs")
proof
  assume ?lhs
  { fix e :: real assume "e > 0"
    then obtain d where
      d: "d>0" "∀xa∈s. 0 < dist xa x ∧ dist xa x < d --> dist (f xa) (f x) < e"
      using `?lhs`[unfolded continuous_within Lim_within] by auto
    { fix y assume "y ∈ f ` (ball x d ∩ s)"
      then have "y ∈ ball (f x) e"
        using d(2)
        unfolding dist_nz[symmetric]
        apply (auto simp add: dist_commute)
        apply (erule_tac x=xa in ballE)
        apply auto
        using `e > 0`
        apply auto
        done }
    then have "∃d>0. f ` (ball x d ∩ s) ⊆ ball (f x) e"
      using `d > 0`
      unfolding subset_eq ball_def by (auto simp add: dist_commute) }
  then show ?rhs by auto
next
  assume ?rhs
  then show ?lhs
    unfolding continuous_within Lim_within ball_def subset_eq
    apply (auto simp add: dist_commute)
    apply (erule_tac x=e in allE)
    apply auto
    done
qed

lemma continuous_at_ball:
  "continuous (at x) f <-> (∀e>0. ∃d>0. f ` (ball x d) ⊆ ball (f x) e)"
  (is "?lhs = ?rhs")
proof
  assume ?lhs
  then show ?rhs
    unfolding continuous_at Lim_at subset_eq Ball_def Bex_def image_iff mem_ball
    apply auto
    apply (erule_tac x=e in allE)
    apply auto
    apply (rule_tac x=d in exI)
    apply auto
    apply (erule_tac x=xa in allE)
    apply (auto simp add: dist_commute dist_nz)
    unfolding dist_nz[symmetric]
    apply auto
    done
next
  assume ?rhs
  then show ?lhs
    unfolding continuous_at Lim_at subset_eq Ball_def Bex_def image_iff mem_ball
    apply auto
    apply (erule_tac x=e in allE)
    apply auto
    apply (rule_tac x=d in exI)
    apply auto
    apply (erule_tac x="f xa" in allE)
    apply (auto simp add: dist_commute dist_nz)
    done
qed
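text {* A toy instance of the epsilon-delta characterisation just derived
  (an illustrative addition): the identity map is continuous, taking
  @{text d} to be @{text e} itself. *}

lemma "continuous (at x) (λy. y :: 'a::metric_space)"
  unfolding continuous_at_eps_delta by blast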
text{* Define setwise continuity in terms of limits within the set. *}

lemma continuous_on_iff:
  "continuous_on s f <->
    (∀x∈s. ∀e>0. ∃d>0. ∀x'∈s. dist x' x < d --> dist (f x') (f x) < e)"
  unfolding continuous_on_def Lim_within
  apply (intro ball_cong [OF refl] all_cong ex_cong)
  apply (rename_tac y, case_tac "y = x")
  apply simp
  apply (simp add: dist_nz)
  done

definition uniformly_continuous_on :: "'a set => ('a::metric_space => 'b::metric_space) => bool"
  where "uniformly_continuous_on s f <->
    (∀e>0. ∃d>0. ∀x∈s. ∀x'∈s. dist x' x < d --> dist (f x') (f x) < e)"

text{* Some simple consequential lemmas. *}

lemma uniformly_continuous_imp_continuous:
  "uniformly_continuous_on s f ==> continuous_on s f"
  unfolding uniformly_continuous_on_def continuous_on_iff by blast

lemma continuous_at_imp_continuous_within:
  "continuous (at x) f ==> continuous (at x within s) f"
  unfolding continuous_within continuous_at using Lim_at_within by auto

lemma Lim_trivial_limit: "trivial_limit net ==> (f ---> l) net"
  by simp

lemmas continuous_on = continuous_on_def -- "legacy theorem name"

lemma continuous_within_subset:
  "continuous (at x within s) f ==> t ⊆ s ==> continuous (at x within t) f"
  unfolding continuous_within by(metis tendsto_within_subset)

lemma continuous_on_interior:
  "continuous_on s f ==> x ∈ interior s ==> continuous (at x) f"
  apply (erule interiorE)
  apply (drule (1) continuous_on_subset)
  apply (simp add: continuous_on_eq_continuous_at)
  done

lemma continuous_on_eq:
  "(∀x ∈ s. f x = g x) ==> continuous_on s f ==> continuous_on s g"
  unfolding continuous_on_def tendsto_def eventually_at_topological
  by simp

text {* Characterization of various kinds of continuity in terms of sequences. *}

lemma continuous_within_sequentially:
  fixes f :: "'a::metric_space => 'b::topological_space"
  shows "continuous (at a within s) f <->
    (∀x. (∀n::nat. x n ∈ s) ∧ (x ---> a) sequentially
      --> ((f o x) ---> f a) sequentially)"
  (is "?lhs = ?rhs")
proof
  assume ?lhs
  { fix x :: "nat => 'a"
    assume x: "∀n. x n ∈ s" "∀e>0. eventually (λn. dist (x n) a < e) sequentially"
    fix T :: "'b set"
    assume "open T" and "f a ∈ T"
    with `?lhs` obtain d where "d>0" and d:"∀x∈s. 0 < dist x a ∧ dist x a < d --> f x ∈ T"
      unfolding continuous_within tendsto_def eventually_at by (auto simp: dist_nz)
    have "eventually (λn. dist (x n) a < d) sequentially"
      using x(2) `d>0` by simp
    then have "eventually (λn. (f o x) n ∈ T) sequentially"
    proof eventually_elim
      case (elim n)
      then show ?case
        using d x(1) `f a ∈ T` unfolding dist_nz[symmetric] by auto
    qed }
  then show ?rhs unfolding tendsto_iff tendsto_def by simp
next
  assume ?rhs
  then show ?lhs
    unfolding continuous_within tendsto_def [where l="f a"]
    by (simp add: sequentially_imp_eventually_within)
qed

lemma continuous_at_sequentially:
  fixes f :: "'a::metric_space => 'b::topological_space"
  shows "continuous (at a) f <->
    (∀x. (x ---> a) sequentially --> ((f o x) ---> f a) sequentially)"
  using continuous_within_sequentially[of a UNIV f] by simp

lemma continuous_on_sequentially:
  fixes f :: "'a::metric_space => 'b::topological_space"
  shows "continuous_on s f <->
    (∀x. ∀a ∈ s. (∀n. x(n) ∈ s) ∧ (x ---> a) sequentially
      --> ((f o x) ---> f a) sequentially)"
  (is "?lhs = ?rhs")
proof
  assume ?rhs
  then show ?lhs
    using continuous_within_sequentially[of _ s f]
    unfolding continuous_on_eq_continuous_within by auto
next
  assume ?lhs
  then show ?rhs
    unfolding continuous_on_eq_continuous_within
    using continuous_within_sequentially[of _ s f] by auto
qed

lemma uniformly_continuous_on_sequentially:
  "uniformly_continuous_on s f <-> (∀x y. (∀n. x n ∈ s) ∧ (∀n. y n ∈ s) ∧
    ((λn. dist (x n) (y n)) ---> 0) sequentially
    --> ((λn. dist (f(x n)) (f(y n))) ---> 0) sequentially)"
  (is "?lhs = ?rhs")
proof
  assume ?lhs
  { fix x y
    assume x: "∀n. x n ∈ s"
      and y: "∀n. y n ∈ s"
      and xy: "((λn. dist (x n) (y n)) ---> 0) sequentially"
    { fix e :: real assume "e > 0"
      then obtain d where "d > 0"
        and d: "∀x∈s. ∀x'∈s. dist x' x < d --> dist (f x') (f x) < e"
        using `?lhs`[unfolded uniformly_continuous_on_def, THEN spec[where x=e]] by auto
      obtain N where N: "∀n≥N. dist (x n) (y n) < d"
        using xy[unfolded LIMSEQ_def dist_norm] and `d>0` by auto
      { fix n assume "n≥N"
        then have "dist (f (x n)) (f (y n)) < e"
          using N[THEN spec[where x=n]]
          using d[THEN bspec[where x="x n"], THEN bspec[where x="y n"]]
          using x and y
          unfolding dist_commute by simp }
      then have "∃N. ∀n≥N. dist (f (x n)) (f (y n)) < e" by auto }
    then have "((λn. dist (f(x n)) (f(y n))) ---> 0) sequentially"
      unfolding LIMSEQ_def and dist_real_def by auto }
  then show ?rhs by auto
next
  assume ?rhs
  { assume "¬ ?lhs"
    then obtain e where "e > 0" "∀d>0. ∃x∈s. ∃x'∈s. dist x' x < d ∧ ¬ dist (f x') (f x) < e"
      unfolding uniformly_continuous_on_def by auto
    then obtain fa where fa:
      "∀x. 0 < x --> fst (fa x) ∈ s ∧ snd (fa x) ∈ s ∧
        dist (fst (fa x)) (snd (fa x)) < x ∧ ¬ dist (f (fst (fa x))) (f (snd (fa x))) < e"
      using choice[of "λd x. d>0 --> fst x ∈ s ∧ snd x ∈ s ∧
        dist (snd x) (fst x) < d ∧ ¬ dist (f (snd x)) (f (fst x)) < e"]
      unfolding Bex_def by (auto simp add: dist_commute)
    def x ≡ "λn::nat. fst (fa (inverse (real n + 1)))"
    def y ≡ "λn::nat. snd (fa (inverse (real n + 1)))"
    have xyn: "∀n. x n ∈ s ∧ y n ∈ s"
      and xy0: "∀n. dist (x n) (y n) < inverse (real n + 1)"
      and fxy: "∀n. ¬ dist (f (x n)) (f (y n)) < e"
      unfolding x_def and y_def using fa by auto
    { fix e :: real assume "e > 0"
      then obtain N :: nat where "N ≠ 0" and N: "0 < inverse (real N) ∧ inverse (real N) < e"
        unfolding real_arch_inv[of e] by auto
      { fix n :: nat assume "n ≥ N"
        then have "inverse (real n + 1) < inverse (real N)"
          using real_of_nat_ge_zero and `N≠0` by auto
        also have "… < e" using N by auto
        finally have "inverse (real n + 1) < e" by auto
        then have "dist (x n) (y n) < e"
          using xy0[THEN spec[where x=n]] by auto }
      then have "∃N. ∀n≥N. dist (x n) (y n) < e" by auto }
    then have "∀e>0. ∃N. ∀n≥N. dist (f (x n)) (f (y n)) < e"
      using `?rhs`[THEN spec[where x=x], THEN spec[where x=y]] and xyn
      unfolding LIMSEQ_def dist_real_def by auto
    then have False using fxy and `e>0` by auto }
  then show ?lhs
    unfolding uniformly_continuous_on_def by blast
qed

text{* The usual transformation theorems. *}

lemma continuous_transform_within:
  fixes f g :: "'a::metric_space => 'b::topological_space"
  assumes "0 < d"
    and "x ∈ s"
    and "∀x' ∈ s. dist x' x < d --> f x' = g x'"
    and "continuous (at x within s) f"
  shows "continuous (at x within s) g"
  unfolding continuous_within
proof (rule Lim_transform_within)
  show "0 < d" by fact
  show "∀x'∈s. 0 < dist x' x ∧ dist x' x < d --> f x' = g x'"
    using assms(3) by auto
  have "f x = g x"
    using assms(1,2,3) by auto
  then show "(f ---> g x) (at x within s)"
    using assms(4) unfolding continuous_within by simp
qed

lemma continuous_transform_at:
  fixes f g :: "'a::metric_space => 'b::topological_space"
  assumes "0 < d"
    and "∀x'. dist x' x < d --> f x' = g x'"
    and "continuous (at x) f"
  shows "continuous (at x) g"
  using continuous_transform_within [of d x UNIV f g] assms by simp

subsubsection {* Structural rules for pointwise continuity *}

lemmas continuous_within_id = continuous_ident

lemmas continuous_at_id = isCont_ident

lemma continuous_infdist[continuous_intros]:
  assumes "continuous F f"
  shows "continuous F (λx. infdist (f x) A)"
  using assms unfolding continuous_def by (rule tendsto_infdist)

lemma continuous_infnorm[continuous_intros]:
  "continuous F f ==> continuous F (λx. infnorm (f x))"
  unfolding continuous_def by (rule tendsto_infnorm)

lemma continuous_inner[continuous_intros]:
  assumes "continuous F f"
    and "continuous F g"
  shows "continuous F (λx. inner (f x) (g x))"
  using assms unfolding continuous_def by (rule tendsto_inner)

lemmas continuous_at_inverse = isCont_inverse

subsubsection {* Structural rules for setwise continuity *}

lemma continuous_on_infnorm[continuous_on_intros]:
  "continuous_on s f ==> continuous_on s (λx. infnorm (f x))"
  unfolding continuous_on by (fast intro: tendsto_infnorm)

lemma continuous_on_inner[continuous_on_intros]:
  fixes g :: "'a::topological_space => 'b::real_inner"
  assumes "continuous_on s f"
    and "continuous_on s g"
  shows "continuous_on s (λx. inner (f x) (g x))"
  using bounded_bilinear_inner assms
  by (rule bounded_bilinear.continuous_on)
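text {* A small worked instance of the structural rules (an illustrative
  addition): the squared norm @{text "λx. inner x x"} is continuous on any
  set, by one application of @{text continuous_on_inner} to the identity on
  both sides. We assume the standard library fact @{text continuous_on_id}. *}

lemma "continuous_on s (λx::'a::real_inner. inner x x)"
  by (intro continuous_on_inner continuous_on_id)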
dist (f x) (g x))"proof - { fix a b c d :: 'b have "¦dist a b - dist c d¦ ≤ dist a c + dist b d" using dist_triangle2 [of a b c] dist_triangle2 [of b c d] using dist_triangle3 [of c d a] dist_triangle [of a d b] by arith } note le = this { fix x y assume f: "(λn. dist (f (x n)) (f (y n))) ----> 0" assume g: "(λn. dist (g (x n)) (g (y n))) ----> 0" have "(λn. ¦dist (f (x n)) (g (x n)) - dist (f (y n)) (g (y n))¦) ----> 0" by (rule Lim_transform_bound [OF _ tendsto_add_zero [OF f g]], simp add: le) } then show ?thesis using assms unfolding uniformly_continuous_on_sequentially unfolding dist_real_def by simpqedlemma uniformly_continuous_on_norm[continuous_on_intros]: assumes "uniformly_continuous_on s f" shows "uniformly_continuous_on s (λx. norm (f x))" unfolding norm_conv_dist using assms by (intro uniformly_continuous_on_dist uniformly_continuous_on_const)lemma (in bounded_linear) uniformly_continuous_on[continuous_on_intros]: assumes "uniformly_continuous_on s g" shows "uniformly_continuous_on s (λx. f (g x))" using assms unfolding uniformly_continuous_on_sequentially unfolding dist_norm tendsto_norm_zero_iff diff[symmetric] by (auto intro: tendsto_zero)lemma uniformly_continuous_on_cmul[continuous_on_intros]: fixes f :: "'a::metric_space => 'b::real_normed_vector" assumes "uniformly_continuous_on s f" shows "uniformly_continuous_on s (λx. c *⇩R f(x))" using bounded_linear_scaleR_right assms by (rule bounded_linear.uniformly_continuous_on)lemma dist_minus: fixes x y :: "'a::real_normed_vector" shows "dist (- x) (- y) = dist x y" unfolding dist_norm minus_diff_minus norm_minus_cancel ..lemma uniformly_continuous_on_minus[continuous_on_intros]: fixes f :: "'a::metric_space => 'b::real_normed_vector" shows "uniformly_continuous_on s f ==> uniformly_continuous_on s (λx. - f x)" unfolding uniformly_continuous_on_def dist_minus .lemma uniformly_continuous_on_add[continuous_on_intros]: fixes f g :: "'a::metric_space => 'b::real_normed_vector" assumes "uniformly_continuous_on s f" and "uniformly_continuous_on s g" shows "uniformly_continuous_on s (λx. f x + g x)" using assms unfolding uniformly_continuous_on_sequentially unfolding dist_norm tendsto_norm_zero_iff add_diff_add by (auto intro: tendsto_add_zero)lemma uniformly_continuous_on_diff[continuous_on_intros]: fixes f :: "'a::metric_space => 'b::real_normed_vector" assumes "uniformly_continuous_on s f" and "uniformly_continuous_on s g" shows "uniformly_continuous_on s (λx. f x - g x)" unfolding ab_diff_minus using assms by (intro uniformly_continuous_on_add uniformly_continuous_on_minus)text{* Continuity of all kinds is preserved under composition. *}lemmas continuous_at_compose = isCont_olemma uniformly_continuous_on_compose[continuous_on_intros]: assumes "uniformly_continuous_on s f" "uniformly_continuous_on (f s) g" shows "uniformly_continuous_on s (g o f)"proof - { fix e :: real assume "e > 0" then obtain d where "d > 0" and d: "∀x∈f s. ∀x'∈f s. dist x' x < d --> dist (g x') (g x) < e" using assms(2) unfolding uniformly_continuous_on_def by auto obtain d' where "d'>0" "∀x∈s. ∀x'∈s. dist x' x < d' --> dist (f x') (f x) < d" using d > 0 using assms(1) unfolding uniformly_continuous_on_def by auto then have "∃d>0. ∀x∈s. ∀x'∈s. dist x' x < d --> dist ((g o f) x') ((g o f) x) < e" using d>0 using d by auto } then show ?thesis using assms unfolding uniformly_continuous_on_def by autoqedtext{* Continuity in terms of open preimages. *}lemma continuous_at_open: "continuous (at x) f <-> (∀t. open t ∧ f x ∈ t --> (∃s. open s ∧ x ∈ s ∧ (∀x' ∈ s. 
(f x') ∈ t)))" unfolding continuous_within_topological [of x UNIV f] unfolding imp_conjL by (intro all_cong imp_cong ex_cong conj_cong refl) autolemma continuous_imp_tendsto: assumes "continuous (at x0) f" and "x ----> x0" shows "(f o x) ----> (f x0)"proof (rule topological_tendstoI) fix S assume "open S" "f x0 ∈ S" then obtain T where T_def: "open T" "x0 ∈ T" "∀x∈T. f x ∈ S" using assms continuous_at_open by metis then have "eventually (λn. x n ∈ T) sequentially" using assms T_def by (auto simp: tendsto_def) then show "eventually (λn. (f o x) n ∈ S) sequentially" using T_def by (auto elim!: eventually_elim1)qedlemma continuous_on_open: "continuous_on s f <-> (∀t. openin (subtopology euclidean (f s)) t --> openin (subtopology euclidean s) {x ∈ s. f x ∈ t})" unfolding continuous_on_open_invariant openin_open Int_def vimage_def Int_commute by (simp add: imp_ex imageI conj_commute eq_commute cong: conj_cong)text {* Similarly in terms of closed sets. *}lemma continuous_on_closed: "continuous_on s f <-> (∀t. closedin (subtopology euclidean (f s)) t --> closedin (subtopology euclidean s) {x ∈ s. f x ∈ t})" unfolding continuous_on_closed_invariant closedin_closed Int_def vimage_def Int_commute by (simp add: imp_ex imageI conj_commute eq_commute cong: conj_cong)text {* Half-global and completely global cases. *}lemma continuous_open_in_preimage: assumes "continuous_on s f" "open t" shows "openin (subtopology euclidean s) {x ∈ s. f x ∈ t}"proof - have *: "∀x. x ∈ s ∧ f x ∈ t <-> x ∈ s ∧ f x ∈ (t ∩ f s)" by auto have "openin (subtopology euclidean (f s)) (t ∩ f s)" using openin_open_Int[of t "f s", OF assms(2)] unfolding openin_open by auto then show ?thesis using assms(1)[unfolded continuous_on_open, THEN spec[where x="t ∩ f s"]] using * by autoqedlemma continuous_closed_in_preimage: assumes "continuous_on s f" and "closed t" shows "closedin (subtopology euclidean s) {x ∈ s. f x ∈ t}"proof - have *: "∀x. x ∈ s ∧ f x ∈ t <-> x ∈ s ∧ f x ∈ (t ∩ f s)" by auto have "closedin (subtopology euclidean (f s)) (t ∩ f s)" using closedin_closed_Int[of t "f s", OF assms(2)] unfolding Int_commute by auto then show ?thesis using assms(1)[unfolded continuous_on_closed, THEN spec[where x="t ∩ f s"]] using * by autoqedlemma continuous_open_preimage: assumes "continuous_on s f" and "open s" and "open t" shows "open {x ∈ s. f x ∈ t}"proof- obtain T where T: "open T" "{x ∈ s. f x ∈ t} = s ∩ T" using continuous_open_in_preimage[OF assms(1,3)] unfolding openin_open by auto then show ?thesis using open_Int[of s T, OF assms(2)] by autoqedlemma continuous_closed_preimage: assumes "continuous_on s f" and "closed s" and "closed t" shows "closed {x ∈ s. f x ∈ t}"proof- obtain T where "closed T" "{x ∈ s. f x ∈ t} = s ∩ T" using continuous_closed_in_preimage[OF assms(1,3)] unfolding closedin_closed by auto then show ?thesis using closed_Int[of s T, OF assms(2)] by autoqedlemma continuous_open_preimage_univ: "∀x. continuous (at x) f ==> open s ==> open {x. f x ∈ s}" using continuous_open_preimage[of UNIV f s] open_UNIV continuous_at_imp_continuous_on by autolemma continuous_closed_preimage_univ: "(∀x. continuous (at x) f) ==> closed s ==> closed {x. f x ∈ s}" using continuous_closed_preimage[of UNIV f s] closed_UNIV continuous_at_imp_continuous_on by autolemma continuous_open_vimage: "∀x. continuous (at x) f ==> open s ==> open (f - s)" unfolding vimage_def by (rule continuous_open_preimage_univ)lemma continuous_closed_vimage: "∀x. 
lemma continuous_closed_vimage:
  "∀x. continuous (at x) f ==> closed s ==> closed (f -` s)"
  unfolding vimage_def by (rule continuous_closed_preimage_univ)

lemma interior_image_subset:
  assumes "∀x. continuous (at x) f"
    and "inj f"
  shows "interior (f ` s) ⊆ f ` (interior s)"
proof
  fix x assume "x ∈ interior (f ` s)"
  then obtain T where as: "open T" "x ∈ T" "T ⊆ f ` s" ..
  then have "x ∈ f ` s" by auto
  then obtain y where y: "y ∈ s" "x = f y" by auto
  have "open (vimage f T)"
    using assms(1) `open T` by (rule continuous_open_vimage)
  moreover have "y ∈ vimage f T"
    using `x = f y` `x ∈ T` by simp
  moreover have "vimage f T ⊆ s"
    using `T ⊆ image f s` `inj f` unfolding inj_on_def subset_eq by auto
  ultimately have "y ∈ interior s" ..
  with `x = f y` show "x ∈ f ` interior s" ..
qed

text {* Equality of continuous functions on closure and related results. *}

lemma continuous_closed_in_preimage_constant:
  fixes f :: "_ => 'b::t1_space"
  shows "continuous_on s f ==> closedin (subtopology euclidean s) {x ∈ s. f x = a}"
  using continuous_closed_in_preimage[of s f "{a}"] by auto

lemma continuous_closed_preimage_constant:
  fixes f :: "_ => 'b::t1_space"
  shows "continuous_on s f ==> closed s ==> closed {x ∈ s. f x = a}"
  using continuous_closed_preimage[of s f "{a}"] by auto

lemma continuous_constant_on_closure:
  fixes f :: "_ => 'b::t1_space"
  assumes "continuous_on (closure s) f"
    and "∀x ∈ s. f x = a"
  shows "∀x ∈ (closure s). f x = a"
  using continuous_closed_preimage_constant[of "closure s" f a]
    assms closure_minimal[of s "{x ∈ closure s. f x = a}"] closure_subset
  unfolding subset_eq
  by auto

lemma image_closure_subset:
  assumes "continuous_on (closure s) f"
    and "closed t"
    and "(f ` s) ⊆ t"
  shows "f ` (closure s) ⊆ t"
proof -
  have "s ⊆ {x ∈ closure s. f x ∈ t}"
    using assms(3) closure_subset by auto
  moreover have "closed {x ∈ closure s. f x ∈ t}"
    using continuous_closed_preimage[OF assms(1)] and assms(2) by auto
  ultimately have "closure s = {x ∈ closure s . f x ∈ t}"
    using closure_minimal[of s "{x ∈ closure s. f x ∈ t}"] by auto
  then show ?thesis by auto
qed

lemma continuous_on_closure_norm_le:
  fixes f :: "'a::metric_space => 'b::real_normed_vector"
  assumes "continuous_on (closure s) f"
    and "∀y ∈ s. norm(f y) ≤ b"
    and "x ∈ (closure s)"
  shows "norm (f x) ≤ b"
proof -
  have *: "f ` s ⊆ cball 0 b"
    using assms(2)[unfolded mem_cball_0[symmetric]] by auto
  show ?thesis
    using image_closure_subset[OF assms(1) closed_cball[of 0 b] *] assms(3)
    unfolding subset_eq
    apply (erule_tac x="f x" in ballE)
    apply (auto simp add: dist_norm)
    done
qed

text {* Making a continuous function avoid some value in a neighbourhood. *}

lemma continuous_within_avoid:
  fixes f :: "'a::metric_space => 'b::t1_space"
  assumes "continuous (at x within s) f"
    and "f x ≠ a"
  shows "∃e>0. ∀y ∈ s. dist x y < e --> f y ≠ a"
proof -
  obtain U where "open U" and "f x ∈ U" and "a ∉ U"
    using t1_space [OF `f x ≠ a`] by fast
  have "(f ---> f x) (at x within s)"
    using assms(1) by (simp add: continuous_within)
  then have "eventually (λy. f y ∈ U) (at x within s)"
    using `open U` and `f x ∈ U`
    unfolding tendsto_def by fast
  then have "eventually (λy. f y ≠ a) (at x within s)"
    using `a ∉ U` by (fast elim: eventually_mono [rotated])
  then show ?thesis
    using `f x ≠ a` by (auto simp: dist_commute zero_less_dist_iff eventually_at)
qed

lemma continuous_at_avoid:
  fixes f :: "'a::metric_space => 'b::t1_space"
  assumes "continuous (at x) f"
    and "f x ≠ a"
  shows "∃e>0. ∀y. dist x y < e --> f y ≠ a"
  using assms continuous_within_avoid[of x UNIV f a] by simp

lemma continuous_on_avoid:
  fixes f :: "'a::metric_space => 'b::t1_space"
  assumes "continuous_on s f"
    and "x ∈ s"
    and "f x ≠ a"
  shows "∃e>0. ∀y ∈ s. dist x y < e --> f y ≠ a"
  using assms(1)[unfolded continuous_on_eq_continuous_within, THEN bspec[where x=x],
    OF assms(2)] continuous_within_avoid[of x s f a]
  using assms(3) by auto

lemma continuous_on_open_avoid:
  fixes f :: "'a::metric_space => 'b::t1_space"
  assumes "continuous_on s f"
    and "open s"
    and "x ∈ s"
    and "f x ≠ a"
  shows "∃e>0. ∀y. dist x y < e --> f y ≠ a"
  using assms(1)[unfolded continuous_on_eq_continuous_at[OF assms(2)], THEN bspec[where x=x], OF assms(3)]
  using continuous_at_avoid[of x f a] assms(4) by auto

text {* Proving a function is constant by proving open-ness of level set. *}

lemma continuous_levelset_open_in_cases:
  fixes f :: "_ => 'b::t1_space"
  shows "connected s ==> continuous_on s f ==>
    openin (subtopology euclidean s) {x ∈ s. f x = a} ==>
    (∀x ∈ s. f x ≠ a) ∨ (∀x ∈ s. f x = a)"
  unfolding connected_clopen
  using continuous_closed_in_preimage_constant by auto

lemma continuous_levelset_open_in:
  fixes f :: "_ => 'b::t1_space"
  shows "connected s ==> continuous_on s f ==>
    openin (subtopology euclidean s) {x ∈ s. f x = a} ==>
    (∃x ∈ s. f x = a) ==> (∀x ∈ s. f x = a)"
  using continuous_levelset_open_in_cases[of s f ]
  by meson

lemma continuous_levelset_open:
  fixes f :: "_ => 'b::t1_space"
  assumes "connected s"
    and "continuous_on s f"
    and "open {x ∈ s. f x = a}"
    and "∃x ∈ s.  f x = a"
  shows "∀x ∈ s. f x = a"
  using continuous_levelset_open_in[OF assms(1,2), of a, unfolded openin_open]
  using assms (3,4) by fast
x - a" s] using assms by autoqedlemma open_affinity: fixes s :: "'a::real_normed_vector set" assumes "open s" "c ≠ 0" shows "open ((λx. a + c *⇩R x) s)"proof - have *: "(λx. a + c *⇩R x) = (λx. a + x) o (λx. c *⇩R x)" unfolding o_def .. have "op + a op *⇩R c s = (op + a o op *⇩R c) s" by auto then show ?thesis using assms open_translation[of "op *⇩R c s" a] unfolding * by autoqedlemma interior_translation: fixes s :: "'a::real_normed_vector set" shows "interior ((λx. a + x) s) = (λx. a + x) (interior s)"proof (rule set_eqI, rule) fix x assume "x ∈ interior (op + a s)" then obtain e where "e > 0" and e: "ball x e ⊆ op + a s" unfolding mem_interior by auto then have "ball (x - a) e ⊆ s" unfolding subset_eq Ball_def mem_ball dist_norm apply auto apply (erule_tac x="a + xa" in allE) unfolding ab_group_add_class.diff_diff_eq[symmetric] apply auto done then show "x ∈ op + a interior s" unfolding image_iff apply (rule_tac x="x - a" in bexI) unfolding mem_interior using e > 0 apply auto donenext fix x assume "x ∈ op + a interior s" then obtain y e where "e > 0" and e: "ball y e ⊆ s" and y: "x = a + y" unfolding image_iff Bex_def mem_interior by auto { fix z have *: "a + y - z = y + a - z" by auto assume "z ∈ ball x e" then have "z - a ∈ s" using e[unfolded subset_eq, THEN bspec[where x="z - a"]] unfolding mem_ball dist_norm y group_add_class.diff_diff_eq2 * by auto then have "z ∈ op + a s" unfolding image_iff by (auto intro!: bexI[where x="z - a"]) } then have "ball x e ⊆ op + a s" unfolding subset_eq by auto then show "x ∈ interior (op + a s)" unfolding mem_interior using e > 0 by autoqedtext {* Topological properties of linear functions. *}lemma linear_lim_0: assumes "bounded_linear f" shows "(f ---> 0) (at (0))"proof - interpret f: bounded_linear f by fact have "(f ---> f 0) (at 0)" using tendsto_ident_at by (rule f.tendsto) then show ?thesis unfolding f.zero .qedlemma linear_continuous_at: assumes "bounded_linear f" shows "continuous (at a) f" unfolding continuous_at using assms apply (rule bounded_linear.tendsto) apply (rule tendsto_ident_at) donelemma linear_continuous_within: "bounded_linear f ==> continuous (at x within s) f" using continuous_at_imp_continuous_within[of x f s] using linear_continuous_at[of f] by autolemma linear_continuous_on: "bounded_linear f ==> continuous_on s f" using continuous_at_imp_continuous_on[of s f] using linear_continuous_at[of f] by autotext {* Also bilinear functions, in composition form. *}lemma bilinear_continuous_at_compose: "continuous (at x) f ==> continuous (at x) g ==> bounded_bilinear h ==> continuous (at x) (λx. h (f x) (g x))" unfolding continuous_at using Lim_bilinear[of f "f x" "(at x)" g "g x" h] by autolemma bilinear_continuous_within_compose: "continuous (at x within s) f ==> continuous (at x within s) g ==> bounded_bilinear h ==> continuous (at x within s) (λx. h (f x) (g x))" unfolding continuous_within using Lim_bilinear[of f "f x"] by autolemma bilinear_continuous_on_compose: "continuous_on s f ==> continuous_on s g ==> bounded_bilinear h ==> continuous_on s (λx. h (f x) (g x))" unfolding continuous_on_def by (fast elim: bounded_bilinear.tendsto)text {* Preservation of compactness and connectedness under continuous function. *}lemma compact_eq_openin_cover: "compact S <-> (∀C. (∀c∈C. openin (subtopology euclidean S) c) ∧ S ⊆ \<Union>C --> (∃D⊆C. finite D ∧ S ⊆ \<Union>D))"proof safe fix C assume "compact S" and "∀c∈C. openin (subtopology euclidean S) c" and "S ⊆ \<Union>C" then have "∀c∈{T. open T ∧ S ∩ T ∈ C}. 
open c" and "S ⊆ \<Union>{T. open T ∧ S ∩ T ∈ C}" unfolding openin_open by force+ with compact S obtain D where "D ⊆ {T. open T ∧ S ∩ T ∈ C}" and "finite D" and "S ⊆ \<Union>D" by (rule compactE) then have "image (λT. S ∩ T) D ⊆ C ∧ finite (image (λT. S ∩ T) D) ∧ S ⊆ \<Union>(image (λT. S ∩ T) D)" by auto then show "∃D⊆C. finite D ∧ S ⊆ \<Union>D" ..next assume 1: "∀C. (∀c∈C. openin (subtopology euclidean S) c) ∧ S ⊆ \<Union>C --> (∃D⊆C. finite D ∧ S ⊆ \<Union>D)" show "compact S" proof (rule compactI) fix C let ?C = "image (λT. S ∩ T) C" assume "∀t∈C. open t" and "S ⊆ \<Union>C" then have "(∀c∈?C. openin (subtopology euclidean S) c) ∧ S ⊆ \<Union>?C" unfolding openin_open by auto with 1 obtain D where "D ⊆ ?C" and "finite D" and "S ⊆ \<Union>D" by metis let ?D = "inv_into C (λT. S ∩ T) D" have "?D ⊆ C ∧ finite ?D ∧ S ⊆ \<Union>?D" proof (intro conjI) from D ⊆ ?C show "?D ⊆ C" by (fast intro: inv_into_into) from finite D show "finite ?D" by (rule finite_imageI) from S ⊆ \<Union>D show "S ⊆ \<Union>?D" apply (rule subset_trans) apply clarsimp apply (frule subsetD [OF D ⊆ ?C, THEN f_inv_into_f]) apply (erule rev_bexI, fast) done qed then show "∃D⊆C. finite D ∧ S ⊆ \<Union>D" .. qedqedlemma connected_continuous_image: assumes "continuous_on s f" and "connected s" shows "connected(f s)"proof - { fix T assume as: "T ≠ {}" "T ≠ f s" "openin (subtopology euclidean (f s)) T" "closedin (subtopology euclidean (f s)) T" have "{x ∈ s. f x ∈ T} = {} ∨ {x ∈ s. f x ∈ T} = s" using assms(1)[unfolded continuous_on_open, THEN spec[where x=T]] using assms(1)[unfolded continuous_on_closed, THEN spec[where x=T]] using assms(2)[unfolded connected_clopen, THEN spec[where x="{x ∈ s. f x ∈ T}"]] as(3,4) by auto then have False using as(1,2) using as(4)[unfolded closedin_def topspace_euclidean_subtopology] by auto } then show ?thesis unfolding connected_clopen by autoqedtext {* Continuity implies uniform continuity on a compact domain. *}lemma compact_uniformly_continuous: assumes f: "continuous_on s f" and s: "compact s" shows "uniformly_continuous_on s f" unfolding uniformly_continuous_on_defproof (cases, safe) fix e :: real assume "0 < e" "s ≠ {}" def [simp]: R ≡ "{(y, d). y ∈ s ∧ 0 < d ∧ ball y d ∩ s ⊆ {x ∈ s. f x ∈ ball (f y) (e/2) } }" let ?b = "(λ(y, d). ball y (d/2))" have "(∀r∈R. open (?b r))" "s ⊆ (\<Union>r∈R. ?b r)" proof safe fix y assume "y ∈ s" from continuous_open_in_preimage[OF f open_ball] obtain T where "open T" and T: "{x ∈ s. f x ∈ ball (f y) (e/2)} = T ∩ s" unfolding openin_subtopology open_openin by metis then obtain d where "ball y d ⊆ T" "0 < d" using 0 < e y ∈ s by (auto elim!: openE) with T y ∈ s show "y ∈ (\<Union>r∈R. ?b r)" by (intro UN_I[of "(y, d)"]) auto qed auto with s obtain D where D: "finite D" "D ⊆ R" "s ⊆ (\<Union>(y, d)∈D. ball y (d/2))" by (rule compactE_image) with s ≠ {} have [simp]: "!!x. x < Min (snd D) <-> (∀(y, d)∈D. x < d)" by (subst Min_gr_iff) auto show "∃d>0. ∀x∈s. ∀x'∈s. dist x' x < d --> dist (f x') (f x) < e" proof (rule, safe) fix x x' assume in_s: "x' ∈ s" "x ∈ s" with D obtain y d where x: "x ∈ ball y (d/2)" "(y, d) ∈ D" by blast moreover assume "dist x x' < Min (sndD) / 2" ultimately have "dist y x' < d" by (intro dist_double[where x=x and d=d]) (auto simp: dist_commute) with D x in_s show "dist (f x) (f x') < e" by (intro dist_double[where x="f y" and d=e]) (auto simp: dist_commute subset_eq) qed (insert D, auto)qed autotext {* A uniformly convergent limit of continuous functions is continuous. 
text {* A uniformly convergent limit of continuous functions is continuous. *}

lemma continuous_uniform_limit:
  fixes f :: "'a => 'b::metric_space => 'c::metric_space"
  assumes "¬ trivial_limit F"
    and "eventually (λn. continuous_on s (f n)) F"
    and "∀e>0. eventually (λn. ∀x∈s. dist (f n x) (g x) < e) F"
  shows "continuous_on s g"
proof -
  { fix x and e :: real
    assume "x∈s" "e>0"
    have "eventually (λn. ∀x∈s. dist (f n x) (g x) < e / 3) F"
      using `e>0` assms(3)[THEN spec[where x="e/3"]] by auto
    from eventually_happens [OF eventually_conj [OF this assms(2)]]
    obtain n where n:"∀x∈s. dist (f n x) (g x) < e / 3"  "continuous_on s (f n)"
      using assms(1) by blast
    have "e / 3 > 0" using `e>0` by auto
    then obtain d where "d>0" and d:"∀x'∈s. dist x' x < d --> dist (f n x') (f n x) < e / 3"
      using n(2)[unfolded continuous_on_iff, THEN bspec[where x=x],
        OF `x∈s`, THEN spec[where x="e/3"]] by blast
    { fix y assume "y ∈ s" and "dist y x < d"
      then have "dist (f n y) (f n x) < e / 3"
        by (rule d [rule_format])
      then have "dist (f n y) (g x) < 2 * e / 3"
        using dist_triangle [of "f n y" "g x" "f n x"]
        using n(1)[THEN bspec[where x=x], OF `x∈s`]
        by auto
      then have "dist (g y) (g x) < e"
        using n(1)[THEN bspec[where x=y], OF `y∈s`]
        using dist_triangle3 [of "g y" "g x" "f n y"]
        by auto }
    then have "∃d>0. ∀x'∈s. dist x' x < d --> dist (g x') (g x) < e"
      using `d>0` by auto }
  then show ?thesis
    unfolding continuous_on_iff by auto
qed

subsection {* Topological stuff lifted from and dropped to R *}

lemma open_real:
  fixes s :: "real set"
  shows "open s <-> (∀x ∈ s. ∃e>0. ∀x'. abs(x' - x) < e --> x' ∈ s)"
  unfolding open_dist dist_norm by simp

lemma islimpt_approachable_real:
  fixes s :: "real set"
  shows "x islimpt s <-> (∀e>0. ∃x'∈ s. x' ≠ x ∧ abs(x' - x) < e)"
  unfolding islimpt_approachable dist_norm by simp

lemma closed_real:
  fixes s :: "real set"
  shows "closed s <-> (∀x. (∀e>0. ∃x' ∈ s. x' ≠ x ∧ abs(x' - x) < e) --> x ∈ s)"
  unfolding closed_limpt islimpt_approachable dist_norm by simp

lemma continuous_at_real_range:
  fixes f :: "'a::real_normed_vector => real"
  shows "continuous (at x) f <-> (∀e>0. ∃d>0. ∀x'. norm(x' - x) < d --> abs(f x' - f x) < e)"
  unfolding continuous_at
  unfolding Lim_at
  unfolding dist_nz[symmetric]
  unfolding dist_norm
  apply auto
  apply (erule_tac x=e in allE)
  apply auto
  apply (rule_tac x=d in exI)
  apply auto
  apply (erule_tac x=x' in allE)
  apply auto
  apply (erule_tac x=e in allE)
  apply auto
  done

lemma continuous_on_real_range:
  fixes f :: "'a::real_normed_vector => real"
  shows "continuous_on s f <->
    (∀x ∈ s. ∀e>0. ∃d>0. (∀x' ∈ s. norm(x' - x) < d --> abs(f x' - f x) < e))"
  unfolding continuous_on_iff dist_norm by simp

text {* Hence some handy theorems on distance, diameter etc. of/from a set. *}

lemma distance_attains_sup:
  assumes "compact s" "s ≠ {}"
  shows "∃x∈s. ∀y∈s. dist a y ≤ dist a x"
proof (rule continuous_attains_sup [OF assms])
  { fix x assume "x∈s"
    have "(dist a ---> dist a x) (at x within s)"
      by (intro tendsto_dist tendsto_const tendsto_ident_at) }
  then show "continuous_on s (dist a)"
    unfolding continuous_on ..
qed

text {* For \emph{minimal} distance, we only need closure, not compactness. *}

lemma distance_attains_inf:
  fixes a :: "'a::heine_borel"
  assumes "closed s"
    and "s ≠ {}"
  shows "∃x∈s. ∀y∈s. dist a x ≤ dist a y"
proof -
  from assms(2) obtain b where "b ∈ s" by auto
  let ?B = "s ∩ cball a (dist b a)"
  have "?B ≠ {}" using `b ∈ s`
    by (auto simp add: dist_commute)
  moreover have "continuous_on ?B (dist a)"
    by (auto intro!: continuous_at_imp_continuous_on continuous_dist continuous_at_id continuous_const)
  moreover have "compact ?B"
    by (intro closed_inter_compact `closed s` compact_cball)
  ultimately obtain x where "x ∈ ?B" "∀y∈?B. dist a x ≤ dist a y"
    by (metis continuous_attains_inf)
  then show ?thesis by fastforce
qed

subsection {* Pasted sets *}

lemma bounded_Times:
  assumes "bounded s" "bounded t"
  shows "bounded (s × t)"
proof -
  obtain x y a b where "∀z∈s. dist x z ≤ a" "∀z∈t. dist y z ≤ b"
    using assms [unfolded bounded_def] by auto
  then have "∀z∈s × t. dist (x, y) z ≤ sqrt (a⇧2 + b⇧2)"
    by (auto simp add: dist_Pair_Pair real_sqrt_le_mono add_mono power_mono)
  then show ?thesis
    unfolding bounded_any_center [where a="(x, y)"] by auto
qed

lemma mem_Times_iff: "x ∈ A × B <-> fst x ∈ A ∧ snd x ∈ B"
  by (induct x) simp

lemma seq_compact_Times: "seq_compact s ==> seq_compact t ==> seq_compact (s × t)"
  unfolding seq_compact_def
  apply clarify
  apply (drule_tac x="fst o f" in spec)
  apply (drule mp, simp add: mem_Times_iff)
  apply (clarify, rename_tac l1 r1)
  apply (drule_tac x="snd o f o r1" in spec)
  apply (drule mp, simp add: mem_Times_iff)
  apply (clarify, rename_tac l2 r2)
  apply (rule_tac x="(l1, l2)" in rev_bexI, simp)
  apply (rule_tac x="r1 o r2" in exI)
  apply (rule conjI, simp add: subseq_def)
  apply (drule_tac f=r2 in LIMSEQ_subseq_LIMSEQ, assumption)
  apply (drule (1) tendsto_Pair) back
  apply (simp add: o_def)
  done

lemma compact_Times:
  assumes "compact s" "compact t"
  shows "compact (s × t)"
proof (rule compactI)
  fix C
  assume C: "∀t∈C. open t" "s × t ⊆ \<Union>C"
  have "∀x∈s. ∃a. open a ∧ x ∈ a ∧ (∃d⊆C. finite d ∧ a × t ⊆ \<Union>d)"
  proof
    fix x
    assume "x ∈ s"
    have "∀y∈t. ∃a b c. c ∈ C ∧ open a ∧ open b ∧ x ∈ a ∧ y ∈ b ∧ a × b ⊆ c" (is "∀y∈t. ?P y")
    proof
      fix y
      assume "y ∈ t"
      with `x ∈ s` C obtain c where "c ∈ C" "(x, y) ∈ c" "open c" by auto
      then show "?P y" by (auto elim!: open_prod_elim)
    qed
    then obtain a b c where b: "!!y. y ∈ t ==> open (b y)"
      and c: "!!y. y ∈ t ==> c y ∈ C ∧ open (a y) ∧ open (b y) ∧ x ∈ a y ∧ y ∈ b y ∧ a y × b y ⊆ c y"
      by metis
    then have "∀y∈t. open (b y)" "t ⊆ (\<Union>y∈t. b y)" by auto
    from compactE_image[OF `compact t` this] obtain D where D: "D ⊆ t" "finite D" "t ⊆ (\<Union>y∈D. b y)"
      by auto
    moreover from D c have "(\<Inter>y∈D. a y) × t ⊆ (\<Union>y∈D. c y)"
      by (fastforce simp: subset_eq)
    ultimately show "∃a. open a ∧ x ∈ a ∧ (∃d⊆C. finite d ∧ a × t ⊆ \<Union>d)"
      using c by (intro exI[of _ "c ` D"] exI[of _ "\<Inter>(a ` D)"] conjI) (auto intro!: open_INT)
  qed
  then obtain a d where a: "∀x∈s. open (a x)" "s ⊆ (\<Union>x∈s. a x)"
    and d: "!!x. x ∈ s ==> d x ⊆ C ∧ finite (d x) ∧ a x × t ⊆ \<Union>d x"
    unfolding subset_eq UN_iff by metis
  moreover
  from compactE_image[OF `compact s` a]
  obtain e where e: "e ⊆ s" "finite e" and s: "s ⊆ (\<Union>x∈e. a x)"
    by auto
  moreover
  { from s have "s × t ⊆ (\<Union>x∈e. a x × t)"
      by auto
    also have "… ⊆ (\<Union>x∈e. \<Union>d x)"
      using d `e ⊆ s` by (intro UN_mono) auto
    finally have "s × t ⊆ (\<Union>x∈e. \<Union>d x)" . }
  ultimately show "∃C'⊆C. finite C' ∧ s × t ⊆ \<Union>C'"
    by (intro exI[of _ "(\<Union>x∈e. d x)"]) (auto simp add: subset_eq)
qed
scaleR c x" have *: "bounded_linear ?f" by (rule bounded_linear_scaleR_right) show ?thesis using compact_continuous_image[of s ?f] continuous_at_imp_continuous_on[of s ?f] using linear_continuous_at[OF *] assms by autoqedlemma compact_negations: fixes s :: "'a::real_normed_vector set" assumes "compact s" shows "compact ((λx. - x) s)" using compact_scaling [OF assms, of "- 1"] by autolemma compact_sums: fixes s t :: "'a::real_normed_vector set" assumes "compact s" and "compact t" shows "compact {x + y | x y. x ∈ s ∧ y ∈ t}"proof - have *: "{x + y | x y. x ∈ s ∧ y ∈ t} = (λz. fst z + snd z) (s × t)" apply auto unfolding image_iff apply (rule_tac x="(xa, y)" in bexI) apply auto done have "continuous_on (s × t) (λz. fst z + snd z)" unfolding continuous_on by (rule ballI) (intro tendsto_intros) then show ?thesis unfolding * using compact_continuous_image compact_Times [OF assms] by autoqedlemma compact_differences: fixes s t :: "'a::real_normed_vector set" assumes "compact s" and "compact t" shows "compact {x - y | x y. x ∈ s ∧ y ∈ t}"proof- have "{x - y | x y. x∈s ∧ y ∈ t} = {x + y | x y. x ∈ s ∧ y ∈ (uminus t)}" apply auto apply (rule_tac x= xa in exI) apply auto apply (rule_tac x=xa in exI) apply auto done then show ?thesis using compact_sums[OF assms(1) compact_negations[OF assms(2)]] by autoqedlemma compact_translation: fixes s :: "'a::real_normed_vector set" assumes "compact s" shows "compact ((λx. a + x) s)"proof - have "{x + y |x y. x ∈ s ∧ y ∈ {a}} = (λx. a + x) s" by auto then show ?thesis using compact_sums[OF assms compact_sing[of a]] by autoqedlemma compact_affinity: fixes s :: "'a::real_normed_vector set" assumes "compact s" shows "compact ((λx. a + c *⇩R x) s)"proof - have "op + a op *⇩R c s = (λx. a + c *⇩R x) s" by auto then show ?thesis using compact_translation[OF compact_scaling[OF assms], of a c] by autoqedtext {* Hence we get the following. *}lemma compact_sup_maxdistance: fixes s :: "'a::metric_space set" assumes "compact s" and "s ≠ {}" shows "∃x∈s. ∃y∈s. ∀u∈s. ∀v∈s. dist u v ≤ dist x y"proof - have "compact (s × s)" using compact s by (intro compact_Times) moreover have "s × s ≠ {}" using s ≠ {} by auto moreover have "continuous_on (s × s) (λx. dist (fst x) (snd x))" by (intro continuous_at_imp_continuous_on ballI continuous_intros) ultimately show ?thesis using continuous_attains_sup[of "s × s" "λx. dist (fst x) (snd x)"] by autoqedtext {* We can state this in terms of diameter of a set. *}definition "diameter s = (if s = {} then 0::real else Sup {dist x y | x y. x ∈ s ∧ y ∈ s})"lemma diameter_bounded_bound: fixes s :: "'a :: metric_space set" assumes s: "bounded s" "x ∈ s" "y ∈ s" shows "dist x y ≤ diameter s"proof - let ?D = "{dist x y |x y. x ∈ s ∧ y ∈ s}" from s obtain z d where z: "!!x. x ∈ s ==> dist z x ≤ d" unfolding bounded_def by auto have "dist x y ≤ Sup ?D" proof (rule cSup_upper, safe) fix a b assume "a ∈ s" "b ∈ s" with z[of a] z[of b] dist_triangle[of a b z] show "dist a b ≤ 2 * d" by (simp add: dist_commute) qed (insert s, auto) with x ∈ s show ?thesis by (auto simp add: diameter_def)qedlemma diameter_lower_bounded: fixes s :: "'a :: metric_space set" assumes s: "bounded s" and d: "0 < d" "d < diameter s" shows "∃x∈s. ∃y∈s. d < dist x y"proof (rule ccontr) let ?D = "{dist x y |x y. 
x ∈ s ∧ y ∈ s}" assume contr: "¬ ?thesis" moreover from d have "s ≠ {}" by (auto simp: diameter_def) then have "?D ≠ {}" by auto ultimately have "Sup ?D ≤ d" by (intro cSup_least) (auto simp: not_less) with d < diameter s s ≠ {} show False by (auto simp: diameter_def)qedlemma diameter_bounded: assumes "bounded s" shows "∀x∈s. ∀y∈s. dist x y ≤ diameter s" and "∀d>0. d < diameter s --> (∃x∈s. ∃y∈s. dist x y > d)" using diameter_bounded_bound[of s] diameter_lower_bounded[of s] assms by autolemma diameter_compact_attained: assumes "compact s" and "s ≠ {}" shows "∃x∈s. ∃y∈s. dist x y = diameter s"proof - have b: "bounded s" using assms(1) by (rule compact_imp_bounded) then obtain x y where xys: "x∈s" "y∈s" and xy: "∀u∈s. ∀v∈s. dist u v ≤ dist x y" using compact_sup_maxdistance[OF assms] by auto then have "diameter s ≤ dist x y" unfolding diameter_def apply clarsimp apply (rule cSup_least) apply fast+ done then show ?thesis by (metis b diameter_bounded_bound order_antisym xys)qedtext {* Related results with closure as the conclusion. *}lemma closed_scaling: fixes s :: "'a::real_normed_vector set" assumes "closed s" shows "closed ((λx. c *⇩R x) s)"proof (cases "c = 0") case True then show ?thesis by (auto simp add: image_constant_conv)next case False from assms have "closed ((λx. inverse c *⇩R x) - s)" by (simp add: continuous_closed_vimage) also have "(λx. inverse c *⇩R x) - s = (λx. c *⇩R x) s" using c ≠ 0 by (auto elim: image_eqI [rotated]) finally show ?thesis .qedlemma closed_negations: fixes s :: "'a::real_normed_vector set" assumes "closed s" shows "closed ((λx. -x) s)" using closed_scaling[OF assms, of "- 1"] by simplemma compact_closed_sums: fixes s :: "'a::real_normed_vector set" assumes "compact s" and "closed t" shows "closed {x + y | x y. x ∈ s ∧ y ∈ t}"proof - let ?S = "{x + y |x y. x ∈ s ∧ y ∈ t}" { fix x l assume as: "∀n. x n ∈ ?S" "(x ---> l) sequentially" from as(1) obtain f where f: "∀n. x n = fst (f n) + snd (f n)" "∀n. fst (f n) ∈ s" "∀n. snd (f n) ∈ t" using choice[of "λn y. x n = (fst y) + (snd y) ∧ fst y ∈ s ∧ snd y ∈ t"] by auto obtain l' r where "l'∈s" and r: "subseq r" and lr: "(((λn. fst (f n)) o r) ---> l') sequentially" using assms(1)[unfolded compact_def, THEN spec[where x="λ n. fst (f n)"]] using f(2) by auto have "((λn. snd (f (r n))) ---> l - l') sequentially" using tendsto_diff[OF LIMSEQ_subseq_LIMSEQ[OF as(2) r] lr] and f(1) unfolding o_def by auto then have "l - l' ∈ t" using assms(2)[unfolded closed_sequential_limits, THEN spec[where x="λ n. snd (f (r n))"], THEN spec[where x="l - l'"]] using f(3) by auto then have "l ∈ ?S" using l' ∈ s apply auto apply (rule_tac x=l' in exI) apply (rule_tac x="l - l'" in exI) apply auto done } then show ?thesis unfolding closed_sequential_limits by fastqedlemma closed_compact_sums: fixes s t :: "'a::real_normed_vector set" assumes "closed s" and "compact t" shows "closed {x + y | x y. x ∈ s ∧ y ∈ t}"proof - have "{x + y |x y. x ∈ t ∧ y ∈ s} = {x + y |x y. x ∈ s ∧ y ∈ t}" apply auto apply (rule_tac x=y in exI) apply auto apply (rule_tac x=y in exI) apply auto done then show ?thesis using compact_closed_sums[OF assms(2,1)] by simpqedlemma compact_closed_differences: fixes s t :: "'a::real_normed_vector set" assumes "compact s" and "closed t" shows "closed {x - y | x y. x ∈ s ∧ y ∈ t}"proof - have "{x + y |x y. x ∈ s ∧ y ∈ uminus t} = {x - y |x y. 
x ∈ s ∧ y ∈ t}" apply auto apply (rule_tac x=xa in exI) apply auto apply (rule_tac x=xa in exI) apply auto done then show ?thesis using compact_closed_sums[OF assms(1) closed_negations[OF assms(2)]] by autoqedlemma closed_compact_differences: fixes s t :: "'a::real_normed_vector set" assumes "closed s" and "compact t" shows "closed {x - y | x y. x ∈ s ∧ y ∈ t}"proof - have "{x + y |x y. x ∈ s ∧ y ∈ uminus t} = {x - y |x y. x ∈ s ∧ y ∈ t}" apply auto apply (rule_tac x=xa in exI) apply auto apply (rule_tac x=xa in exI) apply auto done then show ?thesis using closed_compact_sums[OF assms(1) compact_negations[OF assms(2)]] by simpqedlemma closed_translation: fixes a :: "'a::real_normed_vector" assumes "closed s" shows "closed ((λx. a + x) s)"proof - have "{a + y |y. y ∈ s} = (op + a s)" by auto then show ?thesis using compact_closed_sums[OF compact_sing[of a] assms] by autoqedlemma translation_Compl: fixes a :: "'a::ab_group_add" shows "(λx. a + x) (- t) = - ((λx. a + x) t)" apply (auto simp add: image_iff) apply (rule_tac x="x - a" in bexI) apply auto donelemma translation_UNIV: fixes a :: "'a::ab_group_add" shows "range (λx. a + x) = UNIV" apply (auto simp add: image_iff) apply (rule_tac x="x - a" in exI) apply auto donelemma translation_diff: fixes a :: "'a::ab_group_add" shows "(λx. a + x) (s - t) = ((λx. a + x) s) - ((λx. a + x) t)" by autolemma closure_translation: fixes a :: "'a::real_normed_vector" shows "closure ((λx. a + x) s) = (λx. a + x) (closure s)"proof - have *: "op + a (- s) = - op + a s" apply auto unfolding image_iff apply (rule_tac x="x - a" in bexI) apply auto done show ?thesis unfolding closure_interior translation_Compl using interior_translation[of a "- s"] unfolding * by autoqedlemma frontier_translation: fixes a :: "'a::real_normed_vector" shows "frontier((λx. a + x) s) = (λx. a + x) (frontier s)" unfolding frontier_def translation_diff interior_translation closure_translation by autosubsection {* Separation between points and sets *}lemma separate_point_closed: fixes s :: "'a::heine_borel set" assumes "closed s" and "a ∉ s" shows "∃d>0. ∀x∈s. d ≤ dist a x"proof (cases "s = {}") case True then show ?thesis by(auto intro!: exI[where x=1])next case False from assms obtain x where "x∈s" "∀y∈s. dist a x ≤ dist a y" using s ≠ {} distance_attains_inf [of s a] by blast with x∈s show ?thesis using dist_pos_lt[of a x] anda ∉ s by blastqedlemma separate_compact_closed: fixes s t :: "'a::heine_borel set" assumes "compact s" and t: "closed t" "s ∩ t = {}" shows "∃d>0. ∀x∈s. ∀y∈t. d ≤ dist x y"proof cases assume "s ≠ {} ∧ t ≠ {}" then have "s ≠ {}" "t ≠ {}" by auto let ?inf = "λx. infdist x t" have "continuous_on s ?inf" by (auto intro!: continuous_at_imp_continuous_on continuous_infdist continuous_at_id) then obtain x where x: "x ∈ s" "∀y∈s. ?inf x ≤ ?inf y" using continuous_attains_inf[OF compact s s ≠ {}] by auto then have "0 < ?inf x" using t t ≠ {} in_closed_iff_infdist_zero by (auto simp: less_le infdist_nonneg) moreover have "∀x'∈s. ∀y∈t. ?inf x ≤ dist x' y" using x by (auto intro: order_trans infdist_le) ultimately show ?thesis by autoqed (auto intro!: exI[of _ 1])lemma separate_closed_compact: fixes s t :: "'a::heine_borel set" assumes "closed s" and "compact t" and "s ∩ t = {}" shows "∃d>0. ∀x∈s. ∀y∈t. 
d ≤ dist x y"proof - have *: "t ∩ s = {}" using assms(3) by auto show ?thesis using separate_compact_closed[OF assms(2,1) *] apply auto apply (rule_tac x=d in exI) apply auto apply (erule_tac x=y in ballE) apply (auto simp add: dist_commute) doneqedsubsection {* Intervals *}lemma interval: fixes a :: "'a::ordered_euclidean_space" shows "{a <..< b} = {x::'a. ∀i∈Basis. a•i < x•i ∧ x•i < b•i}" and "{a .. b} = {x::'a. ∀i∈Basis. a•i ≤ x•i ∧ x•i ≤ b•i}" by (auto simp add:set_eq_iff eucl_le[where 'a='a] eucl_less[where 'a='a])lemma mem_interval: fixes a :: "'a::ordered_euclidean_space" shows "x ∈ {a<..<b} <-> (∀i∈Basis. a•i < x•i ∧ x•i < b•i)" and "x ∈ {a .. b} <-> (∀i∈Basis. a•i ≤ x•i ∧ x•i ≤ b•i)" using interval[of a b] by (auto simp add: set_eq_iff eucl_le[where 'a='a] eucl_less[where 'a='a])lemma interval_eq_empty: fixes a :: "'a::ordered_euclidean_space" shows "({a <..< b} = {} <-> (∃i∈Basis. b•i ≤ a•i))" (is ?th1) and "({a .. b} = {} <-> (∃i∈Basis. b•i < a•i))" (is ?th2)proof - { fix i x assume i: "i∈Basis" and as:"b•i ≤ a•i" and x:"x∈{a <..< b}" then have "a • i < x • i ∧ x • i < b • i" unfolding mem_interval by auto then have "a•i < b•i" by auto then have False using as by auto } moreover { assume as: "∀i∈Basis. ¬ (b•i ≤ a•i)" let ?x = "(1/2) *⇩R (a + b)" { fix i :: 'a assume i: "i ∈ Basis" have "a•i < b•i" using as[THEN bspec[where x=i]] i by auto then have "a•i < ((1/2) *⇩R (a+b)) • i" "((1/2) *⇩R (a+b)) • i < b•i" by (auto simp: inner_add_left) } then have "{a <..< b} ≠ {}" using mem_interval(1)[of "?x" a b] by auto } ultimately show ?th1 by blast { fix i x assume i: "i ∈ Basis" and as:"b•i < a•i" and x:"x∈{a .. b}" then have "a • i ≤ x • i ∧ x • i ≤ b • i" unfolding mem_interval by auto then have "a•i ≤ b•i" by auto then have False using as by auto } moreover { assume as:"∀i∈Basis. ¬ (b•i < a•i)" let ?x = "(1/2) *⇩R (a + b)" { fix i :: 'a assume i:"i ∈ Basis" have "a•i ≤ b•i" using as[THEN bspec[where x=i]] i by auto then have "a•i ≤ ((1/2) *⇩R (a+b)) • i" "((1/2) *⇩R (a+b)) • i ≤ b•i" by (auto simp: inner_add_left) } then have "{a .. b} ≠ {}" using mem_interval(2)[of "?x" a b] by auto } ultimately show ?th2 by blastqedlemma interval_ne_empty: fixes a :: "'a::ordered_euclidean_space" shows "{a .. b} ≠ {} <-> (∀i∈Basis. a•i ≤ b•i)" and "{a <..< b} ≠ {} <-> (∀i∈Basis. a•i < b•i)" unfolding interval_eq_empty[of a b] by fastforce+lemma interval_sing: fixes a :: "'a::ordered_euclidean_space" shows "{a .. a} = {a}" and "{a<..<a} = {}" unfolding set_eq_iff mem_interval eq_iff [symmetric] by (auto intro: euclidean_eqI simp: ex_in_conv)lemma subset_interval_imp: fixes a :: "'a::ordered_euclidean_space" shows "(∀i∈Basis. a•i ≤ c•i ∧ d•i ≤ b•i) ==> {c .. d} ⊆ {a .. b}" and "(∀i∈Basis. a•i < c•i ∧ d•i < b•i) ==> {c .. d} ⊆ {a<..<b}" and "(∀i∈Basis. a•i ≤ c•i ∧ d•i ≤ b•i) ==> {c<..<d} ⊆ {a .. b}" and "(∀i∈Basis. a•i ≤ c•i ∧ d•i ≤ b•i) ==> {c<..<d} ⊆ {a<..<b}" unfolding subset_eq[unfolded Ball_def] unfolding mem_interval by (best intro: order_trans less_le_trans le_less_trans less_imp_le)+lemma interval_open_subset_closed: fixes a :: "'a::ordered_euclidean_space" shows "{a<..<b} ⊆ {a .. b}" unfolding subset_eq [unfolded Ball_def] mem_interval by (fast intro: less_imp_le)lemma subset_interval: fixes a :: "'a::ordered_euclidean_space" shows "{c .. d} ⊆ {a .. b} <-> (∀i∈Basis. c•i ≤ d•i) --> (∀i∈Basis. a•i ≤ c•i ∧ d•i ≤ b•i)" (is ?th1) and "{c .. d} ⊆ {a<..<b} <-> (∀i∈Basis. c•i ≤ d•i) --> (∀i∈Basis. a•i < c•i ∧ d•i < b•i)" (is ?th2) and "{c<..<d} ⊆ {a .. b} <-> (∀i∈Basis. c•i < d•i) --> (∀i∈Basis. 
a•i ≤ c•i ∧ d•i ≤ b•i)" (is ?th3) and "{c<..<d} ⊆ {a<..<b} <-> (∀i∈Basis. c•i < d•i) --> (∀i∈Basis. a•i ≤ c•i ∧ d•i ≤ b•i)" (is ?th4)proof - show ?th1 unfolding subset_eq and Ball_def and mem_interval by (auto intro: order_trans) show ?th2 unfolding subset_eq and Ball_def and mem_interval by (auto intro: le_less_trans less_le_trans order_trans less_imp_le) { assume as: "{c<..<d} ⊆ {a .. b}" "∀i∈Basis. c•i < d•i" then have "{c<..<d} ≠ {}" unfolding interval_eq_empty by auto fix i :: 'a assume i: "i ∈ Basis" (** TODO combine the following two parts as done in the HOL_light version. **) { let ?x = "(∑j∈Basis. (if j=i then ((min (a•j) (d•j))+c•j)/2 else (c•j+d•j)/2) *⇩R j)::'a" assume as2: "a•i > c•i" { fix j :: 'a assume j: "j ∈ Basis" then have "c • j < ?x • j ∧ ?x • j < d • j" apply (cases "j = i") using as(2)[THEN bspec[where x=j]] i apply (auto simp add: as2) done } then have "?x∈{c<..<d}" using i unfolding mem_interval by auto moreover have "?x ∉ {a .. b}" unfolding mem_interval apply auto apply (rule_tac x=i in bexI) using as(2)[THEN bspec[where x=i]] and as2 i apply auto done ultimately have False using as by auto } then have "a•i ≤ c•i" by (rule ccontr) auto moreover { let ?x = "(∑j∈Basis. (if j=i then ((max (b•j) (c•j))+d•j)/2 else (c•j+d•j)/2) *⇩R j)::'a" assume as2: "b•i < d•i" { fix j :: 'a assume "j∈Basis" then have "d • j > ?x • j ∧ ?x • j > c • j" apply (cases "j = i") using as(2)[THEN bspec[where x=j]] apply (auto simp add: as2) done } then have "?x∈{c<..<d}" unfolding mem_interval by auto moreover have "?x∉{a .. b}" unfolding mem_interval apply auto apply (rule_tac x=i in bexI) using as(2)[THEN bspec[where x=i]] and as2 using i apply auto done ultimately have False using as by auto } then have "b•i ≥ d•i" by (rule ccontr) auto ultimately have "a•i ≤ c•i ∧ d•i ≤ b•i" by auto } note part1 = this show ?th3 unfolding subset_eq and Ball_def and mem_interval apply (rule, rule, rule, rule) apply (rule part1) unfolding subset_eq and Ball_def and mem_interval prefer 4 apply auto apply (erule_tac x=xa in allE, erule_tac x=xa in allE, fastforce)+ done { assume as: "{c<..<d} ⊆ {a<..<b}" "∀i∈Basis. c•i < d•i" fix i :: 'a assume i:"i∈Basis" from as(1) have "{c<..<d} ⊆ {a..b}" using interval_open_subset_closed[of a b] by auto then have "a•i ≤ c•i ∧ d•i ≤ b•i" using part1 and as(2) using i by auto } note * = this show ?th4 unfolding subset_eq and Ball_def and mem_interval apply (rule, rule, rule, rule) apply (rule *) unfolding subset_eq and Ball_def and mem_interval prefer 4 apply auto apply (erule_tac x=xa in allE, simp)+ doneqedlemma inter_interval: fixes a :: "'a::ordered_euclidean_space" shows "{a .. b} ∩ {c .. d} = {(∑i∈Basis. max (a•i) (c•i) *⇩R i) .. (∑i∈Basis. min (b•i) (d•i) *⇩R i)}" unfolding set_eq_iff and Int_iff and mem_interval by autolemma disjoint_interval: fixes a::"'a::ordered_euclidean_space" shows "{a .. b} ∩ {c .. d} = {} <-> (∃i∈Basis. (b•i < a•i ∨ d•i < c•i ∨ b•i < c•i ∨ d•i < a•i))" (is ?th1) and "{a .. b} ∩ {c<..<d} = {} <-> (∃i∈Basis. (b•i < a•i ∨ d•i ≤ c•i ∨ b•i ≤ c•i ∨ d•i ≤ a•i))" (is ?th2) and "{a<..<b} ∩ {c .. d} = {} <-> (∃i∈Basis. (b•i ≤ a•i ∨ d•i < c•i ∨ b•i ≤ c•i ∨ d•i ≤ a•i))" (is ?th3) and "{a<..<b} ∩ {c<..<d} = {} <-> (∃i∈Basis. (b•i ≤ a•i ∨ d•i ≤ c•i ∨ b•i ≤ c•i ∨ d•i ≤ a•i))" (is ?th4)proof - let ?z = "(∑i∈Basis. (((max (a•i) (c•i)) + (min (b•i) (d•i))) / 2) *⇩R i)::'a" have **: "!!P Q. (!!i :: 'a. i ∈ Basis ==> Q ?z i ==> P i) ==> (!!i x :: 'a. i ∈ Basis ==> P i ==> Q x i) ==> (∀x. ∃i∈Basis. Q x i) <-> (∃i∈Basis. 
P i)" by blast note * = set_eq_iff Int_iff empty_iff mem_interval ball_conj_distrib[symmetric] eq_False ball_simps(10) show ?th1 unfolding * by (intro **) auto show ?th2 unfolding * by (intro **) auto show ?th3 unfolding * by (intro **) auto show ?th4 unfolding * by (intro **) autoqed(* Moved interval_open_subset_closed a bit upwards *)lemma open_interval[intro]: fixes a b :: "'a::ordered_euclidean_space" shows "open {a<..<b}"proof - have "open (\<Inter>i∈Basis. (λx. x•i) - {a•i<..<b•i})" by (intro open_INT finite_lessThan ballI continuous_open_vimage allI linear_continuous_at open_real_greaterThanLessThan finite_Basis bounded_linear_inner_left) also have "(\<Inter>i∈Basis. (λx. x•i) - {a•i<..<b•i}) = {a<..<b}" by (auto simp add: eucl_less [where 'a='a]) finally show "open {a<..<b}" .qedlemma closed_interval[intro]: fixes a b :: "'a::ordered_euclidean_space" shows "closed {a .. b}"proof - have "closed (\<Inter>i∈Basis. (λx. x•i) - {a•i .. b•i})" by (intro closed_INT ballI continuous_closed_vimage allI linear_continuous_at closed_real_atLeastAtMost finite_Basis bounded_linear_inner_left) also have "(\<Inter>i∈Basis. (λx. x•i) - {a•i .. b•i}) = {a .. b}" by (auto simp add: eucl_le [where 'a='a]) finally show "closed {a .. b}" .qedlemma interior_closed_interval [intro]: fixes a b :: "'a::ordered_euclidean_space" shows "interior {a..b} = {a<..<b}" (is "?L = ?R")proof(rule subset_antisym) show "?R ⊆ ?L" using interval_open_subset_closed open_interval by (rule interior_maximal) { fix x assume "x ∈ interior {a..b}" then obtain s where s: "open s" "x ∈ s" "s ⊆ {a..b}" .. then obtain e where "e>0" and e:"∀x'. dist x' x < e --> x' ∈ {a..b}" unfolding open_dist and subset_eq by auto { fix i :: 'a assume i: "i ∈ Basis" have "dist (x - (e / 2) *⇩R i) x < e" and "dist (x + (e / 2) *⇩R i) x < e" unfolding dist_norm apply auto unfolding norm_minus_cancel using norm_Basis[OF i] e>0 apply auto done then have "a • i ≤ (x - (e / 2) *⇩R i) • i" and "(x + (e / 2) *⇩R i) • i ≤ b • i" using e[THEN spec[where x="x - (e/2) *⇩R i"]] and e[THEN spec[where x="x + (e/2) *⇩R i"]] unfolding mem_interval using i by blast+ then have "a • i < x • i" and "x • i < b • i" using e>0 i by (auto simp: inner_diff_left inner_Basis inner_add_left) } then have "x ∈ {a<..<b}" unfolding mem_interval by auto } then show "?L ⊆ ?R" ..qedlemma bounded_closed_interval: fixes a :: "'a::ordered_euclidean_space" shows "bounded {a .. b}"proof - let ?b = "∑i∈Basis. ¦a•i¦ + ¦b•i¦" { fix x :: "'a" assume x: "∀i∈Basis. a • i ≤ x • i ∧ x • i ≤ b • i" { fix i :: 'a assume "i ∈ Basis" then have "¦x•i¦ ≤ ¦a•i¦ + ¦b•i¦" using x[THEN bspec[where x=i]] by auto } then have "(∑i∈Basis. ¦x • i¦) ≤ ?b" apply - apply (rule setsum_mono) apply auto done then have "norm x ≤ ?b" using norm_le_l1[of x] by auto } then show ?thesis unfolding interval and bounded_iff by autoqedlemma bounded_interval: fixes a :: "'a::ordered_euclidean_space" shows "bounded {a .. b} ∧ bounded {a<..<b}" using bounded_closed_interval[of a b] using interval_open_subset_closed[of a b] using bounded_subset[of "{a..b}" "{a<..<b}"] by simplemma not_interval_univ: fixes a :: "'a::ordered_euclidean_space" shows "{a .. b} ≠ UNIV ∧ {a<..<b} ≠ UNIV" using bounded_interval[of a b] by autolemma compact_interval: fixes a :: "'a::ordered_euclidean_space" shows "compact {a .. 
b}" using bounded_closed_imp_seq_compact[of "{a..b}"] using bounded_interval[of a b] by (auto simp: compact_eq_seq_compact_metric)lemma open_interval_midpoint: fixes a :: "'a::ordered_euclidean_space" assumes "{a<..<b} ≠ {}" shows "((1/2) *⇩R (a + b)) ∈ {a<..<b}"proof - { fix i :: 'a assume "i ∈ Basis" then have "a • i < ((1 / 2) *⇩R (a + b)) • i ∧ ((1 / 2) *⇩R (a + b)) • i < b • i" using assms[unfolded interval_ne_empty, THEN bspec[where x=i]] by (auto simp: inner_add_left) } then show ?thesis unfolding mem_interval by autoqedlemma open_closed_interval_convex: fixes x :: "'a::ordered_euclidean_space" assumes x: "x ∈ {a<..<b}" and y: "y ∈ {a .. b}" and e: "0 < e" "e ≤ 1" shows "(e *⇩R x + (1 - e) *⇩R y) ∈ {a<..<b}"proof - { fix i :: 'a assume i: "i ∈ Basis" have "a • i = e * (a • i) + (1 - e) * (a • i)" unfolding left_diff_distrib by simp also have "… < e * (x • i) + (1 - e) * (y • i)" apply (rule add_less_le_mono) using e unfolding mult_less_cancel_left and mult_le_cancel_left apply simp_all using x unfolding mem_interval using i apply simp using y unfolding mem_interval using i apply simp done finally have "a • i < (e *⇩R x + (1 - e) *⇩R y) • i" unfolding inner_simps by auto moreover { have "b • i = e * (b•i) + (1 - e) * (b•i)" unfolding left_diff_distrib by simp also have "… > e * (x • i) + (1 - e) * (y • i)" apply (rule add_less_le_mono) using e unfolding mult_less_cancel_left and mult_le_cancel_left apply simp_all using x unfolding mem_interval using i apply simp using y unfolding mem_interval using i apply simp done finally have "(e *⇩R x + (1 - e) *⇩R y) • i < b • i" unfolding inner_simps by auto } ultimately have "a • i < (e *⇩R x + (1 - e) *⇩R y) • i ∧ (e *⇩R x + (1 - e) *⇩R y) • i < b • i" by auto } then show ?thesis unfolding mem_interval by autoqedlemma closure_open_interval: fixes a :: "'a::ordered_euclidean_space" assumes "{a<..<b} ≠ {}" shows "closure {a<..<b} = {a .. b}"proof - have ab: "a < b" using assms[unfolded interval_ne_empty] apply (subst eucl_less) apply auto done let ?c = "(1 / 2) *⇩R (a + b)" { fix x assume as:"x ∈ {a .. b}" def f ≡ "λn::nat. x + (inverse (real n + 1)) *⇩R (?c - x)" { fix n assume fn: "f n < b --> a < f n --> f n = x" and xc: "x ≠ ?c" have *: "0 < inverse (real n + 1)" "inverse (real n + 1) ≤ 1" unfolding inverse_le_1_iff by auto have "(inverse (real n + 1)) *⇩R ((1 / 2) *⇩R (a + b)) + (1 - inverse (real n + 1)) *⇩R x = x + (inverse (real n + 1)) *⇩R (((1 / 2) *⇩R (a + b)) - x)" by (auto simp add: algebra_simps) then have "f n < b" and "a < f n" using open_closed_interval_convex[OF open_interval_midpoint[OF assms] as *] unfolding f_def by auto then have False using fn unfolding f_def using xc by auto } moreover { assume "¬ (f ---> x) sequentially" { fix e :: real assume "e > 0" then have "∃N::nat. inverse (real (N + 1)) < e" using real_arch_inv[of e] apply (auto simp add: Suc_pred') apply (rule_tac x="n - 1" in exI) apply auto done then obtain N :: nat where "inverse (real (N + 1)) < e" by auto then have "∀n≥N. inverse (real n + 1) < e" apply auto apply (metis Suc_le_mono le_SucE less_imp_inverse_less nat_le_real_less order_less_trans real_of_nat_Suc real_of_nat_Suc_gt_zero) done then have "∃N::nat. ∀n≥N. inverse (real n + 1) < e" by auto } then have "((λn. inverse (real n + 1)) ---> 0) sequentially" unfolding LIMSEQ_def by(auto simp add: dist_norm) then have "(f ---> x) sequentially" unfolding f_def using tendsto_add[OF tendsto_const, of "λn::nat. 
(inverse (real n + 1)) *⇩R ((1 / 2) *⇩R (a + b) - x)" 0 sequentially x] using tendsto_scaleR [OF _ tendsto_const, of "λn::nat. inverse (real n + 1)" 0 sequentially "((1 / 2) *⇩R (a + b) - x)"] by auto } ultimately have "x ∈ closure {a<..<b}" using as and open_interval_midpoint[OF assms] unfolding closure_def unfolding islimpt_sequential by (cases "x=?c") auto } then show ?thesis using closure_minimal[OF interval_open_subset_closed closed_interval, of a b] by blastqedlemma bounded_subset_open_interval_symmetric: fixes s::"('a::ordered_euclidean_space) set" assumes "bounded s" shows "∃a. s ⊆ {-a<..<a}"proof - obtain b where "b>0" and b: "∀x∈s. norm x ≤ b" using assms[unfolded bounded_pos] by auto def a ≡ "(∑i∈Basis. (b + 1) *⇩R i)::'a" { fix x assume "x ∈ s" fix i :: 'a assume i: "i ∈ Basis" then have "(-a)•i < x•i" and "x•i < a•i" using b[THEN bspec[where x=x], OF x∈s] using Basis_le_norm[OF i, of x] unfolding inner_simps and a_def by auto } then show ?thesis by (auto intro: exI[where x=a] simp add: eucl_less[where 'a='a])qedlemma bounded_subset_open_interval: fixes s :: "('a::ordered_euclidean_space) set" shows "bounded s ==> (∃a b. s ⊆ {a<..<b})" by (auto dest!: bounded_subset_open_interval_symmetric)lemma bounded_subset_closed_interval_symmetric: fixes s :: "('a::ordered_euclidean_space) set" assumes "bounded s" shows "∃a. s ⊆ {-a .. a}"proof - obtain a where "s ⊆ {- a<..<a}" using bounded_subset_open_interval_symmetric[OF assms] by auto then show ?thesis using interval_open_subset_closed[of "-a" a] by autoqedlemma bounded_subset_closed_interval: fixes s :: "('a::ordered_euclidean_space) set" shows "bounded s ==> ∃a b. s ⊆ {a .. b}" using bounded_subset_closed_interval_symmetric[of s] by autolemma frontier_closed_interval: fixes a b :: "'a::ordered_euclidean_space" shows "frontier {a .. b} = {a .. b} - {a<..<b}" unfolding frontier_def unfolding interior_closed_interval and closure_closed[OF closed_interval] ..lemma frontier_open_interval: fixes a b :: "'a::ordered_euclidean_space" shows "frontier {a<..<b} = (if {a<..<b} = {} then {} else {a .. b} - {a<..<b})"proof (cases "{a<..<b} = {}") case True then show ?thesis using frontier_empty by autonext case False then show ?thesis unfolding frontier_def and closure_open_interval[OF False] and interior_open[OF open_interval] by autoqedlemma inter_interval_mixed_eq_empty: fixes a :: "'a::ordered_euclidean_space" assumes "{c<..<d} ≠ {}" shows "{a<..<b} ∩ {c .. d} = {} <-> {a<..<b} ∩ {c<..<d} = {}" unfolding closure_open_interval[OF assms, symmetric] unfolding open_inter_closure_eq_empty[OF open_interval] ..lemma open_box: "open (box a b)"proof - have "open (\<Inter>i∈Basis. (op • i) - {a • i <..< b • i})" by (auto intro!: continuous_open_vimage continuous_inner continuous_at_id continuous_const) also have "(\<Inter>i∈Basis. (op • i) - {a • i <..< b • i}) = box a b" by (auto simp add: box_def inner_commute) finally show ?thesis .qedinstance euclidean_space ⊆ second_countable_topologyproof def a ≡ "λf :: 'a => (real × real). ∑i∈Basis. fst (f i) *⇩R i" then have a: "!!f. (∑i∈Basis. fst (f i) *⇩R i) = a f" by simp def b ≡ "λf :: 'a => (real × real). ∑i∈Basis. snd (f i) *⇩R i" then have b: "!!f. (∑i∈Basis. snd (f i) *⇩R i) = b f" by simp def B ≡ "(λf. box (a f) (b f)) (Basis ->⇩E (\<rat> × \<rat>))" have "Ball B open" by (simp add: B_def open_box) moreover have "(∀A. open A --> (∃B'⊆B. \<Union>B' = A))" proof safe fix A::"'a set" assume "open A" show "∃B'⊆B. \<Union>B' = A" apply (rule exI[of _ "{b∈B. 
b ⊆ A}"]) apply (subst (3) open_UNION_box[OF open A]) apply (auto simp add: a b B_def) done qed ultimately have "topological_basis B" unfolding topological_basis_def by blast moreover have "countable B" unfolding B_def by (intro countable_image countable_PiE finite_Basis countable_SIGMA countable_rat) ultimately show "∃B::'a set set. countable B ∧ open = generate_topology B" by (blast intro: topological_basis_imp_subbasis)qedinstance euclidean_space ⊆ polish_space ..text {* Intervals in general, including infinite and mixtures of open and closed. *}definition "is_interval (s::('a::euclidean_space) set) <-> (∀a∈s. ∀b∈s. ∀x. (∀i∈Basis. ((a•i ≤ x•i ∧ x•i ≤ b•i) ∨ (b•i ≤ x•i ∧ x•i ≤ a•i))) --> x ∈ s)"lemma is_interval_interval: "is_interval {a .. b::'a::ordered_euclidean_space}" (is ?th1) "is_interval {a<..<b}" (is ?th2) proof - show ?th1 ?th2 unfolding is_interval_def mem_interval Ball_def atLeastAtMost_iff by(meson order_trans le_less_trans less_le_trans less_trans)+ qedlemma is_interval_empty: "is_interval {}" unfolding is_interval_def by simplemma is_interval_univ: "is_interval UNIV" unfolding is_interval_def by simpsubsection {* Closure of halfspaces and hyperplanes *}lemma isCont_open_vimage: assumes "!!x. isCont f x" and "open s" shows "open (f - s)"proof - from assms(1) have "continuous_on UNIV f" unfolding isCont_def continuous_on_def by simp then have "open {x ∈ UNIV. f x ∈ s}" using open_UNIV open s by (rule continuous_open_preimage) then show "open (f - s)" by (simp add: vimage_def)qedlemma isCont_closed_vimage: assumes "!!x. isCont f x" and "closed s" shows "closed (f - s)" using assms unfolding closed_def vimage_Compl [symmetric] by (rule isCont_open_vimage)lemma open_Collect_less: fixes f g :: "'a::t2_space => real" assumes f: "!!x. isCont f x" and g: "!!x. isCont g x" shows "open {x. f x < g x}"proof - have "open ((λx. g x - f x) - {0<..})" using isCont_diff [OF g f] open_real_greaterThan by (rule isCont_open_vimage) also have "((λx. g x - f x) - {0<..}) = {x. f x < g x}" by auto finally show ?thesis .qedlemma closed_Collect_le: fixes f g :: "'a::t2_space => real" assumes f: "!!x. isCont f x" and g: "!!x. isCont g x" shows "closed {x. f x ≤ g x}"proof - have "closed ((λx. g x - f x) - {0..})" using isCont_diff [OF g f] closed_real_atLeast by (rule isCont_closed_vimage) also have "((λx. g x - f x) - {0..}) = {x. f x ≤ g x}" by auto finally show ?thesis .qedlemma closed_Collect_eq: fixes f g :: "'a::t2_space => 'b::t2_space" assumes f: "!!x. isCont f x" and g: "!!x. isCont g x" shows "closed {x. f x = g x}"proof - have "open {(x::'b, y::'b). x ≠ y}" unfolding open_prod_def by (auto dest!: hausdorff) then have "closed {(x::'b, y::'b). x = y}" unfolding closed_def split_def Collect_neg_eq . with isCont_Pair [OF f g] have "closed ((λx. (f x, g x)) - {(x, y). x = y})" by (rule isCont_closed_vimage) also have "… = {x. f x = g x}" by auto finally show ?thesis .qedlemma continuous_at_inner: "continuous (at x) (inner a)" unfolding continuous_at by (intro tendsto_intros)lemma closed_halfspace_le: "closed {x. inner a x ≤ b}" by (simp add: closed_Collect_le)lemma closed_halfspace_ge: "closed {x. inner a x ≥ b}" by (simp add: closed_Collect_le)lemma closed_hyperplane: "closed {x. inner a x = b}" by (simp add: closed_Collect_eq)lemma closed_halfspace_component_le: "closed {x::'a::euclidean_space. x•i ≤ a}" by (simp add: closed_Collect_le)lemma closed_halfspace_component_ge: "closed {x::'a::euclidean_space. 
x•i ≥ a}" by (simp add: closed_Collect_le)lemma closed_interval_left: fixes b :: "'a::euclidean_space" shows "closed {x::'a. ∀i∈Basis. x•i ≤ b•i}" by (simp add: Collect_ball_eq closed_INT closed_Collect_le)lemma closed_interval_right: fixes a :: "'a::euclidean_space" shows "closed {x::'a. ∀i∈Basis. a•i ≤ x•i}" by (simp add: Collect_ball_eq closed_INT closed_Collect_le)text {* Openness of halfspaces. *}lemma open_halfspace_lt: "open {x. inner a x < b}" by (simp add: open_Collect_less)lemma open_halfspace_gt: "open {x. inner a x > b}" by (simp add: open_Collect_less)lemma open_halfspace_component_lt: "open {x::'a::euclidean_space. x•i < a}" by (simp add: open_Collect_less)lemma open_halfspace_component_gt: "open {x::'a::euclidean_space. x•i > a}" by (simp add: open_Collect_less)text{* Instantiation for intervals on @{text ordered_euclidean_space} *}lemma eucl_lessThan_eq_halfspaces: fixes a :: "'a::ordered_euclidean_space" shows "{..<a} = (\<Inter>i∈Basis. {x. x • i < a • i})" by (auto simp: eucl_less[where 'a='a])lemma eucl_greaterThan_eq_halfspaces: fixes a :: "'a::ordered_euclidean_space" shows "{a<..} = (\<Inter>i∈Basis. {x. a • i < x • i})" by (auto simp: eucl_less[where 'a='a])lemma eucl_atMost_eq_halfspaces: fixes a :: "'a::ordered_euclidean_space" shows "{.. a} = (\<Inter>i∈Basis. {x. x • i ≤ a • i})" by (auto simp: eucl_le[where 'a='a])lemma eucl_atLeast_eq_halfspaces: fixes a :: "'a::ordered_euclidean_space" shows "{a ..} = (\<Inter>i∈Basis. {x. a • i ≤ x • i})" by (auto simp: eucl_le[where 'a='a])lemma open_eucl_lessThan[simp, intro]: fixes a :: "'a::ordered_euclidean_space" shows "open {..< a}" by (auto simp: eucl_lessThan_eq_halfspaces open_halfspace_component_lt)lemma open_eucl_greaterThan[simp, intro]: fixes a :: "'a::ordered_euclidean_space" shows "open {a <..}" by (auto simp: eucl_greaterThan_eq_halfspaces open_halfspace_component_gt)lemma closed_eucl_atMost[simp, intro]: fixes a :: "'a::ordered_euclidean_space" shows "closed {.. a}" unfolding eucl_atMost_eq_halfspaces by (simp add: closed_INT closed_Collect_le)lemma closed_eucl_atLeast[simp, intro]: fixes a :: "'a::ordered_euclidean_space" shows "closed {a ..}" unfolding eucl_atLeast_eq_halfspaces by (simp add: closed_INT closed_Collect_le)text {* This gives a simple derivation of limit component bounds. *}lemma Lim_component_le: fixes f :: "'a => 'b::euclidean_space" assumes "(f ---> l) net" and "¬ (trivial_limit net)" and "eventually (λx. f(x)•i ≤ b) net" shows "l•i ≤ b" by (rule tendsto_le[OF assms(2) tendsto_const tendsto_inner[OF assms(1) tendsto_const] assms(3)])lemma Lim_component_ge: fixes f :: "'a => 'b::euclidean_space" assumes "(f ---> l) net" and "¬ (trivial_limit net)" and "eventually (λx. b ≤ (f x)•i) net" shows "b ≤ l•i" by (rule tendsto_le[OF assms(2) tendsto_inner[OF assms(1) tendsto_const] tendsto_const assms(3)])lemma Lim_component_eq: fixes f :: "'a => 'b::euclidean_space" assumes net: "(f ---> l) net" "¬ trivial_limit net" and ev:"eventually (λx. f(x)•i = b) net" shows "l•i = b" using ev[unfolded order_eq_iff eventually_conj_iff] using Lim_component_ge[OF net, of b i] using Lim_component_le[OF net, of i b] by autotext {* Limits relative to a union. 
*}lemma eventually_within_Un: "eventually P (at x within (s ∪ t)) <-> eventually P (at x within s) ∧ eventually P (at x within t)" unfolding eventually_at_filter by (auto elim!: eventually_rev_mp)lemma Lim_within_union: "(f ---> l) (at x within (s ∪ t)) <-> (f ---> l) (at x within s) ∧ (f ---> l) (at x within t)" unfolding tendsto_def by (auto simp add: eventually_within_Un)lemma Lim_topological: "(f ---> l) net <-> trivial_limit net ∨ (∀S. open S --> l ∈ S --> eventually (λx. f x ∈ S) net)" unfolding tendsto_def trivial_limit_eq by autotext{* Some more convenient intermediate-value theorem formulations. *}lemma connected_ivt_hyperplane: assumes "connected s" and "x ∈ s" and "y ∈ s" and "inner a x ≤ b" and "b ≤ inner a y" shows "∃z ∈ s. inner a z = b"proof (rule ccontr) assume as:"¬ (∃z∈s. inner a z = b)" let ?A = "{x. inner a x < b}" let ?B = "{x. inner a x > b}" have "open ?A" "open ?B" using open_halfspace_lt and open_halfspace_gt by auto moreover have "?A ∩ ?B = {}" by auto moreover have "s ⊆ ?A ∪ ?B" using as by auto ultimately show False using assms(1)[unfolded connected_def not_ex, THEN spec[where x="?A"], THEN spec[where x="?B"]] using assms(2-5) by autoqedlemma connected_ivt_component: fixes x::"'a::euclidean_space" shows "connected s ==> x ∈ s ==> y ∈ s ==> x•k ≤ a ==> a ≤ y•k ==> (∃z∈s. z•k = a)" using connected_ivt_hyperplane[of s x y "k::'a" a] by (auto simp: inner_commute)subsection {* Homeomorphisms *}definition "homeomorphism s t f g <-> (∀x∈s. (g(f x) = x)) ∧ (f s = t) ∧ continuous_on s f ∧ (∀y∈t. (f(g y) = y)) ∧ (g t = s) ∧ continuous_on t g"definition homeomorphic :: "'a::topological_space set => 'b::topological_space set => bool" (infixr "homeomorphic" 60) where "s homeomorphic t ≡ (∃f g. homeomorphism s t f g)"lemma homeomorphic_refl: "s homeomorphic s" unfolding homeomorphic_def unfolding homeomorphism_def using continuous_on_id apply (rule_tac x = "(λx. x)" in exI) apply (rule_tac x = "(λx. x)" in exI) apply blast donelemma homeomorphic_sym: "s homeomorphic t <-> t homeomorphic s" unfolding homeomorphic_def unfolding homeomorphism_def by blastlemma homeomorphic_trans: assumes "s homeomorphic t" and "t homeomorphic u" shows "s homeomorphic u"proof - obtain f1 g1 where fg1: "∀x∈s. g1 (f1 x) = x" "f1 s = t" "continuous_on s f1" "∀y∈t. f1 (g1 y) = y" "g1 t = s" "continuous_on t g1" using assms(1) unfolding homeomorphic_def homeomorphism_def by auto obtain f2 g2 where fg2: "∀x∈t. g2 (f2 x) = x" "f2 t = u" "continuous_on t f2" "∀y∈u. f2 (g2 y) = y" "g2 u = t" "continuous_on u g2" using assms(2) unfolding homeomorphic_def homeomorphism_def by auto { fix x assume "x∈s" then have "(g1 o g2) ((f2 o f1) x) = x" using fg1(1)[THEN bspec[where x=x]] and fg2(1)[THEN bspec[where x="f1 x"]] and fg1(2) by auto } moreover have "(f2 o f1) s = u" using fg1(2) fg2(2) by auto moreover have "continuous_on s (f2 o f1)" using continuous_on_compose[OF fg1(3)] and fg2(3) unfolding fg1(2) by auto moreover { fix y assume "y∈u" then have "(f2 o f1) ((g1 o g2) y) = y" using fg2(4)[THEN bspec[where x=y]] and fg1(4)[THEN bspec[where x="g2 y"]] and fg2(5) by auto } moreover have "(g1 o g2) u = s" using fg1(5) fg2(5) by auto moreover have "continuous_on u (g1 o g2)" using continuous_on_compose[OF fg2(6)] and fg1(6) unfolding fg2(5) by auto ultimately show ?thesis unfolding homeomorphic_def homeomorphism_def apply (rule_tac x="f2 o f1" in exI) apply (rule_tac x="g1 o g2" in exI) apply auto doneqedlemma homeomorphic_minimal: "s homeomorphic t <-> (∃f g. (∀x∈s. f(x) ∈ t ∧ (g(f(x)) = x)) ∧ (∀y∈t. 
g(y) ∈ s ∧ (f(g(y)) = y)) ∧ continuous_on s f ∧ continuous_on t g)" unfolding homeomorphic_def homeomorphism_def apply auto apply (rule_tac x=f in exI) apply (rule_tac x=g in exI) apply auto apply (rule_tac x=f in exI) apply (rule_tac x=g in exI) apply auto unfolding image_iff apply (erule_tac x="g x" in ballE) apply (erule_tac x="x" in ballE) apply auto apply (rule_tac x="g x" in bexI) apply auto apply (erule_tac x="f x" in ballE) apply (erule_tac x="x" in ballE) apply auto apply (rule_tac x="f x" in bexI) apply auto donetext {* Relatively weak hypotheses if a set is compact. *}lemma homeomorphism_compact: fixes f :: "'a::topological_space => 'b::t2_space" assumes "compact s" "continuous_on s f" "f s = t" "inj_on f s" shows "∃g. homeomorphism s t f g"proof - def g ≡ "λx. SOME y. y∈s ∧ f y = x" have g: "∀x∈s. g (f x) = x" using assms(3) assms(4)[unfolded inj_on_def] unfolding g_def by auto { fix y assume "y ∈ t" then obtain x where x:"f x = y" "x∈s" using assms(3) by auto then have "g (f x) = x" using g by auto then have "f (g y) = y" unfolding x(1)[symmetric] by auto } then have g':"∀x∈t. f (g x) = x" by auto moreover { fix x have "x∈s ==> x ∈ g t" using g[THEN bspec[where x=x]] unfolding image_iff using assms(3) by (auto intro!: bexI[where x="f x"]) moreover { assume "x∈g t" then obtain y where y:"y∈t" "g y = x" by auto then obtain x' where x':"x'∈s" "f x' = y" using assms(3) by auto then have "x ∈ s" unfolding g_def using someI2[of "λb. b∈s ∧ f b = y" x' "λx. x∈s"] unfolding y(2)[symmetric] and g_def by auto } ultimately have "x∈s <-> x ∈ g t" .. } then have "g t = s" by auto ultimately show ?thesis unfolding homeomorphism_def homeomorphic_def apply (rule_tac x=g in exI) using g and assms(3) and continuous_on_inv[OF assms(2,1), of g, unfolded assms(3)] and assms(2) apply auto doneqedlemma homeomorphic_compact: fixes f :: "'a::topological_space => 'b::t2_space" shows "compact s ==> continuous_on s f ==> (f s = t) ==> inj_on f s ==> s homeomorphic t" unfolding homeomorphic_def by (metis homeomorphism_compact)text{* Preservation of topological properties. *}lemma homeomorphic_compactness: "s homeomorphic t ==> (compact s <-> compact t)" unfolding homeomorphic_def homeomorphism_def by (metis compact_continuous_image)text{* Results on translation, scaling etc. *}lemma homeomorphic_scaling: fixes s :: "'a::real_normed_vector set" assumes "c ≠ 0" shows "s homeomorphic ((λx. c *⇩R x) s)" unfolding homeomorphic_minimal apply (rule_tac x="λx. c *⇩R x" in exI) apply (rule_tac x="λx. (1 / c) *⇩R x" in exI) using assms apply (auto simp add: continuous_on_intros) donelemma homeomorphic_translation: fixes s :: "'a::real_normed_vector set" shows "s homeomorphic ((λx. a + x) s)" unfolding homeomorphic_minimal apply (rule_tac x="λx. a + x" in exI) apply (rule_tac x="λx. -a + x" in exI) using continuous_on_add[OF continuous_on_const continuous_on_id] apply auto donelemma homeomorphic_affinity: fixes s :: "'a::real_normed_vector set" assumes "c ≠ 0" shows "s homeomorphic ((λx. a + c *⇩R x) s)"proof - have *: "op + a op *⇩R c s = (λx. a + c *⇩R x) s" by auto show ?thesis using homeomorphic_trans using homeomorphic_scaling[OF assms, of s] using homeomorphic_translation[of "(λx. c *⇩R x) s" a] unfolding * by autoqedlemma homeomorphic_balls: fixes a b ::"'a::real_normed_vector" assumes "0 < d" "0 < e" shows "(ball a d) homeomorphic (ball b e)" (is ?th) and "(cball a d) homeomorphic (cball b e)" (is ?cth)proof - show ?th unfolding homeomorphic_minimal apply(rule_tac x="λx. 
b + (e/d) *⇩R (x - a)" in exI) apply(rule_tac x="λx. a + (d/e) *⇩R (x - b)" in exI) using assms apply (auto intro!: continuous_on_intros simp: dist_commute dist_norm pos_divide_less_eq mult_strict_left_mono) done show ?cth unfolding homeomorphic_minimal apply(rule_tac x="λx. b + (e/d) *⇩R (x - a)" in exI) apply(rule_tac x="λx. a + (d/e) *⇩R (x - b)" in exI) using assms apply (auto intro!: continuous_on_intros simp: dist_commute dist_norm pos_divide_le_eq mult_strict_left_mono) doneqedtext{* "Isometry" (up to constant bounds) of injective linear map etc. *}lemma cauchy_isometric: fixes x :: "nat => 'a::euclidean_space" assumes e: "e > 0" and s: "subspace s" and f: "bounded_linear f" and normf: "∀x∈s. norm (f x) ≥ e * norm x" and xs: "∀n. x n ∈ s" and cf: "Cauchy (f o x)" shows "Cauchy x"proof - interpret f: bounded_linear f by fact { fix d :: real assume "d > 0" then obtain N where N:"∀n≥N. norm (f (x n) - f (x N)) < e * d" using cf[unfolded cauchy o_def dist_norm, THEN spec[where x="e*d"]] and e and mult_pos_pos[of e d] by auto { fix n assume "n≥N" have "e * norm (x n - x N) ≤ norm (f (x n - x N))" using subspace_sub[OF s, of "x n" "x N"] using xs[THEN spec[where x=N]] and xs[THEN spec[where x=n]] using normf[THEN bspec[where x="x n - x N"]] by auto also have "norm (f (x n - x N)) < e * d" using N ≤ n N unfolding f.diff[symmetric] by auto finally have "norm (x n - x N) < d" using e>0 by simp } then have "∃N. ∀n≥N. norm (x n - x N) < d" by auto } then show ?thesis unfolding cauchy and dist_norm by autoqedlemma complete_isometric_image: fixes f :: "'a::euclidean_space => 'b::euclidean_space" assumes "0 < e" and s: "subspace s" and f: "bounded_linear f" and normf: "∀x∈s. norm(f x) ≥ e * norm(x)" and cs: "complete s" shows "complete (f s)"proof - { fix g assume as:"∀n::nat. g n ∈ f s" and cfg:"Cauchy g" then obtain x where "∀n. x n ∈ s ∧ g n = f (x n)" using choice[of "λ n xa. xa ∈ s ∧ g n = f xa"] by auto then have x:"∀n. x n ∈ s" "∀n. g n = f (x n)" by auto then have "f o x = g" unfolding fun_eq_iff by auto then obtain l where "l∈s" and l:"(x ---> l) sequentially" using cs[unfolded complete_def, THEN spec[where x="x"]] using cauchy_isometric[OF 0<e s f normf] and cfg and x(1) by auto then have "∃l∈f s. (g ---> l) sequentially" using linear_continuous_at[OF f, unfolded continuous_at_sequentially, THEN spec[where x=x], of l] unfolding f o x = g by auto } then show ?thesis unfolding complete_def by autoqedlemma injective_imp_isometric: fixes f :: "'a::euclidean_space => 'b::euclidean_space" assumes s: "closed s" "subspace s" and f: "bounded_linear f" "∀x∈s. f x = 0 --> x = 0" shows "∃e>0. ∀x∈s. norm (f x) ≥ e * norm x"proof (cases "s ⊆ {0::'a}") case True { fix x assume "x ∈ s" then have "x = 0" using True by auto then have "norm x ≤ norm (f x)" by auto } then show ?thesis by (auto intro!: exI[where x=1])next interpret f: bounded_linear f by fact case False then obtain a where a: "a ≠ 0" "a ∈ s" by auto from False have "s ≠ {}" by auto let ?S = "{f x| x. (x ∈ s ∧ norm x = norm a)}" let ?S' = "{x::'a. x∈s ∧ norm x = norm a}" let ?S'' = "{x::'a. 
norm x = norm a}" have "?S'' = frontier(cball 0 (norm a))" unfolding frontier_cball and dist_norm by auto then have "compact ?S''" using compact_frontier[OF compact_cball, of 0 "norm a"] by auto moreover have "?S' = s ∩ ?S''" by auto ultimately have "compact ?S'" using closed_inter_compact[of s ?S''] using s(1) by auto moreover have *:"f ?S' = ?S" by auto ultimately have "compact ?S" using compact_continuous_image[OF linear_continuous_on[OF f(1)], of ?S'] by auto then have "closed ?S" using compact_imp_closed by auto moreover have "?S ≠ {}" using a by auto ultimately obtain b' where "b'∈?S" "∀y∈?S. norm b' ≤ norm y" using distance_attains_inf[of ?S 0] unfolding dist_0_norm by auto then obtain b where "b∈s" and ba: "norm b = norm a" and b: "∀x∈{x ∈ s. norm x = norm a}. norm (f b) ≤ norm (f x)" unfolding *[symmetric] unfolding image_iff by auto let ?e = "norm (f b) / norm b" have "norm b > 0" using ba and a and norm_ge_zero by auto moreover have "norm (f b) > 0" using f(2)[THEN bspec[where x=b], OF b∈s] using norm b >0 unfolding zero_less_norm_iff by auto ultimately have "0 < norm (f b) / norm b" by (simp only: divide_pos_pos) moreover { fix x assume "x∈s" then have "norm (f b) / norm b * norm x ≤ norm (f x)" proof (cases "x=0") case True then show "norm (f b) / norm b * norm x ≤ norm (f x)" by auto next case False then have *: "0 < norm a / norm x" using a≠0 unfolding zero_less_norm_iff[symmetric] by (simp only: divide_pos_pos) have "∀c. ∀x∈s. c *⇩R x ∈ s" using s[unfolded subspace_def] by auto then have "(norm a / norm x) *⇩R x ∈ {x ∈ s. norm x = norm a}" using x∈s and x≠0 by auto then show "norm (f b) / norm b * norm x ≤ norm (f x)" using b[THEN bspec[where x="(norm a / norm x) *⇩R x"]] unfolding f.scaleR and ba using x≠0 a≠0 by (auto simp add: mult_commute pos_le_divide_eq pos_divide_le_eq) qed } ultimately show ?thesis by autoqedlemma closed_injective_image_subspace: fixes f :: "'a::euclidean_space => 'b::euclidean_space" assumes "subspace s" "bounded_linear f" "∀x∈s. f x = 0 --> x = 0" "closed s" shows "closed(f s)"proof - obtain e where "e > 0" and e: "∀x∈s. e * norm x ≤ norm (f x)" using injective_imp_isometric[OF assms(4,1,2,3)] by auto show ?thesis using complete_isometric_image[OF e>0 assms(1,2) e] and assms(4) unfolding complete_eq_closed[symmetric] by autoqedsubsection {* Some properties of a canonical subspace *}lemma subspace_substandard: "subspace {x::'a::euclidean_space. (∀i∈Basis. P i --> x•i = 0)}" unfolding subspace_def by (auto simp: inner_add_left)lemma closed_substandard: "closed {x::'a::euclidean_space. ∀i∈Basis. P i --> x•i = 0}" (is "closed ?A")proof - let ?D = "{i∈Basis. P i}" have "closed (\<Inter>i∈?D. {x::'a. x•i = 0})" by (simp add: closed_INT closed_Collect_eq) also have "(\<Inter>i∈?D. {x::'a. x•i = 0}) = ?A" by auto finally show "closed ?A" .qedlemma dim_substandard: assumes d: "d ⊆ Basis" shows "dim {x::'a::euclidean_space. ∀i∈Basis. i ∉ d --> x•i = 0} = card d" (is "dim ?A = _")proof (rule dim_unique) show "d ⊆ ?A" using d by (auto simp: inner_Basis) show "independent d" using independent_mono [OF independent_Basis d] . show "?A ⊆ span d" proof (clarify) fix x assume x: "∀i∈Basis. i ∉ d --> x • i = 0" have "finite d" using finite_subset [OF d finite_Basis] . then have "(∑i∈d. (x • i) *⇩R i) ∈ span d" by (simp add: span_setsum span_clauses) also have "(∑i∈d. (x • i) *⇩R i) = (∑i∈Basis. (x • i) *⇩R i)" by (rule setsum_mono_zero_cong_left [OF finite_Basis d]) (auto simp add: x) finally show "x ∈ span d" unfolding euclidean_representation . 
qedqed simptext{* Hence closure and completeness of all subspaces. *}lemma ex_card: assumes "n ≤ card A" shows "∃S⊆A. card S = n"proof cases assume "finite A" from ex_bij_betw_nat_finite[OF this] guess f .. note f = this moreover from f n ≤ card A have "{..< n} ⊆ {..< card A}" "inj_on f {..< n}" by (auto simp: bij_betw_def intro: subset_inj_on) ultimately have "f {..< n} ⊆ A" "card (f {..< n}) = n" by (auto simp: bij_betw_def card_image) then show ?thesis by blastnext assume "¬ finite A" with n ≤ card A show ?thesis by forceqedlemma closed_subspace: fixes s :: "'a::euclidean_space set" assumes "subspace s" shows "closed s"proof - have "dim s ≤ card (Basis :: 'a set)" using dim_subset_UNIV by auto with ex_card[OF this] obtain d :: "'a set" where t: "card d = dim s" and d: "d ⊆ Basis" by auto let ?t = "{x::'a. ∀i∈Basis. i ∉ d --> x•i = 0}" have "∃f. linear f ∧ f {x::'a. ∀i∈Basis. i ∉ d --> x • i = 0} = s ∧ inj_on f {x::'a. ∀i∈Basis. i ∉ d --> x • i = 0}" using dim_substandard[of d] t d assms by (intro subspace_isomorphism[OF subspace_substandard[of "λi. i ∉ d"]]) (auto simp: inner_Basis) then guess f by (elim exE conjE) note f = this interpret f: bounded_linear f using f unfolding linear_conv_bounded_linear by auto { fix x have "x∈?t ==> f x = 0 ==> x = 0" using f.zero d f(3)[THEN inj_onD, of x 0] by auto } moreover have "closed ?t" using closed_substandard . moreover have "subspace ?t" using subspace_substandard . ultimately show ?thesis using closed_injective_image_subspace[of ?t f] unfolding f(2) using f(1) unfolding linear_conv_bounded_linear by autoqedlemma complete_subspace: fixes s :: "('a::euclidean_space) set" shows "subspace s ==> complete s" using complete_eq_closed closed_subspace by autolemma dim_closure: fixes s :: "('a::euclidean_space) set" shows "dim(closure s) = dim s" (is "?dc = ?d")proof - have "?dc ≤ ?d" using closure_minimal[OF span_inc, of s] using closed_subspace[OF subspace_span, of s] using dim_subset[of "closure s" "span s"] unfolding dim_span by auto then show ?thesis using dim_subset[OF closure_subset, of s] by autoqedsubsection {* Affine transformations of intervals *}lemma real_affinity_le: "0 < (m::'a::linordered_field) ==> (m * x + c ≤ y <-> x ≤ inverse(m) * y + -(c / m))" by (simp add: field_simps inverse_eq_divide)lemma real_le_affinity: "0 < (m::'a::linordered_field) ==> (y ≤ m * x + c <-> inverse(m) * y + -(c / m) ≤ x)" by (simp add: field_simps inverse_eq_divide)lemma real_affinity_lt: "0 < (m::'a::linordered_field) ==> (m * x + c < y <-> x < inverse(m) * y + -(c / m))" by (simp add: field_simps inverse_eq_divide)lemma real_lt_affinity: "0 < (m::'a::linordered_field) ==> (y < m * x + c <-> inverse(m) * y + -(c / m) < x)" by (simp add: field_simps inverse_eq_divide)lemma real_affinity_eq: "(m::'a::linordered_field) ≠ 0 ==> (m * x + c = y <-> x = inverse(m) * y + -(c / m))" by (simp add: field_simps inverse_eq_divide)lemma real_eq_affinity: "(m::'a::linordered_field) ≠ 0 ==> (y = m * x + c <-> inverse(m) * y + -(c / m) = x)" by (simp add: field_simps inverse_eq_divide)lemma image_affinity_interval: fixes m::real fixes a b c :: "'a::ordered_euclidean_space" shows "(λx. m *⇩R x + c) {a .. b} = (if {a .. b} = {} then {} else (if 0 ≤ m then {m *⇩R a + c .. m *⇩R b + c} else {m *⇩R b + c .. 
m *⇩R a + c}))"proof (cases "m = 0") case True { fix x assume "x ≤ c" "c ≤ x" then have "x = c" unfolding eucl_le[where 'a='a] apply - apply (subst euclidean_eq_iff) apply (auto intro: order_antisym) done } moreover have "c ∈ {m *⇩R a + c..m *⇩R b + c}" unfolding True by (auto simp add: eucl_le[where 'a='a]) ultimately show ?thesis using True by autonext case False { fix y assume "a ≤ y" "y ≤ b" "m > 0" then have "m *⇩R a + c ≤ m *⇩R y + c" and "m *⇩R y + c ≤ m *⇩R b + c" unfolding eucl_le[where 'a='a] by (auto simp: inner_simps) } moreover { fix y assume "a ≤ y" "y ≤ b" "m < 0" then have "m *⇩R b + c ≤ m *⇩R y + c" and "m *⇩R y + c ≤ m *⇩R a + c" unfolding eucl_le[where 'a='a] by (auto simp add: mult_left_mono_neg inner_simps) } moreover { fix y assume "m > 0" and "m *⇩R a + c ≤ y" and "y ≤ m *⇩R b + c" then have "y ∈ (λx. m *⇩R x + c) {a..b}" unfolding image_iff Bex_def mem_interval eucl_le[where 'a='a] apply (intro exI[where x="(1 / m) *⇩R (y - c)"]) apply (auto simp add: pos_le_divide_eq pos_divide_le_eq mult_commute diff_le_iff inner_simps) done } moreover { fix y assume "m *⇩R b + c ≤ y" "y ≤ m *⇩R a + c" "m < 0" then have "y ∈ (λx. m *⇩R x + c) {a..b}" unfolding image_iff Bex_def mem_interval eucl_le[where 'a='a] apply (intro exI[where x="(1 / m) *⇩R (y - c)"]) apply (auto simp add: neg_le_divide_eq neg_divide_le_eq mult_commute diff_le_iff inner_simps) done } ultimately show ?thesis using False by autoqedlemma image_smult_interval:"(λx. m *⇩R (x::_::ordered_euclidean_space)) {a..b} = (if {a..b} = {} then {} else if 0 ≤ m then {m *⇩R a..m *⇩R b} else {m *⇩R b..m *⇩R a})" using image_affinity_interval[of m 0 a b] by autosubsection {* Banach fixed point theorem (not really topological...) *}lemma banach_fix: assumes s: "complete s" "s ≠ {}" and c: "0 ≤ c" "c < 1" and f: "(f s) ⊆ s" and lipschitz: "∀x∈s. ∀y∈s. dist (f x) (f y) ≤ c * dist x y" shows "∃!x∈s. f x = x"proof - have "1 - c > 0" using c by auto from s(2) obtain z0 where "z0 ∈ s" by auto def z ≡ "λn. (f ^^ n) z0" { fix n :: nat have "z n ∈ s" unfolding z_def proof (induct n) case 0 then show ?case using z0 ∈ s by auto next case Suc then show ?case using f by auto qed } note z_in_s = this def d ≡ "dist (z 0) (z 1)" have fzn:"!!n. 
f (z n) = z (Suc n)" unfolding z_def by auto { fix n :: nat have "dist (z n) (z (Suc n)) ≤ (c ^ n) * d" proof (induct n) case 0 then show ?case unfolding d_def by auto next case (Suc m) then have "c * dist (z m) (z (Suc m)) ≤ c ^ Suc m * d" using 0 ≤ c using mult_left_mono[of "dist (z m) (z (Suc m))" "c ^ m * d" c] by auto then show ?case using lipschitz[THEN bspec[where x="z m"], OF z_in_s, THEN bspec[where x="z (Suc m)"], OF z_in_s] unfolding fzn and mult_le_cancel_left by auto qed } note cf_z = this { fix n m :: nat have "(1 - c) * dist (z m) (z (m+n)) ≤ (c ^ m) * d * (1 - c ^ n)" proof (induct n) case 0 show ?case by auto next case (Suc k) have "(1 - c) * dist (z m) (z (m + Suc k)) ≤ (1 - c) * (dist (z m) (z (m + k)) + dist (z (m + k)) (z (Suc (m + k))))" using dist_triangle and c by (auto simp add: dist_triangle) also have "… ≤ (1 - c) * (dist (z m) (z (m + k)) + c ^ (m + k) * d)" using cf_z[of "m + k"] and c by auto also have "… ≤ c ^ m * d * (1 - c ^ k) + (1 - c) * c ^ (m + k) * d" using Suc by (auto simp add: field_simps) also have "… = (c ^ m) * (d * (1 - c ^ k) + (1 - c) * c ^ k * d)" unfolding power_add by (auto simp add: field_simps) also have "… ≤ (c ^ m) * d * (1 - c ^ Suc k)" using c by (auto simp add: field_simps) finally show ?case by auto qed } note cf_z2 = this { fix e :: real assume "e > 0" then have "∃N. ∀m n. N ≤ m ∧ N ≤ n --> dist (z m) (z n) < e" proof (cases "d = 0") case True have *: "!!x. ((1 - c) * x ≤ 0) = (x ≤ 0)" using 1 - c > 0 by (metis mult_zero_left mult_commute real_mult_le_cancel_iff1) from True have "!!n. z n = z0" using cf_z2[of 0] and c unfolding z_def by (simp add: *) then show ?thesis using e>0 by auto next case False then have "d>0" unfolding d_def using zero_le_dist[of "z 0" "z 1"] by (metis False d_def less_le) then have "0 < e * (1 - c) / d" using e>0 and 1-c>0 using divide_pos_pos[of "e * (1 - c)" d] and mult_pos_pos[of e "1 - c"] by auto then obtain N where N:"c ^ N < e * (1 - c) / d" using real_arch_pow_inv[of "e * (1 - c) / d" c] and c by auto { fix m n::nat assume "m>n" and as:"m≥N" "n≥N" have *:"c ^ n ≤ c ^ N" using n≥N and c using power_decreasing[OF n≥N, of c] by auto have "1 - c ^ (m - n) > 0" using c and power_strict_mono[of c 1 "m - n"] using m>n by auto then have **: "d * (1 - c ^ (m - n)) / (1 - c) > 0" using mult_pos_pos[OF d>0, of "1 - c ^ (m - n)"] using divide_pos_pos[of "d * (1 - c ^ (m - n))" "1 - c"] using 0 < 1 - c by auto have "dist (z m) (z n) ≤ c ^ n * d * (1 - c ^ (m - n)) / (1 - c)" using cf_z2[of n "m - n"] and m>n unfolding pos_le_divide_eq[OF 1-c>0] by (auto simp add: mult_commute dist_commute) also have "… ≤ c ^ N * d * (1 - c ^ (m - n)) / (1 - c)" using mult_right_mono[OF * order_less_imp_le[OF **]] unfolding mult_assoc by auto also have "… < (e * (1 - c) / d) * d * (1 - c ^ (m - n)) / (1 - c)" using mult_strict_right_mono[OF N **] unfolding mult_assoc by auto also have "… = e * (1 - c ^ (m - n))" using c and d>0 and 1 - c > 0 by auto also have "… ≤ e" using c and 1 - c ^ (m - n) > 0 and e>0 using mult_right_le_one_le[of e "1 - c ^ (m - n)"] by auto finally have "dist (z m) (z n) < e" by auto } note * = this { fix m n :: nat assume as: "N ≤ m" "N ≤ n" then have "dist (z n) (z m) < e" proof (cases "n = m") case True then show ?thesis using e>0 by auto next case False then show ?thesis using as and *[of n m] *[of m n] unfolding nat_neq_iff by (auto simp add: dist_commute) qed } then show ?thesis by auto qed } then have "Cauchy z" unfolding cauchy_def by auto then obtain x where "x∈s" and x:"(z ---> x) sequentially" 
using s(1)[unfolded compact_def complete_def, THEN spec[where x=z]] and z_in_s by auto def e ≡ "dist (f x) x" have "e = 0" proof (rule ccontr) assume "e ≠ 0" then have "e > 0" unfolding e_def using zero_le_dist[of "f x" x] by (metis dist_eq_0_iff dist_nz e_def) then obtain N where N:"∀n≥N. dist (z n) x < e / 2" using x[unfolded LIMSEQ_def, THEN spec[where x="e/2"]] by auto then have N':"dist (z N) x < e / 2" by auto have *: "c * dist (z N) x ≤ dist (z N) x" unfolding mult_le_cancel_right2 using zero_le_dist[of "z N" x] and c by (metis dist_eq_0_iff dist_nz order_less_asym less_le) have "dist (f (z N)) (f x) ≤ c * dist (z N) x" using lipschitz[THEN bspec[where x="z N"], THEN bspec[where x=x]] using z_in_s[of N] x∈s using c by auto also have "… < e / 2" using N' and c using * by auto finally show False unfolding fzn using N[THEN spec[where x="Suc N"]] and dist_triangle_half_r[of "z (Suc N)" "f x" e x] unfolding e_def by auto qed then have "f x = x" unfolding e_def by auto moreover { fix y assume "f y = y" "y∈s" then have "dist x y ≤ c * dist x y" using lipschitz[THEN bspec[where x=x], THEN bspec[where x=y]] using x∈s and f x = x by auto then have "dist x y = 0" unfolding mult_le_cancel_right1 using c and zero_le_dist[of x y] by auto then have "y = x" by auto } ultimately show ?thesis using x∈s by blast+qedsubsection {* Edelstein fixed point theorem *}lemma edelstein_fix: fixes s :: "'a::metric_space set" assumes s: "compact s" "s ≠ {}" and gs: "(g s) ⊆ s" and dist: "∀x∈s. ∀y∈s. x ≠ y --> dist (g x) (g y) < dist x y" shows "∃!x∈s. g x = x"proof - let ?D = "(λx. (x, x)) s" have D: "compact ?D" "?D ≠ {}" by (rule compact_continuous_image) (auto intro!: s continuous_Pair continuous_within_id simp: continuous_on_eq_continuous_within) have "!!x y e. x ∈ s ==> y ∈ s ==> 0 < e ==> dist y x < e ==> dist (g y) (g x) < e" using dist by fastforce then have "continuous_on s g" unfolding continuous_on_iff by auto then have cont: "continuous_on ?D (λx. dist ((g o fst) x) (snd x))" unfolding continuous_on_eq_continuous_within by (intro continuous_dist ballI continuous_within_compose) (auto intro!: continuous_fst continuous_snd continuous_within_id simp: image_image) obtain a where "a ∈ s" and le: "!!x. x ∈ s ==> dist (g a) a ≤ dist (g x) x" using continuous_attains_inf[OF D cont] by auto have "g a = a" proof (rule ccontr) assume "g a ≠ a" with a ∈ s gs have "dist (g (g a)) (g a) < dist (g a) a" by (intro dist[rule_format]) auto moreover have "dist (g a) a ≤ dist (g (g a)) (g a)" using a ∈ s gs by (intro le) auto ultimately show False by auto qed moreover have "!!x. x ∈ s ==> g x = x ==> x = a" using dist[THEN bspec[where x=a]] g a = a and a∈s by auto ultimately show "∃!x∈s. 
g x = x" using a ∈ s by blastqeddeclare tendsto_const [intro] (* FIXME: move *)end | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.836185872554779, "perplexity": 7519.699006977827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://brilliant.org/problems/2016-amc-sample-question/ | # 2016 AMC (Sample Question)
Let $$k$$ be a positive integer. Bernardo and Silvia take turns writing and erasing numbers on a blackboard as follows: Bernardo starts by writing the smallest perfect square with $$k+1$$ digits. Every time Bernardo writes a number, Silvia erases the last $$k$$ digits of it. Bernardo then writes the next perfect square, Silvia erases the last $$k$$ digits of it, and this process continues until the last two numbers that remain on the board differ by at least 2. Let $$f(k)$$ be the smallest positive integer not written on the board. For example, if $$k=1$$, then the numbers that Bernardo writes are $$16,25,36,49,64$$, and the numbers showing on the board after Silvia erases are $$1,2,3,4,6$$, and thus $$f(1) = 5$$. What is the sum of digits of $$f(2) + f(4) + f(6) + \cdots + f(2016)$$?
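A brute-force sketch of the process just described, workable for small $$k$$ only (the function name and the linear searches are my own choices; $$f(2016)$$ is far beyond direct simulation and needs a closed form):

```python
def f(k):
    # Smallest perfect square with k+1 digits: first n with n*n >= 10^k.
    n = 1
    while n * n < 10**k:
        n += 1
    shown = []  # numbers left on the board after Silvia erases the last k digits
    while True:
        shown.append((n * n) // 10**k)
        # Stop once the last two numbers remaining on the board differ by >= 2.
        if len(shown) >= 2 and shown[-1] - shown[-2] >= 2:
            break
        n += 1
    # f(k) is the smallest positive integer never shown on the board.
    m = 1
    while m in shown:
        m += 1
    return m

print(f(1))  # 5, matching the worked example above
```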
| {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8532286286354065, "perplexity": 374.95574195666455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512584.10/warc/CC-MAIN-20171211071340-20171211091340-00189.warc.gz"}
http://link.springer.com/article/10.1023%2FA%3A1005266731458 | Article
Studia Logica
, Volume 63, Issue 2, pp 245-268
First online:
# Proof-Theoretic Modal PA-Completeness II: The Syntactic Countermodel
• Paolo Gentilini, affiliated with Istituto per la Matematica Applicata del Consiglio Nazionale delle Ricerche (IMA-CNR)
## Abstract
This paper is the second part of the syntactic demonstration of the Arithmetical Completeness of the modal system G, the first part of which is presented in [9]. Given a sequent S such that ⊢_{GL-LIN} S and ⊬_G S, and given its characteristic formula H = char(S), which expresses the non-G-provability of S, we construct a canonical proof-tree T of ~H in GL-LIN, the height of which is the distance d(S, G) of S from G. T is the syntactic countermodel of S with respect to G and is a tool of general interest in Provability Logic, one that allows some classification in the set of the arithmetical interpretations.
Proof-Theory, Provability Logic, countermodel of a sequent, classification of arithmetical interpretations of modal logic | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9336269497871399, "perplexity": 2458.45555260052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701147841.50/warc/CC-MAIN-20160205193907-00128-ip-10-236-182-209.ec2.internal.warc.gz"}
http://mathhelpforum.com/algebra/166122-what-1-64-power-4-a-print.html | # What is 1/64 as a power of 4?
• December 13th 2010, 06:04 AM
jebus197
What is 1/64 as a power of 4?
Hi, can someone please explain how to do this question?
Quote:
Write the number $\frac{1}{64}$ as a power of 4. What is the value of the index?
I'm not just looking for an answer, as I would very much appreciate knowing how to do it too. I'm not even sure I know what an integer or an index is yet.
• December 13th 2010, 06:05 AM
Plato
What is 64 as a power of 4?
• December 13th 2010, 06:16 AM
jebus197
64 as a power of 4 is $4^3$. So is $4^3$ the answer?
Even if it is, it's still not really explaining the question though.
Thanks.
• December 13th 2010, 06:19 AM
Unknown008
Or, another way to ask this: to what power should 4 be raised to get 64?
EDIT: I didn't see your above post.
No, that means that:
$\dfrac{1}{64} = \dfrac{1}{4^3}$
What happens when you have a fraction?
• December 13th 2010, 06:31 AM
Plato
Actually it should be $4^{-3}$.
• December 13th 2010, 09:08 AM
jebus197
Thanks. But you guys are answering with more questions (and answers). What I really need is an explanation.
• December 13th 2010, 09:10 AM
snowtea
Perhaps you just need to understand what it means to have a negative exponent.
$x^{-n} = \frac{1}{x^n}$
So $4^{-3} = \frac{1}{4^3} = \frac{1}{64}$
• December 13th 2010, 09:16 AM
Unknown008
Quote:
Originally Posted by jebus197
Thanks. But you guys are answering with more questions (and answers). What I really need is an explanation.
Well, what we are trying to do is make you think and understand why such and such things work... because we'll not always be here to answer you or to explain things to you. You'll have to be able to think on your own at some point and evaluate whether or not your reasoning is correct.
• December 13th 2010, 09:21 AM
jebus197
Yes, but what I really need is an explanation. Didn't any of you guys' maths tutors explain things to you before asking you to do it?
• December 13th 2010, 09:30 AM
Plato
$4^{-1}=\dfrac{1}{4}$ by definition.
So $4^{-3}=\dfrac{1}{4^3}=\dfrac{1}{64}$.
$4^{-n}=\dfrac{1}{4^n}$.
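A quick sanity check of that rule (an added aside, not part of the original thread) in Python:

```python
# negative exponents: 4**(-n) == 1 / 4**n
print(4 ** -3)            # 0.015625
print(1 / 64)             # 0.015625
print(4 ** -3 == 1 / 64)  # True (0.015625 is exactly representable in binary)
```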
• December 13th 2010, 10:31 AM
jebus197
That's a bit better. Thanks.
• December 13th 2010, 05:32 PM
mr fantastic
Quote:
Originally Posted by jebus197
That's a bit better. Thanks.
There is an assumption that you know something about index laws (otherwise why are you attempting this question). The point of being asked questions is to try and guide you to the answer yourself, based on what you already know. You were told everything you needed to know; the expectation was that you would then attempt to answer the question rather than maintaining a helpless attitude. Post #2 and then particularly post #7 tell you exactly what is required.
What I'd like to see is if you have actually learned anything from this thread. eg. What is 1/81 as a power of 3? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 9, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7914559841156006, "perplexity": 698.4003818627501}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990123.20/warc/CC-MAIN-20150728002310-00230-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://ora.ouls.ox.ac.uk/objects/uuid:b24e9e23-7fd3-418f-83bb-5820fa57f5d5 | Conference item
### Solvability of matrix-exponential equations
Abstract:
We consider a continuous analogue of (Babai et al. 1996)'s and (Cai et al. 2000)'s problem of solving multiplicative matrix equations. Given k + 1 square matrices A1,..., Ak, C, all of the same dimension, whose entries are real algebraic, we examine the problem of deciding whether there exist non-negative reals t1,..., tk such that ∏_{i=1}^{k} exp(Ai ti) = C. We show that this problem is undecidable in general, but decidable under the assumption that the matrices A1, ..., Ak commute. Our results ...
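To make the decision problem concrete, here is a minimal numerical sketch (an illustration added here, not from the paper); the matrices and exponents are assumptions chosen for the example, and SciPy's expm is used for the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

# Does exp(A1*t1) @ exp(A2*t2) equal C for some non-negative t1, t2?
A1 = np.array([[0.0, 1.0], [0.0, 0.0]])
A2 = np.array([[0.0, 0.0], [1.0, 0.0]])
C = expm(0.5 * A1) @ expm(2.0 * A2)  # a solvable instance by construction

def residual(t1, t2):
    return np.linalg.norm(expm(t1 * A1) @ expm(t2 * A2) - C)

print(residual(0.5, 2.0))  # ~0: (t1, t2) = (0.5, 2.0) is a witness
```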
Publication status:
Published
Peer review status:
Peer reviewed
### Access Document
Files:
• (Accepted manuscript, pdf, 333.1KB)
Publisher copy:
10.1145/2933575.2934538
### Authors
More by this author
Institution:
University of Oxford
Oxford college:
St John's College
Role:
Author
More by this author
Institution:
University of Oxford
Division:
MPLS
Department:
Computer Science
Role:
Author
More by this author
Institution:
University of Oxford
Oxford college:
St Cross College
Role:
Author
More by this author
Institution:
University of Oxford
Oxford college:
Green Templeton College
Role:
Author
More from this funder
Funding agency for:
Sousa Pinto, J
Publisher:
ACM/IEEE Symposium on Logic in Computer Science
Journal:
Thirty-First Annual ACM/IEEE Symposium on Logic in Computer Science
Host title:
Thirty-First Annual ACM/IEEE Symposium on Logic in Computer Science
Publication date:
2016-08-01
Acceptance date:
2016-04-04
DOI:
Source identifiers:
619304
Keywords:
Pubs id:
pubs:619304
UUID:
uuid:b24e9e23-7fd3-418f-83bb-5820fa57f5d5
Local pid:
pubs:619304
Deposit date:
2016-08-11 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9616424441337585, "perplexity": 10912.580495481749}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570692.22/warc/CC-MAIN-20220807181008-20220807211008-00113.warc.gz"} |
http://mathhelpforum.com/algebra/22977-systems-check.html | 1. ## Systems- Check
Solve by the method of your choice.
x^2+y^2=50
(x-7)^2+y^2=1
I got; {(7,1), (7,-1),(-7,1),(-7,-1)}
x^2+y^2=29
-8x+y^2=41
I got: {(-2,5),(-2,-5)}
2. Originally Posted by soly_sol
Solve by the method of your choice.
x^2+y^2=50
(x-7)^2+y^2=1
I got; {(7,1), (7,-1),(-7,1),(-7,-1)}
you know, you could check your solutions by plugging into the original system. you would have immediately realized that (-7,-1) and (-7,1) do not work in the second equation, so those solutions are wrong
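Here is that substitution check as a small Python sketch (an added aside, not part of the original thread):

```python
# substitute each candidate into both equations of the system
candidates = [(7, 1), (7, -1), (-7, 1), (-7, -1)]
for x, y in candidates:
    ok = (x**2 + y**2 == 50) and ((x - 7)**2 + y**2 == 1)
    print((x, y), "works" if ok else "fails")
# only (7, 1) and (7, -1) satisfy the second equation
```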
3. Originally Posted by soly_sol
Solve by the method of your choice.
x^2+y^2=50
(x-7)^2+y^2=1
I got; {(7,1), (7,-1),(-7,1),(-7,-1)}
what did you do?
what is $(-7 - 7)^2 + ( \pm 1)^2$?
Originally Posted by soly_sol
x^2+y^2=29
-8x+y^2=41
I got: {(-2,5),(-2,-5)}
right! but i think, there is more.. EDIT: no more.. Ü | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.658984363079071, "perplexity": 6927.913697389842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607802.75/warc/CC-MAIN-20170524055048-20170524075048-00029.warc.gz"} |
https://export.arxiv.org/list/astro-ph.HE/1708 | # High Energy Astrophysical Phenomena
## Authors and titles for astro-ph.HE in Aug 2017
[1]
Title: HESS J0632+057: hydrodynamics and nonthermal emission
Comments: 6 pages, 5 figures, accepted by MNRAS Letters
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[2]
Title: Astroparticle Physics Tests of Lorentz Invariance Violation
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[3]
Title: The Vela X pulsar wind nebula through the eyes of H.E.S.S. and Suzaku
Authors: L. Tibaldo, F. Aharonian, P. Bordas, S. Caroff, J. A. Hinton, D. Khangulyan, H. Odaka, R. Tuffs (for the H.E.S.S. Collaboration)
Comments: 8 pages, 4 figures. To appear in Proceedings of 35th ICRC, Busan (Korea) 2017
Journal-ref: PoS(ICRC2017)719
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[4]
Title: Possible range of viscosity parameter to trigger black hole candidates to exhibit different states of outbursts
Comments: 28 pages, 2 figures, 3 tables, in press: ApJ (05/07/17)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[5]
Title: Pulsar Wind Nebulae Created by Fast-Moving Pulsars
Comments: 25 pages, 11 figures, accepted for publication in the Journal of Plasma Physics
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[6]
Title: H.E.S.S. observations following multi-messenger alerts in real-time
Comments: Proceedings 35th International Cosmic Ray Conference (ICRC2017), Busan/South Korea
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[7]
Title: FRB Strength Distribution Challenges the Cosmological Principle
Authors: J. I. Katz
Comments: 5 pp, 4 figs. Expanded discussion of uncertainties and biases; conclusions unchanged
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
[8]
Title: The Formation of Rapidly Rotating Black Holes in High Mass X-ray Binaries
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[9]
Title: Monitoring of the FSRQ PKS 1510-089 with H.E.S.S
Comments: To appear in Proceedings of 35th ICRC, Busan (Korea) 2017
Journal-ref: POS(ICRC2017)654
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[10]
Title: Interferometric Radio Measurements of Air Showers with LOPES: Final Results (ICRC 2017)
Authors: Frank G. Schröder, Katrin Link (for the LOPES Collaboration)
Comments: Proceedings of the 35th ICRC 2017, Busan, Korea
Journal-ref: PoS (ICRC2017) 458
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Instrumentation and Methods for Astrophysics (astro-ph.IM)
[11]
Title: Overview on the Tunka-Rex antenna array for cosmic-ray air showers (ICRC 2017)
Authors: Frank G. Schröder (for the Tunka-Rex Collaboration)
Comments: Proceedings of the 35th ICRC 2017, Busan, Korea
Journal-ref: PoS (ICRC2017) 459
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Instrumentation and Methods for Astrophysics (astro-ph.IM)
[12]
Title: The exceptional VHE gamma-ray outburst of PKS 1510-089 in May 2016
Comments: To appear in Proceedings of 35th ICRC, Busan (Korea) 2017
Journal-ref: Proceedings of the 35th International Cosmic Ray Conference (ICRC2017) Volume 301 p. 655
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[13]
Title: H.E.S.S. discovery of very-high-energy emission from the blazar PKS 0736+017: on the location of the $γ$-ray emitting region in FSRQs
Authors: Matteo Cerruti, Jean-Philippe Lenain, Heike Prokoph (for the H.E.S.S. Collaboration)
Comments: 8 pages; to appear in the proceedings of the 35th International Cosmic Ray Conference (ICRC2017)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[14]
Title: Extragalactic Observations with HESS: Past and Future
Authors: Andrew M. Taylor, David Sanchez, Matteo Cerruti (on behalf of the H.E.S.S. collaboration)
Comments: 8 pages; to appear in the proceedings of the 35th International Cosmic Ray Conference (ICRC2017)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[15]
Title: High-Energy Gamma Rays from the Milky Way: Three-Dimensional Spatial Models for the Cosmic-Ray and Radiation Field Densities in the Interstellar Medium
Comments: 23 pages, 14 figures, 1 appendix. ApJ in press
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[16]
Title: Studying the ICM in clusters of galaxies via surface brightness fluctuations of the cosmic X-ray background
Authors: Alexander Kolodzig (KIAA), Marat Gilfanov (MPA, IKI), Gert Hütsi (Tartu Observatory Estonia), Rashid Sunyaev (MPA, IKI)
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[17]
Title: Gaia17biu/SN 2017egm in NGC 3191: The closest hydrogen-poor superluminous supernova to date is in a "normal", massive, metal-rich spiral galaxy
Comments: Accepted for publication in ApJ. Ancillary ASCII tables added: TRL.txt -- blackbody temperature, radius and luminosity; uvw2uvm2uvw1uvu.txt -- UV photometry; BgVri.txt -- optical photometry; zJHK.txt -- NIR photometry
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE); Astrophysics of Galaxies (astro-ph.GA); Solar and Stellar Astrophysics (astro-ph.SR)
[18]
Title: Observation of the extremely bright flare of the FSRQ 3C279 with H.E.S.S. II
Comments: In Proceedings of the 35th International Cosmic Ray Conference (ICRC2017), Busan, South Korea
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[19]
Title: H.E.S.S. II observations of the 2014 periastron passage of PSR B1259-63/LS 2883
Authors: C. Romoli, P. Bordas, C. Mariaud, T. Murach (for the H.E.S.S. Collaboration)
Comments: In Proceedings of the 35th International Cosmic Ray Conference (ICRC2017), Busan, South Korea
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[20]
Title: NuSTAR view of the Z-type neutron star low-mass X-ray binary Cygnus X-2
Comments: Accepted for publication in MNRAS
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[21]
Title: First Results of Eta Car Observations with H.E.S.S.II
Comments: 5 pages, 2 figures, To appear in Proceedings of 35th ICRC, Busan, Korea, 2017
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[22]
Title: Quantifying Intrinsic Variability of Sagittarius A* using Closure Phase Measurements of the Event Horizon Telescope
Comments: 18 pages, 15 figures; accepted for publication in ApJ
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[23]
Title: Target of Opportunity Observations of Blazars with H.E.S.S
Comments: 35th International Cosmic Ray Conference -ICRC2017, 10-20 July, 2017, Bexco, Busan, Korea
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[24]
Title: GRB Observations with H.E.S.S. II
Comments: 8 pages, 2 figures, To appear in Proceedings of 35th ICRC, Busan (Korea) 2017
Subjects: High Energy Astrophysical Phenomena (astro-ph.HE)
[25]
Title: Emission from accelerating jets in gamma-ray bursts: Radiation dominated flows with increasing mass outflow rates | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.644760251045227, "perplexity": 26605.894448386014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662520817.27/warc/CC-MAIN-20220517194243-20220517224243-00410.warc.gz"} |
https://blog.evanchen.cc/2014/12/25/representation-theory-part-2-schurs-lemma/ | # Representation Theory, Part 2: Schur’s Lemma
Merry Christmas!
In the previous post I introduced the idea of an irreducible representation and showed that except in fields of low characteristic, these representations decompose completely. In this post I'll present Schur's Lemma and talk about what Schur and Maschke tell us about homomorphisms of representations.
1. Motivation
Fix a group ${G}$ now, and consider all isomorphism classes of finite-dimensional representations of ${G}$. We’ll denote this set by ${\mathrm{Irrep}(G)}$. Maschke’s Theorem tells us that any finite-dimensional representation ${\rho}$ can be decomposed as
$\displaystyle \bigoplus_{\rho_\alpha \in \mathrm{Irrep}(G)} \rho_{\alpha}^{\oplus n_\alpha}$
where ${n_\alpha}$ is some nonnegative integer. This begs the question: what is ${n_\alpha}$? Is it even uniquely determined by ${\rho}$?
To answer this I first need to compute ${\mathrm{Hom}_G(\rho, \pi)}$ for any two distinct irreducible representations ${\rho}$ and ${\pi}$. One case is easy.
Lemma 1 Let ${\rho}$ and ${\pi}$ be non-isomorphic irreducible representations (not necessarily finite dimensional). Then there are no nontrivial homomorphisms ${\phi : \rho \rightarrow \pi}$. In other words, ${\mathrm{Hom}_G(\rho, \pi) = \{0\}}$.
I haven’t actually told you what it means for representations to be isomorphic, but you can guess — it just means that there’s a homomorphism of ${G}$-representations between them which is also a bijection of the underlying vector spaces.
Proof: Let ${\phi : \rho_1 \rightarrow \rho_2}$ be a nonzero homomorphism. We can actually prove the following stronger results.
• If ${\rho_2}$ is irreducible then ${\phi}$ is surjective.
• If ${\rho_1}$ is irreducible then ${\phi}$ is injective.
Exercise Prove the above two results. (Hint: show that ${\text{Im } \phi}$ and ${\ker \phi}$ give rise to subrepresentations.)
Combining these two results gives the lemma because ${\phi}$ is now a bijection, and hence an isomorphism. $\Box$
2. Schur’s Lemma
Thus we only have to consider the case ${\rho \simeq \pi}$. The result which relates these is called Schur’s Lemma, but is important enough that we refer to it as a theorem.
Theorem 2 (Schur’s Lemma) Assume ${k}$ is algebraically closed. Let ${\rho}$ be a finite dimensional irreducible representation. Then ${\mathrm{Hom}_{G} (\rho, \rho)}$ consists precisely of maps of the form ${v \mapsto \lambda v}$, where ${\lambda \in k}$; the only possible maps are multiplication by a scalar. In other words,
$\displaystyle \mathrm{Hom}_{G} (\rho, \rho) \simeq k$
and ${\dim \mathrm{Hom}_G(\rho, \rho) = 1}$.
This is NOT in general true without the algebraically closed condition, as the following example shows.
Example Let ${k = {\mathbb R}}$, let ${V = {\mathbb R}^2}$, and let ${G = {\mathbb Z}_3}$ act on ${V}$ by rotating every ${\vec x \in {\mathbb R}^2}$ by ${120^{\circ}}$ around the origin, giving a representation ${\rho}$. Then ${\rho}$ is a counterexample to Schur’s Lemma.
Proof: This representation is clearly irreducible because the only point that it fixes is the origin, so there are no nontrivial subrepresentations.
We can now regard ${\rho}$ as a map in ${{\mathbb C}}$ which is multiplication by ${e^{\frac{2\pi i}{3}}}$. Then for any other complex number ${\xi}$, the map “multiplication by ${\xi}$” commutes with the map “multiplication by ${e^{\frac{2\pi i}{3}}}$”. So in fact
$\displaystyle \mathrm{Hom}_G(\rho, \rho) \simeq {\mathbb C}$
which has dimension ${2}$. $\Box$
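One can check this example numerically. The sketch below (an added illustration, assuming NumPy) computes the dimension of the space of real 2×2 matrices T commuting with the rotation R, using the identity vec(RT − TR) = (I ⊗ R − Rᵀ ⊗ I) vec(T):

```python
import numpy as np

theta = 2 * np.pi / 3
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# T commutes with R  iff  (I kron R - R^T kron I) vec(T) = 0
M = np.kron(np.eye(2), R) - np.kron(R.T, np.eye(2))
nullity = 4 - np.linalg.matrix_rank(M)
print(nullity)  # 2 = dim_R Hom_G(rho, rho), not 1, so Schur fails over R
```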
Now we can give the proof of Schur’s Lemma.
Proof: Clearly any map ${v \mapsto \lambda v}$ respects the ${G}$-action.
Now consider any ${T \in \mathrm{Hom}_G(\rho, \rho)}$. Set ${\rho = (V, \cdot_\rho)}$. Here’s the key: because ${k}$ is algebraically closed, and we’re over a finite dimensional vector space ${V}$, the map ${T}$ has an eigenvalue ${\lambda}$. Hence by definition ${V}$ has a subspace ${V_\lambda}$ over which ${T}$ is just multiplication by ${\lambda}$.
But then ${V_\lambda}$ is a ${G}$-invariant subspace of ${V}$! Since ${\rho}$ is irreducible, this can only happen if ${V = V_\lambda}$. That means ${T}$ is multiplication by ${\lambda}$ for the entire space ${V}$, as desired. $\Box$
3. Computing dimensions of homomorphisms
Since we can now compute the dimension of the ${\mathrm{Hom}_G}$ of any two irreducible representations, we can compute the dimension of the ${\mathrm{Hom}_G}$ for any composition of irreducibles, as follows.
Corollary 3 We have
$\displaystyle \dim \mathrm{Hom}_G \left( \bigoplus_\alpha \rho_\alpha^{\oplus n_\alpha}, \bigoplus_\beta \rho_\beta^{\oplus m_\beta} \right) = \sum_{\alpha} n_\alpha m_\alpha$
where the direct sums run over the isomorphism classes of irreducibles.
Proof: The ${\mathrm{Hom}}$ just decomposes over each of the components as
\displaystyle \begin{aligned} \mathrm{Hom}_G \left( \bigoplus_\alpha \rho_\alpha^{\oplus n_\alpha}, \bigoplus_\beta \rho_\beta^{\oplus m_\beta} \right) &\simeq \bigoplus_{\alpha, \beta} \mathrm{Hom}_G(\rho_\alpha^{\oplus n_\alpha}, \rho_\beta^{\oplus m_\beta}) \\\ &\simeq \bigoplus_{\alpha, \beta} \mathrm{Hom}_G(\rho_\alpha, \rho_\beta)^{\oplus n_\alpha m_\beta}. \end{aligned}
Here we’re using the fact that ${\mathrm{Hom}_G(\rho_1 \oplus \rho_2, \rho) = \mathrm{Hom}_G(\rho_1, \rho) \oplus \mathrm{Hom}_G(\rho_2, \rho)}$ (obvious) and its analog. The claim follows from our lemmas now. $\Box$
As a special case of this, we can quickly derive the following.
Corollary 4 Suppose ${\rho = \bigoplus_\alpha \rho_\alpha^{n_\alpha}}$ as above. Then for any particular ${\beta}$,
$\displaystyle n_\beta = \dim \mathrm{Hom}_G(\rho, \rho_\beta).$
Proof: We have
$\displaystyle \dim \mathrm{Hom}_G(\rho, \rho_\beta) = n_\beta \dim \mathrm{Hom}_G(\rho_\beta, \rho_\beta) = n_\beta$
as desired. $\Box$
This settles the “unique decomposition” in the affirmative. Hurrah!
It might be worth noting that we didn’t actually need Schur’s Lemma if we were solely interested in uniqueness, since without it we would have obtained
$\displaystyle n_\beta = \frac{\dim \mathrm{Hom}_G(\rho, \rho_\beta)}{\dim \mathrm{Hom}_G(\rho_\beta, \rho_\beta)}.$
However, the denominator in that expression is rather unsatisfying, don’t you think?
4. Conclusion
In summary, we have shown the following main results for finite dimensional representations of a group ${G}$.
• Maschke’s Theorem: If ${G}$ is finite and ${\text{char } k}$ does not divide ${\left\lvert G \right\rvert}$, then any finite dimensional representation is a direct sum of irreducibles. This decomposition is unique up to isomorphism.
• Schur’s Lemma: If ${k}$ is algebraically closed, then ${\mathrm{Hom}_G(\rho, \rho) \simeq k}$ for any irreducible ${\rho}$, while there are no nontrivial homomorphisms between non-isomorphic irreducibles.
In the next post I’ll talk about products of irreducibles, and use them in the fourth post to prove two very elegant results about the irreducibles, as follows.
1. The number of (isomorphism classes) of irreducibles ${\rho_\alpha}$ is equal to the number of conjugacy classes of ${G}$.
2. We have ${ \left\lvert G \right\rvert = \sum_\alpha \left( \dim \rho_\alpha \right)^2 }$.
Thanks to Dennis Gaitsgory, who taught me this in his course Math 55a. My notes for Math 55a can be found at my website. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 94, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9929470419883728, "perplexity": 216.19169876337295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323583408.93/warc/CC-MAIN-20211016013436-20211016043436-00188.warc.gz"} |
https://wiki.synfig.org/index.php?title=Dev:Bline_Speed&diff=next&oldid=5760 | Difference between revisions of "Dev:Bline Speed"
Bline's parameter's "speed"
If you have played enough with Blines and the "BLine Vertex" and "BLine Tangent" converts, you have perhaps noticed that a change in the "Amount" parameter doesn't always step forward/backwards by the same amount. For example, adding 0.1 to "Amount" doesn't move a "Bline Vertex" by the same distance all the time.
Near the bline's vertices (or near the curved parts) you'll notice that evenly spaced "Amount" values are either compressed together or expanded away from each other. Trying to make an object follow a bline will lead to the object changing speeds as it goes along it.
The problem lies in how Blines are defined and how a position in the Bline changes as the "Amount" parameter changes. I'll refer to the rate of change as the Bline's "speed".
Why does "speed" change?
Firstly, a Synfig Bline is composed of several bezier sections. Each section is a cubic bezier curve. These sections are joined back to back, allowing for arbitrarily complex shapes. All the properties that hold for a single section also hold true for any number of sections. So I'm gonna focus on Blines with a single section, in other words, Blines with only two vertexes.
A Bline with a single section reduces to a Cubic Bezier defined like this:
$\displaystyle \mathbf{B}(t)=(1-t)^3\mathbf{P}_0+3t(1-t)^2\mathbf{P}_1+3t^2(1-t)\mathbf{P}_2+t^3\mathbf{P}_3 \mbox{ , } t \in [0,1].$
This equation describes the shape of the curve. As the $\displaystyle t\,\!$ parameter increases from zero up to one, the point defined by the equation moves from the Bezier's start towards its end. The rate of the motion as $\displaystyle t\,\!$ increases describes the curve's "speed". Taking the derivative of this equation yields the "speed":
$\displaystyle \frac{d\mathbf{B}(t)}{dt}= (1-t)^2 [ 3 ( \mathbf{P}_1 - \mathbf{P}_0 ) ] + 2t(1-t) [ 3 ( \mathbf{P}_2 - \mathbf{P}_1 ) ] + t^2 [ 3 ( \mathbf{P}_3 - \mathbf{P}_2 ) ] \mbox{ , }t \in [0,1].$
You may have noticed that this equation is equivalent to a Quadratic Bezier. This tells us that the "speed" can and does change as the $\displaystyle t\,\!$ parameter changes.
Our objective is now to compensate the derivative to achieve a desired "speed". We cannot change the control points of the curve without changing its shape. The only other thing we can change is the parameter $\displaystyle t\,\!$ . Therefore, we define a function $\displaystyle g(t)\,\!$ so that:
$\displaystyle \frac{d\mathbf{B}(g(t))}{dt}=\boldsymbol{s}(t)\,\,$
Where $\displaystyle \boldsymbol{s}(t)$ is a vector $\displaystyle (s_x(t),s_y(t))\,\!$ that defines the desired speed as a function of $\displaystyle t\,\!$ . The curve needs to move in a whole range of directions as the curve describes its shape. Our objective is only to control its magnitude. This magnitude condition can be expressed as:
$\displaystyle s_x^2(t)+s_y^2(t)=s_{mag}^2(t)$
Where $\displaystyle s_{mag}(t)\,\!$ is a function defining the desired "speed" magnitude.
We can expand our first equation a bit:
$\displaystyle \frac{d\mathbf{B}(g(t))}{dt}=\frac{d\mathbf{B}(g(t))}{d(g(t))}\,\frac{dg(t)}{dt}=\boldsymbol{s}(t)\,\,$
Expanding the equation like this lets us use the original Bline's derivative definition, by replacing $\displaystyle t\,\!$ with $\displaystyle g(t)\,\!$ . Next we replace the x and y components into the magnitude condition equation:
$\displaystyle \Bigg[\frac{dB_x(g(t))}{d(g(t))}\,\frac{dg(t)}{dt}\Bigg]^2+\Bigg[\frac{dB_y(g(t))}{d(g(t))}\,\frac{dg(t)}{dt}\Bigg]^2=s_{mag}^2(t)$
Rearranging we obtain an ordinary non-linear differential equation:
$\displaystyle \frac{dg(t)}{dt}=\frac{s_{mag}(t)}{\sqrt{\Big[\frac{dB_x(g(t))}{d(g(t))}\Big]^2+\Big[\frac{dB_y(g(t))}{d(g(t))}\Big]^2}}$
Solving this equation yields a function $\displaystyle g(t)\,\!$ such that the curve's "speed" is dictated by the function $\displaystyle s_{mag}(t)\,\!$ .
Solving the equation
All that is left is to solve the equation. It is quite complex and as I said before, the differential equation that we got is non-linear. This makes it hard to find $\displaystyle g(t)\,\!$ in a clear formula.
But even such a complex equation is easy to solve numerically. Keeping in mind that what we want is simply the value of $\displaystyle g(t)\,\!$ so we can plug it into "Amount". A numerical solution of the equation gives us just that, the value of $\displaystyle g(t)\,\!$ at certain intervals.
The Runge-Kutta method serves this purpose quite well, and it's quite simple also. All we need is to evaluate the derivative of the function that we need to find, and feed the values into the Runge-Kutta method.
Let's try a simple case, constant speed. If $\displaystyle s_{mag}(t)\,\!$ is a constant value, then it would need to be equal to the Bline's length, so that as the $\displaystyle t\,\!$ goes from 0.0 up to 1.0, the curve moves from the start to the end. Too little speed and the curve won't reach the end when $\displaystyle t\,\!$ reaches 1.0. Too much and the curve will go past the end when $\displaystyle t\,\!$ reaches 1.0.
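As an illustration of the numerical solution (a sketch added here in Python, not Synfig source code; it assumes the section's derivative never vanishes), classical fourth-order Runge-Kutta applied to dg/dt = L / |B'(g)|, with L the section's arc length:

```python
import numpy as np

def bezier_deriv(P, g):
    # dB/dg of a cubic Bezier with control points P[0..3] (each a 2-vector)
    P = np.asarray(P, dtype=float)
    return (3 * (1 - g)**2 * (P[1] - P[0])
            + 6 * g * (1 - g) * (P[2] - P[1])
            + 3 * g**2 * (P[3] - P[2]))

def constant_speed_param(P, steps=100):
    # arc length L: average |B'| over [0,1] (simple uniform quadrature)
    ts = np.linspace(0.0, 1.0, 1001)
    L = sum(np.linalg.norm(bezier_deriv(P, t)) for t in ts) / len(ts)
    # assumes B'(g) never vanishes (no zero-length tangents)
    f = lambda g: L / np.linalg.norm(bezier_deriv(P, min(g, 1.0)))
    g, h, out = 0.0, 1.0 / steps, [0.0]
    for _ in range(steps):
        k1 = f(g); k2 = f(g + h * k1 / 2)
        k3 = f(g + h * k2 / 2); k4 = f(g + h * k3)
        g = min(g + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6, 1.0)
        out.append(g)
    return out  # out[i] is the "Amount" giving uniform speed at t = i/steps
```

Feeding out[i] into the converter's "Amount" at evenly spaced times then moves the point along the section at (approximately) constant speed.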
Conveniently, this method also allows to find a Bline's length. If we assume $\displaystyle s_{mag}(t)=1\,\!$ then the curve will reach it's end when $\displaystyle t=LENGTH\,\!$ , where LENGTH is the Bline's length. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 29, "math_score": 0.8910622596740723, "perplexity": 404.6497903316781}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525500.21/warc/CC-MAIN-20190718042531-20190718064531-00550.warc.gz"} |
http://zbmath.org/?q=an:1059.62581 | # zbMATH — the first resource for mathematics
Multihypothesis sequential probability ratio tests. II: Accurate asymptotic expansions for the expected sample size. (English) Zbl 1059.62581
Summary: For Part I, see ibid. 45, 2448–2461 (1999; Zbl 1131.62313).
We proved in Part I that two specific constructions of multihypothesis sequential tests, which we refer to as multihypothesis sequential probability ratio tests (MSPRTs), are asymptotically optimal as the decision risks (or error probabilities) go to zero. The MSPRTs asymptotically minimize not only the expected sample size but also any positive moment of the stopping time distribution, under very general statistical models for the observations. In this paper, based on nonlinear renewal theory we find accurate asymptotic approximations (up to a vanishing term) for the expected sample size that take into account the “overshoot” over the boundaries of decision statistics. The approximations are derived for the scenario where the hypotheses are simple, the observations are independent and identically distributed (i.i.d.) according to one of the underlying distributions, and the decision risks go to zero. Simulation results for practical examples show that these approximations are fairly accurate not only for large but also for moderate sample sizes. The asymptotic results given here complete the analysis initiated by C. W. Baum and V. V. Veeravalli [see IEEE Trans. Inf. Theory 40, No. 6, 1994–2007 (1994; Zbl 0828.62070), where first-order asymptotics were obtained for the expected sample size under a specific restriction on the Kullback-Leibler distances between the hypotheses].
##### MSC:
62L10 Sequential statistical analysis | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097127079963684, "perplexity": 4037.873372724856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00225-ip-10-147-4-33.ec2.internal.warc.gz"} |
http://nrich.maths.org/1963/index?nomenu=1 | Show that the edges $AD$ and $BC$ of a tetrahedron $ABCD$ are mutually perpendicular when: $AB^2+CD^2 = AC^2+BD^2$. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5980554819107056, "perplexity": 126.1275072524125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422122220909.62/warc/CC-MAIN-20150124175700-00236-ip-10-180-212-252.ec2.internal.warc.gz"} |
https://vincenttam.github.io/blog/2014/09/08/another-way-of-writing-piecewise-functions/ | # Another Way of Writing Piecewise Functions
## Background
I changed my way of writing block equations for RSS.1 However, in my old post about the Contraction Mapping Principles, there’s an inequality on the rate of convergence of a point in a complete metric space to the unique fixed point of the Lipschitz mapping with a Lipschitz constant strictly less than one.2
\begin{aligned} d(x^*,x_n) =& d\left( \lim_{k \to \infty} x_k,x_n\right) \\\ =& \lim_{k \to \infty} d(x_k,x_n) \\\ \le& \lim_{k \to \infty} \frac{q^n}{1 - q} d(x_1,x_0) \\\ =& \frac{q^n}{1 - q} d(x_1,x_0) \end{aligned}
Starting from the second line in the above block equation, at the left of the binary relation symbols there’s a whitespace character.
### Visual effects in pages under “/posts/” or the index page
Note that due to the development of Octopress, I now see three backslashes in the “MathJax Equation Source”.
It doesn’t matter much in block equations.
## Problem
However, it does matter if I have to define a piecewise function. Take one defined in one of my old posts as an example.3
(Added on DEC 12TH, 2014) (Revised on SEP 3RD, 2015)
Note: As you can see from the above piecewise function, the problem is now gone.
$f(x,y) = \begin{cases} 0 &\text{if } (x,y) \in \vect{I} \land y \ge x\\\ 1 &\text{if } (x,y) \in \vect{I} \land y < x \end{cases}$
### Visual effects in pages under “/posts/” or the index page
Note that due to the development of Octopress, I now see three backslashes in the “MathJax Equation Source”.
## Solution
Now, I know how to tell kramdown to ignore MathJax code. This is much more convenient than the method described below.
After I observed that the two equations, which are aligned by the aligned environment, at the bottom of the post cited in footnote #3, I used the $\rm \LaTeX$ commands \left\\{ and \right. in the Markdown source file for posts to construct the left curly brace only.
f(x,y) = \left\{ \begin{aligned} 0 &\text{ if } (x,y) \in \vect{I} \land y \ge x\\\ 1 &\text{ if } (x,y) \in \vect{I} \land y < x, \end{aligned} \right. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9884640574455261, "perplexity": 1254.0550777695003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320679.64/warc/CC-MAIN-20170626050425-20170626070425-00573.warc.gz"} |
https://www.atmos-chem-phys-discuss.net/acp-2018-836/ | Journal cover Journal topic
Atmospheric Chemistry and Physics An interactive open-access journal of the European Geosciences Union
Journal topic
# Journal metrics
• IF 5.509
• IF 5-year 5.689
• CiteScore 5.44
• SNIP 1.519
• SJR 3.032
• IPP 5.37
• h5-index 86
• Scimago H index 161
# Abstracted/indexed
Abstracted/indexed
https://doi.org/10.5194/acp-2018-836
https://doi.org/10.5194/acp-2018-836
Research article 10 Sep 2018
Research article | 10 Sep 2018
Review status
This discussion paper is a preprint. It is a manuscript under review for the journal Atmospheric Chemistry and Physics (ACP).
# Rapid ice aggregation process revealed through triple-wavelength Doppler spectra radar analysis
Andrew I. Barrett1,2, Christopher D. Westbrook1, John C. Nicol1, and Thorwald H. M. Stein1
• 2Institute for Meteorology and Climate Research, Karlsruhe Institute of Technology, Karlsruhe, 76131, Germany
Abstract. Rapid aggregation of ice particles has been identified by combining data from three co-located, vertically-pointing radars operating at different frequencies. A new technique has been developed that uses the Doppler spectra from these radars to retrieve the vertical profile of ice particle size distributions.
The ice particles grow rapidly from a maximum size of 0.75 mm to 5 mm while falling less than 500 m and in under 10 minutes. This rapid growth is shown to agree well with theoretical estimates of aggregation, with aggregation efficiency close to 1, and is inconsistent with other growth processes, e.g. growth by deposition or riming. The aggregation occurs in the middle of the cloud, and is not present throughout the entire lifetime of the cloud. However, the layer of rapid aggregation is very well defined, at a constant height, where the temperature is −15 °C, and lasts for at least 20 minutes (approximate horizontal distance: 24 km). Immediately above this layer, the radar Doppler spectra are bi-modal, which signals the formation of new small ice particles at that height. We suggest that these newly formed particles, at approximately −15 °C, grow dendritic arms, enabling them to easily interlock and accelerate the aggregation process. The estimated aggregation efficiency in the studied cloud is between 0.7 and 1, consistent with recent laboratory studies for dendrites at this temperature.
A newly developed method for retrieving the ice particle size distribution using the Doppler spectra allows these retrievals in a much larger fraction of the cloud than existing DWR methods. Through quantitative comparison of the Doppler spectra from the three radars we are able to estimate the ice particle size distribution at different heights in the cloud. Comparison of these size distributions with those calculated from more basic radar-derived values and more restrictive assumptions agrees very well; however, the newly developed method allows size distribution retrieval in a larger fraction of the cloud because it allows us to isolate the signal from the larger (non-Rayleigh scattering) particles in the distribution and allows for deviation from the assumed shape of the distribution.
Interactive discussion
Status: final response (author comments only)
Latest update: 19 Jan 2019 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8044589161872864, "perplexity": 8131.613787443595}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583690495.59/warc/CC-MAIN-20190120021730-20190120043730-00556.warc.gz"} |
https://tohoku.pure.elsevier.com/en/publications/effects-of-strain-rate-on-mechanical-properties-of-heterogeneous- | # Effects of strain rate on mechanical properties of heterogeneous nano-structured SUS316LN stainless steel: Revealed by in-situ X-Ray diffraction at synchrotron radiation facility
Hua Jiang, Chihiro Watanabe, Yoji Miyajima, Norimitsu Koga, Yoshiteru Aoyagi, Masakazu Kobayashi, Hiromi Miura
Research output: Contribution to journalArticlepeer-review
## Abstract
A SUS316LN austenitic stainless steel was heavily cold rolled to develop heterogeneous nano (HN)-structure. And the effects of strain rate on the tensile deformation behavior of the HN-structured SUS316LN stainless steel were investigated. For this purpose, changes in lattice defect densities, such as dislocation density and stacking fault probability, during the tensile tests were investigated by means of in-situ X-ray diffraction (XRD) measurements at a synchrotron facility. The strength and elongation to failure simultaneously increased with increasing applied strain rate. The analyses of XRD profiles and microstructural observation revealed that twin fault probability as well as dislocation density and stacking fault probability increased with increasing strain rate. The more enhanced formation of ultra-fine twins by the increase in strain rate led to higher work hardening and resulted in more excellent strength/ductility balance.
Original language: English
Article number: 141251
Journal: Materials Science and Engineering A, Volume 815
DOI: https://doi.org/10.1016/j.msea.2021.141251
Publication status: Published - 2021 May 20
## Keywords
• Dislocation density
• Heterogeneous nano-structure
• In-situ X-ray measurement
• Mechanical twin
• SUS316LN stainless Steel | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8213509321212769, "perplexity": 11971.725167781415}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154310.16/warc/CC-MAIN-20210802075003-20210802105003-00347.warc.gz"} |
http://link.springer.com/article/10.1007/s00705-012-1546-x | Annotated Sequence Record
Archives of Virology
Volume 158, Issue 4, pp 909-913
# Complete genome sequence of arracacha virus B: a novel cheravirus
• I. P. Adams (Food and Environment Research Agency; corresponding author)
• R. Glover (Food and Environment Research Agency)
• R. Souza-Richards (Food and Environment Research Agency)
• S. Bennett (Food and Environment Research Agency)
• U. Hany (Food and Environment Research Agency)
• N. Boonham (Food and Environment Research Agency)
## Abstract
The complete genome sequences of RNA1 and RNA2 of the oca strain of the potato virus arracacha virus B were determined using next-generation sequencing. The RNA1 molecule is predicted to encode a 259-kDa polyprotein with homology to proteins of the cheraviruses apple latent spherical virus (ALSV) and cherry rasp leaf virus (CRLV). The RNA2 molecule is predicted to encode a 102-kDa polyprotein which also has homology to the corresponding protein of ALSV and, to a lesser degree, CRLV (30 % for RNA1, 24 % for RNA2). Detailed analysis of the genome sequence confirms that AVB is a distinct member of the genus Cheravirus. | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8208158612251282, "perplexity": 20526.3967222874}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157012.30/warc/CC-MAIN-20160205193917-00346-ip-10-236-182-209.ec2.internal.warc.gz"} |