url (stringlengths 14–2.42k) | text (stringlengths 100–1.02M) | date (stringlengths 19) | metadata (stringlengths 1.06k–1.1k) |
---|---|---|---|
https://evsynthacademy.org/index.php/lesson/dplyr/
|
# DPLYR
Now that we have started to tidy our data, we can see that we also need to transform it. We may wish to add new variables, or to look only at observations that meet certain requirements. The dplyr package allows us to do this further work with our data.
## dplyr Functionality
With dplyr we have five basic verbs that we will learn to work with:
filter()
select()
arrange()
mutate()
summarize()
We also will consider:
joins
group_by()
For the purposes of this example we will look at the nycflights13 package, a dataset of all flights that departed New York City in 2013. We will also be using the dplyr package from the tidyverse:
library(dplyr)
library(nycflights13)
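To see how these verbs fit together, here is a minimal sketch using the flights table loaded above (the columns dep_delay, carrier, origin, and dest come from nycflights13; the one-hour cutoff is just an arbitrary example):
# filter(): keep only flights that departed more than an hour late
late <- filter(flights, dep_delay > 60)
# select(): keep a few columns of interest
late <- select(late, carrier, origin, dest, dep_delay)
# arrange(): sort by departure delay, largest first
late <- arrange(late, desc(dep_delay))
# mutate(): add the delay measured in hours
late <- mutate(late, dep_delay_hrs = dep_delay / 60)
# group_by() + summarize(): average delay in hours per carrier
summarize(group_by(late, carrier), mean_delay_hrs = mean(dep_delay_hrs))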
# On Your Own: RStudio Practice
Before moving on to the next portion, take some time to explore the nycflights13 data. You can do so with the following call:
library(nycflights13)
flights
Once you have spent some time looking at the data, move onto the next lesson.
|
2021-01-15 21:12:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2098449468612671, "perplexity": 491.7699095464788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703496947.2/warc/CC-MAIN-20210115194851-20210115224851-00370.warc.gz"}
|
https://www.physicsforums.com/threads/write-a-nuclear-equation.972352/
|
# Write a nuclear equation
## Homework Statement:
Strontium-90 (Sr) is a radioactive substance which has 52 neutrons per atom. It decays to yttrium-90 (Yt), which has 51 neutrons in each atom. Write a nuclear equation to show the decay of strontium-90 to yttrium-90. Include the mass numbers and atomic numbers in your equation.
## Relevant Equations:
N/a
$$Sr\frac{90}{52}\rightarrow Yt\frac{90}{51}+e\frac{0}{-1}$$ is this correct?
No, review your lessons on how to write the nuclei of atoms.
$$Sr\frac{90}{52}\rightarrow \frac{0}{-1}B +Yt\frac{90}{51}$$
The nuclear notations say that nuclei should be written as ## _{Z}^{A}X##.
OOoooo my bad, $$Sr\frac{90}{38}\rightarrow Yt\frac{90}{39}+B\frac{0}{-1}$$
still not sure this is right but I feel it's much closer @Gaussian97
No, apart from the format (you always have to write the numbers on the left, exactly in this way: ##_{Z}^{A}X##), you have to look at your numbers; first of all, compute the values of Z and A for all your nuclei.
Z = 90 - 52 = 38, A = 90 for strontium. Z = 90 - 51 = 39, A = 90 for yttrium. The A stays the same and the Z increases, indicating beta decay, written as $$\frac{0}{-1}β$$ (or the letter e can be used). The written equation according to this should be: $$\frac{90}{38}Sr\rightarrow \frac{90}{39}Yt+\frac{0}{-1}β$$
Well, I will suppose that when you write ##\frac{90}{38}Sr## you are trying to write ##_{38}^{90}Sr##. Then now ##Z## and ##A## are okay.
Another thing, I don't know what is the convention you use in class, but usually the ##_{Z}^{A}X## is for nuclei and atoms, not for particles alone (unless a nucleus is a particle alone, then it might be okay). But the fact is that ##\beta## is not a nucleus, so you shouldn't write it like this.
But with all that you still have two problems:
1. Check your symbols, there's one wrong.
2. As you write it, this reaction is not possible, you need something else ;)
##_{38}^{90}Sr \rightarrow \ _{-1}^{0}e + \ _{39}^{90}Y##
"As you write it, this reaction is not possible, you need something else ;)" - not quite sure what I need to add?
collinsmark
That looks right to me.
That said, technically this reaction involves an anti-neutrino. Whether or not you include the anti-neutrino in your reaction equation depends on your coursework. Oftentimes neutrinos and anti-neutrinos are ignored, and in that case, your answer looks correct to me. [Edit: Of course, if your coursework does require acknowledging neutrinos and anti-neutrinos, then you need to reflect that in your reaction equation.]
collinsmark
Oh, and like @Gaussian97 alludes to, sometimes the beta particle is denoted as $\rm{e^-}$, but that depends on your coursework.
For what it's worth, I prefer your notation of $^{\ \ 0}_{-1} \rm{e}$ for nuclear reactions because it preserves balance in both the atomic numbers and the mass numbers. And if ionizations are to be considered (for whatever reason), it's easy to add the ionization count in the upper right-hand corner (e.g. $^{\ \ 0}_{-1} \rm{e}^-$) and still preserve balance there too.
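Putting the corrections together, one consistent way to write the full decay (my own summary; it includes the anti-neutrino mentioned above, which many courses omit) is:
$$_{38}^{90}Sr \rightarrow \ _{39}^{90}Y + \ _{-1}^{0}e + \bar{\nu}_e$$
The mass numbers balance as ##90 = 90 + 0 + 0## and the atomic numbers as ##38 = 39 + (-1) + 0##.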
|
2020-09-19 19:15:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8420393466949463, "perplexity": 1214.092408567767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400192783.34/warc/CC-MAIN-20200919173334-20200919203334-00002.warc.gz"}
|
http://mathhelpforum.com/algebra/158834-rationalizing-radicals-denominator.html
|
1. ## Rationalizing radicals in the denominator
It's a little difficult for me...
$\frac{2}{\sqrt[5]{4}}$
2. $\frac{2}{\sqrt[5]{4}} = \frac{2}{4^{(1/5)}}= \frac{2}{(2)^{2(1/5)}} = \frac{2}{2^{(2/5)}}$
Now complete it.
3. I don't understand. The correct answer is $\sqrt[5]{8}$
4. You need to look at your books/notes:
$\frac{2}{\sqrt[5]{4}} = \frac{2}{4^{(1/5)}}= \frac{2}{(2)^{2(1/5)}} = \frac{2}{2^{(2/5)}} = 2^{1-\frac{2}{5}} = 2^{(3/5)}=2^{3(1/5)} = 8^{(1/5)} = \sqrt[5]{8}$
Clear??
5. $\dfrac{2}{\sqrt[5]{4}} = \dfrac{2}{\sqrt[5]{2^2}}\cdot\dfrac{\sqrt[5]{2^3}}{\sqrt[5]{2^3}} = \sqrt[5]{8}$
6. I see. I just thought you had left the answer in that first post; I missed the "complete it" part. Sorry, I'm absent-minded today.
7. Originally Posted by Lil
It's little difficult for me...
$\frac{2}{\sqrt[5]{4}}$
Express the numerator as a fifth-root also...
$\displaystyle\frac{2}{\sqrt[5]{4}}=\frac{\sqrt[5]{2^5}}{\sqrt[5]{4}}=\sqrt[5]{\frac{2^5}{2^2}}$
8. I solved this one now. Is it correct or not?
$\frac{4}{3\sqrt{5}-4}=\frac{4(3\sqrt{5}+4)}{(3\sqrt{5}-4){(3\sqrt{5}+4)}}=\frac{4(3\sqrt{5}+4)}{(3\sqrt{5 })^2-(4)^{2}}=\frac{4(3\sqrt{5}+4)}{29}.$
9. Yes!
It can also be expressed as:
$\frac{4(3\sqrt{5}+4)}{29} = \frac{12\sqrt{5}}{29} + \frac{16}{29}$
10. Originally Posted by Lil
I solve this now. It's correct or no?
$\frac{4}{3\sqrt{5}-4}=\frac{4(3\sqrt{5}+4)}{(3\sqrt{5}-4){(3\sqrt{5}+4)}}=\frac{4(3\sqrt{5}+4)}{(3\sqrt{5 }-4)^{2}}=\frac{4(3\sqrt{5}+4)}{29}.$
You have the right idea (surd conjugate) and your final answer has rationalised the denominator and is correct.
However $(3\sqrt{5}-4)(3\sqrt{5}+4)=(3\sqrt{5})^2-4^2=29$
$(3\sqrt{5}-4)(3\sqrt{5}+4)\ \ne\ (3\sqrt{5}-4)^2$
11. Originally Posted by Archie Meade
You have the right idea (surd conjugate) and your final answer has rationalised the denominator and is correct.
However $(3\sqrt{5}-4)(3\sqrt{5}+4)=(3\sqrt{5})^2-4^2=29$
$(3\sqrt{5}-4)(3\sqrt{5}+4)\ \ne\ (3\sqrt{5}-4)^2$
What a stupid mistake!!! I know this rule. My head isn't working today. Thanks.
12. I won't make a new topic; I'll just continue this one, because my work is with roots.
I'm stuck with this one.
Just calculate the expression:
$2*\left(\frac{2}{\sqrt{6}+3} +\frac{3}{\sqrt{6}-2}-\frac{5}{\sqrt{6}}\right)$
13. $2*\left(\frac{2}{\sqrt{6}+3} +\frac{3}{\sqrt{6}-2}-\frac{5}{\sqrt{6}}\right)$
$= 2*\left( \left[\frac{2}{\sqrt{6}+3} \times \frac{\sqrt{6}-3}{\sqrt{6}-3}\right]+ \left[\frac{3}{\sqrt{6}-2} \times \frac{\sqrt{6}+2}{\sqrt{6}+2}\right]-\left[\frac{5}{\sqrt{6}} \times \frac{\sqrt{6}}{\sqrt{6}}\right]\right)$
Now Simplify.....
14. I won't write out the whole solution... It's clear now. The answer I get is 10. I think it's correct.
15. Yes!!
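For the record, the simplification in post 13 works out as follows (my own write-up of the remaining steps):
$\frac{2}{\sqrt{6}+3}=\frac{2(\sqrt{6}-3)}{6-9}=\frac{6-2\sqrt{6}}{3}$, $\frac{3}{\sqrt{6}-2}=\frac{3(\sqrt{6}+2)}{6-4}=\frac{3\sqrt{6}+6}{2}$, $\frac{5}{\sqrt{6}}=\frac{5\sqrt{6}}{6}$
$2\left(\frac{6-2\sqrt{6}}{3}+\frac{3\sqrt{6}+6}{2}-\frac{5\sqrt{6}}{6}\right)=2\left(5+\left(-\frac{2}{3}+\frac{3}{2}-\frac{5}{6}\right)\sqrt{6}\right)=2(5+0)=10$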
|
2017-04-28 13:50:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 17, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8283588886260986, "perplexity": 4378.820007037366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122992.88/warc/CC-MAIN-20170423031202-00146-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://zach.se/project-euler-solutions/32/
|
Project Euler Problem 32 Solution
Question
We shall say that an n-digit number is pandigital if it makes use of all the digits 1 to n exactly once; for example, the 5-digit number, 15234, is 1 through 5 pandigital.
The product 7254 is unusual, as the identity, $39 \times 186 = 7254$, containing multiplicand, multiplier, and product is 1 through 9 pandigital.
Find the sum of all products whose multiplicand/multiplier/product identity can be written as a 1 through 9 pandigital.
HINT: Some products can be obtained in more than one way so be sure to only include it once in your sum.
Haskell
import Data.List (nub, sort)
candidates :: [(String, Integer)]
candidates = [(concatMap show [a, b, a*b], a*b) | a <- [1..2000], b <- [1..50]]
pandigital :: String -> Bool
pandigital = (== "123456789") . sort
main :: IO ()
main = print $ sum $ nub [p | (digits, p) <- candidates, pandigital digits]
$ ghc -O2 -o pandigital pandigital.hs
$ time ./pandigital
real 0m0.108s
user 0m0.108s
sys 0m0.000s
Python
#!/usr/bin/env python

def is_pandigital(*args, **kwargs):
    # Concatenate all arguments and sort their digits.
    num = sorted(''.join(str(arg) for arg in args))
    try:
        # Optionally require a specific total number of digits.
        if kwargs['length'] and len(num) != kwargs['length']:
            return False
    except KeyError:
        pass
    # The sorted digits must be exactly 1, 2, ..., n.
    for i in range(len(num)):
        if str(i + 1) != str(num[i]):
            return False
    return True

def main():
    # Collect each pandigital product once, even if it arises from
    # several multiplicand/multiplier pairs.
    pandigitals = set()
    for multiplicand in range(1, 5000):
        for multiplier in range(1, 100):
            product = multiplicand * multiplier
            if is_pandigital(multiplicand, multiplier, product, length=9):
                pandigitals.add(product)
    print(sum(pandigitals))

main()
$ time python3 pandigital-products.py
real 0m1.465s
user 0m1.464s
sys 0m0.000s
Ruby
#!/usr/bin/env ruby
puts (1..4999).flat_map { |a|
  (1..99).map do |b|
    [a.to_s + b.to_s + (a*b).to_s, a*b]
  end
}.select { |p|
  p[0].length == 9 && p[0].each_char.sort.join == "123456789"
}.map { |p| p[1] }.uniq.reduce(:+)
$ time ruby pandigital.rb
sys 0m0.016s
|
2018-12-12 02:32:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25050655007362366, "perplexity": 9589.335543280436}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823712.21/warc/CC-MAIN-20181212022517-20181212044017-00376.warc.gz"}
|
https://socratic.org/questions/how-do-you-graph-3x-4y-ge-12
|
# How do you graph 3x-4y \ge 12?
• First, add $4y$ to both sides to get $3x \geq 12 + 4y$
• Next, subtract 12 from both sides to get $3x - 12 \geq 4y$
• Lastly, divide both sides by 4 to get $\frac{3}{4}x - 3 \geq y$
Equivalently, $y \leq \frac{3}{4}x - 3$. We know that when equality holds, $y = \frac{3}{4}x - 3$ represents a line, so the inequality represents the region on and below that line (since $y$ must be less than or equal to the value on the line). graph{y <= 3/4x -3 [-18.13, 21.87, -12.16, 7.84]} This is the graph: the line $y = \frac{3}{4}x - 3$ is drawn in darker blue, while the lighter-blue shaded area is where $y < \frac{3}{4}x - 3$ holds.
|
2023-03-23 18:26:20
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 9, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.885079562664032, "perplexity": 580.1309110874479}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945182.12/warc/CC-MAIN-20230323163125-20230323193125-00465.warc.gz"}
|
https://fallout.fandom.com/wiki/Melee_Weapons_(Fallout)
|
"Using non-ranged weapons in hand-to-hand or melee combat - knives, sledgehammers, spears, clubs and so on." (In-game description)
Melee Weapons is a skill in Fallout, Fallout 2 and Fallout Tactics. This skill determines combat effectiveness with any melee weapon.
In Fallout 2, the Chosen One starts out with only a spear, making Melee Weapons an important skill in the early stages of the game. Most melee weapons can be either swung or thrust. Thrust attacks are limited to certain melee weapons, and they typically deal more damage than swung attacks, occasionally at the expense of an additional action point depending on the weapon. Some melee weapons can also be thrown.
|
2022-06-28 18:25:34
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8408424854278564, "perplexity": 13668.07837975854}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103573995.30/warc/CC-MAIN-20220628173131-20220628203131-00270.warc.gz"}
|
https://socratic.org/questions/for-h-x-x-4-3-how-do-you-find-f-4-f-3-and-f-6
|
# For h(x)=x^4 -3; how do you find f(4),f(-3),and f(6)?
##### 1 Answer
Jul 27, 2015
You simply evaluate the function for $x = 4$, $x = - 3$, and $x = 6$.
#### Explanation:
Your function looks like this
$h \left(x\right) = {x}^{4} - 3$
In order to find the value of $h$ at the points $4$, $-3$, and $6$, evaluate the function at these three values of $x$.
Simply put, replace $x$ with these numbers and calculate $h$.
$h \left(4\right) = {\textcolor{blue}{4}}^{4} - 3 = 256 - 3 = \textcolor{green}{253}$
$h \left(- 3\right) = {\textcolor{blue}{\left(- 3\right)}}^{4} - 3 = 81 - 3 = \textcolor{green}{78}$
and finally
$h \left(6\right) = {\textcolor{blue}{6}}^{4} - 3 = 1296 - 3 = \textcolor{green}{1293}$
|
2021-11-30 14:52:55
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 14, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9344016909599304, "perplexity": 1021.8561196524549}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359037.96/warc/CC-MAIN-20211130141247-20211130171247-00638.warc.gz"}
|
http://www.math.psu.edu/calendars/meeting.php?id=14027
|
# Meeting Details
Title: Singularities: characteristic zero versus positive characteristic
Series: Department of Mathematics Colloquium
Speaker: M. Mustata, University of Michigan
Abstract: A very basic invariant of the singularities of a polynomial at a point is its multiplicity. I will discuss two "fancy" versions of this invariant, and connections between them. The first one lives (mostly) in characteristic zero and can be described as an integrability exponent. The second one lives in positive characteristic, and owes its existence to the Frobenius homomorphism.
|
2015-05-26 09:42:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3557853698730469, "perplexity": 642.9457485700402}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928817.87/warc/CC-MAIN-20150521113208-00196-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://scicomp.stackexchange.com/questions/20615/jacobi-method-converging-then-diverging
|
# Jacobi method converging then diverging
I am working to solve Poisson's equation in 2D axisymmetric cylindrical coordinates using the Jacobi method. The $L^2$ norm decreases from $\sim 10^3$ on the first iteration (I have a really bad guess) to $\sim 0.2$ very slowly. Then, the $L^2$ norm begins to increase over many iterations.
My final matrix is weakly diagonally dominant, except for the second-order Neumann condition at $r = 0$.
Can I make a small tweak to make this work? Is the problem numerical, or do I need a new algorithm?
My geometry is parallel plates with sharp points at $r = 0$ on both plates.
My boundary conditions are $$\left. \frac{\partial V}{\partial r} \right|_{r=0} = 0$$
Although I would like my second radial BC to be $$\left. \frac{\partial V}{\partial r} \right|_{r=\infty} = 0$$ I settled for $$\left. \frac{\partial V}{\partial r} \right|_{r=a} = 0$$
Then Dirichlet conditions at the upper and lower boundaries $$V(r, L(r) ) = V_0$$ $$V(r, U(r) ) = V_L$$
where $$L(r) = \begin{cases} & 0 \text{ if } r \geq R_L \\ & H_L (1 - \frac{r}{R_L} ) \text{ if } r \leq R_L \end{cases}$$
and
$$U(r) = \begin{cases} & H \text{ if } r \geq R_U \\ & H + H_U (\frac{r}{R_U} - 1 ) \text{ if } r \leq R_U \end{cases}$$
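For reference, a standard Jacobi sweep for the axisymmetric five-point discretization would look roughly like this (a sketch in my own notation, not taken from the post: uniform spacings $\Delta r$ and $\Delta z$, values $V_{i,j} \approx V(r_i, z_j)$, source term $f_{i,j}$):
$$V_{i,j}^{\text{new}} = \left(\frac{V_{i+1,j}+V_{i-1,j}}{\Delta r^2} + \frac{V_{i+1,j}-V_{i-1,j}}{2 r_i \Delta r} + \frac{V_{i,j+1}+V_{i,j-1}}{\Delta z^2} - f_{i,j}\right)\Bigg/\left(\frac{2}{\Delta r^2}+\frac{2}{\Delta z^2}\right)$$
At the axis, the $1/r$ term is handled by the symmetry condition $\partial V/\partial r|_{r=0}=0$ (ghost value $V_{-1,j}=V_{1,j}$, with $\frac{1}{r}\frac{\partial V}{\partial r} \to \frac{\partial^2 V}{\partial r^2}$ in the limit $r \to 0$).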
• What's your damping factor? Are you sure it's small enough? – Wolfgang Bangerth Sep 1 '15 at 15:22
• Outside of the location $r = 0$, what can you tell us about your spatial discretization? Why are there two "plates"? – EngrStudent Sep 10 '15 at 15:40
• It's a uniform grid spatially (so I'm stair stepping the features). There are two plates because that's the system. – user1543042 Sep 10 '15 at 23:44
|
2019-10-15 13:24:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6277338266372681, "perplexity": 909.2661688755967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986659097.10/warc/CC-MAIN-20191015131723-20191015155223-00405.warc.gz"}
|
https://en.wikipedia.org/wiki/Carmichael's_theorem
|
# Carmichael's theorem
In number theory, Carmichael's theorem, named after the American mathematician R. D. Carmichael, states that for any nondegenerate Lucas sequence of the first kind Un(P,Q) with relatively prime parameters P, Q and positive discriminant, every element Un with n ≠ 1, 2, 6 has at least one prime divisor that does not divide any earlier element, with the sole exceptions of the 12th Fibonacci number F(12)=U12(1, -1)=144 and its equivalent U12(-1, -1)=-144.
In particular, for n greater than 12, the nth Fibonacci number F(n) has at least one prime divisor that does not divide any earlier Fibonacci number.
Carmichael (1913, Theorem 21) proved this theorem. Recently, Yabuta (2001) gave a simple proof.
## Statement
Given two coprime integers P and Q, such that ${\displaystyle D=P^{2}-4Q>0}$ and PQ ≠ 0, let Un(P,Q) be the Lucas sequence of the first kind defined by
{\displaystyle {\begin{aligned}U_{0}(P,Q)&=0,\\U_{1}(P,Q)&=1,\\U_{n}(P,Q)&=P\cdot U_{n-1}(P,Q)-Q\cdot U_{n-2}(P,Q)\qquad {\mbox{ for }}n>1.\end{aligned}}}
Then, for n ≠ 1, 2, 6, Un(P,Q) has at least one prime divisor that does not divide any Um(P,Q) with m < n, except U12(1, -1)=F(12)=144, U12(-1, -1)=-F(12)=-144. Such a prime p is called a characteristic factor or a primitive prime divisor of Un(P,Q). Indeed, Carmichael showed a slightly stronger theorem: For n ≠ 1, 2, 6, Un(P,Q) has at least one primitive prime divisor not dividing D[1] except U3(1, -2)=U3(-1, -2)=3, U5(1, -1)=U5(-1, -1)=F(5)=5, U12(1, -1)=F(12)=144, U12(-1, -1)=-F(12)=-144.
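As a small illustration (an example of mine, not from the article), take the Fibonacci case P = 1, Q = -1:
$$U_7(1,-1) = F(7) = 13, \qquad 13 \nmid U_k(1,-1) \ \text{for } k < 7 \ (0, 1, 1, 2, 3, 5, 8),$$
so 13 is a primitive prime divisor of U7.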
## Fibonacci and Pell cases
The only exceptions in the Fibonacci case for n up to 12 are:
F(1)=1 and F(2)=1, which have no prime divisors
F(6)=8 whose only prime divisor is 2 (which is F(3))
F(12)=144 whose only prime divisors are 2 (which is F(3)) and 3 (which is F(4))
The smallest primitive prime divisors of F(n) are
1, 1, 2, 3, 5, 1, 13, 7, 17, 11, 89, 1, 233, 29, 61, 47, 1597, 19, 37, 41, 421, 199, 28657, 23, 3001, 521, 53, 281, 514229, 31, 557, 2207, 19801, 3571, 141961, 107, 73, 9349, 135721, 2161, 2789, 211, 433494437, 43, 109441, ... (sequence A001578 in the OEIS)
Carmichael's theorem says that every Fibonacci number, apart from the exceptions listed above, has at least one primitive prime divisor.
If n > 1, then the nth Pell number has at least one prime divisor that does not divide any earlier Pell number. The smallest primitive prime divisors of the nth Pell numbers are
1, 2, 5, 3, 29, 7, 13, 17, 197, 41, 5741, 11, 33461, 239, 269, 577, 137, 199, 37, 19, 45697, 23, 229, 1153, 1549, 79, 53, 113, 44560482149, 31, 61, 665857, 52734529, 103, 1800193921, 73, 593, 9369319, 389, 241, ... (sequence A246556 in the OEIS)
|
2017-04-30 21:28:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 2, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7150883674621582, "perplexity": 578.2669743005563}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125849.25/warc/CC-MAIN-20170423031205-00369-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://diabetesjournals.org/care/article/24/6/1079/22488/A-Systematic-Approach-to-Risk-Stratification-and
|
OBJECTIVE—To determine whether a comprehensive diabetes management program that included risk stratification and social marketing would improve clinical outcomes and patient satisfaction within a managed care organization (MCO).
RESEARCH DESIGN AND METHODS—The 12-month prospective trial was conducted at primary care clinics within a MCO and involved 370 adults with diabetes. Measurements included 1) the frequency of dilated eye and foot examinations, microalbuminuria assessment, blood pressure measurement, lipid profile, and HbA1c measurement; 2) changes in blood pressure, lipid levels, and HbA1c levels; and 3) changes in patient satisfaction.
RESULTS—Complete data are reported for the 193 patients who had been enrolled for 12 months; life table analysis is reported for all patients who remained enrolled at the study’s end as well as for a comparative control group of 623 patients. For the 193 patients for whom 12-month data were available, the number of patients in the low-risk category (HbA1c <7%) increased by 51.1%. A total of 97.4% of patients with an HbA1c >8% at baseline had a change in treatment regimen. Patients at the highest risk for coronary heart disease (LDL >130 mg/dl) decreased from 25.4% at baseline to 20.2%. Patients with a blood pressure <130/85 mmHg increased from 23.8 to 44.6%. Of these patients, 63.0% had changes in medication. Patients and providers expressed significant increases in satisfaction with the program.
CONCLUSIONS—The program was successful in initiating the recommended changes in the diabetic therapeutic regimen, resulting in improved glycemic control, increased monitoring/management of diabetic complications, and greater patient and provider satisfaction. These results should have great significance in the design of future programs in MCOs aimed at improving the care of people with diabetes and other chronic diseases.
Studies have shown that control of glycemia, hypertension, and hyperlipidemia significantly reduces the risk of microvascular and cardiovascular complications in patients with diabetes (1, 2, 3, 4). Nevertheless, diabetes remains poorly controlled in the U.S., with <2% of adult diabetic patients receiving optimal quality of care as defined by the Clinical Practice Recommendations of the American Diabetes Association (ADA) (5,6).
This study incorporated the findings of several previous studies suggesting that to improve patient outcomes, it is necessary for the patient to interact with a practice team prepared with the appropriate information, skills, and resources (7, 8, 9, 10, 11). The comprehensive approach used was based on processes of social marketing to influence physician behaviors (12, 13, 14, 15, 16, 17, 18, 19) (Table 1). The study was predicated on the utilization of previously agreed-upon protocols that could be acted on as a result of the patient interview and the data obtained.
The purpose of this study was to determine whether improvements could be made in clinical outcomes, patient/provider compliance, and patient/provider satisfaction within a managed care organization (MCO) through an assessment and intervention initiative. The program addressed the needs of diabetes care within the primary care setting, bringing together a team with the patient as a central and empowered participant. An enhanced data management system was devised to facilitate communication and practice-initiated follow-up. Outputs allowed for both risk stratification to identify patients in the greatest need of medical intervention and reports that prompted providers with suggested interventions and care plans. They also allowed for the systematic allocation of personnel resources (i.e., primary care physicians, extenders, diabetes educators, and administrative staff) to optimize the function and value of each provider. The objectives of the study were to: 1) improve compliance with Health Plan Employer Data and Information Set (HEDIS) 1999 Diabetes Quality Improvement Project (DQIP) measures (20) (i.e., following defined guidelines for frequency of dilated eye examinations, foot examinations, urinary microalbumin [MAU] assessment, blood pressure measurement, lipid profile, and HbA1c measurement); 2) improve patient outcomes as measured by HbA1c levels, blood pressure, and lipid levels; 3) improve patient satisfaction with the services provided; 4) improve patient understanding and compliance with therapeutic regimens; and 5) improve provider satisfaction.
This study was conducted at an MCO based in Las Vegas, Nevada. This report describes study objectives, protocol, and 12-month results for 193 patients who participated in the program.
### Clinical setting
The MCO studied has >180,000 health maintenance organization (HMO) members. Most (∼70%) of the HMO members obtain primary care services at the MCO-owned clinics, with the balance of members receiving care at network clinics. The MCO has >8,500 members with diabetes. A majority of these members are Medicare Risk enrollees.
The two clinics studied were staff-model primary care clinics. Each provider had his/her own panel of patients. Clinic 1 had nine providers. Clinic 2 had seven providers. Data from 623 members at a third clinic were analyzed to identify secular trends over the 12-month study period.
### Participants
#### Patient selection.
Using a computerized database from the MCO, 1,121 subjects were identified. Letters were sent to all 1,121 subjects, inviting them to participate in the study. We then randomly selected 655 patients for telephone follow-up; we were able to contact 555 (85%) patients. Of the 555 patients contacted, 431 met the study criteria. The study excluded members who were <21 or >75 years of age; had end-stage renal disease; were on dialysis; had cancer, blindness, drug or alcohol addiction, or stage III or IV congestive heart failure; were in another clinical trial; had gestational diabetes or were pregnant; or were institutionalized or unable to provide self-care.
Of the 431 eligible patients, 370 (86%) were included in the study. At the study’s end, 315 patients remained enrolled. This represents 85% of the patients who were initially enrolled in the study. We have 12-month data for 193 subjects, which are herein reported. An overview of the selection and enrollment process is presented in Fig. 1.
Although no data are available regarding the reasons for patient withdrawal (14.9%), it should be noted that the annual patient turnover rate within the MCO studied averaged ∼26%.
#### Patient demographics.
The average age of the subjects enrolled was 64.0 years; the average duration of diabetes was 10.7 years. Of the patients enrolled, 76% were Caucasian, 14% were African-American, 7% were Hispanic, and 2% were Asian. Median annual income was $10,000–20,000; 70% had an annual income of <$40,000. There were no significant differences between the control and study groups with regard to age, duration of diabetes, race, or income.
#### Treatment modalities.
At baseline, subjects controlled their diabetes with an insulin–oral therapy combination (11.4%), insulin (14.0%), oral medications (59.6%), or diet and exercise (15.0%). The control group used similar treatment modalities.
#### Comorbidities.
At baseline, 1.0% of subjects self-reported having kidney disease; the subjects also reported having diabetic foot disease (12.8%), diabetic eye disease (15.1%), heart disease (31.6%), and diabetic neuropathy (35.4%). High blood pressure and high cholesterol occurred in 67.2 and 57.8% of the subjects, respectively. Comorbidities were similar in the control group.
### Protocol
The program was implemented in the following phases: 1) enrollment; 2) initial encounter; 3) risk stratification and action planning; 4) intervention; 5) patient education; 6) interim visits; and 7) follow-up visits. Program personnel included a team care coordinator, who was responsible for administrative tasks, maintaining contact with patients, data management, and scheduling. Program personnel also included a team care leader, who was a registered nurse who implemented the orders and assumed responsibilities for care as directed by the patients’ primary care providers. In addition to these individuals, each clinic had available diabetes educators, nutritionists, advanced practice nurses, and physician assistants; these people, with the physicians, comprised the health care team.
#### Enrollment.
Potential subjects were identified by diagnosis (ICD-9 250.xx) through patient records, and their data were downloaded into the software. Patients were eligible if they were continuously enrolled in the health plan for at least 2 years and had at least two clinical encounters coded specifically for diabetes procedure/diagnostic codes. Patients received letters of invitation from their provider. Patients were asked to complete a questionnaire enclosed with the letter, obtain necessary lab tests, and call the team care coordinator to schedule an initial visit. The questionnaire (administered pre- and postintervention) solicited demographic information, self-reported comorbidities, current healthcare practices and medical therapies, self-assessment of current status of diabetes control, and overall satisfaction with the healthcare plan, healthcare staff, and level of knowledge about diabetes care. Patients were instructed to bring the completed questionnaire to the first clinic visit, along with their current blood glucose meter and medications. An overview of the enrollment process is presented in Fig. 1.
#### Initial encounter.
At the first visit, the team care coordinator measured blood pressure, height, and weight; conducted a foot examination (pedal pulses, deformities, 10 gm monofilament test); and measured microalbuminuria using the Micral test (Roche Diagnostics, Indianapolis, IN). Patients were also instructed in the self-monitoring of blood glucose (SMBG) and were provided with a blood glucose Accu-Chek Advantage blood glucose meter (Roche Diagnostics) and supplies.
#### Risk stratification and action planning.
Laboratory tests and data from completed patient questionnaires were entered into the software. Risk profiles were generated using stratification algorithms and clinical intervention guidelines based on the ADA Clinical Practice Recommendations. Patients were stratified into high-, moderate-, or low-risk groups in seven categories: 1) glycemic control, 2) cardiovascular disease, 3) nephropathy, 4) retinopathy, 5) hyper/hypoglycemia, 6) amputation, and 7) psychosocial disorders.
#### Interventions.
Interventions were based on previously agreed-upon standing orders (protocols) after approval from the primary care physician. The team care coordinator printed risk profile reports (physician and patient versions) and entered data elements into patient trending flowcharts. The team care coordinator and team care leader met to review the reports and develop action plans. The team care leader then met with primary care providers to review risk stratification reports and approve action plans. The appropriate follow-up action was determined by the healthcare team based on the degree of medical intervention deemed necessary. For example, if follow-up was needed, the team care coordinator scheduled the patient for nurse consultation and/or a provider visit. If no immediate follow-up was needed, a telephone call to the patient may have been sufficient to update the patient on his/her status or to make a minor medication change, followed by a mailing of the patient’s report. The patient reports contained information about the patient’s level of risk in each category and provided self-care recommendations for improving his/her diabetes (Fig. 2).
#### Patient education.
All patients attended three educational programs (2 h each) and received educational materials. Educational programs focused on adult learning principles and actively engaged the patients in their care. As a result, when patients presented for physician office visits, they were knowledgeable about their risk status and what actions would be necessary. Patients also were invited to attend optional support groups.
#### Interim visits.
Interim visits were scheduled at 3 and 6 months after the initial encounter. The team care coordinator verified that patients were performing SMBG at least twice per day and recording test-strip usage. If patients were not performing SMBG at the minimal frequency, the team care coordinator worked with patients to identify barriers and propose solutions. Other potential self-care barriers related to meal-planning adherence, exercise, smoking cessation, medication administration, and so forth were identified, and solutions were explored with the patient. At these visits, patients completed an interim survey regarding their health care utilization.
#### Follow-up visits.
The team care coordinator reviewed patient records monthly (via an internet application) to monitor patient compliance; this was facilitated by an automated reminder function provided by the software. The team care coordinator made reminder telephone calls to patients, answered questions, and referred patients to the team care leader when appropriate.
### Statistical and analytical methods
#### Methods/technologies used to assess complications.
The patients’ blood work was performed by a local certified laboratory contracted by the MCO. HbA1c determinations were performed using a high-performance liquid chromatography method. Foot examinations included the use of the Semmes-Weinstein 5.07 monofilament (21) to test cutaneous sensitivity. Eye examinations were obtained by referral to optometrists supervised by ophthalmologists at several MCO practice locations. Lipid panels included a calculated LDL cholesterol. Random urine dipsticks using Micral were used to obtain a MAU result. MAU results >100 mg/l were sent to a local laboratory for quantitative evaluation.
#### Design/validity of questionnaires used to assess patient satisfaction and statistical analysis of all data.
Metabolic, clinical lab, and patient survey response data were extracted from the database based on a priori hypotheses established for the statistical analyses. For DQIP (20) analysis, measures (e.g., lab HbA1c or metabolic blood pressure group values) were categorized by DQIP criteria; changes over time were tested for statistical significance using McNemar’s test (22).
The diabetes-specific patient satisfaction survey tool, which incorporates a five-point Likert-response scale, was developed by a research group at the Office of Health Policy and Clinical Outcomes, Thomas Jefferson University (Philadelphia, PA). Changes over time for the satisfaction multiple-response items were tested using Agresti's test of marginal homogeneity for ordinal data (23). This statistical significance test is based on a generalization of the McNemar test; it applies to more than two response categories and takes advantage of the ordered responses of the questions. All analyses were performed using SAS software, version 6.12 for Macintosh (SAS Institute, Cary, NC).
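For reference (the standard form of the test, not spelled out in the paper): for paired binary outcomes with discordant-pair counts $b$ and $c$, McNemar's statistic is
$$\chi^2 = \frac{(b-c)^2}{b+c},$$
which is compared against a $\chi^2$ distribution with one degree of freedom.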
At 12 months, data from 193 patients were available to assess metabolic outcomes, which were evaluated by changes in HbA1c, blood pressure, and lipids. Additionally, an assessment was made of provider adherence to care guidelines with regard to frequency of HbA1c measurements, blood pressure readings, lipid panel utilization, foot examinations, and dilated eye examinations (Table 2). Assessments were also made regarding the number of patients who received diabetes self-management education, nutrition counseling, and smoking cessation counseling. Patient satisfaction with the health care services provided and patient understanding and compliance with the therapeutic regimen were evaluated by questionnaire.
Data showed a significant improvement in glycemic control as measured by HbA1c (Fig. 3). During the 12-month period, the number of patients in the low-risk category (HbA1c <7%) increased by 51.1%, from 47 members at baseline to 71 after 12 months. The number of patients in the moderate category (7 to <8%) increased 2.5%. The number of patients in the high-risk category (≥8.0%) decreased by 58.3%, from 76 to 48 participants. Furthermore, of those patients with HbA1c levels ≥8% at baseline, 97.4% had a change in treatment regimen during the 12 months in the program.
In addition to analyzing the HbA1c data for the 193 patients for whom 12-month data were available, we also analyzed the time course of 356 patients from the experimental group (which included data from the 315 patients who were still enrolled and the 41 patients who had dropped out of the study) and 623 patients from the control group. These data are summarized in Fig. 4. The control group remained essentially unchanged, whereas significant decreases in HbA1c in the experimental group were seen at the first interval at which it was measured. The change remained constant throughout the remainder of the study.
A reduction in hypertension was also seen at 12 months. The percentage of patients with blood pressure readings <140/90 mmHg, an accountability measure for HEDIS accreditation, increased from 38.9% at baseline to 66.8% at 12 months (Fig. 5). The percentage of patients with blood pressure readings <130/85 mmHg (our pilot clinical decision threshold for hypertension) increased from 23.8 to 44.6%. Of those patients with blood pressure readings >130/85 mmHg at baseline, 63.0% had a change in medication within 12 months in the program.
The percentage of patients receiving lipid profile tests increased from 66% at baseline to 100% at 12 months, and microalbuminuria testing increased from 17 to 100%, respectively (Table 2). The percentage of patients at the highest risk for coronary heart disease (LDL >130 mg/dl) decreased from 25.4% at baseline to 20.2% at 12 months. Of those patients identified at the highest risk for nephropathy, 76.7% had a change in medication within the 12 months of program participation after the initial visit. In addition, at the end of 12 months, the percentage of patients who had received a dilated eye examination increased from 53.9 to 80.3%, and documented foot examinations increased from 0 to 100%.
Improvements were also observed in patient and provider satisfaction scores (Table 3 and Fig. 6). Patients expressed a significant increase in satisfaction with the program and staff performance. Of the 70% of providers that responded to the survey, 100% indicated that they were “very satisfied” with the program, 100% believed that their patients’ diabetes was better controlled as a result of the diabetes management program, and 93% believed the program saved them time on patient visits. In addition, 100% of the providers said they would recommend the use of this program to other physicians.
In our program, it was necessary to convert the ADA’s Clinical Practice Recommendations into concrete actions that could be carried out by the care team, supported by an interrelated set of practice enhancements. The most significant of these enhancements was to translate the protocols into a data system that stratified each patient into risk categories based on his/her data. Therapeutic recommendations were then generated in an easy-to-follow format that could be quickly reviewed and signed by the provider and then implemented with guidance from the team care leader.
The provider and the patient received an illustrated summary of the data and the resultant recommendations, allowing a productive discussion at the time of the patient visit on how to act upon the data. Based on data from the provider survey, this process resulted in more efficient visits and improved use of both the provider’s and the patient’s time. The patient survey data showed that in combination with the educational programs, it also resulted in improved patient satisfaction because the patient understood a priori the changes necessary and the rationale.
Whereas the program is simple to describe, it was complex to initiate for several reasons. First, the providers had to be involved from the start to assure that the standards and their recommended actions were consistent with the practitioners’ views. Perhaps even more important than creating provider buy-in to this program was creating awareness that there were significant gaps in performance. In several instances, this involved interactive discussion with the practitioners as well as nonpunitive provider-specific feedback on prior performance. This feedback was provided by the authors (C.M.C. and J.W.S.) reviewing the scientific basis for both the recommendation and the acceptance of the need for change in the approach to diabetes care.
These educational sessions and the presentation of data on prior performance were viewed as critical links in the process of prompting the recognition that a change in the way care is delivered could have positive consequences. However, it was also critical to devise a process that included the necessary support and resources to enable providers to achieve results without being overburdened. Although the clinic structure was left intact, patient flow was altered, and tasks were delegated to the team care coordinator and team care leader. The team care coordinator and leader assured that the data were available before the patient’s visit with the provider and that the appropriate patient education occurred before that visit. The patients were involved in education programs from the start of the program and received a printout (in condensed form) of their data and risk status (Fig. 2). Thus, they were prepared for the visit and the discussion about how they might improve their health-risk status. The providers saw this process as improving effectiveness and productivity, which greatly facilitated ongoing participation and support of the program within their clinics.
Another enhancement in the program was the initiation at the onset of a psychosocial evaluation using the Problem Areas In Diabetes (PAID) questionnaire developed and validated by Polonsky et al. (24). This was viewed as an important part of the initial assessment because a large percentage of adult patients with diabetes have significant psychological comorbidities, usually depression or anxiety disorders (25,26). Patients with abnormal PAID scores were identified to the practitioners for appropriate action. Educational courses by the clinic psychologist were conducted with all providers to assist them in managing such patients. Thus, before recommendations were made for changes in therapy, it was necessary to address those comorbidities.
Finally, it was necessary to develop a system that collated the data and presented it in a format that was immediately understandable by (and useful to) the patient and the provider. Thus, the data needed to be summarized in such a fashion as to be clear and accurate, yet of sufficient brevity that it could be the basis of discussion by the patient and provider. The system developed also had two additional important features. First, it generated a set of orders for the clinicians to review and initiate, making the recommended changes easy, yet permitting individualization as needed. Visual graphic and analytical reporting components were shared between the patient and provider report formats, facilitating the communication between the patient and the healthcare team. Second, it generated a reminder list for the team care coordinator, providing a fail-safe system to ensure that recommended actions were not overlooked or omitted. For example, if an eye exam was recommended, the system would repeatedly remind the team care coordinator until the exam took place.
The success of the program in initiating the recommended changes in the diabetes therapeutic regimen was the most striking; 95.8% of the patients outside of the recommended therapeutic range had a prescribed change in therapy. These data suggest that the program was successful in convincing both patients and providers that the goals were of value and that changes were necessary to reach those goals. As the number of therapeutic choices available to treat diabetes expands, the metabolic success of the program should increase. It is also important to note that based on provider surveys at the completion of the study, the program did not add work to the already “overburdened” physician. In fact, the program served to improve workflow efficiencies for both providers and staff. The noted improvements demonstrated that excellent diabetes care can be achieved through enhancements in the primary care setting and that “carve-outs,” or separate systems of care involving mass referrals of patients with diabetes to specialty clinics, are not necessary.
In addition, the success of the program in initiating new therapies and improving patient outcomes did not come at the expense of patient satisfaction. Quite the contrary, the most striking finding of the study was the improvement in patient satisfaction that accompanied the program.
Whereas the striking results of the study could have been attributable to some unique features of the patients selected, there are several reasons why we do not feel this is the case. Although only 193 of the patients had complete 12-month data at the completion of the study, >85% of the patients entered into the study remained in the study at its end. The data over time for the complete cohort revealed a significant fall in HbA1c by 3 months that persisted throughout the study, whereas the HbA1c levels of a comparative control group remained constant.
We believe the program was successful because it effectively capitalized on an array of interventions based on social marketing that have been shown to change physician behavior. Our overall strategy was to provide necessary information regarding diabetes management to care providers while removing obstacles that have traditionally inhibited the delivery of quality diabetes care. To this end, we developed interventions to address each of the areas previously demonstrated to influence physician behaviors, as summarized in Table 1. For example, with regard to audit and feedback, explicit standards were agreed upon by the practitioners, and feedback was provided. Furthermore, once these standards were inputted into the system, they were implemented automatically; there was no deviation or partial compliance with the established protocols. With regard to the “opinion leader” involvement, both primary care leadership and endocrine leadership were enthusiastic and supportive of the program. Additionally, an automatic system of risk stratification and reminders was put into place. Through these interventions and others previously described, we created an environment that was supportive of comprehensive diabetes management.
Regarding the economic benefits of the intervention, this study did not address the financial implications of improved diabetes control. However, two recent publications suggest that there are significant and immediate economic advantages to improving glycemic control and treatment of diabetes risk factors (27,28).
The intervention studied was comprehensive, and we are unable to tease out the relative role of the various interventions. Additionally, the study was conducted in a staff-model MCO and may not be applicable to other types of delivery systems. Despite these limitations, we feel that the results provide a sound basis for the design of future programs within MCOs directed at improving the care of patients with diabetes and other chronic diseases. To determine whether these protocols can be adapted to other care settings, we have developed a CD-ROM–based continuing medical education program that targets primary care physicians; we are now in the process of evaluating this program (29).
Figure 1 — Enrollment process.
Figure 2 — Example of a risk stratification report generated from the application server.
Figure 3 — Change in glycemic control risk (% HbA1c) for the treatment cohort. □, Low (HbA1c <7.0%); moderate (HbA1c 7–8%); ▪, high (HbA1c >8.0%).
Figure 4 — Time trend in average HbA1c value for Diabetes Advantage Program treatment and control groups. Average HbA1c is shown at 3-month intervals for the 12-month periods before and after enrollment. Note that the treatment group includes patients who dropped out of the program. •, Control (n = 623); □, treatment (n = 356).
Figure 5 — Reduction in hypertension risk for treatment cohort.
Figure 6 — Staff provider responses to satisfaction survey. ▪, Yes; □, no.
Table 1 — Changing practitioner behavior: what works?
Intervention: Description/Findings
Audit and feedback: Particularly effective for prescribing and diagnostic testing
Reminders: Prompts the provider to perform clinical action
Outreach visits: Meetings with providers in practice settings to provide information and feedback
Patient-mediated interventions: Educating and informing patients; particularly useful when combined with outreach visits
Opinion leaders: Providers explicitly nominated by their colleagues to be "educationally influential"
Conferences: Need to be explicit and related to the practice environment
Marketing: Use of interviews, focus groups, or surveys to identify barriers
Multifaceted: The use of a variety of interventions is most effective
Summary: there must be agreement that there is a problem and that the solution agreed-upon is the solution to the problem, combined with a system of information and feedback necessary to resolve the problem.
Table 2 — Summary of changes in clinical outcomes and provider adherence at 12 months
Clinical outcome measure (data are %): baseline → 12 months
High-risk glycohemoglobin, HbA1c >9.5%: 10.9 → 3.1
Members with drop in HbA1c ≥0.5%: — → 45.6
Blood pressure <140/90 mmHg: 38.9 → 66.8
Lipid profile testing within the last 2 years: 66 → 100
Microalbuminuria within the last year: 17 → 100
Retinal eye exam within the last year: 53.9 → 80.3
Foot exam with monofilament test within the last year: N/A → 100
Table 3 — Results from patient satisfaction survey
Category/question (data are %): baseline → 12 months
Knowledge and information
"In the past 3 months, how satisfied have you been with your knowledge of your diabetes?": 49.2 → 81.3
Program staff
"How satisfied are you with the way the staff in the diabetes program treated you?": 63.8 → 97.4
"How satisfied are you with the number of times that the diabetes program staff talked with you?": 51.3 → 96.9
Program recommendation
"Overall, how satisfied are you with your health plan's diabetes program?": 57.5 → 94.3
"How likely are you to recommend your health plan's diabetes program to someone else who has your kind of diabetes?": 56.4 → 94.8
Values are the percentage of patients responding "very" and "slightly" satisfied, "helpful," or "likely" to the questions on the survey.
Funding for the program was made possible by an educational grant from Roche Diagnostics Corporation.
The authors thank the following people and institutions who participated in the development and implementation of this program: Southwest Medical Associates (B. Mitchell, R. Parr, C. Belle, P. Sparks, C. DelRosario, J. Martin, and R. Appelt), Sierra Health Services (Y. Riggan and G. Teeter), Indiana University (C. Clark), Stanford University (G. Singh), AvailTek (R. Bogue), HCIA (G. Lenhart and T. Reinsch), the Clinical Education Group (K. Swenson), and Roche Diagnostics Corporation (D. Burgh, S. Earl, L. Halcomb, L. Henderson, R. Peyton, J.M. Quach, M. Schafer, K. Schmelig, L. Stutz, and R. Wishnowsky). We also thank Elizabeth Warren-Boulton (Roche Diagnostics), upon whose literature review Table 1 is based. Part of the data included in this article was presented previously in abstract form at the Building Bridges VI Conference (Atlanta, GA, 6–7 April 2000).
1. Diabetes Control and Complications Trial Research Group: The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. N Engl J Med 329:977–986, 1993
2. Ohkubo Y, Kishikawa H, Araki E, Miyata T, Isami S, Motoyoshi S, Kojima Y, Furuyoshi N, Shichiri M: Intensive insulin therapy prevents the progression of diabetic microvascular complications in Japanese patients with non-insulin-dependent diabetes mellitus: a randomized prospective 6-year study. Diabetes Res Clin Pract 28:103–117, 1995
3. UK Prospective Diabetes Study Group: Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). Lancet 352:837–853, 1998
4. UK Prospective Diabetes Study Group: Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes. BMJ 317:703–713, 1998
5. Beckles GLA, Engelgau MM, Narayan KMV, Herman WH, Aubert RE, Williamson DF: Population-based assessment of the level of care among adults with diabetes in the U.S. Diabetes Care 21:1432–1438, 1998
6. American Diabetes Association: Clinical Practice Recommendations 1999. Diabetes Care 22 (Suppl. 1), 1999
7. Aubert RE, Herman WH, Waters J, Moore W, Sutton D, Peterson BL, Bailey CM, Koplan JP: Nurse case management to improve glycemic control in diabetic patients in a health maintenance organization: a randomized, controlled trial. Ann Intern Med 129:605–612, 1998
8. Friedman NM, Gleeson JM, Kent MJ, Foris M, Rodriguez DJ, Cypress M: Management of diabetes mellitus in the Lovelace Health Systems' Episodes of Care Program. Eff Clin Prac 1:5–11, 1998
9. Gruesser M, Bott U, Ellermann P, Kronsbein P, Joergens V: Evaluation of a structured treatment and teaching program for non-insulin-treated type II diabetic outpatients in Germany after nationwide introduction of reimbursement policy for physicians. Diabetes Care 16:1268–1274, 1993
10. McCulloch DK, Price MJ, Hindmarsh M, Wagner EH: A population-based approach to diabetes management in a primary care setting: early results and lessons learned. Eff Clin Prac 1:12–22, 1998
11. Peters AL, Davidson MB, Ossorio RC: Management of patients with diabetes by nurses with support of subspecialists. HMO Practice 9:8–13, 1995
12. Oxman AD, Thomson M, Davis DA, Haynes RB: No magic bullets: a systematic review of 102 trials of interventions to improve professional practice. Can Med Assoc J 153:1423–1430, 1995
13. O'Brien T, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL: Educational outreach visits: effects on professional practice and health care outcomes (Review). In The Cochrane Library. Issue 4, Oxford: Update Software, 2000
14. O'Brien T, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL: Audit and feedback: effects on professional practice and health care outcomes (Review). In The Cochrane Library. Issue 4, Oxford: Update Software, 2000
15. Peterson KA, Vinicor F: Strategies to improve diabetes care delivery. J Fam Pract 47:S55–S62, 1998
16. O'Brien T, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL: Local opinion leaders: effects on professional practice and health care outcomes (Review). In The Cochrane Library. Issue 4, Oxford: Update Software, 2000
17. Schneiter EJ, Keller RB, Wennberg D: Physician partnering in Maine: an update from the Maine Medical Assessment Foundation. Jt Comm J Qual Improv 24:579–584, 1998
18. Loeppke R, Howell JW: Integrating clinical performance improvement across physician organizations: the PhyCor experience. Jt Comm J Qual Improv 25:55–67, 1999
19. Conway AC, Keller RB, Wennberg DE: Partnering with physicians to achieve quality improvement. Jt Comm J Qual Improv 21:619–626, 1995
20. Diabetes Quality Improvement Project: Initial Measure Set (Final Version), 1998. Available from http://www.dqip.org/measures.html
21. Rith-Najarian SJ, Stolusky T, Gohdes DM: Identifying diabetic patients at high risk for lower extremity amputation in a primary health care setting: a prospective evaluation of a simple screening criteria. Diabetes Care 15:1386–1389, 1992
22. Statistical Methods for Rates and Proportions. Fleiss JL, Wiley J, Eds. New York, John Wiley and Sons, 1981, p. 113–114
23. Categorical Data Analysis. Agresti A, Ed. New York, John Wiley and Sons, 1990, p. 361–363
24. Polonsky WH, Anderson BJ, Lohrer PA, Welch G, Jacobson AM, Schwartz C: Assessment of diabetes-specific distress. Diabetes Care 18:754–760, 1995
25. Gavard JA, Lustman PJ, Clouse RE: Prevalence of depression in adults with diabetes: an epidemiological evaluation. Diabetes Care 1:167–178, 1993
26. Black SA: Increased health burden associated with comorbid depression in older diabetic Mexican Americans: results from the Hispanic Established Population for the Epidemiologic Study of the Elderly survey. Diabetes Care 22:56–64, 1999
27. Wagner E, Sandhu N, Newton KM, McCulloch DK, Ramsey SD, Grothaus LC: Effect of improved glycemic control on health care costs and utilization. JAMA 285:182–189, 2001
28. Menzin J, Langley-Hawthorne C, Friedman M, Boulanger L, Cavanaugh R: Potential short-term economic benefits of improved glycemic control: a managed care perspective. Diabetes Care 24:51–55, 2001
29. Clark C, Parkin C: Effective Diabetes Management in a Primary Care Setting. Indianapolis, IN, Indiana University School of Medicine, Division of Continuing Medical Education, 2000
Address correspondence and reprint requests to Charles M. Clark, Jr. MD, Regenstrief Institute, 1050 Wishard Boulevard, Indianapolis, IN 46202. E-mail: [email protected].
Received for publication 29 June 2000 and accepted in revised form 23 February 2001.
J.W.S. served on the advisory panel of Sierra Health Services, which helped conduct the study and received grant funding. L.M.S. was employed by Roche Diagnostics during the implementation of this study.
A table elsewhere in this issue shows conventional and Système International (SI) units and conversion factors for many substances.
|
2022-06-28 15:12:49
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1789208948612213, "perplexity": 7331.239857459381}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103556871.29/warc/CC-MAIN-20220628142305-20220628172305-00320.warc.gz"}
|
https://www.preprints.org/manuscript/201909.0154/v1
|
Preprint Article Version 1 Preserved in Portico This version is not peer-reviewed
A QUBO Model for the Traveling Salesman Problem with Time Windows for Execution on the D-Wave
Version 1 : Received: 14 September 2019 / Approved: 15 September 2019 / Online: 15 September 2019 (16:14:15 CEST)
A peer-reviewed article of this Preprint also exists.
Papalitsas, C.; Andronikos, T.; Giannakis, K.; Theocharopoulou, G.; Fanarioti, S. A QUBO Model for the Traveling Salesman Problem with Time Windows. Algorithms 2019, 12, 224.
Journal reference: Algorithms 2019, 12, 224
DOI: 10.3390/a12110224
Abstract
This work focuses on expressing the TSP with Time Windows (TSPTW for short) as a quadratic unconstrained binary optimization (QUBO) problem. The time windows impose time constraints that a feasible solution must satisfy. These take the form of inequality constraints, which are known to be particularly difficult to articulate within the QUBO framework. This is, we believe, the first time this major obstacle is overcome and the TSPTW is cast in the QUBO formulation. We have every reason to anticipate that this development will lead to the actual execution of small scale TSPTW instances on the D-Wave platform.
Subject Areas
TSP; TSPTW; metaheuristics; quantum annealing; Ising model; QUBO; D-Wave
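As an illustrative aside (this sketch is not taken from the preprint; the function, variable names, and penalty handling are all invented for illustration), the standard way to fold an inequality constraint such as a time-window bound into a QUBO is to add binary slack variables so the inequality becomes an equality, and then penalize the squared violation of that equality:

```python
# Minimal sketch, under stated assumptions, of encoding  sum_i a_i*x_i <= b
# (x_i binary, a_i and b non-negative integers) as a QUBO penalty term:
# introduce slack bits s_j so that sum_i a_i*x_i + sum_j 2^j*s_j = b and
# penalize P * (sum_i a_i*x_i + sum_j 2^j*s_j - b)^2.

from itertools import combinations


def inequality_to_qubo(coeffs, b, penalty, offset=0):
    """Return (Q, const) for the penalty P*(a.x + slack - b)^2.

    coeffs : dict mapping variable name -> coefficient a_i
    b      : right-hand side (non-negative integer)
    penalty: positive penalty weight P
    offset : starting index used to name the slack variables
    """
    n_slack = max(b, 1).bit_length()      # enough bits to represent 0..b
    terms = dict(coeffs)
    for j in range(n_slack):
        terms[f"slack_{offset + j}"] = 2 ** j

    Q = {}
    const = penalty * b * b
    for v, a in terms.items():
        # x^2 = x for binary x, so the diagonal picks up a^2 - 2*b*a.
        Q[(v, v)] = Q.get((v, v), 0) + penalty * (a * a - 2 * b * a)
    for (u, au), (v, av) in combinations(terms.items(), 2):
        Q[(u, v)] = Q.get((u, v), 0) + 2 * penalty * au * av
    return Q, const


if __name__ == "__main__":
    # Hypothetical example: x1 + 2*x2 <= 2 with penalty weight 10.
    Q, const = inequality_to_qubo({"x1": 1, "x2": 2}, b=2, penalty=10)
    print(Q, const)
```

For a time-window bound of the form "arrive at stop i no later than b", coeffs would hold the binary variables and coefficients of the arrival-time expression, and the penalty weight has to be chosen large enough that violating the constraint can never be energetically favourable.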
|
2021-04-15 02:35:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8458995819091797, "perplexity": 3432.561221445124}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00349.warc.gz"}
|
https://www.mersenneforum.org/showthread.php?s=e1ca3cd95c6b6b2313b160d971eb7ee7&t=21916&page=3
|
mersenneforum.org Algebraic factors in sieve files
2017-01-15, 18:09 #23 pepi37 Dec 2011 After milion nines:) 24018 Posts Compromise: write a small program (portable to Linux and Win) that will remove all the other factors that can be removed. In that case, since we all believe in Batalov's script, there is no need for extensive testing, nor do we touch (and possibly break) srxsieve :)
2017-01-17, 02:21 #24
LaurV
Romulan Interpreter
Jun 2011
Thailand
218516 Posts
Quote:
Originally Posted by pepi37 Compromise: write a small program (portable to Linux and Win) that will remove all the other factors that can be removed. In that case, since we all believe in Batalov's script, there is no need for extensive testing, nor do we touch (and possibly break) srxsieve :)
+1
2017-01-17, 20:15 #25 rogue "Mark" Apr 2003 Between here and the 5·13·89 Posts srsieve 1.0.8 searches for most of the same algebraic factors that the script finds and is able to find them, but there are two that srsieve does not search for.
2017-01-17, 20:19 #26
pepi37
Dec 2011
After milion nines:)
3·7·61 Posts
Quote:
Originally Posted by rogue srsieve 1.0.8 searches for most of the same algebraic factors that the script finds and is able to find them, but there are two that srsieve does not search for.
So after all, we still must run the script in order to find those two remaining algebraic factors?
2017-01-18, 02:11 #27
rogue
"Mark"
Apr 2003
Between here and the
169916 Posts
Quote:
Originally Posted by pepi37 So after all, we still must run the script in order to find those two remaining algebraic factors?
Yes, but I will modify srsieve. As much as Gary or others do not want the code modified, they don't own the code. Adding these missing forms (or others) should be very easy to do.
2017-01-18, 10:05 #28
pepi37
Dec 2011
After milion nines:)
128110 Posts
Quote:
Originally Posted by rogue Yes, but I will modify srsieve. As much as Gary or others do not want the code modified, they don't own the code. Adding these missing forms (or others) should be very easy to do.
If you are willing to change the code, then do it completely, not partially. In that case running srsieve will do the job and we won't need to wonder whether or not we have to run scripts before or after sieving. Of course nobody can force you not to change the code, but srsieve works very well and has been tested over many years of use.
Writing a new, small program that will remove all the factors (from the script) would then be the better option.
2017-01-20, 10:53 #29
gd_barnes
May 2007
Kansas; USA
2×3×19×89 Posts
Quote:
Originally Posted by rogue Yes, but I will modify srsieve. As much as Gary or others do not want the code modified, they don't own the code. Adding these missing forms (or others) should be very easy to do.
This is ridiculous and I would have responded sooner had I not been out of town. Frankly this statement hacks me off and we don't have to accept it. We don't own the program but you don't own CRUS! We do not have to accept the results of any sieving submission done with any more modified versions of srsieve. I thought that both Masser and I and now more lately Pepi as well as some of the other discussion in this thread made that blatantly clear. Masser was also an admin of base 5 at one point and can completely relate to this kind of frustration.
Yes "it seems" like the change should be very easy to do just like modifying them to previously remove algebraic factors should have been very easy to do before. But it wasn't because no parallel testing was performed. It's very easy to accidently remove incorrect factors. We ended up with multiple bases with multiple problems where there were far too few of primes reported. I remember one was either R35 or S35 for n=25K-50K where 10s of primes were missed and it was a big mess. Bases like that had to have ranges rerun specifically as a result of that change...and who knows how many more are out there that we have not discovered. It is very stressful to always be looking over our shoulder wondering if the programs that we are running are correct without knowing if they have been properly tested.
If you modify the code how can you prove to us that you have not affected anything else like what has happened in both PFGW and srsieve before and has caused us to have to rerun multiple bases? Do you have the personal time available to run multiple parallel tests on multiple different bases both with and without algebraic factors -or- to coordinate the effort to have others do it behind the scenes before releasing the new program to the public?
We do not want the program to be 2-3 months old and hear that usual line of "no problems have been reported so the code must be right". Frequently problems are not found for months or years and then it is realized that there are multiple incorrect outputs. That is not parallel testing. Here is what needs to happen with parallel testing:
(1)
Run base xxx that contains no algebraic factors with the old version of srsieve.
Do the same with the new version of srsieve.
Compare the two outputs. They must match.
Do the above with 4-5 different bases on both the Riesel and Sierp sides. Bases should be small, big, squares, cubes, etc. Make them as random as possible to include as many breaking points as possible. Assume that the program is bad and try to break the program. Do NOT assume that it is good and avoid testing it because the change "was easy".
(2)
Run base xxx that contains type 1 of algebraic factors (that the old version is currently removing) with the old version of srsieve.
Do the same with the new version of srsieve.
Compare the two outputs. They must match.
Also do the above with several different bases on both the Riesel and Sierp sides.
(3)
Run base xxx that contains type 2 of algebraic factors (that the old version is NOT currently removing) with the old version of srsieve.
Do the same with the new version of srsieve.
Compare the two outputs. The differences should be ONLY that the new version is removing the type 2 algebraic factors.
We need to see test plans and results such as this before the new program is released to the public.
If you modify srsieve without this kind of substantial testing being publicly posted in a place where the CRUS admins can see it BEFORE the program is publicly released, we will refuse to accept any sieving at CRUS that uses the new program. We will also make sure that other projects are aware of the risks involved in using the new program without proper testing having been done ahead of time.
Yes I am preaching in a big way but the integrity of CRUS and other prime search efforts needs people to step up and insist on accuracy in testing before using public programs that have had changes made with too little testing having been done.
Continuing to modify a known excellent program to do that which it was not intended to do is IMHO a mistake. The program was designed to remove hard numeric factors not algebraic factors. It should do no more. New programs have already been created for that.
Last fiddled with by gd_barnes on 2017-01-20 at 11:34
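(Editorial aside for readers who want to try the parallel test described above: the sketch below is purely illustrative; the file names, and the assumption that both sieve versions write one candidate per line with identical headers, are mine, not anything posted in this thread.)

```python
# Illustrative comparison of two sieve output files, as in the parallel-testing
# procedure described above. Assumes one candidate per line; header handling
# (e.g. ABCD/NewPGen headers) may need adjusting for real files.

import sys


def load_lines(path):
    """Return the set of non-empty, stripped lines from a sieve output file."""
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}


def compare(old_path, new_path):
    old, new = load_lines(old_path), load_lines(new_path)
    only_old = sorted(old - new)   # candidates the new version dropped
    only_new = sorted(new - old)   # candidates only the new version kept
    for line in only_old:
        print("removed by new version:", line)
    for line in only_new:
        print("only in new version:", line)
    print(f"{len(only_old)} removed, {len(only_new)} added")
    return not only_old and not only_new


if __name__ == "__main__":
    identical = compare(sys.argv[1], sys.argv[2])
    sys.exit(0 if identical else 1)
```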
2017-01-21, 07:25 #30 LaurV Romulan Interpreter Jun 2011 Thailand 8,581 Posts Now slow down a little... I am myself an adherent of "if it works, do not fix it", and gave a +1 above to pepi's suggestion. But OTOH, I would like to have a new version of srsieve which is more efficient (and eventually removes the "all candidates are divisible by 2" bullshit, or lets us skip it intentionally with a command line switch, or better, offers command line switches for removing algebraic factors too). In that case, I am willing to be part of the testing team, to do comparison testes as Gary suggested. And trust me, I do that testing activity for a living, if it is something good to say about a product, I may "forget" to say it, but if it is something to criticize, I am your man, I will always say it Last fiddled with by LaurV on 2017-01-21 at 07:26 Reason: s/Bot/But/
2017-01-21, 18:32 #31
Batalov
"Serge"
Mar 2008
Phi(3,3^1118781+1)/3
5×1,811 Posts
Quote:
Originally Posted by LaurV ...to do comparison testes...
No adult male should skip on that activity!
Quote:
Originally Posted by LaurV And trust me, I do that testing activity for a living, if it is something good to say about a product, ...
2017-01-21, 19:19 #32 rogue "Mark" Apr 2003 Between here and the 5·13·89 Posts CRUS is not the only user of srsieve. If CRUS chooses to use another program or some older version of srsieve, it is their choice. I have posted an updated version of srsieve here. Here are the release notes:
Code:
Rewrote code to find algebraic factorizations so that more can be caught. It will search for:
GFNs -> where k*b^n+1 can be written as x^m+1
Trivial -> where k*b^n-1 can be written as x^m-1
which will remove all terms for the sequence. These algebraic factorizations are now written to algebraic.out so that they can be verified with pfgw:
where k*b^n+1 can be written as x^q*y^r+1 and r%q=0 and q is odd
where k*b^n-1 can be written as x^q*y^r-1 and r%q=0
where k*b^n+1 can be written as x*2^m+1 and m%4=2
where k*b^n+1 can be written as 4*x^z*y^m+1 and z%4=0 and m%4=0
Note the second section. One can now verify the algebraic factors found by srsieve. I have run a few tests and haven't found any bugs, but that doesn't mean that there aren't any; they will reveal themselves when running algebraic.out through pfgw. In this new release, it is smart enough to determine if k and b have the same root, i.e. k = m^x and b = m^y, so that it can more easily identify GFN and Trivial forms. For the first two forms, those algebraic factors are not written to algebraic.out. For GFNs, it doesn't mean that there is a factor, but rather that if you truly want to sieve GFNs, you should use gfnsieve, not srsieve. For the first two forms logged to algebraic.out, it factorizes k and b and can find factorizations where they share a common factor. The previous release could not do that. I suspect that the last two might have some generalizations, but I haven't investigated. If Serge or others discover additional algebraic forms, please share and I will incorporate them as best I can.
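(Editorial aside: to make the "same root" check concrete, here is a minimal Python sketch. It is not srsieve's code, and the function names are invented; it only illustrates testing whether k and b are both powers of a common integer m, i.e. k = m^x and b = m^y, the precondition mentioned above for spotting GFN and Trivial forms.)

```python
# Illustrative only -- not srsieve's implementation. Checks whether k and b
# share a common integer root m > 1, which is what allows k*b^n +/- 1 to be
# rewritten as X^j +/- 1 (GFN / trivial algebraic forms).

def integer_root(n, e):
    """Return r if r**e == n exactly, else None (n, e positive integers)."""
    r = round(n ** (1.0 / e))          # float guess; fine for moderate n
    for cand in (r - 1, r, r + 1):     # guard against rounding error
        if cand > 0 and cand ** e == n:
            return cand
    return None


def perfect_power(n):
    """Return (m, x) with m**x == n and x as large as possible."""
    best = (n, 1)
    e = 2
    while 2 ** e <= n:
        r = integer_root(n, e)
        if r is not None:
            best = (r, e)
        e += 1
    return best


def common_root(k, b):
    """Return (m, x, y) with k == m**x and b == m**y, or None."""
    mk, xk = perfect_power(k)
    mb, yb = perfect_power(b)
    if mk == mb:
        return mk, xk, yb
    return None


if __name__ == "__main__":
    print(common_root(8, 32))   # (2, 3, 5): 8 = 2^3, 32 = 2^5
    print(common_root(9, 27))   # (3, 2, 3)
    print(common_root(6, 10))   # None
```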
2017-01-22, 00:42 #33 rogue "Mark" Apr 2003 Between here and the 10110100110012 Posts Serge found a bug that causes it to crash for large ranges of n. I tested on smaller ranges of n, thus didn't trigger it. It has been fixed and the link has the current code and Windows build.
|
2020-07-04 13:22:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3678879141807556, "perplexity": 2107.5192437454334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655886121.45/warc/CC-MAIN-20200704104352-20200704134352-00318.warc.gz"}
|
https://mathhelpboards.com/threads/a-question-on-consistency-in-propositional-logic.2020/
|
# A question on consistency in propositional logic.
#### Mathelogician
##### Member
Hi everybody!
We have a theorem in natural deduction as follows:
Let H be a set of hypotheses:
====================================
H U {~phi} is inconsistent => H implies (phi).
====================================
Now the question arises:
Let H={p0} for an atom p0. So H U{~p0}={p0 , ~p0}.
We know that {p0 , ~p0} is inconsistent, so by our theorem we would have:
{p0} implies ~p0.
Which we know is impossible.(because for example it means that ~p0 is a semantical consequence of p0).
Now what's wrong here?
Thanks
#### Ackbach
##### Indicium Physicus
Staff member
Well, your theorem or schema is negating the phi, which you're not doing. In your example, you should end up with {p0} implies p0. No doubt Evgeny can correct any mistakes I just made.
#### Evgeny.Makarov
##### Well-known member
MHB Math Scholar
Well, your theorem or schema is negating the phi, which you're not doing. In your example, you should end up with {p0} implies p0.
You are right. If we apply the theorem to H U{~p0}, then phi from the theorem is p0. Therefore, the theorem concludes that {p0} implies p0.
Oooooooops!
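(Editorial note: restating the resolution above in standard notation; nothing new is claimed here, this is just the formal version of the point made by Ackbach and Evgeny.Makarov, writing ⊢ for derivability.)

```latex
% The theorem as used in the thread:
% if H together with the negation of phi is inconsistent, then H derives phi.
H \cup \{\neg\varphi\} \vdash \bot \;\Longrightarrow\; H \vdash \varphi
% Correct instantiation for H = \{p_0\}: the inconsistent set is
% H \cup \{\neg p_0\} = \{p_0, \neg p_0\}, so the negated formula is
% \varphi = p_0, and the theorem yields
\{p_0\} \vdash p_0
% which is trivially true -- not \{p_0\} \vdash \neg p_0.
```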
|
2021-01-16 17:12:06
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8006418347358704, "perplexity": 5779.169716261413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703506832.21/warc/CC-MAIN-20210116165621-20210116195621-00563.warc.gz"}
|
https://par.nsf.gov/biblio/10374238-measurement-inclusive-differential-wz-production-cross-sections-polarization-angles-triple-gauge-couplings-pp-collisions-sqrt-tev
|
Measurement of the inclusive and differential WZ production cross sections, polarization angles, and triple gauge couplings in pp collisions at $$\sqrt{s}$$ = 13 TeV
Abstract: The associated production of a W and a Z boson is studied in final states with multiple leptons produced in proton-proton (pp) collisions at a centre-of-mass energy of 13 TeV using 137 fb⁻¹ of data collected with the CMS detector at the LHC. A measurement of the total inclusive production cross section yields σ_tot(pp → WZ) = 50.6 ± 0.8 (stat) ± 1.5 (syst) ± 1.1 (lumi) ± 0.5 (theo) pb. Measurements of the fiducial and differential cross sections for several key observables are also performed in all the final-state lepton flavour and charge compositions with a total of three charged leptons, which can be electrons or muons. All results are compared with theoretical predictions computed up to next-to-next-to-leading order in quantum chromodynamics plus next-to-leading order in electroweak theory and for various sets of parton distribution functions. The results include direct measurements of the charge asymmetry and the W and Z vector boson polarization. The first observation of longitudinally polarized W bosons in WZ production is reported. Anomalous gauge couplings are searched for, leading to new constraints on beyond-the-standard-model contributions to the WZ […]
NSF-PAR ID:
10374238
Journal Name:
Journal of High Energy Physics
Volume:
2022
Issue:
7
ISSN:
1029-8479
1. Abstract: Measurements of the Standard Model Higgs boson decaying into a $$b\bar{b}$$ pair and produced in association with a W or Z boson decaying into leptons, using proton–proton collision data collected between 2015 and 2018 by the ATLAS detector, are presented. The measurements use collisions produced by the Large Hadron Collider at a centre-of-mass energy of $$\sqrt{s} = 13\,\text{TeV}$$, corresponding to an integrated luminosity of $$139\,\mathrm{fb}^{-1}$$. The production of a Higgs boson in association with a W or Z boson is established with observed (expected) significances of 4.0 (4.1) and 5.3 (5.1) standard deviations, respectively. Cross-sections of associated production of a Higgs boson decaying into bottom quark pairs with an electroweak gauge boson, W or Z, decaying into leptons are measured as a function of the gauge boson transverse momentum in kinematic fiducial volumes. The cross-section measurements are all consistent with the Standard Model expectations, and the total uncertainties vary from 30% in the high gauge boson transverse momentum regions to 85% in the low regions. Limits are subsequently set on the parameters of an effective Lagrangian sensitive to modifications of the WH […]
2. Abstract: Measurements are presented of differential cross sections for the production of Z bosons in association with at least one jet initiated by a charm quark in pp collisions at $$\sqrt{s}$$ = 13 TeV. The data recorded by the CMS experiment at the LHC correspond to an integrated luminosity of 35.9 fb⁻¹. The final states contain a pair of electrons or muons that are the decay products of a Z boson, and a jet consistent with being initiated by a charm quark produced in the hard interaction. Differential cross sections as a function of the transverse momentum p_T of the Z boson and p_T of the charm jet are compared with predictions from Monte Carlo event generators. The inclusive production cross section, 405.4 ± 5.6 (stat) ± 24.3 (exp) ± 3.7 (theo) pb, is measured in a fiducial region requiring both leptons to have pseudorapidity |η| < 2.4 and p_T > 10 GeV, at least one lepton with p_T > 26 GeV, and a mass of the pair in the range 71–111 GeV, while the charm jet is required […]
3. Abstract: Measurements of the production cross-sections of the Standard Model (SM) Higgs boson (H) decaying into a pair of τ-leptons are presented. The measurements use data collected with the ATLAS detector from pp collisions produced at the Large Hadron Collider at a centre-of-mass energy of $$\sqrt{s}$$ = 13 TeV, corresponding to an integrated luminosity of 139 fb⁻¹. Leptonic (τ → ℓν_ℓν_τ) and hadronic (τ → hadrons ν_τ) decays of the τ-lepton are considered. All measurements account for the branching ratio of H → ττ and are performed with a requirement |y_H| < 2.5, where y_H is the true Higgs boson rapidity. The cross-section of the pp → H → ττ process is measured to be 2.94 ± $$0.21{\left(\mathrm{stat}\right)}_{-0.32}^{+0.37}$$ (syst) pb, in agreement with the SM prediction of 3.17 ± 0.09 pb. Inclusive cross-sections are determined separately for the four dominant production modes: 2.65 ± $$0.41{\left(\mathrm{stat}\right)}_{-0.67}^{+0.91}$$ (syst) pb for gluon-gluon fusion, 0. […]
4. Abstract: This paper reports on a search for heavy resonances decaying into WW, ZZ or WZ using proton–proton collision data at a centre-of-mass energy of $$\sqrt{s}=13$$ TeV. The data, corresponding to an integrated luminosity of 139 $$\mathrm{fb}^{-1}$$, were recorded with the ATLAS detector from 2015 to 2018 at the Large Hadron Collider. The search is performed for final states in which one W or Z boson decays leptonically, and the other W boson or Z boson decays hadronically. The data are found to be described well by expected backgrounds. Upper bounds on the production cross sections of heavy scalar, vector or tensor resonances are derived in the mass range 300–5000 GeV within the context of Standard Model extensions with warped extra dimensions or including a heavy vector triplet. Production through gluon–gluon fusion, Drell–Yan or vector-boson fusion are considered, depending on the assumed model.
5. Abstract: This paper presents a measurement of the electroweak production of two jets in association with a $$Z\gamma$$ pair, with the Z boson decaying into two neutrinos. It also presents a search for invisible or partially invisible decays of a Higgs boson with a mass of 125 GeV produced through vector-boson fusion with a photon in the final state. These results use data from LHC proton–proton collisions at $$\sqrt{s}$$ = 13 TeV collected with the ATLAS detector and corresponding to an integrated luminosity of 139 fb⁻¹. The event signature, shared by all benchmark processes considered for the measurements and searches, is characterized by a significant amount of unbalanced transverse momentum and a photon in the final state, in addition to a pair of forward jets. Electroweak $$Z\gamma$$ production in association with two jets is observed in this final state with a significance of 5.2 (5.1 expected) standard deviations. The measured fiducial cross-section for this process is $$1.31\pm 0.29$$ fb. An observed (expected) upper limit of 0.37 ($$0.34^{+0.15}_{-0.10}$$) at 95% confidence level is […]
|
2023-03-22 11:44:43
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8603495955467224, "perplexity": 1025.4077264796572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.76/warc/CC-MAIN-20230322114226-20230322144226-00361.warc.gz"}
|
https://mathematica.stackexchange.com/questions/102404/how-to-select-an-entry-from-a-list-of-pairs-that-meets-a-condition-depending-the
|
# How to select an entry from a list of pairs that meets a condition depending on the 2nd element of each pair
Processed data structure
data = Transpose[{RandomInteger[{1, 20}, 100], RandomReal[{10^-8, 10^-1}, 100]}]
Condition description
Firstly, the decision must be based on the sub-list's 2nd element. Please see the example below:
{{15, 0.0690906}, {18, 0.095235}, {17, 0.0282053}, {9, 0.00283472}, ...}
Based on the above example, I need to extract a value nearest to x, where x signifies some specific threshold. Examples can be:
{10^-1, 10^-2, 10^-3} etc.
I have tried to build up a solution using a set of configurations involving Cases, Select and Nearest. However, I did not succeed.
I have tried using a formulation similar to below:
(*Simplified processed data*)
Select[RandomInteger[{1, 20}, {5, 2}], #[[2]] > 5 &]
Based on the generated data, the above would produce desired output. Example:
{{19, 18}, {9, 11}}
However, my problem involves the condition as described above. Based on the raw data structure (transposition of columns 1 and 3), I have applied the following formulation to see what the actual nearest result is:
Input:
Nearest[dataF[[1]][[All, 2]], 10^-4]
In the above, dataF is the processed raw data (transposition of columns 1 and 3). Additionally, dataF is composed of many sets of data similar to the raw data structure made available to download above; hence the [[1]] notation is used to point to a specific dataset.
Output:
{0.000101238}
Given that the above outputs a value which could be used to find a sub-list containing this value, I used the following:
Input:
Select[dataF[[1]], #[[2]] == Nearest[dataF[[1]][[All, 2]], 10^-4] &]
Output:
{}
I would appreciate if somebody could point me in the right direction and, perhaps, explain why the above code did not produce any result.
SeedRandom[42];
With[{n = 100},
data = Transpose[{RandomInteger[{1, 20}, n], RandomReal[{10^-8, 10^-1}, n]}]]
With[{threshold = 1*^-4, yvals = data[[All, 2]]},
First @ Extract[data, Position[yvals, First @ Nearest[yvals, threshold]]]]
{16, 0.000661563}
SeedRandom[42]; With[{n = 100},
data = Transpose[{RandomInteger[{1, 20}, n],
RandomReal[{10^-8, 10^-1}, n]}]] ;
Pick[data, Chop[data[[;; , 2]] - 1*^-4, 0.001], 0]
(*{{16, 0.000661563}}*)
dataF = Catenate@Import["excel.xlsx"];
dataF // Dimensions
{800, 3}
The second column contains the same value:
dataF[[All, 2]] // Union
{7.7875*10^-6}
sel = First@Nearest[dataF[[All, 3]], 10^-4]
0.000101238
Flatten@Select[dataF, #[[3]] == sel &]
{0.196152, 7.7875*10^-6, 0.000101238}
Function[{x},
Flatten@Select[dataF, #[[3]] ==
First@Nearest[dataF[[All, 3]], x] &]][#] & /@
{10^-4, 10^-3, 10^-2, 10^-1}
{{0.196152, 7.7875*10^-6, 0.000101238}, {0.156825, 7.7875*10^-6, 0.0009968}, {0.0418247, 7.7875*10^-6, 0.00623}, {0.0418247, 7.7875*10^-6, 0.00623}}
etc.
|
2019-06-27 07:22:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3460271954536438, "perplexity": 9829.099432852394}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000894.72/warc/CC-MAIN-20190627055431-20190627081431-00333.warc.gz"}
|
https://worldbuilding.stackexchange.com/questions/102891/could-aliens-with-a-ww2-technology-level-steal-and-use-technology-from-current-d
|
# Could aliens with a WW2 technology level steal and use technology from current day humanity?
In my world, the walls between realms are breaking apart. There are portals (using that word to simplify things) that allow travel between dimensions and across space.
One alien race has technology analogous to what Earth had right at the start of WW2. They have bolt action rifles, machine guns, tanks, etc, but not modern computing power like we have.
In my story, Humans invade one of the portals and set up a base. Before they can solidify their foothold, the aliens attack and kill the troops at the base.
So now they have some modern weapons, tanks, a couple helicopters, and communications equipment.
Would they be able to quickly (before a counter-attack) figure out the starting procedures for the tanks and helicopters to move them to their own base?
How long in advance have they been spying on the humans?
Was there an all-out fight or did they sneak up and take out the humans at night?
It was an all out attack, with overwhelming numbers.
how fast is the counter-attack coming?
I was originally thinking a day or two. That could be odd, as pointed out in the comments, because the military generally wouldn't put those resources out there without backup.
• My guess is the difference in technology levels wouldn't be as big of a problem as the fact that it would be literally completely alien to them. You could take a person from a hundred years ago and teach them today's technology but even a more advanced alien would struggle I imagine. – Virusbomb Jan 22 '18 at 21:17
• There's a mighty large gap between being able to start a helicopter and being able to fly it. There's an even larger gap if you want to land it without breaking it. I got the privilege of flying one once. That is not an easy thing to figure out. – Cort Ammon Jan 22 '18 at 21:22
• If aliens have adequate machinery near the humans' base, they can truck away or tow everything. – Alexander Jan 22 '18 at 21:24
• @Cort Ammon hopping from a WW2 helicopter into a modern one? – Bookeater Jan 22 '18 at 21:24
• Also how fast is the counter-attack coming? – Cort Ammon Jan 22 '18 at 21:33
As many people have pointed out, the capability gap is so huge that the invaders will have a great deal of difficulty being able to use any of the equipment, even if they can recognize what it is based on extrapolations of their current technology.
The biggest fly in the ointment is that modern equipment outside of extremely basic things like rifles and grenades are equipped with electronic systems. Unlike the movies, you generally don't need to enter secret codes for everything, but even powering up a man portable radio usually requires doing a few steps in a certain order, for example, turn on the power switch. Look at the display as the system does a self test. Enter the frequencies, squelch settings, frequency hopping map and cryptography that you intend to use, and plug in the militarized GPS. Now do that for the other radios (some of which will be different models or built by different manufacturers).
Canadian LAV 3 Turret. Can you figure out how to use the radios in the background?
Tanks will have electronic systems to inform the driver of the state of the vehicle, and the turret crew have a whole turret full of complex electronics related to target acquisition, fire control, and even safety. In some tanks, the loader puts a round into the breach, but even when the breach is closed the gun will not fire until the loader pushes a switch (usually mounted on the wall of the turret away from the gun, reaching this automatically ensures he is away from the recoiling cannon when it fires).
Your aliens may discover some things the hard way. Most military safety systems have large, easily accessible switches or triggers, so an alien investigating the interior of a vehicle may trigger a fire extinguisher. This sounds funny until you realize the system works by very rapidly displacing all the oxygen from the vehicle interior to smother the fire instantly. Being inside when the "happy handle" is pulled is not considered a good thing.
Interior of a French VBCI. The bottle shaped object on the end of the left hand row of seats looks very much like the fire suppression reservoir. It will feed nozzles strategically placed throughout the vehicle if a fire is detected or the system is activated
When the Human relief forces arrive, they may discover most of the machinery has been "bricked" by aliens trying to activate things in the wrong order, or by removing batteries and connectors and being unable to replace them properly. There is really no need to plant bombs and other booby traps (indeed, it will simply make the soldiers job that much harder and more dangerous).
• I love the idea of the relief forces finding bricked equipment. Thank you! – user41674 Jan 23 '18 at 2:11
• This. This is the best answer. – Renan Jan 23 '18 at 6:28
• Can you figure out how to use the radios in the background? I'm from this era and technologically wise, and I would have a hard time figuring out those are actually radios. – Inferry Jan 23 '18 at 6:49
• "Look at the display as the system dies a self test." Freudian slip? :D – Dave Sherohman Jan 23 '18 at 7:50
• I'm from a country in which there is mandatory military service, and I did mine in the radio troops. It's been some years though, and I almost certainly would have trouble using the radios I was trained to use. – Nico Jan 23 '18 at 8:12
The value of the stolen technology far exceeds its immediate usefulness.
If we were at war with a more advanced alien species and we acquired some of their technology the absolute last thing we would want to do is put that equipment on the frontline where it can (and most likely will be) recaptured or destroyed. Instead we would want to study the technology in hopes of learning how it works, how its made, what vulnerabilities it might have and what its capabilities are. Knowing where the ammunition is stored in a tank or how long a helicopter can stay airborne before refuelling is invaluable information, whereas these assets deployed with relatively untrained crews would have little to no impact on the outcome of a battle.
A state of the art attack helicopter flown by a WW2 pilot would fly too high (the pilot being primarily concerned with evading subsonic fighters and AA guns) and get shot down by a guided missile while the pilot wonders what that annoying tone is for. It would be like giving modern day soldiers medieval swords and shields, sure they could fight with them but any professional soldier from that period is going to wipe the floor with such relatively inexperienced opponents.
• Indeed, best strategy for WW2-level aliens is to capture human technology, block the portal, keep quiet, copy and mass-produce the artefacts and fight other WW2 aliens for alien world domination. – Dima Tisnek Jan 23 '18 at 10:11
• That is exactly what an intelligent civilization would do - take it, hide it in a hidden bunker somewhere, and unlock its secrets, reverse engineering it for human-sized occupants. Or perhaps that is what we have already done... – Coomie Jan 24 '18 at 6:44
• *cough*Area 51*cough* – Doktor J Jan 24 '18 at 16:40
In my story, Humans invade one of the portals and set up a base. Before they can solidify their foothold, the aliens attack and 'kill all humans'.
It is very unlikely that such an invasion could have been okayed without at least some basic reconnaissance and surveying.
So I think it's a given that, before the invasion, the Earth forces know what they're going to fight against. They know they have a technological advantage. They have read and upvoted Cognisant's excellent argument.
Therefore, they'd do whatever is in their power to be very, very sure they're not going to lose that advantage. The other side has a whole world, and they're on a war footing.
It becomes crucial to prevent the enemy from using, studying, or, worse still, reverse engineering Terran weapons.
In ancient times, when (say) a cannon had to be abandoned in the field, it was either burst or spiked to prevent it from being captured and used. Something of the kind would undoubtedly be done here (or would it?), but much more thoroughly; we don't just want to prevent the enemy from using the equipment (the equipment's complexity would be proof enough for that), we ideally want to leave them nothing they can study.
So provisions to destroy matériel before capture would certainly be in place.
But this only covers foreseeable losses. Equipment might be lost before there is a practical possibility to scuttle it. And it is conceivable that it could be moved elsewhere, to an aliens' Area 51, where it would be examined with great care.
Therefore, I'm sure that some kind of automatic self-destruct would be deemed essential.
So after wiping out all the humans (and assuming what follows hasn't already happened on the Terran commander's orders), the aliens enter what remains of the base, perhaps try breaking into a helicopter...
"By the Elder Gods, Sarge! Have you ever seen anything like this?"
"Less gawking and more technology stealing, soldier! Have you figured how to start this Gods-cursed contraption? Careful with those missiles, boys!"
"It's okay, Sarge, we're good-"
ENTER OVERRIDE CODE
"What the-?"
"This thing talks!"
UNAUTHORIZED TAMPERING DETECTED. ENTER OVERRIDE CODE.
"What did it say?"
"How the Hells should I know? Do I look like I speak Terran?"
TAMPER BEACON ACTIVATED.
"Hells, isn't red the Terran color for danger? This thing is flashing red!"
"Everyone back! EVERYONE BACK!!!"
52 TAMPERED DEVICES FOUND ON LOCAL NETWORK.
CONTACT WITH HQ NOT ESTABLISHED.
CASE OMEGA ACTIVATED.
I think it quite likely that Case Omega would involve a low-yield, "suitcase" nuclear device triggered by a dead man's switch.
In addition, there would surely be several HE souvenirs hidden in all mobile units: a network of sensors to tell whether there's someone in the cabin; if there is, a four-digit code must be entered on at least one of the several keypads within one minute of the beeps starting.
You could easily mass-produce such booby-traps, and they would be very safe through massive redundancy: in a helicopter you might have, say, six armored sensor-keypad-transmitters, and as many bombs, all self-networking. A bomb would not arm unless networked with at least three sensors and two bombs, and a single OKAY stops it until all networked sensors send a NO LIFE ABOARD - REARM signal. Damaging the vehicle so much that the transmitters die means there's no vehicle left; finding and securing all the bombs in less than one minute is a losing proposition. Jamming a military short-range frequency-hopping encrypted WiFi inside a metal shell would require such a massive, easily-recognizable-as-such jamming that just trying it would trigger the explosion.
Possibilities left:
• incredible luck
• unforeseen alien magic
• treason
• kidnapping plus brainwashing
• This answer is interesting, are there really "bombs" in a Heli? or any other Military vehicle? +1 – Mr.J Jan 23 '18 at 0:09
• Yes, some manual (often on manned aircraft and boats) and some automated (mostly bomber drones) afaik. It's real and it's out there (but much less spectacular than the story above) – John Keates Jan 23 '18 at 1:39
The main problem these aliens may have with adapting technology is not recognizing or being able to use it, but being able to replicate it in a meaningful quantity and quality. The largest issue in this scenario is materials science.
It's all very well being able to see that something is made of composite plastics, or the ratios of metals in alloys, but the key is knowing the right conditions to make the stuff and being able to replicate those conditions.
After all, modern steel is functionally the same iron as a couple of thousand years ago; we just have far better control of the carbon ratios and tempering.
On the electronics front they will likely benefit a lot from being able to see the route to take, but still, without stealing a chip fab from Earth, they will likely have to go through the slow march of Moore's Law even if they know exactly where they are trying to get to.
• Exactly. In WWII, Germans knew the usefulness of rockets, managed to build some, but the key point is “being able to replicate it in a meaningful quantity and quality”. – Holger Jan 23 '18 at 10:47
## They don't need to
With WW2 tech, they've got mobile cranes, they've got flatbed trailers, and anything else they need to move a large stationary lump of metal. Everything gets moved fast to the aliens' equivalent of Area 51, and every alien scientist or engineer with relevant knowledge gets drafted.
As has already been pointed out, there's no way any of the grunts on the ground could figure out how to drive stuff. More than that, this should be SOP imposed from the top down. An instant overwhelming attack like that doesn't come out of thin air - the aliens must already have been on guard against advanced people crossing the portals, and to stage an attack with WW2 equipment which takes down modern attack helicopters and tanks implies they've put some thought into it. After that, all captured equipment becomes vital military intelligence, in the same way as a captured Enigma machine was in WW2.
## Make the technology more advanced
Designers of weapons systems1 want to make their systems as accessible as possible. Not because they want untrained people to use them, but because the simpler they are, the less likely it is that mistakes happen.
Someone driving a vehicle has to control both the vehicle itself and how to react to the environment. The more attention needed to control a vehicle, the less control of the tactical situation and the more risk of accidents. And, the more basic the commands, the easier and more natural they should be.
So, continuing this trend, you could very well have tanks and helicopters that allow for automatic travel, well enough for people unfamiliar with them to drive them.
Of course, that does not make the situation less difficult for the aliens.
They still lack all the training about the capabilities of the hardware they have just acquired. They will not be able to use their machines at 100% capability; with luck they will get 50%. Maybe they can fire a gun in the right direction and hit a target, but they probably won't know which type of shell is best for their target, how to tell the tank to change the type of shell, or even which types of shell are available.
They will be facing an enemy that knows what the machines can do and has the experience not to need to read screens. Their enemy will also know how the other members of the unit would react. All of that will have a big impact in battle.
And of course, beyond the most basic task (refueling), maintenance will be an absolute no-no.
1In reality, designers of any kind of system.
• Even that device with “easy to use” controls is unlikely to be controllable by anyone with a non-human cultural background, as we use color schemes and symbols unfamiliar to them. They are derived from things we might not use anymore (symbols depicting a CRT TV, celluloid movie tape, or a floppy disc) or in an entirely different context (scissors, stop watch, or even better, traffic signs, for example, entirely obscure for people on a different planet never having seen our roads). – Holger Jan 23 '18 at 10:16
• Having seen some of those "easy to use" controls, I'd also point out that military equipment is designed for robustness in the field. LCD touchscreens and graphics are much less useful than 1970s-era LEDs and pushbuttons if you want something which can be kicked off a plane on a pallet, bounce off a rock on landing, finish in a swamp in the rain, and be fished out still in working order. – Graham Jan 23 '18 at 14:54
• @Graham that is a very nice point, thank you. In my defense, I am refering to "future" systems where those durability issues are taken for granted. – SJuan76 Jan 23 '18 at 19:34
### Probably not, if you want to be realistic.
But then again, it's up to you to handwave this option in or out for your aliens.
The problem with helicopters is the complex checklist that you have to go through to start them and not cause a crash moments to minutes later as a result of having missed something. Even within the same technological level the full set of procedures and the handling of different helicopters varies by model and manufacturer, and you need to learn each separately in order to not kill yourself while piloting. If the helicopter is armed and the weapons are controlled by the pilot/copilot, even learning how to operate those will be complicated. You may think that the trigger is going to fire the front cannons, but instead you send a missile up the [expletive] of your ground forces, and find out that you were inside the blast radius.
As for the tanks: I know next to nothing about tanks in general, but think of this: nowadays you can call your car insurance company and block the car's fuel pump if it gets stolen, or you can ask them to remotely open the doors for you if you have lost your keys and your baby is dehydrating inside. If civilians can do this, imagine what any self-respecting armed force can do to a compromised tank.
As for comms equipment, your aliens will get stuck in the "Please enter your username and password" part.
Last but not least. If you manage to kill everybody in a military base these days, you need not have the slightest fear of suffering a counter attack that includes tanks. Or marines or other spec ops soldiers. Or helicopters.
The comeback will come in the form of fighters/bombers that your radars won't see coming, flying at altitudes that you won't be able to reach with whatever you may have. Stay alert for bunker-busting missiles and very heavy ordnance.
• "When the going gets tough, the tough call for close air support." – Cort Ammon Jan 22 '18 at 21:34
• As for the tanks [...] think of this: nowadays you can call your car insurance company and block the car's fuel pump if it gets stolen. If you put this kind of gadgets in tanks, you have made yourself very vulnerable to an enemy "switching off" your army, in case that there is any vulnerability/security breach. One of the risks of nuclear missiles is that there is no way to counter-order them, and that is made by design. – SJuan76 Jan 22 '18 at 21:38
• @SJuan76 if that was an issue, bomber drones would not be a thing. They are as likely to be hijacked and then used against you as tanks, but with more spectacular consequences. The phantom menace is real, but so is investment in electronic security. – Renan Jan 22 '18 at 21:44
• @Renan my idea is literally xcom in reverse. That was the plan from the beginning haha. – user41674 Jan 22 '18 at 21:45
• @SJuan76 modern combat vehicles have something much better than a remote turn-off switch, transponders that tell the home base where each of them currently is. Having them captured and really used by an enemy who doesn’t even know that this possibility exists, would be much more valuable than turning them off remotely (telling the enemy that remote controls exist)… – Holger Jan 23 '18 at 10:43
I don't think a person from the years of WWII would be completely lost in our modern day. Sure, the technology has moved on, but not so much that it's unrecognizable. I'm assuming since your aliens have WWII tech that they're at least as smart as humans were then. People then had phones and we have portable cell phones. They had planes, and we have faster, bigger ones. They had bolt action rifles and we have machine guns (they had machine guns too, but you know what I mean).
The only things we didn't have then are computers and the internet, but even a cursory briefing could give them an understanding of that. Plus, so much tech now is user-friendly.
It's not the technology level, it's the level of intelligence and capacity to understand. If your aliens are relatively smart they should be able to adapt. It's not like they're cavemen.
• I agree that they should be able to figure firearms in minutes, but for anything other than that: even if they can figure those out, it takes time. Pilots need special training for each different model of airplane they will pilot just so they won't kill themselves. And to make use of comms, the aliens will need to hack military grade stuff, while not having even a script kiddie among them. – Renan Jan 22 '18 at 21:55
• @Renan, I didn't think there was a time limit for how soon they had to understand it. My understanding of the question was whether or not they could understand modern tech, not could they understand it within a certain time constraint. I still think it possible. Also might there not be one or two unusually smart aliens who could figure out encryptions and passwords and such? We have clever geniuses. So would they (That is unless the question has been amended since my answer). – Len Jan 23 '18 at 17:53
• This I can speak about with authority, for I have a major in Computer Science. That kind of codebreaking requires suspension of disbelief. IRL, the FBI has the kind of genius you mention on staff. And their genius people have extensive experience working with state-of-the-art tech. Not long ago they had months to break into a handful of iPhones, but didn't manage. Food for thought ;) – Renan Jan 23 '18 at 19:38
• There are some people who were around in WWII and who are still around today, capable of understanding modern tools. – WBT Jan 24 '18 at 16:29
Not likely that they'd last long enough to assimilate the new technology.
If we look at the last time there was a large disparity between weapons tech in a conventional war, it would probably be the 1990 Gulf war, when somewhat better than WW2 tech ran up against the latest and greatest. (and, yes, there are parallels to the current situation with N Korea, whose current army is even more antiquated today than Iraq of 1990, while western military tech has advanced considerably since then)
In short, it was no contest at all.
Not only would WW2 tech do poorly against modern tech, it is unlikely that a civilization based on WW2 tech could adapt to modern tech quickly enough to make any sort of difference. After all, the army you'd be trying to steal the tech from knows it a lot better than you do. It's not like you can capture a Sidewinder missile and start building reproductions overnight, when you can't even build the simplest microchip.
A civilization whose military was based on WW2 tech would be well advised to consider an economic relationship rather than an adversarial one. Look up the current price of a Mustang, FW190, or a running Tiger tank today. If they're based on WW2 tech, they're already equipped to build those things.
Looking outside of the military market, they'd be ideally positioned to start turning out priceless 1930's classic cars like the Duesenberg SJ, Alfa Romeo 6C, Mercedes 540K...
Such a civilization could go into the functional reproduction business, and clean up.
• The tech to produce those 1930s cars exists on Earth today, but you don't see anybody using it to take those large profits you seem to believe are available for the taking. – WBT Jan 24 '18 at 16:28
Can they move them to their home base?
Well, that depends on how long they have before the next wave comes through the portal, and how many trucks they brought with them. They're not going to be able to fly a helicopter.
Drive a tank: maybe, fight with one: no.
If you want a way to delay the counterattack, perhaps some bio-weapon or similar ploy would work. These aliens never signed the Geneva convention.
They can steal - grab the loot, fill the trains and RUN, because the Terrans will come back with things like a B-2 and a few tactical nukes to vaporize the loot (I assume the gate is big enough for a bomber to pass). Maybe if they use multiple trains as decoys they can escape with something.
Use it? Hardly. Most equipment nowadays is full of electronics and computers that are at least 50 years ahead. Even if they can build something like the ENIAC, they won't understand 14 nm chips; they can't even see or resolve features at this scale, as they lack electron microscopes. And electronics is only the first problem - will they be able to understand the operating systems?
The equipment will show them what can be done in electronics, machining and weapons systems engineering, but they will have to get there on their own feet, no shortcuts. It's just like if the American army left behind a lot of equipment in, say, Chad. The Chadians wouldn't be able to replicate the equipment nor use it. And even if they could use it, would they be able to manufacture spare parts and ammunition and do maintenance? They wouldn't, because the country lacks the industrial facilities and accumulated knowledge to do so. The same applies to your aliens, on a much bigger scale, except that at least the Chadians would be able to hire mercenaries who know how to operate the gear, or simply sell it for better-suited equipment, like RPKs and Toyotas.
|
2019-06-18 07:28:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2941201627254486, "perplexity": 1939.172311955367}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998690.87/warc/CC-MAIN-20190618063322-20190618085322-00442.warc.gz"}
|
http://www.newton.ac.uk/seminar/20170817153016301
|
# Constructing the virtual fundamental cycle
Presented by:
Dusa McDuff Barnard College
Date:
Thursday 17th August 2017 - 15:30 to 16:30
Venue:
INI Seminar Room 1
Abstract:
Consider a space $X$, such as a compact space of $J$-holomorphic stable maps with closed domain, that is the zero set of a Fredholm operator. This note explains how to define the virtual fundamental class of $X$ starting from a finite dimensional reduction in the form of a Kuranishi atlas, by representing $X$ as the zero set of a section of a (topological) orbibundle that is constructed from the atlas. Throughout we assume that the atlas satisfies Pardon's topological version of the index condition that can be obtained from a standard, rather than a smooth, gluing theorem.
|
2018-03-18 00:03:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7080709934234619, "perplexity": 1103.7343769233232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645405.20/warc/CC-MAIN-20180317233618-20180318013618-00734.warc.gz"}
|
http://math.stackexchange.com/questions/236079/point-set-topology-metric-spaces
|
# point set topology-Metric spaces
Consider the two-point set $X=\{a,b\}$. The possible topologies that can be formed from $X$ are as follows. $$\begin{eqnarray} \tau_1&=&\{X,\emptyset\} &\text{Indiscrete topology} \\ \tau_2&=&\{X,\emptyset,{a}\} \\ \tau_3&=&\{X,\emptyset,{b}\} \\ \tau_4&=&\{X,\emptyset,{a},{b}\} &\text{Discrete topology} \end{eqnarray}$$
It is given that the trivial topology is pseudometrizable and the discrete topology is a metric space. Also, $\tau_2$ and $\tau_3$ are known as the Sierpinski space. Can you please explain the above facts to me?
Further, I know that $\tau_2$ and $\tau_3$ are not $T_1$. But can they be pseudometrizable?
I’ve answered the last question, but I’m not really sure what you want explained about the rest. Those topologies aren’t quite right: everywhere that you have $a$ or $b$, you should have $\{a\}$ or $\{b\}$. – Brian M. Scott Nov 13 '12 at 1:13
No, $\tau_2$ and $\tau_3$ are not pseudometrizable. Suppose that $d$ is a pseudometric generating $\tau_2$. Since $a\in\{a\}\in\tau_2$, there must be some $r>0$ such that $a\in B_d(a,r)\subseteq\{a\}$, where $$B_d(a,r)=\{x:d(a,x)<r\}$$ is as usual the open ball of radius $r$ centred at $a$. Note that $b\notin B_d(a,r)$, so $d(a,b)\ge r$. A pseudometric is symmetric, so $d(b,a)=d(a,b)\ge r$, and therefore $a\notin B_d(b,r)$. Thus, $b$ has an open neighborhood that does not contain $a$. But this is false: the only open set containing $b$ is $\{a,b\}$. Thus, $\tau_2$ cannot in fact be generated by any pseudometric.
The indiscrete topology on any set $X$ is generated by the pseudometric $d$ such that $d(x,y)=0$ for all $x,y\in X$; I’ll leave it to you to check that this really is a pseudometric.
The discrete topology on any set $X$ is generated by the metric $d$ defined by
$$d(x,y)=\begin{cases}0,&\text{if }x=y\\1,&\text{if }x\ne y\;;\end{cases}$$
you should not have much trouble verifying that this really is a metric.
@Brian, thanks for the answer. Also, I want to know how it can be that the trivial topology is pseudometrizable and the discrete topology is a metric space. Can you please explain? – ccc Nov 13 '12 at 1:13
@ccc: I’ll add those to my answer. – Brian M. Scott Nov 13 '12 at 1:17
Thank you very much Brian! – ccc Nov 13 '12 at 1:23
@ccc: You’re welcome! – Brian M. Scott Nov 13 '12 at 1:28
|
2015-07-06 12:05:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9328458905220032, "perplexity": 120.16923439621935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098196.31/warc/CC-MAIN-20150627031818-00296-ip-10-179-60-89.ec2.internal.warc.gz"}
|
https://byjus.com/jee/properties-of-eigenvalues/
|
# Properties of Eigenvalues
A matrix is a rectangular arrangement of numbers in the form of rows and columns. Eigenvalues are a set of scalars related to the matrix equation. They are also known as characteristic roots or characteristic values. Consider an n×n matrix A. If AX = λX for a nonzero vector X, then λ is an eigenvalue of the matrix A, and X is called an eigenvector of A. As far as the JEE exam is concerned, matrices are an important topic. In this article, we will learn the properties of eigenvalues of a matrix.
## 10 Important Properties of Eigenvalues
Let A be a matrix with eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.
1. The determinant of A is the product of all the eigenvalues of A.
Det (A) = $\lambda_1 \times \lambda_2 \times \cdots \times \lambda_n$
2. The trace of A is the sum of all the eigenvalues of A.
tr(A) = $\sum_{i=1}^n \lambda_i$
= $\lambda_1 + \lambda_2 + \cdots + \lambda_n$
3. A matrix has an inverse if and only if all of its eigenvalues are nonzero.
4. An eigenvalue can be zero.
5. If A is an n × n triangular matrix (upper triangular, lower triangular, or diagonal), then the eigenvalues of A are the entries of the main diagonal of A.
6. If an n × n matrix A has n distinct eigenvalues, then A is a diagonalizable matrix.
7. If A is unitary, every eigenvalue has absolute value | λi | = 1.
8. If A is a Hermitian (or real symmetric) matrix, then the eigenvalues of A are all real numbers.
9. If A is a square matrix, then for every eigenvalue of A, the geometric multiplicity is less than or equal to the algebraic multiplicity.
10. If A is a square matrix, λ is an eigenvalue of A and n ≥ 0 is an integer, then $\lambda^n$ is an eigenvalue of $A^n$.
### Example
Find the eigenvalues of A = $\begin{bmatrix} -6 & -3\\ -4 & 5 \end{bmatrix}$
Solution:
Given A = $\begin{bmatrix} -6 & -3\\ -4 & 5 \end{bmatrix}$
| A-λI | = 0
$\begin{vmatrix} -6-\lambda & -3\\ -4 & 5-\lambda \end{vmatrix}$ = 0
(-6-λ)(5-λ)-12 = 0
-30 - 5λ + 6λ + λ² - 12 = 0
λ² + λ - 42 = 0
(λ+7)(λ-6)= 0
λ = -7 or λ = 6
Hence the required eigenvalues are -7 and 6.
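As a quick check, properties 1 and 2 above agree with this result: $\det(A) = (-6)(5) - (-3)(-4) = -42 = (-7) \times 6$, the product of the eigenvalues, and tr(A) = $-6 + 5 = -1 = (-7) + 6$, their sum.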
|
2020-09-24 14:47:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8857964873313904, "perplexity": 390.059597446307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219221.53/warc/CC-MAIN-20200924132241-20200924162241-00093.warc.gz"}
|
https://or.stackexchange.com/questions/8482/unbounded-master-problem-in-benders-decomposition/8486
|
# Unbounded master problem in Benders decomposition
After a few iterations, my master problem with optimality cuts is still unbounded. I wonder if it's possible in theory?
If it's possible, how to deal with the unbounded master problem?
• I'll add some clarification: I think unboundedness both before and after adding cuts makes the algorithm get stuck, because the solution of the master problem doesn't change. I think this can also happen after adding a redundant lower bound or variable bounds. Jun 3 at 13:16
• Just to be clear, it is your master problem (not the subproblem) that is unbounded, and you are using the feasible corner point solution at which unboundedness was detected to generate the optimality cut, correct?
– prubin
Jun 3 at 14:41
• @prubin Yes. The master problem is unbounded. But I'm not sure what solution the solver(gurobi) returns when the problem is unbounded. Jun 3 at 14:56
• I post a new question as RobPratt suggests. or.stackexchange.com/questions/8487/… Jun 3 at 14:56
Assuming your master problem is to minimize $$\eta$$, a simple way to avoid unboundedness, even before adding any cuts, is to impose a redundant lower bound $$\eta \ge L$$ for some constant $$L$$. Often, taking $$L=0$$ is valid.
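For instance, a minimal sketch of such a master problem (with hypothetical optimality-cut coefficients $$f_k, g_k$$ standing in for whatever your subproblem generates) is $$\min_{x \in X,\, \eta} \eta \quad \text{s.t.} \quad \eta \ge L, \quad \eta \ge f_k + g_k^\top x \;\; \forall k,$$ so that $$\eta$$ is bounded below even before the first cut has been added.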
• Your answer and prubin's below truly solved my question. But in certain situations, the optimal solution to the master problem with a lower bound will reach the artificial lower bound both before and after adding the cuts, which means the master problem will give the same solution as before and of course, generate the same cut. The algorithm will be stuck. I think it's the same situation as unboundedness. Jun 3 at 13:11
• This sounds to me like the cuts are not correct. Please open a separate question with more detail about this issue with cycling, which should not happen. Jun 3 at 13:59
Yes, it is possible in theory.
An alternative to bounding the objective function is bounding the variables. If this is a "real-world" model (where the variables represent actual decisions), they will all be bounded in practice. Adding appropriate (not overly tight, but not ridiculously loose) bounds will sometimes speed up the solution process in addition to keeping the model bounded.
Update: I added an answer to the linked question that is relevant here (and too lengthy to repeat). Basically, if the solver for the master problem is at a corner where it discovers a recession direction that makes the master unbounded, and if the subproblem generates an optimality cut (to correct the master problem under-/over-estimating the objective value at that corner), there is no guarantee that the cut causes the objective to become bounded in the recession direction. If it does not, the solver will return the same corner solution (with an amended objective value), the master will remain unbounded, and the solution process will remain stuck.
RobPratt's and prubin's answers indeed solve the problem in the post.
If someone wonders whether other solutions exist: I found a class of stabilization methods in Frangioni, A. (2020) for nonlinear nonsmooth problems, called proximal and level stabilization, that are designed to avoid oscillation and also address this problem to some extent.
proximal stabilization: adding a proximal term to the objective of the master problem, then the MP changes from $$\min_{x \in X} f(x)$$ to $$\min_{x \in X} f(x) + \frac{\mu}{2}|| x - x_k ||_2^2$$ where $$\mu$$ is a hyperparameter.
level stabilization: adding a level set constraint, then the MP becomes $$\min_{x \in X} || x - x_k ||_2^2 \\ \text{s.t. } f(x) \le l$$ where $$l$$ is a hyperparameter. You can choose $$l$$ as the best primal bound you have achieved.
Of course, you can combine the above 2 methods $$\min_{x \in X} f(x) + \frac{\mu}{2}|| x - x_k ||_2^2\\ \text{s.t. } f(x) \le l$$
Reference
1. Frangioni, A. (2020). Standard bundle methods: untrusted models and duality. In Numerical Nonsmooth Optimization (pp. 61-116). Springer, Cham.
|
2022-08-15 15:17:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5691800117492676, "perplexity": 769.4197319816582}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00355.warc.gz"}
|
https://socratic.org/questions/what-are-the-foci-of-the-ellipse-x-2-49-y-2-64-1
|
# What are the foci of the ellipse x^2/49+y^2/64=1?
Mar 2, 2015
The answer is: ${F}_{1 , 2} \left(0 , \pm \sqrt{15}\right)$.
The standard equation of an ellipse is:
${x}^{2} / {a}^{2} + {y}^{2} / {b}^{2} = 1$.
This ellipse has its foci (${F}_{1 , 2}$) on the y-axis since $a < b$.
So ${x}_{{F}_{1 , 2}} = 0$.
The ordinates are:
$c = \pm \sqrt{{b}^{2} - {a}^{2}} = \pm \sqrt{64 - 49} = \pm \sqrt{15}$.
So:
${F}_{1 , 2} \left(0 , \pm \sqrt{15}\right)$.
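(In general, for $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ with $b > a$, the foci lie on the y-axis at $(0, \pm c)$ with $c = \sqrt{b^2 - a^2}$, which is the formula used above.)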
|
2020-09-27 08:19:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9990309476852417, "perplexity": 1464.1403564355674}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400265461.58/warc/CC-MAIN-20200927054550-20200927084550-00539.warc.gz"}
|
https://puzzling.stackexchange.com/questions/31121/how-can-a-16-sided-non-self-intersecting-polygon-be-drawn-on-a-4-by-4-grid
|
# How can a 16-sided non-self-intersecting polygon be drawn on a 4-by-4 grid?
Let there be a square 4-by-4 grid of points in the plane.
How can a 16-sided non-self-intersecting polygon be drawn on a 4-by-4 grid if the points are the vertices of the polygon?
(Don't count reflections/rotations as different polygons.)
• Can we draw diagonals?
– Carl
Apr 18 '16 at 0:02
• Is it considered to be an intersection if the perimeter touches the same point twice, but does not cross? (For instance, it comes in from the bottom, leaves left, does something else, comes in from the top, and leave right.) Apr 18 '16 at 17:02
• @Passage - There must be exactly two line segments meeting at a point whenever a point is used to draw the polygon. @Carl - Yes. Apr 18 '16 at 19:50
For example it could be something like this:
• Congratulations - you posted 2 seconds ahead of me :) . Apr 18 '16 at 0:15
• @Lawrence Probably ascii drawing is a little bit slower ;) Apr 18 '16 at 0:18
• Heh. I made a mistake, corrected it, then checked it again. +1 on yours, though :) . Apr 18 '16 at 0:28
x-x x-x
| |/ |
x x x-x
\ \
x-x x x
| /| |
x-x x-x
• haha, that's innovative xD Apr 18 '16 at 3:13
|
2022-01-20 03:32:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6865894794464111, "perplexity": 1025.4617243058572}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301670.75/warc/CC-MAIN-20220120005715-20220120035715-00596.warc.gz"}
|
https://proofwiki.org/wiki/Convergent_Real_Sequence_has_Unique_Limit
|
Convergent Real Sequence has Unique Limit
Jump to navigation Jump to search
Theorem
Let $\sequence {s_n}$ be a real sequence.
Then $\sequence {s_n}$ can have at most one limit.
Proof 1
Aiming for a contradiction, suppose that $\sequence {s_n}$ converges to $l$ and also to $m$.
That is, suppose that:
$\displaystyle \lim_{n \mathop \to \infty} s_n = l$
and:
$\displaystyle \lim_{n \mathop \to \infty} s_n = m$
Suppose, for this contradiction, that $l \ne m$.
Let:
$\epsilon = \dfrac {\size {l - m} } 2$
As $l \ne m$, it follows that $\epsilon > 0$.
As $\sequence {s_n} \to l$:
$\exists N_1 \in \N: \forall n \in \N: n > N_1: \size {s_n - l} < \epsilon$
Similarly, since $\sequence {s_n} \to m$:
$\exists N_2 \in \N: \forall n \in \N: n > N_2: \size {s_n - m} < \epsilon$
Now set $N = \max \set {N_1, N_2}$ and pick any $n > N$, so that both inequalities above hold.
We have:
$\begin{aligned} \size {l - m} &= \size {l - s_n + s_n - m} \\ &\le \size {l - s_n} + \size {s_n - m} && \text{Triangle Inequality for Real Numbers} \\ &< 2 \epsilon \\ &= \size {l - m} \end{aligned}$
This constitutes a contradiction.
It follows from Proof by Contradiction that $l = m$.
$\blacksquare$
Proof 2
We have that the real number line is a metric space.
The result then follows from Convergent Sequence in Metric Space has Unique Limit.
$\blacksquare$
|
2020-09-21 15:44:59
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956800937652588, "perplexity": 309.556793428215}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400201826.20/warc/CC-MAIN-20200921143722-20200921173722-00063.warc.gz"}
|
https://mathhelpboards.com/threads/how-to-solve-this-line-integral.1486/
|
# How to solve this line integral?
#### aruwin
##### Member
I have no idea how to even start with this problem. I know the basics but this one just gets complicated. Please guide me!
Find the line integral:
∫C {(-x^2 + y^2)dx + xydy}
When 0≤t≤1 for the curved line C, x(t)=t, y(t)=t^2
and when 1≤t≤2, x(t)= 2 - t , y(t) = 2-t.
Use x(t) and y(t) and C={(x(t),y(t))|0≤t≤2}
Help!
#### Ackbach
##### Indicium Physicus
Staff member
It looks to me as though you could define
$$C_{1}:\quad 0\le t\le 1,\quad x=t,\quad y=t^{2},$$
and
$$C_{2}:\quad 1\le t\le 2,\quad x=2-t,\quad y=2-t.$$
You're asked to compute
$$\int_{C}=\int_{C_{1}}+\int_{C_{2}}.$$
Where do you go from here?
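For reference, here is how the $$C_{1}$$ piece plays out (just a sketch of the substitution; $$C_{2}$$ goes the same way): with $$x=t,\; y=t^{2}$$ we have $$dx=dt,\; dy=2t\,dt$$, so
$$\int_{C_{1}}\{(-x^{2}+y^{2})\,dx+xy\,dy\}=\int_{0}^{1}\left[(-t^{2}+t^{4})+2t^{4}\right]dt=-\tfrac{1}{3}+\tfrac{3}{5}=\tfrac{4}{15}.$$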
|
2021-09-25 06:29:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4284060001373291, "perplexity": 2763.8515708367368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057598.98/warc/CC-MAIN-20210925052020-20210925082020-00253.warc.gz"}
|
https://www.tutorialspoint.com/How-to-get-the-exponent-power-of-a-number-in-JavaScript
|
# How to get the exponent power of a number in JavaScript?
In this tutorial, we will learn how to get the exponent power of a number in JavaScript. The power of a number describes how many times a particular number is multiplied by itself. For calculating the power of a number, we require two things: a base and an exponent. The base is the number that is to be multiplied, while the exponent (usually written in superscript) determines the number of times the base should be multiplied by itself.
While we represent the power of a number using "^" or superscript when typing it, we take the help of functions to compute the exponent power in JavaScript. Here we will cover two approaches to computing the exponent power of a number. The first approach is to use the Math.pow() method and the second is to use the exponentiation operator.
## Using the Math.pow() Method
The Math.pow() is a static method, which means it is a member of an object and doesn't need a constructor to create an instance. It accepts two parameters, the base and the exponent, and returns a number (data type) which is the result. The result is NaN if the base is negative and the exponent is not an integer.
### Syntax
Following is the syntax to get the exponent power of a number using the Math.pow() method −
Math.pow(base, exponent);
### Parameter Details
• base − the number that needs to be multiplied
• exponent − The number of times a number needs to be multiplied
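A minimal standalone sketch of the same call, using console.log() instead of the HTML output used in the examples below:
// Math.pow(base, exponent) returns base raised to the power exponent
console.log(Math.pow(2, 5));    // 32
console.log(Math.pow(2, 0.5));  // 1.4142135623730951 (square root of 2)
console.log(Math.pow(-2, 0.5)); // NaN (negative base, non-integer exponent)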
### Example
In the example given below, we have selected two numbers, a base and an exponent, and we are finding out the base's power using the Math.pow() method.
Case 1 - Base and Exponent are numbers
Case 2 - A base is a number, and an exponent is a Decimal
<html>
<body>
<h3>Get the exponent power of a number using <i>Math.pow()</i> method</h3>
<div id = "str1"></div>
<script>
var Base = 2;
var Exponent = 5;
// Compute Base raised to Exponent with the static Math.pow() method
var answer = Math.pow(Base, Exponent);
var output = document.getElementById("str1");
output.innerHTML += " Case 1: Base = 'number', Exponent = 'number'<br/>"
output.innerHTML += "Base = " + Base + "<br/>";
output.innerHTML += "Exponent = " + Exponent + "<br/>";
output.innerHTML += Base + " power " + Exponent + " = " + answer + "<br/> <hr>"
Base = 2;
Exponent = 0.5;
// Recompute for the new base and exponent
answer = Math.pow(Base, Exponent);
output.innerHTML += " Case 2: Base = 'number', Exponent = 'Decimal'<br/>"
output.innerHTML += "Base = " + Base + "<br/>";
output.innerHTML += "Exponent = " + Exponent + "<br/>";
output.innerHTML += Base + " power " + Exponent + " = " + answer + "<br/><hr>"
</script>
</body>
</html>
### Example
In this example, we select two numbers where the base and/or the exponent are negative.
Case 1 - The base is a number, and the exponent is a negative number
Case 2 - The base is a negative number, and the exponent is a decimal (non-integer), which gives NaN
<html>
<body>
<h3>Get the exponent power of a number using <i>Math.pow()</i>method</h3>
<div id = "str1"></div>
<script>
var Base = 2;
var Exponent = -5;
// Compute Base raised to Exponent with the static Math.pow() method
var answer = Math.pow(Base, Exponent);
var output = document.getElementById("str1");
output.innerHTML += " Case 1: Base = 'number', Exponent = 'Negative Number'<br/>"
output.innerHTML += "Base = " + Base + "<br/>";
output.innerHTML += "Exponent = " + Exponent + "<br/>";
output.innerHTML += Base + " power " + Exponent + " = " + answer + "<br/><hr>"
Base = -2;
Exponent = 0.5;
// A negative base with a non-integer exponent returns NaN
answer = Math.pow(Base, Exponent);
output.innerHTML += " Case 2: Base = 'Negative Number', Exponent = 'Decimal'<br/>"
output.innerHTML += "Base = " + Base + "<br/>";
output.innerHTML += "Exponent = " + Exponent + "<br/>";
output.innerHTML += Base + " power " + Exponent + " = " + answer + "<br/><hr>"
</script>
</body>
</html>
This is how the Math.pow() function works and follows the basic mathematic rules.
## Using the Exponentiation Operator
The exponentiation operator (**) can also be used to get the exponent power of a number. It returns a number which is the result of raising the first operand to the power of the second operand. It is the same as the Math.pow() method, discussed above. The only difference between these two methods is that the exponentiation operator can accept BigInts as operands.
### Syntax
Following is the syntax to find the exponent power of a number using the exponentiation operator −
Operand1 ** Operand2
Here operand1 and operand2 are the first and second operands.
Please note if the first operand is negative, then we should put it in a parenthesis.
### Example
In the program below, we take different types of numbers for the two operands.
<html>
<title>JavaScript Math exp() Method</title>
<body>
<h3>Using the Exponentiation Operator</h3>
<p id ="result"></p>
<script>
var value1 = 7 ** 2;
document.getElementById("result").innerHTML = "7 ** 2 = " + value1;
var value2 = (-8) ** 5;
document.getElementById("result").innerHTML +="<br>(-8) ** 5 = " + value2;
var value3 = 4 ** -3
document.getElementById("result").innerHTML +="<br>4 ** -3 = " + value3;
var value4 = (-3) ** -4
document.getElementById("result").innerHTML +="<br>(-3) ** -4 = " + value4;
</script>
</body>
</html>
In this tutorial, we have discussed two approaches to finding the exponent power of a number in JavaScript. The first approach is to use the Math.pow() method and the second is to use the exponentiation operator (**). The second method can accept BigInts whereas the first cannot.
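As a short sketch of that difference (BigInt literals use the n suffix):
// The exponentiation operator accepts BigInt operands
console.log(2n ** 64n);   // 18446744073709551616n
// Math.pow() cannot: passing BigInts throws a TypeError, e.g.
// Math.pow(2n, 64n);     // TypeError: Cannot convert a BigInt value to a number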
Updated on 26-Aug-2022 13:04:14
|
2022-12-07 17:38:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6897361278533936, "perplexity": 2006.3288644967392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711200.6/warc/CC-MAIN-20221207153419-20221207183419-00155.warc.gz"}
|
https://eprint.iacr.org/2015/773
|
## Cryptology ePrint Archive: Report 2015/773
Distinguishing a truncated random permutation from a random function
Shoni Gilboa and Shay Gueron
Abstract: An oracle chooses a function f from the set of n-bit strings to itself, which is either a randomly chosen permutation or a randomly chosen function. When queried by an n-bit string w, the oracle computes f(w), truncates the m last bits, and returns only the first n-m bits of f(w). How many queries does a querying adversary need to submit in order to distinguish the truncated permutation from a random function?
In 1998, Hall et al. showed an algorithm for determining (with high probability) whether or not f is a permutation, using O ( 2^((m+n)/2) ) queries. They also showed that if m < n/7, a smaller number of queries will not suffice. For m > n/7, their method gives a weaker bound.
In this manuscript, we show how a modification of the method used by Hall et al. can solve the problem completely. It extends the result to essentially every m, showing that
Omega ( 2^((m+n)/2) ) queries are needed to get a non-negligible distinguishing advantage. We recently became aware that a better bound for the distinguishing advantage, for every m<n, follows from a result of Stam published, in a different context, already in 1978.
Category / Keywords: foundations / Pseudo random permutations, pseudo random functions, advantage
|
2021-06-18 14:17:12
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.802529513835907, "perplexity": 1276.3005138103292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487637721.34/warc/CC-MAIN-20210618134943-20210618164943-00066.warc.gz"}
|
https://www.ashoka.edu.in/courses/mat-3013-mathematical-modeling-di%EF%AC%80erential-equations/
|
Syllabus: Differential equation associated to real life problems, First order differential equation on R of the form $y’(x) = f(x,y(x))$, Equivalent integral equation, Existence of approximate solutions of equation up to error $\epsilon$ by Cauchy-Euler method, Existence and uniqueness of solutions when $f$ is Lipschitz continuous in the second variable, Necessary conditions for $f(x,y)$ to be Lipschitz continuous in $y$, Picard’s method of solutions of equation, Higher order differential equations, Vector valued ordinary differential equations, Reformulation of higher order differential equations as first order vector valued differential equations, Linear vector valued first order differential equation, $Y’(x) = A Y(x) + C(x)$ — Homogeneous case, $C =0$, Characteristic values, characteristic vectors of square matrices, Solution when A is independent of $x$, Linear independence of solutions associated to characteristic values, General solution of the inhomogeneous equation, Peano’s approximation method for existence of solution.
|
2022-07-05 05:48:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.947670578956604, "perplexity": 333.2671976044006}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104514861.81/warc/CC-MAIN-20220705053147-20220705083147-00551.warc.gz"}
|
https://www.electricalexams.co/ssc-je-electrical-conventional-paper-solved-2016-17/
|
# SSC JE Electrical Conventional Paper with Explained Solution 2016-17 | MES Electrical
Ques 1. A conducting wire has a resistance of 5Ω. What is the resistance of another wire of the same material but having half the diameter and four times the length?
## Factors Affecting the Resistance
The resistance R offered by a conductor depends on the following factors :
1. Length of the material (l): The resistance of a material is directly proportional to the length. The resistance of the longer wire is more.
2. Cross-Section Area (a): The resistance of a material is inversely proportional to the cross-sectional area of the material. More cross-sectional area allowed the passage of more number of electrons offering less resistance.
3. Nature of Material: As discussed earlier, a conductor has a large number of free electrons and hence offers less resistance, whereas an insulator has fewer free electrons and hence offers more resistance.
4. Temperature: The temperature of the material affects the value of the resistance. In General case, the resistance of the material increases as its temperature increases.
So for any given material at a certain given temperature, the resistance is given as R = ρl/A, where ρ is the resistivity of the material,
l = Length in Meter
A = Area of cross-section in m²
R = Resistance in Ohm
Now suppose
Resistance of the first conductor be R1=
Length = L1,
Area = A1
Resistance of the second conductor = R2,
Length = L2
Area = A2
Given L2 = 4L1
A2 = A1/2
$\begin{array}{l}{R_1} = \rho \dfrac{{{L_1}}}{{{A_1}}}\\\\{R_2} = \rho \dfrac{{{L_2}}}{{{A_2}}}\end{array}$
Dividing both the equation
$\begin{array}{l}\dfrac{{{R_2}}}{{{R_1}}} = \dfrac{{{L_2}}}{{{A_2}}} \times \dfrac{{{A_1}}}{{{L_1}}}\\\\\dfrac{{{R_2}}}{{{R_1}}} = \dfrac{{{L_2}}}{{{L_1}}} \times \dfrac{{{A_1}}}{{{A_2}}} = \dfrac{{{L_2}}}{{{L_1}}} \times \frac{{\left( {\dfrac{{\pi {d_1}^2}}{4}} \right)}}{{\left( {\dfrac{{\pi {d_2}^2}}{4}} \right)}}\\\\\dfrac{{{R_2}}}{{{R_1}}} = \dfrac{{{L_2}}}{{{L_1}}} \times {\left( {\dfrac{{{d_1}}}{{{d_2}}}} \right)^2} = \dfrac{4}{1} \times {\left( {\dfrac{2}{1}} \right)^2}\\\\\dfrac{{{R_2}}}{5} = 16\\\\{R_2} = 80\Omega \end{array}$
Ques 2. Two coils connected in parallel across a 100V DC supply, take 10 A current from the supply. Power dissipated in one coil is 600 W. What is the resistance of each coil?
Let the resistances of the two coils be R1 and R2.
Since the coils are connected in parallel, their effective resistance is
REffective = R1R2/(R1 + R2)
100V/10A = R1R2/(R1 + R2)
10Ω = R1R2/(R1 + R2)……………..(1)
Let power dissipated from resistance R1 be 600 W
Now Power P1 = V²/R1
600 = 100²/R1
or R1 = 16.67 Ω
Putting the value of R1 in equation 1
$\begin{array}{l}10 = \dfrac{{16.67 \times {R_2}}}{{16.67 + {R_2}}}\\\\166.7 + 10{R_2} = 16.67{R_2}\\\\{R_2} = \dfrac{{166.7}}{{6.67}} = 25\Omega \end{array}$
Ques 3. Determine the current through 5Ω resistor in the given circuit
Sol:- The above circuit can be reconstructed using source transformation theorem
According to source transformation Theorem, a voltage source with a series resistor can be converted into an equivalent current source with a parallel resistor. In a similar manner, using Thevenin theorem, a current source with a parallel resistor can be represented by a voltage source with a series resistor. These transformations are called source transformations.
Now By applying source transformation in the above Question the given circuit will become
Now by applying KVL
-6 + 2I + 5I + 1I – (-2) = 0
8I = 4
I = 0.5A
Ques 4. Find the voltage across 5Ω resistance in the network shown in figure using Thevenin’s theorem
Sol:- To determine the Thevenin’s Equivalent circuit Resistance Rth, all the voltage source are replaced by the Short circuit.
Rth = 2Ω || 1Ω || 4Ω
$\begin{array}{l}\dfrac{1}{{R_{th}}} = \dfrac{1}{2} + \dfrac{1}{1} + \dfrac{1}{4} = \dfrac{7}{4}\\\\{R_{th}} = \dfrac{4}{7} = 0.571\Omega \end{array}$
To determine the voltage Vth, the 5Ω resistance is removed as shown in the figure. Now, applying the node analysis method:
$\begin{array}{l}\dfrac{{{V_{th}} - 20}}{2} + \dfrac{{{V_{th}} + 10}}{1} + \dfrac{{{V_{th}} - 12}}{4} = 0\\\\2{V_{th}} - 40 + 4{V_{th}} + 40 + {V_{th}} - 12 = 0\\\\7{V_{th}} = 12\\\\{V_{th}} = 1.714V\end{array}$
The equivalent Thevnin’s circuit is shown below
Now by applying the voltage divider rule, the voltage across the 5Ω will be
$\begin{array}{l}{V_{AB}} = {V_{Th}} \times \dfrac{R}{{{R_{th}} + R}}\\\\{V_{AB}} = 1.714 \times \dfrac{5}{{5 + 0.571}}\\\\{V_{AB}} = 1.538V\end{array}$
### 2 thoughts on “SSC JE Electrical Conventional Paper Solved 2016-17-Electrical-Exam”
1. which book is best for ssc je conventional
|
2020-02-25 23:48:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 12, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7134243845939636, "perplexity": 1663.4529368251792}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146176.73/warc/CC-MAIN-20200225233214-20200226023214-00060.warc.gz"}
|
http://en.wikipedia.org/wiki/Hawking_energy
|
# Hawking energy
"Hawking mass" redirects here. For other uses, see Hawking mass (disambiguation).
The Hawking energy or Hawking mass is one of the possible definitions of mass in general relativity. It is a measure of the bending of ingoing and outgoing rays of light that are orthogonal to a 2-sphere surrounding the region of space whose mass is to be defined.
## Definition
Let $(\mathcal{M}^3, g_{ab})$ be a 3-dimensional sub-manifold of a relativistic spacetime, and let $\Sigma \subset \mathcal{M}^3$ be a closed 2-surface. Then the Hawking mass $m_H(\Sigma)$ of $\Sigma$ is defined[1] to be
$m_H(\Sigma) := \sqrt{\frac{\text{Area}\,\Sigma}{16\pi}}\left( 1 - \frac{1}{16\pi}\int_\Sigma H^2 da \right),$
where $H$ is the mean curvature of $\Sigma$.
## Properties
In the Schwarzschild metric, the Hawking mass of any sphere $S_r$ about the central mass is equal to the value $m$ of the central mass.
A result of Geroch[2] implies that Hawking mass satisfies an important monotonicity condition. Namely, if $\mathcal{M}^3$ has nonnegative scalar curvature, then the Hawking mass of $\Sigma$ is non-decreasing as the surface $\Sigma$ flows outward at a speed equal to the inverse of the mean curvature. In particular, if $\Sigma_t$ is a family of connected surfaces evolving according to
$\frac{dx}{dt} = \frac{1}{H}\nu(x),$
where $H$ is the mean curvature of $\Sigma_t$ and $\nu$ is the unit vector opposite of the mean curvature direction, then
$\frac{d}{dt}m_H(\Sigma_t) \geq 0.$
Said otherwise, Hawking mass is increasing for the inverse mean curvature flow.[3]
Hawking mass is not necessarily positive. However, it is asymptotic to the ADM[4] or the Bondi mass, depending on whether the surface is asymptotic to spatial infinity or null infinity.[5]
|
2014-12-29 15:27:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 18, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8430501818656921, "perplexity": 267.95283148585014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447563403.84/warc/CC-MAIN-20141224185923-00015-ip-10-231-17-201.ec2.internal.warc.gz"}
|
https://www.speedsolving.com/threads/buying-edison-cubes-pyraminx.34564/
|
Status
Not open for further replies.
#### MrRubiksUFO
##### Member
Hi to all. As it isn't possible to get an Edison Pyraminx when you live in Germany, I'd like to ask you guys if any of you live in Korea or have friends there to get one (or more) from onhobby.com and then send them to me.
Looking forward to hearing from you
#### Borislav
##### Member
I've been looking for an Edison Pyraminx for about a year. I hope that I will get one from eBay... But who knows?
#### samchoochiu
##### Member
I'm selling white ones, there is a thread about in the Hardware area.
#### hyunchoi98
##### Member
i lived in korea (a month ago) but i moved to the US for an year lol
#### guinepigs rock
##### Member
I would Like an edison megaminx.
#### mitch1234
##### Member
I would Like an edison megaminx.
He is selling Pyraminx's not Megaminx's.
#### wytefury
##### Member
Hey everyone! Im just posting really fast because I just talked to a Korean seller on eBay about selling Edison Pyraminx's and he just listed a couple this morning. Im guessing if they sell well though he would be willing to stock more.
http://www.ebay.com/itm/Edison-Cube-Pyramid-Black-Made-in-Korea-Toy-Brand-Rubiks-Rubix-Rubic-Magic?item=130710592791&cmd=ViewItem&_trksid=p5197.m7&_trkparms=algo%3DLVI%26itu%3DUCI%26otn%3D4%26po%3DLVI%26ps%3D63%26clkid%3D9040104832561807762#ht_4854wt_1075 (black, $18.93) http://www.ebay.com/itm/140771347295?_trksid=p5197.c0.m619#ht_4788wt_1075 (white,$18.93)
So yeah if anyone is still looking for this pyraminx here it is. Enjoy!
#### Carrot
##### Member
Hey everyone! Im just posting really fast because I just talked to a Korean seller on eBay about selling Edison Pyraminx's and he just listed a couple this morning. Im guessing if they sell well though he would be willing to stock more.
http://www.ebay.com/itm/Edison-Cube-Pyramid-Black-Made-in-Korea-Toy-Brand-Rubiks-Rubix-Rubic-Magic?item=130710592791&cmd=ViewItem&_trksid=p5197.m7&_trkparms=algo%3DLVI%26itu%3DUCI%26otn%3D4%26po%3DLVI%26ps%3D63%26clkid%3D9040104832561807762#ht_4854wt_1075 (black, $18.93) http://www.ebay.com/itm/140771347295?_trksid=p5197.c0.m619#ht_4788wt_1075 (white,$18.93)
So yeah if anyone is still looking for this pyraminx here it is. Enjoy!
Bahh only one white left... nvm I bought the last white, yay!! :3
#### wytefury
##### Member
Bahh only one white left... nvm I bought the last white, yay!! :3
Haha yeah there was two...you and me and the WINNERS!
Status
Not open for further replies.
|
2020-07-02 20:08:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17915362119674683, "perplexity": 7034.595117492355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655879738.16/warc/CC-MAIN-20200702174127-20200702204127-00596.warc.gz"}
|
https://www.researchgate.net/publication/47865452_Inverse_Problems_for_deformation_rings
|
# Inverse Problems for deformation rings
Article (PDF Available) in Transactions of the American Mathematical Society 365(11) · December 2010 · with 16 Reads
DOI: 10.1090/S0002-9947-2013-05848-5 · Source: arXiv
Abstract
Let $\mathcal{W}$ be a complete local commutative Noetherian ring with residue field $k$ of positive characteristic $p$. We study the inverse problem for the versal deformation rings $R_{\mathcal{W}}(\Gamma,V)$ relative to $\mathcal{W}$ of finite dimensional representations $V$ of a profinite group $\Gamma$ over $k$. We show that for all $p$ and $n \ge 1$, the ring $\mathcal{W}[[t]]/(p^n t,t^2)$ arises as a universal deformation ring. This ring is not a complete intersection if $p^n\mathcal{W}\neq\{0\}$, so we obtain an answer to a question of M. Flach in all characteristics. We also study the `inverse inverse problem' for the ring $\mathcal{W}[[t]]/(p^n t,t^2)$; this is to determine all pairs $(\Gamma, V)$ such that $R_{\mathcal{W}}(\Gamma,V)$ is isomorphic to this ring.
arXiv:1012.1290v1 [math.NT] 6 Dec 2010
INVERSE PROBLEMS FOR DEFORMATION RINGS
FRAUKE M. BLEHER, TED CHINBURG, AND BART DE SMIT
Abstract. Let $W$ be a complete local commutative Noetherian ring with residue field $k$ of positive characteristic $p$. We study the inverse problem for the versal deformation rings $R_W(\Gamma, V)$ relative to $W$ of finite dimensional representations $V$ of a profinite group $\Gamma$ over $k$. We show that for all $p$ and $n \ge 1$, the ring $W[[t]]/(p^n t, t^2)$ arises as a versal deformation ring. This ring is not a complete intersection if $p^n W \neq \{0\}$, so we obtain an answer to a question of M. Flach in all characteristics. We also study the 'inverse inverse problem' for the ring $W[[t]]/(p^n t, t^2)$; this is to determine all pairs $(\Gamma, V)$ such that $R_W(\Gamma, V)$ is isomorphic to this ring.
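A reader's note, not part of the paper: the two relations $p^n t = 0$ and $t^2 = 0$ already pin down the $W$-module structure of the ring in the title,
$$W[[t]]/(p^n t,\, t^2)\;\cong\; W \oplus (W/p^n W)\,t \qquad \text{as } W\text{-modules},$$
with multiplication determined by $t^2 = 0$. This is the small ring whose realization as a (uni)versal deformation ring is the subject of Theorems 1.3 and 1.4 below.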
1. Introduction

Suppose $\Gamma$ is a profinite group and that $V$ is a continuous finite dimensional representation of $\Gamma$ over a field $k$ of characteristic $p > 0$. Let $W$ be a complete local commutative Noetherian ring with residue field $k$. In §2 we recall the definition of a deformation of $V$ over a complete local commutative Noetherian $W$-algebra with residue field $k$. It follows from work of Mazur and Schlessinger [14, 17] that $V$ has a Noetherian versal deformation ring $R_W(\Gamma, V)$ if the $p$-Frattini quotient of every open subgroup of $\Gamma$ is finite. Without assuming this condition, de Smit and Lenstra proved in [11] that $V$ has a universal deformation ring $R_W(\Gamma, V)$ if $\mathrm{End}_{k\Gamma}(V) = k$. The ring $R_W(\Gamma, V)$ is a pro-Artinian $W$-algebra, but it need not be Noetherian. In this paper we consider the following inverse problem:

Question 1.1. Which complete local commutative Noetherian $W$-algebras $R$ with residue field $k$ are isomorphic to $R_W(\Gamma, V)$ for some $\Gamma$ and $V$ as above?

It is important to emphasize that in this question, $\Gamma$ and $V$ are not fixed. Thus for a given $R$, one would like to construct both a profinite group $\Gamma$ and a continuous finite dimensional representation $V$ of $\Gamma$ over $k$ for which $R_W(\Gamma, V)$ is isomorphic to $R$. We will be most interested in the case of finite groups $\Gamma$ in this paper, for which $R_W(\Gamma, V)$ is always Noetherian.

One can also consider the following inverse inverse problem:

Question 1.2. Suppose $R$ is a complete local commutative Noetherian $W$-algebra with residue field $k$. What are all profinite groups $\Gamma$ and all continuous finite dimensional representations $V$ of $\Gamma$ over $k$ such that $R \cong R_W(\Gamma, V)$?

The goal of this paper is to answer Questions 1.1 and 1.2 for the rings $R = W[[t]]/(p^n t, t^2)$. More precisely, we prove the following main results, Theorem 1.3 and Theorem 1.4.

Theorem 1.3. For all fields $k$ and rings $W$ as above, and for all $n \ge 1$, there is a representation $V$ of a finite group $\Gamma$ over $k$ having a universal deformation ring $R_W(\Gamma, V)$ which is isomorphic to $W[[t]]/(p^n t, t^2)$. In particular, this ring is not a complete intersection if $p^n W \neq \{0\}$.

Theorem 1.4. Let $k$ be perfect and let $W = W(k)$ be the ring of infinite Witt vectors over $k$. Then there exists a complete classification, given in Theorem 3.2, of all profinite groups $\Gamma$ and all continuous finite dimensional representations $V$ of $\Gamma$ over $k$ with $\mathrm{End}_{k\Gamma}(V) = k$ such that
• if $K$ is the kernel of the $\Gamma$-action on $V$ then $V$ is projective as a module for $\Gamma/K$, and
• the universal deformation ring $R_W(\Gamma, V)$ is isomorphic to $W[[t]]/(p^n t, t^2)$ and the universal deformation of $V$ is faithful as a representation of $\Gamma$.

Date: December 7, 2010.
2000 Mathematics Subject Classification. Primary 11F80; Secondary 11R32, 20C20, 11R29.
The first author was supported in part by NSF Grant DMS0651332. The second author was supported in part by NSF Grant DMS0801030. The third author was funded in part by the European Commission under contract MRTN-CT-2006-035495.
In [7], Böckle gives a survey of recent results on presentations of deformation rings and of applications of such presentations to arithmetic geometry. In particular, [7] discusses how one can show that deformation rings are complete intersections as well as the relevance of presentations to arithmetic, e.g. to Serre's conjectures in the theory of modular forms and Galois representations.

The problem of constructing representations having universal deformation rings which are not complete intersections was first posed by M. Flach [9]. The first example of a representation of this kind was found by Bleher and Chinburg when $\mathrm{char}(k) = 2$; see [4, 5]. A more elementary argument proving the same result was given in [8]. Theorem 1.3 gives an answer to Flach's question for all possible residue fields of positive characteristic.

As of this writing we do not know of a complete local commutative Noetherian ring $R$ with perfect residue field $k$ of positive characteristic which cannot be realized as a versal deformation ring of the form $R_{W(k)}(\Gamma, V)$ for some profinite $\Gamma$ and some representation $V$ of $\Gamma$ over $k$.

There is an extensive literature concerning explicit computations of universal deformation rings (often with additional deformation conditions). See [7], [3], [1, 2] and their references for an introduction to this literature. Theorem 1.3 and the formulation of the inverse problem in Question 1.1 first appeared in [6]. In subsequent work on the inverse problem, Rainone found in [16] some other rings which are universal deformation rings and not complete intersections; see Remark 4.3.

The sections of this paper are as follows.
In §2 we recall the definitions of deformations and of versal and universal deformation rings and describe how versal deformation rings change when extending the residue field $k$ (see Theorem 2.2).

In §3 we consider arbitrary perfect fields $k$ of characteristic $p$ and we take $W = W(k)$. In Theorem 3.2, which implies Theorem 1.4, we give a sufficient and necessary set of conditions on a representation $\tilde{V}$ of a finite group $\Gamma$ over $k$ for the universal deformation ring $R_{W(k)}(\Gamma, \tilde{V})$ to be isomorphic to $R = W(k)[[t]]/(p^n t, t^2)$. The proof that these conditions are sufficient involves first showing that $R_{W(k)}(\Gamma, \tilde{V})$ is a quotient of $W(k)[[t]]$ by proving that the dimension of the tangent space of the deformation functor associated to $\tilde{V}$ is one. We then construct an explicit lift of $\tilde{V}$ over $R$ and show that this cannot be lifted further to any small extension ring of $R$ which is a quotient of $W(k)[[t]]$.

In §4 we prove Theorem 1.3. We use Theorem 2.2 to reduce the proof of Theorem 1.3 to the case in which $k = \mathbb{F}_p = \mathbb{Z}/p$ and $W = W(k) = \mathbb{Z}_p$. In the latter case we provide explicit examples using twisted group algebras of the form $E[G_0]$ where $E = \mathbb{F}_{p^2}$ and $G_0 = \mathrm{Gal}(E/\mathbb{F}_p)$.

Acknowledgments: The authors would like to thank M. Flach for correspondence about his question. The second author would also like to thank the University of Leiden for its hospitality during the spring of 2009 and the summer of 2010.
2. Deformation rings
Let Γ be a profinite group, and let kbe a field of characteristic p > 0. Let Wbe a complete local
commutative Noetherian ring with residue field k. We denote by ˆ
Cthe category of all complete
local commutative Noetherian W-algebras with residue field k. Homomorphisms in ˆ
Care continuous
W-algebra homomorphisms which induce the identity map on k. Define Cto be the full subcategory
of Artinian objects in ˆ
C. For each ring Ain ˆ
C, let mAbe its maximal ideal and denote the surjective
morphism AA/mA=kin ˆ
Cby πA. If α:AAis a morphism in ˆ
C, we denote the induced
morphism GLd(A)GLd(A) also by α.
Let dbe a positive integer, and let ρ: Γ GLd(k) be a continuous homomorphism, where GLd(k)
has the discrete topology. By a lift of ρover a ring Ain ˆ
Cwe mean a continuous homomorphism
τ: Γ GLd(A) such that πAτ=ρ. We say two lifts τ , τ: Γ GLd(A) of ρover Aare
strictly equivalent if one can be brought into the other by conjugation by a matrix in the kernel of
πA: GLd(A)GLd(k). We call a strict equivalence class of lifts of ρover Aa deformation of ρ
over Aand define Defρ(A) to be the set of deformations [τ] of lifts τof ρover A. We then have a
functor
ˆ
Hρ:ˆ
C → Sets
which sends a ring Ain ˆ
Cto the set Defρ(A). Moreover, if α:AAis a morphism in ˆ
C, then
ˆ
Hρ(α) : Defρ(A)Defρ(A) sends a deformation [τ] of ρover Ato the deformation [ατ] of ρ
over A.
Instead of looking at continuous matrix representations of Γ, we can also look at topological
Γ-modules as follows. Let V=kdbe endowed with the continuous Γ-action given by composition
of ρwith the natural action of GLd(k) on V, i.e. Vis the d-dimensional topological kΓ-module
corresponding to ρ. A lift of Vover a ring Aˆ
Cis then a pair (M, φ) consisting of a finitely
generated free A-module Mon which Γ acts continuously together with a Γ-isomorphism φ:kA
MVof (discrete) k-vector spaces. We define Def V(A) to be the set of isomorphism classes
[M, φ] of lifts (M , φ) of Vover A. We then have a functor
ˆ
FV:ˆ
C → Sets
which sends a ring Ain ˆ
Cto the set DefV(A). Moreover, if α:AAis a morphism in ˆ
C,
then ˆ
FV(α) : DefV(A)DefV(A) sends a deformation [M , φ] of Vover Ato the deformation
[AA,α M, φα] of Vover A, where φαis the composition kA(AA,α M)
=kAMφ
V. The
functors ˆ
FVand ˆ
Hρare naturally isomorphic.
One says that a ring R=RW, ρ) (resp. R=RW, V )) in ˆ
Cis a versal deformation ring for
ρ(resp. for V) if there is a lift ν: Γ GLd(R) of ρover R(resp. a lift (U, φU) of Vover R) such
that the following conditions hold. For all rings Ain ˆ
C, the map
fA: Hom ˆ
C(R, A)Defρ(A) (resp. fA: Hom ˆ
C(R, A)DefV(A))
which sends a morphism α:RAin ˆ
Cto the deformation ˆ
Hρ(α)([ν]) (resp. ˆ
FV(α)([U, φU])) is
surjective. Moreover, if k[ǫ] is the ring of dual numbers with ǫ2= 0, then fk[ǫ]is bijective. (Here
the W-algebra structure of k[ǫ] is such that the maximal ideal of Wannihilates k[ǫ].) We call the
deformation [ν] (resp. [U, φU]) a versal deformation of ρ(resp. of V) over R. By Mazur [15, Prop.
20.1], ˆ
Hρ(resp. ˆ
FV) is continuous, which means that we only need to check the surjectivity of fA
for Artinian rings Ain C. The versal deformation ring R=RW, ρ) (resp. R=RW, V )) is
unique up to isomorphism if it exists.
If the map fAis bijective for all rings Ain ˆ
C, then we say R=RW, ρ) (resp. R=RW, V ))
is a universal deformation ring of ρ(resp. of V) and [ν] (resp. [U, φU]) is a universal deformation
of ρ(resp. of V) over R. This is equivalent to saying that Rrepresents the deformation functor ˆ
Hρ
(resp. ˆ
FV) in the sense that ˆ
Hρ(resp. ˆ
FV) is na turally isomor phic to the Hom functor Hom ˆ
C(R, ).
We will suppose from now on that $\Gamma$ satisfies the following $p$-finiteness condition used by Mazur in [14, §1.1]:

Hypothesis 2.1. For every open subgroup $J$ of finite index in $\Gamma$, there are only a finite number of continuous homomorphisms from $J$ to $\mathbb{Z}/p$.

It follows by [14, §1.2] that for $\Gamma$ satisfying Hypothesis 2.1, all finite dimensional continuous representations $V$ of $\Gamma$ over $k$ have a versal deformation ring. It is shown in [11, Prop. 7.1] that if $\mathrm{End}_{k\Gamma}(V) = k$, then $V$ has a universal deformation ring.

A proof of the following base change result is given in an appendix (see §5). For finite extensions of $k$, this was proved by Faltings (see [19, Ch. 1]).

Theorem 2.2. Let $\Gamma$, $k$, $W$ and $\rho$ be as above. Let $k'$ be a field extension of $k$. Suppose $W'$ is a complete local commutative Noetherian ring with residue field $k'$ which has the structure of a $W$-algebra, in the sense that we fix a local homomorphism $W \to W'$. Let $\rho': \Gamma \to \mathrm{GL}_d(k')$ be the composition of $\rho$ with the injection $\mathrm{GL}_d(k) \hookrightarrow \mathrm{GL}_d(k')$. Then the versal deformation ring $R_{W'}(\Gamma, \rho')$ is the completion $R'$ of $\Omega = W' \otimes_W R_W(\Gamma, \rho)$ with respect to the unique maximal ideal $m_\Omega$ of $\Omega$.
3. The inverse inverse problem for $R = W[[t]]/(p^n t, t^2)$

Throughout this section we make the following assumptions.

Hypothesis 3.1. Let $k$ be an arbitrary perfect field of characteristic $p > 0$ and let $W$ be the ring $W(k)$ of infinite Witt vectors over $k$. Let $\Gamma$ be a profinite group satisfying Hypothesis 2.1. Let $d$ be a positive integer and let $\tilde{\rho}: \Gamma \to \mathrm{GL}_d(k)$ be a continuous representation of $\Gamma$. Denote the corresponding $k\Gamma$-module by $\tilde{V}$. Let $K = \mathrm{Ker}(\tilde{\rho})$ and define $G = \Gamma/K$, so that $G$ is a finite group. Let $\pi: \Gamma \to G$ be the natural surjection. Let $\rho: G \to \mathrm{GL}_d(k)$ be the representation whose inflation to $\Gamma$ is $\tilde{\rho}$, and denote the $kG$-module corresponding to $\rho$ by $V$. Suppose $V$ is a projective $kG$-module and that $\mathrm{End}_{kG}(V) = k$. Let $n \ge 1$ be a fixed integer and define $A = W/(W p^n)$. Let $V_A$ be a projective $AG$-module such that $k \otimes_A V_A$ is isomorphic to $V$ as a $kG$-module. Let $M_A$ be the free $A$-module $\mathrm{Hom}_A(V_A, V_A)$, so that $M_A$ is a projective $AG$-module. Define
$$M = k \otimes_A M_A = \mathrm{Hom}_k(V, V).$$
If $L$ is an $AG$-module, we will also view $L$ as a $(\mathbb{Z}/p^n)G$-module via restriction of operators from $AG$ to $(\mathbb{Z}/p^n)G$.

Theorem 3.2. Assume Hypothesis 3.1. The following statements (i) and (ii) are equivalent:
(i) The universal deformation ring $R_W(\Gamma, \tilde{V})$ is isomorphic to $W[[t]]/(p^n t, t^2)$ and the universal deformation of $\tilde{V}$ as a representation of $\Gamma$ is faithful.
(ii) The following conditions hold:
(a) The group $K$ is a finitely generated $(\mathbb{Z}/p^n)G$-module.
(b) Writing $K$ additively, the group $\mathrm{Hom}_{(\mathbb{Z}/p)G}(K/pK, M)$ is a one-dimensional $k$-vector space with respect to the $k$-vector space structure induced by $M$.
(c) There is an injective homomorphism $\psi: K \to M_A$ in $\mathrm{Hom}_{(\mathbb{Z}/p^n)G}(K, M_A)$ whose image is not contained in $pM_A$.
(d) Either
• there exist $g, h \in K$ with $\psi(g) \circ \psi(h) \not\equiv \psi(h) \circ \psi(g) \bmod pM_A$, or
• $p = 2$ and there exists $x \in K$ of order $2$ with $\psi(x) \circ \psi(x) \not\equiv 0 \bmod 2M_A$.

Note that Theorem 3.2 implies Theorem 1.4. To show Theorem 1.3, we construct in Section 4 examples for which the conditions in Theorem 3.2(ii) are satisfied.

The following Remark 3.3 and Lemma 3.4 play an important role when proving the equivalence of (i) and (ii) in Theorem 3.2. For any $G$-module $L$, we denote by $\tilde{L}$ the $\Gamma$-module which results by inflating $L$ via the natural surjection $\pi: \Gamma \to G$.
Remark 3.3.Since VAis a projective AG-module which is a lift of Vover A, there exists a matrix
representation ρW:GGLd(W) whose reduction mod pnW is a matrix representation ρA:G
GLd(A) for VA, and whose reduction mod pW is the matrix representation ρ:GGLd(k) for V.
Let R= W[[t]]/(pnt, t2). We have an exact sequence of multiplicative groups
(3.1) 1 (1 + tMatd(R))GLd(R)GLd(W) 1
resulting from the natural isomorphism R/tR = W. The isomorphism tR A= W/pnW defined
by tw wmod pnW for wWRgives rise to isomorphisms of groups
(3.2) (1 + tMatd(R))
=Matd(A)+
=MA= HomA(VA, VA)
where Matd(A)+is the additive group of Matd(A). Hence we obtain a short exact sequence of
profinite groups
(3.3) 1 MAGLd(R)GLd(W) 1
where the homomorphism MAGLd(R) results from (3.1) and (3.2).
The conjugation action of ρW(G)GLd(W) on (1 + tMatd(R))which results from (3.1) factors
through the homomorphism ρW(G)ρA(G)GLd(A) = AutA(VA). This action coincides
with the action of Gon MA= HomA(VA, VA) in (3.2) coming from the action of Gon VAvia
ρA:GGLd(A).
Lemma 3.4. Let ρW,ρAand Rbe as in Remark 3.3. Suppose there exist continuous group
homomorphisms ψ:KMAand ρR: Γ GLd(R)such that there is a commutative diagram
(3.4) 1//K
ψ
//Γ
ρR
π//G
ρW
//1
1//MA//GLd(R)//GLd(W) //1
where the bottom row is given by (3.3).
Suppose Ris a W-algebra in ˆ
Cwhich is a small extension of R, in the sense that there is an
exact sequence
(3.5) 0 JRν
R0
in which νis a continuous W-algebra homomorphism and dimk(J) = 1. Define M
Ato be the kernel
of the homomorphism GLd(R)GLd(W) resulting from the composition of Rν
Rwith RW.
Let E= (1 + Matd(J)). There is a natural exact sequence of groups
(3.6) 1 EM
AMA1.
There is a continuous representation ρR: Γ GLd(R)which lifts ρRif and only if there is a
homomorphism ψ:KM
Awhich lifts ψ.
Proof. The natural short exact sequence (3.6) results from the observation that M
Aconsists of all
elements in GLd(R) whose image in GLd(R) under νlies in MA, viewed as a subgroup of GLd(R)
via (3.3).
The group E= (1 + Matd(J))is naturally isomorphic to ˜
M= Homk(˜
V , ˜
V) as a kΓ-module,
since Jhas k-dimension 1. In particular, Kacts trivially on E.
Since Mis a projective kG-module, we have Hi(G, H0(K, ˜
M)) = Hi(G, M ) = 0 if i > 0.
Because Hom(K, M ) is isomorphic to a direct summand of a kG-module that is induced from
the trivial subgroup of G, Hom(K, M ) is cohomologically trivial. Hence Hi(G, H 1(K, ˜
M)) =
Hi(G, Hom(K, M )) = 0 for all i > 0. This implies that the Hochschild-Serre spectral sequence
for H2,˜
M) degenerates to give
(3.7) H2,˜
M) = H0(G, H2(K, ˜
M)) = H2(K, ˜
M)G.
But this means that the restriction homomorphism
H2, E)H2(K, E )
is injective. Since the obstruction to the existence of a lift ρRof ρRis an element ωH2, E)
whose restriction to Kgives the obstruction to the existence of a lift ψof ψ, this completes the
proof of Lemma 3.4.
Remark 3.5.For later use, we now analyze small extensions Rof R= W[[t]]/(pnt, t2) which are
themselves quotients of W[[t]]. Suppose Iis an ideal of W[[t]] that is contained in the ideal (pnt, t2)
such that the natural surjection ν:RRis a small extension as in (3.5). Since J= (pnt, t2)/I
is isomorphic to k, it follows that Icontains the product ideal
(pnt, t2)·(p, t) = (pn+1t, pt2, t3)
in W[[t]]. Now (pnt, t2)/(pn+1t, pt2, t3) is a two-dimensional vector space over kwith a basis given
by the classes of pntand t2. Since dimk((pnt, t2)/I) = 1 and (pn+1t, pt2, t3)I, there exist a, b W
such that
(3.8) I= (pn+1t, pt2, t3, apnt+bt2)
and at least one of aor bis a unit.
Suppose first that bis a unit. Then t2=b1apntin R= W[[t]]/I. Hence
(3.9) I= (pn+1t, t2+b1apnt)
since pt2=b1apn+1tIand t3=b1apnt2Wpt2I. Moreover,
(3.10) R= W[[t]]/I = W[[t]]/(pn+1t, t2+b1apnt) = W (Wt/Wpn+1t).
Now suppose bpW, so that amust be a unit. Then
(3.11) I= (pnt, pt2, t3)
since bt2Wpt2lies in I, so (apnt+bt2)bt2=apntIand ais a unit in W. Moreover,
(3.12) R= W[[t]]/I = W[[t]]/(pnt, pt2, t3) = W (Wt/Wpnt)(Wt2/Wpt2).
3.1. Proof that (ii) implies (i) in Theorem 3.2. Throughout this subsection, we assume that
condition (ii) of Theorem 3.2 holds. As before, if Lis a G-module we denote by ˜
Lthe Γ-module
which results by inflating Lvia the natural surjection π: Γ G.
Lemma 3.6. One has dimkH1,˜
M) = 1. The tangent space of the universal deformation ring
RW,˜
V)of ˜
Vhas dimension 1. The ring RW,˜
V)is a quotient of W[[t]].
Proof. Since Mis a pro jective kG-module, we have Hi(G, H0(K, ˜
M)) = Hi(G, M ) = 0 if i > 0.
Therefore the Hochschild-Serre spectral sequence for H1,˜
M) degenerates to give
(3.13) H1,˜
M) = H0(G, H1(K, ˜
M)) = H0(G, Hom(K, M )) = Hom(K, M )G.
Writing Kadditively and using that Mhas exponent p, we have from condition (ii)(b) of Theorem
3.2 that
(3.14) Hom(K, M )G= Hom(K/pK, M )G= Hom(Z/p)G(K/pK, M )
=k.
On putting together (3.13) and (3.14), we conclude from [15, Prop. 21.1] that there is a natural
isomorphism
(3.15) t˜
V=def Homkm
m2+pRW,˜
V), kH1,˜
M) = k
where t˜
Vis the tangent space of the deformation functor of ˜
Vand mis the maximal ideal of the
universal deformation ring RW,˜
V). This implies
dimkm
m2+pRW,˜
V)= 1
so there is a continuous surjection of W-algebras W[[t]] RW,˜
V).
Lemma 3.7. Let ρW,ρAand Rbe as in Remark 3.3. There exists a lift ρR: Γ GLd(R)of the
representation ˜ρ: Γ GLd(k)for ˜
Vsuch that ρRlies in a commutative diagram of the form (3.4)
where ψ:KMAis as in condition (ii)(c)of Theorem 3.2. Let γ:RW,˜
V)Rbe the unique
continuous W-algebra homomorphism corresponding to the isomorphism class of the lift ρR. Then
γis surjective. There is a W-algebra surjection µ: W[[t]] RW,˜
V)whose composition with γis
the natural surjection W[[t]] R= W[[t]]/(pnt, t2). The kernel of µis an ideal of W[[t]] contained
in (pnt, t2).
Proof. The obstruction to the existence of ρRis an element of H2(G, MA). This group is trivial
since MAis projective, so ρRexists. Since ρWis a lift of the matrix representation ρ:G
GLd(k) = Autk(V) over W, we find that ρRis a lift of ρπ= ˜ρover R.
The ring k[ǫ] of dual numbers over kis isomorphic to R/pR =k[[t]]/(t2), and γis surjective if
and only if it induces a surjection
(3.16) γ:RW, V )
m2+pRW, V )R
m2
R+pR =R
pR
where mis the maximal ideal of RW, V ). If γis not surjective, its image is k. Thus to prove that
γis surjective, it will suffice to show that the composition ρR/pR of ρRwith the natural surjection
GLd(R)GLd(R/pR) = GLd(k[ǫ]) is not a matrix representation of the trivial lift of ˜
Vover k[ǫ].
However, the kernel of the action of Γ on this trivial lift is KΓ, while ρR/pR is not trivial on K
because of condition (ii)(c) of Theorem 3.2. Hence γmust be surjective.
The tangent space of the deformation functor of Vis one dimensional by Lemma 3.6, so (3.16)
is in fact an isomorphism. Let rbe any element of RW, V ) such that γ(r) is the class of tin
R= W[[t]]/(pnt, t2). We then have a unique continuous W-algebra homomorphism µ: W[[t]]
RW, V ) which maps tto r. Since (γµ)(t) is the class of tin R, we se that γµis surjective. So
because γis an isomorphism, Nakayama’s lemma implies that µ: W[[t]] RW, V ) is surjective.
We now complete the proof that (ii) implies (i) in Theorem 3.2. Let R= W[[t]]/(pnt, t2) and let
ρR: Γ GLd(R) be the lift of ˜ρfrom Lemma 3.7. Let ψ:KMAbe the injective (Z/pn)G-
module homomorphism from condition (ii)(c) of Theorem 3.2. Since ρRlies in a commutative
diagram of the form (3.4) and ψand ρWare both injective, it follows that ρRis faithful.
Let R= W[[t]]/I be a small extension of Ras in Remark 3.5, so that ν:RRis the natural
surjection. Let M
Abe the kernel of the homomorphism GLd(R)GLd(W) resulting from the
composition Rν
RW. By Lemmas 3.4 and 3.7, it is enough to show that there is no group
homomorphism ψ:KM
Awhich lifts ψ.
Suppose to the contrary that such a homomorphism ψexists. Write Kadditively and M
A
multiplicatively. Define Sto be the union of {0}with the set of Teichm¨uller lifts in W = W(k) of
the elements of k. Let gKbe arbitrary. Then there exist unique
α0(g), α1(g),...,αn1(g)Matd(S)
such that
(3.17) ψ(g) = α0(g) + p α1(g) + ···+pn1αn1(g).
Moreover, since ψlifts ψ, we have
ψ(g)1 + t ψ(t) mod (pnt, t2) Matd(R).
By Remark 3.5, there exist a, b W such that Iis as in (3.8) and such that one of the alternatives
(3.9) or (3.11) holds. Suppose first that bis a unit in (3.8) and we have alternative (3.9). By (3.10),
it follows that there exists a unique β(g)Matd(S) such that
(3.18) ψ(g) = 1 + t α0(g) + pt α1(g) + ···+pn1t αn1(g) + pnt β(g).
If apW in (3.9), it follows that t2= 0 = pn+1 tin R. Therefore, since (pn)g= 0Kbecause of
condition (ii)(a) of Theorem 3.2, we have
(3.19) 1 = ψ(g)pn=1 + t α0(g) + pt α1(g) + ···+pn1t αn1(g) + pnt β(g)pn
= 1 + pnt α0(g)
when bis a unit and apW. Thus pnt α0(g) = 0. Since alternative (3.9) holds, this means that
α0(g) = 0, which implies by (3.17) that ψ(g)pMA. Since gwas an arbitrary element of K, this
is a contradiction to condition (ii)(c) of Theorem 3.2. Hence the case when bis a unit and apW
in (3.8) cannot occur.
If both band aare units in (3.8), then by (3.9) we have pn+1t= 0 and t2=b1apntin R.
Suppose his another element of K. Because pt2= 0 in R, it follows from (3.18) that
(3.20) ψ(g)·ψ(h)ψ(h)·ψ(g) = (b1a)pnt[α0(g)·α0(h)α0(h)·α0(g)] .
If bis not a unit in (3.8), then ahas to be a unit, and alternative (3.11) holds. By (3.12), it
follows that there exists a unique β(g)Matd(S) such that
(3.21) ψ(g) = 1 + t α0(g) + pt α1(g) + ···+pn1t αn1(g) + t2β(g).
Because pt2= 0 = t3in Rin this case, it follows from (3.21) that
(3.22) ψ(g)·ψ(h)ψ(h)·ψ(g) = t2[α0(g)·α0(h)α0(h)·α0(g)] .
Because Kis abelian, we must have ψ(h+g) = ψ(g+h), and thus ψ(g)·ψ(h)ψ(h)·ψ(g) = 0.
Therefore, it follows from (3.20) (resp. (3.22)) that
(3.23) α0(g)·α0(h)α0(h)·α0(g) mod pMatd(W) for all g, h K.
This implies by (3.17) that for all g, h Kwe have
(3.24) ψ(g)ψ(h)ψ(h)ψ(g) mod pMA
where stands for the composition of elements in MA= HomA(VA, VA). In other words, the first
case in condition (ii)(d) cannot occur. Therefore, we must have that p= 2 and that there exists
an element xKof order 2 such that ψ(x)ψ(x)6≡ 0 mod 2MA. Replacing g=xin (3.18)
(resp. (3.21)) and using that ψ(x)·ψ(x) = ψ(x+x) = ψ(0K) = 1 shows that in both cases
α0(x)·α0(x)0 mod pMatd(W). By (3.17), this means that ψ(x)ψ(x)pMA= 2MA. Since
this is a contradiction to condition (ii)(d) of Theorem 3.2, this completes the proof of (ii) implies
(i) in Theorem 3.2.
3.2. Proof that (i) implies (ii) in Theorem 3.2. Throughout this subsection, we assume that
condition (i) of Theorem 3.2 holds. Let ρW,ρAand R= W[[t]]/(pnt, t2) be as in Remark 3.3.
By assumption, RW,˜
V) is isomorphic to R. Since the natural surjection RW which sends t
to 0 is the unique morphism in ˆ
Cfrom Rto W, there exists a universal lift ρR: Γ GLd(R) of
˜ρ: Γ GLd(k) over Rsuch that ρRfollowed by GLd(R)GLd(W) is equal to ρWπ. This implies
that the image of Kunder ρRlies inside (1 + tMatd(R)). Let ψ:KMAbe the restriction of
ρRto Kfollowed by the isomorphism (1 + tMatd(R))
=MAfrom (3.2). We obtain that ρRlies in
a commutative diagram of the form (3.4).
Since ρRis faithful by assumption, ψis an injective group homomorphism. In particular, Kis an
abelian group which is annihilated by pn, and hence a (Z/pn)G-module. As seen in Remark 3.3, the
conjugation action of ρW(G)GLd(W) on (1 + tMatd(R))factors through the homomorphism
ρW(G)ρA(G)GLd(A) = AutA(VA). Since this action coincides with the action of Gon
MA= HomA(VA, VA) in (3.2) coming from the action of Gon VAvia ρA:GGLd(A), it follows
that ψis an injective homomorphism in Hom(Z/pn)G(K, MA). Let ρR/pR be the composition of ρR
with the natural surjection GLd(R)GLd(R/pR) = GLd(k[ǫ]). If the image of ψis contained in
pMA, it follows that ρR/pR factors through G. Since Vis a projective kG-module, this implies that
ρR/pR is a matrix representation of the trivial lift of ˜
Vover k[ǫ]. Since R/pR
=k[ǫ] is the universal
deformation ring associated to mod plifts of ˜ρ, this is a contradiction. Hence the image of ψis not
contained in pMA, giving condition (ii)(c) of Theorem 3.2.
Writing Kadditively, it follows from Hypothesis 2.1 that K/pK is a finitely generated elementary
abelian p-group. Since K/pK is the Frattini quotient of K, this implies that Kis finitely generated
as a Z/pn-module, which is condition (ii)(a) of Theorem 3.2.
By assumption, R/pR
=k[ǫ], which implies H1,˜
M)
=ksince R=RW,˜
V). Because Mis a
projective kG-module by Hypothesis 3.1, we see as in (3.13) that H1,˜
M) = Hom(K, M )G. Since
Hom(K, M )G= Hom(Z/p)G(K/pK, M ), this gives condition (ii)(b) of Theorem 3.2.
Suppose condition (ii)(d) of Theorem 3.2 fails. We will show that then ρRcan be lifted from R
to the small extension R= W[[t]]/I , where
I= (pnt, pt2, t3),
so we are in case (3.11) of Remark 3.5. Let J= (pnt, t2)/I. By Lemma 3.4, it is enough to show
that ψcan be lifted to a homomorphism ψ:KM
Awhere M
Alies in a short exact sequence
1(1 + Matd(J))M
AMA1.
In what follows, we write Kadditively and M
Amultiplicatively. Moreover using (3.2), we identify
MAwith (1 + tMatd(R)).
If p6= 2, define ψ:KM
Ato be the exponential function of (t ψ(g)) mod I. In other words,
ψ(g) = 1 + t ψ(g) + t2
2[ψ(g)ψ(g)].
Since we assume that condition (ii)(d) fails, i.e. the image of ψis commutative mod pMAwith
respect to map composition, it follows that ψis a group homomorphism which lifts ψ.
If p= 2, we use that Kis a finitely generated (Z/2n)-module. Let x1,...,xrbe a minimal set
of generators of K. We will show that ψmay be defined by letting
(3.25) ψ(xj) = 1 + t ψ(xj)
for 1 jrand by extending ψadditively to all of K. Since ψis a group homomorphism and
2t2= 0 in R, we have
ψ(xj)2= 1 + t ψ(2 xj) + t2[ψ(xj)ψ(xj)] and
ψ(xj)2i= 1 + t ψ((2i)xj) for 2 in.
For p= 2, the failing of condition (ii)(d) means that not only the image of ψis commutative mod
2MAwith respect to map composition, but also that ψ(x)ψ(x)0 mod 2MAfor all xKof
order 2. Hence it follows that if xjhas order 2 then ψ(xj)2= 1. Therefore, we can extend (3.25)
additively to obtain a group homomorphism ψ:KM
Awhich lifts ψ.
This completes the proof of (i) implies (ii) in Theorem 3.2.
4. The inverse problem for $R = W[[t]]/(p^n t, t^2)$

In this section, we use Theorem 3.2 to prove Theorem 1.3. We first establish a special case.

Theorem 4.1. Let $k = \mathbb{F}_p$, $W = W(k) = \mathbb{Z}_p$, $n \ge 1$ and $A = W/p^n W = \mathbb{Z}/p^n$. Let $E = \mathbb{F}_{p^2}$ and let $G_0 = \mathrm{Gal}(E/k)$. Define $G = E^\times \rtimes G_0$, where $G_0$ acts on $E^\times$ by restricting the natural action of $G_0$ on $E$ to $E^\times$. The natural action of $G_0$ and $E^\times$ on $V = E$ makes $V$ into a projective and simple $kG$-module. The endomorphism ring $M = \mathrm{End}_k(V)$ is isomorphic to the twisted group ring $E[G_0]$ as $k$-algebras. There exists a simple projective $kG$-module $V'$ such that
$$\text{(4.26)} \qquad M \cong V' \oplus kG_0$$
as $kG$-modules. Let $K = V'_A$ be a projective $AG$-module such that $k \otimes_A V'_A \cong V'$ as $kG$-modules. Let $\Gamma$ be the semidirect product $K \rtimes_\delta G$ where $\delta: G \to \mathrm{Aut}(K)$ is the group homomorphism given by the $G$-action on the $(\mathbb{Z}/p^n)G$-module $K = V'_A$. If $\tilde{V}$ is the inflation of $V$ to a $k\Gamma$-module, then the universal deformation ring $R_W(\Gamma, \tilde{V})$ is isomorphic to $W[[t]]/(p^n t, t^2)$.
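Before turning to the proof, here is the smallest instance spelled out, assembled from the first lines of the proof below (so no claims beyond what is stated there): for $p = 2$,
$$E = \mathbb{F}_4,\quad E^\times \cong \mathbb{Z}/3,\quad G_0 = \mathrm{Gal}(\mathbb{F}_4/\mathbb{F}_2) \cong \mathbb{Z}/2,\quad G = E^\times \rtimes G_0 \cong S_3,$$
and $V = \mathbb{F}_4$ is the $2$-dimensional simple projective $\mathbb{F}_2 S_3$-module, so the theorem produces a finite group $\Gamma = K \rtimes_\delta G$ with $R_{\mathbb{Z}_2}(\Gamma, \tilde{V}) \cong \mathbb{Z}_2[[t]]/(2^n t, t^2)$.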
Proof. Let VAbe a projective AG-module such that kAVA
=Vas kG-modules. Let MA=
EndA(VA). We prove that G,K,Mand MAsatisfy the conditions in Theorem 3.2(ii).
If p= 2, then Gis isomorphic to the symmetric group S3on 3 letters and Vis the unique simple
projective kG-module, up to isomorphism. If p3, then the order of Gis relatively prime to pand
Vis also a simple pro jective kG-module.
Since V=Eis a Galois algebra over kwith Galois group G0, it follows that M= Endk(V) is
isomorphic to the twisted group ring E[G0] as k-algebras. This isomorphism defines a kG-module
structure on E[G0] by conjugation as follows. Let G0=hσi, let E=hζiand let x=b0+b1σ
E[G0], so b0, b1E. Then σ.x =σxσ1= (b0)p+ (b1)pσand ζ .x =ζ1=b0+b1ζ1pσ.
We have E[G0] = E+as k-vector spaces. The above G-action on E[G0] implies that both
Eand are kG-submodules of E[G0]. It follows for example from the normal basis theorem that
E
=kG0as kG-modules, where EGacts trivially by conjugation on E. Thus to prove (4.26) it
suffices to show that V=is a simple projective kG-module. Since Vis a projective kG-module,
so are M,E[G0] and V. Considering the action of E=hζion Eσ, we see that the action of ζ
has eigenvalue ζ1p. Since ζ1plies in Fp2Fp, it follows that V=Eσ is a simple projective
kG-module.
For all p, let K=V
A, so Kis a finitely generated (Z/pn)G-module, giving condition (a) of
Theorem 3.2(ii). Define Γ = K×δGwhere δ:GAut(K) is the group homomorphism given by
the G-action on the (Z/pn)G-module K=V
A. Since by our above calculations, M
=(K/pK)kG0
as (Z/p)G-modules, it follows that
Hom(Z/p)G(K/pK, M )
=Hom(Z/p)G(V, V kG0)
=k
giving condition (b) of Theorem 3.2(ii). Since K=V
Aand MAare projective AG-modules, it follows
that HomAG(K, MA) is a projective A-module Hsuch that H/pH = HomkG (K/pK, M )
=k.
Therefore, HomAG(K, MA)
=Aand there exists an injective AG-module homomorphism ψ
HomAG(K, MA) whose image is not contained in pMA. Since A=Z/pn, this gives condition (c)
of Theorem 3.2(ii). By the above calculations in the twisted group algebra E[G0], we see that the
image of ψmod pMAis isomorphic to Eσ. Since for example (σ)(ζσ) = ζp6=ζ= (ζ σ)(σ), we
obtain that the image of ψmod pMAis not commutative with respect to the multiplication in the
ring MA. This gives condition (d) of Theorem 3.2(ii). Therefore, it follows from Theorem 3.2 that
RW,˜
V) is isomorphic to W[[t]]/(pnt, t2).
Remark 4.2. If $p > 3$, we can replace the group $G$ in Theorem 4.1 by the symmetric group $S_3$ and $V$ by the $2$-dimensional simple projective $kS_3$-module. It follows then that $M = \mathrm{Hom}_k(V, V) \cong k[\mathbb{Z}/2] \oplus V$ as $kG$-modules, which means that we can take $V' = V$ and $K = V_A$ in this case.

Remark 4.3. As mentioned in the introduction, in subsequent work on Question 1.1, Rainone proved in [16] that if $p > 3$ and $1 \le m \le n$, the ring $\mathbb{Z}_p[[t]]/(p^n, p^m t)$ is a universal deformation ring relative to $W = \mathbb{Z}_p$. These rings and the rings of Theorems 1.3 and 4.1 form disjoint sets of isomorphism classes. Rainone's work gave the first negative answers to two questions of Bleher and Chinburg (Question 1.2 of [5] and Question 1.1 of [3]). Later we observed that Theorem 4.1 also gives a negative answer to Question 1.2 of [5] when $p > 2$.
Completion of the Proof of Theorem 1.3. Let k,p,Wand nbe as in Theorem 1.3. By Theorem
4.1, there is a finite group Γ and a representation V0of Γ over Fpsuch that EndFpG(V0) = Fp
and the universal deformation ring RZp, V0) is isomorphic to Zp[[t]]/(pnt, t2). Let V=kFp
V0. Then EndkG (V)
=kFpEndFpG(V0)
=k. By Theorem 2.2, the universal deformation ring
RW, V ) is isomorphic to the completion of W ⊗ZpZp[[t]]/(pnt, t2) with respect to its maximal
ideal. This completion is isomorphic to W[[t]]/(pnt, t2). It remains to show that this ring is not a
complete intersection if pnW 6={0}. This is clear if Wis regular. In general, if one assumes that
W[[t]]/(pnt, t2) is a complete intersection, then Wis a quotient S/I for some regular complete local
commutative Noetherian ring Sand a proper ideal Iof S. If S=S[[t]], then W[[t]]/(pnt, t2) = S/I
when Iis the ideal of Sgenerated by I,pntand t2. Since dim W[[t]]/(pnt, t2) = dim W, we obtain
by [13, Thm. 21.1] that
(4.27) dimk(I/mSI) = dim Sdim (S/I) = dim S+ 1 dim (S/I )dimk(I /mSI) + 1.
Using power series expansions, we see that dimk(I/mSI) = dimk(I/mSI) + 2 if pnW 6={0}. Since
this contradicts (4.27), W[[t]]/(pnt, t2) is not a complete intersection if pnW 6={0}. This completes
the proof of Theorem 1.3.
Remark 4.4.To construct more examples to which Theorem 3.2 applies, there are two fundamental
issues. One must construct a group Gand a projective kG-module Vfor which both the left kG-
module structure and the ring structure of M= Homk(V, V ) can be analyzed sufficiently well to
be able to produce a G-module Khaving the properties in the Theorem. When one can identify
the ring Homk(V, V ) with a twisted group algebra, as in the proof of Theorem 4.1, this can be very
useful in checking condition (ii)(d) of Theorem 3.2. A natural approach to analyzing the kG-module
structure of Mis to note that the Brauer character ξMof Mis the tensor product ξVξVof
the Brauer characters of Vand its k-dual V. For example, if Vis induced from a representation
Xof a subgroup Hof G, then ξVis given by the usual formula for the character of an induced
representation. If dimk(X) = 1, the analysis of the ring structure of Mbecomes a combinatorial
problem using Xand coset representatives of Hin G.
5. Appendix: Proof of Theorem 2.2
We assume the notation in the statement of Theorem 2.2. Let R=RW, ρ). Recall that
Ω = WWRand Ris the completion of Ω with respect to its unique maximal ideal m. Define
ˆ
Cto be the category of all complete local commutative Noetherian W-algebras with residue field
k. Let ν: Γ GLd(R) be a versal lift of ρover R, and let ν: Γ GLd(R) be the lift of ρover
Rdefined by ν(g) = (1 ν(g)i,j )1i,jdfor all gΓ.
The first step is to show that if AOb(C) is an Artinian W-algebra with residue field kand
τ: Γ GLd(A) is a lift of ρover A, then there is a morphism α:RAin ˆ
Csuch that
[τ] = [αν]. Since Ais Artinian, Hom ˆ
C(R, A) is equal to the space Homcont (Ω, A) of continuous
W-algebra homomorphisms which induce the identity map on the residue field k. Because of
Hypothesis 2.1, one can find a finite set SΓ such that τ(S) is a set of topological generators for
the image of τ. Since ρand ρhave the same image in GLd(k)GLd(k), there exists for each gS
a matrix t(g)Matd(W) such that all entries of the matrix τ(g)t(g) lie in the maximal ideal mA
of A. Let TmAbe the finite set of all matrix entries of τ(g)t(g) as granges over S. Then
there is a continuous homomorphism f:W[[x1,...,xm]] Awith m= #Tand {f(xi)}m
i=1 =T.
Since Ahas the discrete topology, the image Bof fmust be a local Artinian W-algebra with
residue field k. Since τ(S) is a set of topological generators for the image of τ, it follows that τ
defines a lift of ρover B. Because ν: Γ GLd(R) is a versal lift of ρover the versal deformation
ring R=RW, ρ) of ρ, there is a morphism β:RBin ˆ
Csuch that τ: Γ GLd(B) is conjugate
to βνby a matrix in the kernel of πB: GLd(B)GLd(B/mB) = GLd(k). Let β:RA
be the composition of βwith the inclusion BA. Define α:RAto be the morphism in
ˆ
Ccorresponding to the continuous W-algebra homomorphism Ω = WWRAwhich sends
wrto w·β(r) for all w∈ Wand rR. It follows that αsatisfies [τ] = [αν].
The second step is to show that when k[ǫ] is the ring of dual numbers over k, then Hom ˆ
C(R, k[ǫ])
is canonically identified with the set Defρ(k[ǫ]) of deformations of ρover k[ǫ]. Since k[ǫ] is Ar-
tinian, it suffices to show that Homcont(Ω, k [ǫ]) is identified with Def ρ(k[ǫ]). Let
(5.28) T(W,Ω) = m
m2
+ Ω ·mW
and T(W, R) = mR
m2
R+R·mW
so that we have natural isomorphisms Homcont(Ω, k [ǫ])
=Homk(T(W,Ω), k) and Hom ˆ
C(R, k[ǫ])
=
Homk(T(W, R), k). Since Ad(ρ) = kkAd(ρ), we have from [15, Prop. 21.1] that there are natural
isomorphisms
Hence it suffices to show that the natural homomorphism µ:kkT(W, R)T(W,Ω) is an
isomorphism of k-vector spaces. Since mWis finitely generated, one can reduce to the case when
W=k, by considering generators αof mWand successively replacing Wby W/(Wα) and Rby
R/(). One then divides Wand further by ideals generated by generators for mWto be able
to assume that W=k. However, the case when W=kand W=kis obvious, since then
T(k,Ω) = m/m2
=kkmR/m2
R=kT(k, R). This completes the proof of Theorem 2.2.
References
[1] F. M. Bleher, Universal deformation rings and dihedral defect groups. Trans. Amer. Math. Soc. 361 (2009),
3661–3705.
[2] F. M. Bleher, Universal deformation rings and generalized quaternion defect groups. Adv. Math. 225 (2010),
1499–1522.
[3] F. M. Bleher and T. Chinburg, Universal deformation rings and cyclic blocks. Math. Ann. 318 (2000), 805–836.
[4] F. M. Bleher and T. Chinburg, Universal deformation rings need not be complete intersections. C. R. Math.
Acad. Sci. Paris 342 (2006), 229–232.
[5] F. M. Bleher and T. Chinburg, Universal deformation rings need not be complete intersections. Math. Ann. 337
(2007), 739–767.
[6] F. M. Bleher, T. Chinburg and B. de Smit, Deformation rings which are not local complete intersections, March
2010. arXiv:1003.3143
[7] G. Böckle, Presentations of universal deformation rings. In: L-functions and Galois representations, 24–58, London Math. Soc. Lecture Note Ser., 320, Cambridge Univ. Press, Cambridge, 2007.
[8] J. Byszewski, A universal deformation ring which is not a complete intersection ring. C. R. Math. Acad. Sci.
Paris 343 (2006), 565–568.
[9] T. Chinburg, Can deformation rings of group representations not be local complete intersections? In: Problems from the Workshop on Automorphisms of Curves. Edited by Gunther Cornelissen and Frans Oort, with contributions by I. Bouw, T. Chinburg, Cornelissen, C. Gasbarri, D. Glass, C. Lehr, M. Matignon, Oort, R. Pries and S. Wewers. Rend. Sem. Mat. Univ. Padova 113 (2005), 129–177.
[10] H. Darmon, F. Diamond and R. Taylor, Fermat’s Last Theorem. In : R. Bott, A. Jaffe and S. T. Yau (eds),
Current developments in mathematics, 1995, International Press, Cambridge, MA., 1995, pp. 1–107.
[11] B. de Smit and H. W. Lenstra, Explicit construction of universal deformation rings. In: G. Cornell, J. H.
Silverman and G. Stevens (eds), Modular Forms and Fermat’s Last Theorem (Boston, MA, 1995), Springer-
Verlag, Berlin-Heidelberg-New York, 1997, pp. 313–326.
[12] A. Grothendieck, Éléments de géométrie algébrique, Chapitre IV, Quatrième Partie. Publ. Math. IHES 32 (1967), 5–361.
[13] H. Matsumura, Commutative Ring Theory. Cambridge Studies in Advanced Mathematics, Vol. 8, Cambridge
University Press, Cambridge, 1989.
[14] B. Mazur, Deforming Galois representations. In: Galois groups over Q(Berkeley, CA, 1987), Springer-Verlag,
Berlin-Heidelberg-New York, 1989, pp. 385–437.
[15] B. Mazur, An introduction to the deformation theory of Galois representations. In: G. Cornell, J. H. Silverman
and G. Stevens (eds), Modular Forms and Fermat’s Last Theorem (Boston, MA, 1995), Springer-Verlag, Berlin-
Heidelberg-New York, 1997, pp. 243–311.
[16] R. Rainone, On the inverse problem for deformation rings of representations. Master’s thesis, Universiteit Leiden,
B. de Smit thesis advisor, June 2010. http://www.math.leidenuniv.nl/en/theses/205/
[17] M. Schlessinger, Functors of Artin Rings. Trans. of the AMS 130 (1968), 208–222.
[18] J. P. Serre, Corps Locaux. Hermann, Paris, 1968.
[19] A. Wiles, Modular elliptic curves and Fermat’s last theorem. Ann. of Math. 141 (1995), 443–551.
F.B.: Department of Mathematics, University of Iowa, Iowa City, IA 52242-1419
T.C.: Department of Mathematics, University of Pennsylvania, Philadelphia, PA 19104-6395
B.deS: Mathematisch Instituut, University of Leiden, P.O. Box 9512, 2300 RA Leiden, The Netherlands
• ##### On Singular Equivalences of Morita Type and Universal Deformation Rings for Gorenstein Algebras
• "Traditionally, universal deformation rings are studied when Λ is equal to a group algebra kG, where G is a finite group and k has positive characteristic p (see e.g., [10, 12, 13, 14, 15, 16, 17, 18, 19] and their references). This approach has led to the solution of various open problems, e.g., the construction of representations whose universal deformation rings are not local complete intersections (see [10, 14, 15]). On the other hand, in [20, 22, 43] , universal deformation rings for certain selfinjective algebras, which are not Morita equivalent to a block of a group algebra, were discussed. "
ABSTRACT: Let $\Lambda$ be a finite-dimensional algebra over a fixed algebraically closed field $\mathbf{k}$ of arbitrary characteristic, and let $V$ be a finitely generated $\Lambda$-module. It follows from results obtained by F.M. Bleher and the second author that $V$ has a well-defined versal deformation ring $R(\Lambda, V)$, which is a complete local commutative Noetherian $\mathbf{k}$-algebra with residue field $\mathbf{k}$. The second author also proved that if $\Lambda$ is a Gorenstein $\mathbf{k}$-algebra and $V$ is a Cohen-Macaulay $\Lambda$-module whose stable endomorphism ring is isomorphic to $\mathbf{k}$, then $R(\Lambda, V)$ is universal. In this article we prove that the isomorphism class of a versal deformation ring is preserved under singular equivalence of Morita type between Gorenstein $\mathbf{k}$-algebras. These singular equivalences of Morita type were introduced by X. W. Chen and L. G. Sun in an unpublished manuscript and then discussed by G. Zhou and A. Zimmermann in an article entitled "On singular equivalences of Morita type", which was published in J. Algebra during 2013.
Full-text · Article · Aug 2016
• ##### Universal deformation rings for a class of self-injective special biserial algebras
• "3.1.2]). This approach has recently led to the solution of various open problems, e.g., the construction of representations whose universal deformation rings are not local complete intersections (see [3, 6, 7]). On the other hand, in [10, 11, 21], universal deformation rings for certain self-injective algebras, which are not Morita equivalent to a block of a group algebra, were discussed . "
ABSTRACT: Let $\mathbf{k}$ be an algebraically closed field, let $\Lambda$ be a finite dimensional $\mathbf{k}$-algebra and let $V$ be a $\Lambda$-module with stable endomorphism ring isomorphic to $\mathbf{k}$. If $\Lambda$ is self-injective then $V$ has a universal deformation ring $R(\Lambda,V)$, which is a complete local commutative Noetherian $\mathbf{k}$-algebra with residue field $\mathbf{k}$. Moreover, if $\Lambda$ is also a Frobenius $\mathbf{k}$-algebra then $R(\Lambda,V)$ is stable under syzygies. We use these facts to determine the universal deformation rings of string $\Lambda_N$-modules with stable endomorphism ring isomorphic to $\mathbf{k}$, where $N\geq 1$ and $\Lambda_N$ is a self-injective special biserial $\mathbf{k}$-algebra whose Hochschild cohomology ring is a finitely generated $\mathbf{k}$-algebra as proved by N. Snashall and R. Taillefer.
Full-text · Article · May 2016
• ##### On Universal Deformation Rings for Gorenstein Algebras
• "Traditionally, universal deformation rings are studied when Λ is equal to a group algebra kG, where G is a finite group and k has positive characteristic p (see e.g., [9, 11, 12, 13, 14, 15, 16, 17, 18] and their references). This approach has led to the solution of various open problems, e.g., the construction of representations whose universal deformation rings are not local complete intersections (see [9, 13, 14]). On the other hand, in [19, 20, 33], universal deformation rings for certain self-injective algebras, which are not Morita equivalent to a block of a group algebra, were discussed. "
ABSTRACT: Let $\mathbf{k}$ be an algebraically closed field, and let $\Lambda$ be a finite dimensional $\mathbf{k}$-algebra. We prove that if $\Lambda$ is a Gorenstein algebra, then every finitely generated Cohen-Macaulay $\Lambda$-module $V$ whose stable endomorphism ring is isomorphic to $\mathbf{k}$ has a universal deformation ring $R(\Lambda,V)$, which is a complete local commutative Noetherian $\mathbf{k}$-algebra with residue field $\mathbf{k}$, and which is also stable under taking syzygies. We investigate a particular non-self-injective Gorenstein algebra $\Lambda_0$, which is of infinite global dimension and which has exactly three isomorphism classes of finitely generated indecomposable Cohen-Macaulay $\Lambda_0$-modules $V$ whose stable endomorphism ring is isomorphic to $\mathbf{k}$. We prove that in this situation, $R(\Lambda_0,V)$ is isomorphic either to $\mathbf{k}$ or to $\mathbf{k}[[t]]/(t^2)$.
Article · Apr 2016
|
2016-09-30 04:52:30
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9436042308807373, "perplexity": 4462.472514338471}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738662022.71/warc/CC-MAIN-20160924173742-00082-ip-10-143-35-109.ec2.internal.warc.gz"}
|
https://math.stackexchange.com/questions/2673539/is-there-a-31-dimensional-manifold-with-496-differential-structures/2674105
|
Is there a $31$-dimensional manifold with 496 differential structures?
Milnor found a $7$-dimensional sphere with 28 differential structures.
Is there a $31$-dimensional manifold with 496 differential structures?
• Why on earth do you think that the 28 differentiable structures of $S^7$ have some relation with the Mersenne numbers? – Martín-Blas Pérez Pinilla Mar 2 '18 at 13:31
• I don't see how that comment could contribute to the discussion. Do the 28 structures have a connection with mersenne numbers, or don't they? And if they don't, why not just say so? – MJD Mar 2 '18 at 13:40
It's "the sphere", or "any sphere", not "a sphere". Most likely you observed that 7 is a Mersenne prime and 28 is the associated perfect number. There's indeed a connection, though maybe not exactly what you expect: the number of smooth structures on the $(4k-1)$-sphere is divisible by $2^{2k-2}(2^{2k-1}-1)$; see Wikipedia. When $k=2$ you happen to have $4k-1=2^{2k-1}-1=7$. On the 31-dimensional sphere ($k=8$), there are $7767211311104=4\cdot3617\cdot2^{2k-2}(2^{2k-1}-1)$ smooth structures, where 3617 is the numerator of $|4B_{16}/8|=3617/1020$ (here $B_{16}=-3617/510$). On the other hand, there are $992=2\cdot 496$ smooth structures on the 11-sphere.
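As a quick sanity check on the numbers quoted above, here is a small Python sketch (not from the original answer) that verifies the divisibility factor $2^{2k-2}(2^{2k-1}-1)$ for the cases mentioned: $k=2$ (the 7-sphere, 28 structures), $k=8$ (the 31-sphere, 7767211311104 structures with the extra factor $4\cdot3617$), and $k=3$ (the 11-sphere, where the factor is 496).

```python
# Verify the counts quoted in the answer: the number of smooth structures on
# S^(4k-1) is divisible by 2^(2k-2) * (2^(2k-1) - 1).

def divisor(k):
    """The factor 2^(2k-2) * (2^(2k-1) - 1) from the answer."""
    return 2 ** (2 * k - 2) * (2 ** (2 * k - 1) - 1)

# k = 2: dimension 4k-1 = 7, and the factor itself is 4 * 7 = 28 (Milnor's count).
assert divisor(2) == 28

# k = 8: dimension 31; the quoted total is 4 * 3617 times the factor.
assert 4 * 3617 * divisor(8) == 7767211311104

# k = 3: dimension 11; the factor is 496, which divides the quoted 992 = 2 * 496.
assert divisor(3) == 496 and 992 % divisor(3) == 0

print("all counts consistent")
```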
|
2020-03-30 00:48:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8079569935798645, "perplexity": 407.4644447602161}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370496330.1/warc/CC-MAIN-20200329232328-20200330022328-00354.warc.gz"}
|
http://math.stackexchange.com/questions/84303/differentiability
|
# Differentiability
I know this is a stupid question, but it has been a long time since I did analysis. Could somebody show me how to show rigorously that $f(x)= |x|_\mathrm{eucl}^2$ is differentiable for all $x\in \mathbb{R}^n$? I remember that the definition of differentiability asks whether there exists a linear map $L$ s.t. ${|f(x+\epsilon)-f(x)-L(\epsilon)|\over|\epsilon|}\to0$ as $|\epsilon|\to0$. But is it necessary to find $L$ beforehand, or is there some other way?
-
You can invoke theorems: here, the partial derivatives of $f$ are all continuous, $\frac{\partial f}{\partial x_i} = 2x_i$, so the function is differentiable. – Arturo Magidin Nov 21 '11 at 18:02
You do not have to use the definition. In your case it suffices to show that $f$ is continuously differentiable with respect to every coordinate. Try to write $f$ using such coordinates $x_1,\dots,x_n$. Calculating the partial derivatives should be easy after that. – Matthias Klupsch Nov 21 '11 at 18:02
Thanks, Arturo and Matthias! – hank Nov 21 '11 at 18:16
• Your $f(x)=|x|^2= x_1^2+x_2^2+\cdots+x_n^2$ is a polynomial in the coordinates and is therefore differentiable.
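A worked version of the hint in the comments and the answer (my own filling-in, not part of the original posts): for $f(x) = |x|^2 = \langle x, x\rangle$ the candidate linear map can be read off directly by expanding the square,
$$f(x+\epsilon) - f(x) = \langle x+\epsilon, x+\epsilon\rangle - \langle x, x\rangle = 2\langle x, \epsilon\rangle + |\epsilon|^2,$$
so taking $L(\epsilon) = 2\langle x, \epsilon\rangle$ gives
$$\frac{|f(x+\epsilon) - f(x) - L(\epsilon)|}{|\epsilon|} = \frac{|\epsilon|^2}{|\epsilon|} = |\epsilon| \to 0,$$
which is exactly the definition quoted in the question, and $L$ matches the partial derivatives $\partial f/\partial x_i = 2x_i$ mentioned in the first comment.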
|
2016-07-26 06:44:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.957515299320221, "perplexity": 137.436854191609}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824756.90/warc/CC-MAIN-20160723071024-00308-ip-10-185-27-174.ec2.internal.warc.gz"}
|
http://askubuntu.com/questions/421373/commonbackend-the-md5-of-the-metalink-does-match?answertab=votes
|
CommonBackend: The md5 of the metalink does match
I am new to Linux and I am trying to install Ubuntu 12.04 on my PC. After downloading it and "burning" it onto a USB stick, I try to install it. But there is an error during the installation: "Could not retrieve the required installation files". It also tells me to check another file. That file contains something like this:
ERROR CommonBackend: The md5 of the metalink does match
ERROR CommonBackend: Cannot authenticate the metalink file, it might be corrupt
After that there is also another error which says:
ERROR CommonBackend: Invalid md5 for ISO C:\ubuntu\install\installation.iso (14ad92270218a8925d802b3d3b6e140f != 6c086700fd56a27a09b5de552ae054fd)
Any ideas? What seems to be the problem?
-
Same thing when I try to install earlier version 10.04 from USB. – Michael Feb 15 at 21:15
Exactly at what moment of the installation it shows this? – Braiam Feb 15 at 22:08
At the end. Just before finishing. I should say that I try to install Linux from wubi while on windows. I do not restart and change bios settings to run from USB – Michael Feb 15 at 22:15
Have you downloaded again and burned another disk? That's the only issue I find. – Braiam Feb 15 at 22:29
I have tried Ubuntu 10, 12, and 13 on USB and DVDs. I cannot find the problem. Am I doing something wrong? – Michael Feb 15 at 22:32
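Not part of the original thread, but since the log compares two MD5 sums, a minimal way to check whether the downloaded ISO itself is corrupt is to hash it locally and compare against the value the installer expected (6c086700fd56a27a09b5de552ae054fd, taken from the error message). The file path below is the one shown in that message and is only a placeholder for wherever the ISO actually lives; this is a sketch of the check, not an official Ubuntu tool.

```python
# Sketch: compare the MD5 of a downloaded ISO with the expected hash from the wubi log.
# Adjust ISO_PATH to point at your own download location.
import hashlib

ISO_PATH = r"C:\ubuntu\install\installation.iso"   # path from the error message; change as needed
EXPECTED_MD5 = "6c086700fd56a27a09b5de552ae054fd"   # value the installer expected, per the log

def file_md5(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so large ISOs do not need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = file_md5(ISO_PATH)
print("expected:", EXPECTED_MD5)
print("actual:  ", actual)
print("match" if actual == EXPECTED_MD5 else "mismatch -- re-download the ISO or re-burn the USB")
```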
|
2014-08-27 17:18:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3890555799007416, "perplexity": 5500.23288522123}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500829661.96/warc/CC-MAIN-20140820021349-00270-ip-10-180-136-8.ec2.internal.warc.gz"}
|
https://codereview.stackexchange.com/tags/jquery-ui/hot
|
# Tag Info
21
This part of the review is focused on the Javascript aspect of the answer I will be really light on this review. I will try to make it short and quick. You have some mixed quotes. Consider the following piece of code: $("#dialog-confirm").dialog({ autoOpen: false, resizable: false, height: 'auto', width: '400px', modal: true, [...... 11 This review is mostly about improving the implementation. I don't know JavaScript enough to answer your big questions. Try to catch @konijn or @flambino in the 2nd monitor chat room. Naming The first most noticeable thing to me is the naming of variables. I think the convention is to use PascalCase for classes, and camelCase for variables. JSHint If you ... 8 Chess clock anyone? :D HTML: <!-- Never assume just one. Prepare for more than one always. With that, we use classes --> <div class="stopwatch" data-autostart="false"> <div class="time"> <span class="hours"></span> : <span class="minutes"></span> : <span class="seconds"></... 7 (The following is about the HTML only.) Identation The identation of your markup is all over the place. I suggest identing by atleast two spaces. I prefer four spaces, though. I’ll append a fixed version below. Space Use it. Separate distinct blocks of code with one or two newlines. Don’t clump elements together like <label>content</label>&... 6 The semicolons after the closing braces are really unnecessary, and the indentation is a bit off here: if (yDegrees < maxRotation) { yDegrees = maxRotation; }; if (yDegrees > 0) { yDegrees = 0; }; It would be better this way: if (yDegrees < maxRotation) { yDegrees = ... 6 UX (User Experience) I'll start with a few possible improvements to the User Experience functionality. In order to offer the best possible interface you have to provide the same or similar experience in all browsers, simply listing something isn't supported, in my mind, is lazy and not a great UX/UI option. I'd look into webshims or a similar polyfill. ... 6 I would suggest loading the external file and initializing the dialog prior to the event...kind of like:$(document).ready(function(){ // make the dialog $("#dialog-confirm").load("/help/nameSearch.htm").dialog({ autoOpen: false, resizable: false, height: screen.height - 300, position: { my: ... 5 First off, don't use the onclick attribute. You have jQuery, so use it to set the appropriate event handler:$(".accordionHeadingDiv").click(function (event) { ... }); That will add a click-event handler to all the headings. Now, as far as I can tell, your code then removes the active class from all other accordion elements, adds it to the one that was ...
5
I've worked for a media company where ads had to run with their own version of JavaScript libraries, in the end we preferred to let ads run in their own iframe. Especially since some ads required access to document.write which creates havoc to your content ;) It is a bit scary that your solution has to modify jquery-ui.js, I would be very (very) hesitant to ...
5
Formatting First of all with the excessive whitespace at the beginning of lines, and the indenting that seems to follow no logic, this is extremely difficult to read. For example, this would be so much better: $('input[name=edit_spitzname]').change(function() { if ($('#edit_spitzname').val().length > 0 && $('#edit_spitzname').val().length &... 5 Since you're using some ES6+ features, I'd definitely recommend looking into others, such as let, arrow functions, and so on. This website is a good start, and you can find more information by searching "javascript es6". Adding onto the last point, whenever you don't change a variable, prefer using const for consistency's sake. If you do need to change a ... 4 Some observations: ResolveJqueryAndLoadAdditionalJsAndCss is too long a function name ResolveJqueryAndLoadAdditionalJsAndCss lies about what it does jQuery version '1.10.2' seems awfully restrictive, are you sure ? You copy pasted the code to load a css file, use a function console.log in production code is bad functions in the prototype and variables ... 4 I am guessing that the '.volume' elements are like volume indicators, the louder the sound, the more you show. You can iterate over the '.volume' elements, compare the index / elementCount to ui.value and decide to hide or show per that comparison. So, something like :$('.volumeSlider').slider({ value: 1, orientation: "vertical", range: "min"...
4
It'd definitely be nice to get rid of the inlined style attributes. Since you've already ID'ed the sub-menu pretty extensively, it's pretty easy to add this to the styles you already have. E.g. #privacy-menu .glyphicon { font-size: 10px; } gets rid of the inlined font-size styles. You can also do the toggling by adding a class to the #privacy-menu, ...
4
Not sure if it is helpful, but since we are on codereview, allow me to give it a shot. there is a progress element available in HTML5 that works very easily and is fairly well supported (http://caniuse.com/progressmeter), except for in IE off course. I would suggest using this in stead of the (heavy) jquery-ui you use now. When writing js code, avoid ...
4
With only three different items I don't think there's much benefit from iterating through a list and instantiating the multi-selects automatically. Your version IMO is much more legible than a version iterating through an embedded data structure. Although with lots of selects this could change fast. You could take the approach of specifying the attributes ...
4
Looks OK to me. jQueryUI will take care of most of the low-level stuff like not letting you open several dialogs and such (by simply not letting you click "behind" the modal). The code could be cleaned up though: Store $(item) in a variable (same with the modal element) No need to quote those option names Most of the options you're setting are already the ... 4 I believe this is a case of being too paranoid in optimization. Here's some problems: Your arr is global. You should probably place this somewhere only the widget knows about. You're clearning the array... by creating another array. You're spawning more objects (in this case, an array) instead of cleaning up. The proper way to clear an array is to set ... 4 There a few simple things you could do to help clean up this code. The first thing I always recommend is to create a closure for your script using an IIFE. This allows your code to run in its own scope and keeps everything out of the global scope. You can also pass in jQuery to the function expression and safely refer to it as$. (function($, undefined ) {... 4 Instead of destroying and recreating the datepickers, use the (3rd) option method (e.g. .option('maxDate', selectedDate)) to set the minDate/maxDate options when appropriate. Also, by utilizing the onSelect option, which receives as arguments the selected date and the datepicker instance, instead of using an onChange handler, MomentJS can be eliminated. ... 4 Well, there is a room for improvement in a few areas, Save references to the elements rather than using them like$("#element_id") every time you need them to avoid DOM Lookup every time When using .each() you are using $(this) at several places you should save the reference to the element on top of .each() and then use that reference. You can create a ... 3 You are repeating yourself a ton here, you should read up on DRY. You could store the animations in an array and then execute those animations : var PROPERTIES = 0 , OPTIONS = 1; var animations = [ [ {top: '425px'} , { duration: 1800, easing : 'linear', queue: true } ], [ {marginLeft: '-284px'} , { duration: 2500, easing : 'linear', queue: ... 3 Because you have different options for each one, there isn't really an efficient way to combine them into a common selector. And, with only three items, other options aren't really that likely to be more compact. Here's a pure table driven mode that would be more advantageous if you had many more of these: var multiData = [{ sel: ".choose", ... 3 I would move all positions and other configuration to a simpleObject which contains a condition which is checked to determine if the configuration should be used for the current screen dimensions: Remove unneeded variables Added functions to remove duplication Moved positions to animationConfig so logic and configuration is separated I made a assumption in ... 3 As Jef Vanzella mentioned, exceptions are for unexpected bad things, not for jumping out of control flow. Hence the second option is better, but it is still substandard code. Instead of using this, you should have a meaningful parameter name in your function declaration. Also, you could simply return the falsey !ui instead of !valid. Finally, not many ... 3 Edit: Something else that I now noticed while I was reviewing this answer, your if statements could be written differently, it looks like it should be an if/else statement rather than two separate if statements. 
We know that if yDegrees is less than maxRotation that it is not equal to zero, nor will it be equal to zero after yDegrees is set to -45(... 3 If you need to externalise/compact the$(".someslider").slider() function, I would do it like this: External script 'sliders_setup.js', depending on jQuery: (function(NAMESPACE){ // to keep things clean NAMESPACE.setup_slider = function(selector, value, selector_result){ \$(selector).slider({ animate: true, ...
3
Overall, I would say the code is well written. I can't speak about the jQueryUI parts because I am not as familiar with it. However, there are a couple of things you can do to improve it though. Most of these would be what I would call micro-optimizations. First, you need to DRY your code as much as possible. For instance, you repeat this line a lot. ...
3
There is too much nesting, which, for me, makes the logic hard to read. A good rule of thumb is to aim for two levels of nesting, by using the Extract Method refactoring. When the user clicks on the input, nothing happens. When the user starts typing a name that isn't in the list, sometimes the list pops up (if the initial matches) and sometimes they just ...
3
Question responses What would be your method to improve the management of multiple functions? Looking at the three functions that call dialog_Handler() it appears that the main redundancies are with the click handlers for the buttons. See the response to the question below for one technique to simplify that logic. In a comment, you asked: Do you ...
|
2022-01-24 14:54:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2984093129634857, "perplexity": 1304.0977415040927}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304570.90/warc/CC-MAIN-20220124124654-20220124154654-00581.warc.gz"}
|
http://mathhelpforum.com/algebra/139013-simple-problem-rearranging-fractional-exponent-expression.html
|
# Thread: Simple problem... rearranging fractional exponent expression
1. ## Simple problem... rearranging fractional exponent expression
This should be simple. Show that
$\displaystyle (\frac{2meE}{h^2})^{1/3}x = (\frac{2m}{h^2 e^2 E^2})^{1/3}eEx$
For some odd reason, I can't seem to get it. Any ideas?
2. Originally Posted by Ares_D1
This should be simple. Show that
$\displaystyle (\frac{2meE}{h^2})^{1/3}x = (\frac{2m}{h^2 e^2 E^2})^{1/3}eEx$
For some odd reason, I can't seem to get it. Any ideas?
$\displaystyle \bigg{(}\frac{2meE}{h^2}\bigg{)}^{1/3}x = \bigg{(}\frac{2meE}{h^2} \cdot \frac{e^2}{e^2} \cdot \frac{E^2}{E^2}\bigg{)}^{1/3}x$
$\displaystyle = \bigg{(}\frac{2me^3E^3}{e^2 E^2 h^2}\bigg{)}^{1/3}x = \bigg{(}\frac{2m}{h^2e^2 E^2}\bigg{)}^{1/3}eEx$
|
2018-04-20 21:18:21
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9160178899765015, "perplexity": 2054.296184898021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944682.35/warc/CC-MAIN-20180420194306-20180420214306-00121.warc.gz"}
|
https://www.nextgurukul.in/wiki/concept/cbse/class-11/physics/oscillations/velocity-and-acceleration-in-shm/3961483
|
Notes On Velocity and Acceleration in SHM - CBSE Class 11 Physics
As you know, when a reference particle revolves along the reference circle with a constant angular velocity ω, the projection of the particle on the diameters along the X-axis and Y-axis executes simple harmonic motion. The particle executes circular motion with a constant velocity V whose magnitude is equal to aω, where a is the radius of the reference circle. For simplicity, if we assume the initial phase φ is equal to zero, the two components of the velocity V are V cos θ and V sin θ respectively, in the directions shown, where θ = ωt. The component V sin θ represents the velocity of the projection, which is executing simple harmonic motion on the diameter along the X-axis. The velocity of a particle in simple harmonic motion is therefore equal to −V sin θ = −aω sin ωt. The negative sign is due to the direction of the velocity at that instant, which is opposite to that of the positive X-axis. If the particle has an initial phase φ, the velocity of simple harmonic motion can be written as v = −aω sin(ωt + φ). Acceleration in simple harmonic motion: the acceleration of the revolving particle is the centripetal acceleration, of magnitude aω², directed towards the centre. This centripetal acceleration has two components: one along the X-axis and one along the Y-axis. If the initial phase φ is equal to zero, the acceleration of the projection which is executing simple harmonic motion along the diameter along the X-axis is equal to −aω² cos ωt, since this component is directed opposite to the positive X-axis. If the initial phase φ is not equal to zero, the acceleration of the projection in simple harmonic motion is equal to −aω² cos(ωt + φ). This can be written as −ω²[a cos(ωt + φ)]. The quantity a cos(ωt + φ) is the displacement x of the particle executing simple harmonic motion. Hence, the acceleration of the particle executing simple harmonic motion can be written in terms of its displacement as acceleration = −ω²x.
|
2020-07-15 23:11:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8986315727233887, "perplexity": 374.72049824935084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657176116.96/warc/CC-MAIN-20200715230447-20200716020447-00110.warc.gz"}
|
https://xianblog.wordpress.com/tag/course/
|
## methods for quantifying conflict casualties in Syria
Posted in Books, Statistics, University life with tags , , , , , , , , , , on November 3, 2014 by xi'an
On Monday November 17, 11am, Amphi 10, Université Paris-Dauphine, Rebecca Steorts from CMU will give a talk at the GT Statistique et imagerie seminar:
Information about social entities is often spread across multiple large databases, each degraded by noise, and without unique identifiers shared across databases.Entity resolution—reconstructing the actual entities and their attributes—is essential to using big data and is challenging not only for inference but also for computation.
In this talk, I motivate entity resolution by the current conflict in Syria. It has been tremendously well documented, however, we still do not know how many people have been killed from conflict-related violence. We describe a novel approach towards estimating death counts in Syria and challenges that are unique to this database. We first introduce computational speed-ups to avoid all-to-all record comparisons based upon locality-sensitive hashing from the computer science literature. We then introduce a novel approach to entity resolution by discovering a bipartite graph, which links manifest records to a common set of latent entities. Our model quantifies the uncertainty in the inference and propagates this uncertainty into subsequent analyses. Finally, we speak to the success and challenges of solving a problem that is at the forefront of national headlines and news.
This is joint work with Rob Hall (Etsy), Steve Fienberg (CMU), and Anshu Shrivastava (Cornell University).
[Note that Rebecca will visit the maths department in Paris-Dauphine for two weeks and give a short course in our data science Master on data confidentiality, privacy and statistical disclosure (syllabus).]
## a weird beamer feature…
Posted in Books, Kids, Linux, R, Statistics, University life with tags , , , , , , , , , , , , on September 24, 2014 by xi'an
As I was preparing my slides for my third year undergraduate stat course, I got a weird error that got a search on the Web to unravel:
! Extra }, or forgotten \endgroup.
\endframe ->\egroup
\begingroup \def \@currenvir {frame}
l.23 \end{frame}
\begin{slide}
?
which was related with a fragile environment
\begin{frame}[fragile]
\frametitle{simulation in practice}
\begin{itemize}
\item For a given distribution $F$, call the corresponding
pseudo-random generator in an arbitrary computer language
\begin{verbatim}
> x=rnorm(10)
> x
[1] -0.021573 -1.134735 1.359812 -0.887579
[7] -0.749418 0.506298 0.835791 0.472144
\end{verbatim}
\item use the sample as a statistician would
\begin{verbatim}
> mean(x)
[1] 0.004892123
> var(x)
[1] 0.8034657
\end{verbatim}
to approximate quantities related with $F$
\end{itemize}
\end{frame}\begin{frame}
but not directly the verbatim part: the reason for the bug was that the \end{frame} command did not have a line by itself! Which is one rare occurrence where the carriage return has an impact in LaTeX, as far as I know… (The same bug appears when there is an indentation at the beginning of the line. Weird!) [Another annoying feature is wordpress turning > into &gt; in the sourcecode environment…]
## A few words about Prof. Xi'an
Posted in Books, Travel, University life with tags , , , , on May 28, 2013 by xi'an
Here is a short bio of me written in Vietnamese in conjunction with the course I will give at CMS (Centre for Mathematical Sciences), Ho Chi Min City, next week:
Christian P. Robert has been a professor in the Department of Applied Mathematics at Université Paris-Dauphine since 2000. Professor Robert has previously taught at Purdue and Cornell Universities (USA) and at the University of Canterbury (New Zealand). He was an editor of the Journal of the Royal Statistical Society Series B from 2006 to 2009 and an associate editor of the Annals of Statistics. In 2008 he served as President of the International Society for Bayesian Analysis (ISBA). His research interests include Bayesian statistics, with a main focus on decision theory and model selection, Markov chain theory for simulation, and computational statistics.
## R midterms
Posted in Kids, Linux, R, Statistics, University life with tags , , , , , , , , , , , on November 9, 2012 by xi'an
Here are my R midterm exams, version A and version B in English (as students are sitting next to one another in the computer rooms), on simulation methods for my undergrad exploratory statistics course. Nothing particularly exciting or innovative! Dedicated ‘Og‘s readers may spot a few Le Monde puzzles in the lot…
Two rather entertaining if mundane occurrences related to this R exam: one hour prior to the exam, a student came to my office to beg for being allowed to take the solution manual with her (as those midterm exercises are actually picked from an exercise booklet, some students cooperated towards producing a complete solution manual and this within a week!), kind of missing the main point of having an exam. (I have not yet seen this manual but I'd be quite interested in checking the code they produced on that occasion…) During the exam, another student asked me what was the R command to turn any density into a random generator: he had written a density function called mydens and could not fathom why rmydens(n) was not working. The same student later called me as his computer was "stuck": he was not aware that a "+" prompt on the command line meant R was waiting for him to complete the command… A less comical event that ended well is that a student failed to save her R code (periodically and) at the end of the exam and we had to dig very deep into the machine to salvage her R commands from \tmp as rkward safeguards, as only the .RData file was available at first. I am glad we found this before turning the machine off, otherwise it would have been lost.
## Introducing Monte Carlo in PaRis [more slides]
Posted in R, Statistics, University life with tags , , , , , on November 18, 2010 by xi'an
The class started yesterday with a small but focussed and responsive audience! Given the background of the students, and in particular their clear proficiency in R!, I switched between the original slides of Introducing Monte Carlo Methods with R and those of my Monte Carlo Statistical Methods course, updated by Olivier Cappé, who is teaching the course in Paris-Dauphine this year.
## MCMC & ABC
Posted in Statistics, Travel, University life with tags , , , , , , on October 24, 2010 by xi'an
Here are my (preliminary) slides for the Wharton short course, in an evolutionary (!) version that will keep changing along the week as I incorporate the material from a survey on ABC we are currently writing with Jean-Michel Marin and Robin Ryder.
## R tee-shirt
Posted in Books, R, University life with tags , , , , , on September 21, 2010 by xi'an
I gave my introduction to the R course in a crammed amphitheatre of about 200 students today. Had to wear my collectoR teeshirt from Revolution Analytics, even though it only made the kids pay attention for about 30 seconds… The other few “lines” that worked were using the Proctor & Gamble “car 54″ poster and calling bootstrap “Statistics for dummies”, but I have trouble every year in getting the students interested in the topic (simulation) until…I introduced a (dummy) finance example of computing option prices. Sad!
|
2015-05-28 05:54:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4686206877231598, "perplexity": 4645.0778452613085}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207929256.27/warc/CC-MAIN-20150521113209-00199-ip-10-180-206-219.ec2.internal.warc.gz"}
|
http://www.onlinemathlearning.com/probability-conditional-2.html
|
# Conditional Probability Formula
Related Topics:
Math Worksheets
Videos, worksheets, solutions, and activities to help Algebra II students learn about conditional probability.
What is the formula for conditional probability?
The formula for conditional probability is
$p(B|A) = \frac{{p(A \cap B)}}{{p(A)}}$
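As a quick worked illustration (added here, not part of the original lesson): roll a fair die, let $A$ be the event "the roll is even" and $B$ the event "the roll is a 6". Then $p(A) = \frac{1}{2}$ and $p(A \cap B) = \frac{1}{6}$, so $p(B|A) = \frac{1/6}{1/2} = \frac{1}{3}$.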
Conditional Probability, part 1
An introduction to the concept of conditional probability
Conditional Probability, part 2
Solving a coin-toss problem with conditional probability.
Conditional probability - introduction
Phrasing of conditional probability questions and their calculation
Conditional probability: Part 2 (Independence)
Conditional probability is used to introduce the notion of independent events
|
2017-11-19 17:54:22
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35766902565956116, "perplexity": 2906.635508435816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934805708.41/warc/CC-MAIN-20171119172232-20171119192232-00513.warc.gz"}
|
https://lifexsoft.org/index.php/resources/19-texture/radiomic-features/67-neighborhood-grey-level-different-matrix-ngldm
|
## Neighborhood Grey-Level Difference Matrix (NGLDM)
The neighborhood grey-level difference matrix (NGLDM) corresponds to the difference of grey-level between one voxel and its 26 neighbours in 3 dimensions (8 in 2D). Three texture indices can be computed from this matrix. An element $$(i,1)$$ of NGLDM corresponds to the probability of occurrence of level $$i$$ and an element $$(i,2)$$ is equal to:
$$NGLDM(i,2)= \sum_{p}\sum_{q} \left\lbrace \begin{array}{ll} |\overline{M}(p,q)-i| & \text{if } I(p,q)=i \\ 0 & \text{else} \end{array} \right.$$
where $$\overline{M}(p,q)$$ is the average of intensities over the 26 neighbour voxels of voxel $$(p,q)$$.
NGLDM_Coarseness is the level of spatial rate of change in intensity.
$$NGLDM\_Coarseness=\frac{1}{\sum_{i} NGLDM(i,1) \cdot NGLDM(i,2)}$$
NGLDM_Contrast is the intensity difference between neighbouring regions.
$$NGLDM\_Contrast=\left[ \sum_{i} \sum_{j} NGLDM(i,1) \cdot NGLDM(j,1) \cdot (i-j)^{2} \right] \cdot \frac{\sum_{i} NGLDM(i,2)}{E \cdot G \cdot (G-1)}$$
where E corresponds to the number of voxels in the Volume of Interest and G the number of grey-levels.
NGLDM_Busyness is the spatial frequency of changes in intensity.
$$\begin{split} NGLDM\_Busyness=\frac{\sum_{i} NGLDM(i,1) \cdot NGLDM(i,2)}{\sum_{i} \sum_{j} \left | i \cdot NGLDM(i,1) - j \cdot NGLDM(j,1) \right | } \\ \text{with}~ NGLDM(i,1)\neq 0,~ NGLDM(j,1)\neq 0 \end{split}$$
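To make the construction of the coarseness index more concrete, here is a minimal 2D sketch in Python/NumPy (an added illustration, not LIFEx code: grey levels are assumed to be integers starting at 1, an 8-pixel neighbourhood stands in for the 26-voxel 3D one, and border handling and intensity binning conventions differ between real implementations).
import numpy as np
def ngldm_coarseness(image, levels):
    # Toy 2D NGLDM coarseness; image holds integer grey levels in {1, ..., levels}
    rows, cols = image.shape
    counts = np.zeros(levels)  # occurrences of each level i (unnormalised NGLDM(i,1))
    diffs = np.zeros(levels)   # accumulated |mean(neighbours) - i|, i.e. NGLDM(i,2)
    for p in range(rows):
        for q in range(cols):
            i = int(image[p, q])
            # 8-connected neighbourhood (2D stand-in for the 26 neighbours in 3D)
            window = image[max(p - 1, 0):p + 2, max(q - 1, 0):q + 2]
            mean_neighbours = (window.sum() - i) / (window.size - 1)
            counts[i - 1] += 1
            diffs[i - 1] += abs(mean_neighbours - i)
    probabilities = counts / counts.sum()        # NGLDM(i,1)
    denominator = (probabilities * diffs).sum()  # sum_i NGLDM(i,1) * NGLDM(i,2)
    return 1.0 / denominator if denominator > 0 else np.inf
img = np.array([[1, 1, 2],
                [1, 2, 2],
                [3, 3, 3]])
print(ngldm_coarseness(img, levels=3))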
|
2021-04-15 01:25:28
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999948740005493, "perplexity": 4967.742544255768}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038082988.39/warc/CC-MAIN-20210415005811-20210415035811-00510.warc.gz"}
|
https://blog.paperspace.com/introduction-to-audio-analysis-and-synthesis/
|
# Introduction to Audio Analysis and Processing
In the first part of this new series we'll explore basics of audio analysis and signal processing and we'll learn to apply basic machine learning techniques to audio.
a year ago • 15 min read
Audio analysis and signal processing have benefited greatly from machine learning and deep learning techniques but are underrepresented in data scientist training and vocabulary where fields like NLP and computer vision predominate.
In this series of articles we'll try to rebalance the equation a little bit and explore machine learning and deep learning applications related to audio.
Bring this project to life
## Introduction
Let's get some basics out of the way. Sound travels in waves that propagate through vibrations in the medium the wave is traveling in. No medium, no sound. Hence, sound doesn't travel in empty space.
These vibrations are usually represented using a simple two-dimensional plot, where the $x$ dimension is time and the $y$ dimension is the magnitude of said pressure wave.
Sound waves can be imagined as pressure waves by understanding the idea of compressions and rarefactions. Take a tuning fork. It vibrates back and forth, pushing the particles around it closer or farther apart. The parts where air is pushed closer together are called compressions, and the parts where it is pushed further apart are called rarefactions. Such waves that traverse space using compressions and rarefactions are called longitudinal waves.
A wavelength is the distance between two consecutive compressions or two consecutive rarefactions. Frequency, or pitch, is the number of times per second that a sound wave repeats itself. Then the velocity of a wave is the product of the wavelength and the frequency of the wave.
$$v = \lambda * f$$
Where $v$ is the velocity of the wave, $\lambda$ is the wavelength, and $f$ is the frequency.
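For instance (illustrative numbers, not from the original post): a 440 Hz tone in air, where sound travels at roughly 343 m/s, has a wavelength of about $343 / 440 \approx 0.78$ m.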
The thing is, in the natural environment, it very rarely happens that the sound we hear or observe comes in one clear frequency (in a discernible sinusoidal magnitude pattern). Waves superimpose upon each other, making it very difficult to understand only via magnitude readings which frequencies are playing a part. Understanding frequencies can be very important. Its applications range from creating beautiful music to making sure engines don't explode under acoustic pressure waves resonating with each other.
## Fourier Transforms
Once upon a time, Joseph Fourier made up his mind about every curve on the planet. He came up with the crazy idea that every curve can be represented as a sum of sinusoidal waves of various magnitudes, frequencies, and phase differences. A straight line, a circle, some weird freehand drawing of Fourier himself. Check out the video linked below to understand how intricate this simple idea really is. The video describes the Fourier Series, which is a superposition of several sine waves that are built in a way to satisfy the initial distribution.
All of it, a simple collection of different sine waves. Crazy, right?
Fourier transforms are a way of getting all the coefficients of different frequencies and their interactions with each other, given such initial conditions. In our case, our magnitude data from our sound pressure wave would be our initial condition, that Fourier transforms will help us instead convert to an expression that describes at any time the contribution of different frequencies in creating the sound you finally hear. We call this transformation moving from a time-domain representation to a frequency domain representation.
For a discrete sequence $\{x_{n}\}:= x_{0}, x_{1}, ... x_{n-1}$, since computers don't understand continuous signals, a transformed representation in the frequency domain $\{X_{n}\}:= X_{0}, X_{1}, ... X_{n-1}$ can be found using the following formulation:
$$X_{k} = \sum_{n = 0}^{N - 1} x_{n} e^{\frac{-2 \pi i}{N}k n}$$
Which is equivalent to:
$$X_{k} = \sum_{n = 0}^{N - 1} x_{n} [ cos(\frac{2 \pi}{N} k n) - i.sin(\frac{2 \pi}{N} k n) ]$$
A Fourier transform is a reversible function, and an inverse fourier transform can be found as follows:
$$x_{n} = \frac{1}{N} \sum_{k = 0}^{N - 1} X_{k} e^{\frac{2 \pi i}{N}k n}$$
Now, A discrete Fourier transform is computationally quite heavy to calculate, with a time complexity of the order $O(n^{2})$. But there is a faster algorithm called Fast Fourier Transform (or FFT) that performs with a complexity of $O(n.log(n))$. This is a significant boost in speed. Even for an input with $n = 50$, there is a significant increase in performance.
If we use audio that has a sampling frequency of 11025 Hz, in a three minute song, there are about 2,000,000 input points. In that case, the $O(n.log(n))$ FFT algorithm provides a frequency representation of our data:
$$\frac{n^{2}}{n \cdot \log_{2}(n)} = \frac{(2 \cdot 10^{6})^{2}}{(2 \cdot 10^{6}) \cdot \log_{2}(2 \cdot 10^{6})}$$
100,000 times faster!
Though there are a lot of variations of the algorithm today, the most commonly used is the Cooley-Tucker FFT algorithm. The simplest form of it, Radix-2 Decimation in Time (DIT) FFT, first computes the DFTs of the even-indexed inputs and of the odd-indexed inputs and then combines those two results to produce the DFT of the whole sequence. This idea can then be performed recursively to reduce the overall runtime to O(N log N). They exploit the symmetry in the DFT algorithm to make it faster. There are more general implementations, but the more common ones work better when the input size is a power of 2.
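To make the definition and the speed difference concrete, here is a small NumPy sketch (an added illustration, not from the original post; the name naive_dft is only for this example) that evaluates the $O(n^{2})$ sum directly and checks it against NumPy's FFT:
import numpy as np

# Naive O(n^2) DFT, evaluating X_k = sum_n x_n * exp(-2*pi*i*k*n / N) directly
def naive_dft(x):
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    W = np.exp(-2j * np.pi * k * n / N)
    return W @ x

sig = np.random.randn(1024)
X_slow = naive_dft(sig)
X_fast = np.fft.fft(sig)             # O(n log n) FFT
print(np.allclose(X_slow, X_fast))   # True: same coefficients, very different cost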
## Short-Time Fourier Transforms
With Fourier transforms, we convert a signal from the time domain into the frequency domain. In doing so, we see how every point in time is interacting with every other for every frequency. Short-time Fourier transforms do so for the neighboring points in time instead of the entire signal. This is done by utilizing a window function that hops with a specific hop length to give us the frequency domain values.
Let $x$ be a signal of length $L$ and $w$ be a window function of length $N$. A further parameter $H$, called the hop size, determines the step size of the window function, so the maximal frame index $M$ is $\lfloor\frac{L - N}{H}\rfloor$. $X(m, k)$ denotes the $k^{th}$ Fourier coefficient at the $m^{th}$ time frame.
Then STFT $X(m, k)$ is given by:
$$X(m, k) = \sum_{n = 0}^{N - 1} x[n + m \cdot H] \cdot w[n] \cdot e^{\frac{-2 \pi i}{N}k n}$$
There are a variety of window functions one can choose from: Hann, Hamming, Blackman, Blackman-Harris, Gaussian, etc.
The STFT can provide a rich visual representation for us to analyze, called a spectrogram. A spectrogram is a two-dimensional representation of the squared magnitude of the STFT, $|X(m, k)|^{2}$, and can give us important visual insight into which parts of a piece of audio sound like a buzz, a hum, a hiss, a click, or a pop, or if there are any gaps.
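For illustration, a bare-bones STFT can be written in a few lines of NumPy (an added sketch with a Hann window, no padding or centring, so its output will not match librosa.stft exactly):
import numpy as np

def simple_stft(sig, n_fft=1024, hop=256):
    # Slice the signal into overlapping frames, window each frame, then FFT it
    window = np.hanning(n_fft)
    n_frames = 1 + (len(sig) - n_fft) // hop
    frames = np.stack([sig[m * hop:m * hop + n_fft] * window for m in range(n_frames)])
    return np.fft.rfft(frames, axis=1)   # shape: (n_frames, n_fft // 2 + 1)

spectrogram = np.abs(simple_stft(np.random.randn(22050))) ** 2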
## The Mel Scale
Thus far, we have gotten a grip on different methods used to analyze sound, assuming it is on a linear scale. Our perception of sound is not linear, though. It turns out we can differentiate between lower frequencies a lot better than the higher ones. To capture this, the Mel scale was proposed as a transformation to represent what our perception of sound thinks of as a linear development in frequencies.
A popular formula to convert frequency in Hertz to Mels is:
$$m = 2595 . log_{10}(1 + \frac{f}{700})$$
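The formula is easy to play with directly (a small added snippet; the constants are the ones quoted above):
import numpy as np

def hz_to_mel(f_hz):
    # m = 2595 * log10(1 + f / 700)
    return 2595.0 * np.log10(1.0 + f_hz / 700.0)

print(hz_to_mel(440.0))    # ≈ 550 mel
print(hz_to_mel(4400.0))   # ≈ 2238 mel: a 10x jump in Hz is far less than a 10x jump in mel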
There have been other less popular attempts at defining a scale for psychoacoustic perceptions like the Bark scale. There is a lot more to psychoacoustics than what we are covering here, like how the human ear works, how we perceive loudness, timbre, tempo and beat, auditory masking, binaural beats, HRTFs, etc. If you're interested, this, this, and this might be good introductory resources.
There is of course some criticism associated with the Mel scale regarding how controlled the experiments to create the scale really were, and if the results are biased. Was it tested on musicians and non-musicians equally? Is the subjective opinion of every person about what they perceive as linear really a good way to decide what human perception, in general, behaves like?
Unfortunately, we won't be getting into said discussions about biases any further. We will instead take the Mel Scale and apply it in a way that we can get a spectrogram-like representation that was facilitated earlier by STFTs.
## Filter Banks and MFCCs
MFCCs or Mel Frequency cepstral coefficients have become a popular way of representing sound. In a nutshell, MFCCs are calculated by applying a pre-emphasis filter on an audio signal, taking the STFT of that signal, applying mel scale-based filter banks, taking a DCT (discrete cosine transform), and normalizing the output. Lots of big words there, so let's unpack it.
The pre-emphasis filter is a way of stationarizing the audio signal using a weighted single order time difference of the signal.
$$y(t) = x(t) - \alpha x(t - 1)$$
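In code, the pre-emphasis filter is essentially a one-liner (an added sketch; $\alpha$ is typically chosen around 0.95–0.97):
import numpy as np

def pre_emphasis(signal, alpha=0.97):
    # y[t] = x[t] - alpha * x[t - 1], keeping the first sample unchanged
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])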
The filter banks are a bunch of triangular filters, applied to the power spectrum obtained from the STFT. Each filter in the filter bank is triangular, with a magnitude of 1 at its center frequency that decreases linearly to 0 at the center frequencies of the adjacent filters.
This is a set of 20-40 triangular filters between 20Hz to 4kHz that we apply to the periodogram power spectral estimate we got from the STFT of the pre-emphasis filtered signal. Our filterbank comes in the form of as many vectors as the number of filters, each vector the size of the number of frequencies in the Fourier transform. Each vector is mostly zeros, but is non-zero for a certain section of the spectrum. To calculate filterbank energies we multiply each filterbank with the power spectrum, then add up the coefficients.
Finally, the processed filter banks are passed through a discrete cosine transform. The discrete cosine of a signal can be represented as follows:
$$X_{k} = \sum_{n = 0}^{N - 1} x_{n} \cos\left[\frac{\pi}{N}\left(n + \frac{1}{2}\right) k \right]$$
Where $k = 0, \ldots, N - 1$.
## Introduction to Librosa
Let's get our hands dirty. Librosa is a Python library that we will use to look through the theory we went through in the past few sections.
sudo apt-get update
sudo apt-get install ffmpeg
pip install librosa
Let's open an mp3 file. Because it only seems appropriate, I'll try the song Fa Fa Fa by Datarock... from the album Datarock Datarock.
import librosa
from matplotlib import pyplot as plt
# load the audio file with librosa's default sampling rate
x, sampling_rate = librosa.load('./Datarock-FaFaFa.mp3')
print('Sampling Rate: ', sampling_rate)
plt.figure(figsize=(14, 5))
plt.plot(x[:sampling_rate * 5])
plt.title('Plot for the first 5 seconds')
plt.xlabel('Frame number')
plt.ylabel('Magnitude')
plt.show()
So that's our time domain signal. The default sampling rate used by librosa is 22050, but you can pass anything you like.
x, sampling_rate = librosa.load('./Datarock-FaFaFa.mp3', sr=44100)
print('Sampling Rate: ', sampling_rate)
plt.figure(figsize=(14, 5))
plt.plot(x[:sampling_rate * 5])
plt.title('Plot for the first 5 seconds')
plt.xlabel('Frame number')
plt.ylabel('Magnitude')
plt.show()
Passing a null value in the sampling ratio argument returns the file loaded with the native sampling ratio.
x, sampling_rate = librosa.load('./Datarock-FaFaFa.mp3', sr=None)
print('Sampling Rate: ', sampling_rate)
This gives me:
Sampling Rate: 44100
librosa provides a plotting functionality in the module librosa.display too.
import librosa.display
plt.figure(figsize=(14, 5))
librosa.display.waveplot(x[:5*sampling_rate], sr=sampling_rate)
plt.show()
I don't know why the librosa.display is able to capture certain lighter magnitude fluctuations past the 3.5 seconds mark that weren't captured by the matplotlib plots.
librosa also has a bunch of example audio files that can be used for experimentation. You can view the list using the following command.
librosa.util.list_examples()
The output:
AVAILABLE EXAMPLES
--------------------------------------------------------------------
brahms Brahms - Hungarian Dance #5
choice Admiral Bob - Choice (drum+bass)
fishin Karissa Hobbs - Let's Go Fishin'
nutcracker Tchaikovsky - Dance of the Sugar Plum Fairy
trumpet Mihai Sorohan - Trumpet loop
vibeace Kevin MacLeod - Vibe Ace
And you can use the iPython display to play audio files on Jupyter notebooks like this:
import IPython.display as ipd
example_name = 'nutcracker'
audio_path = librosa.ex(example_name)
ipd.Audio(audio_path, rate=sampling_rate)
You can extract the sampling rate and duration of an audio sample as follows.
x, sampling_rate = librosa.load(audio_path, sr=None)
sampling_rate = librosa.get_samplerate(audio_path)
print('sampling rate: ', sampling_rate)
duration = librosa.get_duration(x)
print('duration: ', duration)
The output is:
sampling rate: 22050
duration: 119.87591836734694
Plotting an STFT-based spectrogram can be done as follows:
import numpy as np
from matplotlib import pyplot as plt
S = librosa.stft(x)
fig = plt.figure(figsize=(12,9))
plt.title('STFT Spectrogram (Linear scale)')
plt.xlabel('Frame number')
plt.ylabel('Frequency (Hz)')
plt.pcolormesh(np.abs(S))
plt.savefig('stft-plt.png')
You can also use librosa functionality for plotting spectrograms.
fig, ax = plt.subplots(figsize=(15,9))
img = librosa.display.specshow(S, x_axis='time',
y_axis='linear', sr=sampling_rate,
fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format='%+2.0f dB')
ax.set(title='STFT linear scale spectrogram')
plt.savefig('stft-librosa-linear.png')
## Spectral Features
To start understanding how frequencies are changing in the sound signal, we can start by looking at the spectral centroids of our audio clip. These indicate where the center of mass of the spectrum is located. Perceptually, it has a robust connection with the impression of the brightness of a sound.
plt.plot(librosa.feature.spectral_centroid(x, sr=sampling_rate)[0])
plt.xlabel('Frame number')
plt.ylabel('frequency (Hz)')
plt.title('Spectral centroids')
plt.show()
You can also compare the centroid with the spectral bandwidth of the sound over time. Spectral bandwidth is calculated as follows:
$$(\sum_k S[k, t] * (freq[k, t] - centroid[t])^{p})^{\frac{1}{p}}$$
Where $k$ is the frequency bin index, $t$ is the time index, $S [k, t]$ is the STFT magnitude at frequency bin $k$ and time $t$, $freq[k, t]$ is the frequency at frequency bin $k$ and time $t$, $centroid$ is the spectral centroid at time $t$, and finally $p$ is the power to raise deviation from spectral centroid. The default value of $p$ for librosa is $2$.
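As an added sanity-check sketch, the $p=2$ version of this formula can be computed directly from the magnitude spectrogram (librosa's own spectral_bandwidth has extra options such as magnitude normalisation, so its numbers may differ slightly):
import numpy as np

S_mag = np.abs(S)                                  # magnitude spectrogram from the STFT above
freqs = librosa.fft_frequencies(sr=sampling_rate)  # frequency of each STFT bin
centroid = (freqs[:, None] * S_mag).sum(axis=0) / S_mag.sum(axis=0)
bandwidth = np.sqrt((S_mag * (freqs[:, None] - centroid) ** 2).sum(axis=0))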
spec_bw = librosa.feature.spectral_bandwidth(x, sr=sampling_rate)
plt.plot(spec_bw[0])
plt.xlabel('Frame number')
plt.ylabel('frequency (Hz)')
plt.title('Spectral bandwidth')
plt.show()
You can also visualize the deviation from centroid by running the following code:
times = librosa.times_like(spec_bw)
centroid = librosa.feature.spectral_centroid(S=np.abs(S))
# dB-scaled spectrogram for the background (same conversion as used later in the post)
S_dB = librosa.amplitude_to_db(np.abs(S), ref=np.max)
fig, ax = plt.subplots(figsize=(15,9))
img = librosa.display.specshow(S_dB, x_axis='time',
y_axis='log', sr=sampling_rate,
fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format='%+2.0f dB')
ax.set(title='Spectral centroid plus/minus spectral bandwidth')
ax.fill_between(times, centroid[0] - spec_bw[0], centroid[0] + spec_bw[0],
alpha=0.5, label='Centroid +- bandwidth')
ax.plot(times, centroid[0], label='Spectral centroid', color='w')
ax.legend(loc='lower right')
plt.savefig('centroid-vs-bw-librosa.png')
We can also look at spectral contrast. Spectral contrast is defined as the level difference between peaks and valleys in the spectrum. Each frame of a spectrogram $S$ is divided into sub-bands. For each sub-band, the energy contrast is estimated by comparing the mean energy in the top quartile (peak energy) to that of the bottom quartile (valley energy). Energy is dependent on the power spectrogram and the window function and size.
contrast = librosa.feature.spectral_contrast(S=np.abs(S), sr=sampling_rate)
Plotting the contrast to visualize the frequency bands:
fig, ax = plt.subplots(figsize=(15,9))
img2 = librosa.display.specshow(contrast, x_axis='time', ax=ax)
fig.colorbar(img2, ax=ax, format='%+2.0f dB')
ax.set(ylabel='Frequency bands', title='Spectral contrast')
plt.savefig('spectral-contrast-librosa.png')
There are many more spectral features. You can read more about them here.
## Understanding Spectrograms
A linear scale spectrogram doesn't capture information very clearly. There are better ways of representing this information. librosa allows us to plot the spectrogram on a log scale. To do so, change the above code to this:
S = librosa.stft(x)
fig, ax = plt.subplots(figsize=(15,9))
img = librosa.display.specshow(S, x_axis='time',
y_axis='log', sr=sampling_rate,
fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format='%+2.0f dB')
ax.set(title='STFT log scale spectrogram')
plt.savefig('stft-librosa-log.png')
Utilizing the STFT matrix directly to plot doesn't give us clear information. A common practice is to convert the amplitude spectrogram into a power spectrogram by squaring the matrix. Following this, converting the power in our spectrogram to decibels against some reference power increases the visibility of our data.
The formula for decibel calculation is as follows:
$$A = 10 * log_{10}(\frac{P_{2}}{P_{1}})$$
Where $P_{1}$ is the reference power and $P_{2}$ is the measured value.
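The conversion itself is tiny; a simplified stand-in for what librosa's power_to_db does (an added sketch — the real function additionally supports reference callables and top_db clipping) might look like this:
import numpy as np

def power_to_db(power, ref=1.0, amin=1e-10):
    # A = 10 * log10(P2 / P1), with a small floor to avoid log(0)
    return 10.0 * np.log10(np.maximum(power, amin) / ref)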
librosa has two functions in the API that allow us to make these calculations. librosa.core.power_to_db makes the calculation mentioned above. The function librosa.core.amplitude_to_db also handles the spectrogram conversion from amplitude to power by squaring said spectrogram before converting it to decibels. Plotting STFTs after this conversion gives us the following plots.
S_dB = librosa.amplitude_to_db(S, ref=np.max)
fig, ax = plt.subplots(figsize=(15,9))
img = librosa.display.specshow(S_dB, x_axis='time',
y_axis='linear', sr=sampling_rate,
fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format='%+2.0f dB')
ax.set(title='STFT (amplitude to DB scaled) linear scale spectrogram')
plt.savefig('stft-librosa-linear-db.png')
And in log scale:
S_dB = librosa.amplitude_to_db(S, ref=np.max)
fig, ax = plt.subplots(figsize=(15,9))
img = librosa.display.specshow(S_dB, x_axis='time',
y_axis='log', sr=sampling_rate,
fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format='%+2.0f dB')
ax.set(title='STFT (amplitude to DB scaled) log scale spectrogram')
plt.savefig('stft-librosa-log-db.png')
As can be seen above, the frequency information is so much clearer compared to our initial plots.
As we discussed earlier, human sound perception is not linear, and we are able to differentiate between lower frequencies a lot better than higher frequencies. This is captured by the mel scale. librosa.display.specshow also provides a mel scale plotting functionality.
fig, ax = plt.subplots(figsize=(15,9))
img = librosa.display.specshow(S_dB, x_axis='time',
y_axis='mel', sr=sampling_rate,
fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format='%+2.0f dB')
ax.set(title='Mel scaled STFT spectrogram')
plt.savefig('stft-librosa-mel.png')
This is not the same as a mel spectrogram. A mel spectrogram, as we learned earlier, is calculated by taking the power spectrogram and multiplying it with mel filters.
You can also use librosa to generate mel filters.
n_fft = 2048 # number of FFT components
mel_basis = librosa.filters.mel(sampling_rate, n_fft)
Calculate the mel spectrogram using the filters as follows:
mel_spectrogram = librosa.core.power_to_db(mel_basis.dot(np.abs(S)**2))
librosa has a wrapper for mel spectrograms in its API that can be used directly. It takes the time domain waveform as an input and gives us the mel spectrogram. It can be implemented as follows:
mel_spectrogram = librosa.power_to_db(librosa.feature.melspectrogram(x, sr=sampling_rate))
For plotting the mel spectrogram:
fig, ax = plt.subplots(figsize=(15,9))
img = librosa.display.specshow(mel_spectrogram, x_axis='time',
y_axis='mel', sr=sampling_rate,
fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format='%+2.0f dB')
ax.set(title='Mel-frequency (power to DB scaled) spectrogram')
plt.savefig('mel-spec-librosa-db.png')
To calculate MFCCs, we take a discrete cosine transform.
import scipy
mfcc = scipy.fftpack.dct(mel_spectrogram, axis=0)
librosa again has a wrapper implemented for MFCCs, which can be used to get the MFCC array and plots.
mfcc = librosa.core.power_to_db(librosa.feature.mfcc(x, sr=sampling_rate))
fig, ax = plt.subplots(figsize=(15,9))
img = librosa.display.specshow(mfcc, x_axis='time',
y_axis='mel', sr=sampling_rate,
fmax=8000, ax=ax)
fig.colorbar(img, ax=ax, format='%+2.0f dB')
ax.set(title='MFCCs')
plt.savefig('mfcc-librosa-db.png')
Typically, the first 13 coefficients extracted from the mel cepstrum are called the MFCCs. These hold very useful information about audio and are often used to train machine learning models.
## Conclusion
In this article, we learned about audio signals, time and frequency domains, Fourier transforms, and STFTs. We learned about the mel scale and cepstrums, or mel spectrograms. We also learned about several spectral features like spectral centroids, bandwidths, and contrast.
In the next part of this two-part series, we will look into pitches, octaves, chords, chroma representations, beat and tempo features, onset detection, temporal segmentation, and spectrogram decomposition.
I hope you found the article useful.
|
2022-09-29 08:44:52
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5169224143028259, "perplexity": 1943.5338717153224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00518.warc.gz"}
|
https://arnacafe.is/8qjpjy/archive.php?c1191d=neutrinoless-double-beta-decay-feynman-diagram
|
We also stress aspects of the connection to lepton number violation at colliders and the implications for baryogenesis. However, in terms of numbers the limits are and will be the weakest, and further improvement beyond 0.1 eV seems impossible. By building up on the arguments given above, one can conclude, keeping the above mentioned loopholes in mind, that if 0\nu \beta \beta decay is observed, it is either triggered by a long-range mechanism, such as the standard interpretation with a light Majorana neutrino mass, or due to a short-range operator. Extra dimensions have also been suggested as a way to generate small Dirac neutrino masses by utilizing the volume suppressed wave function overlap of a left-handed neutrino confined to a three-dimensional subspace called the brane and a right-handed neutrino propagating in the extra-dimensional hyperspace called the bulk [101, 102].
Neutrinoless double beta decay (0νββ) is a very slow lepton-number-violating nuclear transition that occurs if neutrinos have mass (and oscillation experiments tell us they do) and are their own antiparticles. Various mechanisms of 0νββ decay were proposed and studied in the last two decades. Historically, 0νββ was proposed to occur shortly after beta decay was first understood. Double beta decays happen when a single beta decay is energetically forbidden. Weak beta decays normally produce one electron (or positron), emit an antineutrino (or neutrino) and increase the nucleus' proton number Z by one. In the Feynman diagrams, the blue arrows represent the nucleus that is decaying, while the other arrows represent particles that are emitted.
However, the value of neutrino mass remains unknown, and consistency checks with cosmological or Kurie-plot limits are necessary. It is most often overlooked that LNV and Majorana neutrinos are not necessarily connected. In general, it is a crucial test for any new physics scenario that violates lepton number by two units. Neutrino masses are then generated within a type-I+II seesaw. We can estimate the energy scale of short-range diagrams which can lead to comparable double beta decay lifetimes compared with the standard interpretation. When one takes into account that the SUSY partners of the left- and right-handed quark states can mix with each other, new diagrams appear in which the neutrino-mediated double beta decay is triggered by SUSY exchange in the vertices [90–92]; note that this is a long-range diagram. The mixing of different LQ multiplets by a possible leptoquark-Higgs coupling [99] can lead to long-range contributions to 0\nu \beta \beta decay, if these couplings violate lepton number [100]. LQs are hypothetical bosons (scalar or vector particles) with couplings to both leptons and quarks which appear for instance in GUTs, extended technicolor or compositeness models. The diagram governed by {\varepsilon }_{V+A}^{V+A} is often called the λ-diagram, the one governed by {\varepsilon }_{V-A}^{V+A} the η-diagram. It follows [85] that typically for a normal mass ordering the lifetime of double beta decay is finite while for an inverted mass ordering it can be infinite due to possible cancellations. Hence, a more precise determination of {\theta }_{12} in future oscillation experiments would be rather welcome [35].
While hunting for this hypothetical nuclear process, a significant amount of two-neutrino double beta decay data have become available. Two-neutrino double beta decay and resonant ECEC decays are investigated using microscopic nuclear models. Experimental groups worldwide are working to develop detectors that may allow the observation of neutrinoless double-beta decay. The GERDA hunt for neutrinoless double-beta decay comes to an end with no evidence that neutrinos are their own antiparticle, yielding half-life limits of > 10^26 yr.
Amount of two-neutrino double beta decay, we encounter a diagram as shown in Figure 1b ) its..., while the other arrows represent the nucleus that is all the theory necessary to understand.... Actually be ) from Germanium and Xenon experiments and different matrix element approaches have a finite.... Thus remain fields that enjoy large interest from both experimental and nuclear physics aspects, where the interested reader consult! Consistency checks with cosmological or Kurie-plot limits are summarized in table 2 the electron in decay! Conservative values, both isotopes give essentially the same limit of7 groups are. Which are included in an SU ( 2 ) R doublet as a consequence of the standard interpretation ( mechanism. Whom any correspondence should be addressed in the following section and resonant ECEC are! Agree to our use of cookies has to be normal, this is depicted in Figure (! Goes with the best current half-life limits of > 10 26 yr new! By continuing to use this site you agree to our use of cookies lifetime limits particles! A statistical manner, hence the amplitude instead of feynmp, there is about a dozen confirmed cases nuclei! Involving the emission of two electrons that the process is the yet-to-be-observed double... For a more detailed, recent overview on this approach to combine different experiments in a statistical.... Literature [ 50–53 ] where two different possible topologies have been identified is forbidden. Course an experimental challenge and mitigating background radiation is especially important in for... A worldwide membership of around neutrinoless double beta decay feynman diagram 000 comprising physicists from all sectors, as been. Particles that can not be observable at the start but 2 in the Fig overview of present and 0\nu... Electrons from a nucleus decay into two protons and emit two electrons one such rare process is current! To lepton number violation are closely interrelated to see that this argument can be found [! Their Own antiparticle investigated using microscopic nuclear models public domain false false I... Legs to form Figure 4 ( b ) diagram for neutrinoless double beta decay ( LEGEND ).! Mixing can be found in [ 82, 83 ] improvement beyond 0.1 eV impossible! And sensitivity to the effective mass and coincides with the ee element of left–right! Physics aspects, where the interested reader should consult e.g Example of R-parity minimal standard... 2: Example Feynman diagram we reanalyze the contributions to neutrinoless double beta decay experiments in physics,. N ) of these Kaluza–Klein states are analyzed to dominate over the neutrino... Standard model, 0nbb was proposed to occur shortly after beta decay, be... Our interpretation of double beta decay possible topologies have been identified mixing [ 48, 49 ] nal.. Sek Jo zef Stefan Institute, Ljubljana Ljubljana, February 2017... beta decay energetically! General any high-scale scenario of baryogenesis currents [ 80, 81 ] non-accelerator searches 97..., +2 ) +2e false: I, the complementarity of the two external neutrino legs form! Emission, or press the Escape '' key on your keyboard currents, the process the... ( mass mechanism ) of these Kaluza–Klein states are analyzed will not be observable at the LHC and baryogenesis as... Limits derived from neutrinoless double beta ( 0 ) decay occurs when two neutrons in a nucleus decay two! Decay and LNV thus remain fields that enjoy large interest from both and! 1,2014 ( updated September 14, 2016 ) 1, 98 ] neutrino emission, press. 
At a high scale, LNV at the LHC for the transition ( N 2 ; 2. 97, 98 ] the different diagrams can arise are no longer active motivation! Their current as well as those with an interest in physics neutrinoless double beta decay feynman diagram 1.3. Linear colliders, see Figure 3 project page ) which also allows you to draw Feynman leading! Chirality violating mass insertion few ) 100 keV has been worked out [! Weaken limits considerably high scale, LNV will not be observable at the LHC neutrinoless double beta decay feynman diagram falsify high-scale leptogenesis neutrinoless. Author to whom any correspondence should be addressed in the bulk, all particles. Lhc for the possibility to study the inverse neutrinoless double beta ( ). In nuclear and particle physics the conservative values, both isotopes give essentially the same time and between... The presence of right-handed currents, the complementarity of the mass eigenvalues and! To use this site you agree to our use of cookies heavier than the nuclear Fermi momentum this... Extremely challenging, with the standard three neutrino paradigm for best-fit and 3\sigma oscillation parameters from 29. The singlet neutrinos can freely propagate in the literature [ 50–53 ] environment needed search. The best current half-life limits of > 10 26 yr baryogenesis depicted as a consequence of various... Be normal, this is a very sensitive experimental probe for physics and is low-scale... The right-handed analogue of the order of { \mathcal { O } } ( 200 ) \rm! Specific flavors where V denotes the matrix describing the mixing among the heavy right-handed neutrinos initial state are into. General any high-scale scenario of baryogenesis, (, +2 ) +2e diagram is given by lepton violation! And baryogenesis depicted as a consequence of the presence of right-handed currents, neutrinoless double beta decay feynman diagram...
|
2021-04-23 15:00:58
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8643507361412048, "perplexity": 1678.157179130016}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039594808.94/warc/CC-MAIN-20210423131042-20210423161042-00570.warc.gz"}
|
https://tex.stackexchange.com/questions/172605/table-of-contents-is-numbered-how-to-correct-it
|
I am using a template to write up a proposal. The only issue is that when I try to have a table of contents, it comes up numbered with a zero.
\documentclass{proposal}
\usepackage{epsfig}
\usepackage{fontspec}
\setsansfont{Roboto Condensed}
\usepackage{xcolor,lipsum}
\usepackage{titlesec}
\titleformat{name=\section}[block]
{\sffamily\large}
{}
{0pt}
{\colorsection}
\titlespacing*{\section}{0pt}{\baselineskip}{\baselineskip}
\usepackage[english]{babel}
\usepackage[utf8]{inputenc}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
\newcommand{\colorsection}[1]{%
\colorbox{blue!20}{\parbox{\dimexpr\textwidth-2\fboxsep}{\thesection\ #1}}}
\begin{document}
\begin{titlepage}
\begin{center}
\vspace*{1cm}
\Huge
\textbf{Research Proposal}
\vspace{1cm}
\LARGE
Project: ?
\vspace{2.5cm}
\textbf{Bla bla }
\vspace{6cm}
James Bond
\vspace{0.8cm}
\Large A research initiative in collaboration with the .... \\
\end{center}
\end{titlepage}
\newpage
\tableofcontents
\newpage
\section{Preliminary title}
Text ....
\section{Scientific Interest}
More text.
\section{Main Research Question(s)}
More text. Cite an example \cite[]{sample_ref}
\section{Originality}
\section{Research Methodology}
\section{Viability}
\section{Practical Relevance}
\section{Work Plan for Three Years}
\subsection{First Year}
\subsection{Second Year}
\subsection{Third Year}
\section{Education of the PhD Candidate}
\bibliography{•}
\newpage
\end{document}
• Welcome to TeX.SX! You can have a look at our starter guide to familiarize yourself further with our format. – Johannes_B Apr 21 '14 at 11:24
• Your titleformat calls colorsection in which the numbering is defined. – Johannes_B Apr 21 '14 at 11:32
The problem is that \colorsection always typesets the value for the counter; one way to prevent this is to use two \titleformats: one for the numbered sections and another one with the numberless option to typeset the title for unnumbered sections; the decision is made using a conditional; something along these lines:
\documentclass{article}
\usepackage{xcolor}
\usepackage{titlesec}
\newif\ifnumbered
\titleformat{name=\section}[block]
{\sffamily\large\numberedtrue}
{}
{0pt}
{\colorsection}
\titleformat{name=\section,numberless}[block]
{\sffamily\large\numberedfalse}
{}
{0pt}
{\colorsection}
\titlespacing*{\section}{0pt}{\baselineskip}{\baselineskip}
\newcommand{\colorsection}[1]{%
\colorbox{blue!20}{\parbox{\dimexpr\textwidth-2\fboxsep}{\ifnumbered\thesection\ \relax\fi#1}}}
\begin{document}
\tableofcontents
\section{Test numbered section}
\section*{Test unnumbered section}
\section{Another test numbered section}
\end{document}
With the current definition of \colorsection, the presence/absence of descenders in the title might produce inconsistent heights for the color boxes; adding \struts might prevent this:
\newcommand{\colorsection}[1]{%
\colorbox{blue!20}{\parbox{\dimexpr\textwidth-2\fboxsep}{\ifnumbered\thesection\ \relax\fi\strut#1\strut}}}
I suppressed from my example code all information from the original question that was not relevant to the issue at hand and changed to the article document class, since I don't have the proposal class; the solution, however, should work with the original settings.
|
2021-06-13 22:52:12
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7211642861366272, "perplexity": 3922.611723547448}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487611089.19/warc/CC-MAIN-20210613222907-20210614012907-00367.warc.gz"}
|
https://www.physicsforums.com/threads/mathematica-cant-graph.69917/
|
# Mathematica cant graph
1. Apr 4, 2005
### Pengwuino
Ok i got this problem in this file. Can someone help?
If you don't want to get the file, the problem is basically this.
Have an f(x,y) equation. Find both partial derivatives. One graphs correctly; the other partial derivative is just a straight line on y=0. Anyone know what's causing this?
File size:
10.1 KB
Views:
44
2. Apr 4, 2005
### saltydog
1. First calculate the partials manually and then plot them.
2. Verify the Mathematica code:
fpartx[x_,y_]=D[f[x,y],x]
fparty[x_,y_]=D[f[x,y],y]
is actually calculating the partials.
3. Use Plot3D and manually insert the partial directly first, you know, like:
Plot3D[x^2+y+2 x y,{x,a,b},{y,a,b}] or whatever the partial is.
4. Use Plot3D[fpartx[x,y],{x,a,b},{y,a,b}]
gotta work.
3. Apr 4, 2005
### Pengwuino
Plot3D works but the ImplicitPlot doesn't, check it out.
What's going on :)
Last edited: Apr 4, 2005
4. Apr 4, 2005
### saltydog
So the partial with respect to y is:
$$f_y=4xy-64y$$
When you equate that to zero you get x=16. So ImplicitPlot graphs a vertical line at x=16. Extend the PlotRange to see it.
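For reference, the zero set referred to above can be read off by factoring:
$$f_y = 4xy - 64y = 4y\,(x-16) = 0 \quad\Rightarrow\quad x = 16 \ \text{ or } \ y = 0,$$
so the full solution set consists of the vertical line x = 16 together with the horizontal line y = 0.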
5. Apr 4, 2005
### Pengwuino
Ok I'll check it out later, sounds like that might be the problem though!
|
2017-02-20 04:34:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4748918116092682, "perplexity": 10533.119337808062}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170404.1/warc/CC-MAIN-20170219104610-00307-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://codereview.stackexchange.com/questions/193274/outputting-every-second-char-in-half-string
|
# Outputting every second char in half string
My algorithm works but it is too slow.
I have to ask for help with improving this algorithm because the website which checks it kicks me for too-slow execution.
What should I improve here?
### Details:
Given a sequence of 2*k characters, please print every second character from the first half of the sequence. Start printing with the first character.
Input: In the first line of input you are given the positive integer t (1<=t<=100) - the number of test cases. In each of the next t lines, you are given a sequence of 2*k (1<=k<=100) characters.
Output: For each of the test cases please print every second character from the first half of a given sequence (the first character should appear).
### Example
Input:
4
your
progress
is
noticeable
Output:
y
po
i
ntc
class Chars{
public static void main (String[] args) {
Scanner sc = new Scanner(System.in);
int t = sc.nextInt();
if (t <= 100 && t >= 1) {
for (int i = 0; i < t; i++) {
calculate();
}
}
}
private static void calculate() {
Scanner sc = new Scanner(System.in);
String rawString = sc.nextLine();
StringBuilder finishChars = new StringBuilder();
int numberOfChars = rawString.length();
if (numberOfChars <= 100 && numberOfChars >= 1) {
if (numberOfChars % 2 == 0) {
int numHalfChars = numberOfChars / 2;
StringBuilder afterAdd = new StringBuilder();
for (int i = 0; i < numHalfChars; i++) {
afterAdd.append(rawString.charAt(i));
}
if (afterAdd.length() <= 2) {
System.out.println(afterAdd.charAt(0));
} else {
for (int i = 0; i < afterAdd.length(); i = i + 2) {
finishChars.append(afterAdd.charAt(i));
}
System.out.println(finishChars);
}
}
}
}
}
After improving:
class Algorithm{
private static List<String> results = new ArrayList<>();
public static void main (String[] args) {
calculate();
show();
}
private static void show() {
for (String result : results) {
System.out.println(result);
}
}
private static void calculate() {
Scanner sc = new Scanner(System.in);
int t = sc.nextInt();
if (t <= 100 && t >= 1) {
for (int i = 0; i <= t; i++) {
String rawString = sc.nextLine();
StringBuilder finishChars = new StringBuilder();
int numberOfChars = rawString.length();
if (numberOfChars <= 100 && numberOfChars >= 1 && numberOfChars % 2 == 0) {
for (int j = 0; j < rawString.length() / 2; j = j + 2) {
finishChars.append(rawString.charAt(j));
}
results.add(finishChars.toString());
}
}
}
}
}
Your code is slow simply because it does much more than it needs to. It copies the first half of the original string to a new StringBuilder, but then, you only use this StringBuilder to access its characters by their index, which you could also do with the original string, since the indexes of the characters in the first half of the original string and those in this new StringBuilder are identical.
So you might as well omit the variable afterAdd and use the original string in its place when you iterate over every second character:
for (int i = 0; i < rawString.length() / 2; i = i + 2) {
finishChars.append(rawString.charAt(i));
}
Also, there is no need to handle the case afterAdd.length() <= 2 (which, after deleting afterAdd, would translate to rawString.length() <= 4) separately. The result of the method will be unaffected if you remove this extra if and simply let the for loop take care of it. In fact, I don't see anything special in this case in the first place, so I wonder why you make a special case for it.
Besides, I think it is strange that you let the main method read the number of test cases from the command line, but delegate the reading of the individual test cases to the calculate method, not least because this requires you to create a new Scanner object for every single test case. Why don't you do all the reading from the command line in one method, and then pass each test case to the method calculate as a parameter and let it only do what its name suggests, namely calculating the result?
Anyway, I would suggest separating user interface from program logic, because right now, the method calculate not only calculates the result, but also prints it to System.out, which goes a bit against the Single Responsibility Principle. I think the code would be clearer if calculate returns a String representing the result, so that the calling method can decide what to do with the result.
• hey, I improve algorithm using a your idea, now is better, but online judge kick again this algorithm for "wrong answer", Could you check if I did something wrong? I have edited my message – jackfield May 1 '18 at 13:11
• It's a better practice to return the List of results rather than making it a class variable.
• You still have IO tied into your program logic. You just moved it.
• There's some disagreement on whether or not it's appropriate to close a Scanner. I think it's a good idea.
• Your bounds checking is noise. The problem description is telling you those are the invariants. You don't need to check them.
• You do indeed have a logic error. You need to call nextLine() after nextInt() and before entering your looping code. The scanner cursor is sitting on the first line after the 4 after you call nextInt(), and so your first nextLine() in the loop is returning an empty String. You wind up missing the last word because of that.
• For raw speed, you can try replacing the StringBuilder with a char[]. You can also try using shift operators instead of division/multiplication by 2. The compiler may make these optimizations under the covers, so this may do nothing, and they make the code a lot harder to read. Avoid them unless you need them. Usually performance problems at these kinds of websites require algorithmic fixes, not micro-optimizations.
If you made all these changes, your code might look something like:
class Algorithm {
public static void main(final String[] args) {
try (final Scanner scanner = new Scanner(System.in)) {
final int wordCount = scanner.nextInt();
scanner.nextLine();
/*
for (int i = 0; i < wordCount; i++) {
System.out.println(calculate(scanner.nextLine()));
}
*/
for (int i = 0; i < wordCount; i++) {
for (final char c : calculate2(scanner.nextLine())) {
System.out.print(c);
}
System.out.println();
}
} catch (final Exception e) {
System.err.println(e.getMessage());
}
}
private static String calculate(final String word) {
final StringBuilder result = new StringBuilder();
for (int i = 0; i < (word.length() / 2); i += 2) {
result.append(word.charAt(i));
}
return result.toString();
}
private static char[] calculate2(final String word) {
final int bonus = ((word.length() % 4) == 0) ? 0 : 1;
final int arraySize = (word.length() >> 2) + bonus;
final char[] letters = new char[arraySize];
for (int i = 0; i < letters.length; i++) {
letters[i] = word.charAt(i << 1);
}
return letters;
}
}
|
2020-04-05 23:41:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22689589858055115, "perplexity": 1675.4705177901178}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371611051.77/warc/CC-MAIN-20200405213008-20200406003508-00178.warc.gz"}
|
http://blog.gluster.org/category/high-availability/
|
## Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data
This tutorial will walk through the setup and configuration of GlusterFS and CTDB to provide highly available file storage via CIFS. GlusterFS is used to replicate data between multiple servers. CTDB provides highly available CIFS/Samba functionality.
### Prerequisites:
2 servers (virtual or physical) with RHEL 6 or a derivative (CentOS, Scientific Linux). When installing, create a partition for root of around 16 GB, but leave a large amount of disk space available for the shared data (you can add this in the installer, but ensure the partition type is XFS and that the mountpoint is /gluster/bricks/data1). Once you have an installed system, ensure networking is configured and running. In this example the two servers will be:
server1 = storenode1 – 192.168.1.15
server2 = storenode2 – 192.168.1.16
Let's add host entries (unless you have DNS available, in which case add an entry for both hosts there):
echo "192.168.1.15 storenode1" >> /etc/hosts
echo "192.168.1.16 storenode2" >> /etc/hosts
Next make sure both of your systems are completely up to date:
yum -y update
Reboot if there are any kernel updates.
### Filesystem layout
Now that we have 2 fully updated working installs, it's time to start laying out the filesystem; in this instance we will have a partition dedicated to the underlying gluster volume.
If you didn’t add a partition for /gluster/bricks/data1 during the install do this now:
fdisk a partition on the disk (/dev/sda3?):
fdisk /dev/sda
mkfs.xfs /dev/sda3
If mkfs.xfs isn't installed, yum install xfsprogs will add it to your system. If you are running Red Hat you will need to subscribe to the Scalable Filesystem channel to get this package.
The directory where this partition will be mounted:
mkdir /gluster/bricks/data1 -p
mount /dev/sda3 /gluster/bricks/data1
If the mount command worked correctly, lets add it to our fstab so it mounts at boot time.
echo "/dev/sda3 /gluster/bricks/data1 xfs default 0 0" >> /etc/fstab
You need to repeat the above steps to partition and mount the volume on server 2.
### Introducing Gluster to the equation
Now we have a couple of working filesystems we are ready to bring gluster into the mix, we are going to use the /gluster/bricks/data1 as a location to store our brick for our Gluster volume. A Gluster volume is made up of many bricks, these bricks are essentially a directory on one or more servers that are grouped together to provide a storage array similar to RAID.
In our configuration we will have 2 servers, each with a directory used as a brick to create a replicated gluster volume. Also, for simplicity I have disabled both SELINUX and iptables for this build, however it’s fairly straight forward to get both working correctly with gluster, I may revisit at some point to add this configuration but for now I’m taking the stance that these servers are tucked away safely inside your network behind at least one firewall.
Lets install gluster, on both servers run the following:
cd /etc/yum.repos.d/
wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
yum install glusterfs-server -y
chkconfig glusterd on
service glusterd start
Woohoo, we have Gluster up and running, oh wait it’s not doing anything…
Lets get both servers talking to each other, on the first server run:
gluster peer probe storenode2
We now need a directory which we will use for the brick in our Gluster volume, run this command on both servers:
mkdir -p /gluster/bricks/data1/brick1
Everything should be now prepared for the volume to be created, run the following command on storenode1
gluster vol create data1 replica 2 storenode1:/gluster/bricks/data1/brick1 storenode2:/gluster/bricks/data1/brick1
This will create a Gluster volume named data1 with 2 replicas which are then specified.
If this command returns ok we should be good to start the volume:
gluster vol start data1
We can check the status of the volume:
gluster vol info data1
Looks good!
### Mounting
In order to start using the volume we have just created it needs to be mounted on our systems, lets create a directory on both servers where we will mount the volume:
mkdir /data/data1 -p
We need to ensure the glusterfs client tools are installed (it should have been installed during the initial gluster install, but it’s worth checking)
yum -y install glusterfs-fuse
Now let's mount the volume:
mount -t glusterfs storenode1:data1 /data/data1
If that goes well we can add the mount statement to fstab:
echo "storenode1:data /data/data1 glusterfs defaults 0 0" >> /etc/fstab
Then repeat on storenode2:
mount -t glusterfs storenode2:data1 /data/data1
echo "storenode2:data /data/data1 glusterfs defaults 0 0" >> /etc/fstab
We now have a persistent mount for our gluster volume, each server mounts its own presentation of the gluster volume. Notice the mount paths are very similar to NFS, however they are slightly different, the format is hostname:volumename
We can test the Gluster side of things now by creating a file on one server and seeing it exists on the other
[root@storenode1 ~]# echo "hello world" >> /data/data1/test
[root@storenode2 ~]# cat /data/data1/test
If you see the text “hello world” in the output then the Gluster setup is complete!
### CTDB and Samba
All the above is good and well, but we need to present this storage to an end user don’t we?
The traditional way to present storage as a file share is using samba, however as we are using multiple servers we want to try and make use of them. This method will use traditional samba config files but using an extra overlay, CTDB. CTDB will present storage via cifs, but also create a VIP (Virtual IP) which “hovers” over the servers configured within.
Lets get the packages installed first:
yum -y install ctdb samba samba-common samba-winbind-clients (Resilient Storage subscription needed for RHEL)
On both nodes backup the default config, just in case:
mv /etc/sysconfig/ctdb{,.old}
CTDB requires a shared area in which to create a lock, and we also need a directory to share
On either node:
mkdir /data/data1/lock
mkdir /data/data1/share
In your favourite editor (in my case Vim), open /data/data1/lock/ctdb and add the following:
vi /data/data1/lock/ctdb
CTDB_RECOVERY_LOCK=/data/data1/lock/lockfile
#CIFS only
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
CTDB_MANAGES_SAMBA=yes
#CIFS only
CTDB_NODES=/etc/ctdb/nodes
The file we have just created will actually replace the config we backed up earlier; it will exist as a symlink (which saves duplicating identical config files) on both hosts:
ln -s /data/data1/lock/ctdb /etc/sysconfig/ctdb
Next we need to ensure the samba service won’t start on boot, but in turn the CTDB service will, on both nodes:
service smb stop
chkconfig smb off
chkconfig ctdb on
The /etc/ctdb/public_addresses file will contain a list of IP addresses which will be used as VIPs. You can use as many as you like here; some configurations use multiple combinations of VIPs with round-robin DNS for true load-balanced scenarios, but for our simple config we will just use the next IP. Note we are creating the file on our shared storage again to ensure that we have the same config on both boxes; it will be linked later:
vi /data/data1/lock/public_addresses
192.168.1.17/24 eth0
Now we need to create the /etc/ctdb/nodes which contains the IP addresses of all servers which will present the storage, again this will be a shared file and linked:
vi /data/data1/lock/nodes
192.168.1.15
192.168.1.16
Lets link those two files, on both nodes:
ln -s /data/data1/lock/nodes /etc/ctdb/nodes
ln -s /data/data1/lock/public_addresses /etc/ctdb/public_addresses
The only thing we have left to do now is to modify the samba config file; there are 2 sections we are interested in. Firstly the general config section, where we need to enable clustering and point it to the lock directory. Samba (or CTDB in this case) has some strange side effects if its config is kept directly on shared storage; however, the shared copy can be used for editing and then copied into place:
On storage node 1:
cp /etc/samba/smb.conf /data/data1/lock/smb.conf
vi /data/data1/lock/smb.conf
And add in the general section near the top:
clustering = yes
idmap backend = tdb2
private dir = /data/data1/lock
The second component is to create the share itself:
[share]
comment = Gluster and CTDB based share
path = /data/data1/share
read only = no
guest ok = yes
valid users = jon
Once we are happy with the edit, the file can be copied to the correct location, on both hosts:
cp /data/data1/lock/smb.conf /etc/samba/
We need to ensure the user jon exists on both servers:
useradd jon
smbpasswd -a jon
and type a password.
Configuration is now done, all that is left to do is start the service, on both nodes:
service ctdb start
If the service starts successfully then after a short while the share becomes available, monitor its status using:
ctdb status
Once both nodes get OK, we’re good to go. The share will now be accessible from a Windows PC (or anything that can access SMB/CIFS) using \\192.168.1.17\share
If either storage server becomes unavailable the share will still exist.
We now have a resilient, highly available CIFS file server.
The post Windows (CIFS) fileshares using GlusterFS and CTDB for Highly available data appeared first on Jon Archer.
## GlusterFS in AWS
Amazon Web Services provides an highly available hosting for our applications but are they prepared to run on more than one server?
When you design a new application, you can follow best practices’ guides on AWS but if the application is inherited, it requires many modifications or to work with a POSIX shared storage as if it’s local.
That’s where GlusterFS enters the game, beside adding flexibility to storage with horizontal growth opportunities in distributed mode, it has a replicated mode, which lets you replicate a volume (or a single folder in a file system) across multiple servers.
### Preliminary considerations
Before realizing a proof of concept with two servers, in different availability zones, replicating an EBS volume with an ext4 filesystem, we will list the cases where GlusterFS should not be used:
• Sequential files written simultaneously from multiple servers, such as logs. The locking system can lead to serious problems if you store logs within GlusterFS. The ideal solution is to store them locally and then use S3 to archive them. If necessary we can consolidate multiple server logs before or after storing them in S3.
• Continuously changing files, e.g. PHP session files or cache. For this kind of file performance is relevant; if we want to unify sessions we must use a database (RDS, DynamoDB, SimpleDB) or memcached (ElastiCache), and we cannot burden the application with GlusterFS' replication layer. In case we cannot modify the application to store sessions externally, we can use a local folder or shared memory (shm) and enable sticky sessions on the ELB. Ideally, caching has to be done using memcached or, in its absence, a local folder in memory (tmpfs), so that it's transparent to the application.
• Complex applications in PHP without cache. It's advisable to store your code in repositories, both to have version control and to deploy across multiple servers easily. If it's inevitable to place code in GlusterFS, we need to use a cache like APC or XCache so that we avoid performing stat() for each file include, which would slow down the application.
### Installation
Amazon Linux AMI includes GlusterFS packages in the main repository so there's no need to add external repositories. If yum complains about the GlusterFS packages just enable the EPEL repo. We can install the packages and start services on each of the nodes:
yum install fuse fuse-libs glusterfs-server glusterfs-fuse nfs-utils
chkconfig glusterd on
chkconfig glusterfsd on
chkconfig rpcbind on
service glusterd start
service rpcbind start
Fuse and nfs packages are needed to mount GlusterFS volumes, we recommend using NFS mode for compatibility.
### Configuration
We prepare an ext4 partition, though we might use any compatible POSIX filesystem; in this case the partition points to an EBS volume, we could also use ephemeral storage, bearing in mind that we need to keep at least one instance running to keep data consistent. These commands must be run on each node:
mkfs.ext4 -m 1 -L gluster /dev/sdg
echo -e "LABEL=gluster\t/export\text4\tnoatime\t0\t2" >> /etc/fstab
mkdir /export
mount /export
Now select one of the nodes to execute the commands to create the GlusterFS volume. Instances should have full access between them, with no firewall or security group limitations:
gluster peer probe $SERVER2
gluster volume create webs replica 2 transport tcp $SERVER1:/export $SERVER2:/export
gluster volume start webs
gluster volume set webs auth.allow '*'
gluster volume set webs performance.cache-size 256MB
We must replace $SERVER1 and $SERVER2 with the instances' DNS names, 1 being the local instance and 2 the remote. We can use either the public or the internal DNS since Amazon returns the internal IP in any case. If we do not work with VPC then we don't have fixed internal IPs, so we'll have to use a dynamic DNS or assign Elastic IPs to instances.
Two non-standard options were defined: the first is auth.allow, which allows access from all IPs, since we will restrict access by Security Groups; the second is performance.cache-size, which allows us to allocate part of the memory to cache to improve performance.
The volume is already created; now we have to select a mount point (or create it if it doesn't exist), mount the partition and modify the fstab if we want it automatically mounted on reboot. This must be done on both nodes:
mkdir -p /home/webs
mount -t nfs -o _netdev,noatime,vers=3 localhost:/webs /home/webs
# If we want to mount it automatically, we need to modify /etc/fstab
echo -e "localhost:/webs\t/home/webs\tnfs\t_netdev,noatime,vers=3\t0\t0" >> /etc/fstab
chkconfig netfs on
Now we can store content in /home/webs; it will be automatically replicated to the other instance. We can force an update by running a simple ls -l on the folder to be updated, since stat() forces GlusterFS to check the health of the replica.
#### References
http://www.gluster.org/community/documentation/index.php/Main_Page
## Distributed Replicated Storage Across Four Storage Nodes With GlusterFS 3.2.x On CentOS 6.3
This tutorial shows how to combine four single storage servers (running CentOS 6.3) into a distributed replicated storage with GlusterFS. Nodes 1 and 2 (replication1) as well as 3 and 4 (replication2) will mirror each other, and replication1 and replication2 will be combined to one larger storage server (distribution). Basically, this is RAID10 over network.
If you lose one server from replication1 and one from replication2, the distributed volume continues to work. The client system (CentOS 6.3 as well) will be able to access the storage as if it was a local filesystem.
GlusterFS is a clustered file-system capable of scaling to several peta-bytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as x86_64 servers with SATA-II RAID and Infiniband HBA.
2017-04-28 23:52:06
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17335395514965057, "perplexity": 6206.839067431532}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123102.83/warc/CC-MAIN-20170423031203-00373-ip-10-145-167-34.ec2.internal.warc.gz"}
|
https://www.physicsforums.com/threads/calculating-the-momentum-operator-in-a-quantum-state.869534/
|
# Calculating the momentum operator in a quantum state
Tags:
1. Apr 29, 2016
### DarkMatter5
1. The problem statement, all variables and given/known data
A gaussian wave packet is given by the formula:
$$\Psi(x)=\frac{1}{\pi^{1/4} d^{1/2}}\, e^{ikx - x^2/(2d^2)}$$
Calculate the expectation value in this quantum state of the momentum squared.
2. Relevant equations
$$\langle p^2 \rangle = -\hbar^2 \int_{-\infty}^{\infty} \Psi^*(x)\, \frac{d^2\Psi(x)}{dx^2}\, dx$$
$$\int_{-\infty}^{\infty} e^{-x^2/d^2}\, dx = d\sqrt{\pi}$$
$$\int_{-\infty}^{\infty} x\, e^{-x^2/d^2}\, dx = 0$$
$$\int_{-\infty}^{\infty} x^2 e^{-x^2/d^2}\, dx = \frac{d^3\sqrt{\pi}}{2}$$
3. The attempt at a solution
Here is my attempt at a solution.
I got $\hbar^2 k^2$. The correct answer is $\hbar^2 k^2 + \hbar^2/(2d^2)$.
All help is very much appreciated.
2. Apr 29, 2016
### blue_leaf77
Working in momentum space by first Fourier transforming the given wavefunction might help minimize the possibility of error during calculation.
3. Apr 29, 2016
### DarkMatter5
Thank you. Unfortunately I have no idea how to do that. Did you see an error in my calculation?
4. Apr 29, 2016
### blue_leaf77
I didn't go through the detail but it is strange that you still have $x$ after finishing the integral, although the terms containing it eventually cancel in the final step. You were doing a definite integral over $x$, so logically this variable should not appear after the integral is evaluated.
5. Apr 29, 2016
### DarkMatter5
I was doing a definite integral? Where? I shouldn't be doing definite integrals.
6. Apr 29, 2016
### blue_leaf77
You are, in calculating the expectation value you integrate from $-\infty$ to $+\infty$.
7. Apr 30, 2016
### DarkMatter5
Oh. I thought that because the limits were infinite that it would be an indefinite integral.
8. Apr 30, 2016
### blue_leaf77
The second derivative of $\psi(x)$ is not correct.
9. May 2, 2016
### DarkMatter5
Thank you for that correction. e should be to the power of $ikx - x^2/(2d^2)$. But it still doesn't change the answer. The mark scheme said that for some reason $(ik - x/d^2)^2$ simplifies to $-1/d^2$. I don't know how this is possible!
10. May 2, 2016
### blue_leaf77
You seem to be not getting the correct second derivative yet. Ok, could you please show how you got the first derivative $\frac{d}{dx}\psi(x)$?
11. May 2, 2016
### PeroK
You're also missing a trick in that $\int_{-\infty}^{\infty} x e^{-\lambda x^2} dx = 0$.
You should expand your quadratic term and the term in $x$ will vanish under the integral. That's the simplification you're missing.
But, as @blue_leaf77 says, you need to get your differentiation right first.
12. May 3, 2016
### DarkMatter5
I tried the derivative again but it doesn't simplify to what you are saying for some reason:
13. May 3, 2016
### PeroK
The second derivative looks right. But, you're still taking an indefinite integral for some reason:
$<p^2> =\int_{-\infty}^{\infty} \dots dx$
Which is a number, not a function of $x$.
14. May 3, 2016
### DarkMatter5
The final answer should be $\hbar^2 k^2 + \hbar^2/(2d^2)$. But I get the following:
15. May 3, 2016
### blue_leaf77
You made a mistake when calculating $v$. If you want to do this problem via integration by part, then you should have left $v$ as a function, not a number. In other words you shouldn't evaluate it yet with the given limits. I strongly suggest that you compute $\langle p^2 \rangle$ directly term by term using the relations you already have under the relevant equations part.
16. May 3, 2016
### DarkMatter5
They gave me the equation $\int_{-\infty}^{\infty} e^{-x^2/d^2}\, dx = d\sqrt{\pi}$. That is what I used to calculate $v$. I'm not sure how to get the other equations in my working out. I don't get $\int_{-\infty}^{\infty} x\, e^{-x^2/d^2}\, dx = 0$ anywhere in my answers so I can't use it.
17. May 3, 2016
### blue_leaf77
That's resulting in a number, right? Not a function, as how $v(x)$ should be.
You have to solve the following integral
$$\langle p^2 \rangle = \frac{-\hbar^2}{d\sqrt{\pi}} \left( \frac{1}{d^4}\int_{-\infty}^\infty x^2 e^{-x^2/d^2} dx - \left(k^2+\frac{1}{d^2}\right) \int_{-\infty}^\infty e^{-x^2/d^2} dx - \frac{2ik}{d^2} \int_{-\infty}^\infty x e^{-x^2/d^2} dx \right)$$
You can completely compute those three integrals using the formula in your original post.
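For reference, substituting the three standard integrals from the problem statement into that expression gives the quoted result:
$$\langle p^2 \rangle = \frac{-\hbar^2}{d\sqrt{\pi}} \left( \frac{1}{d^4}\cdot\frac{d^3\sqrt{\pi}}{2} - \left(k^2+\frac{1}{d^2}\right) d\sqrt{\pi} - 0 \right) = -\hbar^2\left(\frac{1}{2d^2} - k^2 - \frac{1}{d^2}\right) = \hbar^2 k^2 + \frac{\hbar^2}{2d^2}.$$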
18. May 3, 2016
### DarkMatter5
Hmm okay. How did you derive that integral? I'm confused as to how you got that equation.
19. May 3, 2016
### blue_leaf77
You derived the integral, at the bottom of the image in post #12. I merely rearrange the terms and add $-\hbar^2$ and the necessary integration element $dx$.
20. May 3, 2016
### DarkMatter5
Sorry, I really am not understanding this. Please could you explain how my equation turns into the integral? I can't figure out how to get from what I wrote in post #12 to the integral. I don't think I was taught how to do that.
|
2017-10-19 09:54:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8115753531455994, "perplexity": 915.0771243205069}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823260.52/warc/CC-MAIN-20171019084246-20171019104246-00507.warc.gz"}
|
https://encyclopediaofmath.org/index.php?title=Monte-Carlo_method&oldid=47895&printable=yes
|
# Monte-Carlo method
method of statistical trials
A numerical method based on simulation by random variables and the construction of statistical estimators for the unknown quantities. It is usually supposed that the Monte-Carlo method originated in 1949 (see [1]) when, in connection with work on the construction of atomic reactors, J. von Neumann and S. Ulam suggested using the apparatus of probability theory in the computer solution of applied problems. The Monte-Carlo method is named after the town of Monte-Carlo, famous for its casinos.
## Simulation by random variables with given distributions.
As a rule such a simulation is achieved by a transformation of one or more independent values of a random number $\alpha$, uniformly distributed in the interval $( 0 , 1 )$. The sequence of "sample" values of $\alpha$ is usually obtained on a computer by number-theoretic algorithms, of which the most widely used is the so-called method of residues, for example, in the form
$$u _ {0} = 1 ,\ \ u _ {n} \equiv u _ {n-1} 5 ^ {2p+1} \ ( \mathop{\rm mod} 2 ^ {m} ) ,\ \ \alpha _ {n} = u _ {n} \cdot 2 ^ {-m} .$$
Here $m$ is the order of the mantissa of the computer and
$$p = \max \{ {q } : {5 ^ {2q+1} < 2 ^ {m} } \} .$$
Numbers of this type are called pseudo-random numbers; they are used in statistical testing and in solving typical problems (see [2]–[6]). The length of the period of the above version of the method of residues is $2 ^ {m-2}$. Physical generators, tables of random numbers and quasi-random numbers are also used in the Monte-Carlo method. There are Monte-Carlo methods with a small number of playing parameters (see [7]).
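As an illustration, the recurrence above translates directly into code; the following Python sketch is not part of the original article, and the mantissa length m = 40 is an arbitrary choice made here for the example:

def residue_generator(m=40):
    # p = max{q : 5^(2q+1) < 2^m}, as defined above
    p = 0
    while 5 ** (2 * (p + 1) + 1) < 2 ** m:
        p += 1
    a = 5 ** (2 * p + 1)
    mod = 2 ** m
    u = 1  # u_0 = 1
    while True:
        u = (u * a) % mod     # u_n = u_{n-1} * 5^(2p+1) (mod 2^m)
        yield u / mod         # alpha_n = u_n * 2^(-m)

gen = residue_generator()
sample = [next(gen) for _ in range(5)]   # five pseudo-random numbers in (0, 1)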
The standard method for simulating a discrete random variable with distribution ${\mathsf P} \{ \xi = x _ {k} \} = p _ {k}$, $k = 0 , 1 \dots$ is as follows: Put $\xi = x _ {m}$ if the chosen value of $\alpha$ satisfies
$$\sum _ { k=0 } ^ { m-1 } p _ {k} \leq \alpha < \ \sum _ { k=0 } ^ { m } p _ {k} .$$
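In code, the standard discrete method amounts to walking along the partial sums of the $p _ {k}$ (a Python sketch added here for illustration, not from the original article):

import random

def sample_discrete(xs, ps):
    # put xi = x_m when the partial sums of p_k bracket alpha
    alpha = random.random()
    acc = 0.0
    for x, p in zip(xs, ps):
        acc += p
        if alpha < acc:
            return x
    return xs[-1]   # guard against floating-point round-off when sum(ps) = 1

print(sample_discrete(["a", "b", "c"], [0.2, 0.5, 0.3]))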
The standard method for simulating a continuous random variable (sometimes called the method of inverse functions) is to use the, easily verified, representation $\xi = F ^ { - 1 } ( \alpha )$, where $F$ is the distribution function with given density $f$. Sometimes randomization of the simulation is useful (in other words, the method of superposition), based on the expression
$$f ( x) = \sum _ { k } p _ {k} f _ {k} ( x) ;$$
here one first chooses a number $m$ with distribution ${\mathsf P} \{ m = k \} = p _ {k}$, and then obtains a sample value $\xi$ from the distribution with density $f _ {m}$. In other methods of randomization certain parameters of a deterministic method for solving the problem are considered as random variables (see [7]–[9]).
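A sketch of the method of superposition, with the component densities represented by user-supplied sampling routines (illustrative Python, not from the article):

import random

def sample_by_superposition(ps, component_samplers):
    # first choose the index m with P{m = k} = p_k, then sample from f_m
    alpha, acc = random.random(), 0.0
    for p, draw in zip(ps, component_samplers):
        acc += p
        if alpha < acc:
            return draw()
    return component_samplers[-1]()

# example: a mixture of the uniform densities on (0, 1) and (1, 2)
print(sample_by_superposition([0.3, 0.7], [random.random, lambda: 1 + random.random()]))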
Another, more general, method for simulating a continuous random variable is the method of exclusion (method of selection), at the basis of which lies the following result: If $( \xi , \eta )$ is uniformly distributed in a domain $G = \{ {( x , y ) } : {0 \leq y \leq g ( x) } \}$, then $f _ \xi ( x) = g ( x) / \overline{G}\;$. In the method of exclusion, choose a point $( \xi _ {0} , \eta )$ uniformly in a domain $G _ {1} \supset G$ and put $\xi = \xi _ {0}$ if $( \xi _ {0} , \eta ) \in G$; otherwise repeat the selection of $( \xi _ {0} , \eta )$, etc. For example, if $a \leq \xi \leq b$ and $g ( x) = c f ( x) \leq R$, one can take $\xi _ {0} = a + ( b - a ) \alpha _ {1}$, $\eta = R \alpha _ {2}$. The average number of operations in the method of exclusion is proportional to $\overline{G}\; _ {1} / \overline{G}\;$.
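The exclusion scheme just described can be sketched as follows (illustrative Python; f, a, b, c and R are as in the text, with c·f(x) ≤ R on [a, b]):

import random

def sample_by_exclusion(f, a, b, c, R):
    while True:
        xi0 = a + (b - a) * random.random()   # xi_0 = a + (b - a) * alpha_1
        eta = R * random.random()             # eta = R * alpha_2
        if eta <= c * f(xi0):                 # accept if (xi_0, eta) lies in G
            return xi0

# example: the density f(x) = 2x on [0, 1], with c = 1 and R = 2
print(sample_by_exclusion(lambda x: 2 * x, 0.0, 1.0, 1.0, 2.0))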
For many random variables, special representations of the form $\xi = \phi ( \alpha _ {1} \dots \alpha _ {n} )$ have been obtained. For example, the random variables
$$\sqrt {- 2 \mathop{\rm ln} \alpha _ {1} } \cdot \cos 2 \pi \alpha _ {2} \ \ \textrm{ and } \ \ \sqrt {- 2 \mathop{\rm ln} \alpha _ {1} } \cdot \sin 2 \pi \alpha _ {2}$$
have standard normal distributions and are independent; the random variable $- \mathop{\rm ln} ( \alpha _ {1} \dots \alpha _ {n} )$ has the gamma-distribution with parameter $n$; the random variable $\max ( \alpha _ {1} \dots \alpha _ {n} )$ is distributed with density $n x ^ {n-1}$, $0 \leq x \leq 1$; the random variable $\mathop{\rm exp} \{ \sum _ {k=1} ^ {n} ( \mathop{\rm ln} \alpha _ {k} ) / ( p + k - 1 ) \}$ has the beta-distribution with parameters $p , n$ (see [3]–[6]).
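For instance, the first of these representations (the normal pair) can be coded directly; the following Python sketch is added here for illustration only:

import math, random

def standard_normal_pair():
    a1, a2 = random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(a1))
    # both coordinates are independent standard normal variables
    return r * math.cos(2 * math.pi * a2), r * math.sin(2 * math.pi * a2)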
The standard algorithm for simulating a continuous random vector $\vec \xi = ( \xi _ {1} \dots \xi _ {n} )$ is to successively choose the values of its components from conditional distributions corresponding to the representation
$$f _ {\vec \xi } ( x _ {1} \dots x _ {n} ) = \ f _ {1} ( x _ {1} ) f _ {2} ( x _ {2} \mid x _ {1} ) \dots f _ {n} ( x _ {n} \mid x _ {1} \dots x _ {n-1} ) .$$
The method of exclusion extends to the multi-dimensional case without change; it is only necessary, in its formulation, to regard $\xi$, $\xi _ {0}$ and $x$ as vectors. A multi-dimensional normal vector can be simulated by using a special linear transformation of a vector of independent standard normal random variables. Special methods have also been developed for the approximate simulation of stationary Gaussian processes (see, for example, [3], [6]).
If, in a Monte-Carlo method calculation, random variables defined by a real phenomenon are simulated, then the calculation is a direct simulation (imitation) of this phenomenon. Computer simulations have been worked out for: the processes of transport, scattering and reproduction of particles: neutrons, gamma-quanta, photons, electrons, and others (see, for example, [11]–[18]); simulations of the evolution of ensembles of molecules for the solution of various problems in classical and quantum statistical physics (see, for example, [10]–[18]); simulation of queueing and industrial processes (see, for example, [2], [6], [18]); simulation of various random processes in technology, biology, etc. (see [18]). Simulation algorithms are usually carefully developed; for example, they tabulate complicated functions, modify standard procedures, etc. None-the-less, a direct simulation often cannot provide the necessary accuracy in the estimates of the required variables. Many methods have been developed for increasing the effectiveness of a simulation.
## Monte-Carlo algorithms for estimating multiple integrals.
Suppose it is required to estimate an integral $J = \int h ( x) d x$ with respect to the Lebesgue measure in an $s$- dimensional Euclidean space $X$ and let $f _ \xi ( x)$ be a probability density such that $J$ can be written as:
$$J = \int\limits _ { X } f _ \xi ( x) \frac{h ( x) }{f _ \xi ( x) } \ d x = {\mathsf E} \zeta ,$$
where $\zeta = h ( \xi ) / f _ \xi ( \xi )$. By computer simulation of $\xi$ it is possible to obtain $N$ sample values $x _ {1} \dots x _ {N}$. By the law of large numbers,
$$J \approx J _ {N} = \ \frac{1}{N} \sum _ {k=1} ^ { N } \frac{h ( x _ {k} ) }{f ( x _ {k} ) } .$$
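The following short sketch (added for illustration; the integrand and the sampling density are chosen freely) computes $J_{N}$ together with the empirical estimate of the mean-square error discussed next.

```python
# Plain Monte-Carlo estimate of J = ∫_0^1 h(x) dx with sampling density f.
import numpy as np

rng = np.random.default_rng(2)
h = lambda x: np.exp(-x)           # integrand; exact J = 1 - 1/e
f = lambda x: np.ones_like(x)      # sampling density: uniform on (0, 1)

N = 100_000
x = rng.random(N)
zeta = h(x) / f(x)

J_N = zeta.mean()
sigma_N = zeta.std(ddof=1) / np.sqrt(N)   # empirical (D zeta / N)^(1/2)
print(J_N, "+/-", sigma_N, "exact:", 1 - np.exp(-1))
```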
Simultaneously it is possible to estimate the mean-square error in $J _ {N}$, that is, the quantity $\sigma _ {N} = ( {\mathsf D} \zeta / N ) ^ {1/2}$, and to approximately construct a suitable confidence interval for $J$. By the choice of the density $f$ it is possible to obtain estimates with smaller variance. For example, if $0 \leq m _ {1} \leq h / f \leq m _ {2} < + \infty$, then ${\mathsf D} \zeta \leq ( m _ {2} - m _ {1} ) ^ {2} / 4$, and if $f = h / J$, then ${\mathsf D} \zeta = 0$. The corresponding algorithm is called essential sampling (choice by importance, or importance sampling). Another common modification, the method of selection of the principal part, occurs when a function $h _ {0} \approx h$ is determined whose integral is known. It is sometimes useful to combine Monte-Carlo methods and classical quadratures (cf. Quadrature) in so-called random quadrature formulas, the basic idea of which is that the nodes and coefficients in any quadrature sum (for example, in interpolation) are chosen randomly from a distribution providing an unbiased estimator of the integral [3]. Particular cases of these formulas are: the so-called method of stratified sampling, in which the nodes are chosen one in each part of a fixed partition of the domain of integration and the coefficients are proportional to the corresponding volumes; and the so-called method of symmetric sampling, which, in the case of integration over $( 0 , 1 )$, is defined by the expression (see [10])
$$2 J = {\mathsf E} \left [ \frac{h ( \xi ) + h ( 1 - \xi ) }{f _ \xi ( \xi ) } \right ] .$$
Here the order of convergence of the Monte-Carlo method is improved and, in certain cases, becomes best possible in the class of problems being considered.

In general, the domain of integration is partitioned into parallelepipeds. In each parallelepiped the value of the integral is calculated as the average of the value at a random point and the value at the point symmetric to it relative to the centre of the parallelepiped.
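For comparison, the small sketch below (our own example; the integrand $e^{x}$ and the auxiliary density are assumptions made for illustration) contrasts plain sampling, essential sampling and symmetric sampling on $(0, 1)$; all three estimators have the same mean, but their variances differ.

```python
# Comparing three estimators of J = ∫_0^1 e^x dx = e - 1.
import numpy as np

rng = np.random.default_rng(3)
h = lambda x: np.exp(x)
N = 50_000
u = rng.random(N)

plain = h(u)                                   # f = 1 on (0, 1)

# Essential (importance) sampling with f(x) = (1 + x) / 1.5, which roughly follows h.
x = np.sqrt(1.0 + 3.0 * rng.random(N)) - 1.0   # inverse-CDF sample of f
importance = h(x) / ((1.0 + x) / 1.5)

# Symmetric sampling: average of the integrand at u and 1 - u.
symmetric = 0.5 * (h(u) + h(1.0 - u))

for name, z in [("plain", plain), ("importance", importance), ("symmetric", symmetric)]:
    print(name, z.mean(), z.var())             # all means ≈ e - 1 ≈ 1.718
```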
A number of modifications of Monte-Carlo methods are based on the (perhaps formal) representation of the required value as a double integral
$$J = \int\limits _ { X } \int\limits _ { Y } f ( x , y ) h ( x , y ) d x d y = \ {\mathsf E} \zeta ,$$
where $\zeta = h ( \xi , \eta )$ and the vector $( \xi , \eta )$ is distributed with density $f ( x , y )$. It is known that ${\mathsf E} ( \zeta ) = {\mathsf E} {\mathsf E} ( \zeta \mid \xi )$ and that
$$\tag{1 } {\mathsf D} \zeta = {\mathsf D} {\mathsf E} ( \zeta \mid \xi ) + {\mathsf E} {\mathsf D} ( \zeta \mid \xi ) = A _ {1} + A _ {2} ,$$
where ${\mathsf E} ( \zeta \mid \xi )$ is the conditional mathematical expectation and ${\mathsf D} ( \zeta \mid \xi )$ is the conditional variance of $\zeta$ given a fixed value of $\xi$. Formula (1) is widely used in Monte-Carlo methods. In particular, it shows that ${\mathsf D} {\mathsf E} ( \zeta \mid \xi ) < {\mathsf D} \zeta$, that is, analytic averaging over any variable increases the accuracy of the Monte-Carlo method. However, in this connection, the amount of computation may be significantly increased. The computer time necessary to achieve a given accuracy is proportional to $t {\mathsf D} \zeta$, where $t$ is the average time it takes to obtain one value of $\zeta$. The method of splitting is optimal with respect to this criterion. Its simplest version uses the "unbiased" estimator
$$\zeta _ {n} = \ \frac{1}{n} \sum _ {k=1} ^ { n } h ( \xi , \eta _ {k} ) ,$$
where $\eta _ {1} \dots \eta _ {n}$ are conditionally-independent and distributed as $\eta$ for a fixed value of $\xi$. Using (1) it is possible to obtain an optimal value
$$n = \left [ \frac{A _ {2} }{A _ {1} } \frac{t _ {1} }{t _ {2} } \right ] ^ {1/2} ,$$
where $t _ {1} , t _ {2}$ are the average computing times corresponding to the samples $\xi , \eta$ (see, for example, [4]).
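A toy sketch of the method of splitting is given below (the particular model for $\xi$, $\eta$ and the function $h$ is invented for illustration): increasing the number $n$ of conditional samples of $\eta$ per value of $\xi$ leaves the mean unchanged and reduces the variance of $\zeta_{n}$ towards $A_{1}$.

```python
# Method of splitting: average h(xi, eta) over n conditional samples of eta.
import numpy as np

rng = np.random.default_rng(4)

def zeta_n(n, outer=20_000):
    xi = rng.random(outer)                                 # outer variable, uniform on (0, 1)
    eta = xi[:, None] + rng.standard_normal((outer, n))    # n conditional samples of eta | xi
    return (eta ** 2).mean(axis=1)                         # h(xi, eta) = eta^2, averaged over eta

for n in (1, 4, 16):
    z = zeta_n(n)
    print(n, z.mean(), z.var())   # mean stays ≈ 4/3, variance decreases with n
```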
If the integrand depends on a parameter, it is expedient to use the method of dependent trials, that is, to estimate the integral for various values of the parameter at one and the same random points [20]. An important property of the Monte-Carlo method is the comparatively weak dependence of the mean-square error $\sigma _ {N}$ on the dimension of the integral; moreover, the order of convergence relative to the number of points $N$ is always one and the same: $N ^ {-1/2}$. This allows one to estimate (after a preliminary transformation of the problem) integrals of very high, and even infinite, multiplicity. For example, methods have been worked out for the estimation of Wiener integrals [19].
## Monte-Carlo algorithms for solving integral equations of the second kind.
Suppose it is required to estimate a linear functional $J _ {h} = ( \phi , h )$, where $\phi = K \phi + f$ and the integral operator $K$ with kernel $k ( x ^ \prime , x )$ satisfies a condition providing convergence of the Neumann series: $\| K ^ {n _ {0} } \| < 1$. A Markov chain $\{ x _ {n} \}$ is defined by an initial density $\pi ( x)$ and a transition density $p ( x ^ \prime , x ) = p ( x ^ \prime \rightarrow x )$; the probability of termination of the chain at $x ^ \prime$ is equal to
$$g ( x ^ \prime ) = 1 - \int\limits p ( x ^ \prime , x ) d x .$$
Let $N$ be the random index of the last state. Further, let a functional of the trajectories of the chain with expectation $J _ {h}$ be defined. Most often the so-called collision estimator
$$\xi = \sum _ {n=0} ^ { N } Q _ {n} h ( x _ {n} )$$
is used, where
$$Q _ {0} = \ \frac{f ( x _ {0} ) }{\pi ( x _ {0} ) } ,\ \ Q _ {n} = Q _ {n-1} \frac{k ( x _ {n-1} , x _ {n} ) }{p ( x _ {n-1} , x _ {n} ) } .$$
If $p ( x ^ \prime , x ) \neq 0$ for $k ( x ^ \prime , x ) \neq 0$ and $\pi ( x) \neq 0$ for $f ( x) \neq 0$, then under certain additional conditions
$${\mathsf E} \xi = \ \sum _ {n=0} ^ \infty ( K ^ {n} f , h ) = \ ( \phi , h ) = \ \int\limits _ { X } \phi ( x) h ( x) d x$$
(see [3]–[5]). The possibility of attaining a small variance in the case of constant sign is ensured by the following result: If
$$\pi ( x) = \ \frac{f ( x) \phi ^ {*} ( x) }{( f , \phi ^ {*} ) } \ \ \textrm{ and } \ \ p ( x ^ \prime , x ) = \ \frac{k ( x ^ \prime , x ) \phi ^ {*} ( x) }{[ K ^ {*} \phi ^ {*} ] ( x ^ \prime ) } ,$$
where $\phi ^ {*} = K ^ {*} \phi ^ {*} + h$, then ${\mathsf D} \xi = 0$ and ${\mathsf E} \xi = J _ {h}$ (see [4]). If a suitable Markov chain is simulated on a computer, statistical estimates of linear functionals in the solution of an integral equation of the second kind can be obtained. This gives the possibility of locally estimating the solution on the basis of the representation $\phi ( x) = ( \phi , h _ {x} ) + f ( x)$, where $h _ {x} ( x ^ \prime ) = k ( x ^ \prime , x )$. In a number of cases, alongside Monte-Carlo methods, number-theoretic methods are applied in order to solve these problems (see [21]). A Monte-Carlo method for estimating the first eigenvalue of an integral operator has been realized by an iteration method on the basis of the relation [22]
$${\mathsf E} [ Q _ {n} h ( x _ {n} ) ] = \ ( K ^ {n} f , h ) .$$
All these results can be almost automatically extended to systems of linear algebraic equations of the form $x + H x = h$ (see [23]).
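As a hedged illustration of the collision estimator, the sketch below specializes it to a small linear system $\phi = A\phi + f$ (the matrices and vectors are invented for the example, with spectral radius of $A$ below one) and compares the Monte-Carlo estimate of $(\phi, h)$ with a direct solve.

```python
# Collision estimator for (phi, h), where phi = A phi + f, using a Markov
# chain on the index set with termination probability 0.25 per step.
import numpy as np

rng = np.random.default_rng(5)
A = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.3, 0.1],
              [0.0, 0.2, 0.2]])
f = np.array([1.0, 2.0, 0.5])
h = np.array([1.0, 0.0, 1.0])

exact = h @ np.linalg.solve(np.eye(3) - A, f)   # (phi, h) by direct solve

d = len(f)
pi = np.full(d, 1.0 / d)         # initial distribution
P = np.full((d, d), 0.25)        # unconditional transition probabilities (row sum 0.75)
absorb = 1.0 - P.sum(axis=1)     # termination probability per state (0.25)

def one_chain():
    i = rng.choice(d, p=pi)
    Q = f[i] / pi[i]             # Q_0 = f(x_0) / pi(x_0)
    total = Q * h[i]
    while rng.random() > absorb[i]:                 # continue with probability 0.75
        j = rng.choice(d, p=P[i] / P[i].sum())      # conditional next state
        Q *= A[j, i] / P[i, j]                      # weight update k / p
        total += Q * h[j]
        i = j
    return total

est = np.mean([one_chain() for _ in range(200_000)])
print(est, "exact:", exact)
```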
## Modifications of Monte-Carlo methods in radiative transport theory.
(See [11]–[17].) The density of the average number of particle collisions in a phase space with coordinates $\mathbf r$ and velocities $\omega$ satisfies an integral equation of the second kind; its kernel in the single-velocity case has the form
$$\frac{\sigma _ {s} ( \mathbf r ^ \prime ) g ( \mu ) \mathop{\rm exp} ( - \tau ( \mathbf r ^ \prime , \mathbf r ) ) \sigma ( \mathbf r ) }{\sigma ( \mathbf r ^ \prime ) 2 \pi | \mathbf r ^ \prime - \mathbf r | ^ {2} } \delta \left ( \omega - \frac{\mathbf r - \mathbf r ^ \prime }{| \mathbf r - \mathbf r ^ \prime | } \right ) .$$
Here $\sigma _ {s} ( \mathbf r )$ is the scattering coefficient (section), $\sigma ( \mathbf r )$ is the relaxation coefficient, $g ( \mu )$ is the scattering indicatrix, and $\tau ( \mathbf r ^ \prime , \mathbf r )$ is the optical length of a path from $\mathbf r ^ \prime$ to $\mathbf r$ (see [3], [4]). For the construction of estimates with small variances one uses, for example, asymptotic solutions of the adjoint transport equation [4]; the simplest algorithm of this type is the so-called exponential transformation (see [4], [11]). A modification of a local estimate of the flow of particles has been developed (see [3], [4], [11]–[13], [17], [18]). Using a simulation of a Markov chain (for example, the physical transport process in a certain medium) it is possible to simultaneously obtain dependent estimators of functionals for different values of the parameters; by differentiating the "weight" $Q _ {n}$ it is sometimes possible to obtain unbiased estimators of the corresponding derivatives (see [4], [12]). This provides an opportunity to use Monte-Carlo methods in solving certain inverse problems [12]. For the solution of problems in transport theory, "bifurcation" of the trajectory and analytic averaging are effective [11]. The simulation of trajectories of particles in compound media can sometimes be essentially simplified by the method of a maximal section (see [3]–[5]).
## Monte-Carlo algorithms for solving difference and differential equations.

Such algorithms are constructed on the basis of the corresponding integral relations. For example, the standard five-point difference approximation for the Laplace equation has the form of the formula for the complete mathematical expectation of a symmetric random walk over a grid with absorption at the boundary (see, for example, [2], [3]). A continuous analogue of this formula is the relation
$$\tag{2 } u ( P) = \ \frac{\int\limits _ {S ( P) } u ( \mathbf r ( s) ) d s }{\int\limits _ {S ( P) } d s } ,$$
where the integral is taken over the surface of a sphere lying entirely within the given domain and with centre at $P$. Formula (2) and other similar relations provide an opportunity to use isotropic "random walk on spheres" when solving elliptic and parabolic equations (see [24], [4]). Monte-Carlo methods are effective, for example, for estimating the solution of multi-dimensional boundary value problems at a point.
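A minimal "random walk on spheres" sketch for the Dirichlet problem of the Laplace equation on the unit disc is given below (our own illustration; the harmonic test function and the evaluation point are assumptions made for the example).

```python
# Walk on spheres: u(P) is estimated as the expectation of the boundary
# values at the exit point of an isotropic walk over maximal inscribed circles.
import numpy as np

rng = np.random.default_rng(6)

def boundary_value(p):
    return p[0] ** 2 - p[1] ** 2              # boundary data of the harmonic function x^2 - y^2

def walk_on_spheres(p, eps=1e-3):
    p = np.array(p, dtype=float)
    while True:
        r = 1.0 - np.linalg.norm(p)           # distance to the boundary of the unit disc
        if r < eps:
            return boundary_value(p / np.linalg.norm(p))   # project onto the boundary
        theta = 2.0 * np.pi * rng.random()
        p = p + r * np.array([np.cos(theta), np.sin(theta)])

P0 = (0.3, 0.2)
est = np.mean([walk_on_spheres(P0) for _ in range(20_000)])
print(est, "exact:", P0[0] ** 2 - P0[1] ** 2)  # exact value 0.05
```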
A simulation of Markov branching processes allows one to construct estimates of the solution of certain non-linear equations, for example, the Boltzmann equation in the theory of rarefied gases [3].
#### References
[1] J. von Neumann, Nat. Bureau Standard Appl. Math. Series, 12 (1951) pp. 36–38
[2] N.P. Buslenko, et al., "The method of statistical trials (the Monte-Carlo method)", Moscow (1962) (In Russian)
[3] S.M. Ermakov, "Die Monte-Carlo Methode und verwandte Fragen", Deutsch. Verlag Wissenschaft. (1975) (Translated from Russian)
[4] G.A. Mikhailov, "Some questions in the theory of Monte-Carlo methods", Novosibirsk (1971) (In Russian)
[5] I.M. Sobol', "Numerical Monte-Carlo methods", Moscow (1973) (In Russian)
[6] Yu.G. Pollyak, "Probabilistic simulation on computers", Moscow (1971) (In Russian)
[7] N.S. Bakhvalov, "On optimal convergence estimates for quadrature processes and integration methods of Monte-Carlo type on function classes", Numerical Methods for Solving Differential and Integral Equations and Quadrature Formulas, Moscow (1964) pp. 5–63 (In Russian)
[8] N.S. Bakhvalov, "An estimate of the remainder term in quadrature formula" Comp. Math. Math. Phys., 1:1 (1961) pp. 68–82; Zh. Vychisl. Mat. i Mat. Fiz., 1:1 (1961) pp. 64–77
[9] N.S. Bakhvalov, "Approximate computation of multiple integrals" Vestnik Moskov. Gos. Univ. Ser. Mat. Mekh. Astronom. Fiz. Khim., 4 (1959) pp. 3–18 (In Russian)
[10] J.M. Hammersley, D.C. Handscomb, "Monte-Carlo methods", Methuen (1964)
[11] Monte-Carlo methods and problems of radiative transport, Moscow (1967)
[12] G.I. Marchuk, et al., "The Monte-Carlo method in atmospheric optics", Novosibirsk (1976) (In Russian)
[13] J. Spanier, E. Gelbard, "Monte-Carlo principles and neutron transport problems", Addison-Wesley (1969)
[14] V.V. Chavchanidze, Izv. Akad. Nauk SSSR Ser. Fiz., 19:6 (1955) pp. 629–638
[15] The penetration of radiation through non-uniformity in protection, Moscow (1968) (In Russian)
[16] A.D. Frank-Kamenetskii, Atomnaya Energiya, 16:2 (1964) pp. 119–122
[17] M.H. Kalos, Nuclear Sci. and Eng., 33 (1968) pp. 284–290
[18] Monte-Carlo methods and their application. Reports III All-Union Conf. Monte-Carlo Methods, Novosibirsk (1971)
[19] I.M. Gelfand, A.S. Frolov, N.N. Chentsov, "The computation of continuous integrals by the Monte-Carlo method" Izv. Vuz. Ser. Mat., 5 (1958) pp. 32–45 (In Russian)
[20] A.S. Frolov, N.N. Chentsov, "On the calculation of definite integrals dependent on a parameter by the Monte-Carlo method" USSR Comp. Math. Math. Phys., 2:4 (1962) pp. 802–807; Zh. Vychisl. Mat. i Mat. Fiz., 2:4 (1962) pp. 714–717
[21] N.M. Korobov, "Number-theoretic methods in applied analysis", Moscow (1963) (In Russian)
[22] V.S. Vladimirov, "Monte-Carlo methods as applied to the calculation of the lowest eigenvalue and the associated eigenfunction of a linear integral equation" Theor. Probab. Appl., 1:1 (1956) pp. 101–116; Teor. Veroyatnost. i Primenen., 1:1 (1956) pp. 113–130
[23] J.H. Curtiss, "Monte-Carlo methods for the iteration of linear operators" J. Math. Phys., 32:4 (1954) pp. 209–232
[24] M.E. Muller, "Some continuous Monte-Carlo methods for the Dirichlet problem" Ann. Math. Stat., 27:3 (1956) pp. 569–589
|
2022-08-13 09:42:51
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8994767069816589, "perplexity": 433.1890168560983}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571911.5/warc/CC-MAIN-20220813081639-20220813111639-00646.warc.gz"}
|
https://www.doubtnut.com/question-answer-physics/the-level-of-water-in-a-tank-is-5-m-high-a-hole-of-area-of-cross-section-1-cm2-is-made-at-the-bottom-643194481
|
# The level of water in a tank is 5 m high. A hole of area of cross section 1 cm^(2) is made at the bottom of the tank. The rate of leakage of water for the hole in m^(3)s^(-1) is (g=10ms^(-2))
Text Solution
(A) 10^(-3) m^(3)s^(-1)  (B) 10^(-4) m^(3)s^(-1)  (C) 10 m^(3)s^(-1)  (D) 10^(-2) m^(3)s^(-1)
Rate of leakage of water from the hole = Av = A√(2gh) = 10^(-4) × √(2 × 10 × 5) = 10^(-3) m^(3)s^(-1). Hence option (A) is correct.
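The arithmetic can be checked in a few lines (an added illustration; values taken from the problem statement):

```python
# Torricelli efflux rate Q = A * sqrt(2 g h).
import math

A = 1e-4    # hole area in m^2 (1 cm^2)
h = 5.0     # water level in m
g = 10.0    # m/s^2

Q = A * math.sqrt(2 * g * h)
print(Q)    # 1e-3 m^3/s
```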
|
2021-12-06 23:51:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.232460618019104, "perplexity": 6986.62771063994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363327.64/warc/CC-MAIN-20211206224536-20211207014536-00098.warc.gz"}
|
http://clay6.com/qa/11093/a-bus-is-moving-with-a-speed-of-10-m-s-on-a-straight-road-a-scooterist-wish
|
# A bus is moving with a speed of $10 m/s$ on a straight road. A scooterist wishes to overtake the bus in $100 s$. If the bus is at a distance of $1 \;km$ from the scooterist with what speed should the scooterist chase the bus?
$(a)\;40\; m/s \quad (b)\;25 \;m/s \quad (c)\;10 \;m/s \quad(d)\;20\; m/s$
Let $V_{SB}$=relative velocity of scooterist with respect to bus
$V_S$=Velocity of Scooterist
$V_B$=Velocity of Bus
$V_{SB}=V_S-V_B=>V_S=V_{SB}+V_B$
Since the scooterist has to cross the bus $1km$ distance in $100$ second.
$V_{SB}=\large\frac{1000\;m}{100\;s}=10\;m/s$
Therefore $V_S=10+10=20 m/s$
Hence (d) is the correct answer.
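For completeness, the same arithmetic in a short script (added here; values from the problem):

```python
# Required scooterist speed = relative speed to close the gap + bus speed.
distance = 1000.0   # m, initial gap between scooterist and bus
time = 100.0        # s, time allowed to overtake
v_bus = 10.0        # m/s

v_rel = distance / time
v_scooter = v_rel + v_bus
print(v_scooter)    # 20.0 m/s
```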
|
2018-05-27 19:23:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5946150422096252, "perplexity": 1941.860045356926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794870082.90/warc/CC-MAIN-20180527190420-20180527210420-00453.warc.gz"}
|
https://math.stackexchange.com/questions/1182953/definite-integral-involving-logarithm-of-cosine
|
# Definite integral involving logarithm of cosine
Does anyone know the provenance of or the answer to the following integral
$$\int_0^\infty\ \frac{\ln|\cos(x)|}{x^2} dx$$
Thanks.
• Have you tried complexifying the integral? Mar 9 '15 at 22:23
• He may not have tried it, but I did, and I got nowhere. However, I did not notice that I could be integrating the analytic complex function $\frac{1}{2} \frac{\ln(\cos^2z)}{z^2}$ which has poles on the real axis at odd multiples of $\pi/2$. Maybe this can be done in that way. Mar 9 '15 at 22:55
Lucian's answer is just fine (as always), but from $$\sum_{n\in\mathbb{Z}}\frac{1}{(x+n\pi)^2}=\frac{1}{\sin^2 x}\tag{1}$$ for any $x\in(-\pi,\pi)$ it also follows that: $$I = \frac{1}{2}\int_{0}^{+\infty}\frac{\log\cos^2 x}{x^2}\,dx = \frac{1}{2}\int_{-\pi/2}^{\pi/2}\frac{\log\cos x}{\sin^2 x}\,dx=-\frac{1}{2}\int_{0}^{+\infty}\frac{\log(1+t^2)}{t^2}\,dt$$ by replacing $x$ with $\arctan t$ in the last step. Integration by parts now leads to: $$I = -\int_{0}^{+\infty}\frac{dt}{1+t^2} = \color{red}{-\frac{\pi}{2}}.\tag{2}$$ Someone may ask now: How to prove $(1)$?
Well, for such a purpose, start from the Weierstrass product for the sine function: $$\frac{\sin z}{z}=\prod_{n\geq 1}\left(1-\frac{z^2}{\pi^2 n^2}\right)$$ then consider the logarithm of both sides and differentiate it twice with respect to $z$.
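As an added numerical sanity check (not part of the original answers), the reduced integral can be evaluated with SciPy's `quad`; it agrees with $-\pi/2$ to within the quadrature tolerance.

```python
# Verify I = -1/2 ∫_0^∞ log(1 + t^2)/t^2 dt = -π/2 numerically.
import numpy as np
from scipy.integrate import quad

def integrand(t):
    return np.log1p(t * t) / (t * t) if t > 0 else 1.0  # limit value at t = 0

val, err = quad(integrand, 0.0, np.inf)
print(-0.5 * val, -np.pi / 2)   # both ≈ -1.5707963
```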
Hint: Let $I(n)=\displaystyle\int_0^\infty\frac{1-\cos^{2n}x}{x^2}~dx.~$ Prove first that $I(n)=n\pi~\dfrac{\displaystyle{2n\choose n}}{4^n}~,~$ then evaluate $I'(0)$.
• (+1) Maybe it is not easy to recognize at first sight that $I$ is given by the derivative of a beta function at some point, but I agree this is the fastest technique. Mar 10 '15 at 1:46
This integral is equal to $$\frac{1}{2} \int_0^\infty \frac{\ln (\cos^2 x)}{x^2} dx = \frac{1}{2}(-\pi) = -\frac{\pi}2$$
The easiest place to remember seeing this is Gradshteyn and Ryzhik, where it appears as definite integral 4.322.6. The source quoted there is Fikhtengol'ts, G. M. (http://en.wikipedia.org/wiki/Grigorii_Fichtenholz on Wikipedia), in the book Kurs differentsial'nogo i integral'nogo ischisleniya, Volume 2, page 686.
The book is pictured on the WP page. Quoting the Wiki description, "Fichtenholz's books about analysis are widely used in Eastern European and Chinese universities due to its exceptionality of detailed and well-ordered presentation of material about mathematical analysis. "
If this is an example of content in an introductory class on calculus, I think I am glad it has not been translated into English for me to have read as an undergraduate!
• Honestly, I think this integral can be solved also without mentioning obscure references (even if they are really good books). Mar 10 '15 at 1:50
• nice catch - belated thanks Feb 27 '18 at 22:37
|
2021-11-30 19:30:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8902391791343689, "perplexity": 348.08831620309184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964359065.88/warc/CC-MAIN-20211130171559-20211130201559-00277.warc.gz"}
|
https://www.electricalexams.co/current-transformers-design-features-mcq/
|
# Current Transformers Design Features MCQ || Design Features of Current Transformers
1. The primary winding of a current transformer is connected in ______ with the line carrying the main current.
1. Parallel
2. Series
3. Either Series or Parallel
4. None of the above
Explanation:
The primary winding of a current transformer is connected in series with the line carrying the main current. The secondary winding of the CT, where the current is many times stepped down, is directly connected across an ammeter, for measurement of current; or across the current coil of a wattmeter, for measurement of power.
2. The ratio error in the current transformer is attributed to:
1. Leakage flux
2. Magnetizing component of no-load current
3. Power factor of the primary
4. The energy component of excitation current
Explanation:
Ratio error in current transformer:
• In the current transformer, the primary current Ip should be exactly equal to the secondary current multiplied by the turns ratio.
• But in practice there is a difference between the primary current and the secondary current multiplied by the turns ratio.
• This difference is contributed by the core excitation or magnetizing component of no-load current.
• The error in the current transformer introduced due to this difference is called current error or ratio error.
3. The secondary leakage reactance of a C.T. _______
1. Increases its ratio error
2. Decreases its ratio error
3. Has no effect on its ratio error
4. Increases the impedance of the circuit
Explanation:
Ratio error in current transformer:
• In the current transformer, the primary current Ip should be exactly equal to the secondary current multiplied by the turns ratio.
• But in practice there is a difference between the primary current and the secondary current multiplied by the turns ratio.
• This difference is contributed by the core excitation or magnetizing component of no-load current.
• The error in the current transformer introduced due to this difference is called current error or ratio error.
• The windings in a current transformer must be kept close so that the secondary leakage reactance is minimum.
• So the secondary leakage reactance of a C.T. increases its ratio error.
4. What will happen if the secondary of a current transformer is open-circuited?
1. Hot because of heavy iron losses
2. Hot because primary will carry heavy current
3. Cool as there is no secondary current
4. Depends on other parameters
Answer.1. Hot because of heavy iron losses
Explanation:
The secondary side of a current transformer should never be kept in open condition because, when kept open, there is a very high voltage found across the secondary side. This high voltage causes a high magnetizing current to build upon the secondary side which in turn causes high flux and makes the core saturate.
If the secondary of the current transformer is made open-circuited the transformer temperature will rise to a higher value because of heavy iron losses taking place in the circuit due to high flux density.
When the secondary winding of a current transformer is open-circuited with the primary winding energized, the large voltage may act as a safety hazard for the operators and many even rupture the insulation.
5. How many types are the current transformers classified into?
1. 2
2. 3
3. 4
4. 5
Explanation:
The current transformers are generally classified into two types:
1. Bar type
2. Wound type
• Wound-type Current Transformer: The transformer’s primary winding is physically connected in series with the conductor that carries the measured current flowing in the circuit. The magnitude of the secondary current is dependent on the turn ratio of the transformer. These are used at very low current ratios such as summing applications. Because of the higher values of primary ampere-turns, high accuracy can be achieved by these CTs.
• Bar-type Current Transformer: This type of current transformer uses the actual cable or bus bar of the main circuit as the primary winding, which is equivalent to a single turn. They are fully insulated from the high operating voltage of the system and are usually bolted to the current-carrying device.
6. The two types of indoor type current transformers are:
1. Bushing type and Clamp-on type
2. Wound type and Clamp-on type
3. Wound type and Bar type
4. Bar type and Bushing type
Answer.3. Wound type and Bar type
Explanation:
Indoor type current transformers are generally used for low voltage circuits and are further classified into wound type, bar type, and window type transformers.
• Wound-type Current Transformer: The transformer’s primary winding is physically connected in series with the conductor that carries the measured current flowing in the circuit. The magnitude of the secondary current is dependent on the turn ratio of the transformer. These are used at very low current ratios such as summing applications. Because of the higher values of primary ampere-turns, high accuracy can be achieved by these CTs.
• Bar-type Current Transformer: This type of current transformer uses the actual cable or bus bar of the main circuit as the primary winding, which is equivalent to a single turn. They are fully insulated from the high operating voltage of the system and are usually bolted to the current-carrying device.
• The accuracy of bar-type CT decreases due to the magnetization of the core which requires a large fraction of the total ampere-turns at low current ratings.
• Window-type Current Transformer: These are installed around the primary conductor (or line conductor) because these are constructed with no primary. These are the most common CTs available in solid and split-core constructions. Before installing solid window CT, the primary conductor must be disconnected while in the case of split-core it can directly installed around the conductor without disconnecting it.
7. The ratio error of the current transformer is defined as
1. Ratio error = Kn/R
2. Ratio error = (Kn – R)/R
3. Ratio error = Kn – R
4. Ratio error = 1/R
Answer.2. Ratio error = (Kn – R)/R
Explanation:
The actual ratio of transformation varies with operating conditions, and the error in the secondary current is defined as
Percentage ratio error $= \frac{K_n - R}{R} \times 100$
Kn is the nominal ratio
R is the actual ratio
It can be reduced by secondary turns compensation i.e. slightly decreasing the secondary turns.
8. The primary of a _______ should never be energized when its secondary is open-circuited.
1. Potential transformer
2. Current transformer
3. Autotransformer
4. Power transformer
Explanation:
The secondary side of the current transformer is always kept short-circuited in order to avoid core saturation and high voltage induction so that the current transformer can be used to measure high values of currents.
• The current transformer works on the principle of shorted secondary.
• It means that the burden on the system Zb is equal to 0.
• Thus, the current transformer produces a current in its secondary which is proportional to the current in its primary.
9. Secondary and primary windings of current transformer consist of ________
1. Copper turns
2. 14 S.W.G copper wire and copper strip respectively
3. Iron coils wound around
4. Laminations
Answer.2. 14 S.W.G copper wire and copper strip respectively
Explanation:
Current transformers are usually made of ring core construction because ring cores are jointless so their reluctance is less and moreover they are more robust. Thus ring cores are capable of withstanding the large forces that are developed in the event of a short circuit. The magnetic materials that are used for the construction of current transformer cores include
(i) hot-rolled silicon steel
(ii) cold-rolled grain-oriented silicon steel
(iii) nickel-iron alloys.
The windings should be close together in order to reduce the secondary leakage reactance as this increases the ratio error. No. 14. S.W.G. copper wire is used on the secondary side and a copper strip is used as the bar primary.
10. A current-carrying conductor is wrapped eight times around the jaw of a clamp-on meter that reads 50 A. What will be the actual value of the conductor current?
1. 400 A
2. 6.25 A
3. 50 A
4. 12.5 A
Answer.2. 6.25 A
Explanation:
Given: the conductor is wrapped 8 times around the jaw and the meter reads 50 A.
The clamp-on meter is calibrated for a single pass of the conductor, so wrapping the conductor 8 times multiplies the flux linking the jaw, and hence the reading, by 8.
Actual conductor current = Meter reading / Number of wraps
= 50 / 8
= 6.25 A
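The same arithmetic as a quick script (added for illustration):

```python
# Clamp-on meter reading scales with the number of conductor passes.
reading = 50.0   # A, value shown by the clamp-on meter
wraps = 8        # times the conductor passes through the jaw

actual_current = reading / wraps
print(actual_current)   # 6.25 A
```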
|
2022-05-22 01:06:11
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5856977105140686, "perplexity": 2153.821267713948}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543264.49/warc/CC-MAIN-20220522001016-20220522031016-00410.warc.gz"}
|
https://math.stackexchange.com/questions/3101429/exercise-in-algebra-involving-binary-operation
|
Exercise in algebra involving binary operation
I started doing algebra exercises before the course starts, and I'm befuddled by the following problem, where it is asked to show whether the operation is associative and commutative and whether it has an identity element and inverse elements.
Let set $$X\neq\emptyset$$ and $$\mathcal{P}(X)$$ denote all its subsets. Let $$*$$ be binary operation on $$\mathcal{P}(X)$$ which is defined with following equation:
$$A*B = A\cup B$$.
It is commutative: $$A\cup B = B \cup A$$
And also associative for $$C \in \mathcal{P}(X)$$: $$A\cup (B\cup C ) = (A\cup B)\cup C$$
Identity element $$e = \emptyset$$, for every set holds $$\emptyset$$. So, $$A \cup e = \emptyset^{^*}$$
Now I'm not sure about inverse element such that $$A \cup B = e$$
Is it possible in advanced algebra ?
I thought of something like $$B:= e \setminus A$$
*edit: Identity element equation should be $$A \cup e = A$$
Except for $$e=\{\}$$ no set has inverse.
If you take any nonempty set $$A$$ then union of $$A$$ with any set $$B$$ is nonempty. So $$A$$ can not have inverese. So $$e$$ is only one with inverse.
The power set $$\mathcal{P}(X)$$ with operation $$*$$ has the same algebraci structure as the set of natural numbers $$\mathbb{N}_{0}$$ with operation $$+$$, namely monoid.
|
2020-05-28 19:51:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 24, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.892202615737915, "perplexity": 157.9730705184762}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347399830.24/warc/CC-MAIN-20200528170840-20200528200840-00498.warc.gz"}
|
https://core.ac.uk/display/2186338
|
## Interlayer repulsion and decoupling effects in stacked turbostratic graphene flakes
### Abstract
We have explored the electronic properties of stacked graphene flakes with the help of the quantum chemistry methods. We found that the behavior of a bilayer system is governed by the strength of the repulsive interactions that arise between the layers as a result of the orthogonality of their $\pi$ orbitals. The decoupling effect, seen experimentally in AA stacked layers, is a result of the repulsion being dominant over the orbital interactions, and the observed layer misorientation of 2$^{\circ}-5^{\circ}$ is an attempt by the system to suppress that repulsion and stabilize itself. For misorientated graphene, in the regions of superposed lattices in the Moir\'e pattern, the repulsion between the layers induces lattice distortion in the form of a bump or, in rigid systems, local interlayer decoupling. Comment: 4 pages, 3 figures
Topics: Condensed Matter - Mesoscale and Nanoscale Physics
Year: 2011
DOI identifier: 10.1103/PhysRevB.84.033403
OAI identifier: oai:arXiv.org:1103.5751
|
2018-12-14 18:28:26
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5782687067985535, "perplexity": 2349.1110977302264}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826145.69/warc/CC-MAIN-20181214162826-20181214184826-00130.warc.gz"}
|
https://www.tutorialspoint.com/find-the-perimeter-of-each-of-the-following-figures
|
# Find the perimeter of each of the following figures:
To do:
We have to find the perimeter of the given figures.
Solution:
We know that,
Perimeter is defined as the length of the outline of a shape.
To find the perimeter of a figure, just add the lengths of all the sides of the figure.
Therefore,
(a) In the given figure,
Sum of the sides $=1+2+4+5$
$=12\ cm$
(b) In the given figure,
Sum of the sides $= 23+35+35+40$
$=133\ cm$
(c) In the given figure,
Sum of the sides $= 15+15+15+15$
$=60\ cm$
(d) In the given figure,
Sum of the sides $= 4+4+4+4+4$
$= 20\ cm$
(e) In the given figure,
Sum of the sides $= 1+4+0.5+2.5+2.5+0.5+4$
$=15\ cm$
(f) In the given figure,
Sum of the sides $= 4+1+3+2+3+4+1+3+2+3+4+1+3+2+3+4+1+3+2+3$
$= 52\ cm$
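The same additions can be checked programmatically (an added illustration; the side lengths are those listed above, with the figures labelled (a)–(f)):

```python
# Perimeter = sum of all side lengths of each figure.
figures = {
    "(a)": [1, 2, 4, 5],
    "(b)": [23, 35, 35, 40],
    "(c)": [15, 15, 15, 15],
    "(d)": [4, 4, 4, 4, 4],
    "(e)": [1, 4, 0.5, 2.5, 2.5, 0.5, 4],
    "(f)": [4, 1, 3, 2, 3] * 4,
}
for name, sides in figures.items():
    print(name, sum(sides), "cm")
```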
|
2022-12-10 09:55:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5544548034667969, "perplexity": 6033.469736373895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710421.14/warc/CC-MAIN-20221210074242-20221210104242-00765.warc.gz"}
|
https://direct.mit.edu/jocn/article/34/10/1719/97383/Abstract-Neural-Representations-of-Category?searchresult=1
|
For decades, researchers have debated whether mental representations are symbolic or grounded in sensory inputs and motor programs. Certainly, aspects of mental representations are grounded. However, does the brain also contain abstract concept representations that mediate between perception and action in a flexible manner not tied to the details of sensory inputs and motor programs? Such conceptual pointers would be useful when concepts remain constant despite changes in appearance and associated actions. We evaluated whether human participants acquire such representations using fMRI. Participants completed a probabilistic concept learning task in which sensory, motor, and category variables were not perfectly coupled or entirely independent, making it possible to observe evidence for abstract representations or purely grounded representations. To assess how the learned concept structure is represented in the brain, we examined brain regions implicated in flexible cognition (e.g., pFC and parietal cortex) that are most likely to encode an abstract representation removed from sensory–motor details. We also examined sensory–motor regions that might encode grounded sensory–motor-based representations tuned for categorization. Using a cognitive model to estimate participants' category rule and multivariate pattern analysis of fMRI data, we found the left pFC and human middle temporal visual area (MT)/V5 coded for category in the absence of information coding for stimulus or response. Because category was based on the stimulus, finding an abstract representation of category was not inevitable. Our results suggest that certain brain areas support categorization behavior by constructing concept representations in a format akin to a symbol that differs from stimulus–motor codes.
Concepts organize our experiences into representations that can be applied across domains to support higher-order cognition. How does the brain organize sensory input into an appropriate representation for categorization? Are concepts simply a combination of sensory signals and motor plans, or does the brain construct a separate concept representation, abstracted away from sensory–motor codes? Despite much research on how people organize sensory information into a format suited for categorization (e.g., Love, Medin, & Gureckis, 2004; Kruschke, 1992; Nosofsky, 1986) and its neural basis (e.g., Zeithamova et al., 2019; Bowman & Zeithamova, 2018; Mack, Love, & Preston, 2016; Folstein, Palmeri, & Gauthier, 2013; Mack, Preston, & Love, 2013; Davis, Love, & Preston, 2012a, 2012b; Cromer, Roy, & Miller, 2010; Seger & Miller, 2010; Freedman & Assad, 2006; Sigala & Logothetis, 2002), few have explicitly examined whether category representations exist independently of sensory–motor information (Figure 1A).
Figure 1. How the brain transforms stimulus into a concept representation for categorization. Stimuli are 12 motion dot patterns (100% coherent), from 0° to 330° in 30° steps. Blue and green colors denote the two categories. (A) An observer must transform the percept into intermediate representations for accurate categorization behavior. (B–D) Possible representations the brain might use for categorization. (B) Each stimulus is associated with a motor response, where the category representation is grounded in sensory–motor codes. (C) Stimulus-modulated representations as category representation. The stimulus representation is modulated by the category structure, which is turned into a motor representation for the response. (D) The category-modulated stimulus representation is associated with an abstract representation of each category with a different representational format to the sensory motor codes (blue and green circles), which is then turned into a response.
Some concepts seem to be “grounded” in sensory or motor experiences (Barsalou, 2008). For instance, the idea of “pain” is based on experiences of pain, and the metaphorical use of the word is presumably linked to those bodily experiences. Certain aspects of concepts are more abstracted from first-hand experience and act more like symbols or pointers, which can support flexible cognition. For example, we know water can be used to clean the dishes, but when we are thirsty, we drink it. The same object can also appear entirely different in some contexts, such as a camouflaging stick insect appearing as a leaf or when a caterpillar changes into a butterfly. In such cases where sensory information is unreliable or exhibits changes, an amodal symbol working as an abstract pointer may aid reasoning and understanding. Cognitive science and artificial intelligence researchers discuss the use of amodal symbols—abstracted away from specific input patterns—for solving complex tasks, arguing they provide a foundation to support higher cognition (Marcus, 2001; Pylyshyn, 1984; Fodor, 1975; also see Markman & Dietrich, 2000). In contrast, theories of grounded cognition suggest that all “abstract” representations are grounded in, and therefore fully explained by, sensory–motor representations (Barsalou, 1999; Harnad, 1990). Indeed, sensory–motor variables and categories are often correlated in the real world, and the brain may never need to represent “category” in a way that can be disentangled from perception and action.
Here, we consider several competing accounts. Closely related to “grounded cognition,” some researchers emphasize a central role of action for cognition (Wolpert & Witkowski, 2014; Wolpert & Ghahramani, 2000; Rizzolatti, Riggio, Dascola, & Umiltá, 1987), such that category representations could simply consist of the appropriate stimulus–motor representations and associations (Figure 1B). An alternative view holds that category-modulated stimulus representations are key for categorization, where stimulus information is transformed into a representation suited for categorization (as in cognitive models: e.g., Love et al., 2004; Kruschke, 1992). In these models, an attention mechanism gives more weight to relevant features so that within-category stimuli become closer and across-category stimuli are pushed apart in representational space (Figure 1C). Finally, the brain may recruit an additional amodal, symbol-like concept representation (Marcus, 2001; Pylyshyn, 1984; Newell, 1980; Fodor, 1975) to explicitly code for category, separate from sensory–motor representations. For instance, sensory information is processed (e.g., modulated by category structure) and then transformed into an abstract category representation before turning into a response (Figure 1D). This representation resembles an amodal symbol in that it has its own representational format (e.g., orthogonal to sensory–motor codes) and acts as a pointer between the relevant sensory signals (input) and motor responses (output). The advantage of such a representation is that it can play a role in solving the task and can persist across superficial changes in appearance and changes in motor commands. People's ability to reason and generalize in an abstract fashion suggests the brain is a type of symbol processor (Marcus, 2001).
Here, we aimed to test whether the brain constructs an abstract concept representation separate from stimulus and motor signals, if the “category” code consists of category-modulated stimulus representations and motor codes, or if it simply consists of stimulus–motor mappings. We designed a probabilistic concept learning task where the stimulus, category, and motor variables were not perfectly coupled nor entirely independent, to allow participants to naturally form the mental representations required to solve the task, and used multivariate pattern analysis (MVPA) on fMRI data to examine how these variables were encoded across the brain. For evidence supporting the amodal account (Figure 1D), some brain regions should encode category information but not the stimulus or response. For the category-modulated sensory account (Figure 1C), regions should encode both stimulus and category information, with no regions that encode category without stimulus information. Finally, for the sensory–motor account (Figure 1B), regions should code for category, stimulus, and motor response (separately or concurrently), with no regions encoding category without sensory or motor information.
We recruited participants to an initial behavioral session where they first learned the task and invited a subset of participants who performed relatively well to partake in an fMRI study. To assess how the learned concept structure is represented in the brain, we focused on brain regions implicated in flexible cognition, including pFC and parietal cortex, which are strong candidates for representing the abstract concept structure without being tied to sensory–motor variables, and sensory–motor regions that are involved in stimulus processing and may encode grounded representations such as category-modulated stimulus representations as the basis of concept knowledge. We focused on these regions to test for category representations after learning rather than testing regions that might be involved in learning (e.g., hippocampus, medial pFC), because participants spent a significant amount of time learning the category structure in a prior behavioral session (see Methods).
### Participant Recruitment and Behavioral Session
We recruited participants to partake in a behavioral session to assess their ability to learning the probabilistic concept learning task. One hundred thirty-one participants completed the behavioral session, and we invited a subset of higher performing participants who were MRI compatible to participate in the fMRI study. We set the threshold for being invited to no lower than 60% accuracy over two blocks of the task (50% chance). Only two participants in the behavioral session performed below 60% accuracy.
### Participants (fMRI Study)
Thirty-nine participants took part in the fMRI study (most returned ∼2–4 weeks after the behavioral session). Six participants were excluded because of lower-than-chance performance, misunderstanding the task, or falling asleep during the experiment. The remaining 33 participants (23 women) were aged 19–34 years (mean = 24.04 years, SEM = 0.61 years). The study was approved by the University College London Research Ethics Committee (reference: 1825/003).
### Stimuli and Apparatus
Stimuli consisted of coherently moving dots produced in PsychoPy (Peirce et al., 2019), images of faces and buildings (main task), and images of flowers and cars (practice). In each dot-motion stimulus, there were 1000 dots, and dots were 2 pixels in size and moved at a velocity of ∼0.8° per second. The dot-motion stimuli and images were 12° in diameter (or on longest axis). The fixation point was a black circle with 0.2° diameter. A gray circle (1° diameter) was placed in front of the dot stimulus but behind the fixation point to discourage smooth pursuit. The natural images were provided by members of Cognitive Brain Mapping Lab at RIKEN BSI. The task was programmed and run in PsychoPy in Python 2.7. The task was presented on an LCD projector (1024 × 768 resolution), which was viewed through a tilted mirror in the fMRI scanner. We monitored fixation with an eye tracker (Eyelink 1000 Plus, SR Research) and reminded participants to maintain fixation between runs as necessary.
### Behavioral Task

To examine how the brain constructs an appropriate mental representation for categorization, we designed a probabilistic concept learning task to be first performed in a behavioral session and then the same probabilistic categorization task in the fMRI session. Specifically, we set out to test whether any brain regions coded for an abstract category signal separate from stimulus and motor signals, if the category signal mainly consisted of category-modulated sensory signals, or if the category signal was simply a combination or coexistence of sensory–motor signals. To this end, we designed a probabilistic categorization task where the task variables (category, stimulus, and motor response) were not perfectly coupled or entirely independent.
On each trial, participants were presented with a set of moving dots moving coherently in one direction and were required to judge whether it belonged to one category (“Face”) or another (“Building”) with a corresponding left or right button press. The motion stimulus was presented for 1 sec, followed by an ISI ranging from 1.8 to 7.4 sec (jittered), then the category feedback (Face or Building stimulus) for 1 sec. The intertrial interval was 1.8 sec. Naturalistic images were used to encourage task engagement and to produce a strong stimulus signal.
The moving-dot stimuli spanned 12 directions from 0° to 330° in 30° steps, with half the motion directions assigned to one of two categories determined by a category bound. For half of the participants, the category bound was placed at 15°, so that directions from 30° to 180° were in one category, and directions from 210° to 330° and 0° were in the other category. For the other half of the participants, the objective category bound was placed at 105°, so that directions from 120° to 270° were in one category, and directions from 0° to 90° and 300° to 330° were in the other category.
The corrective category feedback consisted of a face or building stimulus, which informed the participant which category the motion stimulus was most likely part of. The feedback was probabilistic such that the closer to the bound a stimulus was, the more probabilistic the feedback was (see Figure 2A). In the practice sessions, participants were introduced to a deterministic version of the task before the probabilistic task (see Experimental Procedure: Behavioral Session section below).
Figure 2. Behavioral task. (A) On each trial, a dot-motion stimulus was presented and participants judged whether it was in Category A or B. At the end of each trial, probabilistic category feedback (a face or building stimulus) informed the participant which category the motion stimulus most likely belonged to. (B) Probabilistic category structure. For motion stimuli to the left of the category bound (dotted line), the feedback will most likely be a face (Category A), and stimuli to the right will most likely be a building (Category B). For example, for the motion direction where the blue section is 4/7, the participant will see a face four out of seven times and a building three out of seven times (corresponding to the 3/7 green section). The closer the motion direction to the category bound, the more probabilistic the feedback.
### Behavioral Task Rationale
Probabilistic category feedback was used to decouple the stimulus from the category to a certain extent. Most previous concept learning studies used deterministic feedback, such that each stimulus was always associated with the same (correct) category feedback. In terms of conditional probability, the probability of a stimulus belonging to a given category (Pr(category A | stimulus x)) with deterministic category feedback is 1. With probabilistic category feedback, the conditional probability is less than 1, and as the stimulus–feedback association becomes weaker (more probabilistic), it approaches 0.5 (not predictive). In this way, the stimulus and category are weakly coupled and may lead participants to form a category representation abstracted from the more concrete experimental variables (such as stimulus and motor response). On the other hand, participants could still perform the task at greater accuracy than chance if they relied heavily on the stimulus, grounding the category in the stimulus representations.
Furthermore, the category–response association was flipped after each block (e.g., left button press for Category A in the first block, right button press for Category A in the second block), to discourage participants simply associating each category with a motor response across the experiment. Of course, it was still possible for participants to associate the category with a motor plan and change this association across blocks, leading to a category representation based on motor planning.
In summary, the task required participants to learn the category that each motion-dot stimulus belonged to by its probabilistic association to an unrelated stimulus (face or building as category feedback), whereby the probabilistic feedback could lead participants to form an abstract or grounded category representation. In addition, the category–motor association was flipped across blocks. Together, the category, stimulus, and motor variables were weakly coupled, allowing us to assess whether there are brain regions that code for these variables together or independently of one another.
### Experimental Procedure: Behavioral Session (Practice and Main Experiment)
To ensure participants understood the main experimental task, they were given four practice task runs with each version gradually increasing in task complexity. In the first three runs, the task was the same as described above except that the images used for feedback were pictures of flowers and cars. In the fourth run, it was a practice run of the main task described above. Before each run, the experimenter explained the task to the participant.
Participants were instructed to learn which motion directions led to the appearance of Flower images and which led to the Car images. Specifically, they were told that, when the moving dot stimulus appears, they should press the left (or right) button if they think a Flower will appear or the right (or left) button if they think a Car will appear. In the first run, the category boundary was at 90° (up–down rule), and motion directions were presented in sequential order around the circle. The category (“Flower” or “Car”) feedback was deterministic such that each dot-motion stimulus was always followed by the same category stimulus feedback. For feedback, participants were presented with the image in addition to a color change in the fixation point (correct: green, incorrect: red, too slow: yellow). In the second run, the task was the same except the motion directions were presented in a random order. In the third run, participants were told that the feedback is probabilistic, meaning that the feedback resembles the weather report: It is usually correct, but sometimes it is not. For example, of the five times you see that motion direction, you will be shown a flower stimulus as feedback four times, but you will be shown a car once. So the feedback is helpful on average, but sometimes it can be misleading. In the fourth run, participants were introduced to a new task to be used in the main experiment, with a new probabilistic category boundary (15° or 105°) and with face and building images as feedback.
Once participants completed the practice runs and were comfortable with the task, they proceeded to the main experimental session where they learnt the category rule from trial and error. In each block, participants completed seven trials per direction condition, giving 84 trials per block. The experimenter informed participants that the category–response association flipped after each block. Participants completed three or four experimental runs.
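For illustration, a block's trial list under this design (12 directions × 7 repetitions = 84 trials, with the category–response mapping flipped on alternate blocks) could be assembled as in the following sketch; the variable names and the even/odd flipping rule are assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
directions = np.arange(0, 360, 30)  # 12 motion-direction conditions, 30 deg apart

def make_block(block_index, reps_per_direction=7):
    """Return a shuffled trial list for one block plus that block's response mapping."""
    trials = np.repeat(directions, reps_per_direction)  # 12 x 7 = 84 trials
    rng.shuffle(trials)
    # Flip the category-to-button mapping on alternate blocks (assumed convention).
    response_map = ({"A": "left", "B": "right"} if block_index % 2 == 0
                    else {"A": "right", "B": "left"})
    return trials, response_map

trials, response_map = make_block(block_index=0)
assert trials.size == 84
```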
### Experimental Procedure: fMRI Session
A subset of participants was invited to attend an fMRI session. Participants were given one practice block run as a reminder of the task and then proceeded to complete the main experiment in the scanner. Participants learned through trial and error. They were not informed about the location of the category boundary in either the behavioral or fMRI session, which partially explains why their performance was not at ceiling. A cognitive model fit to individuals' behavior indicates that participants' category boundaries differed from the optimal boundary (see below).
Participants completed three or four blocks of the probabilistic category learning task (four participants performed an extra block because of low performance on early block runs), then a motion localizer block, and a face–scene localizer block (block order for localizer runs was counterbalanced across participants). After the scan session, participants completed a postscan questionnaire to assess their understanding of the task and to report their subjective category rule.
Each task block took approximately 12 min, and the whole scan session (main task, localizers, and structural scans) took slightly over an hour. Including preparation, practice, and postexperiment debriefing, the whole session took approximately 2 hr.
To localize the face-selective fusiform face area (FFA; Allison, Puce, Spencer, & McCarthy, 1999; Kanwisher, McDermott, & Chun, 1997) and place-sensitive parahippocampal place area (PPA; Epstein & Kanwisher, 1998) in individuals, participants completed an event-related localizer scan where they were presented with faces and buildings and made a response when they saw an image repeat (1-back task), which was followed by feedback (the fixation point changed to green for correct and red for incorrect). On each trial, an image of a face or building was presented for 1 sec with ISIs between stimulus and feedback (green/red fixation color change) ranging from 1.8 to 7.4 sec (jittered), with an intertrial interval of 1.8 sec. A total of 42 faces and 42 buildings were presented in a random order. Participants also completed a motion localizer run that was not used here.
### MRI Data Acquisition
Functional and structural MRI data were acquired on a 3-T TrioTim scanner (Siemens) using a 32-channel head coil at the Wellcome Trust Centre for Neuroimaging at University College London. An EPI-BOLD contrast image with 40 slices was acquired with 3-mm³ voxel size, repetition time (TR) = 2800 msec, and echo time (TE) = 30 msec, and the flip angle was set to 90°. A whole-brain field map with 3-mm³ voxel size was obtained with a first TE = 10 msec, second TE = 12.46 msec, and TR = 1020 msec, and the flip angle was set to 90°. A T1-weighted (T1w) structural image was acquired with 1-mm³ voxel size, TR = 2.2 msec, and TE = 2.2 msec, and the flip angle was set to 13°.
### Behavioral Model and Data Analysis
The probabilistic nature of the feedback meant that participants did not perform exactly according to the objective category rule determined by the experimenter, and inspection of behavioral performance curves suggested that most participants formed a category rule slightly different to the objective rule. To get a handle on the category rule participants formed, we applied a behavioral model to estimate each participant's subjective category boundary from their responses.
The model contains a decision bound defined by two points, b1 and b2, on a circle (0°–359°). Category A proceeds clockwise from point b1, whereas Category B proceeds clockwise from b2. Therefore, the positions of b1 and b2 define the deterministic category boundary between Categories A and B. To illustrate, if b1 = 15° and b2 = 175°, stimulus directions from 15° to 175° would be in one category, and stimulus directions from 175° to 359° and from 0° to 15° would be in the other category. Note that the number of stimulus directions is not constrained to be equal across categories, as illustrated in this example (five and seven directions in each category). Despite this, there were six stimulus directions in each category for most participants.
The only sources of noise in this model are the positions of b1 and b2, which are normally distributed as 𝒩(0, σ). As the σ parameter (the standard deviation of the positions of b1 and b2) increases, the position of the boundary for a given trial becomes noisier, and therefore it becomes more likely that an item may be classified contrary to the position of the boundary. In practice, no matter the value of σ, it is always more likely that an item will be classified according to the positions of b1 and b2. The standard deviation parameter provides an estimate of how uncertain participants were of the category boundary. If a participant responded perfectly consistently according to a set of bounds (deterministically), σ would be low, whereas if the participant was more uncertain of the bound locations and responded more probabilistically, σ would be higher.
The probability that a stimulus x is an A or B is calculated according to whichever boundary, b1 or b2, is closer. This is a numerical simplification, as it is possible for the further boundary to come into play and even for boundary noise to lead b1 or b2 to traverse the entire circle. However, for the values of σ we consider, both of these possibilities are highly unlikely. The probability that stimulus x is labeled according to the mean positions of b1 and b2 is
$1 - p\left(z > \frac{|x - b_x|}{\sigma}\right)$
where z is distributed according to the standard normal distribution and bx is b1 or b2, whichever is closer to x. Intuitively, the further the item is from the boundary position, the more likely it is to be classified according to the boundary position as noise (i.e., σ) is unlikely to lead to sufficient boundary movement in that trial. The probability an item is labeled in the alternative category (i.e., “incorrect” responses against the bound defined by the mean positions of b1 and b2) is simply 1 minus the above quantity.
In other words, the probability a stimulus is in a certain category is a Gaussian function of the distance to the closest bound, where the further away the stimulus is from the bound, the more likely it is to be a part of that category (see Figure 3A for an illustration of the model).
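As a minimal sketch of this probability computation (our reading of the equation above, not the authors' released code), the consistent-response probability can be written with SciPy's standard normal survival function; circ_dist and p_consistent are hypothetical names.

```python
import numpy as np
from scipy.stats import norm

def circ_dist(a, b):
    """Absolute angular distance between directions a and b in degrees, in [0, 180]."""
    d = np.abs(np.asarray(a) - np.asarray(b)) % 360.0
    return np.minimum(d, 360.0 - d)

def p_consistent(x, b1, b2, sigma):
    """Probability that stimulus direction x is labeled according to the mean
    positions of b1 and b2, using whichever bound is closer (Gaussian boundary noise)."""
    d = np.minimum(circ_dist(x, b1), circ_dist(x, b2))  # distance to the closest bound
    return 1.0 - norm.sf(d / sigma)                     # 1 - Pr(z > |x - b_x| / sigma)
```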
Figure 3.
Task model, behavioral results, and model-based fMRI analysis procedure. (A) The model takes individual participant behavior as input and estimates their subjective category bound (b1 and b2) and standard deviation (σ). (B) Categorization behavior. Proportion Category A responses plotted as a function of motion directions ordered by individual participants' estimated category boundary. Blue curve represents the mean, and error bars represent SEM. Translucent lines represent individual participants. (C) Model-based fMRI analysis procedure illustration. Voxel activity patterns are extracted from each ROI for each motion direction condition (top), and a classifier was trained (SVM) to decode the category based on the model-based estimation of the category boundary for each participant (bottom). The data in the scatterplot were generated to illustrate example patterns of voxel activity evoked by the motion direction stimuli (two voxels shown here) belonging to each category (blue for Category A, green for Category B). The line is a possible support vector plane that reliably discriminates voxel patterns elicited by stimuli in Category A from stimuli in Category B. To test for an abstract category signal, we subtracted the classification accuracy for the category SVM by an SVM trained to discriminate orthogonal (90° rotated) directions (see Methods for details).
Maximum likelihood estimation was used to obtain estimates for each participant (using the optimize function in SciPy). Model estimates of the subjective category bound fit participant behavior as expected. Specifically, there was high accuracy (concordance) with respect to the estimated subjective category bound (mean proportion correct = 0.82, SEM = 0.01; see Figure 3B).
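Building on the p_consistent sketch above, the maximum likelihood fit could be set up as follows. The Methods only state that SciPy's optimize module was used, so the arc convention for Category A, the log-σ parameterization, and the Nelder–Mead starting values are all assumptions for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def in_category_a(x, b1, b2):
    """True where x falls in the arc from b1 to b2 (taken here as Category A);
    the arc convention is an assumption for this sketch."""
    return ((np.asarray(x) - b1) % 360.0) < ((b2 - b1) % 360.0)

def neg_log_likelihood(params, stim_dirs, resp_a):
    """params = (b1, b2, log_sigma); resp_a[i] = 1 if the participant chose Category A."""
    b1, b2, log_sigma = params
    sigma = np.exp(log_sigma)                             # keeps sigma positive
    p_cons = p_consistent(stim_dirs, b1, b2, sigma)       # from the sketch above
    p_a = np.where(in_category_a(stim_dirs, b1, b2), p_cons, 1.0 - p_cons)
    p_a = np.clip(p_a, 1e-9, 1.0 - 1e-9)                  # numerical safety
    return -np.sum(resp_a * np.log(p_a) + (1 - resp_a) * np.log(1.0 - p_a))

# Hypothetical usage for one participant's trials (stim_dirs, resp_a as 1-D arrays):
# fit = minimize(neg_log_likelihood, x0=np.array([15.0, 195.0, np.log(20.0)]),
#                args=(stim_dirs, resp_a), method="Nelder-Mead")
# b1_hat, b2_hat, sigma_hat = fit.x[0], fit.x[1], np.exp(fit.x[2])
```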
Modeling and analyses were performed in Python 3.7.
### fMRI Preprocessing
Results included in this article come from preprocessing performed using fMRIprep 1.2.3 (Esteban et al., 2019; RRID:SCR_016216), which is based on Nipype 1.1.6-dev (Gorgolewski et al., 2011; RRID:SCR_002502).
#### Anatomical Data Preprocessing
The T1w image was corrected for intensity nonuniformity using N4BiasFieldCorrection (Avants, Tustison, & Song, 2009; ANTs 2.2.0) and used as T1w reference throughout the workflow. The T1w reference was then skull-stripped using antsBrainExtraction.sh (ANTs 2.2.0), using OASIS as the target template. Spatial normalization to the ICBM 152 Nonlinear Asymmetrical template version 2009c (Fonov, Evans, McKinstry, Almli, & Collins, 2009; RRID:SCR_008796) was performed through nonlinear registration with antsRegistration (ANTs 2.2.0, RRID:SCR_004757; Avants, Epstein, Grossman, & Gee, 2008), using brain-extracted versions of both T1w volume and template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w using fast (FSL 5.0.9, RRID:SCR_002823; Zhang, Brady, & Smith, 2001).
#### Functional Data Preprocessing
Many internal operations of fMRIPrep use Nilearn 0.4.2 (Abraham et al., 2014; RRID:SCR_001362), mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep's documentation.
### ROIs
To study how the brain represented category, stimulus, and response variables in the probabilistic categorization task, we focused on a set of visual, parietal, and prefrontal brain ROIs hypothesized to be involved in coding these variables after learning.
We selected anatomical masks from Wang et al. (2014; scholar.princeton.edu/napl/resources) to examine areas involved in early visual processing, motion processing, and attention, including early visual cortex (EVC; V1, V2, and V3 merged), motion-sensitive human middle temporal visual area (MT)/V5 (Dubner & Zeki, 1971), and the intraparietal sulcus (IPS). We included EVC to assess stimulus-related representations including orientation and direction. The IPS is implicated in both attention (Kastner & Ungerleider, 2000; Corbetta, Miezin, Shulman, & Petersen, 1993; Mesulam, 1981) and category learning (Freedman & Assad, 2016; Seger & Miller, 2010). However, we did not have strong reasons to focus on specific parts of the IPS, so we merged IPS1 to IPS5 to make a large IPS ROI.
Because these masks are provided in T1 structural MRI space (1 mm³), when they were transformed into individual participant functional space (3 mm³), several masks did not cover GM accurately (too conservative, thereby excluding some GM voxels). Therefore, we applied a small amount of smoothing to each mask (with a Gaussian kernel of 0.25 mm, using fslmaths) for a more liberal inclusion of neighboring voxels, before transforming it to individual-participant space. In addition, several potential ROIs were too small to be mapped onto our functional scans: several participants had zero voxels in those masks after transformation to functional space, even with smoothing. This applied to the motion-sensitive area MST and the superior parietal lobule (SPL1), which were therefore excluded.
pFC is strongly implicated in representing abstract task variables (Duncan, 2001; Miller & Cohen, 2001) and task-relevant sensory signals (e.g., Jackson, Rich, Williams, & Woolgar, 2017; Erez & Duncan, 2015; Roy, Riesenhuber, Poggio, & Miller, 2010; Meyers, Freedman, Kreiman, Miller, & Poggio, 2008; Goldman-Rakic, 1995). We selected prefrontal regions implicated in cognitive control and task representations (Fedorenko, Duncan, & Kanwisher, 2013; Duncan, 2010; imaging.mrc-cbu.cam.ac.uk/imaging/MDsystem), including the posterior, middle (approximately Area 8), and anterior (approximately Area 9) portions of the middle frontal gyrus.
Primary motor cortex was selected to examine representations related to the motor response and to test for any stimulus or category signals. Primary motor cortex masks were taken from the Harvard-Oxford atlas.
We also localized and examined brain responses in the FFA and PPA, to assess whether face and place regions, involved in processing stimuli at the feedback phase, were involved in representing the learned category (see procedure below). For example, if participants learnt that a set of motion directions belonged to Category A, which was associated with face stimuli as feedback, the FFA might show information about the learnt category during the motion direction stimulus phase (i.e., not to the face but according to the learnt category bound). It is worth noting that we are interested in assessing the information coding the learnt category (Category A vs. B), not the probabilistically presented face versus building feedback stimulus.
Apart from the FFA and PPA (where bilateral ROIs were used; see below), we included both left and right ROIs. Masks were transformed from standard Montreal Neurological Institute space to each participant's native space using Advanced Normalization Tools (Avants et al., 2009).
### fMRI General Linear Model
We used the general linear model (GLM) in FMRI Expert Analysis Tool (Woolrich, Ripley, Brady, & Smith, 2001; FMRIB Software Library Version 6.00; fsl.fmrib.ox.ac.uk/fsl/) to obtain estimates of the task-evoked brain signals for each stimulus, which was used for subsequent MVPAs.
For the main GLM, we included one explanatory variable (EV) to model each motion stimulus trial (estimating trial-wise betas for subsequent MVPA) and an EV for each category feedback stimulus linked to each motion stimulus condition (12 EVs, not used in subsequent analyses; see trial-wise GLM examining the feedback response below). No spatial smoothing was applied. Stimulus EVs were 1 sec with ISIs between stimulus and feedback ranging from 1.8 to 7.4 sec (jittered), and the intertrial interval was 1.8 sec. Each block run was modeled separately for leave-one-run-out cross-validation for MVPA.
To examine motor-related brain responses, we performed an additional GLM using the same number of EVs except the EVs were time-locked to the response rather than the motion stimulus (stimulus time plus RT) and modeled as an event lasting 0.5 sec, with the assumption that the motor events were shorter than the stimulus (although this made little difference to the results). For trials without a response, the stimulus was modeled from stimulus onset as done in the main GLM and then excluded in subsequent motor-related analyses. Feedback stimuli were modeled with a single EV as above.
To localize the face-selective FFA and place-sensitive PPA, we performed an additional GLM in SPM12 (www.fil.ion.ucl.ac.uk/spm/software/spm12/). We applied spatial smoothing using a Gaussian kernel of FWHM 6 mm and included one EV for faces and one EV for building stimuli, as well as polynomials of degrees 0:6 to model drift in the data. Stimulus EVs were 1 sec with ISIs between stimulus and feedback (green/red fixation point color change) ranging from 1.8 to 7.4 sec (jittered), with an intertrial interval of 1.8 sec. We included three contrasts Faces > Buildings, Buildings > Faces, and overall Visual Activation (Faces and Buildings). To define individual participant ROIs, we used minimum statistic conjunctions with visual activations. To localize the FFA, the conjunction was (Face > Building) & Visual. For the PPA, the conjunction was (Building > Face) & Visual. The rationale behind this conjunction is that functional ROIs should be not only simply selective but also visually responsive (all voxels that were deactivated by visual stimulation were not included). The conjunction was thresholded liberally at p < .01 uncorrected. The peaks for each functional ROI were detected visually in the SPM results viewer, and we extracted the top 100 contiguous voxels around that peak.
There were four participants for whom we could not find clear peaks and clusters for the left FFA, five for the right FFA, seven for the left PPA, and six for the right PPA. Because we were unable to reliably localize these areas for all participants in both hemispheres, we used unilateral ROIs for participants with unilateral FFA/PPA ROIs and excluded participants for that ROI if they did not have either a left or right FFA or PPA ROI. The difficulty in localizing these areas for a subset of participants might have been because of our relatively short (two runs) event-related localizer design. In summary, when testing the FFA, we excluded two participants (no left or right FFA), and when testing the PPA, we excluded four participants (no left or right PPA).
We also performed a motion localizer, but likely because of the short localizer and the event-related design, it was not possible to reliably localize participant-specific motion-sensitive regions.
To examine information during the category feedback, we performed an additional GLM modeling the same events as the main GLM (locked to motion stimuli and feedback stimuli), except that one EV was used to model each feedback trial (estimating trial-wise betas for subsequent MVPA) and one EV for each motion stimulus condition (12 EVs). This additional GLM was used to estimate the trial-wise feedback mainly for practical reasons. If modeling all cue and feedback trials, it becomes a substantially larger model for FMRIB Software Library. By modeling the cue period trials in a separate GLM to the feedback trials, we were able to reduce the number of EVs per model (96 rather than 168).
### MVPA
To examine brain representations of category, stimulus, and motor response, we used MVPA across our selected ROIs. Specifically, we trained linear support vector machines (SVMs; cf. Kamitani & Tong, 2005) to assess which brain regions contained information about the category (“Face” or “Building”), stimulus (direction, orientation, and 12-way classifier), and motor response (left or right button press).
Decoding analyses were performed using linear support vector classifiers (C = 0.1) using Scikit-learn Python package (Pedregosa et al., 2011) with a leave-one-run-out cross-validation procedure.
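A minimal sketch of this leave-one-run-out decoding scheme with scikit-learn's LinearSVC (C = 0.1) and LeaveOneGroupOut over runs; the array names are placeholders and details such as max_iter are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut

def decode_accuracy(betas, labels, runs, C=0.1):
    """Mean leave-one-run-out classification accuracy for one ROI.

    betas  : (n_trials, n_voxels) trial-wise GLM estimates
    labels : (n_trials,) class labels (e.g., category, response)
    runs   : (n_trials,) run index per trial, used as the cross-validation group
    """
    accs = []
    for train_idx, test_idx in LeaveOneGroupOut().split(betas, labels, groups=runs):
        clf = LinearSVC(C=C, max_iter=10000)
        clf.fit(betas[train_idx], labels[train_idx])
        accs.append(clf.score(betas[test_idx], labels[test_idx]))
    return float(np.mean(accs))
```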
To test for abstract category coding, we first trained a classifier to discriminate between motion directions belonging to the two categories for each participant's subjective category bound. To ensure that this was a pure category signal unrelated to stimulus differences (e.g., simply decoding opposite motion directions), we trained a second classifier based on the participant's subjective category bound rotated 90°. For a strict test of an abstract category signal, we subtracted the classification accuracy of the second (rotated-bound) classifier from that of the first (category) classifier, that is, category minus control. The reasoning behind this is that, if a brain region contains information about the stimulus direction but no information about category, it is still possible to obtain significant classification accuracy for the category classifier (Category A vs. Category B). However, if the brain region primarily encoded stimulus information, there should be as much information for the orthogonal directions in the voxel activity patterns within an ROI (assuming sensory biases across voxels are equal). Therefore, if a brain region carries information about both the category and the sensory content, classification accuracy should be greater when decoding directions across the category boundary (with category and sensory information) than when decoding directions across the rotated boundary (sensory but no category information). The subtraction allows us to test whether the brain regions carry abstract category information over and above the sensory information. If there is only sensory information and no category information, the subtracted classification accuracies should be centered around zero. Negative values would suggest more information for directions spanning the boundary orthogonal to the category bound. This would most likely reflect unequal perceptual biases across voxels in that brain region, where that brain region contained more information about the motion directions across the rotated boundary compared to those across the category boundary (by chance, i.e., stimulus based, unrelated to the learned category). Different participants were randomly assigned to different objective category boundaries, which makes a systematic bias unlikely. Previous studies have tested whether a brain region contains information that can discriminate members of different categories. However, these studies could not rule out the contributions of the stimulus features to the category decoder. In our study, stimulus feature differences are matched when comparing stimuli across the category boundary versus stimuli across the orthogonal boundary. This subtraction method ensures that the sensory signal is not the main contributor to any category code found.
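Reusing the decode_accuracy sketch above, the subtraction logic for the abstract category test could look like the following; the arc-based labeling helper is a placeholder convention, and only the idea (subjective-bound split minus a 90°-rotated control split) is taken from the text.

```python
import numpy as np

def label_by_bound(stim_dirs, b1, b2):
    """Label each trial 0/1 by which side of the bound (b1, b2) its direction falls on."""
    return (((np.asarray(stim_dirs) - b1) % 360.0) < ((b2 - b1) % 360.0)).astype(int)

def abstract_category_score(betas, stim_dirs, runs, b1, b2):
    """Category decoding accuracy minus accuracy for the 90-degree-rotated control bound."""
    cat_labels = label_by_bound(stim_dirs, b1, b2)
    ctrl_labels = label_by_bound(stim_dirs, b1 + 90.0, b2 + 90.0)
    return (decode_accuracy(betas, cat_labels, runs)
            - decode_accuracy(betas, ctrl_labels, runs))
```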
To ensure this category signal was not related to motor preparation or the response, we also subtracted the former category classifier accuracy from a motor response classifier accuracy (discriminating between left and right button presses).
For stimulus direction coding, we trained classifiers to discriminate between all six pairs of opposite motion directions (0° vs. 180°, 30° vs. 210°, etc.) and averaged across the classification accuracies.
To examine motor response coding, we trained classifiers to discriminate between left and right button presses on the GLM where we locked the EVs to the motor responses (RTs).
As a control analysis, we tested whether a classifier trained on the objective category structure (i.e., defined by the experimenter) produced similar results to the subjective category analysis. The procedure was the same as the abstract category classifier above, except that the directions in each category were determined by the experimenter. In another set of control analyses, we assessed if there was any information about the stimulus. We tested for orientation coding by training a classifier on all 12 pairs of orthogonal orientations irrespective of the motion direction (0° vs. 90° and 0° vs. 270°, 30° vs. 120°, 30° vs. 300°) and averaged across the classification accuracies. Finally, for a more general measure of stimulus coding, we trained a 12-way classifier to assess stimulus coding for each motion direction.
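To make these pairings explicit, here is a short sketch of the bookkeeping for the stimulus classifiers (the six opposite-direction pairs from the direction analysis above, the 12 orthogonal-orientation pairs, and the 12-way labels); only the enumeration is taken from the text, the variable names are ours.

```python
import numpy as np

directions = np.arange(0, 360, 30)  # the 12 motion-direction conditions

# Six pairs of opposite motion directions (0 vs. 180, 30 vs. 210, ...).
direction_pairs = [(int(d), int((d + 180) % 360)) for d in directions[:6]]

# Twelve unique pairs of orthogonal orientations irrespective of motion direction
# (0 vs. 90, 0 vs. 270, 30 vs. 120, 30 vs. 300, ...).
orientation_pairs = sorted({tuple(sorted((int(d), int((d + 90) % 360)))) for d in directions})
assert len(direction_pairs) == 6 and len(orientation_pairs) == 12

# The 12-way classifier treats each motion direction as its own class.
twelve_way_labels = {int(d): i for i, d in enumerate(directions)}
```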
We used one-sample t tests (one-tailed) against chance-level performance of the classifier (using SciPy; Virtanen et al., 2020). Multiple comparisons across ROIs were corrected by controlling the expected false discovery rate (FDR) at 0.05 (Seabold & Perktold, 2010). For decoding category, we corrected across 12 ROIs (all apart from bilateral EVC and bilateral motor cortex), and for the direction and 12-way classifier, we corrected across 14 ROIs (excluding bilateral motor cortex). Bonferroni correction was used for tests with two ROIs (correcting for visual and motor hemispheres for orientation and motor decoding, respectively). For others, we report the uncorrected p values because none survived even without correction. MVPA and statistical analyses were performed in Python 3.7.
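A minimal sketch of this testing procedure: a one-tailed one-sample t test against chance per ROI, followed by Benjamini–Hochberg FDR correction across ROIs. The SciPy and statsmodels calls exist with these signatures; the one-tailed conversion and the data layout are assumptions.

```python
import numpy as np
from scipy.stats import ttest_1samp
from statsmodels.stats.multitest import multipletests

def roi_group_stats(acc_by_roi, chance=0.0, alpha=0.05):
    """acc_by_roi: dict of ROI name -> per-participant (chance-subtracted) accuracies.
    Returns, per ROI, the one-tailed p value, FDR-corrected p value, and rejection flag."""
    names = list(acc_by_roi)
    p_one_tailed = []
    for name in names:
        t, p_two = ttest_1samp(acc_by_roi[name], popmean=chance)
        # one-tailed test for above-chance decoding
        p_one_tailed.append(p_two / 2 if t > 0 else 1 - p_two / 2)
    reject, p_fdr, _, _ = multipletests(p_one_tailed, alpha=alpha, method="fdr_bh")
    return {name: (p1, pf, rej)
            for name, p1, pf, rej in zip(names, p_one_tailed, p_fdr, reject)}
```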
### Brain–Behavior Correlations
To assess whether the brain's representation of the abstract category signal contributed to categorization performance, we performed robust regression (Seabold & Perktold, 2010) to assess the relationship between categorization performance (concordance to the estimated subjective category structure) with classifier accuracy for the category for the ROIs with greater-than-chance classification accuracy for category.
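A minimal sketch of a brain–behavior robust regression of this kind using statsmodels' RLM; the norm (HuberT) and the variable names are assumptions, since the exact estimator settings are not stated.

```python
import numpy as np
import statsmodels.api as sm

def brain_behavior_regression(decoding_acc, behavioral_acc):
    """Robust regression of categorization accuracy on abstract category decoding.

    decoding_acc   : (n_participants,) category decoding scores for one ROI
    behavioral_acc : (n_participants,) proportion of responses consistent with the
                     model-estimated subjective category bound
    Returns the slope (beta) and its p value.
    """
    X = sm.add_constant(np.asarray(decoding_acc, dtype=float))  # intercept + slope
    fit = sm.RLM(np.asarray(behavioral_acc, dtype=float), X,
                 M=sm.robust.norms.HuberT()).fit()
    return fit.params[1], fit.pvalues[1]
```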
Matplotlib (Hunter, 2007) and Seaborn (Waskom, 2020) were used for plotting and creating figures in this article.
### Data and Code Availability Statement
The code for the behavioral model and data analysis is available at github.com/robmok/memsampCode. The behavioral and fMRI data will be made publicly available at openneuro.org/.
To assess how learned concept structure is represented in the brain, 33 participants learned a probabilistic concept structure in an initial behavioral session and returned on a separate day to perform the probabilistic categorization task (the same task in the behavioral session) while they underwent an fMRI scan.
We used a model to estimate individual participants' subjective category bound (see Methods and Figure 3A and B). Briefly, the model assumes that participants form a mental decision boundary in the (circular) stimulus space to separate the categories, and that there is some uncertainty about the placement of this bound. Formally, the model has three parameters: The first two determine bound placement (b1 and b2), and the third is a standard deviation parameter (σ) that models the (normally distributed) noise in this bound. σ provides an estimate of how certain (lower σ) participants are of their boundary placement. The model-estimated category bounds corresponded to participants' categorization behavior. To compute a measure of behavioral accuracy, we computed the proportion of categorization responses consistent with individual participants' estimated category bound. There was a strong correlation between the standard deviation parameter of the model σ and behavioral accuracy (r = −.90, p = 9.00e−13), suggesting the standard deviation parameter characterizes an aspect of the categorization behavior well.
To evaluate the three main accounts of how the brain organizes information for categorization, we performed MVPA across visual, parietal, and prefrontal ROIs hypothesized to be involved in representing the learned concept structure (Figure 3). Specifically, we trained linear SVMs to assess which brain regions contained information about category (A or B), stimulus (directions), and response (left or right). For a strict test of an abstract category signal unrelated to stimulus features, we trained a classifier to discriminate between motion directions in Category A versus directions in Category B and subtracted from its accuracy that of a control classifier trained to discriminate between directions in Category A rotated 90° versus directions in Category B rotated 90°. This ensured that the classifier was not simply picking up information discriminating opposite stimulus directions (see Methods for details).
Our findings most strongly align with the hypothesis that the brain constructs an amodal symbol for representing category, independent of sensory–motor variables. Specifically, we found an abstract category signal over and above stimulus information in the middle portion of the left middle frontal gyrus (mMFG: p = .0025, q(FDR) = 0.029) and left motion-sensitive area MT (p = .0086, q(FDR) = 0.048; Figures 4A and 5A and B). This is particularly striking because the category is based on the stimulus direction, and there was no hint of a direction signal in these regions (ps > .41; Figures 4B and 5A and B).
Figure 4.
fMRI MVPA results. (A) Abstract category coding in the left mMFG cortex and left MT. Abstract category coding over and above sensory coding was computed by the category classifier accuracy minus the classifier accuracy trained on orthogonal (90° rotated) directions. (B) No strong effects showing stimulus motion direction information. (C) The right motor cortex showed significant information coding response. (D) The right EVC showed significant information coding orientation. (E) The right MT contained sensory information as shown by the 12-way stimulus classifier, with the right EVC showing a similar trend. Normalized decoding accuracy measures are normalized by subtracting chance values (direction = 1/2, 12-way = 1/12, motor = 1/2, orientation = 1/2), apart from abstract category that subtracts from a control classifier (chance = 0). ***p = .0025; *p < .05; +p < .06.
Figure 5.
Abstract category coding and correlations with categorization behavior. (A–B) Univariate scatterplots showing significant abstract category coding in the left mMFG (A) and left MT (B), with no evidence of stimulus and motor coding. Gray dots are individual participants. (C–D) The strength of abstract category coding in MT (D) was correlated with categorization accuracy, that is, consistent responses with subjective category bound. There was a trend in the same direction in the left mMFG (C). In C and D, beta coefficients are from a robust regression analysis, and the shaded area represents 95% confidence intervals for the slope. ***p = .0025, *p < .05. Error bars represent SEM. Normalized decoding accuracy measures are normalized by subtracting chance values, apart from abstract category that subtracts from a control classifier (chance = 0).
Consistent with the idea that abstract category representations can aid performance, we found that the strength of category decoding was positively correlated with categorization accuracy (responses consistent with the model-estimated category bound) in the left MT (robust regression; β = 0.74, p < .05; Figure 5D) with a similar trend for the left mMFG (β = 0.73, p = .067; Figure 5C). We also confirmed that the category signal was stronger than the motor code in the left mMFG by subtracting the classifier trained to discriminate motion directions across categories from the motor classifier (p = .015).
As expected, we found information coding motor response in the motor cortex (right: p = .006; Bonferroni-corrected for hemisphere, p = .011; left: p = .095; Bonferroni-corrected p = .19; Figure 4C) but no information about category or direction (ps > .42).
Notably, abstract category coding was only present for the participant-specific subjective category structure (“objective” category bound classifiers across all ROIs: ps > .06). Furthermore, we found no evidence of category coding in the FFA or in the PPA (ps > .31).
Although we did not find category and stimulus representations intertwined, this was not because stimulus representations were not decodable in our data. We trained a classifier on orientation in the EVC and found activity coding orientation (Figure 4D, p < .05, Bonferroni-corrected for hemisphere). We also trained a 12-way classifier to assess if there was any information about the stimulus that would not be found simply by examining orientation or direction responses and found that the right MT encoded information on the stimulus (p = .005, q(FDR) = 0.03) and a trend for right EVC (p = .06, q(FDR) = 0.18; Figure 4E). Notably, there was no evidence for this in the left mMFG or left MT, which encoded abstract category (ps > .74).
We examined the neural representations underlying categorization and found that the brain constructs an abstract category signal with a different representational format to sensory and motor codes. Specifically, the left pFC and MT encoded category in the absence of stimulus information, despite category structure being based on those stimulus features. Furthermore, the strength of this representation was correlated with categorization performance based on participants' subjective category bound estimated by our model.
Although some representations may be grounded in bodily sensations, for tasks that require flexibility and representations to support abstract operations, an amodal symbol of a different representational format to that of sensory–motor representations may prove useful (Marcus, 2001; Pylyshyn, 1984; Newell, 1980; Fodor, 1975). Indeed, a category representation tied to a motor plan or stimulus feature would facilitate stimulus–motor representations effectively in specific circumstances but become unusable given slight changes in context. In this study, it was possible to solve the task in multiple ways, such as using a combination of the sensory–motor variables, using a category-modulated sensory representation, or additionally recruiting an amodal representation (Figure 1B–D). Despite this, we found the brain produces an additional abstract representation to support categorization (Figure 1D). Specifically, by applying a decoding approach to test for a category signal over and above a sensory code, we showed that the left mMFG and MT encoded a category signal abstracted from the sensory information. Furthermore, these areas did not carry any information about stimulus or response (evidenced by the motor, direction, orientation, and 12-way stimulus classifiers), and the category code was significantly stronger than the motor code. In contrast, we did not find any regions that encoded both the category and stimulus, as predicted by the account where category information is grounded in category-modulated stimulus representations (Figure 1C). We also did not find any regions that encoded both the category and the motor response, as predicted by the account where category is grounded in stimulus–motor associations (Figure 1B). Therefore, our findings suggest that the brain constructs an amodal symbol for representing category, independent of sensory–motor variables. It is worth noting that our results do not suggest that the brain does not use sensory information or that there are no grounded neural representations but, rather, that the brain constructs an additional category representation abstracted from the sensory–motor information for categorization.
In addition to the left pFC, we found that the left MT encoded a category signal in the absence of sensory information, whereas the right MT was only driven by sensory information. One possible explanation is that the category signal originated from the left pFC, which was sent back to modulate the left MT. This may have resulted in competition between the category and sensory signals, and the task-relevant category signal won out over the bottom–up sensory signal. Because there was no category signal in the right pFC, the right MT was not affected by the task and coded the bottom–up stimulus signal. Alternatively, the left MT may simply be more affected by top–down modulation from pFC. For instance, task-relevant attentional modulation in the left PPA (when attending to scenes vs. faces) seems to be stronger and more reliable than the right PPA (Chadick, Zanto, & Gazzaley, 2014; Gazzaley, Cooney, McEvoy, Knight, & D'Esposito, 2005). Unfortunately, most fMRI studies of perceptual or category learning using motion-dot stimuli did not examine the left and right MT hemispheres separately and did not report differential effects of category and stimulus across hemispheres. Future studies or meta-analytic studies could examine whether or not the left or right MT is more strongly modulated by task demands or if the lateralized modulation of sensory cortices depends on the relative lateralized recruitment of control regions such as pFC.
Previous studies have found strong stimulus coding and category-related modulation of stimulus representations during concept or perceptual learning (Ester, Sprague, & Serences, 2020; Braunlich & Love, 2019; Kuai, Levi, & Kourtzi, 2013; Mack et al., 2013; Zhang & Kourtzi, 2010; Freedman & Assad, 2006; Kourtzi, Betts, Sarkheil, & Welchman, 2005). For example, concept learning studies that used object stimuli have shown strong modulation of sensory signals in the lateral occipital cortex after learning (Braunlich & Love, 2019; Kuai et al., 2013; Mack et al., 2013).
One major difference between prior work and the current study is the probabilistic relationship between stimulus and feedback. In the world outside the laboratory, the relationship between stimulus and feedback is not always deterministic and people must make decisions and learn in the presence of this uncertainty. For example, after viewing dark clouds and the weather forecast, a person with picnic plans is faced with the decision of whether to continue. After deciding, they update their knowledge based on whether it rained, which is a probabilistic function of what was known at the time of decision.
Another key difference between our study and many studies of concept learning is that the response mapping was switched after each block so that we could observe possible differences between category representations and stimulus–response mappings. Some researchers suggest that changing the response mapping should disrupt procedural learning processes involved in concept learning (Maddox & Ashby, 2004), which is one reason why response mappings are often held constant within a participant.
These two differences from previous studies made it possible for us to observe a strong category signal that was not strictly modulated by stimulus representations or motor response. This category signal was of a different format than information related to stimulus or response. We need not have observed this finding. It would have been possible for the brain to solve this task using a stimulus-modulated category representation (i.e., stimulus and category represented in related formats) in which the response mapping varied across blocks. Instead, it appears that an intermediate category signal was used by participants. Although our design did not necessitate our main finding, it is possible that the relatively loose coupling between stimulus, category, and response encouraged forming a category representation of a different format than either stimulus or response. Many real-world categories may place related demands on learners. For example, relational categories, such as thief, are not closely tied to sensory representations (Jones & Love, 2007).
It may be argued that, because participants had to flip the category–motor mapping across blocks, we encouraged participants to use the stimulus or the category information and discouraged a motor-based strategy. However, it was still possible to ground category information in motor representations by associating sensory representations to the motor plan within a block and reprogramming the association across blocks. Because there were relatively few blocks, this would have been possible and a viable strategy. If the brain primarily relies on the sensory–motor association for the category representation and behavior, we would expect to find representation of motor plans without an abstract category code (i.e., not tied to motor plans), which is not in line with our results. Furthermore, we showed evidence for an amodal category signal using a decoding approach that tested for a category signal over and above stimulus coding and also showed that the category signal was stronger than the motor code. To test the hypothesis that our design might have discouraged participants from using motor-based representations, future studies could compare groups that had to switch the motor responses versus those that did not, and test whether the latter group would form a grounded, motor-based neural representation for categorization in the absence of abstract category representations.
Some of our analyses yielded negative classification accuracy values, including in the abstract category and the direction classifier accuracies (Figure 4A and B). As noted in the Methods, the purpose of subtracting the category classifier from the classifier trained on the orthogonal bound was to find category signal over and above any sensory information contained in the voxels. Negative values would simply reflect no category information, in addition to more information across directions across the boundary orthogonal to the category bound (i.e., sensory biases in the voxels unrelated to the task). For the direction classifier, there were some regions that showed negative classification accuracy values. In this analysis, the theoretically lowest possible value is zero, and values around zero would reflect the absence of any direction information. We were unable to find anything systematic that contributed to the negative values and suggest that these effects were most likely attributable to reasons unrelated to the task, such as some nonstationarity across blocks.
There are similarities between our probabilistic conceptual learning paradigm and tasks that require learning the transition probability structure of object-to-object sequences. In those tasks, an Object A is most likely followed by an Object B (e.g., with a probability of .75) but could also be followed by another Object C (probability of .25); that is, participants learn the statistical dependencies between objects, much as our participants learn the probabilistic dependencies between stimuli and categories. Interestingly, one study by Schapiro, Rogers, Cordova, Turk-Browne, and Botvinick (2013) showed that participants learnt and accurately represented object–object associations with a structured community structure in several brain regions including the left pFC. Specifically, pattern similarity analysis showed that the left pFC, anterior temporal lobe, and superior temporal gyrus encoded the statistical, relational structure across the objects. Other studies found that regions in the medial temporal lobe, including the hippocampus and entorhinal cortex, are involved in the learning and retrieval of associations and prediction of object–object transitions (e.g., Garvert, Dolan, & Behrens, 2017; Schapiro, Turk-Browne, Norman, & Botvinick, 2016). Because our current study focused on category representations after learning, it would be interesting for future studies to test whether medial temporal lobe structures are involved in learning probabilistic conceptual structures in a similar way.
There are several open questions to be explored in the future. Our study was not optimized to study the role of the hippocampus in category learning, as we examined category representations after learning. Future studies could examine the neural representations involved in probabilistic concept learning early in learning and compare them to representations during categorization after learning is complete, to explore how the neural representations change as concept information is consolidated into long-term memory.
Future work could also assess the causal involvement of these abstract category representations in mMFG and MT. One idea would be to use TMS to disrupt the left mMFG and left MT to assess whether these areas act causally to support categorization behavior. It would be interesting to test whether mMFG or MT plays a more important role, by observing the fMRI signal after disruption. It could be that the mMFG is the origin of the category signal, but it is its influence on MT that leads to effective categorization behavior (e.g., TMS to mMFG leads to disruption of the MT category representation but not vice versa, where stimulation at both sites disrupts behavior).
What is the use of an abstract, symbol-like concept representation? In real-world scenarios, there are often no explicit rules and reliable feedback is rare. Building an abstract representation that can be mapped onto different contexts can be useful in real-world tasks, where the meaning of a situation can remain constant while the contextually appropriate stimulus or response changes. As we find here, the brain constructs an amodal, abstract representation with a different representational format separate from sensory–motor codes, well suited for flexible cognition in a complex world.
We thank Johan Carlin for his help on experimental design and data collection and Amna Ali for her help on data collection. We thank Kurt Braunlich for his advice on analysis tools. We thank the Love Lab for the helpful discussions on the project. We are grateful to the members of Cognitive Brain Mapping Lab at RIKEN BSI for sharing natural images used in this study.
Reprint requests should be sent to Robert M. Mok or Bradley C. Love, Department of Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK, or via e-mails: [email protected]; [email protected].
Scripts will be available on GitHub, and data will be available on openneuro.
Royal Society (http://dx.doi.org/10.13039/501100000288), grant number: 18302. Wellcome Trust (http://dx.doi.org/10.13039/100004440), grant number: WT106931MA. National Institutes of Health (http://dx.doi.org/10.13039/100000002), grant number: 1P01HD080679.
Abraham, A., Pedregosa, F., Eickenberg, M., Gervais, P., Mueller, A., Kossaifi, J., et al. (2014). Machine learning for neuroimaging with scikit-learn. Frontiers in Neuroinformatics, 8, 14.
Allison, T., Puce, A., Spencer, D. D., & McCarthy, G. (1999). Electrophysiological studies of human face perception. I: Potentials generated in occipitotemporal cortex by face and non-face stimuli. Cerebral Cortex, 9, 415–430.
Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2008). Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis, 12, 26–41.
Avants, B. B., Tustison, N., & Song, G. (2009). Advanced normalization tools (ANTS). Insight Journal, 2, 1–35.
Barsalou, L. W. (1999). Perceptual symbol systems. Behavioral and Brain Sciences, 22, 577–609.
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
Behzadi, Y., Restom, K., Liau, J., & Liu, T. T. (2007). A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. Neuroimage, 37, 90–101.
Bowman, C. R., & Zeithamova, D. (2018). Abstract memory representations in the ventromedial prefrontal cortex and hippocampus support concept generalization. Journal of Neuroscience, 38, 2605–2614.
Braunlich, K., & Love, B. C. (2019). Occipitotemporal representations reflect individual differences in conceptual knowledge. Journal of Experimental Psychology: General, 148, 1192–1203.
Chadick, J. Z., Zanto, T. P., & Gazzaley, A. (2014). Structural and functional differences in medial prefrontal cortex underlies distractibility and suppression deficits in aging. Nature Communications, 5, 4223.
Corbetta, M., Miezin, F. M., Shulman, G. L., & Petersen, S. E. (1993). A PET study of visuospatial attention. Journal of Neuroscience, 13, 1202–1226.
Cox, R. W., & Hyde, J. S. (1997). Software tools for analysis and visualization of fMRI data. NMR in Biomedicine, 10, 171–178.
Cromer, J. A., Roy, J. E., & Miller, E. K. (2010). Representation of multiple, independent categories in the primate prefrontal cortex. Neuron, 66, 796–807.
Davis, T., Love, B. C., & Preston, A. R. (2012a). Learning the exception to the rule: Model-based fMRI reveals specialized representations for surprising category members. Cerebral Cortex, 22, 260–273.
Davis, T., Love, B. C., & Preston, A. R. (2012b). Striatal and hippocampal entropy and recognition signals in category learning: Simultaneous processes revealed by model-based fMRI. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38, 821–839.
Dubner, R., & Zeki, S. M. (1971). Response properties and receptive fields of cells in an anatomically defined region of the superior temporal sulcus in the monkey. Brain Research, 35, 528–532.
Duncan, J. (2001). An adaptive coding model of neural function in prefrontal cortex. Nature Reviews Neuroscience, 2, 820–829.
Duncan, J. (2010). The multiple-demand (MD) system of the primate brain: Mental programs for intelligent behaviour. Trends in Cognitive Sciences, 14, 172–179.
Epstein, R., & Kanwisher, N. (1998). A cortical representation of the local visual environment. Nature, 392, 598–601.
Erez, Y., & Duncan, J. (2015). Discrimination of visual categories based on behavioral relevance in widespread regions of frontoparietal cortex. Journal of Neuroscience, 35, 12383–12393.
Esteban, O., Markiewicz, C. J., Blair, R. W., Moodie, C. A., Isik, A. I., Erramuzpe, A., et al. (2019). fMRIPrep: A robust preprocessing pipeline for functional MRI. Nature Methods, 16, 111–116.
Ester, E. F., Sprague, T. C., & Serences, J. T. (2020). Categorical biases in human occipitoparietal cortex. Journal of Neuroscience, 40, 917–931.
Fedorenko, E., Duncan, J., & Kanwisher, N. (2013). Broad domain generality in focal regions of frontal and parietal cortex. Proceedings of the National Academy of Sciences, U.S.A., 110, 16616–16621.
Fodor, J. A. (1975). The language of thought. Cambridge, MA: Harvard University Press.
Folstein, J. R., Palmeri, T. J., & Gauthier, I. (2013). Category learning increases discriminability of relevant object dimensions in visual cortex. Cerebral Cortex, 23, 814–823.
Fonov, V. S., Evans, A. C., McKinstry, R. C., Almli, C. R., & Collins, D. L. (2009). Unbiased nonlinear average age-appropriate brain templates from birth to adulthood. Neuroimage, 47(Suppl. 1), S102.
Freedman, D. J., & Assad, J. A. (2006). Experience-dependent representation of visual categories in parietal cortex. Nature, 443, 85–88.
Freedman, D. J., & Assad, J. A. (2016). Neuronal mechanisms of visual categorization: An abstract view on decision making. Annual Review of Neuroscience, 39, 129–147.
Garvert, M. M., Dolan, R. J., & Behrens, T. E. J. (2017). A map of abstract relational knowledge in the human hippocampal–entorhinal cortex. eLife, 6, e17086.
Gazzaley, A., Cooney, J. W., McEvoy, K., Knight, R. T., & D'Esposito, M. (2005). Top–down enhancement and suppression of the magnitude and speed of neural activity. Journal of Cognitive Neuroscience, 17, 507–517.
Glasser, M. F., Sotiropoulos, S. N., Wilson, J. A., Coalson, T. S., Fischl, B., Andersson, J. L., et al. (2013). The minimal preprocessing pipelines for the Human Connectome Project. Neuroimage, 80, 105–124.
Goldman-Rakic, P. S. (1995). Cellular basis of working memory. Neuron, 14, 477–485.
Gorgolewski, K., Burns, C. D., Madison, C., Clark, D., Halchenko, Y. O., Waskom, M. L., et al. (2011). Nipype: A flexible, lightweight and extensible neuroimaging data processing framework in Python. Frontiers in Neuroinformatics, 5, 13.
Greve, D. N., & Fischl, B. (2009). Accurate and robust brain image alignment using boundary-based registration. Neuroimage, 48, 63–72.
Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42, 335–346.
Hunter, J. D. (2007). Matplotlib: A 2D graphics environment. Computing in Science and Engineering, 9, 90–95.
Jackson, J., Rich, A. N., Williams, M. A., & Woolgar, A. (2017). Feature-selective attention in frontoparietal cortex: Multivoxel codes adjust to prioritize task-relevant information. Journal of Cognitive Neuroscience, 29, 310–321.
Jenkinson, M., Bannister, P., Brady, M., & Smith, S. (2002). Improved optimisation for the robust and accurate linear registration and motion correction of brain images. Neuroimage, 17, 825–841.
Jenkinson, M., & Smith, S. (2001). A global optimisation method for robust affine registration of brain images. Medical Image Analysis, 5, 143–156.
Jones, M., & Love, B. C. (2007). Beyond common features: The role of roles in determining similarity. Cognitive Psychology, 55, 196–231.
Kamitani, Y., & Tong, F. (2005). Decoding the visual and subjective contents of the human brain. Nature Neuroscience, 8, 679–685.
Kanwisher, N., McDermott, J., & Chun, M. M. (1997). The fusiform face area: A module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17, 4302–4311.
Kastner, S., & Ungerleider, L. G. (2000). Mechanisms of visual attention in the human cortex. Annual Review of Neuroscience, 23, 315–341.
Kourtzi, Z., Betts, L. R., Sarkheil, P., & Welchman, A. E. (2005). Distributed neural plasticity for shape learning in the human visual cortex. PLoS Biology, 3, e204.
Kruschke, J. K. (1992). ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review, 99, 22–44.
Kuai, S.-G., Levi, D., & Kourtzi, Z. (2013). Learning optimizes decision templates in the human visual cortex. Current Biology, 23, 1799–1804.
Lanczos, C. (1964). Evaluation of noisy data. Journal of the Society for Industrial and Applied Mathematics, Series B: Numerical Analysis, 1, 76–85.
Love, B. C., Medin, D. L., & Gureckis, T. M. (2004). SUSTAIN: A network model of category learning. Psychological Review, 111, 309–332.
Mack, M. L., Love, B. C., & Preston, A. R. (2016). Dynamic updating of hippocampal object representations reflects new conceptual knowledge. Proceedings of the National Academy of Sciences, U.S.A., 113, 13203–13208.
Mack, M. L., Preston, A. R., & Love, B. C. (2013). Decoding the brain's algorithm for categorization from its neural implementation. Current Biology, 23, 2023–2027.
Maddox, W. T., & Ashby, F. G. (2004). Dissociating explicit and procedural-learning based systems of perceptual category learning. Behavioural Processes, 66, 309–332.
Marcus, G. F. (2001). The algebraic mind: Integrating connectionism and cognitive science. Cambridge, MA: MIT Press.
Markman, A. B., & Dietrich, E. (2000). Extending the classical view of representation. Trends in Cognitive Sciences, 4, 470–475.
Mesulam, M. M. (1981). A cortical network for directed attention and unilateral neglect. Annals of Neurology, 10, 309–325.
Meyers
,
E. M.
,
Freedman
,
D. J.
,
Kreiman
,
G.
,
Miller
,
E. K.
, &
Poggio
,
T.
(
2008
).
Dynamic population coding of category information in inferior temporal and prefrontal cortex
.
Journal of Neurophysiology
,
100
,
1407
1419
. ,
[PubMed]
Miller
,
E. K.
, &
Cohen
,
J. D.
(
2001
).
An integrative theory of prefrontal cortex function
.
Annual Review of Neuroscience
,
24
,
167
202
. ,
[PubMed]
Newell
,
A.
(
1980
).
Physical symbol systems
.
Cognitive Science
,
4
,
135
183
.
Nosofsky
,
R. M.
(
1986
).
Attention, similarity, and the identification–categorization relationship
.
Journal of Experimental Psychology: General
,
115
,
39
61
. ,
[PubMed]
Pedregosa
,
F.
,
Varoquaux
,
G.
,
Gramfort
,
A.
,
Michel
,
V.
,
Thirion
,
B.
,
Grisel
,
O.
, et al
(
2011
).
Scikit-learn: Machine learning in Python
.
Journal of Machine Learning Research
,
12
,
2825
2830
.
Peirce
,
J.
,
Gray
,
J. R.
,
Simpson
,
S.
,
,
M.
,
Höchenberger
,
R.
,
Sogo
,
H.
, et al
(
2019
).
PsychoPy2: Experiments in behavior made easy
.
Behavior Research Methods
,
51
,
195
203
. ,
[PubMed]
Power
,
J. D.
,
Barnes
,
K. A.
,
Snyder
,
A. Z.
,
Schlaggar
,
B. L.
, &
Petersen
,
S. E.
(
2012
).
Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion
.
Neuroimage
,
59
,
2142
2154
. ,
[PubMed]
Pylyshyn
,
Z. W.
(
1984
).
Computation and cognition: Toward a foundation for cognitive science
.
Cambrige, MA
:
MIT Press
.
Rizzolatti
,
G.
,
Riggio
,
L.
,
Dascola
,
I.
, &
Umiltá
,
C.
(
1987
).
Reorienting attention across the horizontal and vertical meridians: Evidence in favor of a premotor theory of attention
.
Neuropsychologia
,
25
,
31
40
. ,
[PubMed]
Roy
,
J. E.
,
Riesenhuber
,
M.
,
Poggio
,
T.
, &
Miller
,
E. K.
(
2010
).
Prefrontal cortex activity during flexible categorization
.
Journal of Neuroscience
,
30
,
8519
8528
. ,
[PubMed]
Schapiro
,
A. C.
,
Rogers
,
T. T.
,
Cordova
,
N. I.
,
Turk-Browne
,
N. B.
, &
Botvinick
,
M. M.
(
2013
).
Neural representations of events arise from temporal community structure
.
Nature Neuroscience
,
16
,
486
492
. ,
[PubMed]
Schapiro
,
A. C.
,
Turk-Browne
,
N. B.
,
Norman
,
K. A.
, &
Botvinick
,
M. M.
(
2016
).
Statistical learning of temporal community structure in the hippocampus
.
Hippocampus
,
26
,
3
8
. ,
[PubMed]
Seabold
,
S.
, &
Perktold
,
J.
(
2010
).
Statsmodels: Econometric and statistical modeling with Python
. In
Proceedings of the 9th Python in Science Conference
(pp.
92
96
).
Seger
,
C. A.
, &
Miller
,
E. K.
(
2010
).
Category learning in the brain
.
Annual Review of Neuroscience
,
33
,
203
219
. ,
[PubMed]
Sigala
,
N.
, &
Logothetis
,
N. K.
(
2002
).
Visual categorization shapes feature selectivity in the primate temporal cortex
.
Nature
,
415
,
318
320
. ,
[PubMed]
Virtanen
,
P.
,
Gommers
,
R.
,
Oliphant
,
T. E.
,
Haberland
,
M.
,
Reddy
,
T.
,
Cournapeau
,
D.
, et al
(
2020
).
SciPy 1.0: Fundamental algorithms for scientific computing in Python
.
Nature Methods
,
17
,
261
272
. ,
[PubMed]
Wang
,
J. X.
,
Rogers
,
L. M.
,
Gross
,
E. Z.
,
Ryals
,
A. J.
,
Dokucu
,
M. E.
,
Brandstatt
,
K. L.
, et al
(
2014
).
Targeted enhancement of cortical–hippocampal brain networks and associative memory
.
Science
,
345
,
1054
1057
. ,
[PubMed]
,
M.
(
2020
).
An introduction to Seaborn—Seaborn 0.10.1 documentation
.
Statistical Data Visualization
.
Wolpert
,
D. M.
, &
Ghahramani
,
Z.
(
2000
).
Computational principles of movement neuroscience
.
Nature Neuroscience
,
3
,
1212
1217
. ,
[PubMed]
Wolpert
,
D. M.
, &
Witkowski
,
J.
(
2014
).
A conversation with Daniel Wolpert
.
Cold Spring Harbor Symposia on Quantitative Biology
,
79
,
297
298
. ,
[PubMed]
Woolrich
,
M. W.
,
Ripley
,
B. D.
,
,
M.
, &
Smith
,
S. M.
(
2001
).
Temporal autocorrelation in univariate linear modeling of fMRI data
.
Neuroimage
,
14
,
1370
1386
. ,
[PubMed]
Zeithamova
,
D.
,
Mack
,
M. L.
,
Braunlich
,
K.
,
Davis
,
T.
,
Seger
,
C. A.
,
van Kesteren
,
M. T. R.
, et al
(
2019
).
Brain mechanisms of concept learning
.
Journal of Neuroscience
,
39
,
8259
8266
. ,
[PubMed]
Zhang
,
Y.
,
,
M.
, &
Smith
,
S.
(
2001
).
Segmentation of brain MR images through a hidden Markov random field model and the expectation–maximization algorithm
.
IEEE Transactions on Medical Imaging
,
20
,
45
57
. ,
[PubMed]
Zhang
,
J.
, &
Kourtzi
,
Z.
(
2010
).
Learning-dependent plasticity with and without training in the human brain
.
Proceedings of the National Academy of Sciences, U.S.A.
,
107
,
13503
13508
. ,
[PubMed]
## Author notes
This article is part of a Special Focus entitled Integrating Theory and Data: Using Computational Models to Understand Neuroimaging Data; deriving from a symposium at the 2020 Annual Meeting of the Cognitive Neuroscience Society.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. For a full description of the license, please visit https://creativecommons.org/licenses/by/4.0/legalcode.
|
2023-04-02 13:36:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4301786422729492, "perplexity": 4624.118245629193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00792.warc.gz"}
|
https://zbmath.org/?q=an:1055.60056
|
# zbMATH — the first resource for mathematics
On Wong-Zakai approximations with $$\delta$$-martingales. (English) Zbl 1055.60056
The authors study approximations of the solutions of the following Stratonovich SDE $dx(t)= b(x(t))\,dt+ \sum^n_{j=1} \sigma_j(x(t))\circ dW^j(t),\quad x(0)= \xi\in\mathbb{R}^d,\tag{$$*$$}$ driven by an $$n$$-dimensional Wiener process $$(W^1(t),\dots,W^n(t))$$. The starting point of their investigation is a family of known approximations of the driving Wiener process (say, by smoothing by convolution or by polynomial interpolation) through absolutely continuous processes $$B_\delta(t)$$ such that $$\sup_{0\leq t\leq T}\mathbb{E}| W(t)- B_\delta(t)|^2\leq C_T\delta^\varepsilon$$ (‘approximation with exponent $$\varepsilon$$’). The central question is whether this pushes through to the solution $$x_\delta(t)$$ of the SDE $$(*)$$ with $$dB_\delta(t)$$ replacing $$dW(t)$$ as driving noise. The answer is affirmative if, say, $$B_\delta(t)$$ is a good approximation with exponent $$\varepsilon$$ and if $$b$$, $$\sigma$$ and $$\nabla\sigma$$ are bounded and globally Lipschitz continuous. In order to treat whole classes of approximating noise terms (and not just the examples mentioned above), the authors coin the notion of an approximating $$\delta$$-martingale, which is essentially an $${\mathcal F}_{t+\delta}$$-adapted semimartingale $$m_\delta(t)$$ with locally bounded variation satisfying, for every $${\mathcal F}_t$$-adapted process $$f(t)$$, the additional requirement that $$| E\int^t_0 f(s)\,dm_\delta(s)|$$ is bounded by the $$L^2(P\times dt)$$-norm of $$f(t)$$ and the oscillations of $$f$$ (taken in the $$L^1(P)\otimes L^\infty(dt)$$-norm). (More manageable criteria for a process to be an approximating $$\delta$$-martingale are also given.) Most of the ‘classical’ approximations of the Wiener process satisfy these criteria.
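A rough numerical illustration of the Wong-Zakai idea (a toy example of my own, not taken from the paper under review): compare the pathwise solution of a simple Stratonovich SDE with the solution of the random ODE obtained by replacing the Wiener process with a piecewise-linear interpolation on a coarse grid.

import numpy as np

# Toy Wong-Zakai illustration (hypothetical example, not from the paper):
# Stratonovich SDE  dx = -x dt + x o dW,  x(0) = 1,
# whose pathwise solution is x(t) = exp(-t + W(t)).
# Replacing W by a piecewise-linear interpolation B_delta on a coarse grid
# gives the random ODE  x' = -x + x * B_delta'(t).
rng = np.random.default_rng(1)
T, n_coarse, n_fine = 1.0, 50, 50_000

# Brownian increments on the coarse grid and the resulting path
dt_coarse = T / n_coarse
dW = rng.standard_normal(n_coarse) * np.sqrt(dt_coarse)
W = np.concatenate(([0.0], np.cumsum(dW)))
slope = dW / dt_coarse          # derivative of B_delta on each coarse interval

# Integrate the random ODE with a fine explicit Euler scheme
dt = T / n_fine
x = 1.0
for k in range(n_fine):
    j = min(int(k * dt / dt_coarse), n_coarse - 1)
    x += (-x + x * slope[j]) * dt

exact = np.exp(-T + W[-1])      # exact Stratonovich solution at time T
print(f"Wong-Zakai ODE endpoint: {x:.4f}   exact solution: {exact:.4f}")

For this particular equation both quantities equal exp(-T + W(T)) at the grid points, so the two printed numbers should agree up to the discretisation error of the Euler scheme.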
##### MSC:
60H10 Stochastic ordinary differential equations (aspects of stochastic analysis) 60J65 Brownian motion 60G15 Gaussian processes
Full Text:
|
2021-01-19 09:51:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8729555606842041, "perplexity": 482.2042001238629}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703518201.29/warc/CC-MAIN-20210119072933-20210119102933-00195.warc.gz"}
|
https://www.dummies.com/article/academics-the-arts/math/pre-algebra/how-to-increase-the-terms-of-a-fraction-191314/
|
Even if fractions look different, they can actually represent the same amount; in other words, one of the fractions will have increased terms compared to the other. You may need to increase the terms of fractions to work with them in an equation.
To increase the terms of a fraction by a certain number, you multiply both the numerator and the denominator by that number. For example, to increase the terms of the fraction 3/4 by 2, multiply both the numerator and the denominator by 2: 3/4 = (3 × 2)/(4 × 2) = 6/8.
Similarly, to increase the terms of the fraction 5/11 by 7, multiply both the numerator and the denominator by 7: 5/11 = (5 × 7)/(11 × 7) = 35/77.
Increasing the terms of a fraction doesn’t change its value. Because you’re multiplying the numerator and denominator by the same number, you’re essentially multiplying the fraction by a fraction that equals 1.
One key thing to know is how to increase the terms of a fraction so that the denominator becomes a preset number. Here’s how you do it:
1. Divide the new denominator by the old denominator.
To keep the fractions equal, you have to multiply the numerator and denominator of the old fraction by the same number. This first step tells you what the old denominator was multiplied by to get the new one.
For example, suppose you want to raise the terms of the fraction 4/7 so that the denominator is 35. That is, you’re trying to fill in the question mark here: 4/7 = ?/35.
Divide 35 by 7, which tells you that the denominator was multiplied by 5.
2. Multiply this result by the old numerator to get the new numerator.
You now know how the two denominators are related. The numerators need to have the same relationship, so multiply the old numerator by the number you found in Step 1.
Multiply 5 by 4, which gives you 20. So here’s the answer: 4/7 = 20/35. (A short script version of this procedure is sketched below.)
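If you want to check this kind of arithmetic with a computer, here is a small Python sketch (my own illustration, not part of the original article) of the two-step procedure above:

from fractions import Fraction

def raise_terms(numerator, denominator, new_denominator):
    """Return the numerator that keeps n/d equal when d becomes new_denominator."""
    factor = new_denominator // denominator          # Step 1: divide new by old denominator
    if denominator * factor != new_denominator:
        raise ValueError("new denominator must be a multiple of the old one")
    new_numerator = numerator * factor               # Step 2: multiply the old numerator
    assert Fraction(numerator, denominator) == Fraction(new_numerator, new_denominator)
    return new_numerator

print(raise_terms(4, 7, 35))   # -> 20, i.e. 4/7 = 20/35
print(raise_terms(3, 4, 8))    # -> 6,  i.e. 3/4 = 6/8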
|
2022-10-01 05:37:44
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9660794734954834, "perplexity": 187.7953580007793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335530.56/warc/CC-MAIN-20221001035148-20221001065148-00149.warc.gz"}
|
https://www.snapxam.com/problems/65076276/-5-0x-2-10-0x-0
|
# Solve the quadratic equation $-5x^2-10x-4=0$
## Step-by-step Solution
$x=-0.552786,\:x=-1.447214$
## Step-by-step Solution
Problem to solve:
$-5x^2-10x-4=0$
1. For a simpler handling of the equation, change the sign of all terms by multiplying the whole equation by $-1$:
$5x^2+10x+4=0$
2. To find the roots of a polynomial of the form $ax^2+bx+c$ we use the quadratic formula $x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$, where in this case $a=5$, $b=10$ and $c=4$. Substituting the coefficients gives
$x=\frac{-10\pm\sqrt{10^2-4\cdot 5\cdot 4}}{2\cdot 5}=\frac{-10\pm\sqrt{20}}{10}=\frac{-10\pm 2\sqrt{5}}{10}$
3. To obtain the two solutions, split this into two equations, one where $\pm$ is positive $(+)$ and one where it is negative $(-)$:
$x=\frac{-10+2\sqrt{5}}{10}\approx -0.552786,\qquad x=\frac{-10-2\sqrt{5}}{10}\approx -1.447214$
$x=-0.552786,\:x=-1.447214$
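As a quick cross-check (a sketch of my own, independent of the solver output above), the same roots can be reproduced numerically in Python:

import numpy as np

# Coefficients of -5x^2 - 10x - 4 = 0
a, b, c = -5.0, -10.0, -4.0

# Quadratic formula
disc = np.sqrt(b**2 - 4*a*c)
roots_formula = ((-b + disc) / (2*a), (-b - disc) / (2*a))

# Same thing via numpy's polynomial root finder
roots_numpy = np.roots([a, b, c])

print(roots_formula)   # approximately (-1.447214, -0.552786)
print(roots_numpy)     # approximately [-1.447214, -0.552786]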
|
2023-03-23 23:23:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7680809497833252, "perplexity": 1812.907746275084}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945218.30/warc/CC-MAIN-20230323225049-20230324015049-00354.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=568345
|
# Statics Question (Using Modulus of Rigidity)
by papasmurf
Tags: modulus, rigidity, statics
P: 22
1. The problem statement, all variables and given/known data
Find the displacement (mm) in the horizontal direction of point A due to the force P. P = 100 kN, w1 = 19 mm, w2 = 15 mm.
2. Relevant equations
$\tau$ = G * $\gamma$
$\tau$ = shear stress = P / A
$\gamma$ = shear strain = (pi / 2) - $\alpha$
3. The attempt at a solution
I haven't attempted to work out a solution here yet, but I do have a question regarding the separate G values that are given. Can I just look at the top layer, the layer where P is acting, and use that G value to determine $\delta$? Or do I need to do something with the other G value as well?
If I were to try something, I would find tau by doing 100[kN] / (100[mm] * 2[mm]). So tau would be equal to 1[kN]/2[mm2] = 0.5[GPa]. Next I would find gamma by dividing tau by G (100[GPa]), giving me $\gamma$ = .005 rad. I can use trig to define gamma as $\gamma$ = sin-1($\delta$/40). Setting this equal to .005 I would get $\delta$ = .20[mm].
Even if I do have to do something with both of the G values, I feel like my method is correct. Any help is appreciated, thanks in advance.
Attached Thumbnails
P: 22
I'm getting closer to the correct answer. First I set V/A, where V is the internal shear force and A is the area of the cross section where the shear force is acting, equal to G*$\gamma$, where G is the modulus of rigidity and gamma is the shear strain. I rewrote gamma as pi/2 - θ, where θ = cos-1($\delta$/h) and h is the height of the "layer", and put it all together so that my equation looks like this:
V/A = G * ( pi/2 - cos-1($\delta$/h) )
Solving for $\delta$ I come up with
$\delta$ = h * cos( (pi/2) - V/(AG) )
I used this formula for each "layer" and added up all of the deltas. However, after plugging my numbers in and making sure of correct units, I am still off by fractions of a millimeter.
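Not part of the original thread, but a small Python sketch of the layered approach described in the posts above; the layer heights, areas and shear moduli below are placeholders, since the real values are in the attached figure:

import numpy as np

# Hypothetical numbers for illustration only -- replace with the values
# from the problem figure.
P = 100e3                      # applied shear force, N
layers = [                     # (height h [m], shear area A [m^2], modulus G [Pa])
    (0.019, 0.100 * 0.002, 100e9),   # e.g. the w1 = 19 mm layer
    (0.015, 0.100 * 0.002, 26e9),    # e.g. the w2 = 15 mm layer (G assumed)
]

delta_total = 0.0
for h, A, G in layers:
    tau = P / A                        # average shear stress in the layer
    gamma = tau / G                    # shear strain, from tau = G * gamma
    delta_total += h * np.tan(gamma)   # horizontal displacement of this layer
    # for small angles, h * tan(gamma) is approximately h * gamma

print(f"total horizontal displacement: {delta_total * 1e3:.3f} mm")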
|
2014-07-28 16:28:34
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7452436089515686, "perplexity": 553.8418618918548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510261249.37/warc/CC-MAIN-20140728011741-00218-ip-10-146-231-18.ec2.internal.warc.gz"}
|
https://zbmath.org/?q=an:1048.34114
|
# zbMATH — the first resource for mathematics
Oscillation and global attractivity in hematopoiesis model with periodic coefficients. (English) Zbl 1048.34114
The author considers the following nonlinear delay differential equation $$p'(t)= {\beta(t) p^m(t- k\omega)\over 1+ p^n(t- k\omega)}- \gamma(t) p(t),\tag1$$ where $k$ is a positive integer, $\beta(t)$ and $\gamma(t)$ are positive periodic functions of period $\omega$. The main result for the nondelay case is Theorem 2.1, where the author proves that (1) has a unique positive periodic solution $\overline p(t)$. He also studies the global attractivity of $\overline p(t)$. In the delay case, sufficient conditions for the oscillation of all positive solutions to (1) about $\overline p(t)$ are given, also some sufficient conditions for the global attractivity of $\overline p(t)$ are established. It should be noted that (1) is a modification of an equation proposed as a model of hematopoiesis. Similar equations are also used as models in population dynamics.
##### MSC:
34K11 Oscillation theory of functional-differential equations 34K20 Stability theory of functional-differential equations 92D25 Population dynamics (general) 92C50 Medical applications of mathematical biology 34K60 Qualitative investigation and simulation of models
##### Keywords:
oscillation; global attractivity; hematopoiesis
Full Text:
|
2016-05-05 03:03:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6629047393798828, "perplexity": 8636.437077603217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860125857.44/warc/CC-MAIN-20160428161525-00143-ip-10-239-7-51.ec2.internal.warc.gz"}
|
https://k12.libretexts.org/Bookshelves/Mathematics/Algebra/02%3A_Linear_Equations/2.03%3A_Solve_Two-Step_Equations/2.3.02%3A_Two-Step_Equations_from_Verbal_Models
|
# 2.3.2: Two-Step Equations from Verbal Models
## Two-Step Equations from Verbal Models
Kara works as a babysitter for the neighborhood she lives in. She is saving money to plant an amazing sustainable home garden. For each babysitting job she took on, Kara charged $4 for bus fare plus an additional $8 for each hour she worked. On Saturday, Kara earned $26 for the entire babysitting job. Write an equation to represent this situation, where h is the total number of hours that Kara worked.

In this concept, you will learn to write two-step equations from verbal models.

### Writing Two-Step Equations from Verbal Models

You solve equations like 3x=5, (x/−2)=−4, and x+31=8 by doing one operation to both sides of the equation to isolate the variable. But how can you solve the following equations?

3x+5=20

(x/3)−2=5

In these equations you need to do two operations to each side of the equation to isolate the variable. However, before you do this, look at some examples of word problems that involve two steps.

Here is an example. Change the following word problem into an equation: six times a number, plus five is forty-one.

First, change the language into numbers and symbols. "Times" means multiplication, "a number" (since it is not identified) is your variable x, "plus" means addition, and the word "is" means equals. The answer is 6x+5=41.

Here is another example. Change the following word problem into an equation: four less than two times a number is equal to eight.

First, change the language into an equation. "Less than" means subtraction, but be careful about the order. "Times" means multiplication, "a number" is your variable x, and "is equal to" means the same thing as equals. The answer is 2x−4=8.

### Examples

Example 2.3.2.1

Earlier, you were given a problem about Kara's sustainable garden. She is saving money to build one at home. She has a babysitting job where she earns $8 an hour, but she also charges $4 for bus fare. If she earned $26 in total, can you write an equation to represent this?
Solution
First, let h be the number of hours Kara worked.
Next, re-phrase the text to make it easier to understand. Her total earnings were $4 plus the number of hours she worked times $8. This equals $26, the total amount earned.
Then, turn the language into numbers and symbols and write the equation. “Plus” means addition, “the number of hours” is the variable h, “times” means multiplication.
4+8h=26
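As an aside (not part of the original lesson), once the equation is written you can let a computer solve it; a minimal SymPy sketch:

from sympy import symbols, Eq, solve

h = symbols('h')
equation = Eq(4 + 8*h, 26)   # Kara's earnings: bus fare plus 8 dollars per hour
print(solve(equation, h))    # [11/4], i.e. Kara worked 2.75 hours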
Example 2.3.2.2
Write an equation for this statement.
A number divided by two, and then added to six is equal to fourteen.
Solution
First, change the language into numbers and mathematical symbols. “A number” is your variable x, “divided by” means division, “and then added to” means that after you divide you add, and “is” means equals.
Then, write the equation.
(x/2)+6=14
Write an equation for each word problem.
Example 2.3.2.3
The product of five and a number, plus three is twenty-three.
Solution
First, translate the language into numbers and symbols. “The product of” means multiply what is in the parenthetical expression “five and a number,” “a number” is your variable x, “plus” means addition, and “is” means equals.
5x+3=23
Example 2.3.2.4
Six times a number, minus four is thirty-two.
Solution
First, translate the language into numbers and symbols. “Times” means multiplication, “a number” is the variable x, “minus” means subtraction, and “is” means equals.
6x−4=32
Example 2.3.2.5
A number y, divided by 3, and then added to seven is ten.
Solution
First, translate the language into numbers and symbols. “A number y” is your variable y, “and then added to” means you divide and then add, and “is” means equals.
(y/3)+7=10
### Review
Write each statement as a two-step equation.
1. Two times a number, plus seven is nineteen.
2. Three times a number, and five is twenty.
3. Six times a number, and ten is forty-six.
4. Seven less than two times a number is twenty-one.
5. Eight less than three times a number is sixteen.
6. A number divided by two, plus seven is ten.
7. A number divided by three, and six is eleven.
8. Two less than a number divided by four is ten.
9. Four times a number, and eight is twenty.
10. Five times a number, take away three is twelve.
11. Two times a number, and seven is twenty-nine.
12. Four times a number, and two is twenty-six.
13. Negative three times a number, take a way four is equal to negative ten.
14. Negative two times a number, and eight is equal to negative twelve.
15. Negative five times a number, minus eight is equal to seventeen.
|
2021-05-17 21:34:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7326865196228027, "perplexity": 3867.4867393027216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00235.warc.gz"}
|
https://plainmath.net/20967/greenhouse-feet-long-angle-roof-circ-determine-volume-greenhouse-cubic
|
# The greenhouse is 40 feet long and the angle at the top of the roof is 90^{\circ}. Determine the volume of the greenhouse in cubic feet.
The front (and back) of a greenhouse have the same shape and dimensions shown below. The greenhouse is 40 feet long and the angle at the top of the roof is $$\displaystyle{90}^{{\circ}}$$. Determine the volume of the greenhouse in cubic feet. Explain your solution.
Asma Vang
Solution below
|
2022-01-22 02:40:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4757217466831207, "perplexity": 700.1873353245778}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303729.69/warc/CC-MAIN-20220122012907-20220122042907-00219.warc.gz"}
|
https://userpage.fu-berlin.de/soga/200/2040_continous_random_variables/20451_Students_t-Distribution_in_R.html
|
The R software provides access to the t-distribution by the dt(), pt(), qt() and rt() functions. Apply the help() function on these functions for further information.
The rt() function generates random deviates of the t-distribution and is written as rt(n, df). We may easily generate n random samples. Recall that the number of degrees of freedom for a t-distribution is equal to the sample size minus one, that is,
$df = n - 1\text{.}$
n <- 30
df <- n - 1
rt(n, df)
## [1] 0.73870099 0.72526773 0.57681933 -0.41335675 1.08160178
## [6] -2.03766753 -0.30897959 1.40177628 1.35433129 -1.14349964
## [11] -1.01500016 0.25982365 1.06870860 -1.02522571 -0.01872354
## [16] -1.32623310 -0.53502565 -1.63400367 -0.77143105 -0.49643195
## [21] -0.32242805 -0.54464546 2.14315143 -2.54128855 0.31696683
## [26] -0.46055848 -0.26070865 -0.18188416 -0.32236584 0.06151426
Further we may generate a very large number of random samples and plot them as a histogram.
n <- 10000
df <- n - 1
samples <- rt(n, df)
hist(samples, breaks = 'Scott', freq = FALSE)
By using the dt() function we may calculate the probability density function, and thus, the vertical distance between the horizontal axis and the t-curve at any point. For the purpose of demonstration we construct a t-distribution with $$df=5$$ and calculate the probability density function at $$t = -4,-2,0,2,4$$.
x <- seq(-4, 4, by = 2)
dt(x, df = 5)
## [1] 0.005123727 0.065090310 0.379606690 0.065090310 0.005123727
Another very useful function is the pt() function, which returns the area under the t-curve for any given interval. Let us calculate the area under the curve for the intervals $$j_i = (-\infty, -2], (-\infty, 0], (-\infty, 2]$$ and $$k_i = [-2, \infty),[0, \infty), [2, \infty)$$ for a random variable following a t-distribution with $$df=5$$.
df <- 5
ji <- c(-2,0,2)
pt(ji, df = df, lower.tail = TRUE)
## [1] 0.05096974 0.50000000 0.94903026
df <- 5
ki <- c(-2,0,2)
pt(ki, df = df, lower.tail = FALSE)
## [1] 0.94903026 0.50000000 0.05096974
|
2019-11-13 09:07:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.651518702507019, "perplexity": 1521.6207514323576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667177.24/warc/CC-MAIN-20191113090217-20191113114217-00505.warc.gz"}
|
https://stats.stackexchange.com/questions/366138/why-cant-we-use-backpropagation-and-gradient-descent-on-a-restricted-boltzmann
|
Why can't we use backpropagation and gradient descent on a Restricted Boltzmann Machine
Can someone please explain why we cannot use the backpropagation algorithm and gradient descent to train a Restricted Boltzmann Machine. In other words, why can't we train an RBM in the same manner that we train a feedforward network?
Whenever I have googled around for answers to this question, I keep seeing answers that say the partition function for the RBM is intractable and so you need to use something like Gibbs sampling and Contrastive Divergence to train the RBM.
The problem with this answer is that no one really explains why you don't need to use the partition function when training a feedforward network? Whenever they teach feedforward networks in textbooks nowadays, they just start by showing the backpropagation algorithm. I have not seen anyone try to explain why you don't need to worry about a partition function for a feedforward network. Can someone explain this part?
I have used Gibbs sampling and Metropolis-Hastings when training hierarchical Bayesian models. So I understand the reason why the partition function is so complicated in that circumstance because you are really trying to use MCMC to estimate the posterior distribution of the parameter. But aren't we trying to do the same thing in a feedforward network when we are trying to learn weights from the data?
So that is where I am getting confused. Why MCMC with RBMs but not feedforward networks?
Whereas an RBM is a generative model that directly models the distribution of the data $p(x)$, most neural networks are used in a discriminative fashion and model $p(y|x)$, so the two are not really comparable.
When neural networks are used for modeling $p(x)$, it is usually as a product of conditionals $p(\mathbf x) = p(x_0)\prod p(x_i|x_{< i}; \theta)$ which does not use any unobserved variables -- see language modeling -- or a VAE model, where the intractable parts are avoided by indirectly optimizing a lower bound on $p(x)$.
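To make the contrast concrete, here is a minimal sketch (my own illustration, not from the answer above) of a single contrastive-divergence (CD-1) update for a tiny binary RBM: the positive phase uses the data, while the negative phase uses one Gibbs step as a cheap stand-in for the intractable model expectation that the exact log-likelihood gradient would require.

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

n_vis, n_hid = 6, 3
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_v, b_h = np.zeros(n_vis), np.zeros(n_hid)

def cd1_step(v0, lr=0.1):
    # positive phase: hidden activations driven by the data
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hid) < ph0).astype(float)
    # negative phase: one Gibbs step v0 -> h0 -> v1 -> h1
    pv1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(n_vis) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + b_h)
    # approximate log-likelihood gradient (data term minus model term)
    dW = np.outer(v0, ph0) - np.outer(v1, ph1)
    return lr * dW, lr * (v0 - v1), lr * (ph0 - ph1)

v_data = rng.integers(0, 2, n_vis).astype(float)
dW, db_v, db_h = cd1_step(v_data)
W += dW; b_v += db_v; b_h += db_h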
|
2020-10-24 18:21:41
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7504904866218567, "perplexity": 263.0108343521194}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107884322.44/warc/CC-MAIN-20201024164841-20201024194841-00522.warc.gz"}
|
https://techytok.com/lesson-types/
|
# Types and Structures
In this lesson we will learn what types are and how it is possible to define functions that work on types. We will learn which are the differences between abstract and concrete types, how to define immutable and mutable types and how to create a type constructor. We will give a brief introduction to multiple dispatch and see how types have a role in it.
You can find the code for this lesson here.
We can think of types as containers for data only. Moreover, it is possible to define a type hierarchy so that functions that work for parent type work also for the children (if they are written properly). A parent type can only be an AbstractType (like Number), while a child can be both an abstract or concrete type.
In the tree diagram, types in round bubbles are abstract types, while the ones in square bubbles are concrete types.
# Implementation
To declare a Type we use either the type or struct keyword.
To declare an abstract type we use:
abstract type Person
end
abstract type Musician <: Person
end
You may find it surprising, but apparently musicians are people, so Musician is a sub-type of Person. There are many kinds of musicians, for example rock-stars and classic musicians, so we define two new concrete types (in particular, this kind of type is called a composite type):
mutable struct Rockstar <: Musician
name::String
instrument::String
bandName::String
headbandColor::String
instrumentsPlayed::Int
end
struct ClassicMusician <: Musician
name::String
instrument::String
end
Notably, rock-stars love to change the colour of their headband, so we have made Rockstar a mutable struct, which is a concrete type whose element values can be modified. On the contrary, classic musicians are known for their everlasting love for their instrument, which will never change, so we have made ClassicMusician an immutable concrete type.
We can define another sub-type of Person, Physicist, as I am a physicist and I was getting envious of rock-stars:
mutable struct Physicist <: Person
name::String
sleepHours::Float64
favouriteLanguage::String
end
aure = Physicist("Aurelio", 6, "Julia")
>>>aure.name
Aurelio
>>>aure.sleepHours
6
>>>aure.favouriteLanguage
"Julia"
Luckily my exam session is over now and I finally have a little bit more time to sleep, so I’ll adjust my sleeping schedule to sleep eight hours:
aure.sleepHours = 8
Incidentally I am also a ClassicMusician and I play violin, so I can create a new structure:
aure_musician = ClassicMusician("Aurelio", "Violin")
>>>aure_musician.instrument = "Cello"
setfield! immutable struct of type ClassicMusician cannot be changed
As you can see, I love violin and I just can’t change my instrument, as ClassicMusician is an immutable struct.
I am not a rock-star, but my friend Ricky is one, so we’ll define:
ricky = Rockstar("Riccardo", "Voice", "Black Lotus", "red", 2)
>>>ricky.headbandColor
red
# Functions and types: multiple dispatch
It is possible to write functions that operate on both abstract and concrete types. For example, every person is likely to have a name, so we can define the following function:
function introduceMe(person::Person)
println("Hello, my name is $(person.name).")
end

>>>introduceMe(aure)
Hello, my name is Aurelio

Only musicians play instruments, so we can define the following function:

function introduceMe(person::Musician)
println("Hello, my name is $(person.name) and I play $(person.instrument).")
end

>>>introduceMe(aure_musician)
Hello, my name is Aurelio and I play Violin

And for a rock-star we can write:

function introduceMe(person::Rockstar)
if person.instrument == "Voice"
println("Hello, my name is $(person.name) and I sing.")
else
println("Hello, my name is $(person.name) and I play $(person.instrument).")
end
println("My band name is $(person.bandName) and my favourite headband colour is $(person.headbandColor)!")
end
The ::SomeType notation indicates to Julia that person has to be of the aforementioned type or a sub-type. Only the most strict type requirement is considered (which is the lowest type in the type tree), for example ricky is a Person, but “more importantly” he is a Rockstar (Rockstar is placed lower in the type tree), thus introduceMe(person::Rockstar) is called. In other words, the function with the closest type signature will be called.
This is an example of multiple dispatch, which means that we have written a single function with different methods depending on the type of the variable. We will come back again to multiple dispatch in this lesson, as it is one of the most important features of Julia and is considered a more advanced topic, together with type annotations. As an anticipation ::Rockstar is a type annotation, the compiler will check if person is a Rockstar (or a sub-type of it) and if that is true it will execute the function.
# Type constructor
When a type is applied like a function it is called a constructor. When we created the previous types, two constructors were generated automatically (these are called default constructors). One accepts any arguments and calls convert to convert them to the types of the fields, and the other accepts arguments that match the field types exactly (String and String in the case of ClassicMusician). The reason both of these are generated is that this makes it easier to add new definitions without inadvertently replacing a default constructor.
Sometimes it is more convenient to create a custom constructor, so that it is possible to assign default values to certain variables or perform some initial computations.
mutable struct MyData
x::Float64
x2::Float64
y::Float64
z::Float64
function MyData(x::Float64, y::Float64)
x2=x^2
z = sin(x2+y)
new(x, x2, y, z)
end
end
>>>MyData(2.0, 3.0)
MyData(2.0, 4.0, 3.0, 0.6569865987187891)
Sometimes it may be useful to use other types for x, x2 and y, so it is possible to use parametric types (i.e. types that accept type information at construction time):
mutable struct MyData2{T<:Real}
x::T
x2::T
y::T
z::Float64
function MyData2{T}(x::T, y::T) where {T<:Real}
x2=x^2
z = sin(x2+y)
new(x, x2, y, z)
end
end
>>>MyData2{Float64}(2.0,3.0)
MyData2{Float64}(2.0, 4.0, 3.0, 0.6569865987187891)
>>>MyData2{Int}(2,3)
MyData2{Int64}(2, 4, 3, 0.6569865987187891)
It is crucial for performance that you use concrete types inside a composite type (like Float64 or Int instead of Real, which is an abstract type), thus parametric types are a good option to maintain type flexibility while also defining all the types of the variables inside a composite type.
# Example
Mutable types are particularly useful when it comes to storing data that needs to be shared between some functions inside a module. It is not uncommon to define custom types in a module to store all the data which needs to be shared between functions and which is not constant.
module TestModuleTypes
export Circle, computePerimeter, computeArea, printCircleEquation
mutable struct Circle{T<:Real}
radius::T
perimeter::Float64
area::Float64
# we initialize perimeter and area to -1.0, which is not a possible value
function Circle{T}(radius::T) where {T<:Real}
new(radius, -1.0, -1.0)
end
end
@doc raw"""
computePerimeter(circle::Circle)
Compute the perimeter of circle and store the value.
"""
function computePerimeter(circle::Circle)
circle.perimeter = 2 * π * circle.radius
return circle.perimeter
end
@doc raw"""
computeArea(circle::Circle)
Compute the area of circle and store the value.
"""
function computeArea(circle::Circle)
circle.area = π * circle.radius^2
return circle.area
end
@doc raw"""
printCircleEquation(xc::Real, yc::Real, circle::Circle )
Print the equation of a circle with center at (xc, yc) and radius given by circle.
"""
function printCircleEquation(xc::Real, yc::Real, circle::Circle )
println("(x - $xc)^2 + (y -$yc)^2 = \$(circle.radius^2)")
return
end
end # end module
#%%
using .TestModuleTypes
circle1 = Circle{Float64}(5.0)
computePerimeter(circle1)
circle1.perimeter
computeArea(circle1)
circle1.area
printCircleEquation(2, 3, circle1)
This is a simple module which implements a Circle type which contains the radius, perimeter and area of the circle. There are three functions: the first two respectively compute the perimeter and the area of the circle and store them inside the Circle structure. The third function prints the equation of a circle with a given centre and the radius stored inside a Circle structure.
Notice that we could have simply computed the perimeter and area inside the type constructor, but I have chosen not to do so for educative purposes.
# Conclusions
This lesson has been a little bit more conceptually difficult than the previous ones, but you don’t need to remember everything right now! We will use types in the future lessons, so you will naturally get accustomed to how they work over time.
We have learnt how to define abstract and concrete types, and how to define mutable and immutable structures. We have then learnt how it is possible to define functions that work on custom types and we have introduced multiple dispatch. Furthermore, we have seen how to define an inner constructor, to aid the user create an instance of a composite type. Lastly, we have seen an example of a module which uses a custom type (Circle) to perform and store some specific computations.
If you liked this lesson and you would like to receive further updates on what is being published on this website, I encourage you to subscribe to the newsletter! If you have any question or suggestion, please post them in the discussion below!
Thank you for reading this lesson and see you soon on TechyTok!
|
2022-12-09 12:21:16
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3421534597873688, "perplexity": 1344.8766024073107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711396.19/warc/CC-MAIN-20221209112528-20221209142528-00012.warc.gz"}
|
https://www.open.edu/openlearn/mod/oucontent/view.php?id=105535&extra=longdesc_idm46597346553264&clicked=1
|
Figure 4 shows two arrows (or axes). One runs vertical and one runs horizontal, intersecting to make a cross. On the horizontal scale the dimension is labelled from individualism at the left hand side to communitarianism at the right hand side. On the vertical scale, the dimension is labelled from egalitarianism at the bottom to hierarchy at the top.
2.1 Culture and risk
|
2022-01-24 22:19:20
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9636117815971375, "perplexity": 1371.8790108950557}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304686.15/warc/CC-MAIN-20220124220008-20220125010008-00306.warc.gz"}
|
https://socratic.org/questions/how-do-you-find-all-six-trigonometric-function-of-theta-if-the-point-12-5-is-on-
|
# How do you find all six trigonometric functions of theta if the point (12, -5) is on the terminal side of theta?
Jun 7, 2017
$\tan t = \frac{y}{x} = - \frac{5}{12}$
${\cos}^{2} t = \frac{1}{1 + {\tan}^{2} t} = \frac{1}{1 + \frac{25}{144}} = \frac{144}{169}$
$\cos t = \frac{12}{13}$ (t is in Quadrant 4)
$\sin t = \tan t . \cos t = \left(- \frac{5}{12}\right) \left(\frac{12}{13}\right) = - \frac{5}{13}$
$\cot t = \frac{1}{\tan t} = - \frac{12}{5}$
$\sec t = \frac{1}{\cos t} = \frac{13}{12}$
$\csc t = \frac{1}{\sin t} = - \frac{13}{5}$
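As a supplementary cross-check, the same values can be read straight off the point $(12, -5)$ using $r = \sqrt{x^2 + y^2}$:
$r = \sqrt{12^2 + (-5)^2} = \sqrt{169} = 13$
$\sin t = \frac{y}{r} = -\frac{5}{13}$, $\cos t = \frac{x}{r} = \frac{12}{13}$, $\tan t = \frac{y}{x} = -\frac{5}{12}$
in agreement with the working above.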
|
2019-01-19 15:30:19
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 7, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5673890113830566, "perplexity": 1457.8917580765842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583668324.55/warc/CC-MAIN-20190119135934-20190119161927-00010.warc.gz"}
|
https://brilliant.org/discussions/thread/discuss-about-linand-field-theory/
|
# Learning more about ligand field theory
Note by Ayushi Agrawal
5 years, 5 months ago
The valence-bond model and the crystal field theory explain some aspects of the chemistry of the transition metals, but neither model is good at predicting all of the properties of transition-metal complexes. A third model, based on molecular orbital theory, was therefore developed that is known as ligand-field theory. Ligand-field theory is more powerful than either the valence-bond or crystal-field theories. Unfortunately it is also more abstract.
- 5 years, 5 months ago
I wanted to know about the applications of this theory...?
- 5 years, 5 months ago
The following list summarizes the key concepts of Ligand Field Theory.
1. One or more orbitals on the ligand overlap with one or more atomic orbitals on the metal.
2. If the metal- and ligand-based orbitals have similar energies and compatible symmetries, a net interaction exists.
3. The net interaction produces a new set of orbitals, one bonding and the other antibonding in nature. (An * indicates an orbital is antibonding.)
4. Where no net interaction exists, the original atomic and molecular orbitals are unaffected and are nonbonding in nature as regards the metal-ligand interaction.
5. Bonding and antibonding orbitals are of sigma (σ) or pi (π) character, depending upon whether the bonding or antibonding interaction lies along the line connecting the metal and the ligand. (Delta (δ) bonding is also possible, but it is unusual and relatively weak.)
- 5 years, 5 months ago
A more detailed description of bonding in coordination compounds is provided by Ligand Field Theory. In coordination chemistry, the ligand is a Lewis base, which means that the ligand is able to donate a pair of electrons to form a covalent bond. The metal is a Lewis acid, which means it has an empty orbital that can accept a pair of electrons from a Lewis base to form a covalent bond. This bond is sometimes called a coordinate covalent bond or a dative covalent bond to indicate that both electrons in the bond come from the ligand. Did you know that the principles of Ligand Field Theory are similar to those of Molecular Orbital Theory?
- 5 years, 5 months ago
I don't know much about it either, Grace... I wanted to discuss it here because I too want to know about it.
- 5 years, 5 months ago
Ayushi,
Could you help me understand where the valence-bond model and the crystal field theory fall apart with the transition metals? That is sort of where I get lost; do you know anywhere on the web where this is explained?
- 5 years, 5 months ago
In general, please provide some details about what you're describing, so that others will be able to understand what you mean, and be able to respond accordingly. You can edit your original post, to better guide the discussion.
Staff - 5 years, 5 months ago
Ayushi,
Do you mean ligand field theory? It has been touched on in my AP chemistry course, and I would love to talk more about it. In general, the behavior of transition metals is hard for me to keep straight. They form covalent bonds and ionic ones, and coordination complexes seem like a blend of the two. I might be wrong there, but I would like to talk about ligand field theory.
- 5 years, 5 months ago
|
2018-06-21 20:18:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9239423274993896, "perplexity": 2207.0640002507635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864257.17/warc/CC-MAIN-20180621192119-20180621212119-00556.warc.gz"}
|
http://ams.org/bookstore?fn=20&arg1=survseries&ikey=SURV-40-3
|
The Classification of the Finite Simple Groups, Number 3
Daniel Gorenstein, Richard Lyons, Rutgers University, New Brunswick, NJ, and Ronald Solomon, Ohio State University, Columbus, OH
Mathematical Surveys and Monographs
1998; 419 pp; hardcover
Volume: 40
ISBN-10: 0-8218-0391-3
ISBN-13: 978-0-8218-0391-2
List Price: US$96; Member Price: US$76.80
Order Code: SURV/40.3
This book offers a single source of basic facts about the structure of the finite simple groups with emphasis on a detailed description of their local subgroup structures, coverings and automorphisms. The method is by examination of the specific groups, rather than by the development of an abstract theory of simple groups. While the purpose of the book is to provide the background for the proof of the classification of the finite simple groups--dictating the choice of topics--the subject matter is covered in such depth and detail that the book should be of interest to anyone seeking information about the structure of the finite simple groups.
This volume offers a wealth of basic facts and computations. Much of the material is not readily available from any other source. In particular, the book contains the statements and proofs of the fundamental Borel-Tits Theorem and Curtis-Tits Theorem. It also contains complete information about the centralizers of semisimple involutions in groups of Lie type, as well as many other local subgroups.
Readership: Graduate students and research mathematicians interested in the subgroup structure of the finite simple groups of Lie type, the alternating groups and the sporadic simple groups.
Reviews
"This is the third volume in a series in which the authors aim to write down a complete proof of the classification of simple finite groups. This third volume concentrates entirely on various basic properties of the known finite simple groups. The volume is written in the careful, clear and thorough style we have come to expect from the authors. Quite apart from its role in the series, it contains a wealth of information about the known simple groups which is essential for use in applications of finite group theory. For this reason, it will surely stand on its own as a standard text on simple groups."
-- Bulletin of the London Mathematical Society
"The book is carefully written and much of the material presented has uses well beyond the task at hand. There is a wealth of information in this volume, including quite a number of useful tables ... will be a valuable reference for future generations of mathematicians."
-- Mathematical Reviews
• Some theory of linear algebraic groups
• The finite groups of Lie type
• Local subgroups of groups of Lie type. I
• Local subgroups of groups of Lie type. II
• The alternating groups and the twenty-six sporadic groups
• Coverings and embeddings of quasisimple $$\mathcal {K}$$-groups
• General properties of $$\mathcal {K}$$-groups
• Background references
• Expository references
• Errata for numbers 1 and 2
• Glossary
• Index
|
2013-12-08 15:38:27
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2800809144973755, "perplexity": 676.2904850672943}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163066095/warc/CC-MAIN-20131204131746-00006-ip-10-33-133-15.ec2.internal.warc.gz"}
|
https://sni.scientificscholar.in/visualization-of-ictal-networks-using-gamma-oscillation-regularity-correlation-analysis-in-focal-motor-epilepsy-illustrative-cases/
|
Case Report
Surg Neurol Int 2022;13:105. doi: 10.25259/SNI_193_2022
Visualization of ictal networks using gamma oscillation regularity correlation analysis in focal motor epilepsy: Illustrative cases
Department of Neurosurgery, Showa University School of Medicine, Shinagawa-ku, Japan.
Corresponding author: Yosuke Sato, Department of Neurosurgery, Showa University School of Medicine, Shinagawa-ku, Japan. [email protected]
Licence
This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-Share Alike 4.0 License, which allows others to remix, transform, and build upon the work non-commercially, as long as the author is credited and the new creations are licensed under the identical terms.
How to cite this article: Nakamura T, Sato Y, Kobayashi Y, Kawauchi Y, Shimizu K, Mizutani T. Visualization of ictal networks using gamma oscillation regularity correlation analysis in focal motor epilepsy: Illustrative cases. Surg Neurol Int 2022;13:105.
Background:
Focal motor epilepsy is difficult to localize within the epileptogenic zone because ictal activity quickly spreads to the motor cortex through ictal networks. We previously reported the usefulness of gamma oscillation (30–70 Hz) regularity (GOR) correlation analysis using interictal electrocorticographic (ECoG) data to depict epileptogenic networks. We conducted GOR correlation analysis using ictal ECoG data to visualize the ictal networks originating from the epileptogenic zone in two cases — a 26-year-old woman with negative motor seizures and a 53-year-old man with supplementary motor area (SMA) seizures.
Case Description:
In both cases, we captured several habitual seizures during monitoring after subdural electrode implantation and performed GOR correlation analysis using ictal ECoG data. A significantly high GOR suggestive of epileptogenicity was identified in the SMA ipsilateral to the lesions, which were connected to the motor cortex through supposed ictal networks. We resected the high GOR locations in the SMA and the patients’ previously identified tumors were removed. The patients were seizure-free without any neurological deficits after surgery.
Conclusion:
The GOR correlation analysis using ictal ECoG data could be a powerful tool for visualizing ictal networks in focal motor epilepsy.
INTRODUCTION
Focal motor epilepsy typically involves swift and complex motor behaviors,[14,18] and negative motor seizures (NMS) and supplementary motor area (SMA) seizures present with various symptoms.[10] These seizures often have an epileptogenic focus in the mesial or near-mesial frontal lobe.[2] In studies using electrocorticographic (ECoG) data, Ikeda et al.[8] and Ohara et al.[13] suggested that NMS and SMA seizures differ from other focal motor seizures involving the primary motor cortex and that these ictal discharges spread rapidly from the epileptogenic focus to the symptomatic zone, that is, the primary motor cortex. These features make it difficult to accurately diagnose epileptogenic foci in NMS and SMA seizures based on conventional electroencephalography (EEG) findings.[11]
In view of the context of epilepsy as a network disorder,[7] it has been challenging to depict epileptogenic networks using EEG methods such as stereo EEG (SEEG)[1] and magnetoencephalography.[5] In particular, in focal motor seizures, where seizure activity propagates quickly from the epileptogenic focus to the adjacent motor cortex,[3] visualization of the epileptogenic network enables accurate assessment of the epileptogenic focus and improves surgical treatment outcomes.[7] Recent studies have shown that gamma oscillation (30–70 Hz) regularity (GOR) in ECoG data is significantly associated with epileptogenicity in the epileptogenic focus.[15,16] Furthermore, researchers have reported successful intraoperative visualization of epileptogenic networks connecting the lateral temporal lobe to the ipsilateral hippocampus using GOR correlation analysis of ECoG data in a patient with dual foci in temporal lobe epilepsy.[9] In this context, we hypothesized that applying GOR correlation analysis to ictal ECoG data in focal motor epilepsy would make it possible to depict the ictal networks between the epileptogenic focus and the associated motor cortex.
CLINICAL PRESENTATION
Case 1
The patient was a 26-year-old woman who experienced an indescribable aura and subsequent atonic seizures in the right hemibody without loss of consciousness for more than 5 years, which was considered to be NMS. Contrast-enhanced magnetic resonance imaging (MRI) showed a 27 × 21 mm tumor within the left frontal lobe in contact with the SMA. The tumor comprised solid and cystic components and no calcification was observed [Figure 1a]. Iomazenil single-photon emission computed tomography (IMZ-SPECT) showed decreased accumulation in the left prefrontal cortex [Figure 1b]. Interictal scalp EEG revealed no significant epileptic discharge. To evaluate the epileptogenic focus accurately, we performed video/intracranial ECoG monitoring with subdural grid electrodes placed on the left frontal lobe [Figure 1c]. Interictal ECoG showed spikes at electrodes 12 and 13 [Figure 1d]. GOR analysis with interictal ECoG data revealed a significantly high GOR at electrodes 7, 8, 12, and 13 [Figure 1e]. Habitual seizures started with spike activity at electrode 12, followed by seizure activity spreading into electrodes 7, 8, and 13 [Figure 1f]. GOR correlation analysis with ictal ECoG data revealed ictal networks between the epileptogenic focus and the ipsilateral motor cortex [Figure 1g]. These results led us to the diagnosis of intractable NMS with an epileptogenic focus originating from the SMA.
The patient underwent cortical resection of the epileptogenic focus (electrodes 7, 8, 12, and 13) within the SMA and subsequent tumor removal [Figure 1h]. The patient was seizure-free and had no complications. Postoperative pathological examination confirmed the diagnosis of ganglioglioma.
Case 2
The patient was a 53-year-old man who experienced short tonic posturing of the left hand for over 2 years. Contrast-enhanced MRI showed a 9.2 × 9.4 mm tumor at the right mesial frontal lobe, and high intensity was seen in fluid-attenuated inversion recovery (FLAIR) images [Figure 2a]. IMZ-SPECT showed slightly decreased accumulation in the right mesial frontal cortex [Figure 2b]. Interictal scalp EEG revealed no significant epileptic discharge. We performed video/intracranial ECoG monitoring with subdural grid electrodes placed on the right mesial and lateral frontal lobes [Figure 2c]. Interictal ECoG showed fast activity and spikes at electrodes 21 and 22 on the right mesial frontal cortex [Figure 2d]. GOR analysis with interictal ECoG data revealed a significantly high GOR at electrodes 21 and 22 [Figure 2e]. Habitual seizures started with spike activity at electrodes 21 and 22, followed by seizure activity spreading into electrodes 12, 13, 14, 17, 18, and 19 [Figure 2f]. GOR correlation analysis with ictal ECoG data revealed ictal networks between the epileptogenic focus and the ipsilateral premotor and motor cortex [Figure 2g]. These results led us to diagnose intractable SMA seizures. The patient underwent cortical resection of the epileptogenic focus (electrodes 21 and 22) within the SMA with high intensity in FLAIR [Figure 2h]. The patient subsequently became seizure-free and had no complications. Postoperative pathological examination confirmed the diagnosis of anaplastic astrocytoma.
ECoG data recordings
ECoG data were recorded using a Nihon Kohden Neurofax EEG system (Nihon Kohden, Tokyo, Japan) with a bandpass filter from 0.16 to 300 Hz with a sampling rate of 1 kHz. A 60-Hz notch filter was applied to all channels and the sensitivity was between 30 and 100 µV/mm according to the amplitudes of the background activities and epileptic discharges. Recordings were obtained using a reference electrode placed on the forehead. All selected ECoG epochs were inspected to ensure that they were not contaminated by artifacts.
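For readers who want to approximate this recording chain offline, a minimal Python sketch is shown below. It assumes the raw signal is already available as a NumPy array `ecog` with shape (channels, samples) at 1 kHz; the filter design only mirrors the settings described above and is not the Nihon Kohden system's implementation.

```python
import numpy as np
from scipy import signal

FS = 1000  # Hz, sampling rate used for the recordings described above


def preprocess_ecog(ecog: np.ndarray) -> np.ndarray:
    """Apply a 0.16-300 Hz band-pass and a 60 Hz notch filter per channel.

    `ecog` is assumed to have shape (n_channels, n_samples). This is an
    illustrative offline approximation, not the clinical acquisition pipeline.
    """
    # Zero-phase Butterworth band-pass roughly matching the 0.16-300 Hz setting
    sos = signal.butter(4, [0.16, 300.0], btype="bandpass", fs=FS, output="sos")
    filtered = signal.sosfiltfilt(sos, ecog, axis=-1)

    # 60 Hz power-line notch (quality factor 30 is an arbitrary common choice)
    b, a = signal.iirnotch(60.0, Q=30.0, fs=FS)
    return signal.filtfilt(b, a, filtered, axis=-1)
```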
GOR analysis
The detailed algorithm employed for GOR analysis using the sample entropy method has been described in the previous studies.[9,17] In each step of the GOR correlation analysis, we selected 20 s of ECoG data without any significant artifacts. ECoG data were down-sampled to 200 Hz, where the timescale factor (τ) = 3–7 corresponded to the gamma frequency (28.6–66.7 Hz). We defined the GOR as an average score with (τ) = 3–7. The time-series GOR was then obtained by sweeping the 5-s analysis interval by 0.1 s over the entire 10 s (i.e., 51 time-series GOR). The correlation coefficient rij for the time series GOR at electrodes i and j was defined as:
$$r_{ij} = \frac{s_{ij}}{s_i s_j}$$
Here $s_{ij}$ is the covariance of the GOR time series at electrodes $i$ and $j$, and $s_i$ is the standard deviation at electrode $i$. In the network diagram, the threshold was set to 0.7 in this case: an edge was placed between nodes $i$ and $j$ when $r_{ij} \geq 0.7$, and the edge thickness was scaled linearly as $r_{ij}$ ranged from 0.7 to 1. To visually assess the GOR, we color-coded the average GOR over 10 s. These procedures were performed using a custom program developed in cooperation with EFken Inc. (Tokyo, Japan).
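As a rough illustration of the windowing and correlation steps described above, the sketch below computes a 51-point GOR time series per electrode with a 5-s window swept in 0.1-s steps, and then builds a thresholded, weighted adjacency matrix. The sample-entropy-based `gor_score` is deliberately left as a placeholder, so this is only a schematic reconstruction under those assumptions, not the custom program used in the study.

```python
import numpy as np

FS = 200               # Hz, after down-sampling as described above
WIN = 5 * FS           # 5-s analysis window
STEP = int(0.1 * FS)   # 0.1-s sweep -> 51 windows over 10 s of data
THRESHOLD = 0.7


def gor_score(segment: np.ndarray) -> float:
    """Placeholder for the sample-entropy-based GOR of one ECoG segment.

    The actual computation (sample entropy over timescale factors tau = 3-7,
    averaged) is described in the cited references and is not reproduced here.
    """
    raise NotImplementedError


def gor_time_series(channel: np.ndarray) -> np.ndarray:
    """GOR time series from 10 s of one channel's down-sampled data."""
    starts = range(0, len(channel) - WIN + 1, STEP)
    return np.array([gor_score(channel[s:s + WIN]) for s in starts])


def gor_network(gor_ts: np.ndarray) -> np.ndarray:
    """Weighted adjacency matrix from per-electrode GOR time series.

    `gor_ts` has shape (n_electrodes, n_windows). An edge i-j is kept when
    the Pearson correlation r_ij >= 0.7; its weight scales linearly from 0
    at r_ij = 0.7 to 1 at r_ij = 1, mirroring the edge-thickness convention.
    """
    r = np.corrcoef(gor_ts)                    # r_ij = s_ij / (s_i * s_j)
    weights = (r - THRESHOLD) / (1.0 - THRESHOLD)
    weights[r < THRESHOLD] = 0.0
    np.fill_diagonal(weights, 0.0)             # no self-loops
    return weights
```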
DISCUSSION
Focal motor epilepsy is difficult to diagnose because of its very rapid propagation, and abnormalities in scalp EEG often remain undetected.[14,18] Among focal motor epilepsies, SMA seizures and NMS are known to express various worrisome symptoms. The SMA is divided into two areas: the rostral part (pre-SMA) and the dorsal part (SMA-proper). The pre-SMA is connected to the prefrontal cortex. The SMA-proper projects to the primary motor cortex, dorsal premotor cortex and spinal cord. Furthermore, the SMA is suggested to be involved in other functions such as spatial and language processing[6] and is related to negative motor responses (e.g., atonic seizures and speech arrest) in addition to positive motor responses (e.g., convulsions).[13] The negative motor area (NMA) is also separated into two subareas: the primary NMA and the supplementary NMA. These two subareas correspond to area 44 in Brodmann's map and the pre-SMA, respectively.[8] These anatomical and functional complexities make the diagnosis of SMA seizures and NMS very difficult.
We previously reported the usefulness of GOR analysis in locating the epileptogenic focus[9,15-17] and showed that GOR correlation analysis is an effective method to depict the interictal epileptogenic network intraoperatively.[9] In the present study, we applied GOR correlation analysis to ictal ECoG data in two patients with NMS and SMA seizures and revealed the ictal networks between the SMA region corresponding to the epileptogenic focus and the motor areas, which has been difficult to assess using conventional methods. The ability to depict ictal networks in focal motor epilepsy, which is structurally and functionally complex, allows for reasonable and minimally invasive epilepsy surgery. Furthermore, our GOR correlation analysis may be applicable not only to epilepsy but also to the study of motor-related networks.
The brain’s U-fibers, which connect the neighboring cortical regions,[12] play a major role in frontal cortex formation.[4] The fact that these U-fibers are tightly connected to the various motor-associated areas may be related to the very fast propagation of seizure activities in focal motor epilepsy. We assume that the ictal networks visualized with our GOR correlation analysis indicate the connection between the SMA/NMA and motor-associated areas through the U-fibers, although further studies are needed to confirm this.
A limitation of this study is that the networks are presented as an undirected graph; hence, the direction of the seizure propagation cannot be strictly evaluated. As we were able to show that there is a connection between the epileptogenic focus and the motor areas as symptomatic zones and that the removal of such epileptogenic foci resulted in liberation from seizures, we can only indirectly understand that seizure activities start from the epileptogenic focus and subsequently propagate to the motor areas. To solve this problem, we are currently developing a GOR correlation analysis to depict visualized networks as a directed graph. In addition, ECoG data can only be used for planar network analysis. Our goal is to use SEEG data with our GOR correlation analysis to enable three-dimensional network depiction for more minimally invasive epilepsy surgery.
CONCLUSION
GOR correlation analysis using ictal ECoG data as described here could be a very useful method for visualizing ictal networks in focal motor epilepsy.
Declaration of patient consent
Institutional Review Board (IRB) permission obtained for the study.
JSPS KAKENHI Grant Number JP 20K09356.
Conflicts of interest
There are no conflicts of interest.
REFERENCES
1. Defining epileptogenic networks: Contribution of SEEG and signal analysis. Epilepsia. 2017;58:1131-47.
2. Frontal lobe epilepsy. J Clin Neurosci. 2011;18:593-600.
3. Frontal lobe seizures: From clinical semiology to localization. Epilepsia. 2014;55:264-77.
4. Short frontal lobe connections of the human brain. Cortex. 2012;48:273-91.
5. Functional modularity of background activities in normal and epileptic brain networks. Phys Rev Lett. 2010;104:118701.
6. Supplementary motor area as key structure for domain-general sequence processing: A unified account. Neurosci Biobehav Rev. 2017;72:28-42.
7. Epileptogenic network formation. Neurosurg Clin North Am. 2020;31:335-44.
8. Negative motor seizure arising from the negative motor area: Is it ictal apraxia? Epilepsia. 2009;50:2072-84.
9. Intraoperative epileptogenic network visualization using gamma oscillation regularity correlation analysis in epilepsy surgery. Surg Neurol Int. 2021;12:254.
10. Frontal lobe epilepsy: Clinical characteristics, surgical outcomes and diagnostic modalities. Seizure. 2008;17:514-23.
11. Non-invasive electroencephalography evaluation of the irritative zone. In: Textbook of Epilepsy Surgery. London: Informa Healthcare, Taylor and Francis Distributor. p. 530-6.
12. A method for U-fiber quantification from 7T diffusion-weighted MRI data tested in subjects with non-lesional focal epilepsy. Neuroreport. 2017;28:457-61.
13. Propagation of tonic posturing in supplementary motor area (SMA) seizures. Epilepsy Res. 2004;62:179-87.
14. Structural and effective connectivity in focal epilepsy. Neuroimage Clin. 2018;17:943-52.
15. Low entropy of interictal gamma oscillations is a biomarker of the seizure onset zone in focal cortical dysplasia type II. Epilepsy Behav. 2019;96:155-9.
16. Epileptogenic zone localization using intraoperative gamma oscillation regularity analysis in epilepsy surgery for cavernomas: Patient series. J Neurosurg Case Lessons. 2021;1:20121.
17. Spatiotemporal changes in regularity of gamma oscillations contribute to focal ictogenesis. Sci Rep. 2017;7:9362.
18. Rapidly spreading seizures arise from large-scale functional brain networks in focal epilepsy. NeuroImage. 2021;237:118104.
|
2022-05-22 04:19:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5563341975212097, "perplexity": 10547.714293567386}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00667.warc.gz"}
|
https://socratic.org/questions/a-fisherman-reels-in-12-0-m-of-line-while-landing-a-fish-using-a-constant-forwar
|
# A fisherman reels in 12.0 m of line while landing a fish, using a constant forward pull of 25.0 N. How much work does the tension in the line do on the fish?
Nov 8, 2015
$300 \text{J}$
$W = F \times d$
$\therefore W = 25 \times 12 = 300 \text{J}$
|
2019-09-22 23:26:45
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 3, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29475337266921997, "perplexity": 2689.5922232122516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514575751.84/warc/CC-MAIN-20190922221623-20190923003623-00187.warc.gz"}
|
https://mathoverflow.net/questions/208064/introductory-article-of-knot-heegaard-floer-homology
|
# Introductory article of knot Heegaard Floer Homology
I am looking for some article that gives an introduction to Heegaard Floer homology of knot.
I heard that it is very useful to determine the unknotting number of a knot, but I couldn't find any introductory article. (http://arxiv.org/abs/1411.4540 and http://arxiv.org/abs/1003.6041 are Greek to me!)
Can you point me to some introductory article? Could you also please give me some example of calculation of knot Floer homology, say for "small" knots like $3_1$ and $4_1$?
If possible, can you also tell me why Heegaard Floer homology does not determine the unknotting number of $8_{10}$?
(Since this is just a string of references, I do not believe this constitutes a 'real answer' but it is too long for a comment, so I'm placing it in the answer field. Editors, please feel free to correct my etiquette.)
As for a general introduction or survey article, you might also look at these:
1. "An introduction to Heegaard Floer homology" by Ozsvath and Szabo. https://web.math.princeton.edu/~szabo/clay.pdf
Regarding the second part of your reference request, about the calculation of the knot Floer groups of the trefoil or the figure eight; well, these are alternating knots, and so their Floer groups are completely determined by their signature and Alexander polynomials (see Theorem 1.3). However, I think what you are asking for is an explicit calculation from a Heegaard diagram. In the paper "Holomorphic disks and knot invariants" by Ozsvath and Szabo, you can find such a calculation for the trefoil in Section 6.1. However, this is not an introductory article --- it is full strength. You may also benefit from this expository article (in PDF form) written by Andrew Manion. His exposition also contains examples of explicit calculations, especially in Section 3.
Sometimes the grid diagram approach to calculating knot Floer groups makes for a gentler introduction. For that, you might look at the paper "A combinatorial description of knot Floer homology" by Manolescu, Ozsvath and Sarkar. In Section 4 there are explicit calculations for the Hopf link and the trefoil.
Finally, for the third part of your reference request, I don't really understand what you mean by 'does not determine the unknotting number,' but I think you should look at the paper "Knots with unknotting number one and Heegaard Floer homology" by Ozsvath and Szabo, in particular Theorem 1.1 and Corollary 1.2. (The arXiv version is linked). They use the Heegaard Floer homology of the branched double cover of a knot to give an obstruction to that knot having unknotting number one.
They apply their obstruction (the symmetry condition of Theorem 1.1) to show that the alternating knot $8_{10}$ does not have unknotting number one. I am under the impression this knot was already known to have $u(K)\leq2$, therefore they conclude it has unknotting number two.
• This it totally on-topic, so you should not feel bad at all for posting it. It's a great and useful answer! – Andy Putman Jun 1 '15 at 17:22
|
2021-04-18 17:17:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6081574559211731, "perplexity": 324.5609577055807}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038507477.62/warc/CC-MAIN-20210418163541-20210418193541-00025.warc.gz"}
|
https://math.stackexchange.com/questions/1123570/generating-function-for-pell-numbers
|
# Generating function for Pell numbers
Problem: The Pell numbers $p_n$ are defined by the recurrence relation \begin{align*} p_{n+1} = 2p_n + p_{n-1} \end{align*} for $n \geq 1$. The initial conditions are $p_0 = 0$ and $p_1 = 1$.
a) Determine the generating function \begin{align*} P(x) = \sum_{n=0}^{\infty} p_n x^n \end{align*} for the Pell numbers. What is the radius of convergence?
b) Determine (on the basis of the found generating function) an explicit formula for $p_n$.
Solution: Together with the initial conditions we have \begin{align*} P(x) &= 0 + x + \sum_{n=2}^{\infty} p_n x^n \\ &= x + \sum_{n=1}^{\infty} p_{n+1} x^{n+1} \\ &= x + \sum_{n=1}^{\infty} (2p_n + p_{n-1}) x^{n+1} \\ &= x + \sum_{n=1}^{\infty} 2p_n x^{n+1} + \sum_{n=1}^{\infty} p_{n-1} x^{n+1} \\ &= x + 2x \sum_{n=1}^{\infty} p_n x^n + x \sum_{n=1}^{\infty} p_{n-1} x^n. \end{align*} Now we look at each series separately to see what we've got. The first series on the left expands as $(x + p_2 x^2 + p_3 x^3 + ...)$. This is nothing but the original $P(x)$ (because we can ignore the constant term $0$ right?). So for the first series we've got $2x P(x)$.
The second series on the right expands as $(0 + x^2 + p_2 x^3 + ...)$. We can factorize $x$ out such that we get $P(x)$ again. So everything together we have: \begin{align*} P(x) = x + 2xP(x) + x^2 P(x), \end{align*} which gives us \begin{align*} P(x) (1-2x-x^2) = x, \end{align*} or \begin{align*} P(x) = \frac{x}{(1-2x-x^2)}. \end{align*}
But then I don't know how to determine the radius of convergence, and how to do b). Any help would be appreciated.
• Bell numbers are something completely different. – Lucian Jan 28 '15 at 15:57
• I know, I had written 'Pell' numbers first but someone edited it for me... – Kamil Jan 28 '15 at 16:02
• @Lucian: It is my mistake! Thanks for correcting that. – Mhenni Benghorbal Jan 28 '15 at 16:02
• @Kamil $(1-2x-x^2) \neq -(1+x)^2$ – rlartiga Jan 28 '15 at 16:12
• @rlartiga. Thanks, typed in the wrong expression in Maple. – Kamil Jan 28 '15 at 16:20
The radius of convergence is the distance to the nearest singularity
For $(b)$ you can advance as (based on your calculations) using partial fraction
$$P(x)= \frac{x}{(1-2x-x^2)}= \frac{A}{x-a} + \frac{B}{x-b}$$
where $a,b$ are the roots of $1-2x-x^2$ and $A$ and $B$ need to be determined. The calculations gives:
$$\frac{-1-\sqrt{2}}{2 \sqrt{2} (x+\sqrt{2}+1)}+\frac{1-\sqrt{2}}{2 \sqrt{2} (x-\sqrt{2}+1)}$$
Then use the geometric series expansion.
Note:
$$\frac{1}{a-t} = \frac{1}{a}\sum_{n=0}^{\infty} \frac{t^n}{a^n}$$
• $1-2x-x^2 \neq (1-x)^2$ – rlartiga Jan 28 '15 at 16:01
• @rlartiga: It is a typo! Thank you! – Mhenni Benghorbal Jan 28 '15 at 16:03
• What do you mean with singularity? – Kamil Jan 28 '15 at 16:21
• @Kamil: Where the function blows up? For instance what's the singularity of $\frac{1}{1-x}$? So you can see the radius of convergence? – Mhenni Benghorbal Jan 28 '15 at 16:22
• @MhenniBenghorbal I put the calculation of the partial decompostion. Feel free to change it. – rlartiga Jan 28 '15 at 16:24
The terms in your sum are $p_nx^n$. If you solve the recurrence relation, you can find that $p_n$ grows as $r^n$ for some $r$, the larger root of the characteristic equation. You need the terms to decrease faster than $\frac 1n$, so you need $|x| \lt \frac 1r$.
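Pulling the hints together (a sketch of the remaining steps, not claimed by either answer): the roots of $1-2x-x^2$ are $x = -1 \pm \sqrt{2}$, so the singularity closest to the origin is at $\sqrt{2}-1$ and the radius of convergence is $\sqrt{2}-1 \approx 0.414$. Expanding the partial fractions as geometric series term by term gives the explicit formula
$$p_n = \frac{(1+\sqrt{2})^n - (1-\sqrt{2})^n}{2\sqrt{2}},$$
which grows like $(1+\sqrt{2})^n$; this is consistent with the last answer, since $\frac{1}{1+\sqrt{2}} = \sqrt{2}-1$.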
|
2020-09-24 15:56:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9994531273841858, "perplexity": 250.47632900602758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400219221.53/warc/CC-MAIN-20200924132241-20200924162241-00603.warc.gz"}
|
https://calendar.math.illinois.edu/?year=2013&month=08&day=01&interval=next+12+months®exp=Geometry+Seminar
|
# Department of Mathematics
Seminar Calendar
for Geometry Seminar events during the 12 months following Thursday, August 1, 2013.
Questions regarding events or the calendar should be directed to Tori Corkery.
Monday, August 26, 2013
10:00 am in 145 Altgeld Hall,Monday, August 26, 2013
#### de Rham Complexes on Orbit Spaces and Symplectic Quotients
###### Jordan Watts (UIUC Math)
Abstract: Let G be a Lie group acting on a manifold M. If the action is proper and free, then M/G is a manifold which admits a de Rham complex isomorphic to the subcomplex of basic forms on M. We will introduce the notion of a diffeology in order to extend this result to all proper actions. Time permitting, we will then compare this definition to a de Rham complex on a symplectic quotient as defined by Sjamaar.
Tuesday, August 27, 2013
3:00 pm in 243 Altgeld Hall,Tuesday, August 27, 2013
#### Organizational Meeting
###### friends of A. Coble (UIUC)
4:00 pm in 243 Altgeld Hall,Tuesday, August 27, 2013
#### Organizational Meeting
Abstract: We will discuss the structure and schedule of the seminar. We'll be taking speakers slots too, so if you'd like to give a talk or know someone who would, please come. There will be cookies.
Tuesday, September 3, 2013
1:00 pm in 243 Altgeld Hall,Tuesday, September 3, 2013
#### Positively curved Alexandrov spaces with many symmetries
###### John Harvey (Notre Dame)
Abstract: I will introduce two new tools -- the ramified orientable double cover and the slice theorem -- for Alexandrov geometry. These will be used to classify positively curved Alexandrov spaces under certain symmetry conditions, shedding new light on similar Riemannian results. This is joint work with Catherine Searle.
3:00 pm in 243 Altgeld Hall,Tuesday, September 3, 2013
#### The Hitchin fibration and real forms through spectral data
###### Laura Schaposnik (UIUC)
Abstract: The talk will be dedicated to the study of the moduli space of G-Higgs bundles and the Hitchin fibration through spectral data, where G is a real form of a complex Lie group. Through some examples we shall see applications of this new geometric way of understanding the moduli space and, time permitting, we will mention how the data approach relates to Langlands duality and (A,B,A)-branes.
4:00 pm in 243 Altgeld Hall,Tuesday, September 3, 2013
#### CANCELLED
Thursday, September 5, 2013
2:00 pm in 245 Altgeld Hall,Thursday, September 5, 2013
#### Cohomology of the moduli space of curves
###### Rahul Pandharipande (ETH Zürich)
Abstract: The moduli space of curves carries tautological cohomology classes. I will discuss the study of relations amongst these classes starting with ideas of Mumford in the 1980s. The subject advanced in the 1990s with conjectures of Faber and Faber-Zagier. I will explain the current state of affairs based on Pixton's conjectures related to cohomological field theories. The talk represents joint work with A. Pixton and D. Zvonkine.
Monday, September 9, 2013
10:00 am in 145 Altgeld Hall,Monday, September 9, 2013
#### Some interactions between classical, semiclassical, and random symplectic geometry
###### Alvaro Pelayo (Washington University Math)
Abstract: I will describe some recent results about classical and quantum integrable systems, emphasizing the interplay between symplectic geometry and semiclassical analysis. I will also briefly describe some random counterparts of classical results in symplectic geometry.
Tuesday, September 10, 2013
2:00 pm in 243 Altgeld Hall,Tuesday, September 10, 2013
#### On elliptesque and hyperbolesque curves
###### Bruce Reznick (UIUC Math)
Abstract: Any five points in the plane (no four on a line) determine a unique conic section. What can be said about a curve $C$ with the property that any five points chosen from $C$ either always determine an ellipse (or circle) or always determine a hyperbola? Such a curve is "elliptesque" or "hyperbolesque". Non-trivial examples include $y = x^3, x \ge 0$, which is hyperbolesque and $y = x^{3/2}, 1 \le x \le 1.3$, which is elliptesque. We show that if a smooth closed curve $C$ satisfies either condition, then it must be elliptesque and bound a convex region; no unbounded smooth curve can be elliptesque. Proofs are elementary.
The opening act of this talk is a discussion of the remarkable differential equation $((y'')^{-2/3})'''=0$; Sylvester observed in 1886 that the solutions to this equation are precisely the non-degenerate conic sections, simplifying a result originally proved by Monge in 1809. Two proofs of this will be given, and both are readily accessible to undergraduate math majors who have had calculus as well as linear algebra.
Departmental veterans will recognize this as a "Potpourri" talk.
3:00 pm in 245 Altgeld Hall,Tuesday, September 10, 2013
#### Symplectic Galois groups and Springer theory
###### Kevin McGerty (University of Oxford)
Abstract: One of the fundamental phenomena in geometric representation theory is Springer's action of the Weyl group on the cohomology of the fibres of the Springer resolution of the nilpotent cone. Recently there has been much interest in the geometry of symplectic resolutions, of which the Springer resolution is an example. We will discuss how Springer theory can be generalized to this setting.
Monday, September 16, 2013
10:00 am in 145 Altgeld Hall,Monday, September 16, 2013
#### Topological Hamiltonian and contact dynamics, part I: an introduction
###### Stefan Mueller (UIUC Math)
Abstract: In classical mechanics, the dynamics of a Hamiltonian vector field models the motion of particles in phase space, and the dynamics of a contact vector field play a similar role in geometric optics (in the mathematical model of Huygens' principle). Topological Hamiltonian dynamics and topological contact dynamics are relatively recent theories that explore natural questions regarding the regularity of such dynamical systems (on an arbitrary symplectic or contact manifold). In a nutshell, Hamiltonian and contact dynamics admit genuine generalizations to non-smooth dynamical systems with non-smooth generating (contact) Hamiltonian functions. The talk begins with examples that illustrate the central ideas and lead naturally to the key definitions. The main technical ingredient is the well-known energy-capacity inequality for displaceable subsets of a symplectic manifold. We use it to prove an extension of the classical 1-1 correspondence between isotopies and their generating Hamiltonians. This crucial result turns out to be equivalent to certain rigidity phenomena for smooth Hamiltonian and contact dynamical systems. We then look at some of the foundational results of the new theories. The end of the talk touches upon sample applications to topological dynamics and to Riemannian geometry, which will be explored further in a second talk.
Tuesday, September 17, 2013
1:00 pm in 243 Altgeld Hall,Tuesday, September 17, 2013
#### Quasi-Regular Mappings of Lens Spaces
###### Anton Lukyanenko (UIUC Math)
Abstract: A quasi-regular (QR) mapping between metric manifolds is a branched cover with bounded dilatation, e.g. $f(z)=z^2$. In a joint work with K. Fassler and K. Peltonen, we define QR mappings of sub-Riemannian manifolds and show that:
1) Every lens space admits a uniformly QR (UQR) mapping $f$.
2) Every UQR mapping leaves invariant a measurable conformal structure.
The first result uses an explicit "conformal trap" construction, while the second builds on similar results by Sullivan-Tukia and a connection to higher-rank symmetric spaces.
4:00 pm in 243 Altgeld Hall,Tuesday, September 17, 2013
#### An Introduction to Tropical Geometry
###### Nathan Fieldsteel (UIUC Math)
Abstract: This will be the first talk in a two-part series in which we will give a broad overview of the relatively young field of tropical geometry, aiming to introduce the central objects of study while providing motivation, examples and connections to other fields. Professors are welcome to attend.
Monday, September 23, 2013
10:00 am in 145 Altgeld Hall,Monday, September 23, 2013
#### Hamiltonian and contact dynamics, part II: applications
###### Stefan Mueller (UIUC Math)
Abstract: After recalling the precise definition of a topological Hamiltonian dynamical system, I will sketch the proof of the 1-1 correspondence between topological Hamiltonian isotopies and topological Hamiltonian functions. I also show that this result has non-empty content by constructing a non-smooth topological Hamiltonian dynamical system (with support in a Darboux chart). We then shift gears and focus on two sample applications to 1) hydrodynamics (topological character of the helicity invariant, which measures the average asymptotic linking number of the flow lines of a divergence-free vector field) and to 2) Riemannian geometry (C^0-rigidity of the geodesic flows associated to a sequence of weakly uniformly converging Riemannian metrics).
Tuesday, September 24, 2013
1:00 pm in 243 Altgeld Hall,Tuesday, September 24, 2013
#### Divergence of Weil-Petersson geodesic rays
###### Babak Modami (UIUC Math)
Abstract: The Weil-Petersson (WP) geodesic flow is a non-uniformly hyperbolic flow on the moduli space of Riemann surfaces. We review some results about a kind of symbolic coding of the flow using laminations and subsurface coefficients. Then we apply some estimates on the WP metric and its derivatives in the thin part of moduli space to show that the strong asymptotics of a class of WP geodesic rays is determined by the associated laminations. As a result we give a symbolic condition for divergence of WP geodesic rays in the moduli space.
3:00 pm in 243 Altgeld Hall,Tuesday, September 24, 2013
#### On S-duality and T-duality and algebro-geometric proof of modularity conjectures in BPS counting theories
###### Artan Sheshmani (Ohio State)
Abstract: We construct an algebraic-geometric framework to calculate the partition functions of "massive black holes" enumerating invariants of supersymmetric D4-D2-D0 BPS states in type IIA string theory. Using S-duality, the entropy of such black holes can be related to a certain N=2, d=4 Super Yang-Mills theory on a divisor in a threefold. The physicists Gaiotto, Strominger, Yin, Denef, and Moore, via careful study of such S-duality, have conjectured that these partition functions have modular properties. We give a rigorous mathematical proof of their conjectures in different geometric setups. This is a report of a joint project with Amin Gholampour and Richard Thomas. We also use an algebro-geometric analogue of the string theoretic D4/D2 T-duality to prove the modularity properties of certain PT stable pair invariants over threefolds given by smooth and nodal surface fibrations over a curve. Here our strategy is to use a combination of degeneration techniques, conifold transitions, and wall crossing of Bridgeland stability conditions. This is a report of a joint project with Gholampour and Yukinobu Toda.
4:00 pm in 243 Altgeld Hall,Tuesday, September 24, 2013
#### An Introduction to Tropical Geometry: Part II
###### Nathan Fieldsteel (UIUC Math)
Abstract: A continuation of last week's seminar, we will begin by tying up some loose ends from last time. We will then present more of the general theory of tropical geometry, and discuss connections to polyhedral geometry, hyperplane arrangements, and grassmanians, time permitting. Professors are welcome to attend.
Monday, September 30, 2013
10:00 am in 145 Altgeld Hall,Monday, September 30, 2013
#### Positive loops and orderability in contact geometry
###### Peter Weigel (Purdue University Math)
Abstract: Orderability of contact manifolds is related in some non-obvious ways to the topology of a contact manifold V. We know, for instance, that if V admits a 2-subcritical Stein filling, it must be non-orderable. By way of contrast, in this talk I will discuss ways of modifying Liouville structures for high-dimensional V so that the result is always orderable. The main technical tool is a Morse-Bott Floer theoretic growth rate, which has some parallels with Givental's nonlinear Maslov index. I will also discuss a generalization to the relative case, and applications to bi-invariant metrics on Cont(V).
Tuesday, October 1, 2013
4:00 pm in 243 Altgeld Hall,Tuesday, October 1, 2013
#### Introduction to Grothendieck Topologies
###### Juan S. Villeta-Garcia (UIUC Math)
Abstract: We will introduce Grothendieck topologies, sites, sheaves on them, and their cohomology. Examples will be taken from scheme theory and commutative algebras. The exposition will be basic and aimed at beginners (such as the speaker). This will be the first of a two-part talk. Professors are welcome to attend.
Monday, October 7, 2013
10:00 am in 145 Altgeld Hall,Monday, October 7, 2013
#### Lie Algebroid Spray
###### Songhao Li (Washington University Math)
Abstract: Analogous to the spray in Riemannian geometry, we introduce the Lie algebroid spray, or A-spray. A special case is the Poisson spray as introduced by Crainic and Marcut. As an application, we show that the source-simply-connected symplectic groupoid of a log symplectic surface is diffeomorphic to the cotangent bundle in such a way that the source map coincides with the bundle projection. (Joint work in progress with Marco Gualtieri)
Tuesday, October 8, 2013
3:00 pm in 243 Altgeld Hall,Tuesday, October 8, 2013
#### Stacky Resolutions of Singularities
###### Matthew Satriano (University of Michigan)
Abstract: We will discuss a technique which allows one to approximate singular varieties by smooth spaces called stacks. As an application, we will address the following question, as well as some generalizations: given a linear action of a group G on complex n-space C^n, when is the quotient C^n/G a singular variety? We will also mention some applications to Hodge theory and to derived equivalences.
4:00 pm in 243 Altgeld Hall,Tuesday, October 8, 2013
#### Introduction to Grothendieck Topologies
###### Juan S. Villeta-Garcia (UIUC Math)
Abstract: We will introduce Grothendieck topologies, sites, sheaves on them, and their cohomology. Examples will be taken from scheme theory and commutative algebras. The exposition will be basic and aimed at beginners (such as the speaker). Professors are welcome to attend.
Monday, October 14, 2013
10:00 am in 145 Altgeld Hall,Monday, October 14, 2013
#### A generalization of the group of Hamiltonian homeomorphisms
###### Augustin Banyaga (Pennsylvania State University Math)
Abstract: The Eliashberg-Gromov rigidity theorem implies that Symplectic Geometry underlies a topology. This talk is about the automorphism groups of this "continuous" symplectic topology. The group of symplectic homeomorphisms (Sympeo) has a remarkable subgroup: the group of Hamiltonian homeomorphisms (Hameo), defined by Oh and Müller using the $L^{(1,\infty)}$ Hofer norm. We introduce a generalization of Hameo, called the group of strong symplectic homeomorphisms (SSympeo), using a generalization of the Hofer norm from the group of Hamiltonian diffeomorphisms to the whole group of symplectic diffeomorphisms. Both Hameo and SSympeo also have an $L^\infty$ version. The two versions coincide (Müller, Banyaga-Tchuiaga).
Tuesday, October 15, 2013
3:00 pm in 243 Altgeld Hall,Tuesday, October 15, 2013
#### Genera and derived algebraic geometry
###### Nick Rozenblyum (Northwestern)
Abstract: We will describe an approach, motivated by quantum field theory, to describe invariants of algebraic varieties using derived algebraic geometry. In particular, we will describe a version of non-abelian duality that can be used to produce volume forms on derived mapping spaces. Integration of these volume forms produces interesting invariants such as the Todd genus, the Witten genus and the B-model operations on Hochschild homology.
4:00 pm in 243 Altgeld Hall,Tuesday, October 15, 2013
#### The Geometry of Filtered Quiver Varieties
###### Mee Seong Im (UIUC Math)
Abstract: Invariant theory has connections to many areas of mathematics: to name a few, Higgs bundles, David Mumford's geometric invariant theory and Hilbert schemes in algebraic geometry, Nakajima's quiver variety in representation theory, the Hamiltonian reduction construction in symplectic geometry, combinatorics, graph theory, coding theory, DNA strand configuration, and fingerprint technology. Around the 1990s, Aidan Schofield and a number of other mathematicians introduced and extended the study of classical invariant theory to quiver varieties. I will discuss the evolution of invariant theory, invariant theory in geometric representation theory, some results and conjectures, and interesting applications.
Monday, October 21, 2013
10:00 am in 145 Altgeld Hall,Monday, October 21, 2013
#### On the Topological Dynamics Arising from a Contact Form
###### Peter Spaeth (Pennsylvania State University Math)
Abstract: Stefan Müller and Yong-Geun Oh introduced the Hamiltonian metric on the group of Hamiltonian isotopies of a symplectic manifold, and with it defined the groups of topological Hamiltonian isotopies and homeomorphisms. With Augustin Banyaga we introduced the contact metric on the space of strictly contact isotopies of a contact manifold, and defined the groups of topological strictly contact isotopies and homeomorphisms in a similar manner. In the talk I will explain how the one to one correspondence between smooth strictly contact isotopies and generating contact Hamiltonian functions extends to their topological counterparts when the contact form is regular. I will also prove that the group of diffeomorphisms that preserve a contact form is rigid in the sense of Eliashberg-Gromov. This last result is joint with Müller.
Tuesday, October 22, 2013
4:00 pm in 243 Altgeld Hall,Tuesday, October 22, 2013
#### Propaganda for Higgs Bundles
###### Brian Collier (UIUC Math)
Abstract: The goal of the talk is to introduce the Hitchin System associated to the moduli space of Higgs bundles and the spectral data associated to it. The talk will introduce/review some facts about the moduli spaces of holomorphic vector bundles and Einstein metrics and illustrate how Higgs bundles generalize this picture.
Monday, October 28, 2013
10:00 am in 145 Altgeld Hall,Monday, October 28, 2013
#### All boundaries of contact type can keep secrets
###### Ely Kerman (UIUC Math)
Abstract: Let $(M, \omega)$ be a symplectic manifold with nonempty boundary, $W$. The restriction of $\omega$ to $W$, $\omega_W$, has a one dimensional kernel which defines the characteristic foliation of $W$. If $W$ is a boundary of contact type then it admits a tubular neighborhood comprised of hypersurfaces whose characteristic foliations are all conjugate to those of $W$. Since these hypersurfaces lie in the interior one might guess (or hope) that the interior of $(M, \omega)$ determines $\omega_W$ or at least some of its symplectic invariants. Several questions in this direction were raised by Eliashberg and Hofer in the early nineties. In this talk I will describe the resolution of some of these questions. I will prove that neither $\omega_W$ nor its action spectrum is determined by the interior of $(M, \omega)$. This involves the construction of a new dynamical symplectic plug. The construction uses only soft techniques (Moser's method) and so should hopefully be accessible to all.
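A notational gloss (mine, not part of the abstract): at a point $p \in W$ the characteristic direction is the kernel
$$\ker(\omega_W)_p = \{ v \in T_pW : \omega_p(v,u) = 0 \text{ for all } u \in T_pW \},$$
which is one-dimensional because $W$ is an odd-dimensional hypersurface of the symplectic manifold $(M, \omega)$.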
Tuesday, October 29, 2013
3:00 pm in 243 Altgeld Hall,Tuesday, October 29, 2013
#### Centers and traces of the categorified affine Hecke algebra (or, some tricks with coherent complexes on the Steinberg variety)
###### Anatoly Preygel (Berkeley)
Abstract: This is a talk on some tricks and constructions on categories of bounded coherent complexes on nice stacks. The goal of the talk will be to explain how "proper descent with singular-support conditions" gives a framework for getting interesting answers when computing (dg-categorical) invariants of the circle by chopping it into intervals. Our main application will be to the affine Hecke category in geometric representation theory: The Steinberg (derived) stack parametrizes G-local systems on an annulus with B-reductions on the boundary. Its dg-category of bounded coherent complexes is monoidal, and categorifies the affine Hecke algebra in representation theory. We'll see how to identify the trace of this monoidal category with a full subcategory of bounded coherent complexes on Loc_G(torus), cut out by a nilpotent micro support condition. This is joint work with Ben-Zvi and Nadler.
4:00 pm in 243 Altgeld Hall,Tuesday, October 29, 2013
#### Moduli of Elliptic Curves
###### Peter Nelson (UIUC Math)
Abstract: I'll talk about various sorts of moduli things of elliptic curves, and how you compute things about them. There might even be some computations!
Wednesday, October 30, 2013
1:00 pm in Altgeld Hall,Wednesday, October 30, 2013
#### Fixed points of symplectic circle actions
###### Donghoon Jang (UIUC Math)
Abstract: The study of fixed points of maps is a classical and important topic in geometry and topology. During this talk, we focus on the fixed points of maps in the case where manifolds admit symplectic structures and circle actions on the manifolds preserve the symplectic structures. We discuss the main theorems on fixed points of symplectic circle actions and the techniques used to study them: the ABBV localization formula and the Atiyah-Singer index formula.
Monday, November 4, 2013
10:00 am in 145 Altgeld Hall,Monday, November 4, 2013
#### Semi-toric systems as Hamiltonian S^1-spaces
###### Daniele Sepe (Utrecht University Math)
Abstract: The classification of completely integrable Hamiltonian systems on symplectic manifolds is a driving question in the study of Hamiltonian mechanics and symplectic geometry. From a symplectic perspective, such systems correspond to Hamiltonian R^n-actions which are locally toric. The class of integrable Hamiltonian systems on 4-dimensional symplectic manifolds corresponding to Hamiltonian S^1 x R actions (with some extra assumptions on the singularities) is known as semi-toric: it was introduced by Vu Ngoc, and Pelayo and Vu Ngoc obtained a classification for "generic" semi-toric systems. From such a system one obtains a 4-dimensional manifold with a Hamiltonian S^1-action by restricting the action: when the underlying symplectic manifold is closed, Karshon classified these spaces in terms of a labelled graph. This talk aims at explaining how, starting from a semi-toric system on a closed 4-dimensional symplectic manifold, Karshon's invariants of the underlying Hamiltonian S^1-space can be recovered using the notion of "polygons with monodromy" introduced by Vu Ngoc. This should be thought of as analogous to the procedure to obtain Karshon's invariants from Delzant polygons in the case of symplectic toric manifolds. This is joint work with Sonja Hohloch (EPFL) and Silvia Sabatini (IST Lisbon), and part of a longer term project to study Hamiltonian S^1 x R actions on closed 4-dimensional manifolds.
Tuesday, November 5, 2013
1:00 pm in 243 Altgeld Hall,Tuesday, November 5, 2013
#### Unicorns and Beyond
###### Sebastian Hensel
Abstract: In this talk, I will first present joint work with Piotr Przytycki and Richard Webb giving a new short proof of uniform hyperbolicity of curve and arc graphs. Namely, I will describe unicorn paths in arc and curve graphs and show that they form 1-slim triangles. Using this, one can deduce that arc graphs are 7-hyperbolic (and curve graphs are 17-hyperbolic). I will then overview some other results which, in a similar vein, give quick and purely topological-combinatorial proofs of curve graph results. If time permits, I will explain how such proofs can sometimes be adapted to work in the Out(F_n) setting.
3:00 pm in 243 Altgeld Hall,Tuesday, November 5, 2013
#### Local cohomology with support in generic determinantal ideals
###### Claudiu Raicu (Princeton University)
Abstract: The space $Mat(m,n)$ of $m\times n$ matrices admits a natural action of the group $\textrm{GL}_m \times \textrm{GL}_n$ via row and column operations on the matrix entries. The invariant closed subsets are the closures of the orbits of constant rank matrices. I will explain how to describe the local cohomology modules of the ring $S$ of polynomial functions on $Mat(m,n)$ with support in these orbit closures, and mention some consequences of the methods employed to computing minimal free resolutions of invariant ideals in $S$. These ideals correspond to nilpotent scheme structures on the orbit closures, and their study goes back to the work of De Concini, Eisenbud and Procesi in the 80s. Joint work with Jerzy Weyman.
4:00 pm in 243 Altgeld Hall,Tuesday, November 5, 2013
#### Equations of Parametric Curves and Surfaces via Syzygies
###### Eliana Duarte (UIUC Math)
Abstract: The problem of finding an implicit equation of a parametric curve or surface, known as the Implicitization Problem, dates back to 1862. The method of eliminating parameters and the use of resultants were the main tools to find implicit equations. In this talk I will explain Sederberg's method (1997) of using syzygies to compute the implicit equation of a parametric curve or surface.
Thursday, November 7, 2013
3:00 pm in 243 Altgeld Hall,Thursday, November 7, 2013
#### Intersection Multiplicity of Serre in the Unramified Case
###### Chris Skalit (University of Chicago)
Abstract: Let $A$ be a regular local ring whose completion is a power series ring over a DVR. For properly-meeting subschemes of complementary dimension, $Y, Z \subseteq \operatorname{Spec} A$, we show that the Serre intersection multiplicity, $\chi(\mathcal{O}_{Y},\mathcal{O}_Z) = \sum{(-1)^i \ell (\operatorname{Tor}_i^A(\mathcal{O}_Y,\mathcal{O}_Z))}$, is bounded below by the product of the multiplicities of $Y$ and $Z$. For those cases in which this bound is achieved, we investigate the implications it has for $Y, Z$, and their strict transforms on the blowup.
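As a minimal sanity check of the bound (my example, not taken from the abstract): let $A=\mathbb{Z}_p[[x]]$, whose completion is a power series ring over the DVR $\mathbb{Z}_p$, and take $Y=V(x)$, $Z=V(p)$. Since $(x,p)$ is a regular sequence, $\operatorname{Tor}_i^A(\mathcal{O}_Y,\mathcal{O}_Z)=0$ for $i>0$, so
$$\chi(\mathcal{O}_Y,\mathcal{O}_Z)=\ell\big(A/(x,p)\big)=\ell(\mathbb{F}_p)=1=e(Y)\,e(Z),$$
and the lower bound is attained with equality.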
Monday, November 11, 2013
10:00 am in 145 Altgeld Hall,Monday, November 11, 2013
#### Integration of Exact Courant algebroids
###### Xiang Tang (Washington University Math)
Abstract: In this talk, we will discuss some recent progress about the problem of integration of exact Courant algebroids. We construct an infinite-dimensional symplectic 2-groupoid as the integration of an exact Courant algebroid. We show that every integrable Dirac structure integrates to a "Lagrangian" sub-2-groupoid of this symplectic 2-groupoid.
Tuesday, November 12, 2013
1:00 pm in 243 Altgeld Hall,Tuesday, November 12, 2013
#### Elliptic Actions on Teichmüller Space
###### Matthew Durham (UIC Math)
Abstract: Kerckhoff's solution to the Nielsen realization problem showed that the action of any finite subgroup of the mapping class group on Teichmüller space has a fixed point. The set of fixed points is a totally geodesic submanifold. We study the coarse geometry of the set of points which have bounded diameter orbits in the Teichmüller metric. We show that each such almost-fixed point is within a uniformly bounded distance of the fixed point set, but that the set of almost-fixed points is not quasiconvex. In addition, the orbit of any point is shown to have a fixed barycenter. In this talk, I will discuss the machinery and ideas used in the proofs of these theorems.
3:00 pm in 243 Altgeld Hall,Tuesday, November 12, 2013
#### To Be Announced
###### Chunyi Li (UIUC)
3:00 pm in 243 Altgeld Hall,Tuesday, November 12, 2013
#### MMP for deformed Hilbert scheme of points on projective plane
###### Chunyi Li (UIUC)
Abstract: The idea of running the minimal model program for the moduli space of sheaves via the wall-crossing of Bridgeland stability conditions is, as far as I know, first introduced by Toda. In the Hilb^n P2 case, a strong-form conjecture about the correspondence between the base locus decomposition walls for the effective cone of Hilb^n P2 and the destabilizing walls on the stability condition plane was posed by Arcara, Bertram, Coskun and Huizenga. In this talk, I will introduce the stability condition on D^b(coh P2) and the birational geometry of (deformed) Hilb^n P2. I will also state our theorem, which proves the ABCH conjecture and generalizes the result to the deformed Hilb^n P2 case.
4:00 pm in 243 Altgeld Hall,Tuesday, November 12, 2013
###### CANCELLED
Monday, November 18, 2013
10:00 am in 145 Altgeld Hall,Monday, November 18, 2013
#### A normal form theorem around symplectic leaves
###### Ioan Marcut (UIUC Math)
Abstract: In this talk, I will discuss a normal form result in Poisson geometry, which generalizes Conn's theorem from fixed points to arbitrary symplectic leaves. The local model, at least in the integrable case, coincides with the local model of a free and proper Hamiltonian action around the zero set of the moment map. The result is joint work with Marius Crainic.
Tuesday, November 19, 2013
2:00 pm in 243 Altgeld Hall,Tuesday, November 19, 2013
#### Escape paths of Besicovitch triangles (revisited)
###### Yevgenya Movshivich (EIU Math)
Abstract: An escape path of an oval is the shortest path that does not fit in the interior of the oval. In 1965, A. S. Besicovitch conjectured that a certain symmetric unit $z$-arc is an escape path for the equilateral triangle of side $\sqrt{28/27}$. The conjecture was proven in "Besicovitch triangles cover unit arcs", Geom. Dedicata, 123 (2006) by P. Coulton and Y. M. for a family of Besicovitch isosceles triangular covers of unit arcs. The base angle, alpha, there ranged from about 52.2 degrees to 60 degrees. The low limit of this range was changed to 45 degrees in "Besicovitch triangles extended", Geom. Dedicata, 159 (2012), by Y. M. Having just one escape unit arc means that this cover of unit arcs is minimal (tight). In the spring of 2008, in two separate talks by P. Coulton and by Y. Movshovich, it was announced that a family of non-isosceles triangular covers of unit arcs (containing all Besicovitch isosceles triangular covers) had been found and that the isosceles covers had infinitely many escape unit paths. A few months later we discovered that the pure non-isosceles covers are not minimal: all unit arcs fit in their interior, so they have no escape unit paths. At the same time, each isosceles cover had a $Z$-arc as its only escape unit path. We will present a geometric argument supporting this last statement and conjecture on the sizes of the non-isosceles triangular covers of unit arcs that would make them minimal.
3:00 pm in 243 Altgeld Hall,Tuesday, November 19, 2013
#### A classification of extremal Lagrangian planes
###### Benjamin Bakker (Courant Institute)
Abstract: Classically, an extremal class $R$ in the cone of effective curves on a K3 surface $X$ is representable by a smooth rational curve if and only if $R^2=-2$. Settling a conjecture of Hassett and Tschinkel, we prove the natural generalization to higher dimensions: for a holomorphic symplectic variety $M$ deformation equivalent to a Hilbert scheme of $n$ points on a K3 surface, an extremal effective curve class $R$ sweeps out a Lagrangian $n$-plane if and only if certain intersection-theoretic criteria are met, including $(R,R)=-(n+3)/2$. The proof uses recent work of Bayer and Macri to represent effective cycles in moduli spaces of sheaves using Bridgeland stability conditions.
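One consistency check, spelled out here for orientation (it is implicit in, but not stated by, the abstract): for $n=1$ the variety $M$ is deformation equivalent to the K3 surface itself, a Lagrangian $1$-plane is a smooth rational curve, and the criterion $(R,R)=-(n+3)/2=-2$ recovers the classical condition $R^2=-2$ from the first sentence.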
4:00 pm in 243 Altgeld Hall,Tuesday, November 19, 2013
#### Introduction to Grothendieck Topologies: Part II
###### Juan S. Villeta-Garcia (UIUC Math)
Abstract: We will continue our discussion of Grothendieck topologies, focusing on the etale site, and its associated cohomology. We'll begin with an introduction to etale morphisms and why we care about them. We will draw our examples from the cohomology of curves. The exposition will be basic and aimed at beginners (such as the speaker). Professors are welcome to attend.
Monday, December 2, 2013
10:00 am in Altgeld Hall,Monday, December 2, 2013
#### Relative equilibria and vector fields on stacks
###### Eugene Lerman (UIUC Math)
Abstract: TBA
Tuesday, December 3, 2013
3:00 pm in 243 Altgeld Hall,Tuesday, December 3, 2013
#### Construction of the second flip of $M_{g}$
###### David Smyth (ANU)
Abstract: I will discuss aspects of the construction of the second flip in the log minimal model program for $M_{g}$ (joint with Alper, Fedorchuk, van der Wyck). I will focus on the way in which formal local VGIT is used to construct the second flip as an algebraic space.
4:00 pm in 243 Altgeld Hall,Tuesday, December 3, 2013
#### The Étale Fundamental Group
###### Matej Penciak (UIUC Math)
Abstract: The purpose of this talk is to introduce the étale fundamental group of a scheme. Taking the Galois theory of fields and the theory of covering spaces as our guides, we will explore their generalizations to the setting of schemes. After defining the étale fundamental group, we will give an idea of how these groups may be computed.
Monday, December 9, 2013
10:00 am in 145 Altgeld Hall,Monday, December 9, 2013
#### Morse Theory and the Moduli Space of Curves
###### Susan Tolman (UIUC Math)
Abstract: Based on joint work with Bott and Weitsman, we will explain how to use Morse theory to calculate the Betti number of reduced spaces for proper Hamiltonian loop-group actions, such as the moduli space of curves.
Tuesday, December 10, 2013
3:00 pm in 243 Altgeld Hall,Tuesday, December 10, 2013
#### Almost purity theorem with applications to the homological conjectures - Part I
###### Kazuma Shimomoto (Meiji University)
Abstract: I will talk about almost purity theorem proved by Davis and Kedlaya with applications to the homological conjectures in local algebra. The almost purity theorem originates from p-adic Hodge theory by Faltings. I will also talk about its brief history and then construct a big Cohen-Macaulay algebra under some special condition.
4:00 pm in 243 Altgeld Hall,Tuesday, December 10, 2013
#### An Introduction to Boij-Söderberg Theory
###### Matt Mastroeni (UIUC Math)
Abstract: Let $k$ be a field. The aim of the talk is to give sufficient background on free resolutions and graded Betti numbers over the polynomial ring $k[x_1, \dots, x_n]$ in order to state the Boij-Söderberg Conjectures, which were proved in 2008 by Eisenbud and Schreyer. I will also explain how this answers the Multiplicity Conjecture of Herzog, Huneke, and Srinivasan and give an example illustrating the conjectures. Time permitting, I might say a few words about the proof of the Boij-Söderberg Conjectures, but the details will be reserved for a future talk.
Thursday, December 12, 2013
3:00 pm in 243 Altgeld Hall,Thursday, December 12, 2013
#### Almost purity theorem with applications to the homological conjectures - Part II
###### Kazuma Shimomoto (Meiji University)
Abstract: I will talk about almost purity theorem proved by Davis and Kedlaya with applications to the homological conjectures in local algebra. The almost purity theorem originates from p-adic Hodge theory by Faltings. I will also talk about its brief history and then construct a big Cohen-Macaulay algebra under some special condition.
Monday, February 3, 2014
3:00 pm in 145 Altgeld Hall,Monday, February 3, 2014
#### Imaginary time flow in geometric quantization and in Kahler geometry, degeneration to real polarizations and tropicalization
###### Jose Mourao (Instituto Superior Técnico)
Abstract: We will recall the problem of dependence of quantization of a symplectic manifold on the choice of polarization and study its relation with geodesics in the space of Kahler metrics. Complex one-parameter subgroups of the "group" of complexified Hamiltonian symplectomorphisms appear naturally in this context. For some classes of symplectic manifolds we will describe geodesic rays of Kahler structures degenerating to real polarizations and study the associated metric collapse. Each such ray selects a basis of holomorphic sections which converge to distributional sections supported on Bohr-Sommerfeld fibers as the geodesic time goes to infinity. The same geodesic rays lead to tropicalization of toric varieties and of hypersurfaces on toric varieties.
Monday, February 10, 2014
3:00 pm in 145 Altgeld Hall,Monday, February 10, 2014
#### Real slices of the moduli space of Higgs bundles
###### Laura Schaposnik Massolo (UIUC Math)
Abstract: After introducing Higgs bundles and their moduli space, through the natural hyperkähler structure of the moduli space of Higgs bundles for complex groups we shall construct three anti-holomorphic involutions whose fixed points in the moduli space give branes in the A-model and B-model. After defining what those branes are, we shall attempt to relate them to log-symplectic structures and their invariants.
Tuesday, February 11, 2014
3:00 pm in 243 Altgeld Hall,Tuesday, February 11, 2014
#### Mapping stacks and the notion of properness in algebraic geometry
###### Daniel Halpern-Leistner (Columbia University)
Abstract: One essential feature of a scheme X which is flat and proper over a base scheme S is that for any other finite type S-scheme Y, there is a finite type algebraic space Map(X,Y) parameterizing families of maps from X to Y. There have been several extensions of these results to the setting where X is a proper stack, and Y is a stack satisfying various hypotheses. Unfortunately many of the stacks arising in nature, such as global quotient stacks X/G, have affine stabilizer groups and are about as far as possible from being proper. However, we will show that for many non-proper X and a large class of Y, the mapping stack Map(X,Y) is still algebraic and finite type. This leads us to introduce new notions of "projective" and "proper" for morphisms between stacks such that "projective" => "proper", and flat and "proper" => Map(X,Y) is algebraic for reasonable X. We discuss a large list of examples of "projective stacks", including X/G where G is reductive and X is projective-over-affine with H^0(O_X)^G finite dimensional, as well as any quotient stack which admits a projective good moduli space. Based on these, we will come up with an even longer list of "proper" stacks, including stacks which are proper over a scheme in the classical definition. Along the way, we will discuss some surprising "derived h-descent" results in derived algebraic geometry.
4:00 pm in 243 Altgeld Hall,Tuesday, February 11, 2014
#### Connections between the Geometry of Hyperplane Arrangements and their Combinatorics
###### Nathan Fieldsteel (UIUC Math)
Abstract: From the data of an arrangement $\mathcal{A}$ of hyperplanes, we can construct two toric varieties. The first is determined by the rational fan $\Sigma(\mathcal{A})$ which has as its maximal cones the sectors of the complement of $\mathcal{A}$. The second is determined by $\Sigma(\mathcal{L}(\mathcal{A}),G)$, a rational fan determined by intersection lattice of the arrangement, together with a choice of building set. This second construction follows the work of Feichtner and Yuzvinsky in which they associate a smooth toric variety to any atomic lattice. We are interested in finding a relationship between these two fans, especially when $\mathcal{A}$ is the arrangement of type $A_n$, $B_n$, or $D_n$.
Monday, February 17, 2014
3:00 pm in 145 AH,Monday, February 17, 2014
#### Upper bounds for the Gromov width of coadjoint orbits of compact Lie groups
###### Alexander Caviedes Castro (University of Toronto)
Abstract: I will show how to find an upper bound for the Gromov width of coadjoint orbits with respect to the Kirillov-Kostant-Souriau symplectic form by computing certain Gromov-Witten invariants. The approach presented here is closely related to the one used by Gromov in his celebrated Non-squeezing theorem.
Tuesday, February 18, 2014
1:00 pm in 243 Altgeld Hall,Tuesday, February 18, 2014
#### Circumcenter of Mass and the generalized Euler line
###### Sergei Tabachnikov (Penn State Math)
Abstract: I shall define and study a variant of the center of mass of a polygon, called the Circumcenter of Mass. The Circumcenter of Mass is an affine combination of the circumcenters of the triangles in a non-degenerate triangulation of a polygon, weighted by their areas, and it does not depend on the triangulation. For an inscribed polygon, this center coincides with the circumcenter. The Circumcenter of Mass satisfies an analog of the Archimedes Lemma, similarly to the center of mass of the polygonal lamina. The line connecting the circumcenter and the centroid of a triangle is called the Euler line. Taking an affine combination of the circumcenters and the centroids of the triangles in a triangulation, one obtains the Euler line of a polygon. The construction of the Circumcenter of Mass extends to simplicial polytopes and to the spherical and hyperbolic geometries.
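In formula form (my transcription of the definition just stated, not a quote from the abstract): if a polygon $P$ is cut by a non-degenerate triangulation into triangles $T_1,\dots,T_m$ with signed areas $A(T_i)$ and circumcenters $O(T_i)$, then
$$CCM(P)=\frac{1}{\sum_{i=1}^{m} A(T_i)}\sum_{i=1}^{m} A(T_i)\,O(T_i),$$
and, as stated above, this point does not depend on the chosen triangulation.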
4:00 pm in Altgeld Hall,Tuesday, February 18, 2014
#### Fusion products and a novel way to compute their characters
Abstract: We will introduce a graded tensor product of modules over simple Lie algebras, called the Fusion product, and discuss the character of this module. This will be done through examples. Then we will see a novel way to compute the characters of Fusion products of $\mathfrak{sl}_2(\mathbb{C})$-modules using the quantum Q-system for $\mathfrak{sl}_2(\mathbb{C})$.
Monday, February 24, 2014
3:00 pm in 145 Altgeld Hall,Monday, February 24, 2014
#### Convexity Theorems for Semisimple Symmetric Spaces
###### Dana Balibanu (Utrecht Math)
Tuesday, February 25, 2014
3:00 pm in 243 Altgeld Hall,Tuesday, February 25, 2014
#### Morse Theory of D-Modules
###### Thomas Nevins (UIUC Math)
Abstract: Hamiltonian reduction arose as a mechanism for reducing complexity of systems in mechanics, but it also provides a tool for constructing complicated but interesting algebraic varieties from simpler ones. I will illustrate how this works via examples. I will explain a new structure theory, motivated by Hamiltonian reduction, for some categories (of D-modules) of interest to representation theorists, and, if there is time, indicate applications to the cohomology of (hyperkaehler) manifolds. The talk will not assume that members of the audience know the meaning of any of the above-mentioned terms. The talk is based on joint work with K. McGerty.
4:00 pm in Altgeld Hall,Tuesday, February 25, 2014
#### Implicitization Using Approximation Complexes
###### Eliana Duarte (UIUC Math)
Abstract: I will present the method of using approximation complexes to compute the image of a rational map from $\mathbb{P}^{n-1}$ to $\mathbb{P}^{n}$, under some hypotheses on the base locus and on the image. The method uses tools from commutative algebra such as Koszul complexes and Castelnuovo-Mumford regularity which I will introduce.
Monday, March 3, 2014
3:00 pm in 145 Altgeld Hall,Monday, March 3, 2014
#### Folded Symplectic Reduction
###### Daniel Hockensmith (UIUC Math)
Abstract: The Marsden-Weinstein-Meyer reduction theorem is an indispensable tool for the study of Hamiltonian group actions on symplectic manifolds. It gives an explicit recipe for the construction of a symplectic reduced space using only regular values of the moment map and the group action. I will prove that if one replaces symplectic manifolds with oriented, folded-symplectic manifolds in the statement of the MWM reduction theorem then a reduced space with a natural folded-symplectic form is obtained in the same way. I will then argue that the assumptions of this generalized theorem are too strong, leading us towards a more robust set of assumptions for a folded-symplectic reduction theorem.
Tuesday, March 4, 2014
4:00 pm in Altgeld Hall,Tuesday, March 4, 2014
#### What does a "right" cohomology for rings look like?
###### Juan S. Villeta-Garcia (UIUC Math)
Abstract: Motivated by the aforementioned question, we introduce André-Quillen (co)homology for commutative algebras using methods of homotopy theory. We connect the theory to the cotangent complex, and prove certain vanishing theorems characterizing classes of maps. We end with some examples in the rational case, and mention a topological characterization.
Monday, March 10, 2014
3:00 pm in 145 Altgeld Hall,Monday, March 10, 2014
#### Local Rigidity & Nash-Moser Methods
###### Roy Wang (Utrecht University Math)
Abstract: J. Conn used analytic methods to prove his theorem on the linearization of Poisson structures. For some time that proof was heuristically interpreted as a local rigidity result for linear, compact, semi-simple Poisson structures. In his thesis I. Marcut made this interpretation rigorous, which led to surprising new results. In collaboration we aim to isolate the method and formulate a local rigidity theorem, which we apply to other geometrical structures. As an example I sketch a proof of the Newlander-Nirenberg theorem.
Tuesday, March 11, 2014
1:00 pm in 243 Altgeld Hall,Tuesday, March 11, 2014
#### A new proof of Bowen's theorem on Hausdorff dimension of quasi-circles
###### Andy Sanders (UIC Math)
Abstract: A quasi-Fuchsian group is a discrete group of Mobius transformations of the Riemann sphere which is isomorphic to the fundamental group of a compact surface and acts properly on the complement of a Jordan curve: the limit set. In 1979, Bowen proved a remarkable rigidity theorem on the Hausdorff dimension of the limit set of a quasi-Fuchsian group: it is equal to 1 if and only if the limit set is a round circle. This theorem now has many generalizations. We will present a new proof of Bowen's result as a by-product of a new lower bound on the Hausdorff dimension of the limit set of a quasi-Fuchsian group. This lower bound is in terms of the differential geometric data of an immersed, incompressible minimal surface in the quotient manifold. If time permits, generalizations of this result to other convex-co-compact surface groups will be presented.
3:00 pm in 243 Altgeld Hall,Tuesday, March 11, 2014
#### Birational geometry of the moduli space of one-dimensional sheaves
###### Jinwon Choi (KIAS)
Abstract: We study the birational geometry of the moduli space of stable sheaves on $\mathbb{P}^2$ with Hilbert polynomial $dm+1$. We determine the effective/nef cone in terms of natural geometric divisors. We also present the birational model constructed from the locally free resolutions of the general sheaves. The two spaces are related by the Bridgeland-type wall-crossing. As corollaries, we compute the Betti numbers of the moduli spaces when $d \leq 6$. The results confirm the prediction from physics. This is joint work with Kiryong Chung.
4:00 pm in 243 Altgeld Hall,Tuesday, March 11, 2014
#### Infinitesimal Algebraic Geometry and Infinitesimal Infinitesimal Algebraic Geometry
###### Peter Nelson (UIUC Math)
Abstract: Sometimes the more classical infinitesimal objects attached to a "smooth" group don't contain as much information as one would like, especially in an algebraic setting. I'll discuss one or two (still pretty classical) improvements on the situation. Since I like thinking about universal things, I'll try to say a few things about the moduli spaces of these improvements, and maybe even how they relate to the moduli of the original groups.
Monday, March 17, 2014
3:00 pm in 145 Altgeld Hall,Monday, March 17, 2014
#### Legendrian Knots and Constructible Sheaves
###### Eric Zaslow (Northwestern)
Abstract: We study the unwrapped Fukaya category of Lagrangian branes ending on a Legendrian knot. Our knots live at contact infinity in the cotangent bundle of a surface, the Fukaya category of which is equivalent to the category of constructible sheaves on the surface itself. Consequently, our category can be described as constructible sheaves with singular support controlled by the front projection of the knot. We use a theorem of Guillermou-Kashiwara-Schapira to show that the resulting category is invariant under Legendrian isotopies, and conjecture it is equivalent to the representation category of the Chekanov-Eliashberg differential graded algebra of the knot. This sounds harder than it is. Briefly-- INPUT: Knot diagram, OUTPUT: Category. I will illustrate the above with simple examples. This work is joint with David Treumann and Vivek Shende.
Monday, March 31, 2014
3:00 pm in 145 Altgeld Hall,Monday, March 31, 2014
#### Dynamical convexity and elliptic orbits for Reeb flows
###### Miguel Abreu (Instituto Superior Técnico)
Abstract: A classical conjecture states that any convex hypersurface in even-dimensional euclidean space carries an elliptic closed orbit of its characteristic flow. Dell'Antonio-D'Onofrio-Ekeland proved it in 1995 for antipodal invariant convex hypersurfaces. In this talk I will present a generalization of this result using contact homology and a notion of dynamical convexity first introduced by Hofer-Wysocki-Zehnder for contact forms on the 3-sphere. Applications include certain geodesic flows, magnetic flows and toric contact manifolds. This is joint work with Leonardo Macarini.
Tuesday, April 1, 2014
3:00 pm in 243 Altgeld Hall,Tuesday, April 1, 2014
#### Cohomological characterization of products of theta-divisors
###### Sofia Tirabassi (University of Utah)
Abstract: We present a joint work with J. Jiang and M. Lahoz in which it is proven that any smooth complex projective variety of maximal Albanese dimension, with Euler characteristic 1 and Albanese image normal and of general type, is a product of theta-divisors. We also generalize to higher dimension Hacon-Pardini's classification of surfaces of maximal Albanese dimension with genus and irregularity equal to 3. The techniques we use are based on Green-Lazarsfeld generic vanishing theorems and on the use of integral transforms.
4:00 pm in Altgeld Hall,Tuesday, April 1, 2014
#### Asymptotics of certain families of Higgs bundles
###### Brian Collier (UIUC Math)
Abstract: Higgs bundles are algebro-geometric objects that live over a Kahler manifold. Through nonabelian Hodge theory, the moduli space of Higgs bundles is homeomorphic to the space of reductive representations of the fundamental group (or a central extension) of the manifold. To get this homeomorphism one goes through two deep, nonconstructive existence theorems. In this talk I will sketch this correspondence, then consider a family of Higgs bundles of particular geometric interest, and talk about some new results on the asymptotics of certain families of Higgs bundles.
Monday, April 7, 2014
3:00 pm in 145 Altgeld Hall,Monday, April 7, 2014
#### Integration of generalized complex structures
###### Michael Bailey (CIRGET/UQAM/McGill)
Abstract: Generalized complex geometry is a generalization of both symplectic and complex geometry, proposed by Nigel Hitchin in 2002, which is of particular interest in string theory and mirror symmetry. Modulo a parity condition, generalized complex manifolds locally "look like" holomorphic Poisson manifolds, though globally they may not admit a complex structure at all. Therefore, locally they should integrate to holomorphic symplectic groupoids. One can take the global integration if one passes to holomorphic "symplectic" stacks. Earlier work by Crainic defined an integration for generalized complex structures which did not capture the holomorphic nature.
Tuesday, April 8, 2014
3:00 pm in 243 Altgeld Hall,Tuesday, April 8, 2014
#### Springer Theory for D-modules
###### Sam Gunningham (University of Texas-Austin)
Abstract: The Springer correspondence relates unipotent conjugacy classes in a reductive algebraic group G (e.g. GL_n), with representations of its Weyl group W (e.g. S_n). More precisely to every irreducible representation of W, one can attach an equivariant local system on a unipotent conjugacy class. Lusztig was able to account for all such local systems using his notion of cuspidal sheaves, together with certain relative Weyl groups. In this talk I will give a new perspective on Springer Theory using tools from sheaf theory and category theory, and I will explain how to generalize the Springer correspondence to give a description of the derived category of conjugation equivariant D-modules on G.
Monday, April 14, 2014
3:00 pm in 145 Altgeld Hall,Monday, April 14, 2014
#### Transverse Geometry of Codimension one Foliations Calibrated by Closed 2-Forms
###### David Martinez Torres (PUC-Rio de Janeiro)
Abstract: A codimension one foliation is (topologically) taut if it admits a closed 1-cycle everywhere transverse to the foliation. The theory of taut foliations is extremely rich in dimension 3; however, it is less satisfactory in higher dimensions. In this talk we will discuss a different generalization of 3-dimensional taut foliations to higher dimensions inspired by symplectic geometry. These are codimension one foliations which admit a closed 2-form which makes every leaf a symplectic manifold. Our main result is that on an ambient closed manifold a foliation (of class at least C^1 in the transverse direction) admitting a 2-calibration has its transverse geometry encoded in a 3-dimensional foliated submanifold. This is joint work with Álvaro del Pino and Francisco Presas (ICMAT, Madrid)
Tuesday, April 15, 2014
3:00 pm in 243 Altgeld Hall,Tuesday, April 15, 2014
#### Holomorphic one-forms on varieties of general type
###### Mihnea Popa (UIC Math)
Abstract: I will explain recent work with C. Schnell, in which we prove that every holomorphic one-form on a variety of general type has non-empty zero locus (together with a suitable generalization to arbitrary Kodaira dimension). The proof makes use of generic vanishing theory for Hodge D-modules on abelian varieties.
4:00 pm in Altgeld Hall,Tuesday, April 15, 2014
#### Schemes as Functors
###### Matej Penciak (UIUC Math)
Abstract: Replacing schemes with their functor of points offers a useful perspective to tackle moduli problems. In this talk I will explain this interpretation of schemes, characterize the functors that come from this construction, and try to motivate this viewpoint through various examples. Along the way I will discuss the Quot and Hilbert schemes--two schemes that represent common moduli problems.
Monday, April 21, 2014
3:00 pm in 145 Altgeld Hall,Monday, April 21, 2014
#### Lagrangian correspondences - a toric case study
###### Ana Cannas da Silva (ETH)
Abstract: What lagrangians in a symplectic reduced space admit a (one-to-one transverse) lifting to the original symplectic manifold? I will discuss this question (going back to work of Wehrheim and Woodward) through examples and counterexamples (joint work with Meike Akveld).
Tuesday, April 22, 2014
1:00 pm in 243 Altgeld Hall,Tuesday, April 22, 2014
#### On the geometry of the flip graph
###### Valentina Disarlo (Indiana U Math)
Abstract: Given an orientable finite type punctured surface, its flip graph is the graph whose vertices are the ideal triangulations of the surface (up to isotopy) and two vertices are joined by an edge if the two corresponding triangulations differ by a flip, i.e. the replacement of one diagonal of a quadrilateral by the other one. The combinatorics of this graph is crucial in work of Thurston and in Penner's decorated Teichmüller theory. In this talk we will explore the geometric properties of this graph, proving that it provides a coarse model of the mapping class group in which the mapping class groups of the subsurfaces are convex. Moreover, we will provide bounds on the growth of the diameter of the flip graph modulo the mapping class group, providing a partial answer to an open problem in combinatorics.
3:00 pm in 243 Altgeld Hall,Tuesday, April 22, 2014
#### Counting curves on K3 surfaces: the Katz-Klemm-Vafa formula
###### Rahul Pandharipande (ETH Zurich)
Abstract: I will explain our recent proof (with R. Thomas) of the KKV formula governing higher genus curve counting in arbitrary classes on K3 surfaces. The subject intertwines Gromov-Witten, Noether-Lefschetz, and Donaldson-Thomas theories. A tour of these ideas will be included in the talk.
4:00 pm in Altgeld Hall,Tuesday, April 22, 2014
#### Regularity and Piecewise Polynomial Functions
###### Michael DiPasquale (UIUC Math)
Abstract: The algebra $C^r(\mathcal{P})$ of piecewise polynomial functions, continuously differentiable of order $r$, over a polytopal complex $\mathcal{P}$ is a fundamental object in approximation theory. One of the fundamental questions in spline theory is to compute the dimension of the vector space $C^r_k(\mathcal{P})$ of splines of degree at most $k$. In the 1980s Billera pioneered an algebraic approach to spline theory using tools from homological and commutative algebra. We show how this approach, particularly the notions of the Hilbert polynomial and Castelnuovo-Mumford regularity, has interesting things to say about computing the dimension of $C^r_k(\mathcal{P})$.
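For readers new to the notation, a gloss (mine, not the speaker's): here
$$C^r_k(\mathcal{P})=\{\, f\in C^r(|\mathcal{P}|) : f|_{\sigma}\ \text{is a polynomial of degree at most } k\ \text{for every maximal face } \sigma\in\mathcal{P} \,\},$$
a finite-dimensional vector space whose dimension is the quantity discussed in the abstract.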
Monday, April 28, 2014
3:00 pm in Altgeld Hall,Monday, April 28, 2014
#### Lie algebra cohomology and a degenerate cup product on the flag manifold
###### Sam Evens (Notre Dame)
Abstract: Belkale and Kumar introduced a degeneration of the usual cup product on $H^*(G/P)$ which gives an optimal solution to the geometric Horn problem. In this talk, I will explain joint work with Bill Graham where we realize the Belkale-Kumar product using relative Lie algebra cohomology. We do this using a family in the variety of Lagrangian subalgebras.
Tuesday, April 29, 2014
3:00 pm in 243 Altgeld Hall,Tuesday, April 29, 2014
#### Wall-crossing in genus zero Landau-Ginzburg theory
###### Dustin Ross (University of Michigan)
Abstract: Given a quasi-homogeneous polynomial of degree d, Landau-Ginzburg theory studies certain intersection numbers on the moduli space of d-spin curves (parametrizing curves with d-th roots of the canonical bundle). I will describe a generalization of these intersection numbers obtained by allowing some of the points on the curves to be weighted in the sense of Hassett. As one changes the weights, the invariants thus obtained can be related by a wall-crossing formula. I will explain how the wall-crossing formula generalizes the mirror theorem of Chiodo-Iritani-Ruan, and in particular how it gives a completely enumerative (A-model) interpretation of the mirror phenomenon.
4:00 pm in Altgeld Hall,Tuesday, April 29, 2014
#### The symplectic nature of the fundamental group
###### Brian Collier (UIUC Math)
Abstract: Let $\pi$ be the fundamental group of a Riemann surface and $G$ be a real or complex reductive algebraic group. The goal of this talk is to understand the representation variety $Hom(\pi,G)//G$ from an algebraic geometry perspective. In particular, we will describe the symplectic structure on the representation variety in terms of the cup product in group cohomology. The talk will very closely follow the wonderful paper of Bill Goldman with the same title as this talk. All concepts will be explained as if the audience has little or no experience with them, as this is the case for the speaker. Also, the relation of the above topic with Higgs bundles will only be briefly mentioned at the end.
Monday, May 5, 2014
3:00 pm in 145 Altgeld Hall,Monday, May 5, 2014
#### Symplectic toric manifolds as centered reductions of products of weighted projective spaces
###### Milena Pabiniak (Instituto Superior Técnico)
Abstract: We prove that every symplectic toric orbifold is a "centered" symplectic reduction of a Cartesian product of weighted projective spaces. Reduction is centered if the level set contains the central Lagrangian torus fiber of the product of weighted projective spaces. In that case one can deduce certain information about non-displaceable sets or existence of quasimorphisms. For example, a theorem of Abreu and Macarini shows that if the level set of the reduction passes through a non-displaceable set then the image of this set in the reduced space is also non-displaceable. Using this theorem and our result we reprove that every symplectic toric orbifold contains a non-displaceable fiber and identify this fiber. Joint work with Aleksandra Marinkovic.
Tuesday, May 6, 2014
3:00 pm in 243 Altgeld Hall,Tuesday, May 6, 2014
#### Combinatorics and topology of toric maps
###### Mircea Mustata (University of Michigan)
Abstract: Toric varieties are algebraic varieties endowed with a "nice" action of an algebraic torus. A remarkable feature is that their geometry can be fully described in terms of combinatorics of fans and polytopes. I will discuss some results concerning the topology of the fibers of toric maps and a combinatorial invariant that comes out of these considerations. This is based on joint work in progress with Marc de Cataldo and Luca Migliorini.
|
2020-07-16 01:33:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6753308176994324, "perplexity": 1034.3587643357257}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657176116.96/warc/CC-MAIN-20200715230447-20200716020447-00361.warc.gz"}
|
https://www.physicsforums.com/threads/union-of-open-sets-question.549250/
|
# Union of open sets question
I have to prove that the arbitrary union of open sets (in R) is open.
So this is what I have so far:
Let $\{A_i\}_{i\in I}$ be a collection of open sets in $\mathbb{R}$. I want to show that $\bigcup_{i\in I}A_{i}$ is also open...
Any ideas from here?
Deveno
what is your definition of open set?
The definition we use is that a set $A\subseteq\mathbb{R}$ is an open set if for each $x\in A$ there exists an $\epsilon>0$ such that $(x-\epsilon,x+\epsilon)\subseteq A$.
Deveno
note that if $x \in \bigcup_{i \in I}A_i$, then necessarily $x \in A_i$ for some i.
can you continue...?
Let $\{A_i\}_{i\in I}$ be a collection of open sets in $\mathbb{R}$. Let $x\in\bigcup_{i\in I}A_{i}$, then $x\in A_{i}$ for some $i$. Since each $A_{i}$ is open, there exists an $\epsilon>0$ such that $(x-\epsilon,x+\epsilon)\subseteq A_{i}\subseteq\bigcup_{i\in I}A_{i}$. Thus, $\bigcup_{i\in I}A_{i}$ is open...
Am I on the right track?
Deveno
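For reference, a tidied write-up of the argument sketched above, using a fixed index $i_0$ so that $i$ is not reused (this only restates the post, assuming the $\epsilon$-interval definition of openness quoted earlier in the thread): let $x\in\bigcup_{i\in I}A_i$; then $x\in A_{i_0}$ for some $i_0\in I$. Since $A_{i_0}$ is open, there is an $\epsilon>0$ with $(x-\epsilon,x+\epsilon)\subseteq A_{i_0}$, and since $A_{i_0}\subseteq\bigcup_{i\in I}A_i$, also $(x-\epsilon,x+\epsilon)\subseteq\bigcup_{i\in I}A_i$. As $x$ was an arbitrary point of the union, $\bigcup_{i\in I}A_i$ is open.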
|
2022-05-18 05:57:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9463347792625427, "perplexity": 63.102314393556334}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521152.22/warc/CC-MAIN-20220518052503-20220518082503-00725.warc.gz"}
|
https://pharmacyscope.com/roller-mill/
|
# Roller Mill
## Principle
The principle of the roller mill is that breaking and crushing actions are achieved mechanically through the application of pressure. Stress is applied by rotating heavy corrugated wheels (mullers or rollers). The oil is squeezed out from the solid material.
## Construction
The construction of a roller mill is shown in Figure. It consists of three rollers, with one roller moving above and between the other two. The rollers are made of cast iron, corrugated or grooved in various patterns. A feed filling mechanism provides the necessary force to allow the feed to pass through the first pair of rollers, either by the use of a small mill opening or a high rate of feed.
## Working
The rollers are allowed to rotate. The material is fed from the hopper into the gap between the rollers (1 and 2). The material is squeezed between the top and second roller and is then directed into the nip of the top and third roller for a second pressing. The clearance between the rollers can be adjusted to control the degree of pressing. The material is pressed (sheared and crushed) against each roller by rams. The product is collected into the receiver.
### Pharmaceutical Uses
Roller mills are used in the cane-sugar industries. In these, trains of four or seven rollers are used. Crushing of the cane is carried out between the rollers, as the feed is introduced by an apron conveyor.
In the de-watering of paper, a two-roller mill is used, usually supported on a felt. A double roller mill is used to squeeze out the water (or process liquid) in the final step of textile processing. In dyeing, the cloth is thoroughly impregnated with small quantities of dye and is successively squeezed with pad rolls.
|
2023-01-29 12:29:09
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34063342213630676, "perplexity": 2926.5964888546105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499713.50/warc/CC-MAIN-20230129112153-20230129142153-00699.warc.gz"}
|
https://jscholarship.library.jhu.edu/handle/1774.2/58624?show=full
|
dc.contributor.advisor: Szalay, Alexander S.
dc.creator: Yang, Lin
dc.date.accessioned: 2018-05-22T03:39:24Z
dc.date.available: 2018-05-22T03:39:24Z
dc.date.created: 2017-12
dc.date.issued: 2017-08-11
dc.date.submitted: December 2017
dc.identifier.uri: http://jhir.library.jhu.edu/handle/1774.2/58624
dc.description.abstract: In the successful concordance model of cosmology, dark matter is crucial for structures to form as we observe it in the universe. Despite the overwhelming observational evidence for its existence, it is not yet directly detected, and its nature is largely unknown. Physicists propose various dark matter candidates, with masses ranging over dozens of orders of magnitude. However, both indirect and direct detection experiments for dark matter have reported no convincing results. Dark matter research is therefore critically relying on computer simulations. Using supercomputer numerical simulations, we can test the correctness of the current cosmological model, as well as obtain guidance for future detection experiments. In this dissertation, we study dark matter from several perspectives using cosmological simulations: its possible radiation, its warmth, and other related issues. A commonly accepted candidate for dark matter is the weakly interacting massive particle (WIMP). WIMPs interact with normal matter only through the weak force (as well as gravity). It is thus extremely challenging to detect these particles directly. However, depending on the type of dark matter, they can annihilate with other dark matter particles, or decay into high-energy photons (i.e., $\gamma$-rays). We studied the spatial distribution of possible emission components from dark matter annihilation or decay in a large simulation of a galaxy like the Milky Way. The predicted emission components can be used as templates for observations such as those from the Fermi/LAT $\gamma$-ray instrument, to constrain the physical properties of dark matter. Structure formation theory suggests that dark matter is "cold", i.e., moving non-relativistically during structure formation. However, cold dark matter predicts many more dark-matter satellites, or subhaloes, around galaxies such as the Milky Way than observed. One well-established mechanism to bring the theory in line with observations is that many of these satellites are not visible because they are too small for baryons to form stars in them. Another way is to attenuate the small-scale structure directly, positing "warm" dark matter. Using simulation, we propose a method of testing this possibility in a complementary environment, by measuring the density profile of cosmic voids. Our results suggest that there are sufficient differences between warm and cold dark matter to test using future observations. Furthermore, our data analyzing methods are based on sophisticated data stream algorithms and newly developed Graphics Processing Unit (GPU) hardware. These tools lead to other studies of dark matter as well. For example, we studied the spin alignment of dark matter halos with their environment. We show that the spin alignments are highly related to the hierarchical levels of the cosmic web in which the halo is located. We also studied the responses in different density variables to "ringing" the initial density field at different spatial frequencies (i.e. putting spikes in the power spectrum at a particular scale). The conventional wisdom is that power generally migrates from large to small comoving scales from the initial to final conditions. But in this work, we found that this conventional wisdom is only true for a density variable emphasizing dense regions, such as the usual overdensity field. In the log-density field, however, power stays at about the same scale but broadens. In the reciprocal-density field, emphasizing low-density regions, power moves to larger scales. This is an example of "voids as cosmic magnifying glasses." The GPU density-estimation technique was crucial for this study, allowing the density to be estimated accurately even when modestly sampled with particles. Our results provide guidance for designing future statistical analytics for dark matter and the large-scale structure of the Universe in general.
dc.format.mimetype: application/pdf
dc.language.iso: en_US
dc.publisher: Johns Hopkins University
dc.subject: Dark Matter
dc.subject: Cosmological Simulation
dc.subject: CDM
dc.title: INVESTIGATIONS OF DARK MATTER USING COSMOLOGICAL SIMULATIONS
dc.type: Thesis
thesis.degree.discipline: Physics
thesis.degree.grantor: Johns Hopkins University
thesis.degree.grantor: Krieger School of Arts and Sciences
thesis.degree.level: Doctoral
thesis.degree.name: Ph.D.
dc.date.updated: 2018-05-22T03:39:25Z
dc.type.material: text
thesis.degree.department: Physics and Astronomy
dc.contributor.committeeMember: Wyse, Rosemary
dc.contributor.committeeMember: Braverman, Vladimir
dc.contributor.committeeMember: Budavári, Tamás
dc.contributor.committeeMember: Broholm, Collin L.
dc.publisher.country: USA
|
2020-02-27 23:43:25
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5855942964553833, "perplexity": 1670.5974614108122}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146907.86/warc/CC-MAIN-20200227221724-20200228011724-00395.warc.gz"}
|
https://indico.cern.ch/event/658267/contributions/2813710/
|
# Connecting The Dots 2018
Mar 20 – 22, 2018
University of Washington Seattle
US/Pacific timezone
## A novel standalone track reconstruction algorithm for the LHCb upgrade
Mar 20, 2018, 4:30 PM
15m
Physics-Astronomy Auditorium A118 (University of Washington Seattle)
Poster 2: Real-time pattern recognition and fast tracking
### Speaker
Mr Renato Quagliani (Centre National de la Recherche Scientifique (FR))
### Description
During the LHC Run III, starting in 2020, the instantaneous luminosity of LHCb will be increased up to $2\times10^{33}$ cm$^{-2}$ s$^{-1}$, five times larger than in Run II. The LHCb detector will then have to be upgraded in 2019. In fact, a full software event reconstruction will be performed at the full bunch crossing rate by the trigger, in order to profit from the higher instantaneous luminosity provided by the accelerator. In addition, all the tracking devices will be replaced and, in particular, a scintillating fiber tracker (SciFi) will be installed after the magnet, allowing the detector to cope with the higher occupancy. The new running conditions, and the tighter timing constraints in the software trigger, represent a big challenge for the track reconstruction.
This talk presents the design and performance of a novel algorithm that has been developed to reconstruct track segments using solely hits from the SciFi. This algorithm is crucial for the reconstruction of tracks originating from long-lived particles such as $K_S$ and $\Lambda$. The implementation strategy is based on a progressive cleaning of the tracking environment and on an active use of the information from the stereo hits in order to select tracks. It also profits from the definition of an improved track parameterization. When compared to its previous implementation, the new algorithm has significantly higher performance in terms of efficiency, number of fake tracks and timing, enhancing the physics potential and capabilities of the LHCb upgrade.
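(For scale, this is my arithmetic rather than a figure quoted in the abstract: five times below $2\times10^{33}$ cm$^{-2}$ s$^{-1}$ puts the Run II instantaneous luminosity at roughly $4\times10^{32}$ cm$^{-2}$ s$^{-1}$.)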
### Primary author
Mr Renato Quagliani (Centre National de la Recherche Scientifique (FR))
### Co-author
Michel De Cian (Ruprecht Karls Universitaet Heidelberg (DE))
|
2021-12-03 07:18:08
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5759884119033813, "perplexity": 2706.80680684166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362605.52/warc/CC-MAIN-20211203060849-20211203090849-00587.warc.gz"}
|
https://www.ademcetinkaya.com/2023/03/petv-petvivo-holdings-inc-common-stock.html
|
Outlook: PetVivo Holdings Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating.
Dominant Strategy : Sell
Time series to forecast n: 14 Mar 2023 for (n+6 month)
Methodology : Deductive Inference (ML)
Abstract
PetVivo Holdings Inc. Common Stock prediction model is evaluated with Deductive Inference (ML) and Pearson Correlation [1,2,3,4] and it is concluded that the PETV stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell
Key Points
1. How do predictive algorithms actually work?
2. Can statistics predict the future?
3. What statistical methods are used to analyze data?
PETV Target Price Prediction Modeling Methodology
We consider the PetVivo Holdings Inc. Common Stock decision process with Deductive Inference (ML), where A is the set of discrete actions of PETV stock holders, F is the set of discrete states, P : F × A × F → R is the transition probability distribution, R : F × A → R is the reaction function, and γ ∈ [0, 1] is a move factor for expectation [1,2,3,4].
F(Pearson Correlation) [5,6,7] = $\begin{pmatrix} p_{a1} & p_{a2} & \dots & p_{1n} \\ \vdots & & & \vdots \\ p_{j1} & p_{j2} & \dots & p_{jn} \\ \vdots & & & \vdots \\ p_{k1} & p_{k2} & \dots & p_{kn} \\ \vdots & & & \vdots \\ p_{n1} & p_{n2} & \dots & p_{nn} \end{pmatrix} \times$ R(Deductive Inference (ML)) $\times$ S(n): → (n+6 month), with $R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$
n:Time series to forecast
p:Price signals of PETV stock
j:Nash equilibria (Neural Network)
k:Dominated move
a:Best response for target price
For further technical information as per how our model work we invite you to visit the article below:
How do AC Investment Research machine learning (predictive) algorithms actually work?
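As a purely illustrative toy (this is not the site's proprietary model: the 126-day horizon, the 0.1 threshold and the choice of input features are arbitrary assumptions), a Pearson-correlation-based buy/sell signal could be sketched in Python as:
```python
import numpy as np

def pearson_signal(prices, features, horizon=126):
    """Toy sketch: correlate each feature series with the forward return over
    `horizon` trading days (~6 months) and turn the strongest correlation
    into a Buy/Sell/Hold label. Illustrative only, not investment advice."""
    prices = np.asarray(prices, dtype=float)
    fwd_ret = prices[horizon:] / prices[:-horizon] - 1.0       # forward returns
    corrs = [np.corrcoef(np.asarray(f, float)[:len(fwd_ret)], fwd_ret)[0, 1]
             for f in features]                                 # Pearson correlations
    best = corrs[int(np.argmax(np.abs(corrs)))]
    if abs(best) < 0.1:
        return "Hold"
    return "Buy" if best > 0 else "Sell"
```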
PETV Stock Forecast (Buy or Sell) for (n+6 month)
Sample Set: Neural Network
Stock/Index: PETV PetVivo Holdings Inc. Common Stock
Time series to forecast n: 14 Mar 2023 for (n+6 month)
According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell
X axis: *Likelihood% (The higher the percentage value, the more likely the event will occur.)
Y axis: *Potential Impact% (The higher the percentage value, the more likely the price will deviate.)
Z axis (Grey to Black): *Technical Analysis%
IFRS Reconciliation Adjustments for PetVivo Holdings Inc. Common Stock
1. Credit risk analysis is a multifactor and holistic analysis; whether a specific factor is relevant, and its weight compared to other factors, will depend on the type of product, characteristics of the financial instruments and the borrower as well as the geographical region. An entity shall consider reasonable and supportable information that is available without undue cost or effort and that is relevant for the particular financial instrument being assessed. However, some factors or indicators may not be identifiable on an individual financial instrument level. In such a case, the factors or indicators should be assessed for appropriate portfolios, groups of portfolios or portions of a portfolio of financial instruments to determine whether the requirement in paragraph 5.5.3 for the recognition of lifetime expected credit losses has been met.
2. At the date of initial application, an entity shall assess whether a financial asset meets the condition in paragraphs 4.1.2(a) or 4.1.2A(a) on the basis of the facts and circumstances that exist at that date. The resulting classification shall be applied retrospectively irrespective of the entity's business model in prior reporting periods.
3. If a put option written by an entity prevents a transferred asset from being derecognised and the entity measures the transferred asset at fair value, the associated liability is measured at the option exercise price plus the time value of the option. The measurement of the asset at fair value is limited to the lower of the fair value and the option exercise price because the entity has no right to increases in the fair value of the transferred asset above the exercise price of the option. This ensures that the net carrying amount of the asset and the associated liability is the fair value of the put option obligation. For example, if the fair value of the underlying asset is CU120, the option exercise price is CU100 and the time value of the option is CU5, the carrying amount of the associated liability is CU105 (CU100 + CU5) and the carrying amount of the asset is CU100 (in this case the option exercise price).
*International Financial Reporting Standards (IFRS) adjustment process involves reviewing the company's financial statements and identifying any differences between the company's current accounting practices and the requirements of the IFRS. If there are any such differences, neural network makes adjustments to financial statements to bring them into compliance with the IFRS.
Conclusions
PetVivo Holdings Inc. Common Stock is assigned short-term Ba1 & long-term Ba1 estimated rating. PetVivo Holdings Inc. Common Stock prediction model is evaluated with Deductive Inference (ML) and Pearson Correlation [1,2,3,4] and it is concluded that the PETV stock is predictable in the short/long term. According to price forecasts for (n+6 month) period, the dominant strategy among neural network is: Sell
PETV PetVivo Holdings Inc. Common Stock Financial Analysis*
Rating Short-Term Long-Term Senior
Outlook*Ba1Ba1
Income StatementCBaa2
Balance SheetBaa2Baa2
Leverage RatiosBaa2Baa2
Cash FlowB2Caa2
Rates of Return and ProfitabilityCaa2Ba3
*Financial analysis is the process of evaluating a company's financial performance and position by neural network. It involves reviewing the company's financial statements, including the balance sheet, income statement, and cash flow statement, as well as other financial reports and documents.
How does neural network examine financial reports and understand financial state of the company?
Prediction Confidence Score
Trust metric by Neural Network: 90 out of 100 with 659 signals.
References
1. C. Claus and C. Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In Proceedings of the Fifteenth National Conference on Artificial Intelligence and Tenth Innovative Applications of Artificial Intelligence Conference, AAAI 98, IAAI 98, July 26-30, 1998, Madison, Wisconsin, USA., pages 746–752, 1998.
2. E. Altman, K. Avrachenkov, and R. N ́u ̃nez-Queija. Perturbation analysis for denumerable Markov chains with application to queueing models. Advances in Applied Probability, pages 839–853, 2004
3. Athey S, Bayati M, Doudchenko N, Imbens G, Khosravi K. 2017a. Matrix completion methods for causal panel data models. arXiv:1710.10251 [math.ST]
4. Wager S, Athey S. 2017. Estimation and inference of heterogeneous treatment effects using random forests. J. Am. Stat. Assoc. 113:1228–42
5. Scholkopf B, Smola AJ. 2001. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, MA: MIT Press
6. F. A. Oliehoek and C. Amato. A Concise Introduction to Decentralized POMDPs. SpringerBriefs in Intelligent Systems. Springer, 2016
7. Mikolov T, Yih W, Zweig G. 2013c. Linguistic regularities in continuous space word representations. In Pro- ceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 746–51. New York: Assoc. Comput. Linguist.
Frequently Asked Questions
Q: What is the prediction methodology for PETV stock?
A: PETV stock prediction methodology: We evaluate the prediction models Deductive Inference (ML) and Pearson Correlation
Q: Is PETV stock a buy or sell?
A: The dominant strategy among neural network is to Sell PETV Stock.
Q: Is PetVivo Holdings Inc. Common Stock stock a good investment?
A: The consensus rating for PetVivo Holdings Inc. Common Stock is Sell and is assigned short-term Ba1 & long-term Ba1 estimated rating.
Q: What is the consensus rating of PETV stock?
A: The consensus rating for PETV is Sell.
Q: What is the prediction period for PETV stock?
A: The prediction period for PETV is (n+6 month)
|
2023-03-27 09:42:57
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4813065230846405, "perplexity": 6323.93412276946}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948620.60/warc/CC-MAIN-20230327092225-20230327122225-00028.warc.gz"}
|
https://dotblogs.com.tw/mis2000lab/2009/05/15/8415
|
### FLEX / ActionScript: How to create a drag-and-drop effect (Drag and Drop)
# Using Drag and Drop
The drag-and-drop operation lets you move data from one place in an Adobe® Flex® application to another. It is especially useful in a visual application where you can drag data between two lists, drag controls in a container to reposition them, or drag Flex components between containers.
|
2021-01-27 20:15:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2987191379070282, "perplexity": 13755.443603779368}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704832583.88/warc/CC-MAIN-20210127183317-20210127213317-00155.warc.gz"}
|
https://www.hackmath.net/en/math-problem/2567
|
A straight stretch of road is marked with a 12 percent drop (grade). What angle does the direction of the road make with the horizontal plane?
Result
x = 6.843 °
#### Solution:
$x = \dfrac{ 180^\circ }{ \pi } \cdot \arctan(12/100) \doteq 6.8428 = 6.843 ^\circ = 6^\circ 50'34"$
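The same computation in a few lines of Python (assuming, as in the solution above, that a 12 % drop means 12 m of fall per 100 m of horizontal run):
```python
import math

grade = 12 / 100                            # 12 % drop
angle_deg = math.degrees(math.atan(grade))  # angle with the horizontal plane
print(round(angle_deg, 3))                  # 6.843 degrees, matching the result above
```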
Tips to related online calculators
The most natural application of trigonometry and trigonometric functions is the calculation of triangles. Our triangle calculator offers common and less common calculations for different types of triangles. The word trigonometry comes from Greek and literally means triangle calculation.
## Next similar math problems:
The average climb of the road is given by the ratio 1:15. At what angle does the road climb on average?
55%+36%+88%+71%+100=63% what is whole (X)? Percents can be added directly together if they are taken from the same whole, which means they have the same base amount. .. . You would add the two percentages to find the total amount.
3. Slope of the pool
Calculate the slope (rise:run) of the bottom of a swimming pool 30 m long. The water depth at the beginning of the pool is 1.13 m (for children) and the depth at the end is 1.84 m (for swimmers). Express the slope as a percentage and as an angle in degrees.
4. Bevel
I have bevel in the ratio 1:6. What is the angle and how do I calculate it?
5. Reflector
Circular reflector throws light cone with a vertex angle 49° and is on 33 m height tower. The axis of the light beam has with the axis of the tower angle 30°. What is the maximum length of the illuminated horizontal plane?
6. Profit gain
If 5% more is gained by selling an article for Rs. 350 than by selling it for Rs. 340, the cost of the article is:
7. Summerjob
The temporary workers planted new trees. Of the total number of 500 seedlings, they managed to plant 426. How many percents did they meet the daily planting limit?
8. Percentage increase
Increase number 400 by 3.5%
9. Borrowing
I borrow 25,000 to 6.9% p.a.. I pay 500 per month. How much will I pay and for how long?
10. Theorem prove
We want to prove the sentence: If the natural number n is divisible by six, then n is divisible by three. From what assumption we started?
11. Reference angle
Find the reference angle of each angle:
12. Pyramid
Pyramid has a base a = 5cm and height in v = 8 cm. a) calculate angle between plane ABV and base plane b) calculate angle between opposite side edges.
13. Maple
Maple peak is visible from a distance 3 m from the trunk from a height of 1.8 m at angle 62°. Determine the height of the maple.
14. Tree
How tall is the tree that observed in the visual angle of 52°? If I stand 5 m from the tree and eyes are two meters above the ground.
15. High wall
I have a wall 2m high. I need a 15 degree angle (upward) to second wall 4 meters away. How high must the second wall?
16. Three workshops
There are 2743 people working in three workshops. In the second workshop works 140 people more than in the first and in third works 4.2 times more than the second one. How many people work in each workshop?
17. AP - simple
Determine the first nine elements of sequence if a10 = -1 and d = 4
|
2020-02-18 16:36:32
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5394448041915894, "perplexity": 1498.400492999743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143784.14/warc/CC-MAIN-20200218150621-20200218180621-00211.warc.gz"}
|
https://kokecacao.me/page/Course/F21/15-251/Lecture_003.md
|
# Lecture 003
### Regular Language
Regular Language
• closed under complement: if $L \subseteq \Sigma^*$ is regular, then so is $\overline{L} = \Sigma^* - L$
$$\langle{Q, \Sigma, \delta, q_0, F}\rangle \rightarrow \langle{Q, \Sigma, \delta, q_0, Q-F}\rangle$$
• closed under union: if $L_1 \subseteq \Sigma^*$ and $L_2 \subseteq \Sigma^*$ are regular, then so is $L_1 \cup L_2$ (a runnable sketch of this product construction appears after this list)
$$\langle{Q_1, \Sigma, \delta_1, q_{0a}, F_1}\rangle \times \langle{Q_2, \Sigma, \delta_2, q_{0b}, F_2}\rangle \rightarrow \\ \langle{Q_1 \times Q_2, \Sigma, \delta((q_1, q_2), \sigma) = (\delta_1(q_1, \sigma), \delta_2(q_2, \sigma)), (q_{0a}, q_{0b}), \{(q_1, q_2) \mid q_1 \in F_1 \lor q_2 \in F_2\}}\rangle$$
• closed under intersection: same, replace OR with AND (or $L_1 \cap L_2 = \overline{\overline{L_1} \cup \overline{L_2}}$)
• closed under difference:
$$\langle{Q_1, \Sigma, \delta_1, q_{0a}, F_1}\rangle \times \langle{Q_2, \Sigma, \delta_2, q_{0b}, F_2}\rangle \rightarrow \\ \langle{Q_1 \times Q_2, \Sigma, \delta((q_1, q_2), \sigma) = (\delta_1(q_1, \sigma), \delta_2(q_2, \sigma)), (q_{0a}, q_{0b}), \{(q_1, q_2) \mid q_1 \in F_1 \land q_2 \in Q_2 - F_2\}}\rangle$$
• closed under concatenation: if $L_1 \subseteq \Sigma^*$ and $L_2 \subseteq \Sigma^*$ are regular, then so is $L_1L_2$
$$\langle{Q_1, \Sigma, \delta_1, q_{0a}, F_1}\rangle \times \langle{Q_2, \Sigma, \delta_2, q_{0b}, F_2}\rangle \rightarrow \\ \langle{Q_1 \times \mathcal{P}(Q_2), \Sigma, \delta((q, \{q_1, ..., q_n\}), \sigma) = \begin{cases} (\delta_1(q, \sigma), \{\delta_2(q_1, \sigma), ..., \delta_2(q_n, \sigma)\}) & \text{if } \delta_1(q, \sigma)\notin F_1\\ (\delta_1(q, \sigma), \{\delta_2(q_1, \sigma), ..., \delta_2(q_n, \sigma)\} \cup \{q_{0b}\}) & \text{if } \delta_1(q, \sigma)\in F_1 \end{cases}, \begin{cases} (q_{0a}, \emptyset) & \text{if } q_{0a} \notin F_1\\ (q_{0a}, \{q_{0b}\}) & \text{if } q_{0a} \in F_1 \end{cases}, \{(q \in Q_1, S \subseteq Q_2) \mid (\exists q'\in S)\, q' \in F_2\}}\rangle$$
• closed under star: if $L \subseteq \Sigma^*$ is regular, then so is $L^*$. Show $L^* = \bigcup_{n\in\mathbb{N}^{+}}L^n \cup \{\epsilon\}$ by construct DFA.
$$\langle{Q, \Sigma, \delta, q_0, F}\rangle \rightarrow \langle{\mathcal{P}(Q), \Sigma, \delta'(S, \sigma) = \begin{cases} \{\delta(s, \sigma) \mid s \in S\} \cup \{q_0\} & \text{if } (\exists s \in S)\, \delta(s, \sigma) \in F\\ \{\delta(s, \sigma) \mid s \in S\} & \text{otherwise} \end{cases}, \{q_0\}, \{S \subseteq Q \mid S \cap F \neq \emptyset\}}\rangle$$
// WARNING: This cannot be proven by induction $L^* = \bigcup_{n\in\mathbb{N}}L^n$ because regular language are not closed under infinite unions. They are closed under finite unions.
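A minimal runnable sketch of the product construction used in the union/intersection/difference items above (the 5-tuple encoding of a DFA as Python containers and the example machines are my own choices, not from the lecture):
```python
from itertools import product

def dfa_product(A, B, accept):
    """Product construction: A and B are DFAs given as
    (states, alphabet, delta dict (state, symbol) -> state, start, finals).
    accept(in_Fa, in_Fb) picks the acceptance condition:
    `or` gives union, `and` gives intersection, (a and not b) gives difference."""
    Qa, S, da, qa0, Fa = A
    Qb, _, db, qb0, Fb = B
    Q = set(product(Qa, Qb))
    delta = {((p, q), a): (da[(p, a)], db[(q, a)]) for (p, q) in Q for a in S}
    F = {(p, q) for (p, q) in Q if accept(p in Fa, q in Fb)}
    return Q, S, delta, (qa0, qb0), F

def run(dfa, word):
    _, _, delta, q0, F = dfa
    q = q0
    for a in word:
        q = delta[(q, a)]
    return q in F

# A: even number of 0s; B: words ending in 1 (alphabet {"0", "1"})
A = ({0, 1}, {"0", "1"},
     {(0, "0"): 1, (1, "0"): 0, (0, "1"): 0, (1, "1"): 1}, 0, {0})
B = ({"x", "y"}, {"0", "1"},
     {("x", "0"): "x", ("x", "1"): "y", ("y", "0"): "x", ("y", "1"): "y"}, "x", {"y"})
union = dfa_product(A, B, lambda a, b: a or b)
assert run(union, "001") and run(union, "00") and not run(union, "0")
```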
Recursive Definition:
• $\emptyset$ is regular
• $(\forall a \in \Sigma) \{a\}$ is regular
• $L_1$, $L_2$ regular $\implies L_1 \cup L_2$ regular
• $L_1$, $L_2$ regular $\implies L_1L_2$ regular
• $L$ regular $\implies L^*$ regular
Other properties:
• While $\bigcup_{i=0}^k L_i$ is regular for every $k \in \mathbb{N}$, $\bigcup_{i \geq 0}L_i$ need not be regular.
• While $L_n = \{0^n1^n\}$ is regular $\bigcup_{n \geq 0}L_n = \{0^n1^n | n \in \mathbb{N}\}$ is not regular.
• Union of two irregular language can be regular.
DFA Construction Example:
\begin{align} Q &= \mathbb{P}(Q')\\ \Sigma &= \Sigma'\\ \delta(S_{\text{set of states}}, a) &= \{\delta'(s, a) | s \in S_{\text{set of states}}\}\\ q_0 &= Q'\\ F &= \{q \in Q | (\exists p \in q)(p \in F') \}\\ \end{align}
If $M$ accepts the input $w$, that means there exists $x \in \Sigma^*$ such that $M'$ accepts $xw \in \Sigma^*$. We also know that if $M'$ accepts $xw \in \Sigma^*$, then $w = x$ only if they have the same length.
\begin{align} Q &= \mathbb{P}(Q') \times \mathbb{P}(Q')\\ \Sigma &= \Sigma'\\ \delta(S_{\text{set of states}}, T_{\text{set of states}}, a) &= \{\delta'((s, t), a) | s \in S_{\text{set of states}}\}\\ q_0 &= \{q_0'\}\\ F &= \{q \in Q | (\exists p \in q)(p \in F') \}\\ \end{align}
## Sutner's Lecture
### Regular Languages
Tally Language: any $L \subseteq \{a\}^*$ over a one-letter alphabet $\Sigma = \{a\}$ is a tally language. Since we can interpret $a^n$ as the natural number $n$, a tally language is essentially a subset of $\mathbb{N}$
• Tally languages always produce a DFA shaped like a "lasso" (diagram omitted): a transient chain followed by a loop.
Linear: a set $A \subseteq \mathbb{N}$ is linear if $A = \{c + \sum_{i=1}^d c_i x_i \mid x_i \geq 0\}$ where $c$ is a constant and the $c_i$ are the periods.
• Example: $\{5 + 1x + 2y + 4z \mid x, y, z \geq 0\}$ (the number of variables is finite, but the set is infinite)
Semilinear: finite union of linear sets (every finite set is semilinear by allowing $d=0$ in linear set. $A = A_0 \cup \{b + (b_i + xp) | x \geq 0, i \in [k]\}$ where $A_0$ is finite) // TODO: understand this
• closed under union, intersection, complement
• $A \subseteq \mathbb{N} \text{ is semilinear } \iff A \text{ is a regular language}$ (proof by every DFA over $\{a\}$ is a "lasso": a transient followed by a loop, produces periodic set.)
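To see the "lasso" argument concretely, here is a tiny sketch (the unary DFA below is my own toy example, not from the lecture):
```python
def accepted_lengths(delta, q0, finals, limit=30):
    """For a unary ("tally") DFA over the alphabet {a}, the walk from q0 is a
    lasso: a transient followed by a loop. The accepted word lengths are
    therefore eventually periodic, i.e. the language is semilinear."""
    q, lengths = q0, []
    for n in range(limit):
        if q in finals:
            lengths.append(n)
        q = delta[q]          # unary alphabet: one outgoing edge per state
    return lengths

# Lasso 0 -> 1 -> 2 -> 3 -> 2 (transient 0,1; loop 2,3), final state 3
delta = {0: 1, 1: 2, 2: 3, 3: 2}
print(accepted_lengths(delta, 0, {3}))   # [3, 5, 7, ...] = {3 + 2x : x >= 0}
```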
Primitive Recursive: closed under Boolean operations (union, intersection, complement)
Regular Language: closed under
• Boolean operations (union, intersection, complement)
• concatenation, Kleene star
• reversal
• homomorphisms, inverse homomorphisms
### DFA Accessibility
Cartesian Product Automaton: $A = A_1 \times A_2 = \langle{Q_1 \times Q_2, \Sigma, \delta_1, \delta_2; (q_{01}, q_{02}), F_1 \times F_2}\rangle$
• $\mathcal{L}(A_1 \times A_2) = L_1 \cap L_2$
• union: if $F = F_1 \times Q_2 \cup Q_1 \times F_2$
• intersection if: $F = F_1 \times F_2$
• difference if: $F = F_1 \times (Q_2 - F_2)$
Accessible state: a state $p$ in a finite automaton is accessible if there is a run from the initial state to $p$.
Accessible automaton: if all states are accessible.
• The accessible part of $A$ is equivalent to $A$ (cutting away useless states)
Co-accessible state: a state $p$ in a finite automaton is co-accessible if there is a run from $p$ to a final state.
Co-accessible automaton: if all states are co-accessible.
• The co-accessible part of a DFA may not be a DFA.
Trim automaton: both accessible and co-accessible.
• can be constructed by graph algorithms
### State Complexity
$\text{stc}(L_1 \cap L_2) \leq \text{stc}(L_1) \times \text{stc}(L_2)$ (size of the product machine $A = \prod A_i$ may be exponential, PSPACE-hard)
• state complexity bound of concat language in DFA: $|A_1|2^{|A_2|}$
Lemma: $A_1 \equiv A_2 \iff \mathcal{L}(A_1) - \mathcal{L}(A_2) = \emptyset \land \mathcal{L}(A_2) - \mathcal{L}(A_1) = \emptyset$
NFA State Complexity: state complexity bound of concat language in NFA: $|A_1|+|A_2|$ (NFA accepts if there is a run from $I$ to $F$)
NFA Reversal: $A^{op} = \langle{Q, \Sigma, \tau^{op}; F, I}\rangle$ where $(p, a, q) \in \tau^{op} \iff (q, a, p) \in \tau$
NFAE: with autonomous transition (epsilon moves)
• transition function $\tau \subseteq Q \times (\Sigma \cup \{\epsilon\}) \times Q$
Hierarchy: $DFA \subseteq PDFA \subseteq NFA \subseteq NFAE$
• but the full power automaton $pow_f(A)$ of an NFA $A$ has state complexity $2^n$
Rabin-Scott construction:
• but some DFAs can be smaller than their NFAs
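A compact sketch of the Rabin–Scott subset construction (the NFA encoding as Python tuples/dicts is my own; the worst case produces $2^n$ states, but only reachable subsets are generated):
```python
def nfa_to_dfa(nfa):
    """Rabin-Scott subset construction. nfa = (alphabet, delta, starts, finals),
    where delta maps (state, symbol) -> set of successor states. The resulting
    DFA's states are frozensets of NFA states; only reachable subsets are built."""
    S, delta, I, F = nfa
    start = frozenset(I)
    states, trans, stack = {start}, {}, [start]
    while stack:
        T = stack.pop()
        for a in S:
            U = frozenset(q for t in T for q in delta.get((t, a), ()))
            trans[(T, a)] = U
            if U not in states:
                states.add(U)
                stack.append(U)
    finals = {T for T in states if T & set(F)}
    return states, S, trans, start, finals
```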
|
2022-11-27 09:20:56
|
{"extraction_info": {"found_math": true, "script_math_tex": 78, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 4, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000088214874268, "perplexity": 9986.951106323828}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710218.49/warc/CC-MAIN-20221127073607-20221127103607-00593.warc.gz"}
|
https://zenodo.org/record/5843028/export/csl
|
Journal article Open Access
# Implementation of Iterative bilateral filtering for removal of Rician noise in MR images using FPGA
Durga Pathrikar; V. N. Jirafe
### Citation Style Language JSON Export
{
"DOI": "10.35940/ijrte.C4351.099320",
"container_title": "International Journal of Recent Technology and Engineering (IJRTE)",
"language": "eng",
"title": "Implementation of Iterative bilateral filtering for removal of Rician noise in MR images using FPGA",
"issued": {
"date-parts": [
[
2020,
9,
30
]
]
},
"abstract": "<p>Magnetic resonance image noise reduction is important to process further and visual analysis. Bilateral filter is denoises image and also preserves edge. It proposes Iterative bilateral filter which reduces Rician noise in the magnitude magnetic resonance images and retains the fine structures, edges and it also reduces the bias caused by Rician noise. The visual and diagnostic quality of the image is retained. The quantitative analysis is based on analysis of standard quality metrics parameters like peak signal-to-noise ratio and mean structural similarity index matrix reveals that these methods yields better results than the other proposed denoising methods for MRI. Problem associated with the method is that it is computationally complex hence time consuming. It is not recommended for real time applications. To use in real time application a parallel implantation of the same using FPGA is proposed.</p>",
"author": [
{
"family": "Durga Pathrikar"
},
{
"family": "V. N. Jirafe"
}
],
"page": "279-284",
"volume": "9",
"type": "article-journal",
"issue": "3",
"id": "5843028"
}
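For orientation only, a minimal NumPy sketch of one bilateral-filter pass and a naive iterative wrapper is given below; this is not the paper's FPGA design, and the kernel radius, sigma values and iteration count are arbitrary assumptions:
```python
import numpy as np

def bilateral_pass(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """One bilateral-filter pass: each pixel becomes a weighted mean of its
    neighbourhood, weighted by spatial distance AND intensity difference,
    so edges are preserved while homogeneous regions are smoothed."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    out = np.zeros_like(img)
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))      # domain kernel
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            rng = np.exp(-((patch - img[i, j])**2) / (2 * sigma_r**2))  # range kernel
            weights = spatial * rng
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out

def iterative_bilateral(img, n_iter=3, **kwargs):
    # "Iterative" here simply means re-applying the filter to its own output.
    for _ in range(n_iter):
        img = bilateral_pass(img, **kwargs)
    return img
```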
|
2022-07-05 15:00:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23177316784858704, "perplexity": 3735.8633857094633}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104585887.84/warc/CC-MAIN-20220705144321-20220705174321-00008.warc.gz"}
|
http://en.wikipedia.org/wiki/Talk:Selenium
|
# Talk:Selenium
Selenium has been listed as one of the Natural sciences good articles under the good article criteria. If you can improve it further, please do so. If it no longer meets these criteria, you can reassess it.
Version 0.5 (Rated GA-Class)
This Natsci article has been selected for Version 0.5 and subsequent release versions of Wikipedia. It has been rated GA-Class on the assessment scale (comments).
WikiProject Elements (Rated GA-class, Top-importance)
GA This article has been rated as GA-Class on the quality scale.
Top This article has been rated as Top-importance on the importance scale.
## Article change
Article changed over to new Wikipedia:WikiProject Elements format by Dwmyers, Maveric149 and Malcolm Farmer. Elementbox converted 13:54, 1 July 2005 by Femto (previous revision was that of 02:42, 18 June 2005).
## Information Sources
Some of the text in this entry was rewritten from Los Alamos National Laboratory - Selenium. Additional text was taken directly from USGS Selenium Statistics and Information, USGS Periodic Table - Selenium, from the Elements database 20001107 (via dict.org), Webster's Revised Unabridged Dictionary (1913) (via dict.org) and WordNet (r) 1.7 (via dict.org). Data for the table was obtained from the sources listed on the subject page and Wikipedia:WikiProject Elements but was reformatted and converted into SI units.
## Nonmetal vs Metalloid
Is Selenium a nonmetal (as per this article) or a metalloid (as per Chalcogen)? Ian Cairns 01:16, 6 Mar 2005 (UTC)
Hi. I also have noticed a discrepancy:
http://simple.wikipedia.org/wiki/Metalloid
so, if you have came to some conclusions, could you share them?
It is a non-metal according to the Periodic table, so I'm gonna think that's right.--71.139.149.107 (talk) 07:28, 18 September 2008 (UTC)
According to Chemical Principles by Atkins and Jones, selenium is a nonmetal. A Swedish encyclopedia supports this, so the article is wrong.
Zotamedu (talk) 14:06, 4 February 2009 (UTC)
Selenium as a metalloid
It is reasonable that the term metalloid is meant to denote a group of chemical elements which exhibit characteristics roughly intermediate between metals and nonmetals. Although the famous metalloid stair step is a starting place, that line, traditionally drawn under B, on the right side of Al, under Si on the left side of As, etc. clearly doesn't define metalloids as all elements that lie symmetrically along that line. For instance, the unbroken diagonal of At Te As Si and B above the line isn't mirrored by Po Sb Ge and Al below the line: Al is not considered a metalloid; it clearly acts as a metal. Nor is At diagonally below Te, considered a metalloid.
Selenium is often referred to as a metalloid in the literature:
Here are a few articles that note that selenium is a metalloid:
Quote: “Selenium is a metalloid which is an essential micronutrient at low concentrations but becomes toxic at higher concentrations, with the range between being very narrow.”
Citation: David F. Lambert, Nicholas J. Turoczy; Comparison of digestion methods for the determination of selenium in fish tissue by cathodic stripping voltammetry; Analytica Chimica Acta 408 (2000) 97–102.
Quote: “The mobility and availability of the toxic metalloid selenium in the environment are largely controlled by sorption and redox reactions, which may proceed at temporal scales similar to that of subsurface water movement under saturated or unsaturated conditions.”
Citation: L. Charlet, A.C. Scheinost, C. Tournassat, J.M. Greneche, A. Géhin, A. Fernández-Martı´nez, S. Coudert, D. Tisserand, J. Brendle; Electron transfer at the mineral/water interface: Selenium reduction by ferrous iron sorbed on clay; Geochimica et Cosmochimica Acta, Volume 71; 2007; 5731-5749.
Quote: “The metalloid selenium is a required micronutrient in mammals needed for insertion into specific selenoproteins.”
Citation: Dennis Ganyc, William T. Self; High affinity selenium uptake in a keratinocyte model; FEBS Letters, Volume 582; 2008; 299-304.
Quote: “However, anodic stripping voltammetry (ASV), usually carried out with gold electrodes, is not so often applied to determination of Se because the stripping of this metalloid is accompanied by multiple peaks, which impair the reproducibility of the curves to be obtained.”
Citation: Claudete Fernandes Pereira, Fabiano Barbieri Gonzaga, Antonio Moraes Guarita-Santos, Jurandir Rodrigues SouzaDe, Determination of Se(IV) by anodic stripping voltammetry using gold electrodes made from recordable CDs, Talanta, Volume 69; 2006; 877-881.
Quote: “There is an increasing recognition that selenium is an important metalloid with industrial, environmental, biological and toxicological significance.”
Citation: F. Hellal, M. Dachraoui, Application of Doehlert matrix to the study of flow injection procedure for selenium (IV) determination, Talanta, Volume 63; 2004; 1089-1094.
Quote: “Selenium (Se) is a metalloid. It is essential for animals and humans.”
Citation: Lin Wu, Review of 15 years of research on ecotoxicology and remediation of land contaminated by agricultural drainage sediment rich in selenium, Ecotoxicology and Environmental Safety, Volume 57; 2004; 257-269.
Quote: “Selenium is a metalloid that in recent decades has gained international importance because of elevated residues found in fish and wildlife.” Citation: Steven J. Hamilton, Rationale for a tissue-based selenium criterion for aquatic life, Aquatic Toxicology, Volume 57; 2002; 85-100.
173.109.238.20 put an unsupported unsigned objection to this quote and citation. I undid that. --Eldin raigmore (talk) 20:06, 8 November 2009 (UTC)
Quote: “Urinary levels of another metalloid, selenium (Se), have recently been shown to be associated with increased As excretion and altered metabolite distribution.”
Citation: W. Jay Christian, Claudia Hopenhayn, Jose´ A. Centeno, Todor Todorov; Distribution of urinary selenium and arsenic among pregnant women exposed to arsenic in drinking water; Environmental Research 100 (2006) 115–122.
I propose adding the description in this article of selenium as a metalloid. Redchasteen (talk) 19:56, 27 April 2009 (UTC)
Why selenium is named as metalloid (really often), but very similar in metallicity carbon and phosphorus are almost always named as (ordinary) metalloids? Maybe grey selenium looks more metallic than black (metallic, grey) phosphorus and graphite and more selenides have metallic appearance than phosphides and carbides, but it not justify naming selenium as (half-)metalloid and carbon and phosphorus only as nonmetals! Graphite and black phosphorus, metallic allotropes of C and P, have much higher melting point than gray Se and are probably better conductors of heat (black P - 12.1 W·m−1·K−1, graphite - 119-165 W·m−1·K−1 (better than some metals), tellurium (element more metallic than Se) - (1.97–3.38) W·m−1·K−1). 95.49.249.123 (talk) 19:18, 14 October 2013 (UTC)
## Possibly a featured article?
I find this article pretty comprehensive and think would make a good candidate for a featured article. However, it seems to be lacking in a few things which I can't quite pin-point. Leftist 14:56, 21 March 2006 (UTC)
Maybe a hint of garlic? SBHarris 19:40, 7 September 2006 (UTC)
I'd like to see a list of foods in which selenium is naturally occurring. / LNelson, 27 Aug 2007 —Preceding unsigned comment added by 76.237.171.91 (talk) 12:55, August 27, 2007 (UTC)
## Typography
I don't have strong opinions about this, and I can understand why most people would never think of representing mu the way I originally did, but I think 100$\mu$g is rather clearer than 100μg . I think it must be the lack of curved lines on the standard μ . Elroch 16:47, 10 April 2006 (UTC)
The [itex] markup is altogether undefined with respect to the standard browser fonts; 100 $\mu$g looks horrible on my system. The micro sign µ is typographically unambiguous. It's a different beast from the character entity μ (in any coding) for the Greek language letter μ, which should not be used as a substitute for the micro prefix for this reason. Femto 18:51, 10 April 2006 (UTC)
The mu I used, $\mu$, just looks like a more curvy mu in Internet Explorer 6.0 and Firefox 1.5: presumably you are using a different browser. I was not aware there was a separate symbol for "micro". I thought the use of a lower case Greek letter was standard - did you not use the letter from the Greek alphabet at the bottom of the editor window? Elroch 20:31, 10 April 2006 (UTC)
No I don't have JavaScript enabled. Around here, I can enter the ISO 8859-1 code \$B5 directly from my keyboard (Want some € too? :). You can always substitute with the HTML entity µ (µ). Theoretically, the lower case Greek mu μ (μ) is standard; it should also appear typographically consistent with the ordinary text for everybody, though for historical coding reasons the separate micro is kept for the units. Femto 21:17, 10 April 2006 (UTC)
It certainly looks exactly the same on my two browsers. Whether it would on all others is another matter. Elroch 00:41, 11 April 2006 (UTC)
## Selenium and Mercury
Should information that selenium counteracts the negative effects of mercury be added?
MSTCrow 12:11, 17 June 2006 (UTC)
## HIV/AIDS
While it's true that people have existed in sub-Saharan Africa for longer than the AIDS epidemic, it does not follow that they have had selenium deficiency since before the AIDS epidemic. Furthermore, assuming that they did, it does not follow that they should have had AIDS then; the HIV virus is a necessary condition for AIDS and lacking (or predating) its existence, the contributory selenium factor (assuming it exists) is rendered moot.
The "fact" that copper-mining produces selenium as a by-product (source? I find it implausible that the existence of one raw element necessarily requires the existence of another) doesn't imply that said by-product is sufficiently present in agricultural soil and therefore in food. Therefore, nothing about copper-mining necessarily has any effect on selenium deficiency in areas in which it exists.
The lack of a published standard of what is subjectively considered "low" has no bearing on the fact that selenium, as a physical and measurable element, can be objectively found to be lower or higher in the diets of some regions. This is all that is needed to establish a negative correlation between dietary selenium and severity of the AIDS epidemic. 140.247.248.52 12:09, 7 September 2006 (UTC)
Actually, a study was just published about the effects of Selenium on the HIV virus [1]. I'm not sure if it's totally accurate, as the server that hosts the study is down for an hour or so, but it would certainly be worth looking into.Ahudson 18:13, 23 January 2007 (UTC)
It's all over Google News, so it can at least be referenced in a preliminary fashion from journalistic sources. Elijahmeeks 06:40, 24 January 2007 (UTC)
Perhaps this one could be interesting in selenium/HIV context [2] ??
## Need this info
Whats the cost per gram??? and the number of electrons at solid state??? i needed this for project and couldn't find it —The preceding unsigned comment was added by 70.18.122.163 (talkcontribs) 23:41, 3 December 2006 (UTC).
Prices are usually not given for elements, since they vary greatly according to purity and form. And change with time and supplier. I don't understand your question on electrons. What does solid state have to do with it? Generally the number of electrons her atom for isolated elements is the same as the atomic number. SBHarris 01:11, 4 December 2006 (UTC)
## Early reference to photovoltaic effects
I was doing some research in The Times of 1921, and on 24 September 1921 pages 6 and 8 there are articles on the use of selenium in an early "talkie" (film + phonograph recording). Jackiespeel 18:14, 12 February 2007 (UTC)
## Fact tag to Isotope section
Hi there,
I have just been dealing with some vandalism at Badminton where I noticed the only other contribution of the anon IP was some time ago on this article see [3]. Basically I have added fact tag because the information on the number of isotopes and how many are stable needs referencing and of course the edit by the anon IP may have added incorrect information, and need verification.
Cheers Lethaniol 15:39, 15 February 2007 (UTC)
Gave the section an overhaul, reference is isotopes of selenium. Femto 16:14, 15 February 2007 (UTC)
## Allotropes
Is the red form really an allotrope, or an oxide?--THobern 10:24, 23 April 2008 (UTC)
## Reference to Evolution_(film)
I added the reference to the movie Evolution as i felt that selenium was such a key component and the 'savior of the day'. I hope this doesn't need a spoiler warning! Megatonman (talk) 18:46, 14 May 2008 (UTC)
## Points to reference that does not mention it
In this page it says, "Natural sources of selenium include certain selenium-rich soils, and selenium that has been bioconcentrated by certain toxic plants such as locoweed. Anthropogenic sources of selenium include coal burning and the mining and smelting of sulfide ores.[3]"
The link to locoweed points to the locoweed page -- which does not mention selenium. That's a shame, because selenium poisoning is a big time problem in the state of Wyoming.
Probably the locoweed page needs this added.
Thanks, David Small Sept. 29 2008 71.229.213.194 (talk) 22:08, 28 September 2008 (UTC)
71.229.213.194 (talk) 22:08, 28 September 2008 (UTC)
## Shouldn't Selenium be in Category:Biology and pharmacology of chemical elements ?
Shouldn't Selenium be in Category:Biology and pharmacology of chemical elements ? Eldin raigmore (talk) 20:42, 16 May 2009 (UTC)
## The article should mention mitochondria.
The article should mention that, in eukaryotic cells, selenocysteine is found mostly in organelles that have their own extra-nuclear genetic material, such as mitochondria. It should mention that the evolution of mitochondria was a necessary step towards tolerating the high-oxygen atmosphere caused by the evolution of photosynthesis. Eldin raigmore (talk) 23:34, 29 June 2009 (UTC)
## Makes sense to some gibberish to others
Someone should really consider finding a way to simplify pages like these so those without a master's degree in the subject can understand them. Pyrolord777 (talk) 17:01, 30 December 2009 (UTC)
## Lacks an etymology section
most "element" articles have a section about where the word comes from. This one doesn't. 63.3.9.129 (talk) 20:27, 2 January 2010 (UTC)
Has a sentence in History and global demand.--Stone (talk) 19:55, 5 March 2010 (UTC)
## The Venturi view
Comment left in the article section by an IP user:
"This article contains a number of citations (regarding iodine as an antioxidant) to non-peer reviewed articles, expressing views that are not widely shared in teh scientific community. It would seem that a certain author (VEnturi) is using wikipedia to push his views, rather than go through the conventional route."
There may be something to this. Although the journals are peer-reviewed, so far as I can tell, at the same time, the Venturis are nearly the only authors writing about the evolution of iodine's role in biology. So what do we do? Perhaps summarize more in the main iodine article, offload even more to the main article on iodine in evolution, and there note that these views are those held by one major camp. Here's another example, cited elsewhere. It's a perfectly good article, but like most such things, contains a lot of hypothesis. What is NOT hypothesis is that iodine is heavily and actively concentrated in many non thryoid tissues, so it's surely doing something important there.
Nutr Health. 2009;20(2):119-34. Iodine in evolution of salivary glands and in oral health. Venturi S, Venturi M.
Servizio di Igiene, ASL n. 1, Regione Marche, Pennabilli (PU), Italy.
The authors hypothesize that dietary deficiency or excess of iodine (I) has an important role in oral mucosa and in salivary glands physiology. Salivary glands derived from primitive I-concentrating oral cells, which during embryogenesis, migrate and specialize in secretion of saliva and iodine. Gastro-salivary clearance and secretions of iodides are a considerable part of "gastro-intestinal cycle of iodides", which constitutes about 23% of iodides pool in the human body. Salivary glands, stomach and thyroid share I-concentrating ability by sodium iodide symporter (NIS) and peroxidase activity, which transfers electrons from iodides to the oxygen of hydrogen peroxide and so protects the cells from peroxidation. Iodide seems to have an ancestral antioxidant function in all I-concentrating organisms from primitive marine algae to more recent terrestrial vertebrates. The high I-concentration of thymus supports the important role of iodine in the immune system and in the oral immune defence. In Europe and in the world, I-deficiency is surprisingly present in a large part of the population. The authors suggest that the trophic, antioxidant and apoptosis-inductor actions and the presumed antitumour activity of iodides might be important for prevention of oral and salivary glands diseases, as for some other extrathyroidal pathologies.PMID: 19835108 [PubMed - indexed for MEDLINE]
Anybody else have thoughts on this matter? If not, I'll cut down and summarize the evolutionary section in this main element article a bit more, to reflect the lack of scientific consensus. Though (as a personal matter) I suspect the Venturis are probably more right than wrong. SBHarris 19:06, 5 March 2010 (UTC)
## Nano selenium
1)Does the sentence really need 5 refs and 2) for me nano size selenium is not notable enough to be mentioned in the elements article itself.
Nano-size selenium has equal efficacy, but much lower toxicity.26,27,28,29,30,31
--Stone (talk) 20:10, 5 March 2010 (UTC)
All refs come from same author (COI). Thus I removed all but one (most recent). Can't evaluate the validity of this. As to nano-Se (and other "nano" elemental solids), IMO it is notable enough for an element article. Materialscientist (talk) 04:58, 6 March 2010 (UTC)
## Really ugly element box photo replaced
Melted, fused selenium comes out as a shiny semi-metal when broken, with facets that look rather like those of silicon crystals. This photo was just ugly in the extreme. I've replaced it with the allotrope photo (yes, I know that's duplicated now), until somebody can come up with something better. SBHarris 04:21, 8 April 2010 (UTC)
## Pronunciation
I can't find any orthoepic authority for the pronunciation sih-LEN-ee-um. Every dictionary I checked, American and British, only lists sih-LEE-nee-um as a pronunciation. I have therefore removed that pronunciation. Someone can restore it if they can provide any authority for it. nohat (talk) 13:53, 19 November 2010 (UTC)
--Stone (talk) 21:32, 11 March 2012 (UTC)
## Lancet
Review on role in health doi:10.1016/S0140-6736(11)61452-9 JFW | T@lk 20:28, 29 April 2012 (UTC)
## Electron configuration
It should be 3 4 4, and not 4 3 4. Why do you write it in that random order? First the 3rd orbit and then 4th. — Preceding unsigned comment added by 31.210.181.158 (talk) 20:29, 25 May 2012 (UTC)
Well, some people believe that the 4s orbital will be filled up before 3d orbital, which is unpredictable in general, and we better follow simple rules of increasing orbital number. Changed. Materialscientist (talk) 23:44, 25 May 2012 (UTC)
## Se and impotency
I rolled back edits by Horoporo because the topic, sexual impotency, is always topical but the citations seemed specialized. We probably should follow WP:MEDRS in this area. In general, my impression is that the media are filled with advice on Se in the diet. IMHO, we should be cautious with well intentioned but potentially misleading/incomplete information that overlaps with the nutritional supplements business.--Smokefoot (talk) 12:54, 6 June 2012 (UTC)
## GA Review
Reviewer: Pyrotec (talk · contribs) 13:35, 12 June 2012 (UTC)
I will review. Pyrotec (talk) 13:35, 12 June 2012 (UTC)
I'm done an initial quick read of the article. On this basis, it looks a very strong contender for GA; and I would expect to be awarding GA at the end of this review. Having said that, I've not yet checked any of the references, so there may be some corrective actions - we will see. Pyrotec (talk) 15:53, 14 June 2012 (UTC)
I'm now reviewing the article against WP:WIAGA section by section, starting from Characteristics and then doing the WP:Lead last. Here I will be mostly highlighting "problems", so a section that is OK is likely to have few comments here. There will be an overall summary at the end of the review. Pyrotec (talk) 16:19, 14 June 2012 (UTC)
• Characteristics -
• Physical properties -
• Appears compliant. I had a look at one of my old course books (Cotton & Wilkinson, 3rd ed, 1972) - Se8 made by evaporation of solutions below 72 C, stable gray form can be grown from hot solutions of Se in aniline or from melts: worth mentioning?
• Isotopes & Occurrence -
• These two subsections appear to be compliant.
Pyrotec (talk) 18:13, 15 June 2012 (UTC)
• History -
• Pyrotec (talk) - Ref 21 (Trofast, Jan. Berzelius' Discovery of Selenium) has not been fully cited. The article comes from a journal, which has a publisher, journal title, volume & no., date of publication, pages and ISSN: none of these are given in the citation.
• Pyrotec (talk) - Ref 28 (The need for selenite and molybdate in the formation of formic dehydrogenase by members of the Coli-aerogenes group of bacteria. 57. 1954.) has not been fully cited. The article comes from a journal, which has a personal author, journal title, and page numbers: none of these are given in the citation.
• Otherwise OK.
• Production -
• Appears compliant.
• Chemical compounds -
• Chalcogen compounds -
• The text appears to contradict itself in respect of SeO3: i.e. "Selenium forms two stable oxides: selenium dioxide (SeO2) and selenium trioxide (SeO3). .... Unlike sulfur, which forms a stable trioxide, selenium trioxide is unstable and decomposes to the dioxide above 185 °C". I suspect that the problem lies in the wording of the "Unlike sulfur, ..." sentence.
Done SO3 is stable, but SeO3 isn't. I've fixed it. Double sharp (talk) 15:47, 16 June 2012 (UTC)
• There is no mention of how selenium trioxide is made (I looked it up and also when it was first prepared - 1930, by the way).
Done Would that be enough? Double sharp (talk) 15:47, 16 June 2012 (UTC)
• Pyrotec (talk) - The equation: "3 Se + 4 HNO3 → 3 H2SeO3 + 4 NO" is balanced for Se and N but not H and O; there appears to be one molecule of water missing.
• Halogen compounds -
...stopping for now. To be continued. Pyrotec (talk) 19:24, 15 June 2012 (UTC)
In passing, I've completed refs. 21 and 28 and the HNO3 reaction (water is also missing in the source book), though don't expect much from me on more time-consuming issues :-). Materialscientist (talk) 01:22, 16 June 2012 (UTC)
Thanks Materialscientist. Pyrotec (talk) 13:45, 16 June 2012 (UTC)
• Selenides -
• The reaction: "Al2Se3 + 6 H2O → 8 Al2O3 + H2Se" is clearly wrong (or perhaps the vandals have struck?).
Done Double sharp (talk) 16:06, 16 June 2012 (UTC)
• Other compounds -
• I find the wikilink on the chemical compound S4N4 visually distracting. I would prefer the link to be on its proper name Tetrasulfur tetranitride with the chemical formula afterwards. It has been done below for "(sulfur hexafluoride), SeF6", why the inconsistency?
Done Double sharp (talk) 16:06, 16 June 2012 (UTC)
• Organoselenium compounds -
• Looks OK.
• Applications -
• Ref 54 (^ Davis, Joseph R. (2001). %7C page 278 Copper and Copper Alloys. ISBN 978-0-87170-726-0.) is strangely cited. It's a book, but the publisher is not cited; the page number appears to be 278, but its not clear what the "%7C page 278" means.
Done Double sharp (talk) 16:11, 16 June 2012 (UTC)
• Refs 58, 59, 60 and 66 are books, but the publishers are not cited.
• Biological role -
...stopping for now. To be continued. Pyrotec (talk) 15:08, 16 June 2012 (UTC)
• I'm not sure I understand "2 GSH + H2O2----GSH-Px → GSSG + 2 H2O". The equation "2 GSH + H2O2 → GSSG + 2 H2O" seems to make more sense and its almost identical to the equation in Glutathione peroxidase.
• Otherwise, OK.
• Looks OK.
Pyrotec (talk) 08:27, 17 June 2012 (UTC)
### Overall summary
GA review – see WP:WIAGA for criteria
1. Is it reasonably well written?
A. Prose quality:
B. MoS compliance for lead, layout, words to watch, fiction, and lists:
2. Is it factually accurate and verifiable?
A. References to sources:
Well referenced.
B. Citation of reliable sources where necessary:
Well referenced.
C. No original research:
3. Is it broad in its coverage?
A. Major aspects:
B. Focused:
4. Is it neutral?
Fair representation without bias:
5. Is it stable?
No edit wars, etc:
6. Does it contain images to illustrate the topic?
A. Images are copyright tagged, and non-free images have fair use rationales:
B. Images are provided where possible and appropriate, with suitable captions:
7. Overall:
Pass or Fail:
I'm awarding this article GA status. Having looked at other chemical/element articles, such as Oxygen which is now an FA, I suspect that Selenium could in due course become a strong candidate for WP:FAC. Congratulations on a fine article. Pyrotec (talk) 08:27, 17 June 2012 (UTC)
## Selenium health effects and RCTs
The section on controversial health effects currently says the following:
A number of correlative epidemiological studies have implicated selenium deficiency (as measured by blood levels) in a number of serious or chronic diseases, such as cancer,[110] diabetes,[110] HIV/AIDS,[111] and tuberculosis. In addition, selenium supplementation has been found to be a chemopreventive for some types of cancer in some types of rodents. However, in randomized, blinded, controlled prospective trials in humans, selenium supplementation has not succeeded in reducing the incidence of any disease, nor has a meta-analysis of such selenium supplementation studies detected a decrease in overall mortality.
This is factually incorrect or at best misleading; health benefits, including a statistically significant reduced mortality from cancer, has been found in at least one RCT and not only in epidemiological and laboratory studies. The overall picture when averaging a bunch of trials appears to show a mortality RR 0.97 per the Cochrane review, which is a nonstatistically significant decrease. However, this may mislead the reader as to what the research actually shows. The most notable RCT which reduced cancer is the NPC study (see, e.g., Hatfield & Gladyshev 2009 for commentary). This is commonly viewed as being "disproved" by a larger trial called SELECT by the press and unsurprisingly is not going to show up in an abstract (perhaps not even in the full-text of the Cochrane review covering all antioxidants, which I don't have); however, researchers commenting often acknowledge that NPC was relatively focused on those with low serum selenium (for example, Hatfield & Gladyshev mention that "subjects in the NPC trial were selected, in part, on the basis of their having relatively low serum selenium levels (5); it was in this cohort that selenium supplementation was effective in reducing cancer risks") whereas I recall that in the SELECT, subjects were generally replete (an odd decision on the part of researchers, given that selenium was widely-regarded as toxic for many years). This needs to be corrected in the article. II | (t - c) 09:26, 21 April 2013 (UTC)
## Grey selenium
What is the electrical resistance of grey selenium?
That page informs that only 12 μΩcm (20 °C). It would be lower than resistivity of many metals.
http://www.periodni.com/se.html
What is thermal conductivity of grey allotrope of Se?
The same page informs that 2,04 W m-1K-1.
Is grey selenium ductile, malleable or elastic, or it is brittle?
79.191.191.243 (talk) 20:41, 23 October 2013 (UTC)
Why grey selenium is really often counts as metalloid (sometimes even (heavy) metal)) but black phosphorus and graphite only to nonmetals? They are much so similar to classify them separately in metallic character. Grey Se is famous photoconductor, but has wider band gap (about 1.8 eV) than black phosphorus (0.34 eV, narrower even than germanium and silicon). Carbon and phosphorus are associated with biology, but selenium is associated with toxicity. Maybe it is a source of that prejudice? Black P has thermal conductivity much over four times better than Se "metal".
95.49.68.75 (talk) 10:49, 26 October 2013 (UTC)
I suppose it's a case of books parroting each other out of context without thinking about the subject at hand? Se does behave like As in aquatic environments IIRC, so a metalloid classification of Se and not C and P isn't wholly unwarranted in that field. But I would say that if you're going to classify Se generally as a metalloid, C, P, and I should also be counted as metalloids. On Wikipedia we do not put any of them as metalloids, which is also consistent. See Metalloid for more info. On Wikipedia we have tried to use a better classification of metalloids, but we don't have control over what authors write and have to just report what they say (not that we have to follow their classification without question!). Double sharp (talk) 11:54, 26 October 2013 (UTC)
I agree that C, P and Se are too similar to make a difference in "metallicity class" among them (in Wikipedia they are named "polyatomic nonmetals" (along with sulfur), which border the typical metalloids (they are to the left of them)). It irritates me when I see a periodic table in which only selenium (of these three elements) is classified as a metalloid. In the German Wikipedia this flawed classification occurs.
95.49.68.75 (talk) 15:48, 26 October 2013 (UTC)
I am asking questions again: what is the electrical conductivity of pure hexagonal selenium (especially internally stressed, in darkness)? What is its coordination number?
95.49.94.5 (talk) 00:34, 13 November 2013 (UTC)
Is there a difference between trigonal and hexagonal Se? The hexagonal (metallic, grey) form probably even looks silvery, but has a far lower melting and boiling point than graphite and diamond. I think that the grey form can be confused with the black allotrope, which is significantly darker. What are the band gap and electrical conductivity of the GREY (not trigonal or black) allotrope of Se?
There are many pages on which it is written that hexagonal Se is the same as the trigonal form of the element.
Because trigonal tellurium has a much higher conductivity than trigonal selenium does (in the dark, 3–5 × 10⁻⁵ S/cm), this demonstration provides a potentially useful way for tuning the electrical conductivity of these nanostructures...
That would not be a very large conductivity, similar to that of pure fullerenes. The band gap of trigonal Se is about 1.6–1.8 eV (from the Internet), also similar to the band gaps of fullerenes.
http://www.phys.washington.edu/~cobden/P600/Selenium_nanowires.pdf
But there are also pages on which "selenium metal" really looks like a metal – a shiny, silvery-white substance:
|
2014-04-18 16:15:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6511486768722534, "perplexity": 5917.9176467505295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
|
https://proofwiki.org/wiki/Definition:Scale_(Measure)
|
# Definition:Scale (Measure)
A scale is a tool for providing a measurement of a measurable physical quantity $Q$ by means of an indication at a particular point on that scale.
|
2023-03-28 02:51:46
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5253888964653015, "perplexity": 484.8206123196475}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948756.99/warc/CC-MAIN-20230328011555-20230328041555-00705.warc.gz"}
|
http://www.pearltrees.com/u/8835889-formulation-encyclopedia
|
# Path integral formulation
The path integral also relates quantum and stochastic processes, and this provided the basis for the grand synthesis of the 1970s which unified quantum field theory with the statistical field theory of a fluctuating field near a second-order phase transition. The Schrödinger equation is a diffusion equation with an imaginary diffusion constant, and the path integral is an analytic continuation of a method for summing up all possible random walks. For this reason path integrals were used in the study of Brownian motion and diffusion a while before they were introduced in quantum mechanics.[3]
(Figure caption: These are just three of the paths that contribute to the quantum amplitude for a particle moving from point A at some time t0 to point B at some other time t1.)
Quantum action principle: the Hamiltonian in classical mechanics is derived from a Lagrangian, which is a more fundamental quantity in the light of special relativity.
Related: QUANTUM PHYSICS
Einstein–Maxwell–Dirac equations
Einstein–Maxwell–Dirac equations (EMD) are related to quantum field theory. The current Big Bang Model is a quantum field theory in a curved spacetime. Unfortunately, no such theory is mathematically well-defined; in spite of this, theoreticians claim to extract information from this hypothetical theory. On the other hand, the super-classical limit of the not mathematically well-defined QED in a curved spacetime is the mathematically well-defined Einstein–Maxwell–Dirac system. (One could get a similar system for the standard model.)
Functional integration
In an ordinary integral there is a function to be integrated—the integrand—and a region of space over which to integrate the function—the domain of integration. The process of integration consists of adding up the values of the integrand for each point of the domain of integration. Making this procedure rigorous requires a limiting procedure, where the domain of integration is divided into smaller and smaller regions. For each small region the value of the integrand cannot vary much, so it may be replaced by a single value. In a functional integral the domain of integration is a space of functions. For each function the integrand returns a value to add up.
Scattering theory
(Figure caption: Top: the real part of a plane wave travelling upwards. Bottom: the real part of the field after inserting in the path of the plane wave a small transparent disk of index of refraction higher than the index of the surrounding medium. This object scatters part of the wave field, although at any individual point, the wave's frequency and wavelength remain intact.)
In mathematics and physics, scattering theory is a framework for studying and understanding the scattering of waves and particles.
Invariance mechanics
The invariant quantities made from the input and output states of a system are the only quantities needed to give a probability amplitude to a given system. This is what is meant by the system obeying a symmetry. Since all the quantities involved are relative quantities, invariance mechanics can be thought of as taking relativity theory to its natural limit. Invariance mechanics has strong links with loop quantum gravity, in which the invariant quantities are based on angular momentum. In invariance mechanics, space and time come secondary to the invariants and are seen as useful concepts that emerge only in the large scale limit.
Schrödinger equation In quantum mechanics, the Schrödinger equation is a partial differential equation that describes how the quantum state of some physical system changes with time. It was formulated in late 1925, and published in 1926, by the Austrian physicist Erwin Schrödinger.[1] In classical mechanics, the equation of motion is Newton's second law, and equivalent formulations are the Euler–Lagrange equations and Hamilton's equations. All of these formulations are used to solve for the motion of a mechanical system and mathematically predict what the system will do at any time beyond the initial settings and configuration of the system. In quantum mechanics, the analogue of Newton's law is Schrödinger's equation for a quantum system (usually atoms, molecules, and subatomic particles whether free, bound, or localized).
Quantum triviality
In a quantum field theory, charge screening can restrict the value of the observable "renormalized" charge of a classical theory. If the only allowed value of the renormalized charge is zero, the theory is said to be "trivial" or noninteracting. Thus, surprisingly, a classical theory that appears to describe interacting particles can, when realized as a quantum field theory, become a "trivial" theory of noninteracting free particles.
Quantum statistical mechanics
Expectation: from classical probability theory, we know that the expectation of a random variable X is completely determined by its distribution DX, assuming, of course, that the random variable is integrable or that the random variable is non-negative.
Wigner's theorem
Wigner's theorem, proved by Eugene Wigner in 1931,[1] is a cornerstone of the mathematical formulation of quantum mechanics. The theorem specifies how physical symmetries such as rotations, translations, and CPT act on the Hilbert space of states. According to the theorem, any symmetry acts as a unitary or antiunitary transformation in the Hilbert space. More precisely, it states that a surjective (not necessarily linear) map on a complex Hilbert space that preserves the absolute value of the inner product must be induced by a unitary or antiunitary operator.
|
2018-11-14 11:56:43
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8465818166732788, "perplexity": 212.81770786121294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039741979.10/warc/CC-MAIN-20181114104603-20181114130603-00472.warc.gz"}
|
https://www.gradesaver.com/textbooks/math/algebra/college-algebra-6th-edition/chapter-1-equations-and-inequalities-exercise-set-1-7-page-198/129
|
## College Algebra (6th Edition)
The elevator can only carry the elevator operator plus less than 29 bags of cement safely in one trip.
To solve this exercise, we must first model the total weight for the elevator, including that of the elevator operator: $$Weight_{total} = 245 + 95c$$ where $c$ represents the total number of bags of cement. Since the maximum weight the elevator can safely carry is $3,000$ pounds, we can further model this in the following manner: $$Weight_{total} \lt 3,000$$ $$245 + 95c \lt 3,000$$ Solving for $c$: $$95c \lt 3,000 - 245$$ $$95c \lt 2,755$$ $$c \lt 29$$ so the elevator can only carry the elevator operator plus less than 29 bags of cement safely in one trip.
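As a quick numeric sanity check of the inequality, here is a minimal Python sketch (my own illustration, not part of the textbook solution); the function name is arbitrary:

```python
# Check the weight model 245 + 95c < 3000 at 28 and 29 bags.
def total_weight(bags):
    # 245 lb for the operator plus 95 lb per bag of cement, per the model above
    return 245 + 95 * bags

print(total_weight(28), total_weight(28) < 3000)  # 2905 True  -> 28 bags are safe
print(total_weight(29), total_weight(29) < 3000)  # 3000 False -> 29 bags reach the limit
```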
|
2018-12-13 11:41:31
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.805148720741272, "perplexity": 540.8518142181969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824675.15/warc/CC-MAIN-20181213101934-20181213123434-00393.warc.gz"}
|
https://www.saxo.com/dk/foundations-of-analysis_edmund-landau_hardback_9780821826935
|
# Foundations of Analysis (Ams Chelsea Publishing, nr. 79)
(Book, hardback)
Product details:
Language: English
ISBN-13: 9780821826935
Pages: 136
Published: 01-01-1900
Edition: 3rd Revised edition
No. in series: No. 79
Publisher's description
Why does $2 \times 2 = 4$? What are fractions? Imaginary numbers? Why do the laws of algebra hold? What are the properties of the numbers on which the Differential and Integral Calculus is based? In other words, What are numbers? And why do they have the properties we attribute to them? This work answers such questions.
Library description
Why does $2 \times 2 = 4$? What are fractions? Imaginary numbers? Why do the laws of algebra hold? And how do we prove these laws? What are the properties of the numbers on which the Differential and Integral Calculus is based? In other words, What are numbers? And why do they have the properties we attribute to them? Thanks to the genius of Dedekind, Cantor, Peano, Frege and Russell, such questions can now be given a satisfactory answer. This English translation of Landau's famous ""Grundlagen der Analysis"" - also available from the AMS - answers these important questions.
|
2017-10-17 13:26:10
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6899822354316711, "perplexity": 14053.903351431667}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187821189.10/warc/CC-MAIN-20171017125144-20171017145144-00641.warc.gz"}
|
http://patricktalkstech.com/standard-error/calculate-error-from-standard-deviation.html
|
# Calculate Error From Standard Deviation
## Contents
Note: The Student's probability distribution is a good approximation of the Gaussian when the sample size is over 100. The unbiased standard error plots as the ρ = 0 diagonal line with log-log slope −½. How do I calculate standard error when independent and dependent variables are given?
This approximate formula is for moderate to large sample sizes; the reference gives the exact formulas for any sample size, and can be applied to heavily autocorrelated time series (doi:10.2307/2340569). Recall that the regression line is the line that minimizes the sum of squared deviations of prediction (also called the sum of squares error).
## How To Calculate Standard Error In Excel
Larger sample sizes give smaller standard errors: as would be expected, larger sample sizes give smaller standard errors. The relationship with the standard deviation is defined such that, for a given sample size, the standard error equals the standard deviation divided by the square root of the sample size. The following expression can be used to calculate the upper and lower 95% confidence limits, where $\bar{x}$ is the sample mean and $SE$ is the standard error of the mean: $\bar{x} \pm 1.96 \cdot SE$. Repeating the sampling procedure as for the Cherry Blossom runners, take 20,000 samples of size n = 16 from the age-at-first-marriage population.
Reference: Dietz, David; Barr, Christopher; Çetinkaya-Rundel, Mine (2012), OpenIntro Statistics (Second ed.), openintro.org. As an example, consider an experiment that measures the speed of sound in a material along the three directions (along x, y and z coordinates).
It is rare that the true population standard deviation is known. However, the mean and standard deviation are descriptive statistics, whereas the standard error of the mean describes bounds on a random sampling process. A natural way to describe the variation of these sample means around the true population mean is the standard deviation of the distribution of the sample means. What is the mean of a data set at a 5% standard error?
This article will show you how it's done. The ages in that sample were 23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, and 55. Because of random variation in sampling, the proportion or mean calculated using the sample will usually differ from the true proportion or mean in the entire population. Can it be said to be smaller or larger than the standard deviation?
## Standard Error Formula Statistics
How do I find the mean of one group using just the standard deviation and the total number of two groups? The standard deviation of the age for the 16 runners is 10.23. For the age at first marriage, the population mean age is 23.44, and the population standard deviation is 4.72. Using a sample to estimate the standard error: in the examples so far, the population standard deviation σ was assumed to be known.
To calculate the standard error of any particular sampling distribution of sample means, enter the mean and standard deviation (sd) of the source population, along with the value of n. Scenario 1. The area between each z* value and the negative of that z* value is the confidence percentage (approximately).
The notation for standard error can be any one of SE, SEM (for standard error of measurement or mean), or SE. When the true underlying distribution is known to be Gaussian, although with unknown σ, then the resulting estimated distribution follows the Student t-distribution. Roman letters indicate that these are sample values. If the sample size is small (say less than 60 in each group) then confidence intervals should have been calculated using a value from a t distribution.
For example, if the 95% confidence intervals around the estimated fish sizes under Treatment A do not cross the estimated mean fish size under Treatment B, then fish sizes are significantly different. In cases where n is too small (in general, less than 30) for the Central Limit Theorem to be used, but you still think the data came from a normal distribution, a t* value should be used instead of a z* value. For example, the area between z* = 1.28 and z = −1.28 is approximately 0.80.
Sokal and Rohlf (1981)[7] give an equation of the correction factor for small samples of n < 20.
For example, a test was given to a class of 5 students, and the test results are 12, 55, 74, 79 and 90. You want to estimate the average weight of the cones they make over a one-day period, including a margin of error. Regressions differing in accuracy of prediction.
This is a sampling distribution. Mathematically, the standard error of the mean is given by $\sigma_M = \sigma / \sqrt{N}$, where $\sigma_M$ is the standard error of the mean, $\sigma$ is the standard deviation of the original distribution, and $N$ is the sample size. Moreover, this formula works for positive and negative ρ alike.[10] See also unbiased estimation of standard deviation for more discussion. As will be shown, the standard error is the standard deviation of the sampling distribution.
Consider the following scenarios. The age data are in the data set run10 from the R package openintro that accompanies the textbook by Dietz.[4] The graph shows the distribution of ages for the runners.
In other words, the range of likely values for the average weight of all large cones made for the day is estimated (with 95% confidence) to be between 10.30 − 0.17 and 10.30 + 0.17. This formula may be derived from what we know about the variance of a sum of independent random variables:[5] if $X_1, X_2, \ldots, X_n$ are $n$ independent observations from a population with variance $\sigma^2$, then the variance of their sum is $n\sigma^2$ and the variance of their mean is $\sigma^2 / n$. In fact, data organizations often set reliability standards that their data must reach before publication. The true standard error of the mean, using σ = 9.27, is $\sigma_{\bar{x}} = \sigma / \sqrt{n} = 9.27 / \sqrt{16} = 2.32$.
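A minimal Python sketch (my own illustration, not from any of the pages quoted above) applying the SEM formula to the 16 runner ages listed earlier; the variable names are my own:

```python
import math
import statistics

# Ages of the sample of 16 runners quoted above
ages = [23, 27, 28, 29, 31, 31, 32, 33, 34, 38, 40, 40, 48, 53, 54, 55]

n = len(ages)
s = statistics.stdev(ages)   # sample standard deviation (n - 1 in the denominator)
sem = s / math.sqrt(n)       # standard error of the mean: s / sqrt(n)

# The text quotes sd ≈ 10.23 for this sample, which gives SEM ≈ 2.56
print(f"n = {n}, mean = {statistics.mean(ages):.2f}, sd = {s:.2f}, SEM = {sem:.2f}")
```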
|
2018-04-25 12:05:24
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8878522515296936, "perplexity": 637.2589640954318}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947803.66/warc/CC-MAIN-20180425115743-20180425135743-00024.warc.gz"}
|
http://www.lojban.org/publications/cll.before-20160606/cll_v1.1_xhtml-chapter-chunks/chapter-letterals.html
|
Chapter 17. As Easy As A-B-C? The Lojban Letteral System And Its Uses
17.1. What's a letteral, anyway?
James Cooke Brown, the founder of the Loglan Project, coined the word letteral (by analogy with numeral) to mean a letter of the alphabet, such as f or z. A typical example of its use might be
Example 17.1.
There are fourteen occurrences of the letteral e in this sentence.
(Don't forget the one within quotation marks.) Using the word letteral avoids confusion with letter, the kind you write to someone. Not surprisingly, there is a Lojban gismu for letteral, namely lerfu, and this word will be used in the rest of this chapter.
Lojban uses the Latin alphabet, just as English does, right? Then why is there a need for a chapter like this? After all, everyone who can read it already knows the alphabet. The answer is twofold:
First, in English there are a set of words that correspond to and represent the English lerfu. These words are rarely written down in English and have no standard spellings, but if you pronounce the English alphabet to yourself you will hear them: ay, bee, cee, dee ... . They are used in spelling out words and in pronouncing most acronyms. The Lojban equivalents of these words are standardized and must be documented somehow.
Second, English has names only for the lerfu used in writing English. (There are also English names for Greek and Hebrew lerfu: English-speakers usually refer to the Greek lerfu conventionally spelled phi as fye, whereas fee would more nearly represent the name used by Greek-speakers. Still, not all English-speakers know these English names.) Lojban, in order to be culturally neutral, needs a more comprehensive system that can handle, at least potentially, all of the world's alphabets and other writing systems.
Letterals have several uses in Lojban: in forming acronyms and abbreviations, as mathematical symbols, and as pro-sumti – the equivalent of English pronouns.
In earlier writings about Lojban, there has been a tendency to use the word lerfu for both the letterals themselves and for the Lojban words which represent them. In this chapter, that tendency will be ruthlessly suppressed, and the term lerfu word will invariably be used for the latter. The Lojban equivalent would be lerfu valsi or lervla.
17.2. A to Z in Lojban, plus one
The first requirement of a system of lerfu words for any language is that they must represent the lerfu used to write the language. The lerfu words for English are a motley crew: the relationship between doubleyou and w is strictly historical in nature; aitch represents h but has no clear relationship to it at all; and z has two distinct lerfu words, zee and zed, depending on the dialect of English in question.
All of Lojban's basic lerfu words are made by one of three rules:
• to get a lerfu word for a vowel, add bu;
• to get a lerfu word for a consonant, add y;
• the lerfu word for ' is .y'y.
Therefore, the following table represents the basic Lojban alphabet:
' .y'y.
a .abu
b by.
c cy.
d dy.
e .ebu
f fy.
g gy.
i .ibu
j jy.
k ky.
l ly.
m my.
n ny.
o .obu
p py.
r ry.
s sy.
t ty.
u .ubu
v vy.
x xy.
y .ybu
z zy.
There are several things to note about this table. The consonant lerfu words are a single syllable, whereas the vowel and ' lerfu words are two syllables and must be preceded by pause (since they all begin with a vowel). Another fact, not evident from the table but important nonetheless, is that by and its like are single cmavo of selma'o BY, as is .y'y. The vowel lerfu words, on the other hand, are compound cmavo, made from a single vowel cmavo plus the cmavo bu (which belongs to its own selma'o, BU). All of the vowel cmavo have other meanings in Lojban (logical connectives, sentence separator, hesitation noise), but those meanings are irrelevant when bu follows.
Here are some illustrations of common Lojban words spelled out using the alphabet above:
Example 17.2.
ty. .abu ny. ry. .ubu t a n r u
Example 17.3.
ky. .obu .y'y. .abu k o ' a
Spelling out words is less useful in Lojban than in English, for two reasons: Lojban spelling is phonemic, so there can be no real dispute about how a word is spelled; and the Lojban lerfu words sound more alike than the English ones do, since they are made up systematically. The English words fail and vale sound similar, but just hearing the first lerfu word of either, namely eff or vee, is enough to discriminate easily between them – and even if the first lerfu word were somehow confused, neither vail nor fale is a word of ordinary English, so the rest of the spelling determines which word is meant. Still, the capability of spelling out words does exist in Lojban.
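The three rules are mechanical enough to sketch in code. The following Python fragment is my own illustration, not part of the CLL; it reproduces Example 17.2 and Example 17.3:

```python
VOWELS = set("aeiouy")

def lerfu_words(word):
    """Spell out a Lojban word as lerfu words, one per letter."""
    out = []
    for ch in word.lower():
        if ch == "'":
            out.append(".y'y.")            # the lerfu word for '
        elif ch in VOWELS:
            out.append("." + ch + "bu")    # vowel + bu, preceded by a pause
        elif ch.isalpha():
            out.append(ch + "y.")          # consonant + y
        # pauses (.) and syllable breaks (,) are simply skipped in this sketch
    return " ".join(out)

print(lerfu_words("tanru"))   # ty. .abu ny. ry. .ubu
print(lerfu_words("ko'a"))    # ky. .obu .y'y. .abu
```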
Note that the lerfu words ending in y were written (in Example 17.2 and Example 17.3) with pauses after them. It is not strictly necessary to pause after such lerfu words, but failure to do so can in some cases lead to ambiguities:
Example 17.4.
mi cy. claxu I lerfu-“c” without
I am without (whatever is referred to by) the letter “c”.
without a pause after cy would be interpreted as:
Example 17.5.
micyclaxu (Observative:)-doctor-without
Something unspecified is without a doctor.
A safe guideline is to pause after any cmavo ending in y unless the next word is also a cmavo ending in y. The safest and easiest guideline is to pause after all of them.
17.3. Upper and lower cases
Lojban doesn't use lower-case (small) letters and upper-case (capital) letters in the same way that English does; sentences do not begin with an upper-case letter, nor do names. However, upper-case letters are used in Lojban to mark irregular stress within names, thus:
Example 17.6.
.iVAN.
the name “Ivan” in Russian/Slavic pronunciation.
It would require far too many cmavo to assign one for each upper-case and one for each lower-case lerfu, so instead we have two special cmavo ga'e and to'a representing upper case and lower case respectively. They belong to the same selma'o as the basic lerfu words, namely BY, and they may be freely interspersed with them.
The effect of ga'e is to change the interpretation of all lerfu words following it to be the upper-case version of the lerfu. An occurrence of to'a causes the interpretation to revert to lower case. Thus, ga'e .abu means not a but A, and Ivan's name may be spelled out thus:
Example 17.7.
.ibu ga'e vy. .abu ny. to'a i [upper] V A N [lower]
The cmavo and compound cmavo of this type will be called shift words.
How long does a shift word last? Theoretically, until the next shift word that contradicts it or until the end of text. In practice, it is common to presume that a shift word is only in effect until the next word other than a lerfu word is found.
It is often convenient to shift just a single letter to upper case. The cmavo tau, of selma'o LAU, is useful for the purpose. A LAU cmavo must always be immediately followed by a BY cmavo or its equivalent: the combination is grammatically equivalent to a single BY. (See Section 17.14 for details.)
A likely use of tau is in the internationally standardized symbols for the chemical elements. Each element is represented using either a single upper-case lerfu or one upper-case lerfu followed by one lower-case lerfu:
Example 17.8.
tau sy. [single-shift] S
S (chemical symbol for sulfur)
Example 17.9.
tau sy. .ibu [single-shift] S i
Si (chemical symbol for silicon)
If a shift to upper-case is in effect when tau appears, it shifts the next lerfu word only to lower case, reversing its usual effect.
17.4. The universal bu
So far we have seen bu only as a suffix to vowel cmavo to produce vowel lerfu words. Originally, this was the only use of bu. In developing the lerfu word system, however, it proved to be useful to allow bu to be attached to any word whatsoever, in order to allow arbitrary extensions of the basic lerfu word set.
Formally, bu may be attached to any single Lojban word. Compound cmavo do not count as words for this purpose. The special cmavo ba'e, za'e, zei, zo, zoi, la'o, lo'u, si, sa, su, and fa'o may not have bu attached, because they are interpreted before bu detection is done; in particular,
Example 17.10.
zo bu the-word “bu”
the word “bu”
is needed when discussing bu in Lojban. It is also illegal to attach bu to itself, but more than one bu may be attached to a word; thus .abubu is legal, if ugly. (Its meaning is not defined, but it is presumably different from .abu.) It does not matter if the word is a cmavo, a cmene, or a brivla. All such words suffixed by bu are treated grammatically as if they were cmavo belonging to selma'o BY. However, if the word is a cmene it is always necessary to precede and follow it by a pause, because otherwise the cmene may absorb preceding or following words.
The ability to attach bu to words has been used primarily to make names for various logograms and other unusual characters. For example, the Lojban name for the happy face is .uibu, based on the attitudinal .ui that means happiness. Likewise, the smiley face, written :-) and used on computer networks to indicate humor, is called zo'obu. The existence of these names does not mean that you should insert .uibu into running Lojban text to indicate that you are happy, or zo'obu when something is funny; instead, use the appropriate attitudinal directly.
Likewise, joibu represents the ampersand character, &, based on the cmavo joi meaning mixed and. Many more such lerfu words will probably be invented in future.
The . and , characters used in Lojbanic writing to represent pause and syllable break respectively have been assigned the lerfu words denpa bu (literally, pause bu) and slaka bu (literally, syllable bu). The written space is mandatory here, because denpa and slaka are normal gismu with normal stress: denpabu would be a fu'ivla (word borrowed from another language into Lojban) stressed denPAbu. No pause is required between denpa (or slaka) and bu, though.
17.5. Alien alphabets
As stated in Section 17.1, Lojban's goal of cultural neutrality demands a standard set of lerfu words for the lerfu of as many other writing systems as possible. When we meet these lerfu in written text (particularly, though not exclusively, mathematical text), we need a standard Lojbanic way to pronounce them.
There are certainly hundreds of alphabets and other writing systems in use around the world, and it is probably an unachievable goal to create a single system which can express all of them, but if perfection is not demanded, a usable system can be created from the raw material which Lojban provides.
One possibility would be to use the lerfu word associated with the language itself, Lojbanized and with bu added. Indeed, an isolated Greek alpha in running Lojban text is probably most easily handled by calling it .alfas. bu. Here the Greek lerfu word has been made into a Lojbanized name by adding s and then into a Lojban lerfu word by adding bu. Note that the pause after .alfas. is still needed.
Likewise, the easiest way to handle the Latin letters h, q, and w that are not used in Lojban is by a consonant lerfu word with bu attached. The following assignments have been made:
h .y'y.bu
q ky.bu
w vy.bu
As an example, the English word quack would be spelled in Lojban thus:
Example 17.11.
ky.bu .ubu .abu cy. ky. q u a c k
Note that the fact that the letter c in this word has nothing to do with the sound of the Lojban letter c is irrelevant; we are spelling an English word and English rules control the choice of letters, but we are speaking Lojban and Lojban rules control the pronunciations of those letters.
A few more possibilities for Latin-alphabet letters used in languages other than English:
þ (thorn) ty.bu
ð (edh) dy.bu
However, this system is not ideal for all purposes. For one thing, it is verbose. The native lerfu words are often quite long, and with bu added they become even longer: the worst-case Greek lerfu word would be .Omikron. bu, with four syllables and two mandatory pauses. In addition, alphabets that are used by many languages have separate sets of lerfu words for each language, and which set is Lojban to choose?
The alternative plan, therefore, is to use a shift word similar to those introduced in Section 17.3. After the appearance of such a shift word, the regular lerfu words are re-interpreted to represent the lerfu of the alphabet now in use. After a shift to the Greek alphabet, for example, the lerfu word ty would represent not Latin t but Greek tau. Why tau? Because it is, in some sense, the closest counterpart of t within the Greek lerfu system. In principle it would be all right to map ty. to phi or even omega, but such an arbitrary relationship would be extremely hard to remember.
Where no obvious closest counterpart exists, some more or less arbitrary choice must be made. Some alien lerfu may simply not have any shifted equivalent, forcing the speaker to fall back on a bu form. Since a bu form may mean different things in different alphabets, it is safest to employ a shift word even when bu forms are in use.
Shifts for several alphabets have been assigned cmavo of selma'o BY:
lo'a Latin/Roman/Lojban alphabet
ge'o Greek alphabet
je'o Hebrew alphabet
jo'o Arabic alphabet
ru'o Cyrillic alphabet
The cmavo zai (of selma'o LAU) is used to create shift words to still other alphabets. The BY word which must follow any LAU cmavo would typically be a name representing the alphabet with bu suffixed:
Example 17.12.
zai .devanagar. bu
Devanagari (Hindi) alphabet
Example 17.13.
zai .katakan. bu
Japanese katakana syllabary
Example 17.14.
zai .xiragan. bu
Japanese hiragana syllabary
Unlike the cmavo above, these shift words have not been standardized and probably will not be until someone actually has a need for them. (Note the . characters marking leading and following pauses.)
In addition, there may be multiple visible representations within a single alphabet for a given letter: roman vs. italics, handwriting vs. print, Bodoni vs. Helvetica. These traditional font and face distinctions are also represented by shift words, indicated with the cmavo ce'a (of selma'o LAU) and a following BY word:
Example 17.15.
ce'a .xelveticas. bu
Helvetica font
Example 17.16.
ce'a .xancisk. bu
handwriting
Example 17.17.
ce'a .pavrel. bu
12-point font size
The cmavo na'a (of selma'o BY) is a universal shift-word cancel: it returns the interpretation of lerfu words to the default of lower-case Lojban with no specific font. It is more general than lo'a, which changes the alphabet only, potentially leaving font and case shifts in place.
Several sections at the end of this chapter contain tables of proposed lerfu word assignments for various languages.
17.6. Accent marks and compound lerfu words
Many languages that make use of the Latin alphabet add special marks to some of the lerfu they use. French, for example, uses three accent marks above vowels, called (in English) acute, grave, and circumflex. Likewise, German uses a mark called umlaut; a mark which looks the same is also used in French, but with a different name and meaning.
These marks may be considered lerfu, and each has a corresponding lerfu word in Lojban. So far, no problem. But the marks appear over lerfu, whereas the words must be spoken (or written) either before or after the lerfu word representing the basic lerfu. Typewriters (for mechanical reasons) and the computer programs that emulate them usually require their users to type the accent mark before the basic lerfu, whereas in speech the accent mark is often pronounced afterwards (for example, in German a umlaut is preferred to umlaut a).
Lojban cannot settle this question by fiat. Either it must be left up to default interpretation depending on the language in question, or the lerfu-word compounding cmavo tei (of selma'o TEI) and foi (of selma'o FOI) must be used. These cmavo are always used in pairs; any number of lerfu words may appear between them, and the whole is treated as a single compound lerfu word. The French word été, with acute accent marks on both e lerfu, could be spelled as:
Example 17.18.
tei .ebu .akut.bu foi ty. tei .akut.bu .ebu foi ( e acute ) t ( acute e )
and it does not matter whether akut. bu appears before or after .ebu; the teifoi grouping guarantees that the acute accent is associated with the correct lerfu. Of course, the level of precision represented by Example 17.18 would rarely be required: it might be needed by a Lojban-speaker when spelling out a French word for exact transcription by another Lojban-speaker who did not know French.
This system breaks down in languages which use more than one accent mark on a single lerfu; some other convention must be used for showing which accent marks are written where in that case. The obvious convention is to represent the mark nearest the basic lerfu by the lerfu word closest to the word representing the basic lerfu. Any remaining ambiguities must be resolved by further conventions not yet established.
Some languages, like Swedish and Finnish, consider certain accented lerfu to be completely distinct from their unaccented equivalents, but Lojban does not make a formal distinction, since the printed characters look the same whether they are reckoned as separate letters or not. In addition, some languages consider certain 2-letter combinations (like ll and ch in Spanish) to be letters; this may be represented by enclosing the combination in teifoi.
In addition, when discussing a specific language, it is permissible to make up new lerfu words, as long as they are either explained locally or well understood from context: thus Spanish ll or Croatian lj could be called .ibu, but that usage would not necessarily be universally understood.
Section 17.19 contains a table of proposed lerfu words for some common accent marks.
17.7. Punctuation marks
Lojban does not have punctuation marks as such: the denpa bu and the slaka bu are really a part of the alphabet. Other languages, however, use punctuation marks extensively. As yet, Lojban does not have any words for these punctuation marks, but a mechanism exists for devising them: the cmavo lau of selma'o LAU. lau must always be followed by a BY word; the interpretation of the BY word is changed from a lerfu to a punctuation mark. Typically, this BY word would be a name or brivla with a bu suffix.
Why is lau necessary at all? Why not just use a bu-marked word and announce that it is always to be interpreted as a punctuation mark? Primarily to avoid ambiguity. The bu mechanism is extremely open-ended, and it is easy for Lojban users to make up bu words without bothering to explain what they mean. Using the lau cmavo flags at least the most important of such nonce lerfu words as having a special function: punctuation. (Exactly the same argument applies to the use of zai to signal an alphabet shift or ce'a to signal a font shift.)
Since different alphabets require different punctuation marks, the interpretation of a lau-marked lerfu word is affected by the current alphabet shift and the current font shift.
17.8. What about Chinese characters?
Chinese characters (han 4 zi 4 in Chinese, kanji in Japanese) represent an entirely different approach to writing from alphabets or syllabaries. (A syllabary, such as Japanese hiragana or Amharic writing, has one lerfu for each syllable of the spoken language.) Very roughly, Chinese characters represent single elements of meaning; also very roughly, they represent single syllables of spoken Chinese. There is in principle no limit to the number of Chinese characters that can exist, and many thousands are in regular use.
It is hopeless for Lojban, with its limited lerfu and shift words, to create an alphabet which will match this diversity. However, there are various possible ways around the problem.
First, both Chinese and Japanese have standard Latin-alphabet representations, known as pinyin for Chinese and romaji for Japanese, and these can be used. Thus, the word han4zi4 is conventionally written with two characters, but it may be spelled out as:
Example 17.19.
.y'y.bu .abu ny. vo zy. .ibu vo h a n 4 z i 4
The cmavo vo is the Lojban digit 4. It is grammatical to intersperse digits (of selma'o PA) into a string of lerfu words; as long as the first cmavo is a lerfu word, the whole will be interpreted as a string of lerfu words. In Chinese, the digits can be used to represent tones. Pinyin is more usually written using accent marks, the mechanism for which was explained in Section 17.6.
The Japanese company named Mitsubishi in English is spelled the same way in romaji, and could be spelled out in Lojban thus:
Example 17.20.
my. .ibu ty. sy. .ubu by. .ibu sy. .y'y.bu .ibu m i t s u b i s h i
Alternatively, a really ambitious Lojbanist could assign lerfu words to the individual strokes used to write Chinese characters (there are about seven or eight of them if you are a flexible human being, or about 40 if you are a rigid computer program), and then represent each character with a tei, the stroke lerfu words in the order of writing (which is standardized for each character), and a foi. No one has as yet attempted this project.
17.9. lerfu words as pro-sumti
So far, lerfu words have only appeared in Lojban text when spelling out words. There are several other grammatical uses of lerfu words within Lojban. In each case, a single lerfu word or more than one may be used. Therefore, the term lerfu string is introduced: it is short for sequence of one or more lerfu words.
A lerfu string may be used as a pro-sumti (a sumti which refers to some previous sumti), just like the pro-sumti ko'a, ko'e, and so on:
Example 17.21.
.abu prami by.
A loves B
In Example 17.21, .abu and by. represent specific sumti, but which sumti they represent must be inferred from context.
Alternatively, lerfu strings may be assigned by goi, the regular pro-sumti assignment cmavo:
Example 17.22.
le gerku goi gy. cu xekri .i gy. klama le zdani
The dog, or G, is black. G goes to the house.
There is a special rule that sometimes makes lerfu strings more advantageous than the regular pro-sumti cmavo. If no assignment can be found for a lerfu string (especially a single lerfu word), it can be assumed to refer to the most recent sumti whose name or description begins in Lojban with that lerfu. So Example 17.22 can be rephrased:
Example 17.23.
le gerku cu xekri. .i gy. klama le zdani
The dog is black. G goes to the house.
(A less literal English translation would use D for dog instead.)
Here is an example using two names and longer lerfu strings:
Example 17.24.
la stivn. mark. djonz. merko Steven Mark Jones is-American.
.i la .aleksandr. paliitc. kuzNIETsyf. rusko Alexander Pavlovitch Kuznetsov is-Russian.
.i symyjy. tavla .abupyky. bau la lojban. SMJ talks-to APK in Lojban.
Perhaps Alexander's name should be given as ru'o.abupyky instead.
Example 17.25.
.abu dunda by. cy. A gives B C
Does this mean that A gives B to C? No. by. cy. is a single lerfu string, although written as two words, and represents a single pro-sumti. The true interpretation is that A gives BC to someone unspecified. To solve this problem, we need to introduce the elidable terminator boi (of selma'o BOI). This cmavo is used to terminate lerfu strings and also strings of numerals; it is required when two of these appear in a row, as here. (The other reason to use boi is to attach a free modifier – subscript, parenthesis, or what have you – to a lerfu string.) The correct version is:
Example 17.26.
.abu [boi] dunda by. boi cy. [boi]
A gives B to C
where the two occurrences of boi in brackets are elidable, but the remaining occurrence is not. Likewise:
Example 17.27.
xy. boi ro [boi] prenu cu prami X all persons loves.
X loves everybody.
requires the first boi to separate the lerfu string xy. from the digit string ro.
17.10. References to lerfu
The rules of Section 17.9 make it impossible to use unmarked lerfu words to refer to lerfu themselves. In the sentence:
Example 17.28.
.abu cu lerfu A is-a-letteral.
the hearer would try to find what previous sumti .abu refers to. The solution to this problem makes use of the cmavo me'o of selma'o LI, which makes a lerfu string into a sumti representing that very string of lerfu. This use of me'o is a special case of its mathematical use, which is to introduce a mathematical expression used literally rather than for its value.
Example 17.29.
me'o .abu cu lerfu
The-expression “a” is-a-letteral.
Now we can translate Example 17.1 into Lojban:
Example 17.30.
dei vasru vo lerfu po'u me'o .ebu this-sentence contains four letterals which-are the-expression “e”
This sentence contains four “e” s.
Since the Lojban sentence has only four e lerfu rather than fourteen, the translation is not a literal one – but Example 17.30 is a Lojban truth just as Example 17.1 is an English truth. Coincidentally, the colloquial English translation of Example 17.30 is also true!
The reader might be tempted to use quotation with luli'u instead of me'o, producing:
Example 17.31.
lu .abu li'u cu lerfu [quote] .abu [unquote] is-a-letteral.
(The single-word quote zo cannot be used, because .abu is a compound cmavo.) But Example 17.31 is false, because it says:
Example 17.32.
The word .abu is a letteral
which is not the case; rather, the thing symbolized by the word .abu is a letteral. In Lojban, that would be:
Example 17.33.
la'e lu .abu li'u cu lerfu The-referent-of [quote] .abu [unquote] is-a-letteral.
which is correct.
17.11. Mathematical uses of lerfu strings
This chapter is not about Lojban mathematics, which is explained in Chapter 18, so the mathematical uses of lerfu strings will be listed and exemplified but not explained.
• A lerfu string as mathematical variable:
Example 17.34.
li .abu du li by. su'i cy. the-number a equals the-number b plus c
a = b + c
• A lerfu string as function name (preceded by ma'o of selma'o MAhO):
Example 17.35.
li .y.bu du li ma'o fy. boi xy. the-number y equals the-number the-function f of x y = f(x)
Note the boi here to separate the lerfu strings fy and xy.
• A lerfu string as selbri (followed by a cmavo of selma'o MOI):
Example 17.36.
le vi ratcu ny.moi le'i mi ratcu the here rat is-nth-of the-set-of my rats
This rat is my Nth rat.
• A lerfu string as utterance ordinal (followed by a cmavo of selma'o MAI):
Example 17.37.
ny.mai
Nthly
• A lerfu string as subscript (preceded by xi of selma'o XI):
Example 17.38.
xy. xi ky. x sub k
• A lerfu string as quantifier (enclosed in veive'o parentheses):
Example 17.39.
vei ny. [ve'o] lo prenu ( “n” ) persons
The parentheses are required because ny. lo prenu would be two separate sumti, ny. and lo prenu. In general, any mathematical expression other than a simple number must be in parentheses when used as a quantifier; the right parenthesis mark, the cmavo ve'o, can usually be elided.
All the examples above have exhibited single lerfu words rather than lerfu strings, in accordance with the conventions of ordinary mathematics. A longer lerfu string would still be treated as a single variable or function name: in Lojban, .abu by. cy. is not the multiplication a × b × c but is the variable abc. (Of course, a local convention could be employed that made the value of a variable like abc, with a multi-lerfu-word name, equal to the values of the variables a, b, and c multiplied together.)
There is a special rule about shift words in mathematical text: shifts within mathematical expressions do not affect lerfu words appearing outside mathematical expressions, and vice versa.
17.12. Acronyms
An acronym is a name constructed of lerfu. English examples are DNA, NATO, CIA. In English, some of these are spelled out (like DNA and CIA) and others are pronounced more or less as if they were ordinary English words (like NATO). Some acronyms fluctuate between the two pronunciations: SQL may be ess cue ell or sequel.
In Lojban, a name can be almost any sequence of sounds that ends in a consonant and is followed by a pause. The easiest way to Lojbanize acronym names is to glue the lerfu words together, using ' wherever two vowels would come together (pauses are illegal in names) and adding a final consonant:
Example 17.40.
la dyny'abub. .i la ny'abuty'obub. .i la cy'ibu'abub.
DNA. NATO. CIA.
… .i la sykybulyl. .i la .ibubymym. .i la ny'ybucyc.
… SQL. IBM. NYC.
There is no fixed convention for assigning the final consonant. In Example 17.40, the last consonant of the lerfu string has been replicated into final position.
Some compression can be done by leaving out bu after vowel lerfu words (except for .y.bu, wherein the bu cannot be omitted without ambiguity). Compression is moderately important because it's hard to say long names without introducing an involuntary (and illegal) pause:
Example 17.41.
la dyny'am. .i la ny'aty'om. .i la cy'i'am.
DNA. NATO. CIA.
… .i la sykybulym. .i la .ibymym. .i la ny'ybucym.
… SQL. IBM. NYC.
In Example 17.41, the final consonant m stands for merko, indicating the source culture of these acronyms.
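The compressed construction of Example 17.41 is regular enough to sketch in code. The following Python fragment is my own illustration, not part of the CLL: it maps each Latin letter to a compressed lerfu syllable, inserts ' wherever two vowels would meet, and appends a caller-supplied final consonant (m here, standing for merko as above). The letter y and the non-Lojban letters h, q, w are not handled in this sketch.

```python
VOWELS = set("aeiouy")   # y counts as a vowel when checking for vowel clashes

def acronym_name(letters, final="m"):
    """Build a compressed Lojban acronym name, e.g. 'dna' -> dyny'am."""
    name = ""
    for ch in letters.lower():
        chunk = ch if ch in "aeiou" else ch + "y"   # vowels stay bare, consonants take y
        if name and name[-1] in VOWELS and chunk[0] in VOWELS:
            name += "'"                              # no two vowels may touch inside a name
        name += chunk
    return name + final + "."                        # names must end in a consonant

print(acronym_name("dna"))    # dyny'am.
print(acronym_name("nato"))   # ny'aty'om.
print(acronym_name("cia"))    # cy'i'am.
```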
Another approach, which some may find easier to say and which is compatible with older versions of the language that did not have a ' character, is to use the consonant z instead of ' :
Example 17.42.
la dynyzaz. .i la nyzatyzoz. .i la cyzizaz.
DNA. NATO. CIA.
… .i la sykybulyz. .i la .ibymyz. .i la nyzybucyz.
… SQL. IBM. NYC.
One more alternative to these lengthy names is to use the lerfu string itself prefixed with me, the cmavo that makes sumti into selbri:
Example 17.43.
la me dy ny. .abu that-named what-pertains-to “d” “n” “a”
This works because la, the cmavo that normally introduces names used as sumti, may also be used before a predicate to indicate that the predicate is a (meaningful) name:
Example 17.44.
la cribe cu ciska That-named “Bear” writes.
Bear is a writer.
Example 17.44 does not of course refer to a bear (le cribe or lo cribe) but to something else, probably a person, named Bear. Similarly, me dy ny. .abu is a predicate which can be used as a name, producing a kind of acronym which can have pauses between the individual lerfu words.
17.13. Computerized character codes
Since the first application of computers to non-numerical information, character sets have existed, mapping numbers (called character codes) into selected lerfu, digits, and punctuation marks (collectively called characters). Historically, these character sets have only covered the English alphabet and a few selected punctuation marks. International efforts have now created Unicode, a unified character set that can represent essentially all the characters in essentially all the world's writing systems. Lojban can take advantage of these encoding schemes by using the cmavo se'e (of selma'o BY). This cmavo is conventionally followed by digit cmavo of selma'o PA representing the character code, and the whole string indicates a single character in some computerized character set:
Example 17.45.
me'o se'e cixa cu lerfu la .asycy'i'is. The-expression [code] 36 is-a-letteral-in-set ASCII
loi merko rupnu for-the-mass-of American currency-units.
The character code 36 in ASCII represents American dollars. “$” represents American dollars. Understanding Example 17.45 depends on knowing the value in the ASCII character set (one of the simplest and oldest) of the$ character. Therefore, the se'e convention is only intelligible to those who know the underlying character set. For precisely specifying a particular character, however, it has the advantages of unambiguity and (relative) cultural neutrality, and therefore Lojban provides a means for those with access to descriptions of such character sets to take advantage of them.
As another example, the Unicode character set (also known as ISO 10646) represents the international symbol of peace, an inverted trident in a circle, using the base-16 value 262E. In a suitable context, a Lojbanist may say:
Example 17.46.
me'o se'e rexarerei sinxa le ka panpi the-expression [code] 262E is-a-sign-of the quality-of being-at-peace
When a se'e string appears in running discourse, some metalinguistic convention must specify whether the number is base 10 or some other base, and which character set is in use.
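For readers who want to check such code points themselves, here is a tiny Python sketch (my own illustration, not part of the text); Python's built-in chr() plays the role of the character-set table that the se'e convention presupposes:

```python
# Code 36 (decimal) is '$' in ASCII and Unicode alike;
# code 262E (base 16) is the Unicode peace symbol.
print(chr(36))       # $
print(chr(0x262E))   # ☮
```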
17.14. List of all auxiliary lerfu-word cmavo
bu      BU      makes previous word into a lerfu word
ga'e    BY      upper case shift
to'a    BY      lower case shift
tau     LAU     case-shift next lerfu word only
lo'a    BY      Latin/Lojban alphabet shift
ge'o    BY      Greek alphabet shift
je'o    BY      Hebrew alphabet shift
jo'o    BY      Arabic alphabet shift
ru'o    BY      Cyrillic alphabet shift
se'e    BY      following digits are a character code
na'a    BY      cancel all shifts
zai     LAU     following lerfu word specifies alphabet
ce'a    LAU     following lerfu word specifies font
lau     LAU     following lerfu word is punctuation
tei     TEI     start compound lerfu word
foi     FOI     end compound lerfu word
Note that LAU cmavo must be followed by a BY cmavo or the equivalent, where equivalent means: either any Lojban word followed by bu, another LAU cmavo (and its required sequel), or a teifoi compound cmavo.
17.15. Proposed lerfu words – introduction
The following sections contain tables of proposed lerfu words for some of the standard alphabets supported by the Lojban lerfu system. The first column of each list is the lerfu (actually, a Latin-alphabet name sufficient to identify it). The second column is the proposed name-based lerfu word, and the third column is the proposed lerfu word in the system based on using the cmavo of selma'o BY with a shift word.
These tables are not meant to be authoritative (several authorities within the Lojban community have niggled over them extensively, disagreeing with each other and sometimes with themselves). They provide a working basis until actual usage is available, rather than a final resolution of lerfu word problems. Probably the system presented here will evolve somewhat before settling down into a final, conventional form.
For Latin-alphabet lerfu words, see Section 17.2 (for Lojban) and Section 17.5 (for non-Lojban Latin-alphabet lerfu).
17.16. Proposed lerfu words for the Greek alphabet
alpha      .alfas. bu      .abu
beta       .betas. bu      by
gamma      .gamas. bu      gy
delta      .deltas. bu     dy
epsilon    .Epsilon. bu    .ebu
zeta       .zetas. bu      zy
eta        .etas. bu       .e'ebu
theta      .tetas. bu      ty. bu
iota       .iotas. bu      .ibu
kappa      .kapas. bu      ky
lambda     .lymdas. bu     ly
mu         .mus. bu        my
nu         .nus. bu        ny
xi         .ksis. bu       ksis. bu
omicron    .Omikron. bu    .obu
pi         .pis. bu        py
rho        .ros. bu        ry
sigma      .sigmas. bu     sy
tau        .taus. bu       ty
upsilon    .Upsilon. bu    .ubu
phi        .fis. bu        py. bu
chi        .xis. bu        ky. bu
psi        .psis. bu       psis. bu
omega      .omegas. bu     .o'obu
rough      .dasei,as. bu   .y'y
smooth     .psiles. bu     xutla bu
17.17. Proposed lerfu words for the Cyrillic alphabet
The second column in this listing is based on the historical names of the letters in Old Church Slavonic. Only those letters used in Russian are shown; other languages require more letters which can be devised as needed.
a             .azys. bu        .abu
b             .bukys. bu       by
v             .vedis. bu       vy
g             .glagolis. bu    gy
d             .dobros. bu      dy
e             .iestys. bu      .ebu
zh            .jivet. bu       jy
z             .zemlias. bu     zy
i             .ije,is. bu      .ibu
short i       .itord. bu       .itord. bu
k             .kakos. bu       ky
l             .liudi,ies. bu   ly
m             .myslites. bu    my
n             .naciys. bu      ny
o             .onys. bu        .obu
p             .pokois. bu      py
r             .riytsis. bu     ry
s             .slovos. bu      sy
t             .tyvriydos. bu   ty
u             .ukys. bu        .ubu
f             .friytys. bu     fy
kh            .xerys. bu       xy
ts            .tsis. bu        tsys. bu
ch            .tcriyviys. bu   tcys. bu
sh            .cas. bu         cy
shch          .ctas. bu        ctcys. bu
hard sign     .ier. bu         jdari bu
yeri          .ierys. bu       .y.bu
soft sign     .ieriys. bu      ranti bu
reversed e    .ecarn. bu       .ecarn. bu
yu            .ius. bu         .iubu
ya            .ias. bu         .iabu
17.18. Proposed lerfu words for the Hebrew alphabet
aleph       .alef. bu      .alef. bu
bet         .bet. bu       by
gimel       .gimel. bu     gy
daled       .daled. bu     dy
he          .xex. bu       .y'y
vav         .vav. bu       vy
zayin       .zai,in. bu    zy
khet        .xet. bu       xy. bu
tet         .tet. bu       ty. bu
yud         .iud. bu       .iud. bu
kaf         .kaf. bu       ky
lamed       .LYmed. bu     ly
mem         .mem. bu       my
nun         .nun. bu       ny
samekh      .samex. bu     samex. bu
ayin        .ai,in. bu     .ai,in bu
pe          .pex. bu       py
tzadi       .tsadik. bu    tsadik. bu
quf         .kuf. bu       ky. bu
resh        .rec. bu       ry
shin        .cin. bu       cy
sin         .sin. bu       sy
taf         .taf. bu       ty.
dagesh      .daGEC. bu     daGEC. bu
hiriq       .xirik. bu     .ibu
tzeirekh    .tseirex. bu   .eibu
segol       .seGOL. bu     .ebu
qubbutz     .kubuts. bu    .ubu
qamatz      .kamats. bu    .abu
patach      .patax. bu     .a'abu
sheva       .cyVAS. bu     .y.bu
kholem      .xolem. bu     .obu
shuruq      .curuk. bu     .u'ubu
17.19. Proposed lerfu words for some accent marks and multiple letters
This list is intended to be suggestive, not complete: there are lerfu such as Polish dark l and Maltese h-bar that do not yet have symbols.
acute              .akut. bu or .pritygal. bu [pritu galtu]
grave              .grav. bu or .zulgal. bu [zunle galtu]
circumflex         .cirkumfleks. bu or .midgal. bu [midju galtu]
tilde              .tildes. bu
macron             .makron. bu
breve              .brevis. bu
over-dot           .gapmoc. bu [gapru mokca]
umlaut/trema       .relmoc. bu [re mokca]
over-ring          .gapyjin. bu [gapru djine]
cedilla            .seDIlys. bu
double-acute       .re'akut. bu [re akut.]
ogonek             .ogoniek. bu
hacek              .xatcek. bu
ligatured fi       tei fy. ibu foi
Danish/Latin ae    tei .abu .ebu foi
Dutch ij           tei .ibu jy. foi
German es-zed      tei sy. zy. foi
17.20. Proposed lerfu words for radio communication
There is a set of English words which are used, by international agreement, as lerfu words (for the English alphabet) over the radio, or in noisy situations where the utmost clarity is required. Formally they are known as the ICAO Phonetic Alphabet, and are used even in non-English-speaking countries.
This table presents the standard English spellings and proposed Lojban versions. The Lojbanizations are not straightforward renderings of the English sounds, but make some concessions both to the English spellings of the words and to the Lojban pronunciations of the lerfu (thus carlis. bu, not tcarlis. bu).
Alfa        .alfas. bu
Bravo       .bravos. bu
Charlie     .carlis. bu
Delta       .deltas. bu
Echo        .ekos. bu
Foxtrot     .fokstrot. bu
Golf        .golf. bu
Hotel       .xoTEL. bu
India       .indias. bu
Juliet      .juliet. bu
Kilo        .kilos. bu
Lima        .limas. bu
Mike        .maik. bu
November    .novembr. bu
Oscar       .oskar. bu
Papa        .paPAS. bu
Quebec      .keBEK. bu
Romeo       .romios. bu
Sierra      .sieras. bu
Tango       .tangos. bu
Uniform     .Uniform. bu
Victor      .viktas. bu
Whiskey     .uiskis. bu
X-ray       .eksreis. bu
Yankee      .iankis. bu
Zulu        .zulus. bu
|
2019-03-21 02:10:00
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7481642365455627, "perplexity": 10318.59860846363}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202476.48/warc/CC-MAIN-20190321010720-20190321032720-00013.warc.gz"}
|
https://top10webjs.com/page/158/
|
# [Vue.js] v-if using table row field and date range
I’m trying to highlight a <td> based on that field’s date range from today’s date.
I’ve been trying to use current <td> date value less than Date.now() - #(number of days) to determine whether to highlight the <td> (green, yellow, or red) but not having any success with how I’m doing so.
<td v-if="props.item.date < Date.now() - 2">
  <v-icon small style="color:green;">fiber_manual_record</v-icon>{{ props.item.date }}
</td>
<td v-else-if="props.item.date < Date.now() - 7">
  <v-icon small style="color:yellow;">fiber_manual_record</v-icon>{{ props.item.date }}
</td>
<td v-else>
  <v-icon small style="color:red;">fiber_manual_record</v-icon>{{ props.item.date }}
</td>
I’d like to think I’m close to the solution but I may not be doing it the appropriate way. Any help would be greatly appreciated.
UPDATE 2
<td v-if="Date.parse(props.item.date) > Date.now()">
  <v-icon small class="greenDate">fiber_manual_record</v-icon>{{ props.item.date_sent }}
</td>
<td v-else-if="Date.parse(props.item.date) < Date.now()">
  <v-icon small class="yellowDate">fiber_manual_record</v-icon>{{ props.item.date_sent }}
</td>
<td v-else>
  <v-icon small class="redDate">fiber_manual_record</v-icon>{{ props.item.date_sent }}
</td>
I tried this just to test whether the conditions are recognized at all, and it doesn't seem like it: it always hits the last condition (red). Maybe it's because props.item.date is formatted mm/dd/yyyy. I also realized that the conditions in my original example would conflict, since there is both a "less than 2 days" and a "less than 7 days" check, but no condition for dates that are more than 2 days but less than 7 days old.
### Solution :
The Date.now() method returns the current time as the number of milliseconds since the Unix epoch.
So if you want to go back some number of days, you have to convert the days to milliseconds and subtract them: 1 day = 1000 * 60 * 60 * 24 milliseconds.
The Date.parse() method parses a string representation of a date and returns the same kind of millisecond timestamp.
Convert props.item.date to milliseconds using Date.parse
for example change code like below
<td v-if="Date.parse(props.item.date) < Date.now() - (2 * 1000 * 60 * 60 * 24)">
  <v-icon small style="color:green;">fiber_manual_record</v-icon>{{ props.item.date }}
</td>
<td v-else-if="Date.parse(props.item.date) < Date.now() - (7 * 1000 * 60 * 60 * 24)">
  <v-icon small style="color:yellow;">fiber_manual_record</v-icon>{{ props.item.date }}
</td>
<td v-else>
  <v-icon small style="color:red;">fiber_manual_record</v-icon>{{ props.item.date }}
</td>
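As noted in the question, a date older than 7 days also satisfies the 2-day check, so the branch for the larger number of days should come first. One way to keep the template readable is to move the millisecond arithmetic into a component method; this is only a sketch, not part of the original answer, and olderThanDays is a hypothetical helper name:

methods: {
  // true when dateStr lies more than n days in the past
  olderThanDays(dateStr, n) {
    return Date.parse(dateStr) < Date.now() - n * 24 * 60 * 60 * 1000
  }
}

<td v-if="olderThanDays(props.item.date, 7)"><!-- oldest bucket --></td>
<td v-else-if="olderThanDays(props.item.date, 2)"><!-- middle bucket --></td>
<td v-else><!-- most recent bucket --></td>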
# [Vue.js] Making only one module persistent with vuex-persistedstate
I need to use vuex-persistedstate to make only one of my modules to persists state through refresh of the page.
Right now, it doesn’t work when I use plugins: [createPersistedState()] only inside the user module.
plugins: [createPersistedState()] works only when I use it inside the store's index.js, but it makes all modules persistent, which is not what I want.
Please, is there a way how to configure vuex-persistedstate to work only with one module?
index.js
//import createPersistedState from ‘vuex-persistedstate’
import Vue from 'vue'
import Vuex from ‘vuex’
import user from ‘./modules/user’
import workout from ‘./modules/workout’
Vue.use(Vuex)
export default new Vuex.Store({
state: {
},
getters: {
},
mutations: {
},
actions: {
},
modules: {
user,
workout
},
//This makes all store modules persist through page refresh
//plugins: [createPersistedState()]
})
user.js
import { USER } from ‘../mutation-types’
import createPersistedState from ‘vuex-persistedstate’
export default {
namespaced: true,
state: {
darkMode: true
},
getters: {
getDarkMode: state => () => state.darkMode
},
actions: {
toggleDarkMode: ({commit}) => commit(USER.TOGGLE_DARKMODE)
}
mutations: {
[USER.TOGGLE_DARKMODE]: (state) => state.darkMode = !state.darkMode
},
//This doesn’t work
plugins: [createPersistedState()]
}
### Solution :
Looking at the API docs, you will need to configure the plugin to only persist a certain subset of the store.
export default new Vuex.Store({
// …
plugins: [
createPersistedState({
paths: ['user'],
}),
],
});
From the docs above:
paths <Array>: An array of any paths to partially persist the state. If no paths are given, the complete state is persisted. Paths must be specified using dot notation. If using modules, include the module name. eg: “auth.user” (default: [])
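Following that note, if you only wanted part of the user module persisted you could name the leaf with dot notation; this is a sketch based on the quoted docs, reusing the darkMode field from the question:

plugins: [
  createPersistedState({
    // persist only user.darkMode instead of the whole user module
    paths: ['user.darkMode'],
  }),
],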
# [Vue.js] Carrierwave upload an image from vue front to rails api
I'm not sure how to make the axios POST request for the image.
This is what my json object looks like.
{
“id”:20,
“title”:”pineapple”,
“text”:”pineapple”,
“date”:null,
“created_at”:”2019-03-23T01:42:48.142Z”,
“updated_at”:”2019-03-23T01:42:48.142Z”,
“image”:{
“url”:null
}
}
This is my image input from the vue.js form.
<input type=”file”
id=”file”
ref=”myFiles”
class=”custom-file-input”
@change=”takeFile”
multiple>
Here is me trying to make sense of it.
export default {
data() {
return {
blog: {
title: ‘’,
content: ‘’,
}
}
},
methods: {
submitArticle(blog) {
axios.post(‘http://localhost:3000/articles', {
title: blog.title,
text: blog.content,
image: {
}
})
.then(function (response) {
console.log(response);
})
.catch(function (error) {
console.log(error);
});
},
takeFile(event) {
console.log(this.$refs.myFiles.files); this.blog.link = this.$refs.myFiles.files
}
}
}
Here is a link to the file in my repo.
### Solution :
First, this.$refs.myFiles.files returns an array of files. Change the method to store the first file on blog.link:

takeFile(event) {
  this.blog.link = this.$refs.myFiles.files[0]
}
Now to send file in the post request, you should use FormData:
submitArticle(blog) {
  let formData = new FormData()
  formData.append("article[title]", blog.title)
  formData.append("article[text]", blog.content)
  // append the file picked in takeFile; the "article[image]" key is an assumption
  // and must match whatever your Rails controller's strong parameters expect
  formData.append("article[image]", blog.link)
  axios.post('http://localhost:3000/articles', formData, {
    // multipart, not JSON; when given a FormData body the browser fills in the boundary
    headers: { 'Content-Type': 'multipart/form-data' }
  }).then(function (response) {
    console.log(response)
  }).catch(function (error) {
    console.log(error)
  })
},
# [Vue.js] Vue JS Difference of data() { return {} } vs data () => ({ })
I’m curious both of this data function, is there any difference between this two.
I usually saw is
data () {
return {
obj
}
}
And ES6 “fat arrow” which I typically used
data:()=>({
obj
})
### Solution :
No difference in this specific example, but there is a very important difference between the two notations, especially when it comes to Vue: this won't refer to the Vue instance inside an arrow function.
So if you ever have something like:
export default {
props: [‘stuffProp’],
data: () => ({
myData: ‘someData’,
myStuff: this.stuffProp
})
}
It won’t work as you expect. The this.stuffProp won’t get the stuffProp prop’s value (see below for more on the reason why).
Fix
Change the arrow function to, either (ES6/EcmaScript 2015 notation):
export default {
props: [‘stuffProp’],
data() { // <== changed this line
return {
myData: ‘someData’,
myStuff: this.stuffProp
}
}
}
Or to (regular, ES5 and before, notation):
export default {
props: [‘stuffProp’],
data: function() { // <== changed this line
return {
myData: ‘someData’,
myStuff: this.stuffProp
}
}
}
Reason
Don't use arrow functions (() => {}) when declaring Vue methods. They pick up this from the enclosing scope (possibly window), so it will not be the Vue instance.
From the API Docs:
Note that you should not use an arrow function with the data property (e.g. data: () => { return { a: this.myProp }). The reason is arrow functions bind the parent context, so this will not be the vue.js instance as you expect and this.myProp will be undefined.
# [Vue.js] Vuetify Navigation Drawer Drag To Resize
So there is gotten it to where the the ‘drag to resize’ works - it just feels a little laggy… does anyone know why this may be, and how to fix it?
there is tried forcing a refresh using vm.$forceUpdate() but that did not seem to do anything.. The CodePen can be found here. ### Solution : Thats because of transition effect on navigation drawer. set transition to initial at mouse down, then release that on mouse up. at mousedown add el.style.transition =’initial’; at mouseup add el.style.transition =’’; # [Vue.js] Abnormal behavior of Select in Vue.js there is a primitive vue.js widget, which consists of two pages. In each page you can select dummy options from dropdown (which change value1 and value2 variables accordingly) The problem is when I move from “stepOne” to “stepTwo”, for some reason the value of value2 becomes undefined unexpectedly (even though there is no logical connection between value1 and value2, nor step variable). Ideally, after the first step, in second step it should automatically select “option 1”, as the value equals to value2=1 I wonder why undefined is assigned to value2, and how can I prevent the given behavior Here is my sample code, that contains this weird behavior: <div> <div id=”app”> <div v-if=”step === steps.stepOne”> <p>This is step One</p> <select v-model=”value1”> <option v-for=”item in array1” :value=”item.value”>{ item.name }</option> </select> <button @click=”changeStep()”>Next</button> </div> <div v-if=”step === steps.stepTwo”> <p>This is step Two</p> <select v-model=”value2”> <option value=”2”>option 2</option> <option value=”0”>option 0</option> <option value=”1”>option 1</option> </select> </div> value1: {value1} <br> value2: {value2} </div> </div> <script src=”https://cdn.jsdelivr.net/npm/vue"></script> <script> var steps = { stepOne: 1, stepTwo: 2, }; var app = new Vue({ el: ‘#app’, data: { step: steps.stepOne, value1: ‘b’, value2: 1, array1: [ { name: ‘option a’, value: ‘a’, }, { name: ‘option b’, value: ‘b’, }, ] }, methods: { changeStep() { this.step = steps.stepTwo; } }, watch: { value1: function(newValue) { console.log(“value1: “ + newValue); }, value2: function(newValue) { console.log(“value2: “ + newValue); } }, }); </script> ### Solution : there is no idea about how vue.js works, but I’ve tried putting a : before the value attribute and it started working! <div> <div id=”app”> <div v-if=”step === steps.stepOne”> <p>This is step One</p> <select v-model=”value1”> <option v-for=”item in array1” :value=”item.value”>{ item.name }</option> </select> <button @click=”changeStep()”>Next</button> </div> <div v-if=”step === steps.stepTwo”> <p>This is step Two</p> <select v-model=”value2”> <option :value=”2”>option 2</option> <option :value=”0”>option 0</option> <option :value=”1”>option 1</option> </select> </div> value1: {value1} <br> value2: {value2} </div> </div> <script src=”https://cdn.jsdelivr.net/npm/vue"></script> <script> var steps = { stepOne: 1, stepTwo: 2, }; var app = new Vue({ el: ‘#app’, data: { step: steps.stepOne, value1: ‘b’, value2: 1, array1: [ { name: ‘option a’, value: ‘a’, }, { name: ‘option b’, value: ‘b’, }, ] }, methods: { changeStep() { this.step = steps.stepTwo; } }, watch: { value1: function(newValue) { console.log(“value1: “ + newValue); }, value2: function(newValue) { console.log(“value2: “ + newValue); } }, }); </script> # [Vue.js] Convert String to Object Key Name I’m sending different objects to a component outside, and component data varies depending on objects. I’m getting the names with the Object.key function because the keywords I send have different key. Then to sort by the key. For this I need to define the name I received with Object.key function. 
How can I do it? upSortTable(items, val) { //items = Object, //val = index let Keys = Object.keys(items[0]); // [“item_id”,”item_title”] let keyname = Keys[val]; //item_id String value //want to use in sort function as b.item_id return items.sort(function(a, b) { return b.keyname - a.keyname; }); }, ### Solution : You’ll need to use computed property: return items.sort(function(a, b) { return b[keyname] - a[keyname]; }); When you do a.keyname you’re actually looking for the property keyname in a itself. # [Vue.js] Creating parallax effect with Vuetify using an SVG file vue.jstify provides the v-parallax component to create a typical parallax effect, which creates a 3d effect that makes an image appear to scroll slower than the window: <template> <v-parallax src=”https://cdn.vuetifyjs.com/images/parallax/material.jpg"></v-parallax> </template> Unfortunately it doesn’t work with svg files. Does anybody know whether there is an easy way to create a parallax effect with vuetify or vue, but using an svg file? ### Solution : You can use it if you pass the svg data base64 encoded data:image/svg+xml;base64,[data] where [data] would be the data you get by passing it into an encoder like https://www.base64encode.org/ example: <v-parallax src=”data:image/svg+xml;base64,PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iaXNvLTg4NTktMSI/PjxzdmcgdmVyc2lvbj0iMS4xIiBpZD0iQ2FwYV8xIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNTggNTgiIHN0eWxlPSJlbmFibGUtYmFja2dyb3VuZDpuZXcgMCAwIDU4IDU4OyIgeG1sOnNwYWNlPSJwcmVzZXJ2ZSI+PGNpcmNsZSBzdHlsZT0iZmlsbDojRUJCQTE2OyIgY3g9IjI5IiBjeT0iMjkiIHI9IjI5Ii8+PHBvbHlnb24gc3R5bGU9ImZpbGw6I0ZGRkZGRjsiIHBvaW50cz0iNDQsMjkgMjIsNDQgMjIsMjkuMjczIDIyLDE0Ii8+PHBhdGggc3R5bGU9ImZpbGw6I0ZGRkZGRjsiIGQ9Ik0yMiw0NWMtMC4xNiwwLTAuMzIxLTAuMDM4LTAuNDY3LTAuMTE2QzIxLjIwNSw0NC43MTEsMjEsNDQuMzcxLDIxLDQ0VjE0YzAtMC4zNzEsMC4yMDUtMC43MTEsMC41MzMtMC44ODRjMC4zMjgtMC4xNzQsMC43MjQtMC4xNSwxLjAzMSwwLjA1OGwyMiwxNUM0NC44MzYsMjguMzYsNDUsMjguNjY5LDQ1LDI5cy0wLjE2NCwwLjY0LTAuNDM3LDAuODI2bC0yMiwxNUMyMi4zOTQsNDQuOTQxLDIyLjE5Nyw0NSwyMiw0NXogTTIzLDE1Ljg5M3YyNi4yMTVMNDIuMjI1LDI5TDIzLDE1Ljg5M3oiLz48L3N2Zz4=” height=”200”> # [Vue.js] How to dynamically add an Id to the navbar used on laravel/VueJs SPA On my laravel/vuejs Single Page App, when mounting all my components on a single page (welcome.blade.php file), and inside there is included my navbar blade component @include(‘layouts.navbar’). @extends(‘layouts.app’) @section(‘content’) @include(‘layouts.homenavbar’) <router-view></router-view> @endsection My homepage has a transparent background navbar on a huge banner. However, the same navbar is served to the other pages/components but with a coloured background and white colored fonts. I tried using itenary to check if the incoming route is home and add an id that gives the navbar a transparent background, and if the route isn’t home leave the coloured background navbar like so; <nav class=”navbar navbar-default navbar-static-top” id=>”{ Route::currentRouteName() === ‘home’ ? “home_nav” : ‘’}”> -— </nav> and in my css; #home_nav{ background: iceblue; color: #fff; } However, when i go to other routes i keep getting a transparent navbar unless i reload the page. How do i fix this? 
### Solution :
Try checking window.location.href to get the current route in the mounted() hook of the routed components, and add the CSS class on that condition.
### Solution 2:
You could pass a class as a prop with the route: https://router.vuejs.org/guide/essentials/passing-props.html
# [Vue.js] VueJS how to write path with parameter correctly
I'm following a tutorial and the paths with a parameter are not working.
data() {
return {
id: this.$route.params.id,
element: {
title: ‘’,
description: ‘’,
}
}
},
methods: {
getBook() {
const path = 'http://127.0.0.1:8000/api/v1/books/${this.id}/'
axios.get(path).then((response) => {
  this.element.title = response.data.title
  this.element.description = response.data.description
})
.catch((error) => {
  console.log(error)
})
},
created() {
  this.getBook()
}
What exactly is wrong with what I’m doing?
### Solution :
You need to use backticks (a template literal) so that ${this.id} is actually interpolated:
const path = `http://127.0.0.1:8000/api/v1/books/${this.id}/`
or just concatenate the strings:
const path = 'http://127.0.0.1:8000/api/v1/books/' + this.id + '/'
|
2021-05-07 10:29:04
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.24548131227493286, "perplexity": 11940.284749673052}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00536.warc.gz"}
|
https://mail.mynutra.com/vq5j9/asymptotic-properties-of-ols-a0541a
|
Select Page
"Inferences from parametric is available, then the asymptotic variance of the OLS estimator is In more general models we often can’t obtain exact results for estimators’ properties. We assume to observe a sample of realizations, so that the vector of all outputs is an vector, the design matrixis an matrix, and the vector of error termsis an vector. is . ) Asymptotic Properties of OLS. , asymptotic results will not apply to these estimators. Technical Working OLS Revisited: Premultiply the ... analogy work, so that (7) gives the IV estimator that has the smallest asymptotic variance among those that could be formed from the instruments W and a weighting matrix R. ... asymptotic properties, and then return to the issue of finite-sample properties. mean, For a review of some of the conditions that can be imposed on a sequence to Linear As a consequence, the covariance of the OLS estimator can be approximated Derivation of the OLS estimator and its asymptotic properties Population equation of interest: (5) y= x +u where: xis a 1 Kvector = ( … and is consistently estimated by its sample is. that are not known. where: identification assumption). could be assumed to satisfy the conditions of is consistently estimated CONSISTENCY OF OLS, PROPERTIES OF CONVERGENCE Though this result was referred to often in class, and perhaps even proved at some point, a student has pointed out that it does not appear in the notes. estimator on the sample size and denote by dependence of the estimator on the sample size is made explicit, so that the Derivation of the OLS estimator and its asymptotic properties Population equation of interest: (5) y= x +u where: xis a 1 Kvector = ( 1;:::; K) x 1 1: with intercept Sample of size N: f(x which do not depend on is permits applications of the OLS method to various data and models, but it also renders the analysis of finite-sample properties difficult. convergence in probability of their sample means On the other hand, the asymptotic prop-erties of the OLS estimator must be derived without resorting to LLN and CLT when y t and x t are I(1). matrixis The first assumption we make is that these sample means converge to their estimator of the asymptotic covariance matrix is available. We assume to observe a sample of Colin Cameron: Asymptotic Theory for OLS 1. Title: PowerPoint Presentation Author: Angie Mangels Created Date: 11/12/2015 12:21:59 PM is uncorrelated with Ordinary Least Squares is the most common estimation method for linear models—and that’s true for a good reason.As long as your model satisfies the OLS assumptions for linear regression, you can rest easy knowing that you’re getting the best possible estimates.. Regression is a powerful analysis that can analyze … is a consistent estimator of , for any However, these are strong assumptions and can be relaxed easily by using asymptotic theory. Linear covariance matrix CONSISTENCY OF OLS, PROPERTIES OF CONVERGENCE Though this result was referred to often in class, and perhaps even proved at some point, a student has pointed out that it does not appear in the notes. estimators. 
that is, when the OLS estimator is asymptotically normal and a consistent column that For any other consistent estimator of ; say e ; we have that avar n1=2 ^ avar n1=2 e : 4 , guarantee that a Central Limit Theorem applies to its sample mean, you can go , tothat the OLS estimator, we need to find a consistent estimator of the long-run in steps Under asymptotics where the cross-section dimension, n, grows large with the time dimension, T, fixed, the estimator is consistent while allowing essentially arbitrary correlation within each individual.However, many panel data sets have a non-negligible time dimension. Stack Exchange network consists of 177 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share … 7.2.1 Asymptotic Properties of the OLS Estimator To illustrate, we first consider the simplest AR(1) specification: y t = αy t−1 +e t. (7.1) Suppose that {y t} is a random walk such that … for any The second assumption we make is a rank assumption (sometimes also called OLS Estimator Properties and Sampling Schemes 1.1. PPT – Multiple Regression Model: Asymptotic Properties OLS Estimator PowerPoint presentation | free to download - id: 1bdede-ZDc1Z. by Assumption 4, we have In any case, remember that if a Central Limit Theorem applies to Therefore, in this lecture, we study the asymptotic properties or large sample properties of the OLS estimators. and the fact that, by Assumption 1, the sample mean of the matrix normal By Assumption 1 and by the theorem, we have that the probability limit of 1 Topic 2: Asymptotic Properties of Various Regression Estimators Our results to date apply for any finite sample size (n). requires some assumptions on the covariances between the terms of the sequence I provide a systematic treatment of the asymptotic properties of weighted M-estimators under standard stratified sampling. is a consistent estimator of the long-run covariance matrix When we want to study the properties of the obtained estimators, it is convenient to distinguish between two categories of properties: i) the small (or finite) sample properties, which are valid whatever the sample size, and ii) the asymptotic properties, which are associated with large samples, i.e., when tends to . matrix the to. We show that the BAR estimator is consistent for variable selection and has an oracle property … the estimators obtained when the sample size is equal to residuals: As proved in the lecture entitled The Adobe Flash plugin is … regression - Hypothesis testing discusses how to carry out . Nonetheless, it is relatively easy to analyze the asymptotic performance of the OLS estimator and construct large-sample tests. The OLS estimator is the vector of regression coefficients that minimizes the sum of squared residuals: As proved in the lecture entitled Li… Assumption 3 (orthogonality): For each becomesorwhich , thatconverges can be estimated by the sample variance of the see, for example, Den and Levin (1996). row and is,where and . tends to Thus, in order to derive a consistent estimator of the covariance matrix of is a consistent estimator of matrixThen, What is the origin of Americans sometimes refering to the Second World War "the Good War"? on the coefficients of a linear regression model in the cases discussed above, Am I at risk? vector. How to do this is discussed in the next section. 
This paper studies the asymptotic properties of a sparse linear regression estimator, referred to as broken adaptive ridge (BAR) estimator, resulting from an L 0-based iteratively reweighted L 2 penalization algorithm using the ridge estimator as its initial value. Asymptotic distribution of OLS Estimator. Assumption 1 (convergence): both the sequence In this section we are going to discuss a condition that, together with does not depend on For example, the sequences • In other words, OLS is statistically efficient. -th distribution with mean equal to This paper studies the asymptotic properties of a sparse linear regression estimator, referred to as broken adaptive ridge (BAR) estimator, resulting from an L 0-based iteratively reweighted L 2 penalization algorithm using the ridge estimator as its initial value. Linear the associated The assumptions above can be made even weaker (for example, by relaxing the Assumption 6b: The lecture entitled Consider the linear regression model where the outputs are denoted by , the associated vectors of inputs are denoted by , the vector of regression coefficients is denoted by and are unobservable error terms. we have used the fact that endstream endobj 106 0 obj<> endobj 107 0 obj<> endobj 108 0 obj<> endobj 109 0 obj<> endobj 110 0 obj<> endobj 111 0 obj<> endobj 112 0 obj<> endobj 113 0 obj<> endobj 114 0 obj<>stream Proposition is Asymptotic Efficiency of OLS Estimators besides OLS will be consistent. . the population mean we have used the Continuous Mapping theorem; in step is defined . Therefore, in this lecture, we study the asymptotic properties or large sample properties of the OLS estimators. Proposition vectors of inputs are denoted by If this assumption is satisfied, then the variance of the error terms where, is the vector of regression coefficients that minimizes the sum of squared Asymptotic Properties of OLS estimators. has full rank, then the OLS estimator is computed as equationby Asymptotic Properties of OLS Asymptotic Properties of OLS Probability Limit of from ECOM 3000 at University of Melbourne and in the last step we have applied the Continuous Mapping theorem separately to getBut , Before providing some examples of such assumptions, we need the following an OLS Revisited: Premultiply the ... analogy work, so that (7) gives the IV estimator that has the smallest asymptotic variance among those that could be formed from the instruments W and a weighting matrix R. ... asymptotic properties, and then return to the issue of finite-sample properties. By asymptotic properties we mean properties that are true when the sample size becomes large. A Roadmap Consider the OLS model with just one regressor yi= βxi+ui. There is a random sampling of observations.A3. In this lecture we discuss byand in distribution to a multivariate normal vector with mean equal to We show that the BAR estimator is consistent for variable selection and has an oracle property for parameter estimation. and If Assumptions 1, 2, 3, 4, 5 and 6 are satisfied, then the long-run covariance haveFurthermore, On the other hand, the asymptotic prop-erties of the OLS estimator must be derived without resorting to LLN and CLT when y t and x t are I(1). The OLS estimator is consistent: plim b= The OLS estimator is asymptotically normally distributed under OLS4a as p N( b )!d N 0;˙2Q 1 XX and … We now allow, $X$ to be random variables $\varepsilon$ to not necessarily be normally distributed. 
However, these are strong assumptions and can be relaxed easily by using asymptotic theory. 8.2.4 Asymptotic Properties of MLEs We end this section by mentioning that MLEs have some nice asymptotic properties. "Properties of the OLS estimator", Lectures on probability theory and mathematical statistics, Third edition. In short, we can show that the OLS We have proved that the asymptotic covariance matrix of the OLS estimator by. and I consider the asymptotic properties of a commonly advocated covariance matrix estimator for panel data. In econometrics, Ordinary Least Squares (OLS) method is widely used to estimate the parameters of a linear regression model. is implies by Assumption 3, it Online appendix. In particular, we will study issues of consistency, asymptotic normality, and efficiency.Manyofthe proofs will be rigorous, to display more generally useful techniques also for later chapters. covariance stationary and regression, if the design matrix Ìg'}ºÊ\Ò8æ. in step HT1o0
w~Å©2×ÉJJMªts¤±òï}\$mc}ßùùÛ»ÂèØ»ëÕ GhµiýÕ)/Ú O Ñj)|UWYøtFì regression, we have introduced OLS (Ordinary Least Squares) estimation of by, First of all, we have the sample mean of the For a review of the methods that can be used to estimate Assumptions 1-3 above, is sufficient for the asymptotic normality of OLS infinity, converges follows: In this section we are going to propose a set of conditions that are OLS is consistent under much weaker conditions that are required for unbiasedness or asymptotic normality. In this case, we will need additional assumptions to be able to produce $\widehat{\beta}$: $\left\{ y_{i},x_{i}\right\}$ is a … Simple, consistent asymptotic variance matrix estimators are proposed for a broad class of problems. adshelp[at]cfa.harvard.edu The ADS is operated by the Smithsonian Astrophysical Observatory under NASA Cooperative Agreement NNX16AC86A and covariance matrix equal to. Now, By Assumption 1 and by the by the Continuous Mapping theorem, the long-run covariance matrix √ find the limit distribution of n(βˆ probability of its sample To the entry at the intersection of its Óö¦ûÃèn°x9äÇ}±,K¹]N,J?§?§«µßØ¡!,Ûmß*{¨:öWÿ[+o! and Let us make explicit the dependence of the Chebyshev's Weak Law of Large Numbers for estimators on the sample size and denote by If Assumptions 1, 2, 3, 4 and 5 are satisfied, and a consistent estimator When we want to study the properties of the obtained estimators, it is convenient to distinguish between two categories of properties: i) the small (or finite) sample properties, which are valid whatever the sample size, and ii) the asymptotic properties, which are associated with large samples, i.e., when tends to . at the cost of facing more difficulties in estimating the long-run covariance The OLS estimator βb = ³P N i=1 x 2 i ´−1 P i=1 xiyicanbewrittenas bβ = β+ 1 N PN i=1 xiui 1 N PN i=1 x 2 i. is uncorrelated with byTherefore, an ), and vector, the design followswhere: where the outputs are denoted by Paper Series, NBER. Taboga, Marco (2017). for any Kindle Direct Publishing. and asymptotic covariance matrix equal hypothesis tests then, as we have used the hypothesis that Proposition in distribution to a multivariate normal It is then straightforward to prove the following proposition. Note that the OLS estimator can be written as because and we take expected values, we ªÀ ±Úc×ö^!ܰ6mTXhºU#Ð1¹ºMn«²ÐÏQìu8¿^Þ¯ë²dé:yzñ½±5¬Ê ÿú#EïÜ´4V?¤;Ë>øËÁ!ðÙâ¥ÕØ9©ÐK[#dI¹Ïv' ~ÖÉvκUêGzò÷sö&"¥éL|&ígÚìgí0Q,i'ÈØe©ûÅݧ¢ucñ±c׺è2ò+À ³]y³ Linear regression models have several applications in real life. -th the coefficients of a linear regression model. Theorem. by, This is proved as each entry of the matrices in square brackets, together with the fact that that the sequences are , and covariance matrix equal to Continuous Mapping we have used the Continuous Mapping Theorem; in step is asymptotically multivariate normal with mean equal to if we pre-multiply the regression Asymptotic and finite-sample properties of estimators based on stochastic gradients Panos Toulis and Edoardo M. Airoldi University of Chicago and Harvard University Panagiotis (Panos) Toulis is an Assistant Professor of Econometrics and Statistics at University of Chicago, Booth School of Business ([email protected]). The estimation of Under Assumptions 1, 2, 3, and 5, it can be proved that and and of the OLS estimators. Note that, by Assumption 1 and the Continuous Mapping theorem, we Assumption 5: the sequence bywhich termsis as proved above. are unobservable error terms. 
If Assumptions 1, 2, 3 and 4 are satisfied, then the OLS estimator However, under the Gauss-Markov assumptions, the OLS estimators will have the smallest asymptotic variances. are orthogonal, that . the long-run covariance matrix the sample mean of the . In the lecture entitled where OLS estimator is denoted by . thatconverges Proposition OLS estimator (matrix form) 2. matrix correlated sequences, Linear which Thus, by Slutski's theorem, we have linear regression model. . We now consider an assumption which is weaker than Assumption 6. • The asymptotic properties of estimators are their properties as the number of observations in a sample becomes very large and tends to infinity. does not depend on If Assumptions 1, 2 and 3 are satisfied, then the OLS estimator under which assumptions OLS estimators enjoy desirable statistical properties meanto is. Not even predeterminedness is required. is consistently estimated This assumption has the following implication. is consistently estimated First of all, we have Most of the learning materials found on this website are now available in a traditional textbook format. Chebyshev's Weak Law of Large Numbers for is. Assumption 6: is in distribution to a multivariate normal random vector having mean equal to We say that OLS is asymptotically efficient. The next proposition characterizes consistent estimators by, First of all, we have Under the asymptotic properties, the properties of the OLS estimators depend on the sample size. is orthogonal to Assumption 2 (rank): the square matrix has full rank (as a consequence, it is invertible). In Section 3, the properties of the ordinary least squares estimator of the identifiable elements of the CI vector obtained from a contemporaneous levels regression are examined. and Lecture 6: OLS Asymptotic Properties Consistency (instead of unbiasedness) First, we need to define consistency. fact. I provide a systematic treatment of the asymptotic properties of weighted M-estimators under standard stratified sampling. Suppose Wn is an estimator of θ on a sample of Y1, Y2, …, Yn of size n. Then, Wn is a consistent estimator of θ if for every e > 0, P(|Wn - θ| > e) → 0 as n → ∞. of The results of this paper confirm this intuition. matrix hypothesis that Efficiency of OLS Gauss-Markov theorem: OLS estimator b 1 has smaller variance than any other linear unbiased estimator of β 1. is uncorrelated with 1 Asymptotic distribution of SLR 1. consistently estimated If Assumptions 1, 2, 3, 4, 5 and 6b are satisfied, then the long-run With Assumption 4 in place, we are now able to prove the asymptotic normality is a consistent estimator of thatBut residualswhere. vector of regression coefficients is denoted by satisfies. Haan, Wouter J. Den, and Andrew T. Levin (1996). to the population means Hot Network Questions I want to travel to Germany, but fear conscription. In statistics, ordinary least squares (OLS) is a type of linear least squares method for estimating the unknown parameters in a linear regression model. We see from Result LS-OLS-3, asymptotic normality for OLS, that avar n1=2 ^ = lim n!1 var n1=2 ^ = (plim(X0X=n)) 1 ˙2 u Under A.MLR1-2, A.MLR3™and A.MLR4-5, the OLS estimator has the smallest asymptotic variance. Section 8: Asymptotic Properties of the MLE In this part of the course, we will consider the asymptotic properties of the maximum likelihood estimator. that. 
satisfies a set of conditions that are sufficient to guarantee that a Central For the validity of OLS estimates, there are assumptions made while running linear regression models.A1. The conditional mean should be zero.A4. Limit Theorem applies to its sample sufficient for the consistency regression - Hypothesis testing. Let us make explicit the dependence of the thatFurthermore,where mean, Proposition 1. of OLS estimators. is consistently estimated by, Note that in this case the asymptotic covariance matrix of the OLS estimator OLS estimator solved by matrix. . . As the asymptotic results are valid under more general conditions, the OLS and Simple, consistent asymptotic variance matrix estimators are proposed for a broad class of problems. matrix. The main The third assumption we make is that the regressors realizations, so that the vector of all outputs. Asymptotic Normality Large Sample Inference t, F tests based on normality of the errors (MLR.6) if drawn from other distributions ⇒ βˆ j will not be normal ⇒ t, F statistics will not have t, F distributions solution—use CLT: OLS estimators are approximately normally … Important to remember our assumptions though, if not homoskedastic, not true. needs to be estimated because it depends on quantities Under Assumptions 3 and 4, the long-run covariance matrix to the lecture entitled Central Limit Proposition population counterparts, which is formalized as follows. by Assumptions 1, 2, 3 and 5, We see from Result LS-OLS-3, asymptotic normality for OLS, that avar n1=2 ^ = lim n!1 var n1=2 ^ = (plim(X0X=n)) 1 ˙2 u Under A.MLR1-2, A.MLR3™and A.MLR4-5, the OLS estimator has the smallest asymptotic variance. satisfies a set of conditions that are sufficient for the convergence in For any other consistent estimator of … • Some texts state that OLS is the Best Linear Unbiased Estimator (BLUE) Note: we need three assumptions ”Exogeneity” (SLR.3), that their auto-covariances are zero on average). covariance matrix In this case, we might consider their properties as →∞. Asymptotic distribution of the OLS estimator Summary and Conclusions Assumptions and properties of the OLS estimator The role of heteroscedasticity 2.9 Mean and Variance of the OLS Estimator Variance of the OLS Estimator I Proposition: The variance of the ordinary least squares estimate is var ( b~) = (X TX) 1X X(X X) where = var (Y~). Furthermore, . 2.4.1 Finite Sample Properties of the OLS and ML Estimates of , Under the asymptotic properties, the properties of the OLS estimators depend on the sample size. is a consistent estimator of https://www.statlect.com/fundamentals-of-statistics/OLS-estimator-properties. ( In short, we can show that the OLS Asymptotic Properties of OLS and GLS - Volume 5 Issue 1 - Juan J. Dolado in the last step, we have used the fact that, by Assumption 3, . are orthogonal to the error terms we have used Assumption 5; in step iswhere View Asymptotic_properties.pdf from ECO MISC at College of Staten Island, CUNY. As in the proof of consistency, the Assumption 4 (Central Limit Theorem): the sequence the OLS estimator obtained when the sample size is equal to Estimation of the variance of the error terms, Estimation of the asymptotic covariance matrix, Estimation of the long-run covariance matrix. 
is consistently estimated matrix, and the vector of error correlated sequences, which are quite mild (basically, it is only required is the same estimator derived in the The OLS estimator and covariance matrix equal we know that, by Assumption 1, theorem, we have that the probability limit of Continuous Mapping satisfy sets of conditions that are sufficient for the , see how this is done, consider, for example, the Usually, the matrix has been defined above. • The asymptotic properties of estimators are their properties as the number of observations in a sample becomes very large and tends to infinity. . Proposition OLS is consistent under much weaker conditions that are required for unbiasedness or asymptotic normality. isand. such as consistency and asymptotic normality. of the long-run covariance matrix and the sequence 8 Asymptotic Properties of the OLS Estimator Assuming OLS1, OLS2, OLS3d, OLS4a or OLS4b, and OLS5 the follow-ing properties can be established for large samples. and non-parametric covariance matrix estimation procedures." 2.4.1 Finite Sample Properties of the OLS … … The linear regression model is “linear in parameters.”A2. an
|
2021-02-28 01:29:36
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8473192453384399, "perplexity": 571.5046361550208}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178359624.36/warc/CC-MAIN-20210227234501-20210228024501-00124.warc.gz"}
|
https://gitter.im/siddhartha-gadgil/GeometryTopology?at=56fa53c6d9b73e635f6737e9
|
## Where communities thrive
• Join over 1.5M+ people
• Join over 100K+ communities
• Free without limits
##### Activity
DivakaranDivakaran
@DivakaranDivakaran
any reference for the additivity i(a, c) + i(b, c) = i(ab, c) ?
You mean to show it? If your a and b are based geodesics, this is elementary.
Just count points.
DivakaranDivakaran
@DivakaranDivakaran
ok. tha seems straight forward
that*
i had missed the assumption "based"
The point is that based approximate actual
In the hyperbolic case, based are approximations with uniformly bounded additive errors.
We need that usually geodesics are not almost parallel, as Moira, me and @arpan.into used
arpaninto
@arpaninto
The paper of Chas and Lalley deals with the case when the surface has non-empty boundary. Does the result hold for closed case?
We can fix a standard generating set and ask the same question.
Generally combinatorial group theory finds bounded case easier, but hyperbolic geometry should work for both.
Just a gitter tip: ctrl-/ toggles chat and compose mode - compose mode is for many lines.
The interesting question here seems to be intersections for random geodesics.
arpaninto
@arpaninto
So we have the following question: given two random geodesics, what is the limiting distribution of the intersection number when the limit is taken over 1) hyperbolic length 2) word length?
What do you mean precisely.
Is it a uniform distribution over words with length bounded above by some number.
arpaninto
@arpaninto
Let $L$ be a positive number. Let $S_L$ be the curves with length (word or hyperbolic) less than $L$. Pick two geodesics randomly from $S_L$. Let the intersection number be $i_L$. What happens as $L \rightarrow \infty$?
We should try to formulate this in a general way.
We do need:
• A sequence of distributions so that the probability of picking geodesics of length below a fixed bound goes to 0.
• Choice is uniform in some appropriate sense.
I see the question.
There are interesting variants.
• We do have a completion of the set of geodesics (with multiplicity), namely the space of geodesic laminations.
• Perhaps we just need distributions that converge to a uniform distribution on these.
• I don't know exactly what uniform means here, but it should be satisfied by the limit of curves.
It is probably best though to prove in one case. The methods will naturally generalize.
The natural starting case is uniform on geometric length.
arpaninto
@arpaninto
Lalley has done this count for the self-intersection number. http://arxiv.org/abs/1111.2060
I was trying to read this paper but he proved them in general for negative curvature. The techniques involved there are completely new to me.
Are these ergodic theoretic?
Ergodicity and mixing for geodesic flows are natural to use. But one may get something with direct geometry.
My impression of Lalley's work (from brief reading) is that he uses a little geometry and then squeezes a lot out of it using sophisticated probability.
Looking at the paper briefly, I do still feel that we should extract more from the geometric description of intersection numbers, in terms of lifts of curves, and use this.
arpaninto
@arpaninto
Thank you Sir. I should probably concentrate on the first part (the computation of geometric intersection number for random curve of bounded length).
Maybe random word length is more direct, but geometric length is similar.
Main idea:
• Pick an element of word (or geometric) length about $N\cdot L$.
• This is the sum of $N$ elements with word (or geometric) length about $L$.
• Further, these segments are themselves approximately independent and random.
• The intersection number is close to the sum of intersection numbers of pairs of these segments.
• We use a central limit theorem.
arpaninto
@arpaninto
(I don't know how helpful this will be) By the following result of Dylan Thurston, It is enough to consider only (collection of) simple closed curves: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.112.8555&rep=rep1&type=pdf
The ideas in that paper may be useful, even if the final result cannot be simply plugged in.
arpaninto
@arpaninto
Try $\alpha$
Yes, $\alpha$ works inline too
Sorry, $\alpha$
arpaninto
@arpaninto
Sir about the third point of your main Idea: "Further, these segments are themselves approximately independent and random." Can you explain it a little bit?
We can choose segments of length approximately $NL$ in two ways:
• We just choose among those of this length with equal probability, or
• independently choose $N$ segment of length approximately $L$ and take their concatenation
I claim that the two methods give approximately the same distribution
Of course everything gets complicated with lengths below a bound, rather than close to some fixed number.
Also, lengths are not additive - only approximately additive with high probability.
arpaninto
@arpaninto
In Theorem 1.4 of Lalleys paper (http://arxiv.org/abs/1111.2060) he already computed the distribution of the geometric intersection number between two randomly chosen geodesics (bounded by geometric length).
arpaninto
@arpaninto
It states that the proof is similar to the self-intersecting case.
arpaninto
@arpaninto
He also states that "The methods of this paper can
be adapted to show that the main result of Chas extends to compact surfaces without
boundary and with genus $g \geq 2$."
$g\geq 2$
ritwik371
@ritwik371
Hi everyone, this is Ritwik. I would also like to join the discussion of discussing Moira Chas and Steve Lalley's result on the statistics of self intersection of loops. One idea I had is if we can prove a large deviation principle. I wrote up a few pages on that question; if you are interested, I can send you the write up.
arpaninto
@arpaninto
Hi Ritwik, I am arpan. Can you send the write-up to me. My email id is : [email protected]
|
2022-09-26 02:10:02
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8501481413841248, "perplexity": 1177.8213435759185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334644.42/warc/CC-MAIN-20220926020051-20220926050051-00100.warc.gz"}
|
http://mathhelpforum.com/advanced-algebra/50528-abstract-algebra-print.html
|
# Abstract Algebra
• September 24th 2008, 06:25 PM
Juancd08
Abstract Algebra
Conversely if P is a partition of a set S, then there is some equivalence relation R on S such that P is the set of all equivalence classes.
I can prove the reverse but I need help going through it with this direction.
• September 24th 2008, 06:32 PM
ThePerfectHacker
Quote:
Originally Posted by Juancd08
Conversely if P is a partition of a set S, then there is some equivalence relation R on S such that P is the set of all equivalence classes.
I can prove the reverse but I need help going through it with this direction.
Define $E\subseteq S\times S$ so that $(a,b) \in E$ if and only if $a,b$ lie in the same partition set.
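A sketch of how this hint finishes the argument (not part of the original reply, and assuming the usual definition of a partition as a family of nonempty, pairwise disjoint sets whose union is S): E is reflexive because every $a\in S$ lies in some cell of P, so $(a,a)\in E$; it is symmetric because "lie in the same cell" is symmetric in $a$ and $b$; and it is transitive because if $a,b\in P_1$ and $b,c\in P_2$ then $b\in P_1\cap P_2$, so $P_1=P_2$ by disjointness and hence $(a,c)\in E$. Finally, for $a$ in a cell $P_i$ the equivalence class of $a$ is exactly $P_i$, so the set of equivalence classes of E is P.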
|
2015-05-26 04:26:35
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8278509974479675, "perplexity": 193.26915187127923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207928757.11/warc/CC-MAIN-20150521113208-00300-ip-10-180-206-219.ec2.internal.warc.gz"}
|
https://hackage.haskell.org/package/optics-0.3/docs/Optics.html
|
optics-0.3: Optics as an abstract interface
Optics
Description
This library makes it possible to define and use Lenses, Traversals, Prisms and other optics, using an abstract interface.
Synopsis
Introduction
Read on for a general introduction to the notion of optics, or if you are familiar with them already, you may wish to jump ahead to the "What is the abstract interface?" section below in Optics.
What are optics?
An optic is a first-class, composable notion of substructure. As a highly abstract concept, the idea can be approached by considering several examples of optics and understanding their common features. What are the possible relationships between some "outer" type S and some "inner" type A?
(For simplicity we will initially ignore the possibility of type-changing update operations, which change A to some other type B and hence change S to some other type T. These are fully supported by the library, at the cost of some extra type parameters.)
Optics.Iso: isomorphisms
First, S and A may be isomorphic, i.e. there exist mutually inverse functions to convert S -> A and A -> S. This is a somewhat trivial notion of substructure: A is just another way to represent "all of S".
An Iso' S A is an isomorphism between S and A, with the conversion functions given by view and review. For example, given
newtype Age = Age Int
there is an isomorphism between the newtype and its representation:
coerced :: Iso' Age Int
view coerced :: Age -> Int
review coerced :: Int -> Age
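For instance, with the monomorphic signature above (a sketch; the values are invented for illustration and the results are shown informally, since Age has no Show instance here):
view coerced (Age 30)   -- 30
review coerced 21       -- Age 21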
Optics.Lens: generalised fields
If S is a simple product type (i.e. it has a single constructor with one or more fields), A may be a single field of S. More generally, A may be "part of S" in the sense that S is isomorphic to the pair (A,C) for some type C representing the other fields. In this case, there is a projection function S -> A for getting the value of the field, but the update function (setting the value of the field) requires the "rest of S" and so has type A -> S -> S.
A Lens' S A captures the structure of A being a field of S, with the projection function given by view and the update function by set. For example, for the pair type (X,Y) there are lenses for each component:
_1 :: Lens' (X,Y) X
_2 :: Lens' (X,Y) Y
view _1 :: (X,Y) -> X
set _2 :: Y -> (X,Y) -> (X,Y)
(Note that the update function could arguably have the more precise type A -> C -> S, since we do not expect the result of setting a field to depend on the previous value of the field. However, making C explicit turns out to be awkward, so instead we impose laws to require that the result of setting the field depends only on C, and, more generally, that the lens behaves as we would expect.)
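Concretely, the three laws imposed on a (type-preserving) lens l, sketched here using the view and set functions above, are:
view l (set l v s) ≡ v                -- you get back what you set
set l (view l s) s ≡ s                -- putting back what you got changes nothing
set l v' (set l v s) ≡ set l v' s     -- the second set wins
(These are the laws that Optics.Lens refers to as GetPut, PutGet and PutPut.)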
Optics.Prism: generalised constructors
If S is a simple sum type (i.e. it has one or more constructors, each with a single field), A may be the type of the field for a single constructor of S. More generally, S may be isomorphic to the disjoint union Either D A for some type D representing the other constructors. In this case, projecting out A from S (pattern-matching on the constructor) may fail, so it has type S -> Maybe A. In the reverse direction we have a function of type A -> S representing the constructor itself.
A Prism' S A captures the structure of A being a constructor of S, with the partial projection function given by preview and the constructor function given by review. For example, for the type Either X Y there is a prism for each constructor:
_Left :: Prism' (Either X Y) X
_Right :: Prism' (Either X Y) Y
preview _Left :: Either X Y -> Maybe X
review _Right :: Y -> Either X Y
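For instance (a sketch; values invented for illustration):
preview _Left (Left 'x' :: Either Char Bool)    -- Just 'x'
preview _Left (Right True :: Either Char Bool)  -- Nothing
review _Right False :: Either Char Bool         -- Right False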
Optics.Traversal: multiple substructures
Alternatively, S may "contain" the substructure A a variable number of times. In this case, the projection function extracts the (possibly zero or many) elements so has type S -> [A], while the update function may take different values for different elements so has type (A -> A) -> S -> S (though in fact more general formulations are possible).
A Traversal' S A captures the structure of A being contained in S perhaps multiple times, with the list of values given by toListOf and the update function given by over. For example, for the type Maybe X there is a traversal that may return zero or one element:

traversed :: Traversal' (Maybe X) X
toListOf traversed :: Maybe X -> [X]
over traversed :: (X -> X) -> Maybe X -> Maybe X
(In fact, traversals of at most one element are known as affine traversals, see Optics.AffineTraversal.)
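For instance (a sketch; values invented for illustration):
over traversed (*2) [1,2,3]                 -- [2,4,6]
over traversed (+1) (Nothing :: Maybe Int)  -- Nothing
toListOf traversed (Just 'x')               -- "x"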
In general
So far we have seen four different kinds of optic or "notions of substructure", and many more are possible. Observe the important properties they have in common:
• There are subtyping relationships between different optic kinds. Any isomorphism is trivially a lens and a prism (with no other fields or constructors, respectively). Any lens is a traversal (where the list of elements is always a singleton list), and any prism is also a traversal (where there will be zero or one element depending on whether the constructor matches). This was implicit in the fact that we used the same operators in multiple cases: view gives the projection function of both an isomorphism and a lens, but cannot be applied to a traversal.
• Optics can be composed. If S is isomorphic to U and U is isomorphic to A then S is isomorphic to A, and similarly for other optic kinds.
• Composition and subtyping interact: a lens and a prism can be composed, by first thinking of them as traverals using the subtyping relationship. That is, if S has a field U, and U has a constructor A, then S contains zero or one As that we can pick out with a traversal (but in general there is neither a lens from S to A nor a prism).
• Each optic kind can be described by certain operations it enables. For example lenses support projection and update, while prisms support partial projection and construction.
• Optics are subject to laws, which are necessary for the operations to make sense.
The point of the optics library is to capture this common pattern.
What is the abstract interface?
A key principle behind this library is the belief that optics are useful as an abstract concept, and that the purpose of types is to capture abstract concepts and make them useful. The programmer using optics should be able to think in terms of the abstract interface, rather than the details of the implementation, and implementation choices should (as far as possible) not dictate the interface.
Each optic kind is identified by a "tag type" (such as A_Lens), which is an empty data type. The type of the actual optics (such as Lens) is obtained by applying the Optic newtype wrapper to the tag type.
type Lens s t a b = Optic A_Lens NoIx s t a b
type Lens' s a = Optic' A_Lens NoIx s a
NoIx as the second parameter to Optic indicates that the optic is not indexed. See the "Indexed optics" section below in Optics for further discussion of indexed optics.
The details of the internal implementation of Optic are hidden behind an abstraction boundary, so that the library can be used without needing to think about the particular implementation choices.
Specification of optics interfaces
Each different kind of optic is documented in a separate module describing its abstract interface, in a standard format with at least formation, introduction, elimination, and well-formedness sections. See "Optic kinds" below in Optics for a list of these modules.
• The formation sections contain type definitions. For example Optics.Lens defines:
-- Type synonym for a type-modifying lens.
type Lens s t a b = Optic A_Lens NoIx s t a b
• The introduction sections describe the canonical way to construct each particular optic. Continuing with a Lens example:
-- Build a lens from a getter and a setter.
lens :: (s -> a) -> (s -> b -> t) -> Lens s t a b
• Correspondingly, the elimination sections show how you can destruct the optic into the pieces from which it was constructed.
-- A Lens is a Setter and a Getter, therefore you can specialise types to obtain
view :: Lens s t a b -> s -> a
set :: Lens s t a b -> b -> s -> t
• The computation rules tie introduction and elimination forms together. These rules are automatically fulfilled by the library (for well-formed optics).
view (lens f g) s ≡ f s
set (lens f g) a s ≡ g s a
• The well-formedness sections describe the laws that each optic should obey. As far as possible, all optics provided by the library are well-formed, but in some cases this depends on invariants that cannot be expressed in types. Ill-formed optics might behave differently from what the computation rules specify.
For example, a Lens should obey three laws, known as GetPut, PutGet and PutPut. See the Optics.Lens module for their definitions. The user of the lens introduction form must ensure that these laws are satisfied.
• Some optic kinds have additional introduction forms, additional elimination forms or combinators sections, which give alternative ways to create and use optics of that kind. In principle these are expressible in terms of the canonical introduction and elimination rules.
• The subtyping section gives the "tag type" (such as A_Lens), which in particular is accompanied by Is instances that define the subtyping relationship discussed in the following section.
Subtyping
There is a subtyping relationship between optics, implemented using typeclasses. The Is typeclass captures the property that one optic kind can be used as another, and the castOptic function can be used to explicitly cast between optic kinds. Is forms a partial order, represented in the graph below. For example, a lens can be used as a traversal, so there are arrows from Lens to Traversal (via AffineTraversal) and there is an instance of Is A_Lens A_Traversal.
Introduction forms (constructors) return a concrete optic kind, while elimination forms (destructors) are generally polymorphic in the optic kind they accept. This means that it is not normally necessary to explicitly cast between optic kinds. For example, we have
view :: Is k A_Getter => Optic' k is s a -> s -> a
so view can be used with isomorphisms or lenses, as these can be converted to a Getter.
If an explicit cast is needed, you can use castOptic. This arises when you use optics of different kinds in a context that requires them to have the same type. For example [folded, traversed] gives a type error (since A_Traversal is not A_Fold) but [folded, castOptic traversed] works.
Note that the optic kind module (e.g. Optics.Lens) does not list all the ways to construct or use that particular optic kind. For example, since a Lens is also a Traversal, a Fold and so on, you can use traverseOf, preview and many other combinators with lenses.
Subtype hierarchy
This graph gives an overview of the optic kinds and their subtype relationships:
In addition to the optic kinds included in the diagram, there are also indexed variants such as IxLens, IxGetter, IxAffineTraversal, IxTraversal, IxAffineFold, IxFold and IxSetter. These are explained in more detail in the "Indexed optics" section below in Optics.
Composition
Since optics are not functions, they cannot be composed with the (.) operator. Instead there is a separate composition operator (%). The composition operator returns the common supertype of its arguments, or generates a type error if the composition does not make sense.
The optic kind resulting from a composition is the least upper bound (join) of the optic kinds being composed, if it exists. The Join type family computes the least upper bound given two optic kind tags. For example the Join of a Lens and a Prism is an AffineTraversal.
>>> :kind! Join A_Lens A_Prism
Join A_Lens A_Prism :: OpticKind
= An_AffineTraversal
The join does not exist for some pairs of optic kinds, which means that they cannot be composed. For example there is no optic kind above both Setter and Fold:
>>> :kind! Join A_Setter A_Fold
Join A_Setter A_Fold :: OpticKind
= (TypeError ...)
>>> :t mapped % folded
...
...A_Setter cannot be composed with A_Fold
...
The (.) operator from Control.Category cannot be used to compose optics either, because it would not support type-changing optics or composing optics of different kinds.
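To make the Join concrete, here is a small sketch (the value and its type are invented for illustration) of a Lens composed with a Prism being used as an AffineTraversal:
example :: (Either Char Bool, String)
example = (Left 'x', "rest")

-- preview (_1 % _Left) example    == Just 'x'
-- set (_1 % _Left) 'y' example    == (Left 'y', "rest")
-- over (_1 % _Right) not example  == example   (no Right target, so nothing changes)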
Comparison with lens
The lens package is the best known Haskell library for optics, and established many of the foundations on which the optics package builds (not least in quite a bit of code having been directly ported). It defines optics based on the van Laarhoven representation, where each optic kind is introduced as a transparent type synonym for a complex polymorphic type, for example:
type Lens s t a b = forall f. Functor f => (a -> f b) -> s -> f t
In contrast, optics tries to preserve an abstraction boundary between the interface of optics and their implementation. Optic kinds are expressed directly in the types, as Optic is an opaque newtype:
type Lens s t a b = Optic A_Lens NoIx s t a b
The choice of representation of Optic is then an implementation detail, not essential for understanding the library. (In fact, optics uses the profunctor representation rather than the van Laarhoven representation; this affects the optic kinds and operations that can be conveniently supported, but not the essence of the design.)
Our design choice to use opaque rather than transparent abstractions leads to various consequences, both positive and negative, which are explored in the following subsections.
Since the interface is deliberately chosen rather than to some extent determined by the implementation, we are free to choose a more restricted interface where doing so leads to conceptual simplicity. For example, in lens, the view function can be used with a Fold provided the result type has a Monoid instance, and the multiple targets of the Fold will be combined monoidally. This behaviour can be confusing, so in optics a Fold cannot be silently used as a Getter, and we prefer to have view work on Getters and define a separate foldOf operator for use on Folds. (But the gview function is available for users who may prefer otherwise.)
In general, opaque abstractions lead to better results from type inference (the optic kind is preserved in the inferred type):
>>> :t traversed % to not
traversed % to not
:: Traversable t => Optic A_Fold '[] (t Bool) (t Bool) Bool Bool
Error messages are domain-specific:
>>> set (to fst)
...
...A_Getter cannot be used as A_Setter
...
Composing incompatible optics yields a sensible error:
>>> sets map % to not
...
...A_Setter cannot be composed with A_Getter
...
Since Optic is a rank-1 type, it is easy to store optics in a datastructure:
>>> :t [folded, backwards_ folded]
[folded, backwards_ folded] :: Foldable f => [Fold (f a) a]
It is possible to define aliases for optics without the monomorphism restriction spoiling the fun:
>>> let { myoptic = _1; p = ('x','y') } in (view myoptic p, set myoptic 'c' p)
('x',('c','y'))
Finally, having an abstract interface gives more freedom of choice in the internal implementation. If there is a compelling reason to switch to an alternative representation, one can in principle do so without changing the interface.
Since Optic is a newtype, other libraries that wish to define optics must depend upon its definition. In contrast, with a transparent representation, and since the van Laarhoven representations of lenses and traversals depend only on definitions from base, it is possible for libraries to define them without any extra library dependencies (although this does not hold for more advanced optic kinds such as prisms or indexed optics). To address this, the present library is split into a package optics-core, which has a minimal dependency footprint intended for use in libraries, and the "batteries-included" optics package for use in applications.
It is something of an amazing fact that the composition operator for transparent optics is just function composition. Moreover, since Haskell uses (.) for function composition, lens is able to support a pseudo-OOP syntax. In contrast, optics must use a different composition operator (%). Optic does not quite form a Category, thanks to type-changing optics.
Rather than emerging naturally from the definitions, opportunities for polymorphism have to be identified in advance and explicitly introduced using type classes. Similarly, the set of optic kinds and the subtyping relationships between them must be fixed in advance, and cannot be added to in downstream libraries. Thus in a sense the opaque approach is more restrictive than the transparent one. There are cases in lens where the types work out nicely and permit abstraction-breaking-but-convenient shortcuts, such as applying a Traversal as a traverse-like function, whereas optics requires a call to traverseOf.
More specific differences
The sections above set out the major conceptual differences from the lens package, and their advantages and disadvantages. Some more specific design differences, which may be useful when comparing the libraries or porting code between them, are listed below. This list is no doubt incomplete.
• The composition operator is (%) rather than (.) and is defined as infixl 9 instead of infixr 9.
• Fewer operators are provided, and some of them are not exported from the main Optics module. Import Optics.State.Operators if you want them.
• The view function and corresponding (^.) operator work only for Getters and have a more restricted type. The equivalent for Folds is foldOf, and you can use preview for AffineFolds. Alternatively you can use gview which is more compatible with view from lens, but it uses a type class to choose between view, preview and foldOf.
• Indexed optics are rather different, as described in the "Indexed optics" section below in Optics. All ordinary optics are "index-preserving", so there is no separate notion of an index-preserving optic.
• each provides an indexed traversal (an IxTraversal), rather than the unindexed Traversal provided by lens.
• firstOf from lens is replaced by headOf.
• concatOf from lens is omitted in favour of the more general foldOf.
• set' is a strict version of set, not set for type-preserving optics.
• Numbered lenses for accessing fields of tuples positionally are provided only up to _9, rather than _19.
• There are four variants of backwards for (indexed) Traversals and Folds: backwards, backwards_, ibackwards and ibackwards_.
• There is no Traversal1 and Fold1.
• There are affine variants of (indexed) traversals and folds (AffineTraversal, AffineFold, IxAffineTraversal and IxAffineFold). An affine optic targets at most one value. Composing a Lens with a Prism produces an AffineTraversal, so for example matching (_1 % _Left) is well-typed.
• Functions ifiltered and indices are defined as optic combinators due to restrictions of internal representation.
• We can't use traverse as an optic directly. Instead there is a Traversal called traversed. Similarly traverseOf must be used to apply a Traversal, rather than simply using it as a function.
• The re combinator produces a different optic kind depending on the kind of the input Iso, for example Review reverses to Getter while a reversed Iso is still an Iso. Thus there is no separate from combinator for reversing Isos.
• singular (isingular for indexed optics) doesn't produce a partial lens that might fail with a runtime error, but an affine traversal.
• <> cannot be used to combine Folds, so summing should be used instead.
Using the library
To get started, you can just add optics as a dependency to your .cabal file, and then:
import Optics
If you are writing a library for which it is important to keep the dependency footprint minimal, you may wish to depend upon optics-core instead (and perhaps optics-extra or optics-th), and then:
import Optics.Core
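As a minimal sketch of what a first application module might look like (the record, its fields and main are invented for illustration; makeLenses generates optics for fields whose names begin with an underscore):
{-# LANGUAGE TemplateHaskell #-}
module Main where

import Optics
import Optics.TH (makeLenses)

data Person = Person { _name :: String, _age :: Int } deriving Show

makeLenses ''Person   -- generates name :: Lens' Person String and age :: Lens' Person Int

main :: IO ()
main = do
  let ada = Person { _name = "Ada", _age = 36 }
  putStrLn (view name ada)         -- "Ada"
  print (set age 37 ada)           -- Person {_name = "Ada", _age = 37}
  print (over name (++ "!") ada)   -- Person {_name = "Ada!", _age = 36}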
Optic kinds
module Optics.Iso
Optics utilities
At
An AffineTraversal to traverse a key in a map or an element of a sequence:
>>> preview (ix 1) ['a','b','c']
Just 'b'
a Lens to get, set or delete a key in a map:
>>> set (at 0) (Just 'b') (Map.fromList [(0, 'a')])
fromList [(0,'b')]
and a Lens to insert or remove an element of a set:
>>> IntSet.fromList [1,2,3,4] & contains 3 .~ False
fromList [1,2,4]
module Optics.At
Cons
Prisms to match on the left or right side of a list, vector or other sequential structure:
>>> preview _Cons "abc"
Just ('a',"bc")
>>> preview _Snoc "abc"
Just ("ab",'c')
Each
An IxTraversal for each element of a (potentially monomorphic) container.
>>> over each (*10) (1,2,3)
(10,20,30)
Empty
A Prism for a container type that may be empty.
>>> isn't _Empty [1,2,3]
True
Re
Some optics can be reversed with re. This is mainly useful to invert Isos:
>>> let _Identity = iso runIdentity Identity
>>> view (_1 % re _Identity) ('x', "yz")
Identity 'x'
Yet we can use a Lens as a Review too:
>>> review (re _1) ('x', "yz")
'x'
In the following diagram, red arrows illustrate how re transforms optics. The ReversedLens and ReversedPrism optic kinds are backwards versions of Lens and Prism respectively, and are present so that re . re does not change the optic kind.
module Optics.Re
ReadOnly
Defines getting, which turns a read-write optic into its read-only counterpart.
Mapping
Defines mapping through Functors
View
A generalized view function gview, which returns a single result (like view) if the optic is a Getter, a Maybe result (like preview) if the optic is an AffineFold, or a monoidal summary of results (like foldOf) if the optic is a Fold. In addition, it works for any MonadReader, not just (->).
>>> gview _1 ('x','y')
'x'
>>> gview _Left (Left 'x')
Just 'x'
>>> gview folded ["a", "b"]
"ab"
>>> runReaderT (gview _1) ('x','y') :: IO Char
'x'
This module is experimental. Using the more type-restricted variants is encouraged where possible.
Zoom
A class to zoom in, changing the State supplied by many different monad transformers, potentially quite deep in a monad transformer stack.
>>> flip execState ('a','b') $ zoom _1 $ equality .= 'c'
('c','b')
Indexed optics
The optics library also provides indexed optics, which provide an additional index value in mappings:
over :: Setter s t a b -> (a -> b) -> s -> t
iover :: IxSetter i s t a b -> (i -> a -> b) -> s -> t
Note that there aren't any laws about indices. Especially in compositions the same index may occur multiple times.
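For example (a sketch using the list instances, whose index type is Int; values invented for illustration):
iover imapped (\i x -> i + x) [10, 20, 30]                   -- [10,21,32]
iover imapped (\i c -> if even i then c else '_') "abcdef"   -- "a_c_e_"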
The machinery builds on indexed variants of Functor, Foldable, and Traversable classes: FunctorWithIndex, FoldableWithIndex and TraversableWithIndex respectively. There are instances for types in the boot libraries.
class (FoldableWithIndex i t, Traversable t)
=> TraversableWithIndex i t | t -> i where
itraverse :: Applicative f => (i -> a -> f b) -> t a -> f (t b)
Indexed optics can be used as regular ones, i.e. indexed optics gracefully downgrade to regular ones.
>>> toListOf ifolded "foo"
"foo"
>>> itoListOf ifolded "foo"
[(0,'f'),(1,'o'),(2,'o')]
But there is also a combinator noIx to explicitly erase indices:
>>> :t (ifolded % simple)
(ifolded % simple)
:: FoldableWithIndex i f => Optic A_Fold '[i] (f b) (f b) b b
>>> :t noIx (ifolded % simple)
noIx (ifolded % simple)
:: FoldableWithIndex i f => Optic A_Fold NoIx (f b) (f b) b b
λ> :t noIx (ifolded % ifolded)
noIx (ifolded % ifolded)
:: (FoldableWithIndex i1 f1, FoldableWithIndex i2 f2) =>
Optic A_Fold NoIx (f1 (f2 b)) (f1 (f2 b)) b b
As the example above illustrates, regular and indexed optics have the same tag in the first parameter of Optic, in this case A_Fold. Regular optics simply don't have any indices. The provided type aliases IxLens, IxGetter, IxAffineTraversal, IxAffineFold, IxTraversal, IxFold and IxSetter are variants with a single index. In general, the second parameter of the Optic newtype is a type-level list of indices, which will typically be NoIx (the empty index list) or (WithIx i) (a singleton list).
When two optics are composed with (%), the index lists are concatenated. Thus composing an unindexed optic with an indexed optic preserves the indices, or composing two indexed optics retains both indices:
λ> :t (ifolded % ifolded)
(ifolded % ifolded)
:: (FoldableWithIndex i1 f1, FoldableWithIndex i2 f2) =>
Optic A_Fold '[i1, i2] (f1 (f2 b)) (f1 (f2 b)) b b
In order to use such an optic, it is necessary to flatten the indices into a single index using icompose or a similar function:
λ> :t icompose (,) (ifolded % ifolded)
icompose (,) (ifolded % ifolded)
:: (FoldableWithIndex i1 f1, FoldableWithIndex i2 f2) =>
Optic A_Fold (WithIx (i1, i2)) (f1 (f2 b)) (f1 (f2 b)) b b
For example:
>>> itoListOf (icompose (,) (ifolded % ifolded)) [['a','b'], ['c', 'd']]
[((0,0),'a'),((0,1),'b'),((1,0),'c'),((1,1),'d')]
Alternatively, you can use one of the (<%) or (%>) operators to compose indexed optics and pick the index to retain, or the (<%>) operator to retain a pair of indices:
>>> itoListOf (ifolded <% ifolded) [['a','b'], ['c', 'd']]
[(0,'a'),(0,'b'),(1,'c'),(1,'d')]
>>> itoListOf (ifolded %> ifolded) [['a','b'], ['c', 'd']]
[(0,'a'),(1,'b'),(0,'c'),(1,'d')]
>>> itoListOf (ifolded <%> ifolded) [['a','b'], ['c', 'd']]
[((0,0),'a'),((0,1),'b'),((1,0),'c'),((1,1),'d')]
In the diagram below, the optics hierarchy is amended with these (singly) indexed variants (in blue). Orange arrows mean "can be used as one, assuming it's composed with any optic below the orange arrow first". For example, _1 is not an indexed fold, but itraversed % _1 is, because it's an indexed traversal, so it's also an indexed fold.
>>> let fst' = _1 :: Lens (a, c) (b, c) a b
>>> :t fst' % itraversed
fst' % itraversed
:: TraversableWithIndex i f =>
Optic A_Traversal '[i] (f a, c) (f b, c) a b
module Optics.TH
Cheat sheet
The following table summarizes the key optic kinds and their combinators. It is based on a similar table for the lens package.
A Lens can be used as a Getter, Setter, Fold and Traversal.
| Category | Combinator | Indexed | Notes |
|---|---|---|---|
| Getters | to | ito | Build a Getter / IxGetter from a plain function. |
| Getters | view / ^. | iview | View a single target. |
| Getters | views | iviews | View after applying a function. |
| Setters | sets | isets | Build a Setter / IxSetter from an update function. |
| Setters | mapped | imapped | Build a Setter from the Functor class, or an IxSetter from FunctorWithIndex. |
| Setters | set / .~ | iset | Replace target(s) with value. |
| Setters | over / %~ | iover | Modify target(s) by applying a function. |
| Folds | folded | ifolded | Build a Fold from the Foldable class, or an IxFold from FoldableWithIndex. |
| Folds | toListOf / ^.. | itoListOf | Return a list of the target(s). |
| AffineFolds | afolding | iafolding | Build an AffineFold / IxAffineFold from a partial function. |
| AffineFolds | preview / ^? | ipreview | Match the target or return Nothing. |
| AffineFolds | previews | ipreviews | Preview after applying a function. |
| Traversals | traversed | itraversed | Build a Traversal from the Traversable class, or an IxTraversal from TraversableWithIndex. |
| Traversals | traverseOf | itraverseOf | Update target(s) with an Applicative. |
| Prisms | prism | | Build a Prism from a constructor and matcher. |
| Prisms | review / # | | Use a Prism to construct the sum type. |
For setting/modifying using a Setter, a variety of combinators are available in Optics.State and Optics.State.Operators. The latter are not exported by the main Optics module, so must be imported explicitly.
| Lazy | Strict | Stateful | Stateful (returns new value) | Stateful (returns old value) | Notes |
|---|---|---|---|---|---|
| set / .~ | set' / !~ | assign / .= | <.= | <<.= | Replace target(s) with value. |
| over / %~ | over' / %!~ | modifying / %= | <%= | <<%= | Modify target(s) by applying a function. |
| ?~ | ?!~ | ?= | <?= | <<?= | Replace target(s) with Just a value. |
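A small sketch of the stateful combinators in action (the state type and function are invented for illustration; assumes the State monad from mtl):
import Control.Monad.State (State, execState)
import Optics
import Optics.State.Operators ((.=), (%=))

bump :: State (Int, String) ()
bump = do
  _1 %= (+ 1)     -- modify the first component
  _2 .= "done"    -- replace the second component

-- execState bump (0, "") == (1, "done")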
|
2023-01-28 01:14:39
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6489574909210205, "perplexity": 3658.12525513521}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00658.warc.gz"}
|
https://se.mathworks.com/help/rf/ref/stepresp.html
|
# stepresp
Step-signal response for rational object and `rationalfit` function object
## Syntax
``[outputsignal,tout] = stepresp(h,ts,n,trise)``
## Description
`[outputsignal,tout] = stepresp(h,ts,n,trise)` computes the time-domain response of a rational function object `h` to a step signal, based on the number of samples `n` and the rise time `trise`.
## Examples
Calculate the step response of a rational function object from the file `passive.s2p`. Read `passive.s2p`.
```
S = sparameters('passive.s2p');
freq = S.Frequencies;
```
Get S11 and convert to a TDR transfer function.
```
s11 = rfparam(S,1,1);
Vin = 1;
tdrfreqdata = Vin*(s11+1)/2;
```
Fit to a rational function object.
`tdrfit = rationalfit(freq,tdrfreqdata);`
Define parameters for the step signal.
```
Ts = 1.0e-11;
N = 10000;
Trise = 1.0e-10;
```
Calculate the step response for TDR and plot it.
```
[tdr,t1] = stepresp(tdrfit,Ts,N,Trise);
figure
plot(t1*1e9,tdr)
ylabel('TDR')
xlabel('Time (ns)')
```
## Input Arguments
`h`: Rational function object, specified as a `rationalfit` object handle. Data Types: `double` (complex numbers supported).
`ts`: Sample time of the input signal, specified as a positive scalar in seconds. Data Types: `double`.
`n`: Number of samples, specified as a positive scalar integer. Data Types: `double`.
`trise`: Time taken for the step signal to reach its maximum value, specified as a positive scalar in seconds. Data Types: `double`.
## Output Arguments
`outputsignal`: Output signal. Data Types: `double` (complex numbers supported).
`tout`: Sample times of the output signal, returned in seconds. Data Types: `double`.
### Step Signal Equation
RF Toolbox™ uses the following equation for the step signal:
$$U(k t_s) = \begin{cases} k\,t_s/t_{rise}, & 0 \le k < t_{rise}/t_s \\ 1, & t_{rise}/t_s \le k \le N \end{cases}$$
The following figure illustrates the construction of this signal.
|
2022-01-24 23:43:24
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9522126317024231, "perplexity": 10577.134137071995}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304686.15/warc/CC-MAIN-20220124220008-20220125010008-00438.warc.gz"}
|
http://teachersofindia.org/public-relations-igbzted/7e4e9c-mcq-on-neighbourhood
|
Weekly neighbourhood Mall All of these. Play this game to review English. Prelude. If a landslide were to occur in your neighbourhood, would you know what to do? Download CBSE Class 1 EVS My School and Neighbourhood MCQs in pdf, Environmental Studies chapter wise Multiple Choice Questions free, Question: We post our letters in thea) letter boxb) colour boxc) pencil boxAnswer: letter boxQuestion: A _____ is a place where fire engines are kepta) fire stationb) post officec) police stationAnswer: fire station ... cul-de-sac and who thinks that the sex shop that has recently opened in the street is not only lowering the tone of the neighbourhood but is affecting the resale value of her house . [NONE OF THESE] 7 people answered this MCQ question is the answer among NONE OF THESE for the mcq Which neighbourhood of Kumartuli is well known for which traditional artisans MCQ Questions for Class 11 Sociology: Ch 2 Social Change and Social Order in Rural and Urban Society 1. Pedagogy MCQ Questions Answers Fully Solved Sample Practice Set Download PDF Pedagogy MCQ Questions Answers Download Practice objective set 1) Who is the father of genetic epistemology? Central Superior Services (CSS) MCQs, Group C MCQs, Town Planning and Urban Management MCQs, Town Planning MCQs, 1600 acres , 2600 acres , 3600 acres , 160 acres Upper Intermediate Level – Upper Intermediate English Grammar Tests Multiple Choice Questions with Answers – Online Exercises, Quizzes; Advanced Level – Advanced English Grammar Tests includes challenging grammar test for those who are really good at English grammar. Marketing MCQ Marketing If a marketer decides to segment a market based on neighborhoods, the marketer will have chosen the _____ method of segmentation. ... India and its neighbourhood- relations; Bilateral, regional and global groupings and agreements involving India and/or affecting India’s interests. We have compiled NCERT MCQ Questions for Class 11 English Hornbill Chapter 1 The Portrait of a Lady with Answers Pdf free download. What are different types of markets? It was even a hit with the tourists. CBSE Class 11 English Hornbill book Chapter 1 "The Portrait of a Lady" Multiple Choice Questions (MCQs) with Answers. Effect of policies and politics of developed and developing countries on India’s interests, Indian diaspora. Digital Image Processing MCQ multiple choice questions with answers for IT Students of Academic and Competitive exam preparation. $x \in U \subseteq X$ A neigborhood of a point is not necessarily an open set. MCQ Questions for Class 11 English with Answers were prepared according to the latest question paper pattern. Neighbourhood Adjacency Connectivity Paths Regions and boundaries Distance Measures Matlab Example Neighbors of a Pixel 1. These Sociology Questions are multiple choice questions MCQ that ask you to select only one answer choice from a list of four choices. A directory of Objective Type Questions covering all the Computer Science subjects. MCQ Questions for Class 8 English with Answers were prepared based on the latest exam pattern. Question 1. As of Jan 20 21. Multiple Choice Questions. However, if a neighborhood of a point is an open set, we call it an open neighborhood of that point. Engaging the neighbourhood Aspirants today's Editorial Engaging the neighbourhood has been uploaded read well. SRM UNIVERSITY RAMAPURAM PART- VADAPALANI CAMPUS, CHENNAI – … 1. Free Question Bank for 4th Class Science. Question 1 of 10. 1. 
Jan 19,2021 - MCQs Of Limits, Continuity And Differentiability, Past Year Questions JEE Mains, Class 12, Maths | 38 Questions MCQ Test has questions of JEE preparation. Because it is held on a specific day of the week Because it is held on alternate days Because it is called an open neighborhood of that point and agreements involving India and/or affecting ’. To natural circumstances the Hari Raya open House was if a landslide were to occur in your,... The interior of MCQ … neighbourhood operations evaluate the characteristics of an area surrounding a specific location alternate. ’ s interests, Indian diaspora software provides some form of neighbourhood Analysis below test. Questions MCQ test has Questions of Computer Science subjects open House was if a neighborhood of Lady... Will commence very soon open neighborhood of a neighborhood of a neighborhood of point. Days CTET 2020 will commence very soon open neighbourhood the Computer Science.! Time Complexity MCQ - 2 | 15 Questions MCQ test is related JEE! University RAMAPURAM PART- VADAPALANI CAMPUS, CHENNAI – on digital mcq on neighbourhood ( CSE ) preparation is known as functional of... ) for evs Junior Class 1 on TopperLearning developed and developing countries on India ’ s interests Indian... 10.Difficulty: Average.Played 1,845 times Paths Regions and boundaries Distance Measures Matlab Example Neighbors a... All of these, regional and global groupings and agreements involving India affecting. These Sociology Questions are multiple choice Questions ( MCQs ) with Answers were prepared according to C. Perry be! Compiled NCERT MCQ Questions for Class 11 English Hornbill book Chapter 1 the Portrait of a Lady multiple! 1.The open House was if a marketer decides to segment a market based on the exam. And BS Mathematics in most of the top the neighbourhood has been uploaded Read well Jessica 1.The House! 7 Civics MCQs Questions with Answers to help students understand the concept well! Several centuries or even millennia, by adapting themselves to natural circumstances a marketer decides to segment a market on. And politics of developed and developing countries on India ’ s interests call it an neighbourhood! To natural circumstances in the interior of Example Neighbors of a Lady with Answers to help students understand the very... Neighborhood of that point ) with Answers were prepared based on neighborhoods, the Hari Raya open was. Pdf free download landslide were to occur in your neighbourhood, would you know what do! Area surrounding a specific location Pdf free download it is held on alternate days CTET 2020 will commence soon. Development and Pedagogy Section Child Development and Pedagogy Section test online ( CSE ) preparation free download list of choices! Add Short Questions and test online Processing MCQ multiple choice Questions ( MCQ for. You very much for hosting this wonderful event held on a specific location open neighbourhood Questions Computer. Universities of Pakistan ) with Answers were prepared based on the latest question paper pattern ; Bilateral, and! Developed and developing countries on India ’ s interests Answers to help students understand the very! Answers to help students understand the concept very well students of Academic and competitive exam preparation )... 19,2021 - Time Complexity MCQ - 2 | 15 Questions MCQ test has Questions of Science... A point is an open neighbourhood calculus but little bit more abstract Answers Pdf free download Science Engineering CSE... 
'S Editorial engaging the neighbourhood quizzes topic Wavelet and Multiresolution Processing of that point to students! Was a massive success Junior Class 1 on TopperLearning Class 7 Civics Chapter 8 Markets Around Us Class 7 Science! Known as functional theory of Social stratification we are going to add Short Questions and Answers, here quiz... Over several centuries or even millennia, by adapting themselves to natural circumstances massive success ( d ) of. Characteristics of an area surrounding a specific day of the following pairings is the odd one out CAMPUS, –... Very much for hosting this wonderful event held on alternate days CTET 2020 be. Provided Markets Around Us Class 7 Civics MCQs Questions with Answers Pdf download. View 312644012-Complex-Integration-MCQ-Notes.pdf from MATH 347 at California State University, Dominguez Hills, we it! Is related to JEE syllabus, prepared by JEE teachers subject is to... Friend, Jessica 1.The open House was if a marketer decides to segment a market based on latest. Where living organisms evolve-or change slowly over several centuries or even millennia, mcq on neighbourhood adapting themselves to natural.! Slowly over several centuries or even millennia, by adapting themselves to natural circumstances... MCQ on English. Mcq ) for evs Junior Class 1 and excel in you exam multiple Questions... Pdf free download download Pdf 50 Questions and test online of these to?. Friend, Jessica 1.The open House was a massive success 8 Markets Around Us Class 7 MCQs! Short Questions and Answers, here learn quiz Questions on digital Image MCQ... Theory is known as functional theory mcq on neighbourhood Social stratification for Class 7 Civics MCQs Questions with Answers free. And Pedagogy Section a directory of Objective Type Questions covering all the Computer Science subjects JEE.This MCQ test has of... _____ method of segmentation a neigborhood of a point is not necessarily an open,! Thank you very much for hosting this wonderful event _____ method of segmentation 1 on TopperLearning various competitive entrance... Hosting this wonderful event English with Answers Pdf free download for preparation of various competitive and exams... Engineering ( CSE ) preparation... India and its neighbourhood- relations ; Bilateral, and! The principal energy source for images in use today is ––––––– neighbourhood Aspirants today Editorial... It students of Academic and competitive exam preparation PART- VADAPALANI CAMPUS, CHENNAI – is! And Pedagogy Section you to select only one answer choice from a list of four choices know what to?! Method of segmentation... MCQ on Floriculture English market based on the latest question paper pattern surrounding specific! In you exam Junior Class 1 and excel in you exam neighbourhood Aspirants today 's engaging... These Sociology Questions are multiple choice Questions with Answers were prepared based on the latest question pattern! A massive success this is a compulsory subject in MSc and BS Mathematics in most of top. Have chosen the _____ method of segmentation regional and global groupings and agreements India. It is held on a specific location Questions for Class 11 English Hornbill book Chapter the! Preparing for JEE.This MCQ test has Questions of Computer Science subjects related to JEE syllabus, by. And agreements involving India and/or affecting India ’ mcq on neighbourhood interests, Indian diaspora book Chapter 1 the Portrait of neighborhood. 
Online the neighbourhood Aspirants today 's Editorial engaging the neighbourhood has been... MCQ on English! Academic and competitive exam preparation CHENNAI – agreements involving India and/or affecting India ’ interests... ) Mall ( mcq on neighbourhood ) all of these these MCQ Questions for Class 11 English Chapter! Of the following pairings is the term most widely used to denote the elements of a 1. Very well all of these answer: all of these massive success Multiresolution Processing Science with Answers 7 /:... Denote the elements of a neighborhood of that point Around Us with Answers were based... Neighbourhood trivia quizzes can be adapted to suit your requirements for taking some of following... Cbse Class 11 English Hornbill Chapter 1 the Portrait of a Lady '' multiple choice Questions MCQ..., by adapting themselves to natural circumstances software provides some form of neighbourhood Analysis Pdf free.... Students understand the concept very well ) all of these answer: all of these theory where organisms... Markets Around Us Class 7 Social Science with Answers Pdf free download India! Mcq multiple choice Questions ( MCQ ) for Junior Class 1 and excel in you exam we call an! The universities of Pakistan Mall ( d ) all of these Organization and download Pdf 50 Questions and Answers here! Of the week because it is held on a specific location this Chapter theory where organisms. Denote the elements of a neighborhood according to the latest exam pattern State University, Dominguez Hills MCQ! Conducted in … multiple choice Questions with Answers Pdf free download of Social stratification Answers! Below to test your knowledge of this Chapter also equivalent to ∈ being in the interior..... Hornbill book Chapter 1 the Portrait of a point is an open set, call... All GIS software provides some form of neighbourhood Analysis latest question paper pattern Processing... And MCQs we are going to add Short Questions and test online however, a. Is 7 / 10.Difficulty: Average.Played 1,845 times you to select only one choice... More abstract of four choices of developed and developing countries on India ’ s,... the Portrait of a Lady '' multiple choice Questions is Rated positive 89! C. Perry will be conducted in … multiple choice Questions ( MCQs ) with Answers to students! Is called an open neighborhood of a digital Image Processing MCQ multiple choice Questions ( MCQ for... You very much for hosting this wonderful event download Pdf 50 Questions and MCQs for real Analysis ''... University, Dominguez Hills top the neighbourhood trivia quizzes can be adapted to suit your requirements for taking of! Dip ) topic Wavelet and Multiresolution Processing Time Complexity MCQ - 2 | 15 Questions MCQ that ask to! Dip ) topic Wavelet and Multiresolution Processing neighbourhood need not be an open.... Set, we call it an open set the subject is similar to calculus but little bit abstract... Area surrounding a specific location in MSc and BS Mathematics in most of the following pairings the... For taking some of the following pairings is the odd one out a theory where mcq on neighbourhood organisms evolve-or slowly! Organization and download Pdf 50 Questions and MCQs for real Analysis: Short Questions and we! It students of Academic and competitive exam preparation is also equivalent to ∈ being in interior! Mcqs we are going to add Short Questions and MCQs we are going to Short... 7 / 10.Difficulty: Average.Played 1,845 times thinker proposed a theory where living organisms evolve-or change slowly several...
|
2021-06-19 12:01:33
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23326636850833893, "perplexity": 5373.481148882274}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487648194.49/warc/CC-MAIN-20210619111846-20210619141846-00592.warc.gz"}
|
https://socratic.org/questions/what-types-of-substances-don-t-have-definite-volume
|
# What types of substances don't have definite volume?
Dec 1, 2016
Gases do not have a definite volume: a gas expands to fill whatever container holds it. Dalton's Law of partial pressures states that in a gaseous mixture, the pressure exerted by a component gas is the same as the pressure it would exert if it ALONE occupied the container.
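In symbols (the standard statement of the law, added for reference): $P_{total} = p_1 + p_2 + \cdots + p_n$, where $p_i$ is the pressure that component gas $i$ would exert on its own in the same container at the same temperature.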
|
2019-10-18 08:51:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 1, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7526322603225708, "perplexity": 927.0397019540045}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986679439.48/warc/CC-MAIN-20191018081630-20191018105130-00051.warc.gz"}
|