url | text | date | metadata
---|---|---|---|
http://mathynick.com/2014/08/ | ## The Satisfaction of a Job Well Done
A few days ago, I got a call from my uncle, Jerry, asking me if I could tutor his daughter, my cousin, Tayler. Of course I agreed! According to my uncle, my cousin was feeling very frustrated with the material in the classroom, and even more so at home while trying to understand it on her own, nearly to the point of tears. So she took some time to cool off and clear her head for about an hour.
I put this flowchart together years ago to help our students become familiar with quadrilaterals and identify them and their properties using the side lengths and the slopes of the sides. The information is usually gathered from a gridded Cartesian graph or from sets of ordered-pair vertices of the quadrilateral.
#### Information Needed or to be Derived:
Points:
$(x_1, y_1) \text{ and } (x_2, y_2)$
Pythagorean Theorem:
$\displaystyle c^2 = a^2 + b^2$
$\displaystyle \text{side length}=\sqrt{(\Delta x)^{2}+(\Delta y)^{2}}$
$\displaystyle \text{side length} = \sqrt{(x_{2}-x_{1})^2+(y_{2}-y_{1})^2}$
Slope:
$m=\frac{\Delta y}{\Delta x} = \frac{y_2-y_1}{x_2-x_1}$ | 2018-06-20 20:32:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4371953308582306, "perplexity": 2704.2122117833696}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267863886.72/warc/CC-MAIN-20180620202232-20180620222232-00210.warc.gz"} |
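To make the flowchart's computations concrete, here is a minimal sketch (not part of the original post; the function name and the assumption that the four vertices are given in order are mine) that applies the distance formula and the slope formula to each side of a quadrilateral:

```python
import math

def side_lengths_and_slopes(vertices):
    """Given four (x, y) vertices listed in order, return (length, slope) for each side."""
    sides = []
    for i in range(4):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % 4]
        length = math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)  # Pythagorean theorem
        slope = None if x2 == x1 else (y2 - y1) / (x2 - x1)  # None marks a vertical side
        sides.append((length, slope))
    return sides

# Example: a unit square; equal side lengths and perpendicular slopes identify it
print(side_lengths_and_slopes([(0, 0), (1, 0), (1, 1), (0, 1)]))
```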
https://www.proofwiki.org/wiki/Definition:Autocorrelation | # Definition:Autocorrelation
## Definition
Let $S$ be a stochastic process giving rise to a time series $T$.
The autocorrelation of $S$ at lag $k$ is defined as:
$\rho_k := \dfrac {\expect {\paren {z_t - \mu} \paren {z_{t + k} - \mu} } } {\sqrt {\expect {\paren {z_t - \mu}^2} \expect {\paren {z_{t + k} - \mu}^2} } }$
where:
$z_t$ is the observation at time $t$
$\mu$ is the mean of $S$
$\expect \cdot$ is the expectation.
### Autocorrelation Coefficient
$\rho_k$ is known as the autocorrelation coefficient of $S$ at $k$.
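As a concrete illustration (my own sketch, not part of the ProofWiki page), the expectations in the definition can be replaced by sample averages to estimate $\rho_k$ from an observed series; the denominator below uses the plain sample variance, which is valid under the stationarity assumption implicit in the definition:

```python
import numpy as np

def autocorrelation(z, k):
    """Sample estimate of the lag-k autocorrelation coefficient of a 1-D series z."""
    z = np.asarray(z, dtype=float)
    mu = z.mean()
    num = np.mean((z[:len(z) - k] - mu) * (z[k:] - mu))  # E[(z_t - mu)(z_{t+k} - mu)]
    den = np.mean((z - mu) ** 2)                         # E[(z_t - mu)^2]
    return num / den

rng = np.random.default_rng(0)
z = np.convolve(rng.normal(size=500), np.ones(5) / 5, mode="valid")  # smoothed noise
print([round(autocorrelation(z, k), 3) for k in range(4)])           # decays with the lag
```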
## Also known as
Autocorrelation is also known as serial correlation.
## Also see
• Results about autocorrelation can be found here.
## Sources
Part $\text {I}$: Stochastic Models and their Forecasting:
$2$: Autocorrelation Function and Spectrum of Stationary Processes
Part $\text {I}$: Stochastic Models and their Forecasting:
$2$: Autocorrelation Function and Spectrum of Stationary Processes:
$2.1$ Autocorrelation Properties of Stationary Models:
$2.1.2$ Stationary Stochastic Processes: Autocovariance and autocorrelation coefficients | 2023-03-22 21:45:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852467179298401, "perplexity": 1598.0314808795583}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296944452.97/warc/CC-MAIN-20230322211955-20230323001955-00511.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/dcds.2010.28.1693 | Oscillator and thermostat
• We study the problem of a potential interaction of a finite-dimensional Lagrangian system (an oscillator) with a linear infinite-dimensional one (a thermostat). In spite of the energy preservation and the Lagrangian (Hamiltonian) nature of the total system, under some natural assumptions the final dynamics of the finite-dimensional component turns out to be simple while the thermostat produces an effective dissipation.
Mathematics Subject Classification: Primary: 37K; Secondary: 70H.
Citation: | 2023-03-31 10:32:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.7223770022392273, "perplexity": 1828.5406067486597}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00487.warc.gz"} |
https://indico.cern.ch/event/656452/contributions/2859721/ | Quark Matter 2018
13-19 May 2018
Venice, Italy
Europe/Zurich timezone
The organisers warmly thank all participants for such a lively QM2018! See you in China in 2019!
D0-meson production as a function of event transverse spherocity in pp collisions at √s = 7 TeV with ALICE at the LHC
15 May 2018, 17:00
2h 40m
First floor and third floor (Palazzo del Casinò)
Poster Open heavy flavour
Speaker
Manoj Bhanudas Jadhav (IIT- Indian Institute of Technology (IN))
Description
Multiplicity and event-shape variables like spherocity can be used to select events according to their topology. They provide a powerful tool to study soft-QCD processes (low Q$^{2}$), such as multiple parton interactions (MPI) and colour reconnection (CR) mechanisms which are expected to produce more isotropic events with respect to events dominated by jet production.
At the Large Hadron Collider (LHC) energies, heavy quarks are produced in hard scattering processes and their production can be described using perturbative quantum chromodynamics (pQCD). The measurements of open heavy-flavour hadrons as a function of spherocity and charged-particle multiplicity could improve the theoretical understanding of the production mechanisms, and the interplay between hard and soft processes.
In this contribution, recent results of the production of prompt D$^{0}$-meson as a function of event transverse spherocity (S$_{\rm O}$) in minimum bias pp collisions at $\sqrt{s}$ = 7 TeV will be presented. The results will be compared to predictions obtained with PYTHIA event generator.
Collaboration: ALICE
Primary author
Manoj Bhanudas Jadhav (IIT- Indian Institute of Technology (IN))
Presentation Materials
| 2019-07-17 15:37:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.40651774406433105, "perplexity": 5044.384553577139}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525312.3/warc/CC-MAIN-20190717141631-20190717163631-00157.warc.gz"}
https://orbiter-forum.com/threads/insert-bad-word-here.4538/#post-110328 | #### joeybigO
##### can't get in a word edgewise
Donator
After much ado about Mac I finally got a windoze based computer.
It has Vista, and i'm mostly happy with it. I still run Orbiter off my Mac, but I want to see what it looks like on the new comp.
The hardest part is adding hardware manually, which is very difficult, some stuff that I don't remember how to do, i'm learning again.
There is some other issues i'm getting used to, the asking of everything going to the administrator, I didn't know when I set up my wife's account that I made her the administrator, and i'm just a user.
Suits me just fine.
My insert bad word here is due to the fact that I was bad mouthing this so bad, but I really like it so far. I'm quite impressed with the graphics capabilities and other actions so far.
However I just bought this 2 days ago. But I got a great deal: $298 at Wal-Mart. On sale.
#### Arrowstar
##### Probenaut
What sorts of specifications does that PC have?
Donator
#### Hielor
##### Defender of Truth
Donator
Beta Tester
Is it THIS computer?
For $300 it only comes with integrated Intel Graphics Media Accelerator, not good for Vista...
Clearly joeybigO disagrees. Where did everyone get this idea that you need some super-high-end graphics card just to run vista? it's not true.
Joey: If you go to the control panel under user accounts on your wife's account you can probably upgrade yourself to an Administrator.
Also, you can turn off the User Account Control which is the source of 90% of the annoying "Admin needed" popups.
#### Bj
Donator
Clearly joeybigO disagrees. Where did everyone get this idea that you need some super-high-end graphics card just to run vista? it's not true.
well you don't need a huge graphics card, but it helps;
Vista requirements
#### joeybigO
##### can't get in a word edgewise
Donator
Clearly joeybigO disagrees. Where did everyone get this idea that you need some super-high-end graphics card just to run vista? it's not true.
Joey: If you go to the control panel under user accounts on your wife's account you can probably upgrade yourself to an Administrator.
Also, you can turn off the User Account Control which is the source of 90% of the annoying "Admin needed" popups.
Well, I use that since my wonderful 2 daughters like to d/l "stuff" and put in onto the computer, what I would really like to take out is all that stuff on the browser. It's been so long I forgot how to remove that from the browser.
I think i'm going to make myself an admin on the computer.
And you were right, I don't need some big end graphics accelerator. I will use it to play Orbiter and maybe some internet stuff, but thats about all.
If the kids need it to pump up their games, which they don't have any at the moment, they have all Mac stuff games, then I think I might update the graphics, other than that, It runs great so far. | 2022-07-01 23:18:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20478315651416779, "perplexity": 2035.3477326950438}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103947269.55/warc/CC-MAIN-20220701220150-20220702010150-00300.warc.gz"} |
https://www.physicsforums.com/threads/uniform-object-acceleration.218178/ | # Uniform object acceleration
1. Feb 26, 2008
### chocolatelover
1. The problem statement, all variables and given/known data
An object moving with uniform acceleration has a velocity of 16.0 cm/s in the positive x direction when its x coordinate is 3.00 cm. If its x coordinate 1.60 s later is -5.00 cm, what is its acceleration?
2. Relevant equations
a=vf-vi/change in time
3. The attempt at a solution
a=3--5/1.6 s
Could someone please tell me if this looks correct?
Thank you very much
2. Feb 26, 2008
### XxBollWeevilx
It looks like you are using your coordinates as the velocities...this would not be correct. Think in terms of displacement. Do you know any formulas that can be used with uniform acceleration?
3. Feb 26, 2008
### belliott4488
It always helps to include the units of all the numbers in your equations. You're subtracting two numbers that have units cm and dividing them by something in seconds, so your result will be in cm/sec. That's a speed, however, not an acceleration.
As XxBollWeevilx pointed out, you need to use a different equation - one that combines just the quantities you're given and the one you're trying to solve for.
4. Feb 26, 2008
### chocolatelover
Thank you very much
Does this look right?
displacement=xf-xi
so, 3cm--5cm=displacement
=8cm
The displacement is just the distance, right?
The acceleration is:
vf-vi/change in t
I don't understand how the displacement helps. I still don't know what vi is, right?
Thank you
5. Feb 26, 2008
### cse63146
distance is how much you travelled, displacement is the difference between the initial position of a reference point and any later position.
6. Feb 26, 2008
### XxBollWeevilx
You're going from 3 cm to -5 cm, so your displacement will be $$x_f-x_i$$ = -5 cm - 3 cm. You DO know what vi is, from what I can see, but you don't know vf. But you don't need vf. Are there any equations that do not require vf? Think kinematics.
7. Feb 26, 2008
### chocolatelover
Thank you very much
So, if the displacement=xf-xi or 3cm--5cm=8cm in this case, then I can find the velocity, right?
velocity=xf-xi/change in t
8cm/1.6-0s=5m/s
In order to find the acceleration, I need to take vf-vi/change in t and it needs to be in m/s^2, right?
I know that the acceleration in uniform or constant. I know that vf=vi+at, but I don't know what vi or a are. Could you show me what I need to do to find the acceleration?
Thank you
8. Feb 26, 2008
### cse63146
it's -5 cm - 3 cm, not 3cm--5cm
displacement can be negative, distance can't
9. Feb 26, 2008
### XxBollWeevilx
No, don't worry about finding velocity. Taking xf-xi/change in t will only give you the average velocity during the motion, not the initial or final velocity. Look for a formula that doesn't require vf and everything but a is known.
10. Feb 26, 2008
### chocolatelover
Thank you very much
Could I use this equation?
xf=xi+vit+1/2at^2?
Thank you
11. Feb 26, 2008
### XxBollWeevilx
That would be a very good equation to use! :)
Nice job.
12. Feb 26, 2008
### chocolatelover
Thank you very much
Does this look correct?
-5=16+16(1.6)+1/2a1.6^2
a=-36.41
Thank you
13. Feb 26, 2008
### XxBollWeevilx
I didn't check the arithmetic, but that looks pretty good to me.
14. Feb 26, 2008
### chocolatelover
Thank you very much
Regards
15. Feb 26, 2008
### chocolatelover
Could someone please check this? I want to make sure I'm doing it right.
Thank you
16. Feb 26, 2008
### chocokat
The first 16 should be x1, i.e. 3. Be careful of these little errors!
17. Feb 26, 2008
### chocolatelover
Thank you very much
What do you mean x1, i.e. 3? It should be multiplied by 1?
Thank you
18. Feb 26, 2008
### chocokat
This is the equation you are using:
What is the value of xi, and what is the value you used when you calculated it?
(I'm sorry, I used x1 above when I should have used xi)
Last edited: Feb 26, 2008
19. Feb 26, 2008
### chocolatelover
It's okay. It should have been 3, right? Does -26.25 look alright?
Last edited: Feb 26, 2008
20. Feb 26, 2008
### chocokat
Yeah, you've got it now. | 2018-01-16 12:38:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.620678186416626, "perplexity": 2242.6899686297193}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886416.17/warc/CC-MAIN-20180116105522-20180116125522-00141.warc.gz"} |
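As a quick numeric check of the thread's final answer (not part of the original discussion), solving $x_f = x_i + v_i t + \tfrac{1}{2} a t^2$ for $a$ with the given values:

```python
x_i, x_f = 3.0, -5.0   # cm
v_i = 16.0             # cm/s
t = 1.6                # s

a = 2 * (x_f - x_i - v_i * t) / t ** 2
print(a)               # -26.25 (cm/s^2), matching the value agreed on in the thread
```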
http://math.stackexchange.com/questions/141823/is-the-study-of-algebraic-curve-is-techniquely-equal-to-the-advanced-division-of | # Is the study of algebraic curves technically equal to the advanced division of analytic geometry, and if not, what is the difference?
Is the study of algebraic curves technically equal to the advanced division of analytic geometry? If not, what is the difference?
And what is the other branch of advanced analytic geometry called?
In addition, what is the difference between algebraic geometry and algebraic curves?
## 1 Answer
In algebraic geometry one studies algebraic varieties, which locally look like the zero locus of some polynomials. A curve is a one-dimensional variety.
I don't hear people use the term "analytic geometry" much these days, but it could mean the study of analytic spaces, i.e. geometric objects which locally look like the zero locus of some real or complex analytic functions. There is some overlap with algebraic geometry, but neither is a special case of the other.
The "analytic geometry" I am accustomed to amounts to using coordinates to solve problems that were usually done by synthetic (i.e. in the manner of Euclid and cohorts) means. – J. M. May 6 '12 at 17:40 | 2016-06-01 00:29:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8413761258125305, "perplexity": 147.86409371645107}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464053252010.41/warc/CC-MAIN-20160524012732-00067-ip-10-185-217-139.ec2.internal.warc.gz"} |
https://raisingthebar.nl/2017/01/24/regressions-with-stochastic-regressors-2/ | 24 Jan 2017
## Regressions with stochastic regressors 2
### Regressions with stochastic regressors 2: two approaches
We consider the slope estimator for the simple regression
$y_i=a+bx_i+e_i$
assuming that $x_i$ is stochastic.
First approach: the sample size is fixed. The unbiasedness and efficiency conditions are replaced by their analogs conditioned on $x$. The outcome is that the slope estimator is unbiased and its variance is the average of the variance that we have in case of a deterministic regressor. See the details.
Second approach: the sample size goes to infinity. The main tools used are the properties of probability limits and laws of large numbers. The outcome is that, in the limit, the sample characteristics are replaced by their population cousins and the slope estimator is consistent. This is what we focus on here.
### A brush-up on convergence in probability
Review the intuition and formal definition. This is the summary:
Fact 1. Convergence in probability (which applies to sequences of random variables) is a generalization of the notion of convergence of number sequences. In particular, if $\{a_n\}$ is a numerical sequence that converges to a number $a$, that is, $\lim_{n\rightarrow\infty}a_n=a$, then, treating $a_n$ as a random variable, we have convergence in probability ${\text{plim}}_{n\rightarrow\infty}a_n=a$.
Fact 2. For those who are familiar with the theory of limits of numerical sequences, from the previous fact it should be clear that convergence in probability preserves arithmetic operations. That is, for any sequences of random variables $\{X_n\},\{Y_n\}$ such that limits ${\text{plim}}X_n$ and ${\text{plim}}Y_n$ exist, we have
$\text{plim}(X_n\pm Y_n)=\text{plim}X_n\pm\text{plim}Y_n,$ $\text{plim}(X_n\times Y_n)=\text{plim}X_n\times\text{plim}Y_n,$
and if $\text{plim}Y_n\ne 0$ then
$\text{plim}(X_n/ Y_n)=\text{plim}X_n/\text{plim}Y_n.$
This makes convergence in probability very handy. Convergence in distribution doesn't have such properties.
### A brush-up on laws of large numbers
See the site map for several posts about this. Here we apply the Chebyshev inequality to prove the law of large numbers for sample means. A generalization is given in the Theorem in the end of that post. Here is a further intuitive generalization:
Normally, unbiased sample characteristics converge in probability to their population counterparts.
Example 1. We know that the sample variance $s^2=\frac{1}{n-1}\sum(X_i-\bar{X})^2$ unbiasedly estimates the population variance $\sigma^2$, that is, $Es^2=\sigma^2$. The intuitive generalization says that then
(1) $\text{plim}s^2=\sigma^2$.
Here I argue that, for the purposes of obtaining some identities from the general properties of means, instead of the sample variance it's better to use the variance defined by $Var_u(X)=\frac{1}{n}\sum(X_i-\bar{X})^2$ (with division by $n$ instead of $n-1$). Using Facts 1 and 2 we get from (1) that
(2) $\text{plim}Var_u(X)=\text{plim}\frac{n-1}{n}\frac{1}{n-1}\sum(X_i-\bar{X})^2$
$=\text{plim}(1-\frac{1}{n})s^2=\text{plim}s^2=\sigma^2=Var(X)$
(sample variance converges in probability to population variance). Here we use $\lim(1-\frac{1}{n})=1$.
Example 2. Similarly, sample covariance converges in probability to population covariance:
(3) $\text{plim}Cov_u(X,Y)=Cov(X,Y)$
where by definition $Cov_u(X,Y)=\frac{1}{n}\sum(X_i-\bar{X})(Y_i-\bar{Y})$.
### Proving consistency of the slope estimator
Here (see equation (5)) I derived the representation of the OLS estimator of the slope
$\hat{b}=b+\frac{Cov_u(X,e)}{Var_u(X)}$
Using preservation of arithmetic operations for convergence in probability, we get
(4) $\text{plim}\hat{b}=\text{plim}\left[b+\frac{Cov_u(X,e)}{Var_u(X)}\right]=\text{plim}b+\text{plim}\frac{Cov_u(X,e)}{Var_u(X)}$
$=b+\frac{\text{plim}Cov_u(X,e)}{\text{plim}Var_u(X)}=b+\frac{Cov(X,e)}{Var(X)}.$
In the last line we used (2) and (3). From (4) we see what conditions should be imposed for the slope estimator to converge to a spike at the true slope:
$Var(X)\neq 0$ (existence condition)
and
$Cov(X,e)=0$ (consistency condition).
Under these conditions, we have $\text{plim}\hat{b}=b$ (this is called consistency).
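To make the consistency claim tangible, here is a small simulation sketch (not from the original post; the parameter values and sample sizes are arbitrary choices) in which $Cov(X,e)=0$ holds, so the OLS slope estimate should concentrate around the true slope as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, b_true = 1.0, 2.0

def ols_slope(n):
    x = rng.normal(size=n)   # stochastic regressor
    e = rng.normal(size=n)   # error drawn independently of x, so Cov(X, e) = 0
    y = a_true + b_true * x + e
    # OLS slope = Cov_u(x, y) / Var_u(x)
    return np.mean((x - x.mean()) * (y - y.mean())) / np.mean((x - x.mean()) ** 2)

for n in (10, 100, 1_000, 100_000):
    print(n, round(ols_slope(n), 4))   # estimates approach b_true = 2.0
```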
Conclusion. In a way, the second approach is technically simpler than the first. | 2022-05-18 07:29:03 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 66, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9484952092170715, "perplexity": 500.66104532021546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662521152.22/warc/CC-MAIN-20220518052503-20220518082503-00349.warc.gz"} |
https://bitworking.org/news/2010/02/nasa/ | # NASA
The NASA budget request, which actually included a slight budget boost up to $19 billion, would cut out the Constellation completely, extend the International Space Station's mission through at least 2020, and set aside $6 billion over five years to support commercial spacecraft development.
It would also increase funding for fundamental research, Earth science and development programs for ground-breaking technology for space exploration.
There's obviously a lot of hand-wringing about this, which is understandable, for many people it's an unthinkable change because NASA has been the sole provider of U.S. manned space flight for the past 50 years. Personally I'm thrilled. | 2023-03-26 03:17:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2977440655231476, "perplexity": 6831.9169172612965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00508.warc.gz"} |
http://mathhelpforum.com/calculus/203996-show-sin-x-continuous.html | # Math Help - Show that sin(x) is continuous
1. ## Show that sin(x) is continuous
Using the observation that the absolute value of sin(x) is less than or equal to the absolute value of x, show that sin(x) is continuous at x = 0.
2. ## Re: Show that sin(x) is continuous
You probably want an $\epsilon-\delta$ proof.
$\forall \epsilon>0, \exists \delta>0, \forall x \in \mathbb{R}: |x|<\delta \Rightarrow |\sin(x)|<\epsilon$
Can you prove the above statement (with the given hint)?
3. ## Re: Show that sin(x) is continuous
I will try, thank you.
4. ## Re: Show that sin(x) is continuous
Originally Posted by mathlearner100
I will try, thank you.
Great. With the hint the statement is proved immediately. | 2015-05-23 09:03:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8258414268493652, "perplexity": 2143.7971011160075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927427.57/warc/CC-MAIN-20150521113207-00272-ip-10-180-206-219.ec2.internal.warc.gz"} |
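For completeness, a minimal sketch of the $\epsilon-\delta$ argument hinted at above (not part of the original thread): given $\epsilon > 0$, choose $\delta = \epsilon$. Then for every $x$ with $|x - 0| < \delta$, the hint gives $|\sin(x) - \sin(0)| = |\sin(x)| \le |x| < \delta = \epsilon$, so $\sin$ is continuous at $x = 0$.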
https://jmtomczak.github.io/blog/5/5_IDF.html | ### Introduction¶
While discussing flow-based models, we presented them as density estimators, namely, models that represent stochastic dependencies among continuous random variables. We introduced the change of variables formula that helps to express a random variable by transforming it using invertible maps (bijections) $f$ to a random variable with a known probability density function. Formally, it is defined as follows:
$$p(\mathbf{x}) = p\left(\mathbf{v}=f^{-1}(\mathbf{x})\right) \left| \mathbf{J}_{f}(x) \right|^{-1} ,$$
where $\mathbf{J}_{f}(x)$ is the Jacobian of $f$ at $\mathbf{x}$.
However, there are potential issues with such an approach, namely:
First of all, in many problems (e.g., image processing) the considered random variables (objects) are discrete. For instance, images typically take values in $\{0, 1, \ldots, 255\} \subset \mathbb{Z}$. In order to apply flows, we must apply dequantization (Hoogeboom et al., 2021) that results in a lower bound to the original probability distribution.
A continuous space possesses various potential pitfalls. One of them is that if a transformation is a bijection (as in flows), not all continuous deformations are possible. This is tightly connected with topology and, more precisely, homeomorphisms, i.e., continuous functions between topological spaces that have continuous inverse functions, and diffeomorphisms, i.e., invertible functions that map one differentiable manifold to another such that both the function and its inverse are smooth. It is not crucial to know topology, but a curious reader may take a detour and read up on it; it's definitely a fascinating field and I wish I knew more about it! Let us consider three examples.
Imagine we want to transform a square into a circle (Figure 1.A). It is possible to find a homeomorphism (i.e., a bijection) that turns the square into the circle and back. Imagine you have a hammer and an iron square. If you start hitting the square infinitely many times, you can get an iron circle. Then, you can do it "backward" to get the square back. I know, it's not realistic but hey, we're talking about math here!
However, if we consider a line segment and a circle, the situation is a bit more complicated. It is possible to transform the line segment into a circle, but not the other way around. Why? Because while transforming the circle to the line segment, it is unclear which point of the circle corresponds to the beginning (or the end) of the line segment. That's why we cannot invert the transformation!
Figure 1. Examples of homeomorphic spaces (A) and non-homeomorphic spaces (B).
Another example that I really like, and which is closer to the potential issues of continuous flows, is transforming a ring into a ball as in Figure 2. The goal is to replace the blue ring with the magenta ball. In order to make the transformation bijective, while transforming the blue ring in place of the magenta ball, we must ensure that the new magenta "ring" is in fact "broken" so that the new blue "ball" can get inside! Again, why? If the magenta ring is not broken, then we can't say how the blue ball got inside that destroys bijectivity! In the language of topology, it is impossible because the two spaces are non-homeomorphic.
Figure 2. An example of "replacing" a ring (in blue) with a ball (in magenta).
Alright, but how this affects the flow-based models? I hope that some of you asked this question, or maybe even imagine possible cases where this might hinder learning flows. In general, I would say it's fine, and we shouldn't look for faults where there are none or almost none. However, if you work with flows that require dequantization, then you can spot cases like the one in Figure 3. In this simple example, we have two discrete random variables that after uniform dequantization have two regions with equal probability mass, and the remaining two regions with zero probability mass (Hoogeboom et al., 2021). After training a flow-based model, we have a density estimator that assigns non-zero probability mass where the true distribution has zero density! Moreover, the transformation in the flow must be a bijection, therefore, there is a continuity between the two squares (see Figure 3, right). Where did we see that? Yes, in Figure 2! We must know how to invert the transformation, thus, there must be a "trace" of how the probability mass moves between the regions.
Figure 3. An example of uniformly dequantized discrete random variables (left) and a flow-based model (right). Notice that in these examples, the true distribution assigns equal probability mass to two regions (in orange), and zero probability mass to the remaining two regions (in black). However, the flow-based model assigns probability mass outside the original non-zero probability regions.
Again, we can ask ourselves if it is bad. Well, I would say not really, but if we think of a case with more random variables, and there is always some little error here and there, this problem of probability mass leakage would result in a far-from-perfect model. And, overall, the model could err in proper probability assignment.
### Flows in $\mathbb{R}$ or maybe in $\mathbb{Z}$?¶
Before we consider any specific cases and discuss discrete flows, first we need to answer whether there is a change of variables formula for discrete random variables. The answer, fortunately, is yes! Let us consider $\mathbf{x} \in \mathcal{X}^{D}$ where $\mathcal{X}$ is a discrete space, e.g., $\mathcal{X} = \{0,1\}$ or $\mathcal{X} = \mathbb{Z}$. Then the change of variables takes the following form:
$$p(\mathbf{x}) = \pi\left(\mathbf{z}_{0} = f^{-1}(\mathbf{x})\right) ,$$
where $f$ is an invertible transformation and $\pi(\cdot)$ is a base distribution. Immediately we can spot a "missing" Jacobian. This is correct! Why? Because now we live in the discrete world where the probability mass is assigned to points that are "shapeless" and the bijection cannot change the volume. Thus, the Jacobian-determinant is equal to $1$! That seems to be good news, doesn't it? We can take any bijective transformation and we don't need to bother about the Jacobian. That's obviously true; however, we need to remember that the output of the transformation must still be discrete, i.e., $\mathbf{z} \in \mathcal{X}^{D}$. As a result, we cannot use an arbitrary invertible neural network. We will discuss it in a minute; however, before we do that, it is worth discussing the expressivity of discrete flows.
Let us assume that we have an invertible transformation $f: \mathcal{X}^{D} \rightarrow \mathcal{X}^{D}$. Moreover, we have $\mathcal{X} = \{0,1\}$. As noted by (Papamakarios et al., 2019), a discrete flow can only permute probability masses. Since there is no Jacobian (or, rather, the Jacobian-determinant is equal to $1$), there is no chance to decrease or increase the probability for specific values. We depict it in Figure 4. You can easily imagine that as a Rubik's cube and your hands being the flow. If you record your moves, you can always play the video backward, thus, it's invertible. However, you can only shuffle the colors around! As a result, we don't gain anything by applying the discrete flow, and learning the discrete flow is equivalent to learning the base distribution $\pi$. So we are back to square one.
Figure 4. An example of a discrete flow for two binary random variables. Colors represent various probabilities (i.e., the sum of all squares is $1$).
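To see the "permutation only" point concretely, here is a tiny sketch (my own illustration, not from the post) for two binary variables: pushing the joint distribution through any bijection of the four outcomes merely shuffles the entries of the probability table, so the multiset of probabilities, and hence the best achievable fit, cannot change.

```python
# Joint probabilities of two binary variables (the setting of Figure 4), indexed by outcomes (x1, x2).
p = {(0, 0): 0.1, (0, 1): 0.2, (1, 0): 0.3, (1, 1): 0.4}

# An invertible map on the outcomes (an XOR-style coupling; it is its own inverse).
f = lambda x: (x[0], x[1] ^ x[0])

# Push-forward: p_new(y) = p(f^{-1}(y)); here f^{-1} = f.
p_new = {f(x): pr for x, pr in p.items()}

print(sorted(p.values()) == sorted(p_new.values()))  # True: only a permutation of the same masses
```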
However, as pointed out by (van den Berg et al., 2020), the situation looks different if we consider an extended space (or an infinite space like $\mathbb{Z}$). The discrete flow can still only shuffle the probabilities, but now it can re-organize them in such a way that the probabilities can be factorized! In other words, it can help the base distribution to be a product of marginals, $\pi(\mathbf{z}) = \prod_{d=1}^{D} \pi_{d}(z_d|\theta_{d})$, and the dependencies among variables are now encoded in the invertible transformations. An example of this case is presented in Figure 5. We refer to (van den Berg et al., 2020) for a more thorough discussion with an appropriate lemma.
Figure 5. An example of a discrete flow for two binary random variables but in the extended space. Colors represent various probabilities (i.e., the sum of all squares is $1$).
This is amazing information! It means that building a flow-based model in the discrete space makes sense. Now we can think of how to build an invertible neural network in discrete spaces.
### Let's do it! Integer Discrete Flows¶
We know now that it makes sense to work with discrete flows and that they are flexible as long as we use extended spaces or infinite spaces like $\mathbb{Z}$. However, the question is how to formulate an invertible transformation (or rather: an invertible neural network) that will output discrete values.
(Hoogeboom et al., 2019) proposed to focus on integers since they can be seen as discretized continuous values. As such, we consider coupling layers (Dinh et al., 2016) and modify them accordingly. Let us remind ourselves of the definition of coupling layers for $\mathbf{x} \in \mathbb{R}^{D}$:
\begin{align*} \mathbf{y}_{a} &= \mathbf{x}_{a} \\ \mathbf{y}_{b} &= \exp \left(s\left(\mathbf{x}_{a}\right)\right) \odot \mathbf{x}_{b} + t\left(\mathbf{x}_{a}\right) , \end{align*}
where $s(\cdot)$ and $t(\cdot)$ are arbitrary neural networks called scaling and transition, respectively.
Considering integer-valued variables, $\mathbf{x} \in \mathbb{Z}^{D}$, requires modifying this transformation. First, using scaling might be troublesome because multiplying by integers is still possible, but when we invert the transformation, we divide by integers, and dividing an integer by an integer does not necessarily result in an integer. Therefore, we must remove scaling. Second, we use an arbitrary neural network for the transition. However, this network must return integers! (Hoogeboom et al., 2019) utilized a simple trick, namely, they said that we can round the output of $t(\cdot)$ to the closest integer. As a result, we add (in the forward) or subtract (in the inverse) integers from integers that is perfectly fine (the outcome is still integer-valued). Eventually, we get the following coupling layer:
\begin{align*} \mathbf{y}_{a} &= \mathbf{x}_{a} \\ \mathbf{y}_{b} &= \mathbf{x}_{b} + \lfloor t\left(\mathbf{x}_{a}\right) \rceil, \end{align*}
where $\lfloor \cdot \rceil$ is the rounding operator. An inquisitive reader could ask at this point whether the rounding operator still allows using the backpropagation algorithm. In other words, whether the rounding operator is differentiable. The answer is NO, but (Hoogeboom et al., 2019) showed that using the straight-through estimator (STE) of a gradient is sufficient. As a side note, the STE in this case uses the rounding in the forward pass of the network, $\lfloor t\left(\mathbf{x}_{a}\right) \rceil$, but it utilizes $t\left(\mathbf{x}_{a}\right)$ in the backward pass (to calculate gradients). (van den Berg et al., 2020) further indicated that indeed the STE works well and the bias does not hinder training.
Very recently, in (Tomczak, 2020) it has been shown how to generalize invertible transformations like bipartite coupling layers, among others. For instance, we can divide $\mathbf{x}$ into four parts, $\mathbf{x} = [\mathbf{x}_{a}, \mathbf{x}_{b}, \mathbf{x}_{c}, \mathbf{x}_{d}]$, and the following transformation is invertible (Tomczak, 2020):
\begin{align*} \mathbf{y}_{a} &= \mathbf{x}_{a} + \lfloor t\left(\mathbf{x}_{b}, \mathbf{x}_{c}, \mathbf{x}_{d}\right) \rceil \\ \mathbf{y}_{b} &= \mathbf{x}_{b} + \lfloor t\left(\mathbf{y}_{a}, \mathbf{x}_{c}, \mathbf{x}_{d}\right) \rceil \\ \mathbf{y}_{c} &= \mathbf{x}_{c} + \lfloor t\left(\mathbf{y}_{a}, \mathbf{y}_{b}, \mathbf{x}_{d}\right) \rceil \\ \mathbf{y}_{d} &= \mathbf{x}_{d} + \lfloor t\left(\mathbf{y}_{a}, \mathbf{y}_{b}, \mathbf{y}_{c}\right) \rceil . \end{align*}
This new invertible transformation could be seen as a kind of autoregressive processing since $\mathbf{y}_{a}$ is used to calculate $\mathbf{y}_{b}$, then both $\mathbf{y}_{a}$ and $\mathbf{y}_{b}$ are used for obtaining $\mathbf{y}_{c}$ and so on. As a result, we get a more powerful transformation than the bipartite coupling layer.
We need to remember to use a permutation layer to reverse the order of variables. Otherwise, some inputs would be only partially processed. This is true for any coupling layer.
The last component we need to think of is the base distribution. Similarly to flow-based models, we can use various tricks to boost the performance of the model. For instance, we can consider squeezing, factoring-out, and a mixture model for the base distribution (Hoogeboom et al., 2019). However, in this post, we try to keep the model as simple as possible, therefore, we use the product of marginals as the base distribution. For images represented as integers, we use the following:
\begin{align*} \pi(\mathbf{z}) &= \prod_{d=1}^{D} \pi_{d}(z_{d}) \\ &= \prod_{d=1}^{D} \mathrm{DL}(z_{d}|\mu_{d}, \nu_{d}) \end{align*}
where $\pi_{d}(z_{d}) = \mathrm{DL}(z_{d}|\mu_{d}, \nu_{d})$ is the discretized logistic distribution that is defined as a difference of CDFs of the logistic distribution as follows (Chakraborty & Chakravarty, 2016):
$$\pi(z) = \mathrm{sigm}\left( (z+0.5-\mu)/\nu \right) - \mathrm{sigm}\left( (z-0.5-\mu)/\nu \right),$$
where $\mu \in \mathbb{R}$ and $\nu > 0$ denote the mean and the scale, respectively, and $\mathrm{sigm}(\cdot)$ is the sigmoid function. Notice that this is equivalent to calculating the probability of $z$ falling into a bin of length $1$, therefore, we add $0.5$ in the first CDF and subtract $0.5$ from the second CDF. An example of the discretized distribution is presented in Figure 6. Interestingly, we can use this distribution to replace the Categorical distribution in previous posts, as it was done in (Kingma et al., 2016). We can even use a mixture of discretized logistic distributions to further improve the final performance (Hoogeboom et al., 2019; Salimans et al., 2017).
Figure 6. An example of the discretized logistic distribution with $\mu=0$ and $\nu=1$. The magenta area corresponds to the probability mass of a bin of size $1$.
Eventually, our log-likelihood function takes the following form:
\begin{align*} \ln p(\mathbf{x}) &= \sum_{d=1}^{D} \ln \mathrm{DL}(z_{d} = f^{-1}(\mathbf{x})|\mu_{d}, \nu_{d}) \\ &= \sum_{d=1}^{D} \ln \left( \mathrm{sigm}\left( (z_d+0.5-\mu_d)/\nu_d \right) - \mathrm{sigm}\left( (z_d-0.5-\mu_d)/\nu_d \right) \right) , \end{align*}
where we make all $\mu_{d}$ and $\nu_{d}$ learnable parameters. Notice that $\nu_{d}$ must be positive (strictly larger than $0$), therefore, in the implementation, we will consider the logarithm of the scale, because taking $\exp$ of the log-scale ensures strictly positive values.
Now, we have all components to implement our own Integer Discrete Flow (IDF)! Below, there is a code with a lot of comments that should help to understand every single line of it. The full code (with auxiliary functions) that you can play with is available here: [link].
import torch
import torch.nn as nn
import torch.nn.functional as F

# Auxiliary helper (shipped with the full code linked above): a numerically stable
# log(exp(a) - exp(b)), used to subtract the two CDF terms in log-space. The exact form
# sketched here is a standard implementation and is assumed, not copied from the source.
def log_min_exp(a, b, epsilon=1e-8):
    return a + torch.log(1. - torch.exp(b - a) + epsilon)

# This function implements the log of the discretized logistic distribution.
# Chakraborty & Chakravarty, "A new discrete probability distribution with integer support on (−∞, ∞)",
# Communications in Statistics - Theory and Methods, 45:2, 492-505, DOI: 10.1080/03610926.2013.830743
def log_integer_probability(x, mean, logscale):
    scale = torch.exp(logscale)
    # log( sigmoid((x + 0.5 - mean)/scale) - sigmoid((x - 0.5 - mean)/scale) )
    logp = log_min_exp(
        F.logsigmoid((x + 0.5 - mean) / scale),
        F.logsigmoid((x - 0.5 - mean) / scale))
    return logp
# We need to also turn torch.round (i.e., the rounding operator) into a differentiable function.
# For this purpose, we use the rounding in the forward pass, but the original input for the backward pass.
# This is nothing else than the straight-through estimator.
class RoundStraightThrough(torch.autograd.Function):
    def __init__(self):
        super().__init__()

    @staticmethod
    def forward(ctx, input):
        rounded = torch.round(input, out=None)
        return rounded

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through: pass the incoming gradient unchanged through the rounding.
        return grad_output
# That's the class of the Integer Discrete Flows (IDFs).
# There are two options implemented:
# Option 1: The bipartite coupling layers as in (Hoogeboom et al., 2019).
# Option 2: A new coupling layer with 4 parts as in (Tomczak, 2020).
# We implement the second option explicitly, without any loop, so that it is very clear how it works.
class IDF(nn.Module):
def __init__(self, netts, num_flows, D=2):
super(IDF, self).__init__()
print('IDF by JT.')
# Option 1:
if len(netts) == 1:
self.t = torch.nn.ModuleList([netts[0]() for _ in range(num_flows)])
self.idf_git = 1
# Option 2:
elif len(netts) == 4:
self.t_a = torch.nn.ModuleList([netts[0]() for _ in range(num_flows)])
self.t_b = torch.nn.ModuleList([netts[1]() for _ in range(num_flows)])
self.t_c = torch.nn.ModuleList([netts[2]() for _ in range(num_flows)])
self.t_d = torch.nn.ModuleList([netts[3]() for _ in range(num_flows)])
self.idf_git = 4
else:
raise ValueError('You can provide either 1 or 4 translation nets.')
# The number of flows (i.e., invertible transformations).
self.num_flows = num_flows
# The rounding operator
self.round = RoundStraightThrough.apply
# Initialization of the parameters of the base distribution.
# Notice they are parameters, so they are trained alongside the weights of neural networks.
self.mean = nn.Parameter(torch.zeros(1, D)) #mean
self.logscale = nn.Parameter(torch.ones(1, D)) #log-scale
# The dimensionality of the problem.
self.D = D
    # The coupling layer.
    def coupling(self, x, index, forward=True):
        # Option 1:
        if self.idf_git == 1:
            (xa, xb) = torch.chunk(x, 2, 1)

            if forward:
                yb = xb + self.round(self.t[index](xa))
            else:
                yb = xb - self.round(self.t[index](xa))

            # Concatenate the untouched part and the transformed part back together.
            return torch.cat((xa, yb), 1)

        # Option 2:
        elif self.idf_git == 4:
            (xa, xb, xc, xd) = torch.chunk(x, 4, 1)

            if forward:
                ya = xa + self.round(self.t_a[index](torch.cat((xb, xc, xd), 1)))
                yb = xb + self.round(self.t_b[index](torch.cat((ya, xc, xd), 1)))
                yc = xc + self.round(self.t_c[index](torch.cat((ya, yb, xd), 1)))
                yd = xd + self.round(self.t_d[index](torch.cat((ya, yb, yc), 1)))
            else:
                yd = xd - self.round(self.t_d[index](torch.cat((xa, xb, xc), 1)))
                yc = xc - self.round(self.t_c[index](torch.cat((xa, xb, yd), 1)))
                yb = xb - self.round(self.t_b[index](torch.cat((xa, yc, yd), 1)))
                ya = xa - self.round(self.t_a[index](torch.cat((yb, yc, yd), 1)))

            # Concatenate the four transformed parts back together.
            return torch.cat((ya, yb, yc, yd), 1)
    # Similarly to RealNVP, we also have the permute layer.
    def permute(self, x):
        return x.flip(1)
# The main function of the IDF: forward pass from x to z.
def f(self, x):
z = x
for i in range(self.num_flows):
z = self.coupling(z, i, forward=True)
z = self.permute(z)
return z
# The function for inverting z to x.
def f_inv(self, z):
x = z
for i in reversed(range(self.num_flows)):
x = self.permute(x)
x = self.coupling(x, i, forward=False)
return x
# The PyTorch forward function. It returns the log-probability.
def forward(self, x, reduction='avg'):
z = self.f(x)
if reduction == 'sum':
return -self.log_prior(z).sum()
else:
return -self.log_prior(z).mean()
# The function for sampling:
# First we sample from the base distribution.
# Second, we invert z.
def sample(self, batchSize, intMax=100):
# sample z:
z = self.prior_sample(batchSize=batchSize, D=self.D, intMax=intMax)
# x = f^-1(z)
x = self.f_inv(z)
return x.view(batchSize, 1, self.D)
# The function for calculating the logarithm of the base distribution.
def log_prior(self, x):
log_p = log_integer_probability(x, self.mean, self.logscale)
return log_p.sum(1)
    # A function for sampling integers from the base distribution.
    # (intMax is accepted to match the call in sample(); it is not used in this sketch.)
    def prior_sample(self, batchSize, D=2, intMax=100):
        # Sample from logistic
        y = torch.rand(batchSize, self.D)
        # Here we use a property of the logistic distribution:
        # In order to sample from a logistic distribution, first sample y ~ Uniform[0,1].
        # Then, calculate log(y / (1.-y)), scale it with the scale, and add the mean.
        x = torch.exp(self.logscale) * torch.log(y / (1. - y)) + self.mean
        # And then round it to an integer.
        return torch.round(x)
# Data dimensionality and hidden width of the translation networks
# (D = 64 matches the summary call below; M = 256 is an assumed value).
D = 64
M = 256

# The number of invertible transformations
num_flows = 8
# This variable defines whether we use:
# Option 1: 1 - the classic coupling layer proposed in (Hoogeboom et al., 2019)
# Option 2: 4 - the general invertible transformation in (Tomczak, 2020) with 4 partitions
idf_git = 1
if idf_git == 1:
nett = lambda: nn.Sequential(nn.Linear(D // 2, M), nn.LeakyReLU(),
nn.Linear(M, M), nn.LeakyReLU(),
nn.Linear(M, D // 2))
netts = [nett]
elif idf_git == 4:
nett_a = lambda: nn.Sequential(nn.Linear(3 * (D // 4), M), nn.LeakyReLU(),
nn.Linear(M, M), nn.LeakyReLU(),
nn.Linear(M, D // 4))
nett_b = lambda: nn.Sequential(nn.Linear(3 * (D // 4), M), nn.LeakyReLU(),
nn.Linear(M, M), nn.LeakyReLU(),
nn.Linear(M, D // 4))
nett_c = lambda: nn.Sequential(nn.Linear(3 * (D // 4), M), nn.LeakyReLU(),
nn.Linear(M, M), nn.LeakyReLU(),
nn.Linear(M, D // 4))
nett_d = lambda: nn.Sequential(nn.Linear(3 * (D // 4), M), nn.LeakyReLU(),
nn.Linear(M, M), nn.LeakyReLU(),
nn.Linear(M, D // 4))
netts = [nett_a, nett_b, nett_c, nett_d]
# Init IDF
model = IDF(netts, num_flows, D=D)
# Print the summary (like in Keras); `summary` is assumed to come from the
# pytorch-model-summary package, whose API matches the call below.
from pytorch_model_summary import summary
print(summary(model, torch.zeros(1, 64), show_input=False, show_hierarchical=False))
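To show how the pieces fit together, here is a minimal training-step sketch (not from the original post; the optimizer, learning rate, number of epochs, and a `train_loader` yielding integer-valued tensors of shape `(batch, 64)` are illustrative assumptions):

```python
# A minimal training sketch; all hyperparameters below are illustrative assumptions.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(10):
    for x in train_loader:       # assumed: integer-valued tensors of shape (batch, 64)
        loss = model(x.float())  # forward() returns the (averaged) negative log-likelihood
        optimizer.zero_grad()
        loss.backward()          # gradients pass through the rounding via the straight-through estimator
        optimizer.step()
```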
And we are done, this is all we need to have! After running the code (take a look at: [link]) and training the IDFs, we should obtain results similar to the following:
Figure 7. Examples of outcomes of the training: A Randomly selected real images. B Unconditional generations from the IDF with bipartite coupling layers (Option 1). C Unconditional generations from the IDF with 4-partition coupling layers (Option 2).
### What's next?¶
Similarly to our example of RealNVP, here we present rather a simplified implementation of IDFs. We can use many of the tricks presented in the post on RealNVP. On recent developments on IDFs, please see also (van den Berg et al., 2020).
Integer discrete flows have a great potential in compression. Since IDFs learn the distribution $p(\mathbf{x})$ directly on the integer-valued objects, they are excellent candidates for lossless compression. As presented in (Hoogeboom et al., 2019), they are competitive with other codecs for lossless compression of images.
The recent paper by (van den Berg et al., 2020), further shows that the potential bias following from the STE of the gradients isn't so significant, and they can learn flexible distributions. This result suggests that IDFs require special attention, especially for real-life applications like compression.
It seems that the next step would be to think of more powerful transformations for discrete variables, e.g., see (Tomczak, 2020), and developing powerful architectures. Another interesting direction is utilizing alternative learning algorithms in which gradients could be better estimated, or even replaced.
### References¶
(van den Berg et al., 2020) van den Berg, R., Gritsenko, A. A., Dehghani, M., Sønderby, C. K., & Salimans, T. (2020). IDF++: Analyzing and Improving Integer Discrete Flows for Lossless Compression. arXiv preprint arXiv:2006.12459.
(Chakraborty & Chakravarty, 2016) Subrata Chakraborty and Dhrubajyoti Chakravarty. A new discrete probability distribution with integer support on (−∞, ∞). Communications in Statistics-Theory and Methods, 45(2):492–505, 2016.
(Dinh et al., 2016) Dinh, Laurent, Jascha Sohl-Dickstein, and Samy Bengio. "Density estimation using real nvp." arXiv preprint arXiv:1605.08803 (2016).
(Hoogeboom et al., 2019) Hoogeboom, E., Peters, J. W., Berg, R. V. D., & Welling, M. (2019). Integer discrete flows and lossless compression. arXiv preprint arXiv:1905.07376.
(Hoogeboom et al., 2021) Hoogeboom, E., Cohen, T. S., & Tomczak, J. M. (2020). Learning Discrete Distributions by Dequantization. AABI 2021
(Kingma et al., 2016) Kingma, D. P., Salimans, T., Jozefowicz, R., Chen, X., Sutskever, I., & Welling, M. (2016). Improved Variational Inference with Inverse Autoregressive Flow. Advances in Neural Information Processing Systems, 29, 4743-4751.
(Papamakarios et al., 2019) Papamakarios, G., Nalisnick, E., Rezende, D. J., Mohamed, S., & Lakshminarayanan, B. (2019). Normalizing flows for probabilistic modeling and inference. arXiv preprint arXiv:1912.02762.
(Salimans et al., 2017) Salimans, T., Karpathy, A., Chen, X., & Kingma, D. P. (2017). Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517.
(Theis et al., 2016) Theis, L., Oord, A. V. D., & Bethge, M. (2016). A note on the evaluation of generative models. ICLR 2016
(Tomczak, 2020) Tomczak, J. M. (2020). General Invertible Transformations for Flow-based Generative Modeling. arXiv preprint arXiv:2011.15056. | 2023-04-01 10:18:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9447987079620361, "perplexity": 1477.416334515494}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949958.54/warc/CC-MAIN-20230401094611-20230401124611-00467.warc.gz"} |
https://www.physicsforums.com/threads/inverse-finsler-metric.642198/ | # Inverse Finsler metric
1. Oct 8, 2012
### ngkamsengpeter
Given a Finsler geometry (M,L,F) and $$g_{ab}^L=\frac{1}{2} \frac{\partial^2 L}{\partial y^a \partial y^b}$$
$$g_{ab}^F=\frac{1}{2} \frac{\partial^2 F^2}{\partial y^a \partial y^b}$$
$$F(x,y)=|L(x,y)|^{1/r}$$
I manage to get the following form
$$g_{ab}^F=\frac{2|L|^{2/r}}{rL}( g_{ab}^L+\frac{2-r}{2rL} \frac{\partial L}{\partial y^a}\frac{\partial L}{\partial y^b})$$
However, how does one obtain the inverse of the metric? That is, how do we obtain the following form:
$$g^{Fab}=\frac{rL}{2|L|^{2/r}}( g^{Lab}-\frac{2(2-r)}{r(r-1)L} y^a y^b)$$
I am new to this so have no idea how to get the inverse. Any help will be appreciated. Thanks. | 2018-03-18 06:57:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7062996625900269, "perplexity": 441.2031520581025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645538.8/warc/CC-MAIN-20180318052202-20180318072202-00734.warc.gz"} |
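A hedged hint, not from the original thread: $g^{F}_{ab}$ above is a scalar multiple of a rank-one update of $g^{L}_{ab}$, so the Sherman-Morrison formula applies. With $w_{a} = \partial L/\partial y^{a}$ and $k = \frac{2-r}{2rL}$,
$$\left( g^{L} + k\, w w^{T} \right)^{-1} = g^{L\,-1} - \frac{k\, \left(g^{L\,-1} w\right) \left(g^{L\,-1} w\right)^{T}}{1 + k\, w^{T} g^{L\,-1} w} .$$
Since $L$ is $r$-homogeneous in $y$, Euler's theorem applied to the $(r-1)$-homogeneous functions $\partial L/\partial y^{a}$ gives $2 g^{L}_{ab} y^{b} = (r-1)\, \partial L/\partial y^{a}$, i.e. $g^{L\,ab} w_{b} = \frac{2}{r-1}\, y^{a}$ and $w_{a} y^{a} = rL$. Substituting these into the formula above, and multiplying by the inverse of the overall factor $\frac{2|L|^{2/r}}{rL}$, should reproduce the quoted expression with the coefficient $-\frac{2(2-r)}{r(r-1)L}$ in front of $y^{a} y^{b}$.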
https://coq.gitlab.io/zulip-archive/stream/256336-jsCoq/topic/How.20to.20make.20an.20interactive.20website.20from.20scratch.3F.html | ## Stream: jsCoq
### Topic: How to make an interactive website from scratch?
#### Cyril Cohen (Dec 01 2021 at 23:33):
Hi, I've spent the last half day trying to set up a website such as https://coq-next.now.sh. I managed to do a seemingly successful npm install, and successfully ran npx jscoqdoc, but the generated html contains <script src="./ui-js/jscoq-loader.js"></script> (and not ./node_modules/jscoq/ui-js/jscoq-loader.js, and it seems that the path to codemirror is also wrong, but I don't know where or how yet).
Could someone point me to the sources of a generated webpage, together with a script (nix, docker or action, preferably some reproducible thing) that builds everything from scratch?
Also I wish to use mathcomp 1.13.0 but I can't seem to find where or how to select versions of packages...
#### Cyril Cohen (Dec 01 2021 at 23:36):
update: by copying node_modules/* at the root of the website I managed to run codemirror, now mathcomp is not found, although my package.json contains the following:
"dependencies": {
"express": "^4.17.1",
"jscoq": "^0.13.3",
"@jscoq/mathcomp": "^0.13.3"
},
#### Cyril Cohen (Dec 01 2021 at 23:39):
(also if I want to customize the headers, for example adding some tex-like-based rendering (e.g. using mathjax), what is the best practice to do that?)
#### Emilio Jesús Gallego Arias (Dec 02 2021 at 02:59):
@Shachar Itzhaky knows a bit more about how this is done; it has been a while since I last generated .v.html files
For mathjax I was able to just use it successfully, but by hand.
In general we need to streamline the workflow quite a lot, but we lack manpower
#### Enrico Tassi (Dec 02 2021 at 07:06):
do you have the same header/footer we use here?
https://github.com/math-comp/mcb/tree/master/coq
#### Enrico Tassi (Dec 02 2021 at 07:07):
also it seems you can point inside the node_modules dir, rather than copying out
#### Shachar Itzhaky (Dec 03 2021 at 12:18):
Hmm yes the use case of jscoqdoc seems a bit broken. The script generates wrong paths... sry it has undergone zero qa.
#### Shachar Itzhaky (Dec 03 2021 at 16:47):
Oh I just discovered that there is an environment variable JSCOQ_URL. If you set it before running jscoqdoc it will determine the prefix used. So you can set it to e.g. ./node_modules/jscoq.
Ofc the default should not be . and also this var should be documented.
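So with a layout like the one above, invoking it as something like JSCOQ_URL=./node_modules/jscoq npx jscoqdoc yourfile.v (the .v file name here is just a placeholder) should make the generated page point at the node_modules copy.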
#### Cyril Cohen (Dec 03 2021 at 18:15):
Another problem with jscoqdoc is that it does not seem to accept the arguments to the options --with-footer and --with-header (it looks like it discards them before calling coqdoc...)
#### Cyril Cohen (Dec 03 2021 at 18:15):
I'm struggling to put enclosing <div>s around chunks of code & comments, is it possible?
#### Cyril Cohen (Dec 03 2021 at 18:17):
Also it seems that jscoq will only transform coqdoc output into textareas if it is caught by #main > div.code, so if I have extra divs in between, it does not work... setting jscoq_ids = ['div.code']; does not seem to help :sad:
#### Shachar Itzhaky (Dec 03 2021 at 22:31):
Ahhh that's a nasty presumption on my part... I was assuming that any .html files on the jscoqdoc command line are meant to be processed by jscoqdoc proper instead of being passed on to coqdoc. I completely overlooked --with-{header,footer}.
So, hack: either rename your header and footer to not end in .html (sorry). Or, run coqdoc yourself however you like, and then run jscoqdoc on the resulting HTMLs.
#### Shachar Itzhaky (Dec 03 2021 at 22:32):
As for jscoq_ids... yeah overriding it should have worked. Perhaps double-check by putting a breakpoint in jscoq-agent.js at the point where it tries to initialize jsCoq?
#### Cyril Cohen (Dec 04 2021 at 00:21):
Shachar Itzhaky said:
As for jscoq_ids... yeah overriding it should have worked. Perhaps double-check by putting a breakpoint in jscoq-agent.js at the point where it tries to initialize jsCoq?
I'm using jscoq-loader.js rather than jscoq-agent.js, what's the difference?
#### Cyril Cohen (Dec 04 2021 at 00:33):
Shachar Itzhaky said:
As for jscoq_ids... yeah overriding it should have worked. Perhaps double-check by putting a breakpoint in jscoq-agent.js at the point where it tries to initialize jsCoq?
My mistake (in a Makefile): the change to my .js file was not propagated correctly; editing jscoq_ids does work
Last updated: Jan 31 2023 at 10:01 UTC | 2023-01-31 10:48:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4975903332233429, "perplexity": 5721.742859710552}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499857.57/warc/CC-MAIN-20230131091122-20230131121122-00626.warc.gz"} |
https://www.storyofmathematics.com/limit-laws/ | # Limit laws – Definition, Properties, and Examples
Ever wondered if there’s an easier way to find the limits of a function without their graph or table of values? We can use the different properties and laws of limits available. Limit laws are important in manipulating and evaluating the limits of functions.
Limit laws are helpful rules and properties we can use to evaluate a function’s limit.
Limit laws are also helpful in understanding how we can break down more complex expressions and functions to find their own limits. In this article, we’ll learn about the different limit laws and also discuss other limit properties that may help us in our next pre-calculus and calculus topics.
Before establishing these properties and learning how to apply them, why don’t we go ahead and begin with the definition of limit laws?
## What are the limit laws?
As we have mentioned, limit laws are the different laws or properties we can apply to manipulate functions and eventually find their limits.
For example, if we want to find the limit of $f(x) = -2x^2 + 5x - 8$ as $x$ approaches 6, our previous knowledge would tell us to either graph the function or construct a table of values.

However, with the limit laws, we’ll need only a few steps to evaluate $\lim_{x \rightarrow 6} (-2x^2 + 5x - 8)$.
\begin{aligned} \lim_{x\rightarrow6} -2x^2 + 5x - 8 &= \lim_{x\rightarrow6} -2x^2 + \lim_{x\rightarrow6} 5x + \lim_{x\rightarrow6} -8\color{blue} \text{ \{ Addition Law\}}\\&=\lim_{x\rightarrow6} -2x^2 + \lim_{x\rightarrow6} 5x + (-8)\color{blue} \text{ \{ Constant Law\}}\\&=-72 + 30 - 8\color{blue} \text{ \{ Polynomial Function Property\}}\\&=-50 \end{aligned}
Don’t worry. Once you’re introduced to a list of limit laws, evaluating limits will be easier for you as well! In fact, we have already learned some of these limit laws in the past – but they are in much simpler and general forms.
Note that throughout the entire discussion, we will assume that the two expressions, $\lim_{x\rightarrow a} f(x)$ and $\lim_{x\rightarrow a} g(x)$, exist and $a$ is a constant.
## What are the properties of limits?
Why don’t we slowly introduce ourselves to the properties of limits and laws that may help us? This section will also explore examples that make use of these properties and laws, so we can better understand them as well.
If this is your first time encountering these properties, try to write down the limit laws’ names and algebraic definitions. Summarize these in one table as your guide for this section’s examples and the next topics you might encounter involving a function’s limit.
Don’t have a paper or your note-taking app nearby? No worries, we also summarized these properties for you at the end of this section!
### Understanding the two most fundamental limit laws
We’ll group these two basic laws of limits together because they are the most frequently applied and the simplest laws of limits. These are the constant and identity laws.
Constant Law: $\boldsymbol{\lim_{x\rightarrow a} c = c}$
This limit law states that the limit of a constant $c$, as $x$ approaches $a$, is just equal to the constant itself.
The graph above illustrates why the constant law is true for all values of $a$ and $c$. Regardless of the value of $a$, the function will continue to be equal to $c$.
Here are a few examples on how we can apply constant law for some limits.
• $\lim_{x\rightarrow 2} 3 = 3$
• $\lim_{x\rightarrow 1} -6 = -6$
• $\lim_{x\rightarrow 6} \pi = \pi$
Identity Law: $\boldsymbol{\lim_{x\rightarrow a} x = a}$

Know why we call this the identity law? That’s because we’re dealing with the identity function, $y = x$, for this law of limit. This law states that the limit of $y = x$ as $x$ approaches $a$ is simply equal to $a$.
Here’s an illustration of why the identity law is true for all values of $x$. The value of $y$ depends directly on the value of $x$, so as $x$ approaches $a$, $y$ will also approach $a$.
Check out these three examples to better understand the identity law.
• $\lim_{x\rightarrow -4} x= -4$
• $\lim_{x\rightarrow \sqrt{2}} x = \sqrt{2}$
• $\lim_{x\rightarrow \pi} x= \pi$
Ready to learn more limit laws? Here are five more that focus on the four arithmetic operations: addition, subtraction, multiplication, and division.
### Limit laws involving arithmetic operations
We’re grouping these limit laws because they share similar forms and contain the four most used arithmetic operations in a given function.
Addition Law: $\boldsymbol{\lim_{x\rightarrow a} [f(x) + g(x)]= \lim_{x\rightarrow a} f(x) + \lim_{x\rightarrow a} g(x)}$
The addition law reiterates that when we take the limit of the sum of two functions, the result is equivalent to the sum of the respective limits of the function as $x$ approaches $a$.
If $\lim_{x\rightarrow 3} f(x) = -2$ and $\lim_{x\rightarrow 3} g(x) = 5$, this means that $\lim_{x\rightarrow 3} [f(x) + g(x)]$ can be determined as shown below.

\begin{aligned}\lim_{x\rightarrow 3} [f(x) + g(x)] &= \lim_{x\rightarrow 3} f(x) + \lim_{x\rightarrow 3} g(x) \\&=-2 + 5\\&=3 \end{aligned}
Subtraction Law: $\boldsymbol{\lim_{x\rightarrow a} [f(x) – g(x)]= \lim_{x\rightarrow a} f(x) – \lim_{x\rightarrow a} g(x)}$
This law is similar to its addition counterpart. It states that the limit of two functions’ difference is just equal to the difference between the limits of each function as $x \rightarrow a$.
Why don’t we apply this law along with constant and identity laws to simplify $\lim_{x \rightarrow -6} (x – 4 )$.
\begin{aligned} \lim_{x \rightarrow -6} (x - 4 ) &=\lim_{x \rightarrow -6} x - \lim_{x \rightarrow -6} 4 \color{blue} \text{ {Subtraction Law}}\\&=\lim_{x \rightarrow -6} x -4\color{blue} \text{ { Constant Law}}\\&= -6 - 4 \color{blue} \text{ {Identity Law}}\\&=-10 \end{aligned}
This is a good example showing how all these properties are applied in simplifying and evaluating limits.
Coefficient Law: $\boldsymbol{\lim_{x\rightarrow a} c \cdot f(x)= c \lim_{x\rightarrow a} f(x)}$
This law states that the limit of the product shared by a constant, $c$, and the function, $f(x)$, will be the same when we multiply $c$ to the limit of $f(x)$ as it approaches $a$.
Here are some straightforward applications of this law:
• If $\lim_{x\rightarrow 2} f(x) = -4$, $\lim_{x\rightarrow 2} -5 \cdot f(x)$ is equal to $-4 \cdot -5 = 20$
• If $\lim_{x\rightarrow 3} g(x) = \dfrac{1}{2}$, $\lim_{x\rightarrow 3} -12 \cdot g(x)$ is equal to $-12 \cdot \dfrac{1}{2} = -6$
Product Law: $\boldsymbol{\lim_{x\rightarrow a} [f(x) \cdot g(x)] = \lim_{x\rightarrow a} f(x) \cdot \lim_{x\rightarrow a} g(x) }$
Like addition and subtraction laws, this particular limit law states that the limit of the product of two functions is equal to the product of each function’s corresponding limits.
Why don’t we try to simplify $\lim_{x\rightarrow 5} 2x$ using the product law and the previous laws we’ve learned?
\begin{aligned} \lim_{x \rightarrow 5} 2x &= \lim_{x \rightarrow 5} (2 \cdot x)\\&=\lim_{x \rightarrow 5} 2 \cdot \lim_{x \rightarrow 5} x\color{blue} \text{ {Product Law}}\\&=2 \cdot \lim_{x \rightarrow 5} x\color{blue} \text{ { Constant Law}}\\&= 2 \cdot 5 \color{blue} \text{ {Identity Law}}\\&=10 \end{aligned}
Quotient Law: $\boldsymbol{\lim_{x\rightarrow a} \dfrac{f(x)}{g(x)} = \dfrac{\lim_{x\rightarrow a} f(x) }{\lim_{x\rightarrow a} g(x) }}$, where $\boldsymbol{\lim_{x\rightarrow a} g(x) \neq 0}$
This means that the limit of the quotient of two functions is equivalent to the ratio of each of the functions’ limits. Note that this law is only applicable when $\lim_{x\rightarrow a} g(x) \neq 0$.
This means that if $\lim_{x\rightarrow a} f(x) = P$ and $\lim_{x\rightarrow a} g(x) = Q$, the limit of $\dfrac{f(x)}{g(x)}$ as $x \rightarrow a$ is equal to $\dfrac{\lim_{x\rightarrow a} f(x)}{\lim_{x\rightarrow a} g(x)} = \dfrac{P}{Q}$.
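For instance, with made-up values for illustration: if $\lim_{x\rightarrow a} f(x) = 6$ and $\lim_{x\rightarrow a} g(x) = -3$, then $\lim_{x\rightarrow a} \dfrac{f(x)}{g(x)} = \dfrac{6}{-3} = -2$.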
### Limit laws involving exponents and roots
Now that we’ve covered all the limit laws that involve the four basic operations, it’s time to up our game and let’s learn about the limit laws for functions that contain exponents and roots.
Power Law: $\boldsymbol{\lim_{x\rightarrow a} [f(x)]^n = \left[\lim_{x\rightarrow a} f(x) \right]^n}$, where $\boldsymbol{\lim_{x\rightarrow a} f(x) \neq 0}$ at $\boldsymbol{n < 0}$
Taking the limit of a function raised to the $n^{th}$ power returns the same result as finding the limit of $f(x)$ as $x$ approaches $a$ first and then raising that result to the $n^{th}$ power.
Note that this law is only true when the limit of $f(x)$ as $x$ approaches $a$ is not zero when $n$ is negative.
Simplifying $\lim_{x\rightarrow -2} (x – 1)^4$ will require us to use power law. Let’s go ahead and take a look at how we can simplify this expression.
\begin{aligned} \lim_{x \rightarrow -2} (x -1)^4 &= \left[\lim_{x \rightarrow -2} (x -1)\right]^4 \color{blue} \text{ {Power Law}} \\&=\left(\lim_{x \rightarrow -2} x – \lim_{x \rightarrow -2} 1\right)^4\color{blue} \text{ {Subtraction Law}}\\&=\left(-2 – \lim_{x \rightarrow -2} 1\right)^4\color{blue} \text{ { Identity Law}}\\&= (-2 – 1)^4\color{blue} \text{ {Constant Law}}\\&=(-3)^4\\&= 81 \end{aligned}
Root Law: $\boldsymbol{\lim_{x\rightarrow a} \sqrt[n]{f(x)} = \sqrt[n]{ \lim_{x\rightarrow a} f(x)}}$, where $\boldsymbol{\lim_{x\rightarrow a} f(x) \geq 0}$ when $\boldsymbol{n}$ is even

Remember that $k^{1/n} = \sqrt[n]{k}$, so the root law is actually an extension of the power law. This means that the limit of the $n^{th}$ root of the function is also equal to the $n^{th}$ root of the function’s limit as $x$ approaches $a$.
Since we have restrictions when the root is even, ensure that the limit of $f(x)$ as it approaches $a$ is positive when $n$ is even.
Let’s apply what we’ve just learned to simplify $\lim_{x\rightarrow 4} \sqrt[3]{f(x)}$ if $\lim_{x\rightarrow 4} f(x) = -27$?
Using the root law, we have $\lim_{x\rightarrow 4} \sqrt[3]{f(x)} = \sqrt[3]{ \lim_{x\rightarrow 4} f(x)}$. Given that $\lim_{x\rightarrow 4} f(x) = -27$, we now have $\lim_{x\rightarrow 4} \sqrt[3]{f(x)} = \sqrt[3]{ -27}$ or $-3$.
Have you noticed a common pattern shared by all the limit laws we’ve just learned? The general rule shown by the limit laws is that whenever we need the limit of functions combined by some operation, we can instead find the limits of the individual functions first and then apply the operation to the results.
### Summary of the properties of limits
We’ll learn more about the applications of limit laws when we learn how to evaluate the limits of more complex functions. For now, let’s go ahead first and summarize the limit laws that we’ve just learned throughout this article.
| Limit Law | Algebraic Definition | Example |
| --- | --- | --- |
| Constant Law | $\lim_{x\rightarrow a} c = c$ | $\lim_{x\rightarrow 3} 4 = 4$ |
| Identity Law | $\lim_{x\rightarrow a} x = a$ | $\lim_{x\rightarrow 3} x = 3$ |
| Addition Law | $\lim_{x\rightarrow a} [f(x) + g(x)] = \lim_{x\rightarrow a} f(x) + \lim_{x\rightarrow a} g(x)$ | $\lim_{x\rightarrow 2} [(x - 1)+ (2x)] = \lim_{x\rightarrow 2} (x - 1) + \lim_{x\rightarrow 2} 2x$ |
| Subtraction Law | $\lim_{x\rightarrow a} [f(x) - g(x)] = \lim_{x\rightarrow a} f(x) - \lim_{x\rightarrow a} g(x)$ | $\lim_{x\rightarrow 2} [(x - 1) -(2x)] = \lim_{x\rightarrow 2} (x - 1) - \lim_{x\rightarrow 2} 2x$ |
| Coefficient Law | $\lim_{x\rightarrow a} cf(x) = c \lim_{x\rightarrow a} f(x)$ | $\lim_{x\rightarrow 5} \sqrt{2} x = \sqrt{2} \lim_{x\rightarrow 5} x$ |
| Product Law | $\lim_{x\rightarrow a} [f(x) \cdot g(x)] = \lim_{x\rightarrow a} f(x) \cdot \lim_{x\rightarrow a} g(x)$ | $\lim_{x\rightarrow 2} [(x - 1) \cdot (2x)] = \lim_{x\rightarrow 2} (x - 1) \cdot \lim_{x\rightarrow 2} 2x$ |
| Quotient Law | $\lim_{x\rightarrow a} \dfrac{f(x)}{g(x)} = \dfrac{\lim_{x\rightarrow a} f(x) }{\lim_{x\rightarrow a} g(x) }$ | $\lim_{x\rightarrow 2} \dfrac{x - 1}{2x} = \dfrac{\lim_{x\rightarrow 2} x - 1 }{\lim_{x\rightarrow 2} 2x }$ |
| Power Law | $\lim_{x\rightarrow a} [f(x)]^n = \left[\lim_{x\rightarrow a} f(x) \right]^n$ | $\lim_{x\rightarrow 3} [2(x + 1)]^4 = \left[\lim_{x\rightarrow 3} 2(x + 1)\right]^4$ |
| Root Law | $\lim_{x\rightarrow a} \sqrt[n]{f(x)} = \sqrt[n]{ \lim_{x\rightarrow a} f(x)}$ | $\lim_{x\rightarrow 3} \sqrt[4]{2(x + 1)} = \sqrt[4]{ \lim_{x\rightarrow 3} 2(x + 1)}$ |
Make sure to review all the properties we’ve discussed in the previous section before answering the problems that follow.
Example 1
Given that $\lim_{x\rightarrow a} f(x) = -24$ and $\lim_{x\rightarrow a} g(x) = 4$, find the value of the following expressions using the properties of limits we’ve just learned.
a. $\lim_{x\rightarrow a} [f(x) + g(x)]$
b. $\lim_{x\rightarrow a} [4 g(x)]$
c. $\lim_{x\rightarrow a} \dfrac{\sqrt{g(x)}}{0.5f(x)}$
Solution
When working with problems like these for the first time, it’s always helpful to have a list of the limit laws we’ve just discussed. This way, you can always check for a limit law that may apply to our problem.
We can rewrite $\lim_{x\rightarrow a} [f(x) + g(x)]$ as $\lim_{x\rightarrow a} f(x) + \lim_{x\rightarrow a} g(x)$ using the addition law.
Substitute the given values for the limits of $f(x)$ and $g(x)$ as they approach $a$.
\begin{aligned}\lim_{x\rightarrow a} [f(x) + g(x)] &= \lim_{x\rightarrow a} f(x) + \lim_{x\rightarrow a} g(x) \\&=-24 + 4\\&= -20\end{aligned}
a. This means that $\lim_{x\rightarrow a} [f(x) + g(x)] = \boldsymbol{-20}$.
Similarly, we can rewrite $\lim_{x\rightarrow a} [4 g(x)]$ as $4\lim_{x\rightarrow a} g(x)$ using the coefficient law.
\begin{aligned}\lim_{x\rightarrow a} [4g(x)] &= 4\lim_{x\rightarrow a} g(x) \\&=4(4)\\&= 16\end{aligned}
b. Hence, $\lim_{x\rightarrow a} [4 g(x)]$ is equal to $\boldsymbol{16}$.
The third expression will require multiple limit laws before we can find the expression’s value. In fact, for this item, we’ll need the following properties:
• Quotient Law to break down the fraction’s limit.
• Root Law for the numerator.
• Coefficient Law for the denominator.
Let’s go ahead and break down $\lim_{x\rightarrow a} \dfrac{\sqrt{g(x)}}{0.5f(x)}$ to see how these laws would be helpful for this item.
\begin{aligned}\lim_{x\rightarrow a} \dfrac{\sqrt{g(x)}}{0.5f(x)}&=\dfrac{\color{blue}{\lim_{x\rightarrow a}\sqrt{g(x)}}}{\color{blue}{\lim_{x\rightarrow a} [0.5f(x)]}} \color{blue}\text{ Quotient Law}\\&=\dfrac{\color{blue}{\sqrt{\lim_{x\rightarrow a}g(x)}}}{\lim_{x\rightarrow a} [0.5f(x)]} \color{blue} \text{ Root Law}\\&=\dfrac{\sqrt{\lim_{x\rightarrow a}g(x)}}{\color{blue}0.5\lim_{x\rightarrow a} f(x)} \color{blue} \text{ Coefficient Law}\\&=\dfrac{\sqrt{\lim_{x\rightarrow a}g(x)}}{0.5\lim_{x\rightarrow a} f(x)} \end{aligned}
Using the final expression, let’s substitute $\lim_{x\rightarrow a} f(x) = -24$ and $\lim_{x\rightarrow a} g(x) = 4$ into the rational expression.
\begin{aligned}\lim_{x\rightarrow a} \dfrac{\sqrt{g(x)}}{0.5f(x)}&=\dfrac{\sqrt{\lim_{x\rightarrow a}g(x)}}{0.5\lim_{x\rightarrow a} f(x)}\\&=\dfrac{\sqrt{4}}{0.5(-24)}\\&=\dfrac{2}{-12}\\&=-\dfrac{1}{6} \end{aligned}
c. This means that $\lim_{x\rightarrow a} \dfrac{\sqrt{g(x)}}{0.5f(x)}$ is equal to $\boldsymbol{-\dfrac{1}{6}}$.
Example 2
Use the different properties of limits to find the values of the following expressions.
a. $\lim_{x\rightarrow 2} x^2 – 2x +1$
b. $\lim_{x\rightarrow k} ax^2 +bx + c$, where $a$, $b$, and $c$ are nonzero constants
What can you observe from the results? In general, how can we evaluate the limits of a quadratic function?
Solution
Apply the addition law on the expression, $\lim_{x\rightarrow 2} x^2 – 2x +1$.
\begin{aligned}\lim_{x\rightarrow 2} x^2 – 2x +1 &= \lim_{x\rightarrow 2} x^2 – \lim_{x\rightarrow 2} 2x + \lim_{x\rightarrow 2} 1 \end{aligned}
Simplify this further by applying the following limit laws on each of the terms:
• Power law then identity law to simplify $\lim_{x\rightarrow 2} x^2$.
• Coefficient law and identity law to simplify $\lim_{x\rightarrow 2} 2x$.
• Constant law to evaluate $\lim_{x\rightarrow 2} 1$.
\begin{aligned}\lim_{x\rightarrow 2} x^2 &= \left(\lim_{x\rightarrow 2} x\right)^2\\&=(2)^2\end{aligned} \begin{aligned}\lim_{x\rightarrow 2} 2x &= 2\lim_{x\rightarrow 2} x\\&=2(2)\end{aligned} \begin{aligned}\lim_{x\rightarrow 2} 1 &= 1\end{aligned} \begin{aligned}\lim_{x\rightarrow 2} x^2 – 2x +1 &= (2)^2 -2(2)+1\\&= 4 – 4+1\\&=1\end{aligned}
a. This means that $\lim_{x\rightarrow 2} x^2 – 2x +1$ is equal to $\boldsymbol{1}$.
Since we’re working with an expression that has a similar form, we’ll also be using similar steps and the same limit laws to simplify $\lim_{x\rightarrow k} ax^2 +bx + c$.
Using addition law, we’ll have $\lim_{x\rightarrow k} ax^2 +bx + c = \lim_{x\rightarrow k} ax^2 + \lim_{x\rightarrow k} bx + \lim_{x\rightarrow k} c$.
Let’s simplify each term and keep in mind that $a$, $b$, and $c$ are nonzero constants.
• Use the coefficient, power, then identity laws to simplify $\lim_{x\rightarrow k} ax^2$.
• Coefficient law and identity law to simplify $\lim_{x\rightarrow k} bx$.
• Constant law to evaluate $\lim_{x\rightarrow k} c$.
\begin{aligned}\lim_{x\rightarrow k} ax^2 &=a\lim_{x\rightarrow k} x^2\\&=a\left(\lim_{x\rightarrow k} x\right)^2\\&=a(k)^2\end{aligned} \begin{aligned}\lim_{x\rightarrow k} bx &= b\lim_{x\rightarrow k} x\\&=b(k) \end{aligned} \begin{aligned}\lim_{x\rightarrow k} c &= c\end{aligned} \begin{aligned}\lim_{x\rightarrow k} ax^2 +bx + c &=\lim_{x\rightarrow k} ax^2 + \lim_{x\rightarrow k} bx + \lim_{x\rightarrow k} c\\&= a(k)^2 + b(k) +c \end{aligned}
b. Hence, the limit of $ax^2 + bx +c$ as $x$ approaches $k$ is $\boldsymbol{ak^2 + bk + c}$.
From the results of a and b, we have:
• $\lim_{x\rightarrow 2} x^2 – 2x +1 = (2)^2 -2(2)+1$
• $\lim_{x\rightarrow k} ax^2 + bx + c = a(k)^2 + b(k) +c$
We can see that for each case, the resulting limits were equivalent to us finding the value of the given expression at $x = 2$ and $x = k$, respectively.
Since $ax^2 + bx + c$ is the general form of quadratic expressions and $k$ can be any nonzero constant, this observation applies to all quadratic functions.
Meaning, when given a quadratic function, its limit as $\boldsymbol{x}$ approaches $\boldsymbol{k}$ can be determined by finding the value of the function at $\boldsymbol{x = k}$.
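As a quick illustration with a made-up quadratic, this tells us right away that $\lim_{x\rightarrow 3} (x^2 + 2x + 1) = 3^2 + 2(3) + 1 = 16$.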
Want a sneak peek at the next concepts you’ll learn on limits? In general, the limit of a polynomial function as it approaches $a$ is equal to the value of the function at $x = a$.
Example 3
Assume that $\lim_{x\rightarrow 2} f(x) = 4$, $\lim_{x\rightarrow 2} g(x) = -2$, and $\lim_{x\rightarrow 2} h(x) = 3$. Use the limit laws to find the value of $\lim_{x\rightarrow 2} \sqrt[3]{f(x)[g(x)]^2-\dfrac{h(x)}{x^2}}$.
Solution
By checking the expression, we can see that we’ll need several limit laws to find the expression’s value.
Start by working on the radical expression and apply the root law.
$\lim_{x\rightarrow 2} \sqrt[3]{f(x)[g(x)]^2-\dfrac{h(x)}{x^2}} = \sqrt[3]{\lim_{x\rightarrow 2}\left \{ f(x)[g(x)]^2-\dfrac{h(x)}{x^2} \right \}}$
Apply the subtraction law to separate the two terms inside the cube root.
$\sqrt[3]{\lim_{x\rightarrow 2}\left \{ f(x)[g(x)]^2-\dfrac{h(x)}{x^2} \right \}}= \sqrt[3]{\lim_{x\rightarrow 2}\left \{ f(x)[g(x)]^2 \right \}-\lim_{x\rightarrow 2}\dfrac{h(x)}{x^2}}$
Let’s focus on the limit of the first term inside the cube root, $\lim_{x\rightarrow 2}\left \{ f(x)[g(x)]^2 \right \}$, and find its numerical value by applying the following limit laws:
• Product law to expand the terms further.
• Power law to simplify the second factor.
• Substitute $\lim_{x\rightarrow 2} f(x) = 4$ and $\lim_{x\rightarrow 2} g(x) = -2$ back into the expression.
\begin{aligned}\lim_{x\rightarrow 2}\left \{ f(x)[g(x)]^2 \right \}&={\color{blue}\lim_{x\rightarrow 2} f(x)} \cdot {\color{blue}\lim_{x\rightarrow 2}[g(x)]^2} \color{blue}\text{ Product Law}\\&=\lim_{x\rightarrow 2} f(x) \cdot {\color{blue}\left [\lim_{x\rightarrow 2}g(x) \right ]^2} \color{blue} \text{ Power Law}\\&=4 \cdot (-2)^2\\&=16 \end{aligned}
Now, let’s find the numerical value of $\lim_{x\rightarrow 2}\dfrac{h(x)}{x^2}$ by applying the following limit laws.
• Quotient law to apply the limit laws on both the numerator and denominator.
• Use the power law to rewrite the denominator.
• Find the value of $\lim_{x\rightarrow 2} x$ using the identity law.
• Substitute $\lim_{x\rightarrow 2} h(x) = 3$ into the expression.
\begin{aligned}\lim_{x\rightarrow 2}\dfrac{h(x)}{x^2}&=\dfrac{\color{blue}{\lim_{x\rightarrow 2}h(x)}}{\lim_{x\rightarrow 2} x^2} \color{blue} \text{ Quotient Law}\\&=\dfrac{\lim_{x\rightarrow 2}h(x)}{\color{blue}\left (\lim_{x\rightarrow 2} x \right )^2} \color{blue}\text{ Power Law}\\&=\dfrac{\lim_{x\rightarrow 2}h(x)}{\color{blue}\left (2 \right )^2}\color{blue} \text{ Identity Law} \\&=\dfrac{3}{(2)^2}\\&=\dfrac{3}{4} \end{aligned}
Let’s substitute the values, $\lim_{x\rightarrow 2}\left \{ f(x)[g(x)]^2 \right \} = 16$ and $\lim_{x\rightarrow 2}\dfrac{h(x)}{x^2} = \dfrac{3}{4}$, into our original expression.
\begin{aligned}\sqrt[3]{\lim_{x\rightarrow 2}\left \{ f(x)[g(x)]^2 \right \}-\lim_{x\rightarrow 2}\dfrac{h(x)}{x^2}}&= \sqrt[3]{16 – \dfrac{3}{4}}\\&=\sqrt[3]{\dfrac{64 -3}{4}}\\&=\sqrt[3]{\dfrac{61}{4}} \end{aligned}
Hence, we have $\lim_{x\rightarrow 2} \sqrt[3]{f(x)[g(x)]^2-\dfrac{h(x)}{x^2}}$ is equal to $\boldsymbol{\sqrt[3]{\dfrac{61}{4}}}$.
Example 4
Use the limit laws to find the value of $\lim_{h\rightarrow 0 } f(h)$ given that $f(h) = \dfrac{\sqrt{5 – h} + 8}{h – 1}$.
Solution
Since $f(h)$ contains a rational expression, we can apply the quotient law to apply the limit laws on both the numerator and denominator.

\begin{aligned}\lim_{h \rightarrow 0 } f(h) &= \lim_{ h \rightarrow 0 } \dfrac{\sqrt{5 - h} + 8}{h - 1}\\ &= \dfrac{\lim_{ h \rightarrow 0 }(\sqrt{5 - h} + 8)}{\lim_{ h \rightarrow 0 } (h - 1)}\end{aligned}
Simplify the denominator first by using the following limit laws:
• Separate the two terms using the difference law.
• Apply the identity and constant laws to simplify the expression in the denominator further.
\begin{aligned}\lim_{h\rightarrow 0 } (h – 1) &= \lim_{h\rightarrow 0 } h – \lim_{h\rightarrow 0 }1\\&= 0 -\lim_{h\rightarrow 0 } 1\\&= 0 – 1\\&= -1\end{aligned}
Now that we have a numerical value for the denominator let’s go ahead and simplify the expression.
\begin{aligned}\dfrac{\lim_{ h \rightarrow 0 }(\sqrt{5 – h} + 8)}{\lim_{ h \rightarrow 0 } h – 1}&= \dfrac{\lim_{ h \rightarrow 0 }(\sqrt{5 – h} + 8)}{-1}\\&= -\lim_{ h \rightarrow 0 }(\sqrt{5 – h} + 8)\end{aligned}
Simplify this expression by applying the following properties:
• Apply the addition law to expand the terms and apply limits on each term.
• Use the root, subtraction, and constant laws, then finally, the identity law to simplify $\lim_{ h \rightarrow 0} \sqrt{5 - h}$.
• Use the constant law in the second term to evaluate $\lim_{ h \rightarrow 0} 8$.
\begin{aligned}-\left[\sqrt{\lim_{ h \rightarrow 0 }(5 – h)} + \lim_{ h \rightarrow 0 }8\right ]&= -\left(\sqrt{\lim_{ h \rightarrow 0} 5 – \lim_{ h \rightarrow 0}h} + \lim_{ h \rightarrow 0 }8\right )\\&= -\left(\sqrt{5 – \lim_{ h \rightarrow 0}h} + \lim_{ h \rightarrow 0 }8\right )\\&= -\left(\sqrt{5 – 0} + \lim_{ h \rightarrow 0 }8\right )\\&= -(\sqrt{5} + 8 )\\&= -\sqrt{5} – 8 \end{aligned}
This means that $\lim_{h\rightarrow 0 }\dfrac{\sqrt{5 – h} + 8}{h – 1}$ is equal to $\boldsymbol{-\sqrt{5} – 8 }$.
Having fun with the process of finding the limits using the different properties? Guess what? You’ll actually learn more properties and techniques in evaluating limits in this article!
For now, we’ve provided more problems for you to try on your own to master these limit laws.
### Practice Questions
1. Which of the following shows the evaluated form of $\lim_{x\rightarrow -1} x^2 – 3x +2$?
2. Which of the following shows the evaluated form of $\lim_{x\rightarrow 4} 2x^2 + 12x + 18$?
3. Which of the following shows the evaluated form of $\lim_{x\rightarrow h} ax^2 – bx – c$, where $a$, $b$, and $c$ are nonzero constants?
4. Given that $\lim_{x\rightarrow a} f(x) = -144$ and $\lim_{x\rightarrow a} g(x) = 36$, what is the value of the expression, $\lim_{x\rightarrow a} [f(x) – g(x)]$?
5. Given that $\lim_{x\rightarrow a} f(x) = -144$ and $\lim_{x\rightarrow a} g(x) = 36$, what is the value of the expression, $\lim_{x\rightarrow a} [-2 g(x)]$?
6. Given that $\lim_{x\rightarrow a} f(x) = -144$ and $\lim_{x\rightarrow a} g(x) = 36$, what is the value of the expression, $\lim_{x\rightarrow a} \dfrac{12\sqrt{g(x)}}{0.5f(x)}$?
7. Assume that $\lim_{x\rightarrow 4} f(x) = 12$, $\lim_{x\rightarrow 4} g(x) = 4$, and $\lim_{x\rightarrow 4} h(x) = -2$. What is the value of $\lim_{x\rightarrow 4} \sqrt[4]{f(x)[g(x)]^3-\dfrac{h(x)}{x^2}}$?
8. Using the limit laws, what is the value of $\lim_{h\rightarrow 0 } f(h)$ given that $f(h) = \dfrac{\sqrt{15 – h} – 1}{h + 1}$?
Images/mathematical drawings are created with GeoGebra.
http://www.r-bloggers.com/normal-distribution-functions/ | # Normal distribution functions
February 25, 2013
By
(This article was first published on R for Public Health, and kindly contributed to R-bloggers)
Ah, the Central Limit Theorem. The basis of much of statistical inference and how we get those 95% confidence intervals. It's just so beautiful! Lately, I have found myself looking up the normal distribution functions in R. They can be difficult to keep straight, so this post will give a succinct overview and show you how they can be useful in your data analysis.
To start, here is a table with all four normal distribution functions and their purpose, syntax, and an example:
| Purpose | Syntax | Example |
| --- | --- | --- |
| Generates random numbers from the normal distribution | rnorm(n, mean, sd) | rnorm(1000, 3, .25): generates 1000 numbers from a normal with mean 3 and sd=.25 |
| Probability Density Function (PDF) | dnorm(x, mean, sd) | dnorm(0, 0, .5): gives the density (height of the PDF) of the normal with mean=0 and sd=.5 |
| Cumulative Distribution Function (CDF) | pnorm(q, mean, sd) | pnorm(1.96, 0, 1): gives the area under the standard normal curve to the left of 1.96, i.e. ~0.975 |
| Quantile Function (inverse of pnorm) | qnorm(p, mean, sd) | qnorm(0.975, 0, 1): gives the value at which the CDF of the standard normal is .975, i.e. ~1.96 |
Note that for all functions, leaving out the mean and standard deviation would result in default values of mean=0 and sd=1, a standard normal distribution.
Another important note for the pnorm() function is the ability to get the right-hand probability using the lower.tail=FALSE option. For example,
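A minimal sketch of the two calls being described (the cutoff 1.96 matches the table above):

pnorm(1.96, mean=0, sd=1)                      # area to the left of 1.96, ~0.975
pnorm(1.96, mean=0, sd=1, lower.tail=FALSE)    # area to the right of 1.96, ~0.025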
In the first line, we are calculating the area to the left of 1.96, while in the second line we are calculating the area to the right of 1.96.
With these functions, I can do some fun plotting. I create a sequence of values from -4 to 4, and then calculate both the standard normal PDF and the CDF of each of those values. I also generate 1000 random draws from the standard normal distribution. I then plot these next to each other. Whenever you use probability functions, you should, as a habit, remember to set the seed. Setting the seed means locking in the sequence of "random" (they are pseudorandom) numbers that R gives you, so you can reproduce your work later on.
set.seed(3000)
xseq <- seq(-4, 4, .01)
densities <- dnorm(xseq, 0, 1)
cumulative <- pnorm(xseq, 0, 1)
randomdeviates <- rnorm(1000, 0, 1)
par(mfrow=c(1,3), mar=c(3,4,4,2))
plot(xseq, densities, col="darkgreen",xlab="", ylab="Density", type="l",lwd=2, cex=2, main="PDF of Standard Normal", cex.axis=.8)
plot(xseq, cumulative, col="darkorange", xlab="", ylab="Cumulative Probability",type="l",lwd=2, cex=2, main="CDF of Standard Normal", cex.axis=.8)
hist(randomdeviates, main="Random draws from Std Normal", cex.axis=.8, xlim=c(-4,4))
The par() parameters set up a plotting area of 1 row and 3 columns (mfrow), and move the three plots closer to each other (mar). Here is a good explanation of the plotting area. The output is below:
Now, when we have our actual data, we can do a visual check of the normality of our outcome variable, which, if we assume a linear relationship with normally distributed errors, should also be normal. Let's make up some data, where I add noise by using rnorm() - here I'm generating the same amount of random numbers as is the length of the xseq vector, with a mean of 0 and a standard deviation of 5.5.
xseq<-seq(-4,4,.01)
y<-2*xseq + rnorm(length(xseq),0,5.5)
And now I can plot a histogram of y (check out my post on histograms if you want more detail) and add a curve() function to the plot using the mean and standard deviation of y as the parameters:
hist(y, prob=TRUE, ylim=c(0,.06), breaks=20)
curve(dnorm(x, mean(y), sd(y)), add=TRUE, col="darkblue", lwd=2)
Here, the curve() function takes as its first parameter a function itself (or an expression) that must be written as some function of x. Our function here is dnorm(). The x in the dnorm() function is not an object we have created; rather, it's indicating that there's a variable that is being evaluated, and the evaluation is the normal density at the mean of y and standard deviation of y. Make sure to include add=TRUE so that the curve is plotted on the same plot as the histogram. Here is what we get:
Here are some other good sources on the topic of probability distribution functions:
https://arm-software.github.io/ComputeLibrary/latest/_cl_gemm_reshape_rhs_matrix_kernel_8h.xhtml | 22.11
ClGemmReshapeRhsMatrixKernel.h File Reference
#include "src/core/common/Macros.h"
#include "src/gpu/cl/ClCompileContext.h"
#include "src/gpu/cl/IClKernel.h"
## Data Structures
class ClGemmReshapeRhsMatrixKernel
OpenCL kernel to reshape the RHS matrix when performing matrix multiplication. In particular, this kernel splits the src matrix into blocks of size K0xN0 and stores each one in the dst matrix, unrolling the values.
arm_compute | 2022-11-30 20:49:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20795689523220062, "perplexity": 5940.741894815793}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710771.39/warc/CC-MAIN-20221130192708-20221130222708-00405.warc.gz"} |
https://www.nature.com/articles/s41598-017-16200-z?error=cookies_not_supported&code=d116fdd1-c102-4989-a8bb-aa752a30cebb | Article | Open | Published:
# Coulomb-like elastic interaction induced by symmetry breaking in nematic liquid crystal colloids
## Abstract
It is generally thought that colloidal particles in a nematic liquid crystal do not generate the first multipole term called deformation elastic charge as it violates the mechanical equilibrium. Here, we demonstrate theoretically and experimentally that this is not the case, and deformation elastic charges, as well as dipoles and quadrupoles, can be induced through anisotropic boundary conditions. We report the first direct observation of Coulomb-like elastic interactions between colloidal particles in a nematic liquid crystal. The behaviour of two spherical colloidal particles with asymmetric anchoring conditions induced by asymmetric alignment is investigated experimentally; the interaction of two particles located at the boundary of twist and parallel aligned regions is observed. We demonstrate that such particles produce deformation elastic charges and interact by Coulomb-like interactions.
## Introduction
Liquid crystals (LCs) are anisotropic soft materials with continuous ground-state symmetry, susceptible to breaking under the influence of external factors. It is also possible to break the symmetry of LCs by introducing particles of specific materials into the host LC. The immersed particles break the symmetry of the LC alignments; such distortions can influence the LC alignment up to a distance of several times the particle size. Particles may be accompanied by topological defects such as boojum, Saturn-ring, and hyperbolic-hedgehog defects, depending on the surface conditions1,2,3. The combined structures of a particle and defect are usually in dipole-like or quadrupole-like configurations.
Furthermore, LC distortions induced by a combined particle–defect structure result in a new class of long-range interactions that do not occur in regular colloids. The behaviour of these interactions is similar to that of the electric dipole–dipole or quadrupole–quadrupole interactions in electrostatics4,5. In addition, these long-range anisotropic interactions result in the formation of structures such as linear and inclined chains6,7,8,9. A particle–defect structure in a dipole-like configuration also interacts with director deformations such as a boundary between twist and uniform alignment regions10,11. The magnitude and direction of the associated force are related to the orientation of the dipole-like configuration and the divergence of the director deformation. Both the magnitude and direction of the force vary when the dipole-like configuration moves in the deformed director field. Particles at the nematic liquid crystal (NLC)–air interface12,13,14 and in quasi-two-dimensional systems of thin NLC cells form 2D crystals15,16,17,18, while 3D crystal structures can also be constructed in the bulk19. Moreover, variations in the shape and structure of colloidal particles bring diverse phenomena20,21 and new methods of handling defects may introduce new possibilities for creating functional devices22,23,24.
Both the particle shape and the surface anchoring provoke symmetry breaking by a particle immersed in an LC25,26. When the anchoring is weak, the particle shape is the main factor causing the symmetry breaking. In contrast, both factors are significant in the case of strong anchoring, because there is an accompanying topological defect that influences the director distribution near the particle.
In the articles23,24, the authors experimentally found, for the first time, a monopole Coulomb-like interaction between separate point topological defects (radial and hyperbolic hedgehogs) in the vicinity of a fiber in an NLC. In those papers, no colloidal particles were engaged in the interaction. In this report, a monopole Coulomb-like interaction between separate colloidal particles is demonstrated in a specific geometry. In both cases, spatial director deformation mediates the interaction. In his book, de Gennes27 has shown that the deformation charge appears only in the presence of an external torque, and this torque can be viewed as the deformation charge28,29,30. It has a continuous value that depends on the anchoring energy and the shape of the particles. As shown in the literature24,25,30,31, the appearance of the deformation charge is connected to broken symmetry in the distribution of the director field around the particles.
The coat concept has been introduced to provide general description of many systems without considering their finer details28,29. A coat covering an isolated system consists of a particle and accompanying topological defect, and exhibits the same symmetry properties as the resulting director field around the particle and defect. The director distribution outside the coat does not contain any topological defects and contains only smooth variations. The shell of the coat is not an intrinsic characteristic of the particle, but depends on the field of the director that surrounds it. The fundamental interactions between the particles are determined by the symmetry of the director field on the coat. Overall, particle–particle interactions consist of all contributions of the overlapping director-field deformations that are caused by the surface anchoring of particles and substrate boundaries29,30.
In this study, we observed the motion of pairs of dipole-configured interacting particles at the boundary of two different alignment regions. We investigated their interacting behaviour by measuring the changing rate of the particle separation. Coulomb-like interactions appear to dominate when the particles are separated by more than several times the particle radius. Conversely, dipole-dipole-like interactions appear to play a major role when the particles are closer than several times the particle radius.
## Results
We used LC cells to observe the particle interactions experimentally. A substrate of a cell was patterned to create two regions of parallel and twist alignment, as shown in Fig. 1(a). The cell was injected with a mixture of NLC and micro-particles. Figure 1(b) shows two dipolar-configured particles in a director field uniformly aligned along the vertical direction. Figure 1(c),(d) show two dipolar-configured particles at the boundary of the deformed director field of (a). The director field configuration breaks the mirror symmetry along the line connecting the two particles in Fig. 1(b). However, there is a rotational symmetry with respect to the axis connecting the two particles, as well as mirror symmetry with respect to the planes passing through the line connecting the two particles. In contrast, in Fig. 1(c),(d), all the rotational and mirror symmetries, including the broken mirror symmetry along the line connecting the particles, are also broken in that director field configuration. We used a polarizing optical microscope to observe the change in distance between the two particles (Fig. 1(c),(d)). The texture of the boundary between the two neighbouring regions does not show a defect line, but is connected smoothly, with the director orientation changing at the patterned surface.
The direction of the dipole configuration is defined as the direction from the point defect to the particle centre. The orientations of the two dipole configurations are nearly parallel but not linear, as illustrated in Fig. 1(c),(d); this is different from the configuration in Fig. 1(b). Analysing the change in the particle separation helps understanding the interaction that drives the motion of the particles. Unless otherwise stated, all measurements were performed at room temperature.
Two particles in similar dipole orientations interact through the elastic deformation of the director field, so that the particle separation decreases with time, as in Fig. 2(a). Initially, the approach speed is small, and it increases monotonically as the particles get closer, as in Fig. 2(b). At very small particle separations, their motion stops. The orientations of the dipoles also change slightly during the approach.
Figure 3 shows the results of measuring the distance and approach speed between the two particles. The force between two Coulomb-like interacting particles is expressed as $$a/{r}^{2}=bv$$ 32, where r is the particle distance and v is the speed of the particle. The right-hand side corresponds to Stokes drag force, with b expressed as 6πγR, where γ is the viscosity and R is the particle radius. For Coulomb-like interactions, Log(v) is then linearly proportional to Log(r) with a slope of –2. Similarly, for the dipole-dipole-like interacting particles, Log(v) is linearly proportional to Log(r) with a slope of –4. The top green square symbol and line indicate the data obtained for the uniform planar alignment, which exhibit a genuine dipole-dipole-like interaction with a slope of –4 over the entire range. The other data set was obtained at the boundary of the two differently aligned regions, as in Fig. 2(a), and exhibits two distinct regions with slopes of –2 and –4. These pairs of particles interact mainly by Coulomb-like interactions at large separations and by dipole-dipole-like interactions at small separations. The crossover between Coulomb-like and dipole-dipole-like interactions occurs at a similar distance in all of these graphs. The crossover distance represents the point at which the Coulomb-like and dipole-dipole-like interactions are of equal strength. We use the crossover point to determine the value of the deformation charge. To confirm our approach, we also fit the presented data with linear functions. The genuine dipole-dipole-like interaction shows the slope of –3.89 ± 0.18 for various fitting ranges. In the range corresponding to dipole-dipole-like interaction at the boundary of the two aligned regions the slope is –4.11 ± 0.25, while in the range corresponding to Coulomb-like interaction the slope is –2.23 ± 0.14. The measured slope for Coulomb-like interaction seems to deviate from the expected value of –2. However, this is likely due to the dependence of the fitted slope on the fitting range and the large fluctuations in the measured data for the large distances where the interaction is weak.
## Discussion
We explain the origin of the deformation charge appearing under the experimental conditions as follows3,28,29,33. Let us consider a director-field deviation δn from its ground state n0. The texture in the absence of immersed particles corresponds to n0. The director field n(r) is then described by n(r) = n0 + δn, and satisfies |δn(r)| ≪ 1 and n0·δn(r) = 0. δn satisfies the Euler–Lagrange equation Δδn = 0 for infinitesimal δn at positions far away from the particle. A particle immersed in an LC produces a director-field deformation that breaks the ground-state symmetry30. The deviation from the ground state is difficult to determine near the particle because of the non-linear elastic response, although it can be determined at distances far from the particle using the aforementioned coat approach. At distances far from the particle, the deformation can be determined by taking into account the symmetry breaking of the director field at short distances28,29,31. Using the Euler–Lagrange equation, it is possible to expand δn(r) in a multipole expansion3,28,29.
$$\delta {n}_{\mu }=\frac{{q}_{\mu }}{r}+\frac{{p}_{\mu }^{\alpha }{r}_{\alpha }}{{r}^{3}}+\frac{{Q}_{\mu }^{\alpha \beta }{r}_{\alpha }{r}_{\beta }}{{r}^{5}}+\ldots ,$$
(1)
where µ denotes the components in the direction perpendicular to the ground state; indices µ, α, and β run through all coordinate directions, and summation over repeated indices is assumed; $${q}_{\mu }$$, $${p}_{\mu }^{\alpha }$$, and $${Q}_{\mu }^{\alpha \beta }$$ are the elastic monopole (charges), dipole, and quadrupole moments, respectively. The multipole expansion in Eq. (1) indicates that the director-field deviation induces a long-range effect. The validity range of each term may depend on several factors such as particle size, anchoring, and alignment. The deformations induced by two remote particles may overlap with each other. Deformation overlapping means that each particle feels the presence of the other particle, or, in other words, two particles with overlapping deformations interact by an elastic long-range interaction. In the case of strong anchoring, the non-linear solution for δn (r) displays an asymptotic behaviour near the particle. However, when the anchoring is weak, δn (r) is small, and the expansion in Eq. (1) is valid over the entire space29.
The torque in LC is related to the first term in Eq. (1), and applying an external torque Γ ext on the LC colloid is thought to be the only method to produce a deformation that is inversely proportional to r 27. In this work, we demonstrate that elastic monopoles can be induced by the influence of boundary conditions on the surface of the substrates and the particles. It is well known that NLCs transmit torques. A torque Γ acting on an NLC can be described by $${\boldsymbol{\Gamma }}=[{\boldsymbol{n}}\times \delta F/\delta n]$$, where F is the free energy27. The deformation free energy $$({F}_{def})$$ is related to the director deformation and can be described using one-constant approximation as:27
$${F}_{def}=\frac{K}{2}\int dV[{({\boldsymbol{\nabla }}\cdot {\boldsymbol{n}})}^{2}+{({\boldsymbol{\nabla }}\times {\boldsymbol{n}})}^{2}],$$
(2)
where K is the elastic constant and V is the total volume.
The relationship between the torque from the deformation (Γ def ) and the monopoles can be written as $${{\boldsymbol{\Gamma }}}_{def}={\boldsymbol{n}}\times \delta {F}_{def}/\delta n=4\pi K{\boldsymbol{q}}$$ 27. The deformation decreasing in proportion to 1/r is related to the torque. Γ def is the torque inducing the elastic monopole q in an NLC. The particle will feel a deformation torque (−Γ def ) with the elastic monopoles and particle rotation. Thus, at equilibrium, Γ ext = Γ def is satisfied.
The external torque is estimated by taking into account the boundary conditions on the particle surface. The anchoring energy is related to the director orientation on the particle surface. Anchoring energy $$({F}_{surface})$$ may be expressed in Rapini–Papoular form:
$${F}_{surface}=\oint dSW(s){[{{\boldsymbol{n}}}_{{\boldsymbol{e}}}(s)\cdot {\boldsymbol{n}}(s)]}^{2},$$
(3)
where W(s) is the anchoring strength and $${{\boldsymbol{n}}}_{{\boldsymbol{e}}}(s)$$ is the orientation of the easy axis on the particle surface S. The surface energy produces the torque Γ surface .
$${{\boldsymbol{\Gamma }}}_{surface}=[{\boldsymbol{n}}\times \frac{{\boldsymbol{\delta }}{{\boldsymbol{F}}}_{{\boldsymbol{surface}}}}{{\boldsymbol{\delta }}n}]\approx 2\oint dSW(s)({{\boldsymbol{n}}}_{{\boldsymbol{e}}}\cdot {{\boldsymbol{n}}}_{{\boldsymbol{o}}})[{{\boldsymbol{n}}}_{{\boldsymbol{o}}}\times {{\boldsymbol{n}}}_{{\boldsymbol{e}}}]$$
(4)
A particle with a broken symmetry with respect to the plane perpendicular to n o and a broken symmetry with respect to at least one vertical symmetry plane yields non-zero integrals in Eq. (4). The torque is obtained from the condition Γ surface + Γ def = 0. This situation is realised in our experiment. There, we have particles with different boundary conditions at the top and bottom surfaces. The director-field distribution on the left- and right-hand sides of the particle is also different in the vertical plane. There is a twist distribution in one half-region and a planar distribution in the other half-region. We can directly calculate the value of the deformation charge and compare it to that obtained from the experiment. In the deformed area, the particle surface experiences a torque that is produced by the distortion, the boundary conditions, and the dipole configuration.
The variable $$\theta$$ is introduced as the angle between the separation vector of the particles and the dipole, or the long axis of the deformation coat. The torque Γ is proportional to $$sin\theta cos\theta$$ 27, and the interaction energy (U int ) can be expressed in a general form for Coulomb-like (U mm ) and dipole-dipole-like (U dd ) interactions3.
$${U}_{int}={U}_{mm}+{U}_{dd}=4\pi K\{\frac{-qq^{\prime} }{r}si{n}^{2}\theta co{s}^{2}\theta +\frac{pp^{\prime} }{{r}^{3}}(1-3co{s}^{2}\theta )\},$$
(5)
where q and q′ are monopole charges and p and p′ are dipole moments. Extremum values of this potential energy at a given r and variable $$\theta$$ satisfy the following condition:
$$co{s}^{2}\theta =\frac{1}{2}+\frac{3{{\rm{p}}}^{2}}{2{{\rm{q}}}^{2}{r}^{2}}\,\,{\rm{for}}\,q=q^{\prime} \,{\rm{and}}\,p=p^{\prime}$$
(6)
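For completeness, differentiating Eq. (5) with respect to θ at fixed r (with q = q′ and p = p′) gives, up to the positive factor 4πK,
$$\frac{\partial {U}_{int}}{\partial \theta }\propto \sin 2\theta \left(\frac{3{p}^{2}}{{r}^{3}}-\frac{{q}^{2}}{r}\,\cos 2\theta \right),$$
so the non-trivial extremum satisfies $$\cos 2\theta =3{p}^{2}/({q}^{2}{r}^{2})$$, which reduces to Eq. (6) after using $$\cos 2\theta =2{\cos }^{2}\theta -1$$.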
If the particle separation changes, the dipole moment orientation changes as well, while there is no variation when the distance remains constant. If the determined relation between the angle and distance is substituted into Eq. (5), the real interaction energy at all distances, which represents the monopole, or the deformation charge, can be obtained as $${U}_{int}={U}_{mm}+{U}_{dd}=-4\,\pi \,K({q}^{2}/4r+{p}^{2}/2{r}^{3})$$. The crossover distance between the dipole-dipole-like interaction $${F}_{d}=-\partial {U}_{dd}/\partial r$$ and the Coulomb-like interaction $${F}_{m}=-\partial {U}_{mm}/\partial r$$ is determined from the equality of the two forces. But we understand that this is an approximation, and a more precise approach should take into account the rotation of the radius vector between the particles.
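With this approximate interaction energy, the magnitudes of the two force contributions are $$|{F}_{m}|=\pi K{q}^{2}/{r}^{2}$$ and $$|{F}_{d}|=6\pi K{p}^{2}/{r}^{4}$$, and equating them at the crossover separation gives $${r}_{c}^{2}=6{p}^{2}/{q}^{2}$$.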
We obtain the crossover distance as $${r}_{c}=\sqrt{6}(p/q)=\sqrt{6}(\alpha /\rho )R$$, where p = αR^2 and q = ρR. R is the particle radius. α is the coefficient of the elastic dipole moment, with the value of 2.04 obtained using different ansatzes for the distribution of the director field around an isolated particle3. ρ is the coefficient of the elastic monopole charge. We obtain r_c = 66.5 ± 2.6 µm from the data shown in Fig. 3. q becomes 11.4 µm with the radius R = 12.3 µm and the above r_c value. We then obtain ρ/α = 0.45 and estimate the coefficient ρ = 0.92. We can roughly estimate the value of q from the torque balance condition. The deformation torque is $$4\pi Kq$$ in the above, and the surface torque may be considered to be $$2W\,4\pi {R}^{2}$$ in Eq. (4). q is then expressed as 2WR^2/8K under the assumption that one eighth of the surface is not compensated in the director-field distribution and is effective for the torque owing to the anisotropy in director orientation. The calculated q value agrees with the experimentally obtained q above for K = 7 × 10−12 N and W = 2 × 10−6 J/m2 34,35. This estimate indicates that even a small portion of surface anisotropy can account for the experimental value of the monopole charge.
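As a rough arithmetic check of this estimate with the quoted values (K = 7 × 10−12 N, W = 2 × 10−6 J/m2, R = 12.3 µm), $$q\approx 2W{R}^{2}/8K\approx 1.1\times {10}^{-5}\,{\rm{m}}\approx 11\,\mu {\rm{m}}$$, close to the value of 11.4 µm extracted from the measured crossover distance.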
Furthermore, we numerically calculate the monopole charge. The director orientation at the particle surface was calculated by dividing the surface into several dozen points, and $\Gamma_{surface}$ was obtained from Eq. (4) by adding up all the contributions. The value of $q$ can then be obtained from the torque balance condition. The calculation was simplified by several assumptions: the particle is assumed to be located exactly at the boundary, at a certain height; the director orientation on the particle surface is determined by linear interpolation between the nearest substrate, with infinite anchoring, and the particle surface, with finite anchoring; and the effect of the point defect is neglected owing to its small size. The above values of the particle surface anchoring, particle size, and LC elastic constant were used. The calculated monopole charge is on the order of 1 µm. All values calculated using the simplifications and approximations described above are in qualitative agreement.
The orientation between the topological dipole and the separation vector in Fig. 2(a) rotates continuously as the two particles approach each other. For large distances, the rotation of the topological dipole is significant. This rotating trend can be explained by Eq. (6): the change in θ is caused by the competition between the Coulomb-like and elastic dipole-dipole-like interactions as the particle separation changes in a uniform texture. At large particle separations, θ = 45° must be satisfied, while θ = 0° must be satisfied at particle separations less than the critical distance. Figure 4 presents the experimental and calculated change in θ as a function of the particle separation. If we use the values of the parameters in Eq. (6) as they are, the calculated and experimental curves deviate from each other. The experimental data indicate that θ = 29° at large particle separations. At particle separations >40 μm, the change in θ is described by ρ/α = 0.9. The fact that the experimental data at large particle separations are fitted by θ = 29°, and not θ = 45°, is due to the difference in the alignment conditions surrounding the particles: θ = 45° corresponds to the ideal orientation of two particles interacting through Coulomb-like and dipole-dipole-like interactions in a uniform alignment, whereas in the experiment the particles are located in the middle of different alignment patterns, so the orientation of the interacting particles is expected to deviate from the ideal situation. These issues appear to result in θ = 29° at large particle separations. The decrease in θ at small particle separations is due to the elastic dipole-dipole-like interaction becoming strong compared to the Coulomb-like interaction. The discrepancies between the experimental and calculated results at small particle separations originate from the limitations of the theoretical approach. Nevertheless, these results appear sufficient to indicate the existence of elastic Coulomb-like interactions.
In conclusion, we found a concrete example in which the surface of a particle induces a non-zero deformation charge in NLCs. In the experiment, we utilized varying alignment conditions on a substrate, which caused varying surface director orientation with broken symmetry on different parts of the particle. We expect that symmetric particles with varying director-field distributions in the bulk will produce monopoles as a result of the asymmetrical boundary conditions on the substrate surface and the particle. These boundary conditions on the surfaces of the particle and the cell fully satisfy the necessary conditions for the existence of the deformation charge, within the electrostatic analogy described in the literature21,24. These results constitute the first demonstration of long-range Coulomb-like interactions in an elastic medium between separate colloidal particles.
## Methods
### Liquid crystal cell
The LC cell consisted of two substrates that were prepared differently: one had an orientation alignment pattern and the other had a uniform alignment. The LC on the alignment pattern aligns along two mutually perpendicular orientations in the substrate plane. A photo-alignment material, composed of a polyamic acid including azo-units in the main chain, was used as the alignment layer36. The azo-units respond to light by cis-trans isomerization. The layer aligns the director perpendicular to the linear polarization of the incident light in the substrate plane. The alignment material was irradiated with a 405 nm diode laser. The irradiated position was controlled by moving the substrate with translation stages. The polarization of the incident light was controlled by a linear polarizer and a twisted nematic cell; the polarization was selected by the electric field applied to the twisted nematic cell so as to match the intended orientation at each irradiated position. The other substrate was coated with a planar alignment material (AL-3046) and rubbed uniformly. The orientation of the two substrates was controlled to produce two regions, one with parallel and one with twist alignment. The ratio of the width (y-axis) of the twist region to that of the parallel region on the top substrate was 120:80 μm or 100:80 μm. The cell gap was 70 μm.
### Liquid crystal and micro-particles
The LC was 4-cyano-4′-pentylbiphenyl (5CB, from Merck), which exists in the nematic phase at room temperature. The density of 5CB is 1.01 g/cm3. The LC was mixed with a small amount of micro-particles. The micro-particles were made from polyethylene (GRYPMS, from Cospheric) and had a 10 ± 3 μm radius and a 1.0 g/cm3 density.
The micro-particles induced homeotropic anchoring and were accompanied by hedgehog or Saturn-ring defects in the nematic phase35. We focused on particles in the dipole configuration, accompanied by hedgehog defects. The particles did not stick to either substrate, owing to both the small difference in density between the LC and the particles and the repulsive force between the particles and the substrates11. The length of the pattern along the x-axis was sufficiently large that the alignment could be assumed to be uniform.
### Image analysis
The difference in the motion of different particle pairs appears to stem from slight variations in particle size and other factors, such as anchoring and surface conditions. The raw data obtained from these experiments are evenly distributed over time. The particle separation changes very little during the initial stages of the experiments, which introduces a large error into the particle-speed calculations. To overcome this issue, we slightly smoothed the raw data, without obvious displacement of the data points, and interpolated them onto evenly spaced particle-separation values. The particle speed (y-axis data) was obtained by differentiating the particle separation (x-axis data) with respect to time in Fig. 3. To compare the data with lines of slope −2 and −4, we drew those lines and adjusted their intercepts on the log(v) axis.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Poulin, P. & Weitz, D. A. Inverted and multiple nematic emulsions. Phys. Rev. E 57, 626–637 (1998).
2. Stark, H. Director field configurations around a spherical particle in a nematic liquid crystal. Eur. Phys. J. B 10, 311–321 (1999).
3. Lubensky, T. C., Pettey, D., Currier, N. & Stark, H. Topological defects and interactions in nematic emulsions. Phys. Rev. E 57, 610–625 (1998).
4. Yada, M., Yamamoto, J. & Yokoyama, H. Direct observation of anisotropic interparticle forces in nematic colloids with optical tweezers. Phys. Rev. Lett. 92, 185501 (2004).
5. Izaki, K. & Kimura, Y. Interparticle force between different types of nematic colloids. Phys. Rev. E 87, 062507 (2013).
6. Poulin, P., Stark, H., Lubensky, T. C. & Weitz, D. A. Novel colloidal interactions in anisotropic fluids. Science 275, 1770–1774 (1997).
7. Smalyukh, I. I., Lavrentovich, O. D., Kuzmin, A. N., Kachynski, A. V. & Prasad, P. N. Elasticity-mediated self-organization and colloidal interactions of solid spheres with tangential anchoring in a nematic liquid crystal. Phys. Rev. Lett. 95, 157801 (2005).
8. Smalyukh, I. I., Kuzmin, A. N., Kachynski, A. V., Prasad, P. N. & Lavrentovich, O. D. Optical trapping of colloidal particles and measurement of the defect line tension and colloidal forces in a thermotropic nematic liquid crystal. Appl. Phys. Lett. 86, 021913 (2005).
9. Kotar, J. et al. Interparticle potential and drag coefficient in nematic colloids. Phys. Rev. Lett. 96, 207801 (2006).
10. Martinez, A., Mireles, H. C. & Smalyukh, I. I. Large-area optoelastic manipulation of colloidal particles in liquid crystals using photoresponsive molecular surface monolayers. Proc Natl Acad Sci USA 108, 20891–20896 (2011).
11. Lee, B.-K., Kim, S.-J., Lev, B. & Kim, J.-H. Motion of a colloidal particle in a nonuniform director field of a nematic liquid crystal. Phys. Rev. E 95, 012709 (2017).
12. Nazarenko, V., Nych, A. & Lev, B. Crystal structure in nematic emulsion. Phys. Rev. Lett. 87, 075504 (2001).
13. Smalyukh, I. I. et al. Ordered droplet structures at the liquid crystal surface and elastic-capillary colloidal interactions. Phys. Rev. Lett. 93, 117801 (2004).
14. Paek, S.-I., Kim, S.-J. & Kim, J.-H. Magnetic-field-induced structural change of a two-dimensional colloid of glycerol droplets on a nematic liquid-crystal surface. Phys. Rev. E 87, 032502 (2013).
15. Musevic, I., Skarabot, M., Tkalec, U., Ravnik, M. & Zumer, S. Two-dimensional nematic colloidal crystals self-assembled by topological defects. Science 313, 954–958 (2006).
16. Skarabot, M. et al. Two-dimensional dipolar nematic colloidal crystals. Phys. Rev. E 76, 051406 (2007).
17. Ognysta, U. et al. 2D interactions and binary crystals of dipolar and quadrupolar nematic colloids. Phys. Rev. Lett. 100, 217803 (2008).
18. Skarabot, M. et al. Interactions of quadrupolar nematic colloids. Phys. Rev. E 77, 031705 (2008).
19. Nych, A. et al. Assembly and control of 3D nematic dipolar colloidal crystals. Nat. Commun. 4, 1489 (2013).
20. Lapointe, C. P., Mason, T. G. & Smalyukh, I. I. Shape-controlled colloidal interactions in nematic liquid crystals. Science 326, 1083–1086 (2009).
21. Senyuk, B., Puls, O., Tovkach, O. M., Chernyshuk, S. B. & Smalyukh, I. I. Hexadecapolar colloids. Nat. Comm. 7, 10659 (2016).
22. Skarabot, M. et al. Hierarchical self-assembly of nematic colloidal superstructures. Phys. Rev. E 77, 061706 (2008).
23. Nikkhou, M. et al. Light-controlled topological charge in a nematic liquid crystal. Nat. Phys. 11, 183–187 (2015).
24. Nikkhou, M., Skarabot, M. & Musevic, I. I. Annihilation dynamics of topological monopoles on a fiber in nematic liquid crystals. Phys. Rev. E 93, 062703 (2016).
25. Kuksenok, O. V., Ruhwandl, R. W., Shiyanovskii, S. V. & Terentjev, E. M. Director structure around a colloid particle suspended in a nematic liquid crystal. Phys. Rev. E 54, 5198–5203 (1996).
26. Stark, H. Physics of colloidal dispersions in nematic liquid crystals. Phys. Rep. 351, 387–474 (2001).
27. de Gennes, P. G. & Prost, J. The Physics of Liquid Crystals, 2nd edn (Oxford Science Publications, Oxford, 1993).
28. Lev, B. I. & Tomchuk, P. M. Interaction of foreign macrodroplets in a nematic liquid crystal and induced supermolecular structures. Phys. Rev. E 59, 591–602 (1999).
29. Lev, B. I., Chernyshuk, S. B., Tomchuk, P. M. & Yokoyama, H. Symmetry breaking and interaction of colloidal particles in nematic liquid crystals. Phys. Rev. E 65, 021709 (2002).
30. Lev, B. I. The ground state and the character of the interaction between a colloidal particles in a liquid crystals. arXiv:1311.1878v1 [cond-mat.soft] (2013).
31. Lev, B. I., Chernyshuk, S. B. & Tovkach, O. M. Elastic octopoles and colloidal structures in nematic liquid crystals. Phys. Rev. E 89, 032505 (2014).
32. Poulin, P., Cabuil, V. & Weitz, D. A. Direct measurement of colloidal forces in an anisotropic solvent. Phys. Rev. Lett. 79, 4862–4865 (1997).
33. Ramaswamy, S., Nityananda, R., Raghunathan, V. A. & Prost, J. Power-law forces between particles in a nematic. Mol. Cryst. Liq. Cryst. 288, 175–180 (1996).
34. Cul, M. & Kelly, J. R. Temperature dependence of visco-elastic properties of 5CB. Mol. Cryst. Liq. Cryst. 331, 49–57 (1999).
35. Kim, S.-J. & Kim, J.-H. The interaction of colloidal particles with weak homeotropic anchoring energy in homogeneous nematic liquid crystal cells. Soft Matter 10, 2664 (2014).
36. Park, B. et al. Thermal and optical stabilities of photoisomerizable polyimide layers for nematic liquid crystal alignments. Jpn. J. Appl. Phys. 37, 5663–5668 (1998).
## Acknowledgements
This research was supported by the Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2014R1A1A2058029 and NRF-2017R1A2B4011758), and supported by Brain Pool Program through the Korean Federation of Science and Technology Societies (KOFST) funded by the Ministry of Science, ICT and Future Planning (161S-1-3-1617).
## Author information
### Author notes
• Sung-Jo Kim
Present address: IBS Center for Soft and Living Matter, UNIST-gil 50, Ulsan, 689-798, Korea
1. Beom-Kyu Lee and Sung-Jo Kim contributed equally to this work.
### Affiliations
1. Department of Physics, Chungnam National University, 99 Daehak-ro, Yuseong-gu, Daejeon, 34134, Korea: Beom-Kyu Lee, Sung-Jo Kim, Jong-Hyun Kim & Bohdan Lev
### Contributions
B.K.L. and S.J.K. and J.H.K. initiated the work. B.K.L. performed the experiments. S.J.K. and B.L. introduced the theoretical modeling. B.K.L. and J.H.K. calculated and executed the modeling. J.H.K. and B.L. wrote the manuscript. All authors contributed to the manuscript.
### Competing Interests
The authors declare that they have no competing interests.
### Corresponding author
Correspondence to Jong-Hyun Kim. | 2018-10-16 09:38:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7988291382789612, "perplexity": 1251.9410551771869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583510749.37/warc/CC-MAIN-20181016093012-20181016114512-00542.warc.gz"} |
https://realblog.zkiz.com/abbychau/272199 | MultiQueue2 is a fast bounded mpmc queue that supports broadcast/broadcast style operations
MultiQueue was developed by Sam Schetterer, but has not been updated for some time. I found it very useful as it implements futures. However, it uses a few outdated library APIs, and its use of spin locks takes 100% CPU in many cases.
# Stone Age
For any channel, if there is potential for conflict (more than one of the writers/readers consuming the resources), there must be locks. So the 100% CPU issue is very likely caused by a spinlock.
At first, I did it the sleep way:
#[inline(always)]
pub fn check(seq: usize, at: &AtomicUsize, wc: &AtomicUsize) -> bool {
    let cur_count = at.load(Relaxed); // restored: `cur_count` comes from `at`, dropped in the excerpt
    wc.load(Relaxed) == 0 || seq == cur_count || past(seq, cur_count).1
}
Nonetheless, I sacrificed throughput, because I punished every check.
And then I tried a much smaller sleep. This is just meaningless, because it does not move the CPU from P-state to S-state, and I immediately realized that it may be foolish to coerce an S-state in a spinlock design anyway.
And then I tried 1 ms, 10 ms, and 1000 ns, for both settings (check after wait, and check before wait). link. It does not make much difference.
I tried _mm_pause from SSE2 as well, but it does not make any difference.
#[inline(always)]
pub fn check(seq: usize, at: &AtomicUsize, wc: &AtomicUsize) -> bool {
    if is_x86_feature_detected!("sse2") {
        #[cfg(target_arch = "x86")]
        use std::arch::x86::_mm_pause;
        #[cfg(target_arch = "x86_64")]
        use std::arch::x86_64::_mm_pause;
        unsafe {
            _mm_pause();
        }
    }
    let cur_count = at.load(Relaxed); // restored: `cur_count` comes from `at`, dropped in the excerpt
    wc.load(Relaxed) == 0 || seq == cur_count || past(seq, cur_count).1
}
Although _mm_pause is designed for spinlocks, the point is that the CPU will not switch between P-state and S-state for just a few nanoseconds. It is simply not the right way.
# Tool Age
Keeping on with the trivial tweaks would never get me any further (although it practically solved the system problem of 100% CPU usage in my program, and my program is not required to be very performant anyway), so I stepped back and rethought my first solution.
The first solution was actually about doing less checking and forcing the CPU to sleep, so that it does not waste too much energy on useless checks, which are (roughly) 99% predictable.
What I really want to punish are the collided conflicts, because they are very likely to be blocked again if the CPU checks in the next cycle. At that point, sleeping is better than checking.
So I made the spinlock a swing-back one.
#[inline(always)]
pub fn check(seq: usize, at: &AtomicUsize, wc: &AtomicUsize) -> bool {
    let cur_count = at.load(Relaxed); // restored: `cur_count` comes from `at`, dropped in the excerpt
    if wc.load(Relaxed) == 0 || seq == cur_count || past(seq, cur_count).1 {
        true
    } else {
        // a miss: in the swing-back design, this is the branch that gets punished
        // (the caller backs off before the next check instead of re-checking immediately)
        false
    }
}
This works exactly as expected. The CPU usage dropped, and the throughput is still able to pass all the tests.
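For readers who want something concrete: the crate's real implementation is more involved, but a swing-back wait can be sketched roughly as below. This is my own illustration, not MultiQueue2's code; the names swing_back_wait and try_spins and the thresholds are made up here.
    use std::thread;
    use std::time::Duration;

    // Spin cheaply while a quick retry is still likely to win, then back off
    // harder (yield, then sleep with a capped, growing pause) once the caller
    // keeps colliding, so a blocked reader/writer stops burning a full core.
    pub fn swing_back_wait<F: Fn() -> bool>(check: F, try_spins: usize) {
        let mut misses = 0usize;
        while !check() {
            if misses < try_spins {
                std::hint::spin_loop(); // stay hot: the next check will probably succeed
            } else if misses < try_spins + 8 {
                thread::yield_now(); // let other threads run before re-checking
            } else {
                let exp = (misses - try_spins).min(10) as u32;
                thread::sleep(Duration::from_micros(1u64 << exp)); // punish repeated collisions
            }
            misses += 1;
        }
    }
The only point being illustrated is that the miss branch gets progressively cheaper for the rest of the machine, which is exactly the behaviour described above.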
# Futures
However, I then found that the CPU usage was still high for the futures queues; a kind of hybrid lock is used there.
Therefore, I removed all the spinning before going to parking_lot::Mutex::new(VecDeque::new()). link
It increases the number of low-level context switches very much, but it did indeed lower the CPU usage. Because this heavy switching is expensive compared to the lightweight spinlock, the throughput of the futures queues becomes just 1/10 of that of the normal queue.
And this experiment also taught me the performance ratio between parking_lot mutexes and native spins.
# No Silver Bullet
I really would like to come up with an equation for people to set try_spins precisely, but it is very complicated, because it involves all the environment information: CPU frequency, number of consumers, rate of feeding, etc.
So I just left it to the user.
unsafe impl<RW: QueueRW<T>, T> Sync for MultiQueue<RW, T> {}
unsafe impl<RW: QueueRW<T>, T> Send for MultiQueue<RW, T> {}
unsafe impl<RW: QueueRW<T>, T: Send> Send for InnerSend<RW, T> {}
unsafe impl<RW: QueueRW<T>, T: Send> Send for InnerRecv<RW, T> {}
unsafe impl<RW: QueueRW<T>, T: Send> Send for FutInnerSend<RW, T> {}
unsafe impl<RW: QueueRW<T>, T: Send> Send for FutInnerRecv<RW, T> {}
unsafe impl<RW: QueueRW<T>, R, F: FnMut(&T) -> R, T> Send for FutInnerUniRecv<RW, R, F, T> {}
/// Usage: futures_multiqueue(capacity)
/// This is equivalent to futures_multiqueue_with(capacity,50,20).
pub fn futures_multiqueue<RW: QueueRW<T>, T>(
capacity: Index,
) -> (FutInnerSend<RW, T>, FutInnerRecv<RW, T>) {
let cons_arc = Arc::new(FutWait::new());
let prod_arc = Arc::new(FutWait::new());
let (tx, rx) = MultiQueue::new_internal(capacity, cons_arc.clone());
let ftx = FutInnerSend {
writer: tx,
wait: cons_arc.clone(),
prod_wait: prod_arc.clone(),
};
let rtx = FutInnerRecv {
} | 2022-08-09 06:48:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3284679055213928, "perplexity": 10189.741688524427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570913.16/warc/CC-MAIN-20220809064307-20220809094307-00422.warc.gz"} |
http://mathoverflow.net/revisions/29978/list
Do there exist nonconstant real valued functions $f$ and $g$ such that the expression $$f(x) - v/g(x)$$ is maximized at $x = v$ for all positive real $v$?
https://www.gamedev.net/blogs/entry/652866-basic-code/
# Basic code.
Well at the end of each chapter of Beginning C++ Game Programming, I was hoping to do a largish piece of code with everything I learnt in the chapter included, I managed this for Chapter 1 but couldn't be assed for 2, I'm now half way through 3.
To make up for this, and to check that I can still remember all the correct syntax, I'm going to be creating some code and posting it here, for you to criticise at your own will =D.
Maybe if I have enough time, and schoolwork doesn't get in the way I'll create one large program combining all 3 pieces of code. This might be an idea for the beginning of the summer hols, and who knows, by September I may actually be decent at programming and have a bit of respect on these here boards!
I guess I may aswell post the Chapter 1 code here, this is all created by me, and is not a finished piece of code. Just a showcase of my progress:
//Testing what I learnt in Chapter 1.
#include <iostream> // header name lost in the page extraction; it must be <iostream> for cout/cin
using namespace std;

int main()
{
    cout << "Battle Aftermath.\n\n";

    const int UNDEAD_EXP = 25;
    int undeadKilled = 7;
    int undeadPoints = UNDEAD_EXP * undeadKilled;
    cout << "Undead Points: " << undeadPoints << ".\n";
    cout << "Congratulations, you have slain " << undeadKilled
         << " undead beasts and gained " << undeadPoints << " Experience points!" << "\n\n";

    const int HUMAN_EXP = 10;
    int humansKilled = 16;
    int humanPoints = HUMAN_EXP * humansKilled;
    cout << "Human Points: " << humanPoints << endl;
    cout << "Congratulations, you have slain " << humansKilled
         << " human gladiators and gained " << humanPoints << " Experience points!" << "\n\n";

    const int ALLIES_EXP = -10;
    int alliesKilled = 4;
    int alliesPoints = ALLIES_EXP * alliesKilled;
    cout << "Allies Points: " << alliesPoints << endl;
    cout << "Although you battled well for your team, " << alliesKilled
         << " members were brutally massacred, putting a burden of " << alliesPoints
         << " experience on your team..." << "\n\n";

    int totalPoints = undeadPoints + humanPoints + alliesPoints;

    enum armour {LEATHER = 10, CHAINMAIL = 20, PLATEMAIL = 40};
    armour myArmour = LEATHER;

    cout << "You have earnt the right to a new set of armour.\n" << endl;
    cout << "1 - I'll keep my armour thanks.\n";
    cout << "2 - I'll upgrade to Chainmail please! (" << CHAINMAIL << " points)\n";
    cout << "3 - I'll upgrade to Platemail please! (" << PLATEMAIL << " points)\n";

    int choice;
    cout << "Choice: ";
    cin >> choice;

    switch (choice)
    {
    case 1:
        cout << "You have chosen to keep your current armour.\n";
        break;
    case 2:
        cout << "You have chosen to upgrade to Chainmail!\n";
        break;
    case 3:
        cout << "You have chosen to upgrade to Platemail!\n";
        break;
    default:
        cout << "Please choose a number from 1-3\n";
    }

    return 0;
}
I like the way you are going about learning. I certainly started off and progressed in a similar way to you when I first started out. Yea, always go at your own pace.
One thing I would say about your chpt1 code is that the enum is within the main() function. I suggest giving the enum a global program scope rather than just the scope of the main() function (in other words place it at the top, outside the braces of main()). The thing is, when you get onto functions and later onto modular programming you will not be able to access the enumerations from the other functions other than the function that the enum is in the scope of. If that makes any sense at all [wink].
But apart from that, pretty slick [smile]
ah yeah, I see what you mean. Thanks for the tip, luckily for this piece of code I only need it in that scope though. It's actually very good of you to tell me, otherwise I probably would have got some logical errors later on and not had a clue, for being helpful I rated you plus :)
http://math.stackexchange.com/questions/847232/mathematical-induction-matrix-example | # Mathematical Induction Matrix Example
I'm a little rusty and I've never done a mathematical induction problem with matrices so I'm needing a little help in setting this problem up.
Show that $$\begin{bmatrix}1&1\\1&1\end{bmatrix}^{n} = \begin{bmatrix}2^{(n-1)}&2^{(n-1)}\\2^{(n-1)}&2^{(n-1)}\end{bmatrix}$$ for every $n\ge 1$.
-
yes, just realized I left something off – cele Jun 25 '14 at 15:25
First show that it's true for $n=1$ (obvious). Then assume that it's true for $n$, and compute the value at $n+1$ by multiplying out the matrices. – katrielalex Jun 25 '14 at 15:28
@gnometorule, after looking at this problem with a professor I know, they suggested induction. As I've mentioned, I'm a little rusty, just getting back into higher math so I went with their suggestion – cele Jun 25 '14 at 15:28
nice little problem! 1up – John Smith Jun 25 '14 at 15:30
My comment was written when you were still missing the n-exponent. – gnometorule Jun 25 '14 at 15:36
The case $n=1$ is clear since $2^0 = 1$. So suppose that $$\begin{pmatrix} 1&1 \\ 1 & 1 \end{pmatrix}^n = \begin{pmatrix} 2^{n-1}&2^{n-1} \\ 2^{n-1} & 2^{n-1}\end{pmatrix} \quad \quad *$$ for some $n \geq 1$ and let us prove that $$\begin{pmatrix} 1&1 \\ 1 & 1 \end{pmatrix}^{n+1} = \begin{pmatrix} 2^{n}&2^{n} \\ 2^{n} & 2^{n}\end{pmatrix}.$$ We have $$\begin{pmatrix} 1&1 \\ 1 & 1 \end{pmatrix}^{n+1} = \begin{pmatrix} 1&1 \\ 1 & 1 \end{pmatrix}^{n} \begin{pmatrix} 1&1 \\ 1 & 1 \end{pmatrix} \overset{*}{=} \begin{pmatrix} 2^{n-1}&2^{n-1} \\ 2^{n-1} & 2^{n-1}\end{pmatrix}\begin{pmatrix} 1&1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} 2\cdot 2^{n-1}&2\cdot 2^{n-1} \\ 2\cdot 2^{n-1} & 2\cdot 2^{n-1}\end{pmatrix} =\begin{pmatrix} 2^{n}&2^{n} \\ 2^{n} & 2^{n}\end{pmatrix}.$$
And thus the relation is true for every $n \in \mathbb{N}$
It is clearly true for $n=1$. Assume it's true for $n$. Then $$\begin{pmatrix}1&1\\1&1\end{pmatrix}^{n+1}=\begin{pmatrix}1&1\\1&1\end{pmatrix}\begin{pmatrix}2^{(n-1)}&2^{(n-1)}\\2^{(n-1)}&2^{(n-1)}\end{pmatrix}=\begin{pmatrix}2^{(n-1)}+2^{(n-1)}&2^{(n-1)}+2^{(n-1)}\\2^{(n-1)}+2^{(n-1)}&2^{(n-1)}+2^{(n-1)}\end{pmatrix}=\begin{pmatrix}2^n&2^n\\2^n&2^n\end{pmatrix}$$
So it's true for $n+1$. By induction it is true for all $n$. | 2015-11-29 05:45:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9448971748352051, "perplexity": 254.47071928554544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456289.53/warc/CC-MAIN-20151124205416-00106-ip-10-71-132-137.ec2.internal.warc.gz"} |
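As a quick numerical sanity check to accompany the induction (purely illustrative, not part of the proof):
    // Verify [[1,1],[1,1]]^n = 2^(n-1) * [[1,1],[1,1]] entrywise for n = 1..=20.
    fn mat_mul(a: [[u64; 2]; 2], b: [[u64; 2]; 2]) -> [[u64; 2]; 2] {
        let mut c = [[0u64; 2]; 2];
        for i in 0..2 {
            for j in 0..2 {
                for k in 0..2 {
                    c[i][j] += a[i][k] * b[k][j];
                }
            }
        }
        c
    }

    fn main() {
        let a = [[1u64, 1], [1, 1]];
        let mut power = a; // holds the n-th power of a
        for n in 1..=20u32 {
            let expected = 1u64 << (n - 1); // 2^(n-1)
            assert!(power.iter().all(|row| row.iter().all(|&x| x == expected)));
            power = mat_mul(power, a);
        }
        println!("checked n = 1..=20");
    }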
https://motls.blogspot.com/2006/09/do-laws-of-nature-last-forever.html | ## Monday, September 25, 2006 ... //
### Do the laws of nature last forever?
Several people have informed me about an article in New Socialist
The author seems confused about what science is, what physics is, what a law of nature is, how it looks and how it may look, and why we believe such laws. He is profoundly confused at many levels: basic levels as well as technical ones.
Cosmological natural selection
Lee Smolin promotes his cosmological natural selection. Just during the last month, five independent people have mentioned this issue in discussions with me or in their own articles; the list included famous names like L.S. or A.V. All of them are convinced that it is trivial to falsify Smolin's hypothesis and it has, in fact, been done immediately when Smolin proposed it.
A decade ago, Smolin had conjectured that the laws of our universe are optimized for black hole production because every new black hole is a new baby whose properties are similar to the parent universe but it is not quite identical because there is also a cosmological mutation going on. The most prolific universes - those who create many black holes - are going to dominate the ensemble of the universes. Lee Smolin has written a whole book whose content is isomorphic to this paragraph.
It is easy to see that if you change some parameters in our universe, for example if you reduce the hierarchy between the electroweak scale and the Planck scale, many more black holes will be created. The theory is dead. Trivially dead. Period. Why does Smolin revive this nonsense all the time, without having any new arguments or mechanisms? Does a lie become the truth when it is repeated 100 times?
Smolin also misleadingly suggests that he is behind the word "landscape" in theoretical physics even though the only scientifically plausible meaning of the word in theoretical physics was explained by Leonard Susskind.
Why do we believe that the world is governed by laws
The main reason why we believe that the universe obeys laws is that the laws we have found are the simplest satisfactory explanation of the experiments and observations that we have done. Lee Smolin doesn't seem to mention this technical detail. Instead, he focuses on battles between philosophy, religion, and science that should play no role in science whatsoever.
Evolving laws vs. evolving knowledge
Lee Smolin clearly fails to distinguish the evolution of the laws themselves from the evolution of our knowledge about them. He claims that Einstein's discovery that the geometry of real space is not flat is "another example" of evolving laws of nature. In reality, of course, Einstein's equations had the same form and applied to the motion of celestial bodies long before Einstein realized that. The laws are not changing.
What does it mean for the laws to change?
What does a physicist mean by the basic laws of nature? He or she means the most fundamental possible mathematical reasons or rules that predict or imply a class of observed phenomena or all other phenomena. If the laws were evolving, they would not really be laws. If the question whether it is fine to kill other people had an evolving answer, there would be no law about murder. Of course, at a longer time scale or length scale, this social law indeed doesn't exist. But it exists within an effective theory.
Physics is more fundamental than sociology and its laws are thus more lasting, too.
A physicist always picks the most fundamental law she can pick. If the laws were evolving significantly but if they were still accessible to science, the primary thing that a physicist would be interested in would be the laws that govern the evolution of the "simpler" laws. No doubt, we would call these rules "laws" again, even though Lee Smolin tends to call them "metalaws". The prefix "meta-" only means that the things are perhaps getting too complicated for Lee. But it doesn't mean that they're getting too complicated for everyone else.
In his recent book, he also incorrectly uses the word "meta-theory" for string/M-theory. In reality, the word "meta-theory" could have been used a few times at the beginning of the duality revolution but no one would use this terminology today. String/M-theory is simply a theory. It is impossible to divide it to pieces.
Even though most crackpots are unable to understand this simple fact, string/M-theory is the best textbook example of what a theory in physics means. It is a logically coherent structure including a finite number of concepts and a finite number of equations and other mathematical rules that can be used to predict the outcome of many experiments. I say that string/M-theory is the best representative of the word "theory" we have in science because it is the most complete and the most logically coherent description of the widest possible set of phenomena that we have ever had, namely all of them.
Technically, by a theory, we mean a choice of the Hilbert space and/or basic degrees of freedom together with rules to determine the dynamics - the Hamiltonian, the action, or more general rules to calculate correlators or the S-matrix.
The word "theory" does not require the system of concepts to be already proven experimentally. We use the word "theory" for theories that are not yet proven or that are unlikely to be ever proven (little Higgs theory?) as well as for theories that have already been falsified (Glashow's old SU(2) theory) although the word "model" is often a substitute, especially for more concrete theories that have many conceivable "siblings".
In science, we couldn't use the word "theory" just for one of these options - a correct or a wrong theory - because the validity of any sufficiently interesting theory we discuss or investigate has yet to be determined, and this fact would make the word "theory" unusable in most situations: another point that the "critics of science" completely misunderstand. Virtually none of the "Not Even Wrong" crackpots understands that if we already knew for sure that a theory is correct, we wouldn't be developing it anymore - we would only be looking for its other consequences and we would be moving to a more profound theory.
The theory that most of the theoretical physicists, especially the cutting-edge theoretical physicists, work on at a given moment of time is necessarily an unproven theory, essentially by definition. If a theory is already proven, then it is not at the cutting edge.
Meaning of the evolution in the actual theories we have
In the state-of-the-art theories, we know exactly what it means for the theories to evolve in time. For quantum field theories, it means to change their relevant and marginal parameters into functions of the cosmological time: the masses of elementary particles and the renormalizable couplings may depend on time. We know that such a possible evolution is severely constrained by observations. Although there are a few controversies, it is fair to say that it seems that there exists no such evolution.
We shouldn't be thinking about creating new degrees of freedom because a transition from a theory with some degrees of freedom to a theory with other degrees of freedom would prove that at least one of them had to be incomplete at the mass scale of the new degrees of freedom, and thus a more complete theory was needed. Alternatively, such a transition would be completely discontinuous and it wouldn't allow us to use the knowledge about the previous regime to learn anything about the new regime.
In string theory, it is completely impossible to change the laws of nature as a function of time because string theory has no parameters to be adjusted. There is only one unique and eternal mathematical structure called string theory and it cannot be changed or contaminated. It can be twisted but the twisted version is not a realistic theory. :-)
Any evolution is governed by the laws of string theory. This is why string theory can be used to prove that spacetime topology can change, among other things. The laws of string theory don't break down and don't have to be - and cannot be - changed even when an extreme effect such as topology change occurs.
If the laws of a theory needed to change in order to describe a certain transition, we would have a proof that the theory is an incomplete description of reality.
Needless to say, this is not the case in string theory. String theory is a complete description of reality even though we don't understand its predictions in some extreme situations, especially those that have something to do with the ultratiny expanding universe. Can we really tunnel into another vacuum and which observables we should talk about in this setup? Although people assume that an answer similar to the answer of effective field theory with a "common sense" discontinuity is the right one, we are not guaranteed that we have fully understood all implications of string theory for this situation.
But at any rate, whatever the allowed observables, rules, questions, and transitions in this context are, they are a part of string theory and we are not allowed to mess up with these laws by inventing some meta-laws or laws saying how the previous laws should change.
Mathematical character of laws
In any theory that remotely resembles the theories that have been successfully used to describe reality for centuries, the most fundamental laws - even if some people would like to call them meta-laws - are given by systems of mathematical equations constraining certain mathematical structures and quantities. If this were not the case, we couldn't describe the universe quantitatively.
Lee obviously disagrees with the previous paragraph. In order to show how intensely he disagrees with the basic thesis of theoretical physics that the world is based on mathematical laws, he even quotes Roberto Unger, a Harvard philosopher, who has called the eternal mathematical laws thought to be relevant for physics "a poisoned gift of mathematics to physics". Wow.
Lee feels that physics was consistently getting rid of time and he suggests that it could be a good idea to give time the same role it played before Galileo. If the laws are evolving, we would need to return before Galileo. In other words, if his article reflects what Lee thinks, he wants to return us to the age of the Inquisition in this respect, too. Very nice.
This is the physicist who is spamming us with the nonsensical comments that we should be doing physics without a pre-existing spacetime; on the other hand, the very fact that we have permanent laws that are valid at all times - which has been the case at least for 300 years - is already too bad for him. Do you think it is a consistent approach to try to kill time where it seems almost necessary and restore its key role in contexts where it has been impossible for 300+ years because the eternal laws work so well?
In philosophy, one could spend several centuries by thinking about the evolving laws of nature. Needless to say, it would lead to the ballpark of the same realm as most other investigations in philosophy, namely the realm of nowhere. In physics, we have very different methods. According to these methods, the concept of the basic laws that evolve in time is an ill-defined concept because we can't really define any global "time" coordinate (because of general relativity and other deep insights) and because we can't define what it means for the laws to evolve unless we have other laws that don't evolve. It also seems to be a useless idea that doesn't help us to explain anything that we know about the universe, and therefore its weight in physics is tiny even though it can still generate several pages in New Socialist.
And that's the memo. | 2021-12-04 01:27:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5563094019889832, "perplexity": 461.522336841103}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362923.11/warc/CC-MAIN-20211204003045-20211204033045-00593.warc.gz"} |
https://www.gerad.ca/fr/papers/G-92-35 | Groupe d’études et de recherche en analyse des décisions
# Solution of the Multisource Weber and Conditional Weber Problems by D.-C. Programming
## Pey-Chun Chen, Pierre Hansen, Brigitte Jaumard et H Tuy
D.-c. programming is a recent technique of global optimization, which allows the solution of problems whose objective function and constraints can be expressed as differences of convex (i.e., d.-c.) functions. Many such problems arise in continuous location theory. The problem first considered is to locate a known number of source facilities so as to minimize the sum of weighted Euclidean distances between the fixed locations of users and the source facility closest to each user. We also apply d.-c. programming to the solution of the conditional Weber problem, an extension of the multisource Weber problem, in which some facilities are assumed to be already established. In addition, we consider a generalization of the Weber problem, the facility location problem with limited distances, where the effective service distance becomes a constant when the actual distance attains a given value. Computational results are reported for problems with up to ten thousand users and two new facilities; fifty users and three new facilities; one thousand users, twenty existing facilities, and one new facility; or two hundred users, ten existing facilities, and two new facilities.
35 pages
This paper was revised in January 1996.
https://www.physicsforums.com/threads/closure-of-groups.710403/ | # Closure of groups
1. Sep 14, 2013
### bonfire09
Let G be a group and my book defines closure as: For all a,bε G the element a*b is a well defined element of G. Then G is called a group. When they say well defined element does that mean I have to show a*b is well defined and it is a element of the group? Or do I just show a*b is closed under *(the operation)?
2. Sep 14, 2013
### economicsnerd
I've always found the closure axiom a bit silly. It's implied if you just write $*: G\times G\to G$. All it means is that, given $a,b\in G$, there's a thing named $a*b$, and that whatever this thing is, it belongs to $G$.
3. Sep 14, 2013
### bonfire09
Thanks. I saw that other abstract algebra books have it defined as how you said it. My book apparently has it defined a little differently.
4. Sep 14, 2013
### Axiomer
This will often depend on the context. * is by definition a binary function *:GxG→G, and a function is well-defined by definition.. I think the problem is best illustrated by examples:
Let G = {[x]: x is an integer not a multiple of 3}, where [x] = {integers y s.t. x~y}, where we write x~y if x and y leave the same remainder upon division by 3 (alternatively, x-y is divisible by 3). The elements of G are sets called equivalence classes of Z modulo 3, and we can easily verify that G={[1],[2]}. Now define a binary operation on G by [a]*=[ab]. At first glance it might not be obvious that * is well-defined since [a] and have many different representations, and [ab] might depend on which of these representations we choose. For example [2]=[5] and [1]=[31], so we better make sure that we get [2]*[1]=[5]*[31] with how we defined *! We can verify that [2]*[1]=[2]=[155]=[5]*[31], since 155 leaves remainder 2 upon division by 3. Indeed, we can prove that [a]*=[ab] gives the same element of G no matter how we choose to write [a] and , i.e. * is well defined!
Edit: G={[0],[1],[2]} is not a group ([0] is not invertible), edited so that G={[1],[2]}
Last edited: Sep 14, 2013
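To make the well-definedness point above concrete, here is a small brute-force verification (my own illustration, not from the thread) that the class of the product does not depend on which representatives are chosen:
    // If a ≡ a' and b ≡ b' (mod 3), then ab ≡ a'b' (mod 3),
    // so [a]*[b] = [ab] is well defined on equivalence classes.
    fn main() {
        let m: i64 = 3;
        for a in -15..=15 {
            for b in -15..=15 {
                for a2 in -15..=15 {
                    for b2 in -15..=15 {
                        if (a - a2) % m == 0 && (b - b2) % m == 0 {
                            assert_eq!((a * b).rem_euclid(m), (a2 * b2).rem_euclid(m));
                        }
                    }
                }
            }
        }
        println!("product of classes is independent of representatives on -15..=15");
    }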
5. Sep 14, 2013
### bonfire09
Yes, it's just like showing a function is well defined. I wasn't sure if it would suffice to show that (G,*) is closed under the operation *.
https://blog.rossry.net/tag/academia/ | IN WHICH Ross Rheingans-Yoo—a sometime artist, economist, poet, trader, expat, EA, and programmer—writes on things of interest.
# Reading Feed (last update: March 17)
A collection of things that I was glad I read. Views expressed by linked authors are chosen because I think they're interesting, not because I think they're correct, unless indicated otherwise.
### (17)
Blog: Marginal Revolution | The rise of the temporary scientist — relevant to my interests, naturally.
### (7)
Lots of other people do have a problem with the donation, though. Matt Levine, writing at the Bloomberg View with his tongue firmly in-cheek, sums them up without taking much of a side:
It's possible that there's a secret club of billionaires competing to give tons of money to the philanthropies that make people angriest. The Koch Brothers and George Soros could be co-presidents, and John Paulson shot to the top of the league table in 2012 when he gave a \$100 million
# The Garden and the Jungle
### (1)
I love the place I'm working this summer. (A smallish proprietary trading firm in lower Manhattan.) It has one of the most vibrantly intellectual atmospheres I've seen anywhere, and the problems that we're working on really are interesting, often novel, and eminently practical. For a place that aims to compete in international financial markets by hiring the best mathematical talent that (1) cool math problems and (2) money can buy, it's...just about exactly what you might expect.
In particular, I'm in love with my current research project, which is easily the coolest thing I've been asked to do yet. (I also interned for all of last summer there.) What exactly it is is proprietary (sorry), but it has me mixing machine-learning and stochastic calculus in some really cool ways that have me alternating between coding furiously and filling up whiteboard upon whiteboard with math. Also, I recently got yelled at for taking up too much computing power on the shared intern server, so I got upgraded to supercomputing-cluster
# Burn the Man's Books!
According to MIT's Title IX Office, no-longer Professor Emeritus Walter Lewin acted in violation of the Institute's sexual harassment and misconduct policy while teaching an online MIT course open to the public. The Institute announced on Tuesday that it has stripped Lewin of Professor-Emeritus status, and will be removing videos of his physics lectures -- which have been called "legendary" -- from MIT OpenCourseWare and MITx.
I accept without question the reports that the charges were extremely serious and that "this wasn't a borderline case", and I agree with my current CS(@MIT) professor Scott Aaronson, as he writes in a recent blog post:
• [S]exual harassment must never be tolerated, neither here nor anywhere else. But I also feel that, if a public figure is going to be publicly brought down like this (yes, even by a private university), then the detailed findings of the investigation should likewise be made public, regardless of how embarrassing they are.
• More importantly, I wish to register that I disagree
# November 21 Bucket o' Links: "Languages, Language, and Words, Words, Words" Edition
I'm going to continue calling these my Friday linkwraps, in the hopes that I'll (1) actually publish one on Friday someday, or, failing that, (2) not slip to a write-on-Saturday, publish-on-Sunday schedule if I call them my Saturday linkwraps instead.
I'm still running an updated-almost-daily feed of readworthy links at My Faults My Own | Reading Feed. Check it out if you're a fan of these BoL's!
1
For reasons which may later become clear, I've written two subtly different versions of this post, for different audiences. Poets, dreamers, and readers who don't particularly care to erect walls between fantasy and reality, click here. Readers who don't have time for my mind games and just want to read a normal Bucket o' Links, click here.
https://physics.stackexchange.com/questions/428829/piston-cylinder-assembly-with-separate-compartments | # Piston cylinder assembly with separate compartments [closed]
Consider an ideal gas in a piston cylinder assembly, initially divided into 3 compartments by impermeable diathermal membranes. The compartments initially have the same mass and temperatures, but different pressures. The membranes are punctured, and the system settles to the final state with a uniform pressure, temperature, and volume.
The goal is to find the final temperature $T_2$ in terms of the initial pressures $P_{1A}$, $P_{1B}$, $P_{1C}$, initial temperature $T_1$, and ideal gas properties.
The first law of thermodynamics yields
$\Delta E = Q-W=\frac{m}{3}c_v\Delta T + \frac{m}{3}c_v\Delta T + \frac{m}{3}c_v\Delta T$
But $Q=0$ so we have
$-W=mc_v\Delta T$
Now, this is easy if the process is isobaric, so that
$W=P_2(V_2-V_1)$
but is this a valid assumption? I'm going to assume that it is, and that the final pressure $P_2$ is the average of all the initial pressures, $P_2=\bar{P_1}=\frac{P_{1A}+P_{1B}+P_{1C}}{3}$.
So we have
$W=\bar{P_1}(V_2-V_1) = \bar{P_1}(\frac{mRT_2}{\bar{P_1}} - \frac{mRT_1}{3P_{1A}} - \frac{mRT_1}{3P_{1B}} - \frac{mRT_1}{3P_{1C}})$
Substituting this back into the first law will allow us to solve for $T_2$ in terms of $P_{1A}$, $P_{1B}$, $P_{1C}, T_1$ and gas properties.
My solution, however, is contingent assuming that $P_2=\bar{P_1}=\frac{P_{1A}+P_{1B}+P_{1C}}{3}$. I made this assumption because intuitively I think the pressures need to equilibrate to some value, and that value must be an average of all the pressures if the masses in each compartment are the same. Is this a good assumption? Is there a better way to solve this problem?
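(Carrying the algebra above one step further, still under the contested assumption $P_2=\bar{P_1}$: substituting $W=\bar{P_1}(V_2-V_1)$ into $mc_v\Delta T=-W$ gives
$$T_2=T_1\,\frac{c_v+R\,\dfrac{\bar{P_1}}{3}\left(\dfrac{1}{P_{1A}}+\dfrac{1}{P_{1B}}+\dfrac{1}{P_{1C}}\right)}{c_v+R}.$$
Since $\dfrac{\bar{P_1}}{3}\sum_i 1/P_{1i}$ is the ratio of the arithmetic to the harmonic mean of the three pressures, it is $\ge 1$, so this assumption predicts $T_2\ge T_1$. Whether the assumption itself is justified is exactly what the answer below addresses.)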
Actually, in the end, the work done by the overall combined system is just that required to raise the weight of the piston to its new elevation and to push back the outside atmosphere. For the initial state, $$P_{1A}A=Mg+P_{atm}A\tag{1}$$where M is the mass of the piston, and the expansion work done by the system is $$W=\left(\frac{Mg}{A}+P_{atm}\right)\Delta V\tag{2}$$So, combining these equations, the work done by the system is just $$W=P_{1A}\Delta V$$
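A quick symbolic check of this approach (not part of the original thread) is sketched below with sympy. It assumes compartment A is the one in contact with the piston, so that the final equilibrium pressure also equals $P_{1A}$; the variable names are ours.

```python
import sympy as sp

P1A, P1B, P1C, T1, T2, m, R, cv = sp.symbols('P_1A P_1B P_1C T_1 T_2 m R c_v', positive=True)

# Initial volume: three compartments, each holding mass m/3 at temperature T1
V1 = (m * R * T1 / 3) * (1 / P1A + 1 / P1B + 1 / P1C)

# Final volume: all the gas at T2 and pressure P1A (set by piston weight + atmosphere)
V2 = m * R * T2 / P1A

# First law with Q = 0:  m*cv*(T2 - T1) = -W,  where W = P1A*(V2 - V1)
first_law = sp.Eq(m * cv * (T2 - T1), -P1A * (V2 - V1))

T2_solution = sp.simplify(sp.solve(first_law, T2)[0])
print(T2_solution)
# Expected form: T1*(cv + (R/3)*(1 + P1A/P1B + P1A/P1C)) / (cv + R)
```

Under this assumption the result depends only on the piston pressure $P_{1A}$, not on the averaged-pressure guess questioned in the post.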
• I see why it's constant pressure now; the piston weight and atmospheric pressure are always the same. But did you mean that $P_{1A}=P_{piston} + P_{atm}$? I don't see why $P_{3A}$ has any constraints, it could be anything. – Drew Sep 15 '18 at 14:55 | 2019-10-23 20:30:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.748383104801178, "perplexity": 336.0292215884226}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987836295.98/warc/CC-MAIN-20191023201520-20191023225020-00472.warc.gz"} |
http://www.imomath.com/index.php?options=317&lmm=0 | # Algebra
1. (9 p.) Let $$a$$, $$b$$, and $$c$$ be the (not necessarily real) roots of the polynomial $$x^3+x-1$$. Find $\frac{1+a}{1-a}+ \frac{1+b}{1-b}+ \frac{1+c}{1-c}.$
2. (22 p.) Consider the set $$S\subseteq(0,1]^2$$ in the coordinate plane that consists of all points $$(x,y)$$ such that both $$[\log_2(1/x)]$$ and $$[\log_5(1/y)]$$ are even. The area of $$S$$ can be written in the form $$p/q$$ for two relatively prime integers $$p$$ and $$q$$. Evaluate $$p+q$$.
3. (24 p.) Consider the polynomial $P(x)=(1 + x + x^2 + \dots + x^{17})^2 - x^{17}.$ Assume that the roots of $$P$$ are $$x_k=r_k \cdot e^{i2\pi a_k}$$, for $$k = 1, 2, ... , 34$$, $$0 < a_1 \leq a_2 \leq \dots \leq a_{34} < 1$$, and some positive real numbers $$r_k$$. The sum $$a_1 + a_2 + a_3 + a_4 + a_5$$ is equal to $$p/q$$ for two coprime integers $$p$$ and $$q$$. Determine $$p+q$$.
4. (15 p.) The equation $$2^{333x-2} + 2^{111x+2} = 2^{222x+1} + 1$$ has three real roots. Assume that their sum is expressed in the form $$\frac mn$$ where $$m$$ and $$n$$ are relatively prime positive integers. Find $$m+n$$.
5. (28 p.) Let $$f:\mathbb N\rightarrow\mathbb R$$ be the function defined by $$f(1) = 1$$, $$f(n) = n/10$$ if $$n$$ is a multiple of 10 and $$f(n) = n+1$$ otherwise. For each positive integer $$m$$ define the sequence $$x_1$$, $$x_2$$, $$x_3$$, ... by $$x_1 = m$$, $$x_{n+1} = f(x_n)$$. Let $$g(m)$$ be the smallest $$n$$ such that $$x_n = 1$$. (Examples: $$g(100) = 3$$, $$g(87) = 7$$.) Denote by $$N$$ the number of positive integers $$m$$ such that $$g(m) = 20$$. The number of distinct prime factors of $$N$$ is equal to $$2^u\cdot v$$ for two non-negative integers $$u$$ and $$v$$ such that $$v$$ is odd. Determine $$u+v$$.
2005-2018 IMOmath.com | imomath"at"gmail.com | Math rendered by MathJax | 2018-07-23 08:04:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9552561640739441, "perplexity": 66.05629224490998}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676595531.70/warc/CC-MAIN-20180723071245-20180723091245-00639.warc.gz"} |
https://math.stackexchange.com/questions/2876733/why-this-function-is-not-riemann-integrable-on-0-1 | # Why this function is not Riemann integrable on $[0, 1]$
Why is this function not Riemann integrable on $[0, 1]$? $$f\left(x\right)=\begin{cases} x & x\in\mathbb{Q}\\ 0 & x\notin\mathbb{Q} \end{cases}$$ We can calculate the upper integral and lower integral, $\overline{\int}f\left(x\right)dx=0.5$, $\underline{\int}f\left(x\right)dx=0$; therefore, the Riemann criterion does not hold, so the function is not Riemann integrable.
But if you use the Lebesgue theorem, a bounded function on $[a, b]$ is Riemann integrable if and only if the set of discontinuities of $f\left(x\right)$ has measure zero.
We know that (1) every finite set has measure zero, and (2) every countable subset of $\mathbb{R}$ has measure zero.
The rational numbers $\mathbb{Q}$ are a countable subset of $\mathbb{R}$, and the irrational numbers $\mathbb{R}\setminus\mathbb{Q}$ are a countable subset of $\mathbb{R}$.
Hence, the set of discontinuities of $f\left(x\right)$ on [0, 1] has measure zero. Therefore $f\left(x\right)$ is Riemann integrable.
What are the mistakes here? Thank you very much.
• No, $f$ is discontinuous at every point of $(0,1]$ – zhw. Aug 9 '18 at 0:36
• $\mathbb{R} \setminus \mathbb{Q}$ isn't countable! – Hans Lundmark Aug 9 '18 at 10:53
Well, your discontinuities are a lot more than $$\mathbb{Q}\cap [0,1]$$; they are in fact all of $$[0,1]$$, since it is the boundary that matters. Or show me one point apart from $$0$$ where your function is continuous!
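For completeness, a short argument (added here, not part of the original exchange) locating the discontinuities precisely: for $x_0\in(0,1]$, take rationals $q_n\to x_0$ and irrationals $r_n\to x_0$; then
$$f(q_n)=q_n\to x_0\neq 0=\lim_n f(r_n),$$
so $f$ is discontinuous at every $x_0\in(0,1]$. At $x_0=0$ we have $|f(x)|\le|x|\to 0=f(0)$, so $f$ is continuous there. The discontinuity set $(0,1]$ has measure $1$, so the Lebesgue criterion gives the same conclusion as the upper/lower integral computation.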
• The function is continuous at $x=0$ – JavaMan Oct 31 '18 at 14:01 | 2019-08-19 04:10:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 3, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.9269232153892517, "perplexity": 106.13161914984991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314641.41/warc/CC-MAIN-20190819032136-20190819054136-00097.warc.gz"} |
https://artofproblemsolving.com/wiki/index.php?title=2015_AIME_I_Problems/Problem_5&diff=prev&oldid=69319 | # 2015 AIME I Problems/Problem 5
## Problem
In a drawer Sandy has $5$ pairs of socks, each pair a different color. On Monday Sandy selects two individual socks at random from the $10$ socks in the drawer. On Tuesday Sandy selects $2$ of the remaining $8$ socks at random and on Wednesday two of the remaining $6$ socks at random. The probability that Wednesday is the first day Sandy selects matching socks is $\frac{m}{n}$, where $m$ and $n$ are relatively prime positive integers. Find $m+n$.
## Hint
Notice that we can allow the sample space of the problem to be the $\dfrac{10!}{(2!)^5}$ possible permutations of socks, and that the desired outcomes are those for which the fifth and sixth items are the same color, the first and second are different colors, and the third and fourth are different colors.
## Solution
But this probability is simple to count. Let the fifth sock be arbitrary; the probability that the sixth sock matches in color is $\dfrac{1}{9}$. Let the first sock be arbitrary; the probability that the second sock does not match is $\dfrac{6}{7}.$
The only "hard" part is the third and fourth sock. But that is simple casework. If the third sock's color matches the color of one of the first two socks (which occurs with probability $\dfrac{2}{6} = \dfrac{1}{3}$), then the fourth sock can be arbitrary. Otherwise (with probability $\dfrac{2}{3}$), the fourth sock can be chosen with probability $\dfrac{4}{5}$ (5 socks left, 1 sock that can possibly match the third sock's color). The desired probability is thus $$\frac{1}{9} \cdot \frac{6}{7} \cdot \left(\dfrac{1}{3} + \dfrac{2}{3} \cdot \dfrac{4}{5}\right) = \frac{26}{315}.$$ The answer is $m + n = 26 + 315 = \boxed{341}$. | 2022-05-23 16:09:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 17, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9131932258605957, "perplexity": 429.0879893628615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662558030.43/warc/CC-MAIN-20220523132100-20220523162100-00352.warc.gz"}
http://catalog.flatworldknowledge.com/bookhub/reader/128?e=fwk-redden-ch07_s06 |
7.6 Applications of Rational Equations
Learning Objectives
1. Solve applications involving relationships between real numbers.
2. Solve applications involving uniform motion (distance problems).
3. Solve work-rate applications.
Number Problems
Recall that the reciprocal of a nonzero number n is 1/n. For example, the reciprocal of 5 is 1/5 and 5 ⋅ 1/5 = 1. In this section, the applications will often involve the key word “reciprocal.” When this is the case, we will see that the algebraic setup results in a rational equation.
Example 1: A positive integer is 4 less than another. The sum of the reciprocals of the two positive integers is 10/21. Find the two integers.
Solution: Begin by assigning variables to the unknowns.
Next, use the reciprocals $\frac{1}{n}$ and $\frac{1}{n-4}$ to translate the sentences into an algebraic equation:
$\frac{1}{n}+\frac{1}{n-4}=\frac{10}{21}$
We can solve this rational equation by multiplying both sides of the equation by the least common denominator (LCD). In this case, the LCD is $21n(n-4)$.
The question calls for integers and the only integer solution is $n=7$. Hence disregard 6/5. Use the expression $n−4$ to find the smaller integer.
Answer: The two positive integers are 3 and 7. The check is left to the reader.
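If a computer algebra system is at hand, the setup in Example 1 can be checked directly. The snippet below (not part of the original text) uses the sympy library.

```python
import sympy as sp

n = sp.symbols('n')

# Sum of the reciprocals of n and n - 4 equals 10/21
equation = sp.Eq(1 / n + 1 / (n - 4), sp.Rational(10, 21))

print(sp.solve(equation, n))  # roots 6/5 and 7; only n = 7 is an integer, giving the pair 3 and 7
```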
Example 2: A positive integer is 4 less than another. If the reciprocal of the smaller integer is subtracted from twice the reciprocal of the larger, then the result is 1/30. Find the two integers.
Solution:
Set up an algebraic equation.
Solve this rational equation by multiplying both sides by the LCD. The LCD is $30n(n-4)$.
Here we have two viable possibilities for the larger integer. For this reason, we have two solutions to this problem.
As a check, perform the operations indicated in the problem.
Answer: Two sets of positive integers solve this problem: {6, 10} and {20, 24}.
Try this! The difference between the reciprocals of two consecutive positive odd integers is 2/15. Find the integers.
Answer: The integers are 3 and 5.
Uniform Motion Problems
Uniform motion problems, also referred to as distance problems, involve the formula
$D=rt$
where the distance, D, is given as the product of the average rate, r, and the time, t, traveled at that rate. If we divide both sides by the average rate, r, then we obtain the formula
$t=\frac{D}{r}$
For this reason, when the unknown quantity is time, the algebraic setup for distance problems often results in a rational equation. Similarly, when the unknown quantity is the rate, the setup also may result in a rational equation.
We begin any uniform motion problem by first organizing our data with a chart. Use this information to set up an algebraic equation that models the application.
Example 5: Mary spent the first 120 miles of her road trip in traffic. When the traffic cleared, she was able to drive twice as fast for the remaining 300 miles. If the total trip took 9 hours, then how fast was she moving in traffic?
Solution: First, identify the unknown quantity and organize the data.
To avoid introducing two more variables for the time column, use the formula $t=\frac{D}{r}$. Here the time for each leg of the trip is calculated as follows:
Use these expressions to complete the chart.
The algebraic setup is defined by the time column. Add the times for each leg of the trip to obtain a total of 9 hours:
$\frac{120}{x}+\frac{300}{2x}=9$
We begin solving this equation by first multiplying both sides by the LCD, 2x.
Answer: Mary averaged 30 miles per hour in traffic.
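The same kind of check works for uniform motion setups. This illustrative snippet (ours, not the book's) solves the time equation from Example 5.

```python
import sympy as sp

x = sp.symbols('x', positive=True)  # average speed in traffic, in miles per hour

# 120 miles at speed x plus 300 miles at speed 2x took 9 hours in total
equation = sp.Eq(120 / x + 300 / (2 * x), 9)

print(sp.solve(equation, x))  # [30] -> Mary averaged 30 miles per hour in traffic
```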
Example 6: A passenger train can travel, on average, 20 miles per hour faster than a freight train. If the passenger train covers 390 miles in the same time it takes the freight train to cover 270 miles, then how fast is each train?
Solution: First, identify the unknown quantities and organize the data.
Next, organize the given data in a chart.
Use the formula $t=\frac{D}{r}$ to fill in the time column for each train.
Because the trains travel the same amount of time, finish the algebraic setup by equating the expressions that represent the times:
Solve this equation by first multiplying both sides by the LCD, $x(x+20)$.
Use x + 20 to find the speed of the passenger train.
Answer: The speed of the passenger train is 65 miles per hour and the speed of the freight train is 45 miles per hour.
Example 7: Brett lives on the river 8 miles upstream from town. When the current is 2 miles per hour, he can row his boat downstream to town for supplies and back in 3 hours. What is his average rowing speed in still water?
Solution:
Rowing downstream, the current increases his speed, and his rate is x + 2 miles per hour. Rowing upstream, the current decreases his speed, and his rate is x − 2 miles per hour. Begin by organizing the data in the following chart:
Use the formula $t=\frac{D}{r}$ to fill in the time column for each leg of the trip.
The algebraic setup is defined by the time column. Add the times for each leg of the trip to obtain a total of 3 hours:
$\frac{8}{x+2}+\frac{8}{x-2}=3$
Solve this equation by first multiplying both sides by the LCD, $(x+2)(x−2)$.
Next, solve the resulting quadratic equation.
Use only the positive solution, $x=6$ miles per hour.
Answer: His rowing speed is 6 miles per hour.
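Because the current appears in both denominators, Example 7 reduces to a quadratic equation. A brief sympy check (ours) of that step:

```python
import sympy as sp

x = sp.symbols('x')  # rowing speed in still water, in miles per hour

# 8 miles downstream at x + 2 plus 8 miles upstream at x - 2 took 3 hours
equation = sp.Eq(8 / (x + 2) + 8 / (x - 2), 3)

print(sp.solve(equation, x))  # roots -2/3 and 6; only the positive root is physical: 6 mph
```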
Try this! Dwayne drove 18 miles to the airport to pick up his father and then returned home. On the return trip he was able to drive an average of 15 miles per hour faster than he did on the trip there. If the total driving time was 1 hour, then what was his average speed driving to the airport?
Answer: His average speed driving to the airport was 30 miles per hour.
Work-Rate Problems
The rate at which a task can be performed is called a work rate. For example, if a painter can paint a room in 8 hours, then the task is to paint the room, and we can write
$\text{work rate}=\frac{1\text{ task}}{8\text{ hours}}=\frac{1}{8}\text{ task per hour}$
In other words, the painter can complete $\frac{1}{8}$ of the task per hour. If he works for less than 8 hours, then he will perform a fraction of the task. For example,
Obtain the amount of the task completed by multiplying the work rate by the amount of time the painter works. Typically, work-rate problems involve people working together to complete tasks. When this is the case, we can organize the data in a chart, just as we have done with distance problems.
Suppose an apprentice painter can paint the same room by himself in 10 hours. Then we say that he can complete $\frac{1}{10}$ of the task per hour. Let t represent the time it takes both of the painters, working together, to paint the room.
To complete the chart, multiply the work rate by the time for each person. The portion of the room each can paint adds to a total of 1 task completed. This is represented by the equation obtained from the first column of the chart:
$\frac{1}{8}t+\frac{1}{10}t=1$
This setup results in a rational equation that can be solved for t by multiplying both sides by the LCD, 40.
Therefore, the two painters, working together, complete the task in $4\frac{4}{9}$ hours.
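The combined work-rate equation for the two painters can be verified the same way; the following snippet (ours) solves it directly.

```python
import sympy as sp

t = sp.symbols('t', positive=True)  # hours needed when both painters work together

# The painter completes 1/8 of the room per hour, the apprentice 1/10 per hour
equation = sp.Eq(t / 8 + t / 10, 1)

print(sp.solve(equation, t))  # [40/9] -> 40/9 hours, that is 4 and 4/9 hours
```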
In general, we have the following work-rate formula:
$\frac{1}{t_1}\cdot t+\frac{1}{t_2}\cdot t=1$
Here $\frac{1}{t_1}$ and $\frac{1}{t_2}$ are the individual work rates and t is the time it takes to complete one task working together. If we factor out the time, t, and then divide both sides by t, we obtain an equivalent work-rate formula:
$\frac{1}{t_1}+\frac{1}{t_2}=\frac{1}{t}$
In summary, we have the following equivalent work-rate formulas:
Example 3: Working alone, Billy’s dad can complete the yard work in 3 hours. If Billy helps his dad, then the yard work takes 2 hours. How long would it take Billy working alone to complete the yard work?
Solution: The given information tells us that Billy’s dad has an individual work rate of $\frac{1}{3}$ task per hour. If we let x represent the time it takes Billy working alone to complete the yard work, then Billy’s individual work rate is $\frac{1}{x}$, and we can write
Working together, they can complete the task in 2 hours. Multiply the individual work rates by 2 hours to fill in the chart.
The amount of the task each completes will total 1 completed task. To solve for x, we first multiply both sides by the LCD, 3x.
Answer: It takes Billy 6 hours to complete the yard work alone.
Of course, the unit of time for the work rate need not always be in hours.
Example 4: Working together, two construction crews can build a shed in 5 days. Working separately, the less experienced crew takes twice as long to build a shed than the more experienced crew. Working separately, how long does it take each crew to build a shed?
Solution:
Working together, the job is completed in 5 days. This gives the following setup:
The first column in the chart gives us an algebraic equation that models the problem:
Solve the equation by multiplying both sides by 2x.
To determine the time it takes the less experienced crew, we use 2x:
Answer: Working separately, the experienced crew takes 7½ days to build a shed, and the less experienced crew takes 15 days to build a shed.
Try this! Joe’s garden hose fills the pool in 12 hours. His neighbor has a thinner hose that fills the pool in 15 hours. How long will it take to fill the pool using both hoses?
Answer: It will take both hoses $6\frac{2}{3}$ hours to fill the pool.
Key Takeaways
• In this section, all of the steps outlined for solving general word problems apply. Look for the new key word “reciprocal,” which indicates that you should write the quantity in the denominator of a fraction with numerator 1.
• When solving distance problems where the time element is unknown, use the equivalent form of the uniform motion formula, $t=\frac{D}{r}$, to avoid introducing more variables.
• When solving work-rate problems, multiply the individual work rate by the time to obtain the portion of the task completed. The sum of the portions of the task results in the total amount of work completed.
Topic Exercises
Part A: Number Problems
Use algebra to solve the following applications.
1. A positive integer is twice another. The sum of the reciprocals of the two positive integers is 3/10. Find the two integers.
2. A positive integer is twice another. The sum of the reciprocals of the two positive integers is 3/12. Find the two integers.
3. A positive integer is twice another. The difference of the reciprocals of the two positive integers is 1/8. Find the two integers.
4. A positive integer is twice another. The difference of the reciprocals of the two positive integers is 1/18. Find the two integers.
5. A positive integer is 2 less than another. If the sum of the reciprocal of the smaller and twice the reciprocal of the larger is 5/12, then find the two integers.
6. A positive integer is 2 more than another. If the sum of the reciprocal of the smaller and twice the reciprocal of the larger is 17/35, then find the two integers.
7. The sum of the reciprocals of two consecutive positive even integers is 11/60. Find the two even integers.
8. The sum of the reciprocals of two consecutive positive odd integers is 16/63. Find the integers.
9. The difference of the reciprocals of two consecutive positive even integers is 1/24. Find the two even integers.
10. The difference of the reciprocals of two consecutive positive odd integers is 2/99. Find the integers.
11. If 3 times the reciprocal of the larger of two consecutive integers is subtracted from 2 times the reciprocal of the smaller, then the result is 1/2. Find the two integers.
12. If 3 times the reciprocal of the smaller of two consecutive integers is subtracted from 7 times the reciprocal of the larger, then the result is 1/2. Find the two integers.
13. A positive integer is 5 less than another. If the reciprocal of the smaller integer is subtracted from 3 times the reciprocal of the larger, then the result is 1/12. Find the two integers.
14. A positive integer is 6 less than another. If the reciprocal of the smaller integer is subtracted from 10 times the reciprocal of the larger, then the result is 3/7. Find the two integers.
Part B: Uniform Motion Problems
Use algebra to solve the following applications.
15. James can jog twice as fast as he can walk. He was able to jog the first 9 miles to his grandmother’s house, but then he tired and walked the remaining 1.5 miles. If the total trip took 2 hours, then what was his average jogging speed?
16. On a business trip, an executive traveled 720 miles by jet aircraft and then another 80 miles by helicopter. If the jet averaged 3 times the speed of the helicopter and the total trip took 4 hours, then what was the average speed of the jet?
17. Sally was able to drive an average of 20 miles per hour faster in her car after the traffic cleared. She drove 23 miles in traffic before it cleared and then drove another 99 miles. If the total trip took 2 hours, then what was her average speed in traffic?
18. Harry traveled 15 miles on the bus and then another 72 miles on a train. If the train was 18 miles per hour faster than the bus and the total trip took 2 hours, then what was the average speed of the train?
19. A bus averages 6 miles per hour faster than a trolley. If the bus travels 90 miles in the same time it takes the trolley to travel 75 miles, then what is the speed of each?
20. A passenger car averages 16 miles per hour faster than the bus. If the bus travels 56 miles in the same time it takes the passenger car to travel 84 miles, then what is the speed of each?
21. A light aircraft travels 2 miles per hour less than twice as fast as a passenger car. If the passenger car can travel 231 miles in the same time it takes the aircraft to travel 455 miles, then what is the average speed of each?
22. Mary can run 1 mile per hour more than twice as fast as Bill can walk. If Bill can walk 3 miles in the same time it takes Mary to run 7.2 miles, then what is Bill’s average walking speed?
23. An airplane traveling with a 20-mile-per-hour tailwind covers 270 miles. On the return trip against the wind, it covers 190 miles in the same amount of time. What is the speed of the airplane in still air?
24. A jet airliner traveling with a 30-mile-per-hour tailwind covers 525 miles in the same amount of time it is able to travel 495 miles after the tailwind eases to 10 miles per hour. What is the speed of the airliner in still air?
25. A boat averages 16 miles per hour in still water. With the current, the boat can travel 95 miles in the same time it travels 65 miles against it. What is the speed of the current?
26. A river tour boat averages 7 miles per hour in still water. If the total 24-mile tour downriver and 24 miles back takes 7 hours, then how fast is the river current?
27. If the river current flows at an average 3 miles per hour, then a tour boat makes the 9-mile tour downstream with the current and back the 9 miles against the current in 4 hours. What is the average speed of the boat in still water?
28. Jane rowed her canoe against a 1-mile-per-hour current upstream 12 miles and then returned the 12 miles back downstream. If the total trip took 5 hours, then at what speed can Jane row in still water?
29. Jose drove 15 miles to pick up his sister and then returned home. On the return trip, he was able to average 15 miles per hour faster than he did on the trip to pick her up. If the total trip took 1 hour, then what was Jose’s average speed on the return trip?
30. Barry drove the 24 miles to town and then back in 1 hour. On the return trip, he was able to average 14 miles per hour faster than he averaged on the trip to town. What was his average speed on the trip to town?
31. Jerry paddled his kayak upstream against a 1-mile-per-hour current for 12 miles. The return trip downstream with the 1-mile-per-hour current took 1 hour less time. How fast can Jerry paddle the kayak in still water?
32. It takes a light aircraft 1 hour more time to fly 360 miles against a 30-mile-per-hour headwind than it does to fly the same distance with it. What is the speed of the aircraft in calm air?
Part C: Work-Rate Problems
Use algebra to solve the following applications.
33. James can paint the office by himself in 7 hours. Manny paints the office in 10 hours. How long will it take them to paint the office working together?
34. Barry can lay a brick driveway by himself in 12 hours. Robert does the same job in 10 hours. How long will it take them to lay the brick driveway working together?
35. Jerry can detail a car by himself in 50 minutes. Sally does the same job in 1 hour. How long will it take them to detail a car working together?
36. Jose can build a small shed by himself in 26 hours. Alex builds the same small shed in 2 days. How long would it take them to build the shed working together?
37. Allison can complete a sales route by herself in 6 hours. Working with an associate, she completes the route in 4 hours. How long would it take her associate to complete the route by herself?
38. James can prepare and paint a house by himself in 5 days. Working with his brother, Bryan, they can do it in 3 days. How long would it take Bryan to prepare and paint the house by himself?
39. Joe can assemble a computer by himself in 1 hour. Working with an assistant, he can assemble a computer in 40 minutes. How long would it take his assistant to assemble a computer working alone?
40. The teacher’s assistant can grade class homework assignments by herself in 1 hour. If the teacher helps, then the grading can be completed in 20 minutes. How long would it take the teacher to grade the papers working alone?
41. A larger pipe fills a water tank twice as fast as a smaller pipe. When both pipes are used, they fill the tank in 5 hours. If the larger pipe is left off, then how long would it take the smaller pipe to fill the tank?
42. A newer printer can print twice as fast as an older printer. If both printers working together can print a batch of flyers in 45 minutes, then how long would it take the newer printer to print the batch working alone?
43. Working alone, Henry takes 9 hours longer than Mary to clean the carpets in the entire office. Working together, they clean the carpets in 6 hours. How long would it take Mary to clean the office carpets if Henry were not there to help?
44. Working alone, Monique takes 4 hours longer than Audrey to record the inventory of the entire shop. Working together, they take inventory in 1.5 hours. How long would it take Audrey to record the inventory working alone?
45. Jerry can lay a tile floor in 3 hours less time than Jake. If they work together, the floor takes 2 hours. How long would it take Jerry to lay the floor by himself?
46. Jeremy can build a model airplane in 5 hours less time than his brother. Working together, they need 6 hours to build the plane. How long would it take Jeremy to build the model airplane working alone?
47. Harry can paint a shed by himself in 6 hours. Jeremy can paint the same shed by himself in 8 hours. How long will it take them to paint two sheds working together?
48. Joe assembles a computer by himself in 1 hour. Working with an assistant, he can assemble 10 computers in 6 hours. How long would it take his assistant to assemble 1 computer working alone?
49. Jerry can lay a tile floor in 3 hours, and his assistant can do the same job in 4 hours. If Jerry starts the job and his assistant joins him 1 hour later, then how long will it take to lay the floor?
50. Working alone, Monique takes 6 hours to record the inventory of the entire shop, while it takes Audrey only 4 hours to do the same job. How long will it take them working together if Monique leaves 2 hours early?
1: {5, 10}
3: {4, 8}
5: {6, 8}
7: {10, 12}
9: {6, 8}
11: {1, 2} or {−4, −3}
13: {4, 9} or {15, 20}
15: 6 miles per hour
17: 46 miles per hour
19: Trolley: 30 miles per hour; bus: 36 miles per hour
21: Passenger car: 66 miles per hour; aircraft: 130 miles per hour
23: 115 miles per hour
25: 3 miles per hour
27: 6 miles per hour
29: 40 miles per hour
31: 5 miles per hour
33: $4\frac{2}{17}$ hours
35: $27\frac{3}{11}$ minutes
37: 12 hours
39: 2 hours
41: 15 hours
43: 9 hours
45: 3 hours
47: $6\frac{6}{7}$ hours
49: $2\frac{1}{7}$ hours
Study Aids | 2013-05-18 22:27:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 29, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5943122506141663, "perplexity": 695.4498241724954}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368696382917/warc/CC-MAIN-20130516092622-00088-ip-10-60-113-184.ec2.internal.warc.gz"} |
http://mathoverflow.net/feeds/question/22843 | # Real primitive of a complex form on a CR manifold
Asked by Andrea Altomani on MathOverflow (April 2010): http://mathoverflow.net/questions/22843/real-primitive-of-a-complex-form-on-a-cr-manifold
I am looking for a characterization of (0,1)-forms on a CR manifold M that admit a real primitive, i.e. those that can be written as
$\omega=\overline\partial_M f$
for a real function f.
If M is a complex manifold, by expanding ddf=0 one obtains the following characterization:
$\overline\partial\omega=0$, $\partial\overline\omega=0$, $\partial\omega+\overline\partial\overline\omega=0$.
In the CR case however, there is not a good substitute for $\partial$, and also the symmetry between (1,0)-forms and (0,1)-forms fails.
Edit: One easy condition is $\overline\partial_M\omega=0$. In general it is a difficult problem even to say if $\omega=\overline\partial_M g$ for some complex function g.
My question should be rephrased as follows: Assuming that there exists a complex solution g to $\omega=\overline\partial_M g$, when is it possible to choose g real? | 2013-05-25 09:29:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8277302980422974, "perplexity": 2862.4692788592074}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705884968/warc/CC-MAIN-20130516120444-00093-ip-10-60-113-184.ec2.internal.warc.gz"}
http://stats.stackexchange.com/questions/7286/can-i-change-the-proposal-distribution-in-random-walk-mh-mcmc-without-affecting | # Can I change the proposal distribution in random-walk MH MCMC without affecting Markovianity?
Random-walk Metropolis-Hastings with symmetric proposal
$q(x|y)= g(|y-x|)$ has the property that the acceptance probability
$$P(accept\ y) = \min\{1, f(y)/f(x)\}$$
does not depend on proposal $g(\cdot)$.
Does that mean that I can change the $g(\cdot)$ as a function of previous performance of the chain, without affecting the markovianity of the chain?
Of particular interest to me is the adjustment of the scaling of Normal proposal as a function of acceptance rate.
Would also greatly appreciate if someone can point out to the adaptation algorithms used in practice for this type of problem.
Many thanks.
[edit: Starting with the references given by robertsy and wok I found the following references on MH adaptive algorithms:
Andrieu, Christophe, and Éric Moulines. 2006.
On the Ergodicity Properties of Some Adaptive MCMC Algorithms. The Annals of Applied Probability 16, no. 3: 1462-1505. http://www.jstor.org/stable/25442804.
Andrieu, Christophe, and Johannes Thoms.
2008. A tutorial on adaptive MCMC. Statistics and Computing 18, no. 4 (12): 343-373. doi:10.1007/s11222-008-9110-y. http://www.springerlink.com/content/979087678366r78v/.
Atchadé, Y., G. Fort, E. Moulines, and P. Priouret. 2009.
Adaptive Markov Chain Monte Carlo: Theory and Methods. Preprint.
Limit theorems for some adaptive MCMC algorithms with subgeometric kernels. Bernoulli 16, no. 1 (February): 116-154. doi:10.3150/09-BEJ199. http://projecteuclid.org/DPubS?verb=Display&version=1.0&service=UI&handle=euclid.bj/1265984706&page=record.
Cappé, O., S. J Godsill, and E. Moulines. 2007.
An overview of existing methods and recent advances in sequential Monte Carlo. Proceedings of the IEEE 95, no. 5: 899-924.
Giordani, Paolo. 2010.
Adaptive Independent Metropolis–Hastings by Fast Estimation of Mixtures of Normals. Journal of Computational and Graphical Statistics 19, no. 2 (6): 243-259. doi:10.1198/jcgs.2009.07174. http://pubs.amstat.org/doi/abs/10.1198/jcgs.2009.07174.
Latuszynski, Krzysztof, Gareth O Roberts, and Jeffrey S Rosenthal. 2011.
Adaptive Gibbs samplers and related MCMC methods. 1101.5838 (January 30). http://arxiv.org/abs/1101.5838.
Pasarica, C., and A. Gelman. 2009.
Adaptively scaling the Metropolis algorithm using expected squared jumped distance. Statistica Sinica.
Roberts, Gareth O. 2009.
Examples of Adaptive MCMC. Journal of Computational and Graphical Statistics 18, no. 2 (6): 349-367. doi:10.1198/jcgs.2009.06134. http://pubs.amstat.org/doi/abs/10.1198/jcgs.2009.06134.
]
How come you don't have +100 bonus from your SO record? – mbq Feb 16 '11 at 10:57
@mbq, probably because I created this account long ago when I was 0 on OS as well...pity, 100 on CW looks like a big deal, since you must be a real chap to answer stuff in here :) – VitoshKa Feb 16 '11 at 11:11
You can get the bonus by clearing all associations and then associating accounts again. – wok Feb 16 '11 at 15:37
thanks @wok, nice to be reach:) – VitoshKa Feb 16 '11 at 16:15
I think that this paper from Heikki Haario et al. will give you the answer you need. The markovianity of the chain is affected by the adaptation of the proposal density, because then a new proposed value depends not only on the previous one but on the whole chain. But it seems that the sequence still has the good properties if great care is taken.
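To make the idea concrete, here is a minimal sketch (ours, not from the Haario et al. paper) of a random-walk Metropolis sampler whose Normal proposal scale is tuned during an adaptation phase toward a target acceptance rate (0.44 is a common one-dimensional rule of thumb); freezing the scale after adaptation keeps the remaining draws a genuine Markov chain.

```python
import numpy as np

def adaptive_rw_metropolis(log_f, x0, n_samples, n_adapt=1000,
                           target_accept=0.44, scale=1.0, seed=0):
    """Random-walk Metropolis with Normal proposals.

    The proposal scale is adjusted only during the first n_adapt
    iterations (Robbins-Monro style), then frozen, so the later
    samples come from a fixed Markov transition kernel.
    """
    rng = np.random.default_rng(seed)
    x = float(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        proposal = x + scale * rng.standard_normal()
        accept = np.log(rng.uniform()) < log_f(proposal) - log_f(x)
        if accept:
            x = proposal
        if i < n_adapt:
            # nudge the scale up after acceptances, down after rejections
            scale *= np.exp((accept - target_accept) / np.sqrt(i + 1))
        samples[i] = x
    return samples

# Example: sample a standard Normal target
draws = adaptive_rw_metropolis(lambda z: -0.5 * z**2, x0=0.0, n_samples=5000)
print(draws[1000:].mean(), draws[1000:].std())  # roughly 0 and 1
```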
thanks robertsy, for good reference. indeed the process is not markov. Even if acceptance probability is independent of the past, the transition kernel of the process is a function of the proposal density and thus depends on the whole chain. – VitoshKa Feb 16 '11 at 16:07 | 2013-12-07 14:05:52 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7496132850646973, "perplexity": 2529.153709314379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163054576/warc/CC-MAIN-20131204131734-00010-ip-10-33-133-15.ec2.internal.warc.gz"} |
https://www.nature.com/articles/s41598-017-16307-3?error=cookies_not_supported&code=2ef78a54-c8c3-4e47-b7d4-064fd7c35295 |
# Beliefs about Others’ Abilities Alter Learning from Observation
## Abstract
Learning what is dangerous by observing others can be safer and more efficient than individual learning. The efficiency of observational learning depends on how observational information is used, something we propose depends on our beliefs about others. Here, we investigated how described and actual abilities of another individual (a demonstrator) influenced performance and psychophysiology during learning of an observational avoidance task. Participants were divided into two groups. In each group there were two demonstrators who were described as either high (Described-High group) or low (Described-Low group) in their ability to learn the task. In both groups, one demonstrator had a high ability (Actual-High) and the other had a low ability (Actual-Low) to learn. Participants performed worse in the Described-Low compared to the Described-High group. Pupil dilation, and behavioral data in combination with reinforcement learning modeling, suggested that the described ability influenced performance by affecting the level of attention towards the observational information. Skin conductance responses and pupil dilation provided us with a separate measure of learning in addition to choice behavior.
## Introduction
Learning to avoid harm is key to survival. A core feature in humans and other species is the ability to learn such information by observing the behavior of fellow individuals1,2,3,4. Although learning which choices to make in order to avoid harm by observing others is often safer than learning by individual trial and error, the usefulness of observational learning depends on both the informational content of the observed behavior as well as if and how this information is used by the observer. Whereas the former is often a function of the behavior of the observed individual, the latter critically depends on the observer’s beliefs and expectations. Here, we examined how prior beliefs about others’ abilities to avoid potentially harmful consequences affect how we learn from those individuals, and how such learning is influenced by their actual performance. To address this fundamental question, we used a novel experimental model of observational instrumental learning in a potentially dangerous environment. We measured behavior and psychophysiological responses, and applied reinforcement learning modelling to examine the mechanisms mediating the impact of prior beliefs and observational information on learning to avoid punishment (mild electric shocks).
Humans often hold prior beliefs about the abilities of others5, and these priors might help to direct attention and shape expectations in social situations with relevance for learning. For example, people are more prone to copying the behaviors of others when these are more prestigious6. Children copy individuals more if these individuals are proficient, or believed to be proficient, and more if they have a high rather than low status7. There is a large body of work arguing for the need for social learning to be selective in order to be adaptive, proposing several possible social learning strategies such as payoff biased and prestige biased learning (see e.g.8,9,10), further supporting the importance of prior beliefs about others’ abilities in social learning. Prior beliefs of someone’s ability can be the result of direct observation of that person’s behavior but impressions are often formed by verbal descriptions11,12. However, to what extent the knowledge of someone’s ability improves observational learning from that person depends on if learning is based on copying or on associating the observed choices with their outcomes. As an example, if you are on vacation in a new city trying to figure out at which restaurant to have dinner, copying the observed choices of the experts, i.e. the locals, is probably a better idea than copying other tourists. However, if you base your choice on observation of others’ reactions while eating at a certain restaurant (e.g. expressing content or disgust), then knowing their level of expertise is less useful. Importantly, successful learning here not only relies on having access to valuable observational information about choices and outcomes but also depends on how information is used (e.g. copied or not). From a performance perspective, learning by associating the observed choices with their outcomes, here referred to as observational associative learning, will improve performance regardless of the ability of the observed demonstrator. Learning through copying improves performance only when the demonstrator is more likely than the observer to select the most optimal choice. The need for copying to be selective has led to the proposition of several adaptive social learning strategies such as payoff biased or prestige biased learning9, strategies for which there is empirical evidence in both humans13 and non-human animals14,15. Learning either through observational associative learning or copying comes with specific strengths and weaknesses making the two forms of observational learning more or less suitable in different situations. Observational associative learning would require you to attend to both choices and outcomes; it could be slower and require more effort. Learning by copying can be fast and easy but it can also be vulnerable to environmental changes and relies on the ability of the demonstrator. The pros and cons of the different strategies are thus similar to those often proposed when contrasting copying with asocial, individual, learning16,17, where copying is regarded as fast but sensitive to spatial and/or temporal changes and individual learning is regarded as slower but more accurate.
Here, we use a novel experimental paradigm to investigate how the described and the actual abilities of a demonstrator influence observational learning in a simple avoidance task. The assumption that the informational value of a demonstrator’s choices is dependent on the ability of the demonstrator has been argued to lead to beneficial payoff19 and prestige10 biases in copying of behavior. Previous findings have shown preferential attention to individuals of higher rank during observational learning in chimpanzees23. There are also findings pointing to a positivity bias within the ability domain during impression formation24. The bias shows that more attention and weight is assigned to information about others that conveyed positive rather than negative ability information (e.g. more weight is assigned to information that someone is skilled rather than unskilled). This is in line with a view of observational social information as more valuable when the person we observe is high, rather than low, in ability. Based on these previous findings we hypothesize that people would pay less attention to the behavior of a demonstrator described as low, as compared to high, in ability. This hypothesis is in line with theories on payoff biased and prestige biased social learning strategies8. However, based on our previous findings that participants learn through observational associative learning rather than copying when this is possible18, we predict that this decrease in attention would lead to worse performance in the Described-Low compared to the Described-High group. From an objective perspective, information about the ability of the demonstrator should have little or no effect on the level of attention directed towards the demonstrator’s choices and outcomes in the present study, since the ability of the demonstrator has very little effect on the value of observable information in the paradigm we are using. An attentional bias towards skilled or prestigious individuals can however be beneficial in other situations, for instance when learning by copying or in more complex tasks. Evidence of such bias in the present task could indicate that individuals erroneously generalize from other situations.
Varying the actual ability of a demonstrator changes the observer’s learning task; the observational information from a demonstrator behaving randomly more quickly provides a fuller view of the choice-outcome space, while the observational information from a demonstrator that attempts to minimize damage/loss provides a biased view of the choice-outcome space. Furthermore, the choices a demonstrator makes are more predictable when he/she has a high, as compared to a low, ability to learn, making observational learning somewhat more cognitively demanding during observation of a demonstrator with low ability (given that low ability to learn is defined as more random). Based on previous findings18 as well as simulations of the task (see SI), we do not expect any effect of actual ability on performance due to either the difference in informational value or the difference in predictability. Still, if a demonstrator’s actual ability affects how cognitively demanding the observational learning task is, we might see an interaction between actual ability and level of attention, which we hypothesize is driven by described ability. Thus, when attention is at a sufficiently low level, we could expect an effect on performance driven by how cognitively demanding the task is. However, it is unclear how strong such an interaction between attention, driven by described ability, and cognitive demand, driven by actual ability, would be and under which levels of attention we might see an effect. We are therefore open to the possibility that described ability could interact with actual ability. Consequently, depending on the level of attention, we cannot rule out the possibility of a main effect of actual ability.
Even though copying of poor behavior is counterproductive and should be avoided (but see25 for an example showing the surprising efficiency of copying-heavy strategies), a decrease in the ability to observationally learn from the experiences of poor performing others can be dangerous as well. It has been argued that attentionally biased observational learning can explain false beliefs of effective management in organizations26 and that similar mechanisms with regards to impression formation could account for the persistence of group stereotypes27.
To investigate effects on attention we measured pupil dilation responses which are sensitive to shifts in allocation of attentional resources28,29 and the level of cognitive load30,31. Pupil dilation responses have been used to measure surprise during learning32. We used reinforcement learning modeling to explore in more detail how learning was affected by the described ability. In addition, measures of skin conductance responses, SCRs, which captures changes in autonomic arousal33, gaze behavior and pupil dilation responses were used to validate the learning model by serving as psychophysiological indices of learning. Skin conductance responses are commonly used as a measure of learning in humans, often in fear conditioning studies34, where the arousal response serves as a proxy for learned fear. Skin conductance responses have also been used to study attentional processes and decision making35, for instance to capture the anticipation of an outcome with significant consequence. The inclusion of psychophysiological measures provides us with an additional measure of learning, separate from choice behavior.
## Results
### Described ability affected performance
Using logistic generalized mixed modeling with performance (Optimal/Suboptimal) as the dependent variable and a maximal random effect structure36, we saw, as predicted, an effect of described ability (χ2(1) = 4.82, p = 0.028) caused by higher performance for the Described-High group compared to the Described-Low group (Estimate = 0.044, SE = 0.20, p = 0.028), as well as a positive effect of trial (χ2(1) = 17.44, p < 0.001; Estimate = 0.14, SE = 0.03, p < 0.001) reflecting the learning curve, while there was no effect of actual ability (χ2(1) = 0.31, p = 0.57; Actual-Low: Estimate = −0.067, SE = 0.12, p = 0.57), see Fig. 2. Including an interaction between described and actual ability in the model did not significantly improve model fit (p = 0.083). By comparing our model against a simple model with trial as the only predictor we showed that our model was significantly better (p = 0.011). An explorative analysis to investigate the effect of sex showed that men performed better than women (Estimate = 0.50, SE = 0.20, p = 0.012). Including sex as a predictor increased model fit (p = 0.014) but did not alter our results qualitatively. For detailed model descriptions, analyses and model comparisons see SI.
### Described ability influenced the perception of the demonstrators
After each block, participants estimated the number of shocks given to the demonstrator and to themselves. A linear mixed model (LMM) modeling the effects of described and actual ability on the absolute deviation between estimated and actually delivered number of shocks, showed that participants were significantly more accurate during observation of Described-High demonstrators compared to the Described-Low demonstrators when rating the number of shocks delivered to the demonstrator (χ 2(1) = 4.32, p = 0.04). This suggests that described ability influenced the level of attention paid to the demonstrators, in turn affecting how accurately they reported the number of shocks given to the demonstrator. To investigate if this could be linked to performance we conducted an LMM, modeling the effect of the absolute deviation between rated and actually delivered number of shocks on mean performance per block. The analysis showed that participants’ performance was worse during blocks where shock estimations were less accurate (χ 2(1) = 5.77, p = 0.02). These results further supported our conjecture that described ability affects performance by influencing attention since low attention can be expected to lead to more mistakes in reporting occurred events.
To confirm that the described ability influenced the perceived ability of the demonstrators, an ordered logistic regression analysis showed that Described-High demonstrators were rated higher on performance compared to Described-Low (χ 2(1) = 124.98, p < 0.001), in addition to Actual-High demonstrators being rated higher on performance than Actual-Low (χ 2(1) = 8.20, p = 0.004). The ratings were carried out after the experiment when participants were asked to rate demonstrator’s performance on a five-graded scale (ranging from 1 = Very poor performance to 5 = Very good performance).
### Pupil dilation responses were sensitive to described ability
We analyzed pupil dilation responses using growth curve analysis37 to investigate the effects on attention in our paradigm beyond behavioral choice data (see SI for details). Changes in pupil dilation are, at least partly, regulated by the locus coeruleus believed to mediate allocation of attentional resources31 and have been linked to learning processes32,38. Changes in pupil dilation responses have been shown to affect the influence of information on existing beliefs39. Cognitive control and allocation of attentional resources can be both proactive, in preparation of an upcoming event, and reactive, following an event40. We hypothesize that proactive increase of attention as measured by proactive pupil dilation responses would facilitate succeeding processing of information, in our case learning from observation of the demonstrator’s choice and the outcome following the choice.
### Proactive pupil dilation responses were linked to performance
Effects of proactive attention were investigated by analyzing the pupil dilation responses using growth curve analysis, GCA37, during the “go phase” preceding the demonstrator’s choice and in the 1s time-window preceding the ensuing outcome. In the”go phase” time-window preceding the demonstrator’s choice, overall pupil dilation responses were larger during observation of the Described-High demonstrator compared to the Described-Low (p = 0.04), see Fig. 3a, indicating a preferential attendance to the choices made by a Described-High demonstrator and supporting the hypothesis that describing the demonstrator’s ability as high increases the level of attention towards that demonstrator. In the time-window preceding the outcome of the demonstrators’ choices overall pupil dilation responses were larger during observation of an Actual-Low compared to an Actual-High demonstrator (p = 0.04), see Fig. 3b, indicating that more attention was directed towards the outcome following a choice made by a demonstrator described as low rather than high in ability. (See SI for model fits and parameter estimates.) Next, we investigated the effect of proactive responses on performance. To do this, we calculated a trial-wise index of attention (Atttotal), a measure of the total amount of proactive attention directed at observational information (i.e. choice and outcome) per trial. Atttotal was calculated as the sum of the normalized mean pupil dilation responses preceding both the demonstrator’s choice (Attchoice) and the outcome of that choice (Attoutcome) (see SI for details). An LMM modeling the effects of proactive attention on performance while controlling for trial order revealed an interaction between Atttotal and actual ability (χ 2 (1) = 6.41, p = 0.01). Follow-up analyses showed that this interaction was driven by a positive effect of attention, as measured by Atttotal, on performance during observation of an Actual-Low demonstrator (β = 0.088, SE = 0.059, p = 0.14), in contrast to a negative effect during observation of an Actual-High demonstrator (β = −0.069, SE = 0.059, p = 0.24), as would be expected if learning from an Actual-Low demonstrator is more cognitively demanding than learning from an Actual-High. In addition, higher mean Attoutcome per block was associated with better accuracy in rating the number of shocks given to the demonstrator in that block (χ 2 (1) = 5.90, p = 0.02), further corroborating the interpretation of pupil dilation responses as affected by attention.
To summarize, we first showed that describing the demonstrator as high in ability, rather than low, increased proactive attention preceding the demonstrator’s choice. These results are in line with our hypotheses based on findings that people with a high ability attract more attention than those with a low ability41. Secondly, we showed that higher levels of proactive attention preceding both the demonstrator’s choice and the outcome increased performance when the actual ability of the demonstrator was low but not when it was high.
### Reinforcement learning models how described ability affects learning
To investigate the observational learning process on a trial-by-trial basis we analyzed participants’ choices using reinforcement learning (RL) modeling. We used models based on the Q-learning algorithm, where the expected value of a choice is updated proportional to the difference between the outcome and the expected value of the choice, the prediction error, and a learning rate42,43. RL modeling allowed us to investigate if and how participants used observational information: whether they observed the demonstrator’s choices and outcomes to update the expected values of choices (here observational learning) or whether they appeared to copy the behavior of the demonstrators. We fitted twenty-four RL models that differed systematically in how observational information was used: whether or not the model included observational associative learning and/or imitation, and whether or not parameters were fitted separately for each within-participant condition (Actual-Low/Actual-High). Models were compared using AIC weights (Akaike Information Criterion weights), which are interpreted as measures of the probability of each model being the best, compared to the other models, based on their expected predictions on new data44 (see SI for details on models and model fits). Model comparisons show strong support for models which included observational learning compared to models which only use individual learning or individual learning paired with copying (sum of mean AIC weights for models which included vicarious reinforcement = 0.923, sum of mean AIC weights for models which did not include vicarious reinforcement = 0.077). Among the models which included observational learning, no model clearly stood out as the best model. Based on each model’s mean AIC weight per participant and mean ranking across all participants, we chose a simple model of observational learning as the best model. This model included observational learning but no copying of the demonstrators’ choices. According to the model, each choice is represented by a value Q, reflecting the expected outcome of making that choice. During observation of the demonstrator, Q-values are updated according to the following equation, where t denotes the trial number (to clarify that Q-values are updated twice during each trial, the trial number increases by 0.5 during observational learning), α is the learning rate, and the outcome is −1 for shock and 1 for omission of shock:
$$Q_{choice}(t-0.5) = Q_{choice}(t-1) + \alpha \cdot \big(outcome_{demonstrator} - Q_{choice}(t-1)\big) \qquad (1)$$
Next, the softmax activation function uses the Q-values to calculate the probability of making each choice. The choice associated with the higher Q-value will have a higher probability of being chosen, but this is controlled by the inverse temperature parameter β, which regulates how deterministic choices are (where low values indicate highly deterministic choices). The outcomes following individual choices are then used to update the Q-values again:
$$Q_{choice}(t) = Q_{choice}(t-0.5) + \alpha \cdot \big(outcome_{individual} - Q_{choice}(t-0.5)\big) \qquad (2)$$
Note that Q-values are updated in the same way, using the same learning rate, regardless of whether the outcome follows the demonstrator’s or the individual’s choice. In this model we have two free parameters, α and β, which we fitted to participants’ choices. The learning rate parameter α reflects how fast expected outcomes for available choices are updated. Suboptimal performance can arise from both exceedingly high and exceedingly low learning rates, resulting in too much or too little weight assigned to the latest piece of information. The inverse temperature parameter β is often interpreted as a measure of the tendency to explore, but is more correctly understood as a measure of how noisy choice behavior is.
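A minimal simulation sketch of this two-parameter model (Eqs. 1 and 2) is given below. The trial structure, variable names and the exact softmax parameterization are illustrative assumptions rather than the authors' code; outcomes are coded −1 for shock and 1 for omission, as in the text.

```python
# Sketch of the winning RL model: one learning rate alpha shared between
# observational and individual updates, softmax choice rule with parameter beta.
import numpy as np

def simulate_block(demo_trials, own_outcome_fn, alpha, beta, n_trials=8):
    Q = np.zeros(2)                        # expected value of each of the two choices
    choices = []
    for t in range(n_trials):
        # Observational update from the demonstrator's choice and its outcome
        d_choice, d_outcome = demo_trials[t]          # e.g. (0, -1)
        Q[d_choice] += alpha * (d_outcome - Q[d_choice])
        # Participant's choice via softmax; with p ~ exp(Q / beta), low beta
        # gives highly deterministic choices, matching the text's convention
        p = np.exp(Q / beta) / np.exp(Q / beta).sum()
        c = np.random.choice(2, p=p)
        o = own_outcome_fn(c)                         # -1 (shock) or 1 (no shock)
        Q[c] += alpha * (o - Q[c])                    # individual update
        choices.append(c)
    return choices
```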
Next, to further explore the mechanisms of the learning processes we analyzed the distributions of these fitted parameters. We did this by looking at how described ability affected the distribution of fitted parameters as categorized according to a cluster analysis of parameter combinations (see SI). The Described-High group appeared to have more fitted parameters belonging to a cluster with low values of both α and β, while the Described-Low group had medium to high values of α and low to medium levels of β. Although this pattern was not significant (p = 0.16), our findings could indicate that described ability possibly affects both how well observed information is integrated over time and how noisy choices are. In our experiment, poor performance could thus be caused both by an impaired ability to integrate information over time and by choices simply being more random. However, it is not always straightforward to interpret fitted parameter values in RL models, since it is difficult to distinguish the separate impacts on performance of the α and β parameters45, and we therefore wish to caution against relying too heavily on this finding. To validate the RL model we showed that often-used measures of surprise and learning, in the form of reactive pupil dilation responses32,38 and skin conductance responses which measure arousal34,35, were sensitive to model-derived observational prediction errors, both following the demonstrators’ choices and the outcomes of those choices. Pupil dilation and skin conductance responses were larger following more surprising events associated with higher absolute prediction errors. In addition, gaze patterns during presentation of the choices reflected the model-derived certainty of which choice was optimal, such that participants looked more at the optimal choice the larger the difference in expected value between the optimal and suboptimal choice, which is in line with studies showing that gaze is directed to the preferred choice46,47. (See SI for details.)
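As an illustration of the parameter-cluster analysis mentioned above, a simple clustering of the fitted (α, β) pairs could look like the sketch below; the file name, column names and number of clusters are assumptions, and the clustering method used in the SI may differ.

```python
# Sketch: clustering fitted (alpha, beta) pairs and cross-tabulating cluster
# membership against described ability. Inputs are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans

params = pd.read_csv("fitted_params.csv")   # columns: participant, group, alpha, beta
km = KMeans(n_clusters=3, n_init=10, random_state=0)
params["cluster"] = km.fit_predict(params[["alpha", "beta"]])

print(pd.crosstab(params["group"], params["cluster"]))
```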
## Discussion
In the present study, we used a novel experimental paradigm to examine how prior beliefs about others’ ability impact learning from these individuals. Participants learned to avoid punishment (mild electric shocks) through observation of demonstrators described as having either a high (Described-High) or low (Described-Low) ability to learn. We also investigated if the described ability interacted with the actual ability of the demonstrators by varying how well they learned the task. Each participant observed two demonstrators, one that learned quickly and performed well (Actual-High) and one that behaved randomly and performed poorly (Actual-Low).
Our results show that prior beliefs that another person’s ability is low can alter observational learning. Learning from observing demonstrators who were described as low in ability resulted in worse performance compared to observation of demonstrators described as having a high ability. Notably, we have previously shown that actual ability does not affect performance in a similar paradigm where no description of ability was given18; we therefore did not expect any main effect of actual ability. Our current results extend these previous results by showing the impact of prior beliefs on observational learning. These results are especially interesting since the actual ability of the demonstrator does not significantly affect how useful or valuable it is to learn by observing the choices and outcomes of that demonstrator in the present paradigm. Information about the level of ability should therefore, in theory, not matter to the participant. We further demonstrated that describing demonstrators’ abilities as low compared to high also led to less accurate estimations of the number of shocks that the demonstrators had received. Moreover, results from analyses of pupil dilation responses, previously associated with attention31,48, indicated that the level of proactive attention before observation of a demonstrator’s choice was lower during observation of a demonstrator described as having low, compared to high, ability.
We further observed a trend towards an interaction between described and actual ability, driven by a greater difference in performance during observation of demonstrators that performed poorly. We hypothesize that this could be driven by a difference in how cognitively demanding the observational learning task is, which depends on actual ability, interacting with the level of attention during observation, which depends on described ability. The choices of a demonstrator with actual low ability, who is making random choices, are more difficult to predict, and learning from a demonstrator with low ability would therefore require more attention than learning from a demonstrator with high ability. This could make observational learning from a demonstrator with low ability more susceptible to changes in attention. In support of this argument we showed that the effect of proactive attention on performance differed as a function of the actual ability of the observed demonstrator. Higher proactive attention was linked to better participant performance when the demonstrator had an actual low ability to learn (random choices). When the demonstrator had an actual high ability to learn we saw the opposite pattern, where instead lower proactive attention was linked to better participant performance. Further research is needed to investigate this potential interaction between described and actual ability on observational learning.
We used RL modeling to investigate and describe the effects on learning in more detail. RL modeling showed clearly that participants in both groups used observational information to learn and that neither of the groups appeared to copy the behavior of either of the demonstrators to any large extent. This finding can be related to previous findings showing that payoff-biased learning is adaptive but under-used49. The model which best explained our behavioral data was a simple model with two free parameters: a learning rate, shared between learning from the participant’s own outcomes and from the demonstrators’ outcomes, and a parameter regulating how deterministic choices are given the learned expected values of each choice. A closer look at the distributions of the fitted parameter values in the best model suggests that described ability affected both the learning rate and how deterministic choices were. The learning rate was higher and choices were less deterministic in the group that observed demonstrators described as low in ability. A high learning rate could indicate a failure to integrate information over time, possibly as a result of low cognitive effort or poor working memory45. Relatively non-deterministic choices are also what we would expect if participants failed to notice which choice a demonstrator had just made. It is, however, difficult to separate the parameters’ specific contributions to behavior50,51. To validate our model we analyzed effects of observational prediction errors on psychophysiological data. Skin conductance and reactive pupil dilation responses were sensitive to observational prediction errors, and gaze patterns were sensitive to the difference in expected value between both choices. Taken together, our behavioral results, pupil dilation data and RL modeling support our hypothesis that a demonstrator’s described ability affects the level of attention that the participants direct towards the available observational information.
The present study does not answer the question of why described ability would affect the level of attention or cognitive effort of an observer. We propose that the participants in the group where the demonstrators were described as high in ability allocated more attention because high ability should be more informative in a very general sense, and more diagnostic of an individual’s character than low ability52. It is also possible that participants concluded that since the choices of demonstrators that were low in ability were not themselves informative or useful to copy, paying attention to the choices of those demonstrators would not be useful. Participants would then behave according to a heuristic based on prestige or payoff-biased learning8,10. This argument is supported by our analysis of the proactive pupil dilation responses, which indicate that the described ability affects attention directed at the choices made by the demonstrators, not necessarily the outcome of those choices. However, participants in the Described-Low group also made more mistakes when attempting to estimate the number of shocks given to the demonstrator during a block, which could be explained by a difference in the level of attention directed toward the outcome as well. It is important to note that these arguments, which are based on beliefs about the value of observational information, rely on participants misconceiving the learning task. The value of observational information is in fact slightly higher when a demonstrator makes random choices, constantly exploring the environment, rather than making choices that are biased towards the optimal choice (see SI).
In line with theories on attention as a utility-maximizing system that mediates the search for information53, proactive, or preparatory, attention has been shown to be sensitive to the prospect of reward54,55, such that more attention is given to information with greater value. These previous studies support our interpretation of the current results: prior beliefs about demonstrators’ ability affected how participants evaluated observational information and, as a consequence, how much attention they directed to the demonstrators. It is interesting that participants would have evaluated the observational information differently depending on the description of the demonstrator when in fact learning benefits greatly from using observational information regardless of the demonstrator’s actual ability (and where, if anything, observational information from a poorly performing demonstrator is actually slightly more valuable). One explanation could be that this bias, to attend to the behavior of supposedly skilled or successful others at the cost of failing to learn from presumably poorly performing individuals, would be adaptive in certain (possibly more ecologically valid) environments. Consider for instance the task of learning how to build a chair. The task involves several steps as well as several solutions, and the quality of the chair depends on how each step is performed. Observational learning would in this case only be efficient if the observed demonstrator is at least close to building a proper chair. Learning from someone randomly trying out ways to assemble different pieces of wood would be extremely slow. It has been suggested that such narrow-peaked search landscapes, where only behavior close to a local optimum generates valuable feedback about the location of the solution, could increase tendencies to copy56.
Our results show that something as simple as a short description of someone else can lead to impairments in learning simply by a decrease in attention to valuable observational information. Systematically attending to the behavior of those that are described as learning and performing well is a form of biased selective sampling. This mechanism can be tied to varying occurrences of illusory correlations, such as stereotypes57 and biased organizational theories26 which show that undersampling of failure leads to false beliefs regarding the nature of effective management. We have shown here that biased sampling of others’ behavior can give rise to suboptimal learning and that the problem can worsen as a function of the nature of the observed behavior.
## Materials and Methods
### Participants
A total of 46 healthy participants were recruited and paid for participation. The experiment was approved by the Regional Ethical Review Board in Stockholm and performed in accordance with relevant guidelines and regulations. Three participants were excluded due to technical issues. The remaining 43 participants were randomly assigned to either the Described-Low or Described-High group (Described-Low: n = 21, 14 women, mean age 24.5 y [sd = 6.9]; Described-High: n = 22, 14 women, mean age 24.9 y [sd = 4.9]). Before the experiment, all participants signed an informed consent form.
### Experimental Procedure
Apart from an initial training block, participants completed five blocks of eight trials per demonstrator, resulting in a total of 80 trials (2 × 5 × 8). For each block the participants had to repeatedly choose between two randomly drawn pictures of equal luminance, one assigned to be the optimal choice (probability of being paired with a shock = 0.2) while the other was the suboptimal one (probability of being paired with a shock = 0.8). Each trial consisted of an initial demonstrator stage, during which the demonstrator made a choice followed by an outcome, before the individual stage, during which the participant made a choice followed by an outcome. The sequence of events was the same for both stages, see Fig. 1. Each stage began with a 1.5 s presentation of a figure indicating whose turn it was. Next, a fixation cross was displayed and after 1 s the choice stimuli were presented for 1.5 s. Half a second after the presentation of the choice stimuli a “go-sound” was played, signaling that it was time to make a choice. During the individual stage the sound lasted a maximum of 1 s or until a choice was made. During the demonstrator stage the duration of the sound was randomized between 300–800 ms to simulate termination following the demonstrator’s choice. The fixation cross was then rotated 20° for 6 s to indicate which choice was made (right: clockwise; left: counterclockwise). If the consequence of the choice was a shock (100 ms DC-pulse, individually set to be unpleasant but not painful), this was administered after 3 s either directly to the participant (individual stage) or indicated by a short neutral “shock-sound” (demonstrator stage). At the end of each block, participants were asked to estimate the number of administered shocks. After finishing the experiment, participants filled out a set of questionnaires (see SI).
### Data acquisition
The experiment was presented and behavioral data collected using E-Prime (Psychology Software Tools). In addition to measuring learning as choice behavior, we also recorded psychophysiological measures of learning: gaze, pupil dilation and skin conductance responses. Pupil dilation data are commonly used in learning paradigms to measure surprise and attention31,58, and skin conductance responses are often used as a measure of learning in conditioning paradigms34 and have been linked to attentional processes in decision-making tasks as well35.
Eye tracking data with a sampling rate of 50 Hz were collected through iViewX 1.6 using an SMI remote Red III eye tracker placed on the desk in front of the participants. Eye tracking data from 7 participants were excluded due to poor data quality, leaving 36 participants to be included in analyses of gaze patterns and pupil dilation responses. Skin conductance data were collected using a pair of Ag-AgCl electrodes attached to the index and middle finger of the left hand. The signals were amplified and recorded at 250 Hz using BIOPAC Systems (Santa Barbara, CA). Skin conductance data from 2 participants were excluded due to poor quality, leaving 41 participants to be included in further analyses. For additional details on material, data acquisition and preparation see SI.
### Data availability
The datasets generated and analyzed in the current study are available from the corresponding author on reasonable request.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## References
1. Bandura, A. Social learning theory. (Prentice Hall, 1977).
2. Mineka, S. & Cook, M. Mechanisms involved in the observational conditioning of fear. J. Exp. Psychol. Gen. 122, 23–38 (1993).
3. Olsson, A. & Phelps, E. A. Social learning of fear. Nat. Neurosci. 10, 1095–102 (2007).
4. Zentall, T. R. & Galef, J. R. B. G. Social learning: psychological and biological perspectives. (Psychology Press, 1988).
5. Fiske, S. T., Cuddy, A. J. C. & Glick, P. Universal dimensions of social cognition: warmth and competence. Trends Cogn. Sci. 11, 77–83 (2007).
6. Henrich, J. & Broesch, J. On the nature of cultural transmission networks: evidence from Fijian villages for adaptive learning biases. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 366, 1139–1148 (2011).
7. Wood, L. A., Kendal, R. L. & Flynn, E. G. Whom do children copy? Model-based biases in social learning. Dev. Rev. 33, 341–356 (2013).
8. Laland, K. N. Social learning strategies. Learn. Behav. 32, 4–14 (2004).
9. Rendell, L. et al. Cognitive culture: theoretical and empirical insights into social learning strategies. Trends Cogn. Sci. 15, 68–76 (2011).
10. Henrich, J. & Gil-White, F. J. The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission. Evol. Hum. Behav. 22, 165–196 (2001).
11. Fiske, S. T. & Neuberg, S. L. A Continuum of Impression Formation, from Category-Based to Individuating Processes: Influences of Information and Motivation on Attention and Interpretation. Adv. Exp. Soc. Psychol. 23, 1–74 (1990).
12. Kunda, Z. & Thagard, P. Forming Impressions From Stereotypes, Traits, and Behaviors: A Parallel-Constraint-Satisfaction Theory. Psychol. Rev. 103, 284–308 (1996).
13. McElreath, R. et al. Beyond existence and aiming outside the laboratory: estimating frequency-dependent and pay-off-biased social learning strategies. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 363, 3515–28 (2008).
14. Pike, T. W., Kendal, J. R., Rendell, L. E. & Laland, K. N. Learning by proportional observation in a species of fish. Behav. Ecol. 21, 570–575 (2010).
15. Kendal, J. R., Rendell, L., Pike, T. W. & Laland, K. N. Nine-spined sticklebacks deploy a hill-climbing social learning strategy. Behav. Ecol. 20, 238–244 (2009).
16. Kameda, T. & Nakanishi, D. Does social/cultural learning increase human adaptability? Rogers's question revisited. Evol. Hum. Behav. 24, 242–260 (2003).
17. McElreath, R., Wallin, A. & Fasolo, B. In Simple Heuristics in a Social World (eds Hertwig, R. & Hoffrage, U.) (Oxford University Press, https://doi.org/10.1093/acprof:oso/9780195388435.003.0014, 2013).
18. Selbing, I., Lindström, B. & Olsson, A. Demonstrator skill modulates observational aversive learning. Cognition 133, 128–39 (2014).
19. Boyd, R. & Richerson, P. J. Culture and the Evolutionary Process. (University of Chicago Press, 1985).
20. Harvey, N. & Fischer, I. Taking Advice: Accepting Help, Improving Judgment, and Sharing Responsibility. Organ. Behav. Hum. Decis. Process. 70, 117–133 (1997).
21. Sniezek, J. A., Schrah, G. E. & Dalal, R. S. Improving judgement with prepaid expert advice. J. Behav. Decis. Mak. 17, 173–190 (2004).
22. Rabin, M. & Schrag, J. L. First Impressions Matter: A Model of Confirmatory Bias. Q. J. Econ. 114, 37–82 (1999).
23. Kendal, R. et al. Chimpanzees copy dominant and knowledgeable individuals: Implications for cultural diversity. Evol. Hum. Behav. 36, 65–72 (2015).
24. Bruin, E. N. M. D. & Lange, P. A. M. Van. What People Look for in Others: Influences of the Perceiver and the Perceived on Information Selection. Personal. Soc. Psychol. Bull. 26, 206–219 (2015).
25. Rendell, L. et al. Why copy others? Insights from the social learning strategies tournament. Science 328, 208–13 (2010).
26. Denrell, J. Vicarious Learning, Undersampling of Failure, and the Myths of Management. Organ. Sci. 14, 227–243 (2003).
27. Denrell, J. Why most people disapprove of me: experience sampling in impression formation. Psychol. Rev. 112, 951–78 (2005).
28. Granholm, E. & Steinhauer, S. R. Pupillometric measures of cognitive and emotional processes. Int. J. Psychophysiol. 52, 1–6 (2004).
29. Siegle, G. J., Steinhauer, S. R. & Thase, M. E. Pupillary assessment and computational modeling of the Stroop task in depression. Int. J. Psychophysiol. 52, 63–76 (2004).
30. Beatty, J. & Lucero-Wagoner, B. In Handbook of Psychophysiology 2nd edn, 142–162 (2000).
31. Laeng, B., Sirois, S. & Gredeback, G. Pupillometry: A Window to the Preconscious? Perspect. Psychol. Sci. 7, 18–27 (2012).
32. O'Reilly, J. X. et al. Dissociable effects of surprise and model update in parietal and anterior cingulate cortex. Proc. Natl. Acad. Sci. USA 1–10, https://doi.org/10.1073/pnas.1305373110 (2013).
33. Critchley, H. D. Neural mechanisms of autonomic, affective, and cognitive integration. J. Comp. Neurol. 493, 154–166 (2005).
34. Öhman, A. & Mineka, S. Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychol. Rev. 108, 483–522 (2001).
35. Dawson, M. E., Schell, A. M. & Courtney, C. G. The skin conductance response, anticipation, and decision-making. J. Neurosci. Psychol. Econ. 4, 111–116 (2011).
36. Barr, D. J., Levy, R., Scheepers, C. & Tily, H. J. Keep it maximal. J. Mem. Lang. 68, 1–43 (2013).
37. Mirman, D. Growth Curve Analysis and Visualization Using R. (2014).
38. Preuschoff, K., t Hart, B. M. & Einhäuser, W. Pupil Dilation Signals Surprise: Evidence for Noradrenaline's Role in Decision Making. Front. Neurosci. 5, 115 (2011).
39. Nassar, M. R. et al. Rational regulation of learning dynamics by pupil-linked arousal systems. Nat. Neurosci. 15, 1040–6 (2012).
40. Braver, T. S. The variable nature of cognitive control: A dual mechanisms framework. Trends Cogn. Sci. 16, 106–113 (2012).
41. Bruin, E. N. M. D. & Lange, P. A. M. Van. What People Look for in Others: Influences of the Perceiver and the Perceived on Information Selection. Personal. Soc. Psychol. Bull. 26, 206–219 (2015).
42. Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction. (The MIT Press, 1998).
43. Watkins, C. J. C. H. & Dayan, P. Q-Learning. Mach. Learn. 8, 279–292 (1992).
44. Wagenmakers, E.-J. & Farrell, S. AIC model selection using Akaike weights. Psychon. Bull. Rev. 11, 192–6 (2004).
45. Collins, A. G. E. & Frank, M. J. How much of reinforcement learning is working memory, not reinforcement learning? A behavioral, computational, and neurogenetic analysis. Eur. J. Neurosci. 35, 1024–1035 (2012).
46. Shimojo, S., Simion, C., Shimojo, E. & Scheier, C. Gaze bias both reflects and influences preference. Nat. Neurosci. 6, 1317–22 (2003).
47. Krajbich, I., Armel, C. & Rangel, A. Visual fixations and the computation and comparison of value in simple choice. Nat. Neurosci. 13, 1292–8 (2010).
48. Moresi, S. et al. Pupil dilation in response preparation. Int. J. Psychophysiol. 67, 124–30 (2008).
49. Mesoudi, A. An experimental comparison of human social learning strategies: Payoff-biased social learning is adaptive but underused. Evol. Hum. Behav. 32, 334–342 (2011).
50. Nassar, M. R. & Gold, J. I. A healthy fear of the unknown: perspectives on the interpretation of parameter fits from computational models in neuroscience. PLoS Comput. Biol. 9, e1003015 (2013).
51. Daw, N. D. In Decision Making, Affect, and Learning: Attention and Performance XXIII 3–38 (Oxford University Press, 2011).
52. Martijn, C., Spears, R., Van der Pligt, J. & Jakobs, E. Negativity and positivity effects in person perception and inference: Ability versus morality. Eur. J. Soc. Psychol. 22, 453–463 (1992).
53. Gottlieb, J. Attention, learning, and the value of information. Neuron 76, 281–95 (2012).
54. van den Berg, B., Krebs, R. M., Lorist, M. M. & Woldorff, M. G. Utilization of reward-prospect enhances preparatory attention and reduces stimulus conflict. Cogn. Affect. Behav. Neurosci. 14, 561–77 (2014).
55. Marini, F., van den Berg, B. & Woldorff, M. G. Reward-prospect interacts with trial-by-trial preparation for potential distraction. Vis. cogn. 23, 313–335 (2015).
56. Acerbi, A., Tennie, C. & Mesoudi, A. Social learning solves the problem of narrow-peaked search landscapes: experimental evidence in humans. https://doi.org/10.1098/rsos.160215 (2016).
57. Denrell, J. & Le Mens, G. Information Sampling, Conformity and Collective Mistaken Beliefs. Proceedings of the 35th Annual Conference of the Cognitive Science Society 2013, 2177–2182 (2013).
58. Sibley, C., Coyne, J. & Baldwin, C. Pupil Dilation as an Index of Learning. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 55, 237–241 (2011).
## Acknowledgements
This research was supported by an Independent Starting Grant (284366; Emotional Learning in Social Interaction project) from the European Research Council and the Knut and Alice Wallenberg Foundation (KAW 2014.0237) to A.O. We thank J. Axelsson, C. Balkenius, B. Lindström and P. Pärnamets for valuable comments on a previous version of this paper.
## Author information
### Affiliations
1. #### Division of Psychology, Karolinska Institutet, 171 77, Stockholm, Sweden
• Ida Selbing
• & Andreas Olsson
### Contributions
I.S. and A.O. designed the experiment; I.S. collected and analyzed the data and performed the model based analyses. I.S. and A.O. wrote the paper.
### Competing Interests
The authors declare that they have no competing interests.
### Corresponding author
Correspondence to Ida Selbing. | 2018-10-18 00:43:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6874606609344482, "perplexity": 3106.120829693827}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511365.50/warc/CC-MAIN-20181018001031-20181018022531-00097.warc.gz"} |
https://msp.org/ant/2012/6-4/p02.xhtml | #### Vol. 6, No. 4, 2012
Nonuniruledness results for spaces of rational curves in hypersurfaces
### Roya Beheshti
Vol. 6 (2012), No. 4, 669–687
##### Abstract
We prove that the sweeping components of the space of smooth rational curves in a smooth hypersurface of degree $d$ in $\mathbb{P}^n$ are not uniruled if $(n+1)/2 \le d \le n-3$. We also show that for any $e \ge 1$, the space of smooth rational curves of degree $e$ in a general hypersurface of degree $d$ in $\mathbb{P}^n$ is not uniruled roughly when $d \ge e\sqrt{n}$.
##### Keywords
rational curves on hypersurfaces
##### Mathematical Subject Classification 2010
Primary: 14J70
Secondary: 14J40, 14E05 | 2021-07-26 20:02:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 8, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3711842894554138, "perplexity": 1399.4589430128008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152144.92/warc/CC-MAIN-20210726183622-20210726213622-00620.warc.gz"} |
https://socratic.org/questions/if-an-object-is-moving-at-11-m-s-over-a-surface-with-a-kinetic-friction-coeffici | # If an object is moving at 11 m/s over a surface with a kinetic friction coefficient of u_k=22 /g, how far will the object continue to move?
By the work-energy theorem, the work done against kinetic friction over the stopping distance equals the initial kinetic energy: $\mu_k m g \, d = \frac{1}{2} m v^2$, where $d$ is the distance traversed.
$\implies d = \frac{v^2}{2 \mu_k g} = \frac{11^2}{2 \cdot \frac{22}{g} \cdot g} = \frac{121}{44} = 2.75 \ \text{m}$
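A quick numerical check of the arithmetic (a small Python sketch; the variable names are arbitrary):

```python
# d = v^2 / (2 * mu_k * g); with mu_k = 22/g the factors of g cancel
g = 9.8
v = 11.0
mu_k = 22 / g
d = v**2 / (2 * mu_k * g)
print(d)   # 2.75 (meters)
```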
http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97332000000200014&lng=en&nrm=iso | Print version ISSN 0103-9733On-line version ISSN 1678-4448
Braz. J. Phys. vol.30 no.2 São Paulo June 2000
http://dx.doi.org/10.1590/S0103-97332000000200014
Quantum cosmology: how to interpret and obtain results
Nelson Pinto-Neto
Centro Brasileiro de Pesquisas Físicas,
Rua Dr. Xavier Sigaud 150, Urca
22290-180, Rio de Janeiro, RJ, Brazil
E-mails: [email protected]
We argue that the Copenhagen interpretation of quantum mechanics cannot be applied to quantum cosmology. Among the alternative interpretations, we choose to apply the Bohm-de Broglie interpretation of quantum mechanics to canonical quantum cosmology. For minisuperspace models, we show that there is no problem of time in this interpretation, and that quantum effects can avoid the initial singularity, create inflation and isotropize the Universe. For the general case, it is shown that, irrespective of any regularization or choice of factor ordering of the Wheeler-DeWitt equation, the unique relevant quantum effect which does not break spacetime is the change of its signature from lorentzian to euclidean. The other quantum effects are either trivial or break the four-geometry of spacetime. A Bohm-de Broglie picture of a quantum geometrodynamics is constructed, which allows the investigation of these latter structures.
I Introduction
Almost all physicists believe that quantum mechanics is a universal and fundamental theory, applicable to any physical system, from which classical physics can be recovered. The Universe is, of course, a valid physical system: there is a theory, Standard Cosmology, which is able to describe it in physical terms, and make predictions which can be confirmed or refuted by observations. In fact, the observations until now confirm the standard cosmological scenario. Hence, supposing the universality of quantum mechanics, the Universe itself must be described by quantum theory, from which we could recover Standard Cosmology. However, the Copenhagen interpretation of quantum mechanics [1, 2, 3], which is the one taught in undergraduate courses and employed by the majority of physicists in all areas (especially von Neumann's approach), cannot be used in a Quantum Theory of Cosmology. This is because it imposes the existence of a classical domain.

In von Neumann's view, for instance, the necessity of a classical domain comes from the way it solves the measurement problem (see Ref. [4] for a good discussion). In an impulsive measurement of some observable, the wave function of the observed system plus macroscopic apparatus splits into many branches which almost do not overlap (in order to be a good measurement), each one containing the observed system in an eigenstate of the measured observable, and the pointer of the apparatus pointing to the respective eigenvalue. However, at the end of the measurement, we observe only one of these eigenvalues, and the measurement is robust in the sense that if we repeat it immediately after, we obtain the same result. So it seems that the wave function collapses, the other branches disappear. The Copenhagen interpretation assumes that this collapse is real. However, a real collapse cannot be described by the unitary Schrödinger evolution. Hence, the Copenhagen interpretation must assume that there is a fundamental process in a measurement which must occur outside the quantum world, in a classical domain. Of course, if we want to quantize the whole Universe, there is no place for a classical domain outside it, and the Copenhagen interpretation cannot be applied. Hence, if someone insists on the Copenhagen interpretation, she or he must assume that quantum theory is not universal, or at least try to improve it by means of further concepts.

One possibility is by invoking the phenomenon of decoherence [5]. In fact, the interaction of the observed quantum system with its environment yields an effective diagonalization of the reduced density matrix, obtained by tracing out the irrelevant degrees of freedom. Decoherence can explain why the splitting of the wave function is given in terms of the pointer basis states, and why we do not see superpositions of macroscopic objects. In this way, classical properties emerge from quantum theory without the need of being assumed. In the framework of quantum gravity, it can also explain how a classical background geometry can emerge in a quantum universe [6]. In fact, it is the first quantity to become classical. However, decoherence is not yet a complete answer to the measurement problem [7, 8]. It does not explain the apparent collapse after the measurement is completed, or why all but one of the diagonal elements of the density matrix become null when the measurement is finished. The theory is unable to give an account of the existence of facts, their uniqueness as opposed to the multiplicity of possible phenomena.
Further developments are still in progress, like the consistent histories approach [9], which is, however, still incomplete. The important role played by the observers in these descriptions is not yet explained [10], and there remains the problem of how to describe a quantum universe when the background geometry is not yet classical.
Nevertheless, there are some alternative solutions to this quantum cosmological dilemma which, together with decoherence, can solve the measurement problem maintaining the universality of quantum theory. One can say that the Schrödinger evolution is an approximation of a more fundamental non-linear theory which can accomplish the collapse [11, 12], or that the collapse is effective but not real, in the sense that the other branches disappear from the observer but do not disappear from existence. In this second category we can cite the Many-Worlds Interpretation [13] and the Bohm-de Broglie Interpretation [14, 15]. In the former, all the possibilities in the splitting are actually realized. In each branch there is an observer with the knowledge of the corresponding eigenvalue of this branch, but she or he is not aware of the other observers and the other possibilities because the branches do not interfere. In the latter, a point-particle in configuration space describing the observed system and apparatus is supposed to exist, independently on any observations. In the splitting, this point particle will enter into one of the branches (which one depends on the initial position of the point particle before the measurement, which is unknown), and the other branches will be empty. It can be shown [15] that the empty waves can neither interact with other particles, nor with the point particle containing the apparatus. Hence, no observer can be aware of the other branches which are empty. Again we have an effective but not real collapse (the empty waves continue to exist), but now with no multiplication of observers. Of course these interpretations can be used in quantum cosmology. Schrödinger evolution is always valid, and there is no need of a classical domain outside the observed system.
In this paper we review some results on the application of the Bohm-de Broglie interpretation to quantum cosmology [16, 17, 18, 19, 20]. In this approach, the fundamental object of quantum gravity, the geometry of 3-dimensional spacelike hypersurfaces, is supposed to exist independently of any observation or measurement, as well as its canonical momentum, the extrinsic curvature of the spacelike hypersurfaces. Its evolution, labeled by some time parameter, is dictated by a quantum evolution that is different from the classical one due to the presence of a quantum potential which appears naturally from the Wheeler-DeWitt equation. This interpretation has been applied to many minisuperspace models [16, 19, 21, 22, 23, 24], obtained by the imposition of homogeneity of the spacelike hypersurfaces. The classical limit, the singularity problem, the cosmological constant problem, and the time issue have been discussed. For instance, in some of these papers it was shown that in models involving scalar fields or radiation, which are nice representatives of the matter content of the early universe, the singularity can be clearly avoided by quantum effects. In the Bohm-de Broglie interpretation description, the quantum potential becomes important near the singularity, yielding a repulsive quantum force counteracting the gravitational field, avoiding the singularity and yielding inflation. The classical limit (given by the limit where the quantum potential becomes negligible with respect to the classical energy) for large scale factors is usually attainable, but for some scalar field models it depends on the quantum state and initial conditions. In fact it is possible to have small classical universes and large quantum ones [24]. About the time issue, it was shown that for any choice of the lapse function the quantum evolution of the homogeneous hypersurfaces yields the same four-geometry [19]. What remained to be studied is whether this fact remains valid in the full theory, where we are not restricted to homogeneous spacelike hypersurfaces. The question is, given an initial hypersurface with consistent initial conditions, does the evolution of the initial three-geometry driven by the quantum bohmian dynamics yield the same four-geometry for any choice of the lapse and shift functions, and if it does, what kind of spacetime structure is formed? We know that this is true if the three-geometry is evolved by the dynamics of classical General Relativity (GR), yielding a non degenerate four-geometry, but it can be false if the evolving dynamics is the quantum bohmian one. The idea was to put the quantum bohmian dynamics in hamiltonian form, and then use strong results presented in the literature exhibiting the most general form that a hamiltonian should have in order to form a non degenerate four-geometry from the evolution of three-geometries [25]. Our conclusion is that, in general, the quantum bohmian evolution of the three-geometries does not yield any non degenerate four-geometry at all [20]. Only for very special quantum states can a relevant quantum non degenerate four-geometry be obtained, and it must be euclidean. In the general case, the quantum bohmian evolution is consistent (still independent of the choice of the lapse and shift functions) but yields a degenerate four-geometry, where special vector fields, the null eigenvectors of the four-geometry, are present. We arrived at these conclusions without assuming any regularization and factor ordering of the Wheeler-DeWitt equation.
As we know, the Wheeler-DeWitt equation involves the application of the product of local operators on states at the same space point, which is ill defined [27]. Hence we need to regularize it in order to solve the factor ordering problem, and have a theory free of anomalies (for some proposals, see Refs [28, 29, 30]). Our conclusions are completely independent on these issues.
This paper is organized as follows: in the next section we review the Bohm-de Broglie interpretation of quantum mechanics for non-relativistic particles and quantum field theory in flat spacetime. In section III we apply the Bohm-de Broglie interpretation to canonical quantum gravity in the minisuperspace case. We show that there is no problem of time in this interpretation, and that quantum effects can avoid the initial singularity, create inflation, and isotropize the Universe. In section IV we treat the general case. We prove that the bohmian evolution of the 3-geometries, irrespective of any regularization and factor ordering of the Wheeler-DeWitt equation, can be obtained from a specific hamiltonian, which is of course different from the classical one. We then use this hamiltonian to obtain a picture of these new quantum structures. We end with conclusions and many perspectives for future work.
II The Bohm-de Broglie Interpretation
In this section we will review the Bohm-de Broglie interpretation of quantum mechanics. We will first show how this interpretation works in the case of a single particle described by a Schrödinger equation, and then we will obtain, by analogy, the causal interpretation of quantum field theory in flat spacetime.
Let us begin with the Bohm-de Broglie interpretation of the Schrödinger equation describing a single particle. In the coordinate representation, for a non-relativistic particle with Hamiltonian H = p²/2m + V(x), the Schrödinger equation is

$$i\hbar\frac{\partial \Psi(x,t)}{\partial t} = \left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V(x)\right]\Psi(x,t) \qquad (1)$$
We can transform this differential equation over a complex field into a pair of coupled differential equations over real fields, by writing Ψ = A exp(iS/ℏ), where A and S are real functions, and substituting it into (1). We obtain the following equations:

$$\frac{\partial S}{\partial t} + \frac{(\nabla S)^{2}}{2m} + V - \frac{\hbar^{2}}{2m}\frac{\nabla^{2}A}{A} = 0 \qquad (2)$$

$$\frac{\partial A^{2}}{\partial t} + \nabla\cdot\left(A^{2}\,\frac{\nabla S}{m}\right) = 0 \qquad (3)$$
The usual probabilistic interpretation, i.e. the Copenhagen interpretation, understands equation (3) as a continuity equation for the probability density A² for finding the particle at position x and time t. All physical information about the system is contained in A², and the total phase S of the wave function is completely irrelevant. In this interpretation, nothing is said about S and its evolution equation (2). Now suppose that the term proportional to ℏ² is not present in Eqs. (2) and (3). Then we could interpret them as equations for an ensemble of classical particles under the influence of a classical potential V through the Hamilton-Jacobi equation (2), whose probability density distribution in space A²(x, t) satisfies the continuity equation (3), where ∇S(x, t)/m is the velocity field v(x, t) of the ensemble of particles. When the term proportional to ℏ² is present, we can still understand Eq. (2) as a Hamilton-Jacobi equation for an ensemble of particles. However, their trajectories are no longer the classical ones, due to the presence of the quantum potential term in Eq. (2).
The Bohm-de Broglie interpretation of quantum mechanics is based on the two equations (2) and (3) in the way outlined above, not only on the last one, as is the Copenhagen interpretation. The starting idea is that the position x and momentum p are always well defined, with the particle's path being guided by a new field, the quantum field. The field Ψ obeys the Schrödinger equation (1), which can be written as the two real equations (2) and (3). Equation (2) is interpreted as a Hamilton-Jacobi type equation for the quantum particle subjected to an external potential, which is the classical potential plus the new quantum potential

$$Q \equiv -\frac{\hbar^{2}}{2m}\frac{\nabla^{2}A}{A} \qquad (4)$$
Once the field Ψ, whose effect on the particle trajectory is through the quantum potential (4), is obtained from the Schrödinger equation, we can also obtain the particle trajectory, x(t), by integrating the differential equation p = mẋ = ∇S(x, t), which is called the guidance relation (a dot means time derivative). Of course, from this differential equation, the non-classical trajectory x(t) can only be known if the initial position of the particle is given. However, we do not know the initial position of the particle because we do not know how to measure it without disturbances (it is the hidden variable of the theory). To agree with quantum mechanical experiments, we have to postulate that, for a statistical ensemble of particles in the same quantum field Ψ, the probability density distribution of initial positions x₀ is P(x₀, t₀) = A²(x₀, t = t₀). Equation (3) guarantees that P(x, t) = A²(x, t) for all times. In this way, the statistical predictions of quantum theory in the Bohm-de Broglie interpretation are the same as in the Copenhagen interpretation.
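To make the guidance relation concrete, the sketch below integrates p = mẋ = ∇S numerically for a simple, analytically known wave function: a superposition of the two lowest states of an infinite square well. This toy system is chosen only for illustration and is not one discussed in the text; the Bohmian velocity is evaluated as (ℏ/m) Im[(∂ψ/∂x)/ψ], which equals ∇S/m.

```python
# Sketch: one Bohmian trajectory x(t) from the guidance relation m*dx/dt = dS/dx.
# Toy system: superposition of the two lowest infinite-square-well states.
import numpy as np

hbar = m = L = 1.0
E = lambda n: (n * np.pi * hbar) ** 2 / (2 * m * L**2)

def psi(x, t):
    phi = lambda n: np.sqrt(2 / L) * np.sin(n * np.pi * x / L)
    return phi(1) * np.exp(-1j * E(1) * t / hbar) + phi(2) * np.exp(-1j * E(2) * t / hbar)

def velocity(x, t, dx=1e-6):
    dpsi = (psi(x + dx, t) - psi(x - dx, t)) / (2 * dx)   # numerical d(psi)/dx
    return (hbar / m) * np.imag(dpsi / psi(x, t))         # = (grad S)/m

x, dt = 0.3 * L, 1e-3          # initial position (the "hidden variable") and time step
for step in range(5000):       # simple Euler integration of the guidance equation
    x += velocity(x, step * dt) * dt
print(x)
```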
It is interesting to note that the quantum potential depends only on the form of Ψ, not on its absolute value, as can be seen from equation (4). This fact brings home the non-local and contextual character of the quantum potential. This is a necessary feature because Bell's inequalities together with Aspect's experiments show that, in general, a quantum theory must be either non-local or non-ontological. As the Bohm-de Broglie interpretation is ontological, then it must be non-local, as it is. The non-local and contextual quantum potential causes the quantum effects. It has no parallel in classical physics.
The function A plays a dual role in the Bohm-de Broglie interpretation: it gives the quantum potential and the probability density distribution of positions, but this last role is secondary. If in some model there is no notion of probability, we can still get information from the system using the guidance relations. In this case, A² does not need to be normalizable. The Bohm-de Broglie interpretation is not, in essence, a probabilistic interpretation. It is straightforward to apply it to a single system.
The classical limit can be obtained in a very simple way. We only have to find the conditions for having Q = 0. The question of why in a real measurement we see an effective collapse of the wave function is answered by noting that, in a measurement, the wave function splits into a superposition of non-overlapping branches. Hence the point particle (representing the particle being measured plus the macroscopic apparatus) will enter into one particular branch, which one depending on the initial conditions, and it will be influenced by the quantum potential related only to this branch, which is the only one that is not negligible in the region where the point particle actually is. The other empty branches continue to exist, but they influence neither the point particle nor any other particle [15]. There is an effective but not real collapse. The Schrödinger equation is always valid. There is no need to have a classical domain outside the quantum system to explain a measurement, nor is the existence of observers crucial, because this interpretation is objective.
For quantum fields in flat spacetime, we can apply a similar reasoning. As an example, take the Schrödinger functional equation for a quantum scalar field:

$$i\hbar\frac{\partial\Psi[\phi,t]}{\partial t} = \int d^{3}x\left\{-\frac{\hbar^{2}}{2}\frac{\delta^{2}}{\delta\phi^{2}(x)} + \frac{1}{2}\big(\nabla\phi(x)\big)^{2} + U\big(\phi(x)\big)\right\}\Psi[\phi,t] \qquad (5)$$
Writing again the wave functional as Ψ = A exp(iS/ℏ), we obtain:

$$\frac{\partial S}{\partial t} + \int d^{3}x\left[\frac{1}{2}\left(\frac{\delta S}{\delta\phi}\right)^{2} + \frac{1}{2}(\nabla\phi)^{2} + U(\phi)\right] + Q(\phi) = 0 \qquad (6)$$

$$\frac{\partial A^{2}}{\partial t} + \int d^{3}x\,\frac{\delta}{\delta\phi}\left(A^{2}\frac{\delta S}{\delta\phi}\right) = 0 \qquad (7)$$
where Q(φ) = −(ℏ²/2A) ∫d³x δ²A/δφ²(x) is the corresponding (unregulated) quantum potential. The first equation is viewed as a modified Hamilton-Jacobi equation governing the evolution of some initial field configuration through time, which will be different from the classical one due to the presence of the quantum potential. The guidance relation is now given by

$$\dot{\phi}(x,t) = \left.\frac{\delta S}{\delta\phi(x)}\right|_{\phi(x)=\phi(x,t)} \qquad (8)$$
The second equation is the continuity equation for the probability density A²[φ(x), t₀] of having the initial field configuration at time t₀ given by φ(x).
A detailed analysis of the Bohm-de Broglie interpretation of quantum field theory is given in Ref. [34] for the case of quantum electrodynamics.
III The Bohm-de Broglie Interpretation of Minisuperspace Canonical Quantum Cosmology
In this section, we summarize the rules of the Bohm-de Broglie interpretation of quantum cosmology in the case of homogeneous minisuperspace models. When we are restricted to homogeneous models, the supermomentum constraint of GR is identically zero, and the shift function can be set to zero without losing any of Einstein's equations. The hamiltonian is reduced to the general minisuperspace form:
where p_a(t) and q_a(t) represent the homogeneous degrees of freedom coming from Π^{ij}(x, t) and h_{ij}(x, t). The minisuperspace Wheeler-DeWitt equation is:
Writing Ψ = R exp(iS/ℏ), and substituting it into (10), we obtain the following equation:
where
and f_{ab}(q_μ) and U(q_μ) are the minisuperspace particularizations of the DeWitt metric G_{ijkl} [36] and of the scalar curvature density −h^{1/2}R^{(3)}(h_{ij}) of the spacelike hypersurfaces, respectively. The causal interpretation applied to quantum cosmology states that the trajectories q_a(t) are real, independently of any observations. Eq. (11) is the Hamilton-Jacobi equation for them, which is the classical one amended with a quantum potential term (12), responsible for the quantum effects. This suggests defining:
where the momenta are related to the velocities in the usual way:
To obtain the quantum trajectories we have to solve the following system of first order differential equations:
Eqs. (15) are invariant under time reparametrization. Hence, even at the quantum level, different choices of N(t) yield the same spacetime geometry for a given non-classical solution q_a(t). There is no problem of time in the causal interpretation of minisuperspace quantum cosmology.
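As an illustration of how such a system of guidance equations can be integrated in practice, the sketch below evolves a two-variable minisuperspace (α, φ) with an assumed DeWitt-type metric and a toy phase S. Both are placeholders, not the quantities of any model in this paper; with such a sketch one can also check numerically that rescaling N(t) only reparametrizes the same curve in configuration space.

```python
# Sketch: integrating guidance equations of the form dq^a/dt = N * f^{ab}(q) * dS/dq^b
# for a toy two-variable minisuperspace. Metric and phase are assumptions.
import numpy as np
from scipy.integrate import solve_ivp

def f_inverse(alpha):
    # assumed minisuperspace metric: diag(-1, 1) * exp(-3*alpha)
    return np.diag([-1.0, 1.0]) * np.exp(-3.0 * alpha)

def grad_S(q):
    # toy phase S = k*(phi - alpha), so dS/dq = (-k, k)
    k = 1.0
    return np.array([-k, k])

def rhs(t, q, N=1.0):
    return N * f_inverse(q[0]) @ grad_S(q)

sol = solve_ivp(rhs, (0.0, 10.0), y0=[0.0, 0.0], max_step=0.01)
alpha_t, phi_t = sol.y     # the quantum trajectory in configuration space
```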
Let us now apply these rules, as examples, to minisuperspace models with a free massless scalar field. Take the lagrangian:
For ω = −1 we have effective string theory without the Kalb-Ramond field. For ω = −3/2 we have a conformally coupled scalar field. Performing the conformal transformation g_{μν} = e^{φ} ḡ_{μν}, we obtain the following lagrangian:
where the bars have been omitted. We will define C_ω ≡ ω + 3/2.
III.1. The isotropic case
We consider now the Robertson-Walker metric
where the spatial curvature takes the values 0, 1, −1. Inserting this in the lagrangian (17), and using units where ℏ = c = 1, we obtain the following action:
where V is the total volume divided by a³ of the spacelike hypersurfaces, which are supposed to be closed, and l_p is the Planck length. V depends on the value of the spatial curvature and on the topology of the hypersurfaces. For zero curvature it can be as large as we want because the fundamental polyhedra can have arbitrary size. In the case of curvature 1 and topology S³, V = 2π². Defining a suitable constant b², the hamiltonian turns out to be:
where
Usually the scale factor has dimensions of length because we use angular coordinates in closed spaces. Hence we will define a dimensionless scale factor ã ≡ a/b. In that case the hamiltonian becomes, omitting the tilde:
As b appears as an overall multiplicative constant in the hamiltonian, we can set it equal to one without any loss of generality, keeping in mind that the scale factor which appears in the metric is ba, not a. We can further simplify the hamiltonian by defining α ≡ ln(a), obtaining
where
The momentum p_φ is a constant of motion. We will restrict ourselves to the physically interesting case, due to observations, of zero spatial curvature and C_ω > 0.
The classical solutions in the gauge N = 1 are,
where $c_1$ is an integration constant. In terms of cosmic time they are:
The solutions contract or expand forever from a singularity, depending on the sign of $p_\phi$, without any inflationary epoch.
Let us now quantize the model. With a particular choice of factor ordering, we obtain the following Wheeler-DeWitt equation
Employing the separation of variables method, we obtain the general solution
where k is a separation constant, and
and
We will now make Gaussian superpositions of these solutions and interpret the results using the causal interpretation of quantum mechanics. The function $F(k)$ is:
We take the wave function:
with a2 = b2 = 0.
Performing the integration in $k$ we obtain for $\Psi$ (we will define $\phi$ and omit the bars from now on)
In order to obtain the Bohmian trajectories, we have to calculate the phase S of the above wave function and substitute it into the guidance formula
where
We will work in the gauge N = 1. These equations constitute a planar system which can be easily studied:
The line $\alpha = 0$ divides configuration space in two symmetric regions. The line $\phi = 0$ contains all singular points of this system, which are nodes and centers. The nodes appear when the denominator of the above equations, which is proportional to the norm of the wave function, is zero. No trajectory can pass through these points. They happen when $\phi = 0$ and $\cos(d\alpha) = 0$, or $\alpha = (2n+1)\pi/2d$, $n$ an integer, with separation $\pi/d$. The center points appear when the numerators are zero. They are given by $\phi = 0$ and $\alpha = 2d[\cot(d\alpha)]/\sigma^2$. They are intercalated with the node points. As $|\alpha| \to \infty$ these points tend to $n\pi/d$, and their separations cannot exceed $\pi/d$. As one can see from the above system, the classical solutions ($a(t) \propto t^{1/3}$) are recovered when $|\alpha| \to \infty$ or $|\phi| \to \infty$, the other being different from zero.
There are plenty of different possibilities of evolution, depending on the initial conditions. Near the center points we can have oscillating universes without singularities and with amplitude of oscillation of order 1. For negative values of $\alpha$, the universe arises classically from a singularity but quantum effects become important, forcing it to recollapse to another singularity, recovering classical behaviour near it. For positive values of $\alpha$, the universe contracts classically but when the scale factor is small enough quantum effects become important, creating an inflationary phase which avoids the singularity. The universe contracts to a minimum size and after reaching this point it expands forever, recovering the classical limit when $a$ becomes sufficiently large. We can see that for $\alpha$ negative we have the classical limit for small scale factor, while for $\alpha$ positive we have the classical limit for big scale factor.
III.2. The anisotropic case
To exemplify the quantum isotropization of the Universe, let us take now, instead of the Friedman-Robertson-Walker of Eq. (18), the homogeneous and anisotropic Bianchi I line element
This line element will be isotropic if and only if $\beta_+(t)$ and $\beta_-(t)$ are constants. Inserting Eq. (43) into the Lagrangian (17), supposing that the scalar field $\phi$ depends only on time, discarding surface terms, and performing a Legendre transformation, we obtain the following minisuperspace classical Hamiltonian
where $(p_0, p_+, p_-, p_\phi)$ are canonically conjugate to $(\beta_0, \beta_+, \beta_-, \phi)$, respectively, and we have made a trivial redefinition of $\phi$.
We can write this Hamiltonian in a compact form by defining $y^\mu = (\beta_0, \beta_+, \beta_-, \phi)$ and their canonical momenta $p_\mu = (p_0, p_+, p_-, p_\phi)$, obtaining
where $\eta^{\mu\nu}$ is the Minkowski metric with signature $(-\,+\,+\,+)$. The equations of motion are the constraint equation obtained by varying the Hamiltonian with respect to the lapse function $N$
and Hamilton's equations
The solution to these equations in the gauge $N = 12\exp(3y^0)$ is
where the momenta $p_\nu$ are constants due to the equations of motion and the $C^\mu$ are integration constants. We can see that the only way to obtain isotropy in these solutions is by making $p_1 = p_+ = 0$ and $p_2 = p_- = 0$, which yield solutions that are always isotropic, the usual Friedmann-Robertson-Walker (FRW) solutions with a scalar field. Hence, there is no anisotropic solution in this model which can classically become isotropic during the course of its evolution. Once anisotropic, always anisotropic. If we suppress the $\phi$ degree of freedom, the unique isotropic solution is flat spacetime, because in this case the constraint (46) enforces $p_0 = 0$.
To discuss the appearance of singularities, we need the Weyl square tensor $W^2 \equiv W_{\alpha\beta\mu\nu}W^{\alpha\beta\mu\nu}$. It reads
Hence, the Weyl square tensor is proportional to $\exp(-12\beta_0)$ because the $p$'s are constants (see Eq. (48)) and the singularity is at $t = -\infty$. The classical singularity can be avoided only if we set $p_0 = 0$. But then, due to equation (46), we would also have $p_i = 0$, which corresponds to the trivial case of flat spacetime. Hence, the unique classical solution which is non-singular is the trivial flat spacetime solution.
The Dirac quantization procedure yields the Wheeler-DeWitt equation, which in the present case reads
Let us now investigate spherical-wave solutions of Eq. (51). They read
where $y \equiv \sqrt{(y^1)^2 + (y^2)^2 + (y^3)^2}$.
The guidance relations in the gauge $N = 12\exp(3y^0)$ (see Eqs. (47)) read
where S is the phase of the wave function. In terms of f and g the above equations read
where the prime means derivative with respect to the argument of the functions f and g, and Im(z) is the imaginary part of the complex number z.
From Eq. (56) we obtain that
which implies that $y^i(t) = c^i_j\, y^j(t)$, with no sum in $j$, where the $c^i_j$ are real constants and $c^j_j = 1$. Hence, apart from some positive multiplicative constant, knowing one of the $y^i$ means knowing all the $y^i$. Consequently, we can reduce the four equations (55) and (56) to a planar system by writing $y = C|y^3|$, with $C > 1$, and working only with $y^0$ and $y^3$, say. The planar system now reads
Note that if $f = g$, $y^3$ stabilizes at $y^3 = 0$ because $\dot{y}^3$, as well as all other time derivatives of $y^3$, is zero at this line. As $y^i(t) = c^i_j y^j(t)$, all the $y^i(t)$ become zero, and the cosmological model isotropizes forever once $y^3$ reaches this line. Of course one can find solutions where $y^3$ never reaches this line, but in this case there must be some region where $\dot{y}^3 = 0$, which implies $\dot{y}^i = 0$, and this is an isotropic region. Consequently, quantum anisotropic cosmological models with $f = g$ always have an isotropic phase, which can become permanent in many cases.
IV The Bohm-de Broglie Interpretation of Superspace Canonical Quantum Cosmology
In this section, we will quantize General Relativity Theory (GR) without making any simplifications or cutting of degrees of freedom. The matter content is a minimally coupled scalar field with an arbitrary potential. All subsequent results remain essentially the same for any matter field which couples uniquely with the metric, not with its derivatives.
The classical hamiltonian of full GR with a scalar field is given by:
where
In these equations, $h_{ij}$ is the metric of closed 3-dimensional spacelike hypersurfaces, and $\Pi^{ij}$ is its canonical momentum, given by
where
is the extrinsic curvature of the hypersurfaces (indices are raised and lowered by the 3-metric $h_{ij}$ and its inverse $h^{ij}$). The canonical momentum of the scalar field is now
The quantity $R^{(3)}$ is the intrinsic curvature of the hypersurfaces and $h$ is the determinant of $h_{ij}$. The lapse function $N$ and the shift function $N^j$ are the Lagrange multipliers of the super-hamiltonian constraint $\mathcal{H} \approx 0$ and the super-momentum constraint $\mathcal{H}_j \approx 0$, respectively. They are present due to the invariance of GR under spacetime coordinate transformations. The quantities $G_{ijkl}$ and its inverse $G^{ijkl}$ ($G^{ijkl}G_{ijab} = \delta^{kl}_{ab}$) are given by
which is called the DeWitt metric. The quantity $D_i$ is the $i$-component of the covariant derivative operator on the hypersurface, and $\kappa = 16\pi G/c^4$.
The classical 4-metric
and the scalar field which are solutions of Einstein's equations can be obtained from Hamilton's equations of motion
for some choice of $N$ and $N^i$, and if we impose initial conditions compatible with the constraints
It is a feature of the hamiltonian of GR that the 4-metrics (68) constructed in this way, with the same initial conditions, describe the same four-geometry for any choice of $N$ and $N^i$. The algebra of the constraints closes in the following form (we follow the notation of Ref. [25]):
To quantize this constrained system, we follow the Dirac quantization procedure. The constraints become conditions imposed on the possible states of the quantum system, yielding the following quantum equations:
In the metric and field representation, the first equation is
which implies that the wave functional $\Psi$ is an invariant under space coordinate transformations.
The second equation is the Wheeler-DeWitt equation [35, 36]. Writing it unregulated in the coordinate representation we get
where V is the classical potential given by
This equation involves products of local operators at the same space point, hence it must be regularized. After doing this, one should find a factor ordering which makes the theory free of anomalies, in the sense that the commutator of the operator version of the constraints close in the same way as their respective classical Poisson brackets (75). Hence, Eq. (79) is only a formal one which must be worked out [28, 29, 30].
Let us now see what is the Bohm-de Broglie interpretation of the solutions of Eqs. (76) and (77) in the metric and field representation. First we write the wave functional in polar form $\Psi = A\,\exp(iS/\hbar)$, where $A$ and $S$ are functionals of $h_{ij}$ and $\phi$. Substituting it in Eq. (78), we get two equations saying that $A$ and $S$ are invariant under general space coordinate transformations:
The two equations we obtain for $A$ and $S$ when we substitute $\Psi = A\,\exp(iS/\hbar)$ into Eq. (77) will of course depend on the factor ordering we choose. However, in any case, one of the equations will have the form
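(In a form consistent with the definitions of this section; the factor of $\kappa$ on the gravitational term and the $1/2$ on the scalar-field term are assumptions of this rendering:)
$$\kappa\, G_{ijkl}\,\frac{\delta S}{\delta h_{ij}}\frac{\delta S}{\delta h_{kl}} + \frac{1}{2}\, h^{-1/2}\!\left(\frac{\delta S}{\delta \phi}\right)^{2} + V + Q = 0 \qquad (83)$$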
where V is the classical potential given in Eq. (80). Contrary to the other terms in Eq. (83), which are already well defined, the precise form of Q depends on the regularization and factor ordering which are prescribed for the Wheeler-DeWitt equation. In the unregulated form given in Eq. (79), Q is
Also, the other equation besides (83) in this case is
Let us now implement the Bohm-de Broglie interpretation for canonical quantum gravity. First of all we note that Eqs. (81) and (83), which are always valid irrespective of any factor ordering of the Wheeler-DeWitt equation, are like the Hamilton-Jacobi equations for GR, supplemented by an extra term Q in the case of Eq. (83), which we will call the quantum potential. By analogy with the cases of non-relativistic particle and quantum field theory in flat spacetime, we will postulate that the 3-metric of spacelike hypersurfaces, the scalar field, and their canonical momenta always exist, independently of any observation, and that the evolution of the 3-metric and scalar field can be obtained from the guidance relations
with $\Pi^{ij}$ and $\Pi_\phi$ given by Eqs. (63) and (65), respectively. Like before, these are first order differential equations which can be integrated to yield the 3-metric and scalar field for all values of the $t$ parameter. These solutions depend on the initial values of the 3-metric and scalar field at some initial hypersurface. The evolution of these fields will of course be different from the classical one due to the presence of the quantum potential term $Q$ in Eq. (83). The classical limit is once more conceptually very simple: it is given by the limit where the quantum potential $Q$ becomes negligible with respect to the classical energy. The only difference from the previous cases of the non-relativistic particle and quantum field theory in flat spacetime is the fact that the equivalent of Eqs. (3) and (7) for canonical quantum gravity, which in the naive ordering is Eq. (85), cannot be interpreted as a continuity equation for a probability density $A^2$ because of the hyperbolic nature of the DeWitt metric $G_{ijkl}$. However, even without a notion of probability, which in this case would mean the probability density distribution for initial values of the 3-metric and scalar field in an initial hypersurface, we can extract a lot of information from Eq. (83) whatever the quantum potential $Q$ is, as we will see now. After we get these results, we will return to this probability issue in the last section.
First we note that, whatever is the form of the quantum potential Q, it must be a scalar density of weight one. This comes from the Hamilton-Jacobi equation (83). From this equation we can express Q as
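(rearranging the Hamilton-Jacobi equation above, with the same assumed factors:)
$$Q = -\kappa\, G_{ijkl}\,\frac{\delta S}{\delta h_{ij}}\frac{\delta S}{\delta h_{kl}} - \frac{1}{2}\, h^{-1/2}\!\left(\frac{\delta S}{\delta \phi}\right)^{2} - V$$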
As $S$ is an invariant (see Eq. (81)), then $\delta S/\delta h_{ij}$ and $\delta S/\delta \phi$ must be a second rank tensor density and a scalar density, both of weight one, respectively. When their products are contracted with $G_{ijkl}$ and multiplied by $h^{-1/2}$, respectively, they form a scalar density of weight one. As $V$ is also a scalar density of weight one, then $Q$ must also be. Furthermore, $Q$ must depend only on $h_{ij}$ and $\phi$ because it comes from the wave functional, which depends only on these variables. Of course it can be non-local (we show an example in the appendix), i.e., depending on integrals of the fields over the whole space, but it cannot depend on the momenta.
Now we will investigate the following important problem. From the guidance relations (86) and (87), which will be written in the form
and
we obtain the following first order partial differential equations:
and
The question is, given some initial 3-metric and scalar field, what kind of structure do we obtain when we integrate these equations in the parameter $t$? Does this structure form a 4-dimensional geometry with a scalar field for any choice of the lapse and shift functions? Note that if the functional $S$ were a solution of the classical Hamilton-Jacobi equation, which does not contain the quantum potential term, then the answer would be in the affirmative because we would be in the scope of GR. But $S$ is a solution of the modified Hamilton-Jacobi equation (83), and we cannot guarantee that this will continue to be true. We may obtain a completely different structure due to the quantum effects driven by the quantum potential term in Eq. (86). To answer this question we will move from this Hamilton-Jacobi picture of quantum geometrodynamics to a hamiltonian picture. This is because many strong results concerning geometrodynamics were obtained in this latter picture [25, 37]. We will construct a hamiltonian formalism which is consistent with the guidance relations (86) and (87). It yields the bohmian trajectories (91) and (92) if the guidance relations are satisfied initially. Once we have this hamiltonian, we can use well known results in the literature to obtain strong results about the Bohm-de Broglie view of quantum geometrodynamics.
Examining Eqs. (81) and (83), we can easily show [20] that the hamiltonian which generates the bohmian trajectories, once the guidance relations (86) and (87) are satisfied initially, is given by:
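(written here in the notation used below; our rendering:)
$$H_Q = \int d^3x \left[\, N(x)\,\mathcal{H}_Q(x) + N^i(x)\,\mathcal{H}_i(x) \,\right] \qquad (93)$$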
where we define
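(our rendering of the definition, i.e. the GR super-hamiltonian augmented by the quantum potential:)
$$\mathcal{H}_Q \equiv \mathcal{H} + Q$$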
The quantities $\mathcal{H}$ and $\mathcal{H}_i$ are the usual GR super-hamiltonian and super-momentum constraints given by Eqs. (61) and (62). In fact, the guidance relations (86) and (87) are consistent with the constraints $\mathcal{H}_Q \approx 0$ and $\mathcal{H}_i \approx 0$ because $S$ satisfies (81) and (83). Furthermore, they are conserved by the hamiltonian evolution given by (93) [20].
We now have a hamiltonian, $H_Q$, which generates the bohmian trajectories once the guidance relations (86) and (87) are imposed initially. In the following, we can investigate whether the evolution of the fields driven by $H_Q$ forms a four-geometry as in classical geometrodynamics. First we recall a result obtained by Claudio Teitelboim [37]. In this paper, he shows that if the 3-geometries and field configurations defined on hypersurfaces are evolved by some hamiltonian with the form
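(that is, a Hamiltonian of the form, in our notation,)
$$\bar{H} = \int d^3x \left[\, N(x)\,\bar{\mathcal{H}}(x) + N^i(x)\,\bar{\mathcal{H}}_i(x) \,\right]$$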
and if this evolution can be viewed as the "motion" of a 3-dimensional cut in a 4-dimensional spacetime (the 3-geometries can be embedded in a four-geometry), then the constraints $\bar{\mathcal{H}} \approx 0$ and $\bar{\mathcal{H}}_i \approx 0$ must satisfy the following algebra
The constant $\epsilon$ in (96) can be $\pm 1$, depending on whether the four-geometry in which the 3-geometries are embedded is euclidean ($\epsilon = 1$) or hyperbolic ($\epsilon = -1$). These are the conditions for the existence of spacetime.
The above algebra is the same as the algebra (75) of GR if we choose $\epsilon = -1$. But the hamiltonian (93) is different from the hamiltonian of GR only by the presence of the quantum potential term $Q$ in $\mathcal{H}_Q$. The Poisson bracket $\{\mathcal{H}_i(x), \mathcal{H}_j(x')\}$ satisfies Eq. (98) because the $\mathcal{H}_i$ of $H_Q$ defined in Eq. (93) is the same as in GR. Also $\{\mathcal{H}_i(x), \mathcal{H}_Q(x')\}$ satisfies Eq. (97) because $\mathcal{H}_i$ is the generator of spatial coordinate transformations, and as $\mathcal{H}_Q$ is a scalar density of weight one (remember that $Q$ must be a scalar density of weight one), it must satisfy this Poisson bracket relation with $\mathcal{H}_i$. What remains to be verified is whether the Poisson bracket $\{\mathcal{H}_Q(x), \mathcal{H}_Q(x')\}$ closes as in Eq. (96). We now recall the result of Ref. [25]. There it is shown that if a general super-hamiltonian $\bar{\mathcal{H}}$ satisfies Eq. (96), is a scalar density of weight one, whose geometrical degrees of freedom are given only by the three-metric $h_{ij}$ and its canonical momentum, and contains only even powers and no non-local term in the momenta (together with the other requirements, these last two conditions are also satisfied by $\mathcal{H}_Q$ because it is quadratic in the momenta and the quantum potential does not contain any non-local term in the momenta), then it must have the following form:
where
With this result we can now establish two possible scenarios for the Bohm-de Broglie quantum geometrodynamics, depending on the form of the quantum potential:
IV.1. Quantum geometrodynamics evolution is consistent and forms a non degenerate four-geometry
In this case, the Poisson bracket $\{\mathcal{H}_Q(x), \mathcal{H}_Q(x')\}$ must satisfy Eq. (96). Then $Q$ must be such that $V + Q = V_G$, with $V$ given by (80), yielding:
Then we have two possibilities:
IV.1.1. The spacetime is hyperbolic ($\epsilon = -1$)
In this case Q is
Hence $Q$ is like a classical potential. Its effect is to renormalize the cosmological constant and the classical scalar field potential, nothing more. The quantum geometrodynamics is indistinguishable from the classical one. It is not necessary to require the classical limit $Q = 0$ because $V_G = V + Q$ may already describe the classical universe we live in.
IV.1.2. The spacetime is euclidean ($\epsilon = 1$)
In this case Q is
Now $Q$ not only renormalizes the cosmological constant and the classical scalar field potential but also changes the signature of spacetime. The total potential $V_G = V + Q$ may describe some era of the early universe when it had euclidean signature, but not the present era, when it is hyperbolic. The transition between these two phases must happen in a hypersurface where $Q = 0$, which is the classical limit.
We can conclude from these considerations that if a quantum spacetime exists with different features from the classical observed one, then it must be euclidean. In other words, the sole relevant quantum effect which maintains the non-degenerate nature of the four-geometry of spacetime is its change of signature to a euclidean one. The other quantum effects are either irrelevant or break completely the spacetime structure. This result points in the direction of Ref. [38].
IV.2. Quantum geometrodynamics evolution is consistent but does not form a non degenerate four-geometry
In this case, the Poisson bracket $\{\mathcal{H}_Q(x), \mathcal{H}_Q(x')\}$ does not satisfy Eq. (96) but is weakly zero in some other way. Some examples are given in Ref. [40]. They are real solutions of the Wheeler-DeWitt equation, where $Q = -V$, and non-local quantum potentials. It is very important to use the guidance relations to close the algebra in these cases. It means that the hamiltonian evolution with the quantum potential is consistent only when restricted to the bohmian trajectories. For other trajectories, it is inconsistent. Concluding, when restricted to the bohmian trajectories, an algebra which does not close in general may close, as shown in the examples above. This is an important remark on the Bohm-de Broglie interpretation of canonical quantum cosmology, which sometimes is not noticed.
In the examples above, we have explicitly obtained the "constants" of the algebra that characterizes the "pre-four-geometry" generated by $H_Q$, i.e., the foam-like structure pointed out a long time ago in early works of J. A. Wheeler [35, 42].
V Conclusion and Discussions
The Bohm-de Broglie interpretation of canonical quantum cosmology yields a quantum geometrodynamical picture where the bohmian quantum evolution of three-geometries may form, depending on the wave functional, a consistent non degenerate four geometry which must be euclidean (but only for a very special local form of the quantum potential), and a consistent but degenerate four-geometry indicating the presence of special vector fields and the breaking of the spacetime structure as a single entity (in a wider class of possibilities). Hence, in general, and always when the quantum potential is non-local, spacetime is broken. The three-geometries evolved under the influence of a quantum potential do not in general stick together to form a non degenerate four-geometry, a single spacetime with the causal structure of relativity. This is not surprising, as it was anticipated long ago [42]. Among the consistent bohmian evolutions, the more general structures that are formed are degenerate four-geometries with alternative causal structures. We obtained these results taking a minimally coupled scalar field as the matter source of gravitation, but it can be generalized to any matter source with non-derivative couplings with the metric, like Yang-Mills fields.
As shown in the previous section, a non degenerate four-geometry can be attained only if the quantum potential has the specific form (101). In this case, the sole relevant quantum effect will be a change of signature of spacetime, something pointing towards Hawking's ideas.
In the case of consistent quantum geometrodynamical evolution but with degenerate four-geometry, we have shown that any real solution of the Wheeler-DeWitt equation yields a structure which is the idealization of the strong gravity limit of GR. This type of geometry, which is degenerate, has already been studied [41]. Due to the generality of this picture (it is valid for any real solution of the Wheeler-DeWitt equation, which is a real equation), it deserves further attention. It may well be that these degenerate four-metrics were the correct quantum geometrodynamical description of the young universe. It would be also interesting to investigate if these structures have a classical limit yielding the usual four-geometry of classical cosmology.
For non-local quantum potentials, we have shown that apparently inconsistent quantum evolutions are in fact consistent if restricted to the bohmian trajectories satisfying the guidance relations (86) and (87). This is a point which is sometimes not taken into account.
If we want to be strict and impose that quantum geometrodynamics does not break spacetime, then we will have stringent boundary conditions. As said above, a non degenerate four-geometry can be obtained only if the quantum potential has the form (101). This is a severe restriction on the solutions of the Wheeler-DeWitt equation.
These restrictions on the form of the quantum potential do not occur in minisuperspace models [19] because there the hypersurfaces are restricted to be homogeneous. The only freedom we have is in the time parametrization of the homogeneous hypersurfaces which foliate spacetime. There is a single constraint, which of course always commutes with itself, irrespective of the quantum potential. The theorem proven in Ref. [25], which was essential in all the reasoning of the last section, cannot be used here because minisuperspace models do not satisfy one of its hypotheses. In section 3 we studied quantum effects in such minisuperspace models and we showed that they can avoid singularities, isotropize the Universe, and create inflationary epochs. It should be very interesting to investigate if these quantum phases of the Universe may have left some traces which could be detected now, as in the anisotropies of the cosmic microwave background radiation.
As we have seen, in the Bohm-de Broglie approach we can investigate further what kind of structure is formed in quantum geometrodynamics by using the Poisson bracket relation (96), and the guidance relations (91) and (92). By assuming the existence of 3-geometries, field configurations, and their momenta, independently of any observations, the Bohm-de Broglie interpretation allows us to use classical tools, like the hamiltonian formalism, to understand the structure of quantum geometry. Whether this information is useful, we do not know. Already in the two-slit experiment in non-relativistic quantum mechanics, the Bohm-de Broglie interpretation allows us to say which slit the particle has passed through: if it arrives at the upper half of the screen it must have come from the upper slit, and vice-versa. Such information we do not have in the many-worlds interpretation. However, this information is useless: we can neither check it nor use it in other experiments. In canonical quantum cosmology the situation may be the same. The Bohm-de Broglie interpretation yields a lot of information about quantum geometrodynamics which we cannot obtain from the many-worlds interpretation, but this information may be useless. However, we cannot answer this question precisely if we do not investigate further, and the tools are at our disposal.
We would like to remark that all these results were obtained without assuming any particular factor ordering and regularization of the Wheeler-DeWitt equation. Also, we did not use any probabilistic interpretation of the solutions of the Wheeler-DeWitt equation. Hence, it is a quite general result. However, we would like to make some comments about the probability issue in quantum cosmology. The Wheeler-DeWitt equation when applied to a closed universe does not yield a probabilistic interpretation for their solutions because of its hyperbolic nature. However, it has been suggested many times [21, 43, 44, 45, 46] that at the semiclassical level we can construct a probability measure with the solutions of the Wheeler-DeWitt equation. Hence, for interpretations where probabilities are essential, the problem of finding a Hilbert space for the solutions of the Wheeler-DeWitt equation becomes crucial if someone wants to get some information above the semiclassical level. Of course, probabilities are also useful in the Bohm-de Broglie interpretation. When we integrate the guidance relations (91) and (92), the initial conditions are arbitrary, and it should be nice to have some probability distribution on them. However, as we have seen along this paper, we can extract a lot of information from the full quantum gravity level using the Bohm-de Broglie interpretation, without appealing to any probabilistic notion.
It would also be important to investigate the Bohm-de Broglie interpretation for other quantum gravitational systems, like black holes. Attempts in this direction have been made, but within spherical symmetry in empty space [47], where we have only a finite number of degrees of freedom. It should be interesting to investigate more general models.
The conclusions of this paper are of course limited by many strong assumptions we have tacitly made, as supposing that a continuous three-geometry exists at the quantum level (quantum effects could also destroy it), or the validity of quantization of standard GR, forgetting other developments like string theory. However, even if this approach is not the appropriate one, it is nice to see how far we can go with the Bohm-de Broglie interpretation, even in such incomplete stage of canonical quantum gravity. It seems that the Bohm-de Broglie interpretation may at least be regarded as a nice "gauge" [48] to be used in quantum cosmology, as, probably, it will prove harder, or even impossible, to reach the detailed conclusions of this paper using other interpretations. Furthermore, if the finer view of the Bohm-de Broglie interpretation of quantum cosmology can yield useful information in the form of observational effects, then we will have means to decide between interpretations, something that will be very important not only for quantum cosmology, but for quantum theory itself.
Acknowledgments
We would like to thank CNPq of Brazil for financial support.
References
[1] N. Bohr, Atomic Physics and Human Knowledge (Science Editions, New York, 1961); N. Bohr, Phys. Rev. 48, 696 (1935).
[2] W. Heisenberg, The Physical Principles of the Quantum Theory (Dover, New York, 1949).
[3] J. von Neumann, Mathematical Foundations of Quantum Mechanics (Princeton University Press, Princeton, 1955).
[4] R. Omnès, The Interpretation of Quantum Mechanics (Princeton University Press, Princeton, 1994).
[5] H. D. Zeh, Found. Phys. 1, 69 (1970); E. Joos and H. D. Zeh, Z. Phys. B 59, 223 (1985); W. H. Zurek, Phys. Rev. D 26, 1862 (1982); W. H. Zurek, Phys. Today 44, 36 (1991).
[6] C. Kiefer, Class. Quantum Grav. 18, 379 (1991); D. Giulini, E. Joos, C. Kiefer, J. Kupsch, I. O. Stamatescu and H. D. Zeh, Decoherence and the Appearance of a Classical World in Quantum Theory (Springer-Verlag, Berlin, 1996).
[7] V. F. Mukhanov, in Physical Origins of Time Asymmetry, ed. by J. J. Halliwell, J. Pérez-Mercader and W. H. Zurek (Cambridge University Press, 1994).
[8] H. D. Zeh, in Decoherence and the Appearance of a Classical World in Quantum Theory (Springer-Verlag, Berlin, 1996).
[9] M. Gell-Mann and J. B. Hartle, in Complexity, Entropy and the Physics of Information, ed. by W. H. Zurek (Addison Wesley, 1990).
[10] J. P. Paz and W. H. Zurek, Phys. Rev. D 48, 2728 (1993).
[11] G. C. Ghirardi, A. Rimini and T. Weber, Phys. Rev. D 34, 470 (1986); G. C. Ghirardi, P. Pearle and A. Rimini, Phys. Rev. A 42, 78 (1990).
[12] R. Penrose, in Quantum Implications: Essays in Honour of David Bohm, ed. by B. J. Hiley and F. David Peat (Routledge, London, 1987).
[13] The Many-Worlds Interpretation of Quantum Mechanics, ed. by B. S. DeWitt and N. Graham (Princeton University Press, Princeton, 1973).
[14] D. Bohm, Phys. Rev. 85, 166 (1952); D. Bohm, B. J. Hiley and P. N. Kaloyerou, Phys. Rep. 144, 349 (1987).
[15] P. R. Holland, The Quantum Theory of Motion: An Account of the de Broglie-Bohm Causal Interpretation of Quantum Mechanics (Cambridge University Press, Cambridge, 1993).
[16] J. C. Vink, Nucl. Phys. B 369, 707 (1992).
[17] Y. V. Shtanov, Phys. Rev. D 54, 2564 (1996).
[18] A. Valentini, Phys. Lett. A 158, 1 (1991).
[19] J. A. de Barros and N. Pinto-Neto, Int. J. of Mod. Phys. D 7, 201 (1998).
[20] N. Pinto-Neto and E. S. Santini, Phys. Rev. D 59, 123517 (1999).
[21] J. Kowalski-Glikman and J. C. Vink, Class. Quantum Grav. 7, 901 (1990).
[22] E. J. Squires, Phys. Lett. A 162, 35 (1992).
[23] J. A. de Barros, N. Pinto-Neto and M. A. Sagioro-Leal, Phys. Lett. A 241, 229 (1998).
[24] R. Colistete Jr., J. C. Fabris and N. Pinto-Neto, Phys. Rev. D 57, 4707 (1998).
[25] S. A. Hojman, K. Kuchař and C. Teitelboim, Ann. Phys. 96, 88 (1976).
[26] E. Cartan, Annales Scientifiques de l'Ecole Normale Supérieure 40, 325 (1923) and 41, 1 (1924).
[27] N. C. Tsamis and R. P. Woodard, Phys. Rev. D 36, 3641 (1987).
[28] K. Maeda and M. Sakamoto, Phys. Rev. D 54, 1500 (1996).
[29] T. Horiguchi, K. Maeda and M. Sakamoto, Phys. Lett. B 344, 105 (1995).
[30] J. Kowalski-Glikman and K. A. Meissner, Phys. Lett. B 376, 48 (1996).
[31] J. M. Lévy-Leblond, Ann. Inst. Henri Poincaré 3, 1 (1965).
[32] D. Bohm and J. P. Vigier, Phys. Rev. 96, 208 (1954).
[33] A. Valentini, Phys. Lett. A 156, 5 (1991).
[34] P. N. Kaloyerou, Phys. Rep. 244, 287 (1994).
[35] J. A. Wheeler, in Battelle Rencontres: 1967 Lectures in Mathematical Physics, ed. by B. DeWitt and J. A. Wheeler (Benjamin, New York, 1968).
[36] B. S. DeWitt, Phys. Rev. 160, 1113 (1967).
[37] C. Teitelboim, Ann. Phys. 80, 542 (1973).
[38] Euclidean Quantum Gravity, ed. by G. W. Gibbons and S. W. Hawking (World Scientific, London, 1993).
[39] C. Teitelboim, Phys. Rev. D 25, 3159 (1982).
[40] N. Pinto-Neto and E. Sergio Santini, 'Geometrodinâmica quântica na interpretação de Bohm-de Broglie: o espaço-tempo quântico deve ser euclideano?', this volume.
[41] G. Dautcourt, report gr-qc/9801093.
[42] J. A. Wheeler, Ann. Phys. 2, 604 (1957); J. A. Wheeler, in Relativity, Groups and Topology, ed. by B. DeWitt and C. DeWitt (Gordon and Breach, New York, 1964); G. M. Patton and J. A. Wheeler, in Quantum Gravity. An Oxford Symposium, ed. by C. J. Isham, R. Penrose and D. Sciama (Clarendon Press, Oxford, 1975).
[43] T. Banks, Nucl. Phys. B 249, 332 (1985).
[44] T. P. Singh and T. Padmanabhan, Ann. Phys. 196, 296 (1989).
[45] D. Giulini and C. Kiefer, Class. Quantum Grav. 12, 403 (1995).
[46] J. J. Halliwell, in Quantum Cosmology and Baby Universes, ed. by S. Coleman, J. B. Hartle, T. Piran and S. Weinberg (World Scientific, Singapore, 1991).
[47] M. Kenmoku, H. Kubotani, E. Takasugi and Y. Yamazaki, report gr-qc/9810039.
[48] We thank Brandon Carter for this image.
[49] K. Kuchař, Phys. Rev. D 50, 3961 (1994).
[50] J. Louko and S. N. Winters-Hilt, Phys. Rev. D 54, 2647 (1996).
[51] T. Brotz and C. Kiefer, Phys. Rev. D 55, 2186 (1997).
1 Although these three authors have different views from quantum theory, the first emphasizing the indivisibility of quantum phenomena, the second with his notion of potentiality, and the third with the concept of quantum states, for all of them the existence of a classical domain is crucial. That is why we group their approaches under the same name "Copenhagen interpretation".
2 For instance, the four geometry of Newtonian spacetime is degenerate [26], and its single null eigenvector is the normal of the absolute hypersurfaces of simultaneity, the time. As we know, it does not form a single spacetime structure because it is broken in absolute space plus absolute time.
3 It has been shown that under typical chaotic situations, and only within the Bohm-de Broglie interpretation, a probability distribution $P \neq A^2$ would rapidly approach the value $P = A^2$ [32, 33]. In this case, the probability postulate would be unnecessary, and we could have situations, in very short time intervals, where this modified Bohm-de Broglie interpretation would differ from the Copenhagen interpretation.
4 The non-locality of Q becomes evident when we generalize the causal interpretation to a many particles system.
5 It should be very interesting to investigate the connection between this bohmian classical limit and the phenomenon of decoherence. To our knowledge, no work has ever been done on this issue, which may illuminate both the Bohm-de Broglie interpretation and the comprehension of decoherence. | 2018-05-26 11:08:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8336162567138672, "perplexity": 483.61932569609826}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867416.82/warc/CC-MAIN-20180526092847-20180526112847-00501.warc.gz"} |
https://support.bioconductor.org/p/101639/ | Question: GenomicFeatures and custom genomes?
david.rinker10 wrote (9 months ago):
Hi I want to use GenomicFeatures to extract upstream sequences from a genome using gene_IDs
My organism (Anopheles melas) has an assembled genome (fasta) and a gtf file.
It seems like these are all the files I should need, however I am not finding it easy to figure out on how to load these two data types for genomicFeatures to use? So far all I have figured out that I need to make a TxDb from gff; I have no idea what to do with the fasta file.
Could someone (BRIEFLY) tell me which packages and objects I need to use and how they all mesh together?
I have read over the manual as well as the Rsamtools manual but am more confused than anything.
Thank you
Martin Morgan wrote (9 months ago):
Depending on your OS, I'd suggest creating a '2bit' (rtracklayer, TwoBitFile) or indexed FASTA (FaFile, Rsamtools) file.
For the gff, I think you can rtracklayer::import() it and get something lighter weight but still useful, or create the TxDb for greater flexibility.
The glue is Biostrings::getSeq(), which will take the 2bit or fasta file as first argument and genomic ranges as second returning the sequences corresponding to the ranges.
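A minimal end-to-end sketch of that getSeq()-based workflow (my own illustration, not from the thread; the file names and the 2000-bp upstream window are placeholders):
library(GenomicFeatures)
library(Rsamtools)
library(Biostrings)
fa  <- "melas_genome.fasta"   # placeholder paths
gtf <- "melas_genes.gtf"
indexFa(fa)                   # writes the .fai index next to the FASTA
genome <- FaFile(fa)
txdb <- makeTxDbFromGFF(gtf, format = "gtf")
gn   <- genes(txdb)           # GRanges of genes, named by gene_id
up   <- trim(promoters(gn, upstream = 2000, downstream = 0))
seqs <- getSeq(genome, up)    # DNAStringSet of upstream sequences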
Hervé Pagès wrote (9 months ago):
Hi,
Using extractUpstreamSeqs() from the GenomicFeatures package:
library(Biostrings)
library(GenomicFeatures)
genome <- readDNAStringSet(fasta_file)
txdb <- makeTxDbFromGFF(gtf_file)
extractUpstreamSeqs(genome, txdb)
If you have a 2-bit file instead of FASTA, replace genome <- readDNAStringSet(fasta_file) with:
library(rtracklayer)
genome <- TwoBitFile(twobit_file)
The 2-bit format supports very efficient random access so this solution avoids loading the full genome sequences in memory. Note however that this won't work if the sequences contain IUAPAC ambiguity letters other than N (the 2-bit format does not support them).
More information and examples in ?extractUpstreamSeqs.
Cheers,
H.
Just realizing now that extractUpstreamSeqs() doesn't take a DNAStringSet object, sorry. We should add this at some point. So for now you could either use:
library(Rsamtools)
genome <- FaFile(fasta_file)
as Martin suggested, or convert your FASTA file to 2-bit format with something like:
export(readDNAStringSet(fasta_file), "mygenome.2bit")
Note that using FaFile on a compressed FASTA file has been reported to be unreliable on Windows in the past.
Hope this helps,
H.
david.rinker10 wrote (9 months ago):
Ok, guess this question is a bust.
If the developers are reading, it might be a useful addition to the manual and/or vignette to outline this process. More and more "non-model" organisms are being sequenced and analyzed--those users are inevitably going to want to do what I'm currently trying to do.
If I can figure this out I'll post my summary here to hopefully enlighten others. | 2018-07-22 19:56:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2969955503940582, "perplexity": 4887.434369909411}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593586.54/warc/CC-MAIN-20180722194125-20180722214125-00187.warc.gz"} |
http://math.stackexchange.com/questions/1653317/probability-and-the-out-of-thing | # Probability and the “out of” thing
I have quite an odd question:
I am not able to fully understand the concept of "out of". If I roll a die once, from a total of $6$ possible outcomes, I'll get 1. Why does that mean a fraction $1\over 6$ = approx $16.67 \%$, and why does that mean that on average, one out of $6$ rolls will give me, for example, a "$1$" on the die?
Where does the fraction say that for every $6$ rolls, I'll get on average one roll I wanted to get.
Why does $5$ out of $7$ mean $5\over 7$, why does that mean that it's on average $5$ per every $7$ people? Because when I want to get $5\over 7$ of something, I divide something into $7$ parts and get $5$. Is that the second look at this matter, that it can be seen like, for example: for every $7$ (divide some number by $7$ to find out how many $7$s are there) and then multiply by $5$, because for every seven that is included in the number it will be $5$.
Are my thoughts correct? How is the correct way of seeing these things?
Thanks for help in advance.
Please don't answer with a 90-page maths thesis; I just want an answer that is an explanation in your own words. What I struggle with is probably the fractions: what does "out of" mean, and why... and you explain everything but this.
-
90 pages? ${ }$ – Byron Schmuland Feb 13 at 20:10
Your final paragraph is apparently in direct reference to my answer. "90 page-long thesis" is a drastic exaggeration, that is a half a page of material and a half a page or so of examples in order to clarify. "You explain everything but this" - I explained the definition of probability in the fourth line, how it can be interpreted as a fraction halfway down, and its interpretation as "out of" in the final paragraph. In order to adequately describe it, I had to explain what basic terms meant instead of assuming you already knew the definitions. – JMoravitz Feb 13 at 21:34
## 5 Answers
Here I define some of the basic terms used in introductory probability.
An experiment is a task/action with measurable/perceivable distinct outcomes. The set of all outcomes is referred to as the sample space. An event is a subset of the sample space.
For example, the experiment might be "roll two dice and compute their sum." The sample space will be $\{2,3,4,5,6,7,8,9,10,11,12\}$. An event such as "the sum is greater than or equal to 10" would refer to the subset of the sample space $\{10,11,12\}$.
The probability of an event occurring is the ratio of times that if we repeat the experiment (independently), we expect the outcome to be one of the outcomes in the event in question.
Note, we require a few properties of a probability distribution as a result of this:
• $Pr(\emptyset)=0$
• $Pr(S)=1$
• If $E\cap F=\emptyset$ then $Pr(E\cup F) = Pr(E)+Pr(F)$
• $0\leq Pr(E)\leq 1$ for all $E$
(in words of measure theory, probability acts as a measure such that $Pr(\Omega)=1$)
It is worth noting that several set theory and counting tools are directly applicable to probability as well such as the principle of inclusion exclusion.
In the special case that all outcomes in the sample space are equiprobable, in other words equally likely to occur, we say that the sample space is unbiased. In this special case, for sample space $S$ and event $E$, we have the following:
$$Pr(E)=\frac{|E|}{|S|}$$
where $|E|$ denotes the number of elements in the set $E$.
In the question of dice rolling, the sample space $\{2,3,4,5,6,7,8,9,10,11,12\}$ is not equiprobable, so we may not use the formula above. We may however use instead the sample space being all ways to roll two differently colored dice and describe our event as a subset of that instead. In this case, we have the sample space $\{(1,1),(1,2),(1,3),(1,4),(1,5),(1,6),(2,1),(2,2),\dots,(6,4),(6,5),(6,6)\}$ with sample space size $|S|=36$. Our event corresponds to the subset $\{(4,6),(5,5),(5,6),(6,4),(6,5),(6,6)\}$, so our probability $Pr(\text{two dice rolled sum to at least 10}) = Pr(E) = \frac{|E|}{|S|}=\frac{6}{36}=\frac{1}{6}$.
Note: If we had incorrectly used the sample space $\{2,3,\dots,12\}$, and used the formula, it would have given us an answer of $\frac{3}{11}$ which is incorrect! Event size divided by sample space size is only usable if the sample space is unbiased!
For a smaller example, let our experiment be rolling a single die, and our outcome we are curious to find the probability of be "is even." Our sample space is $\{1,2,3,4,5,6\}$ and our event is $\{2,4,6\}$. The probability is then $Pr(\text{is even}) = \frac{|E|}{|S|}=\frac{3}{6}=\frac{1}{2}$.
Without going into too much detail about expected value, we define the expected value of a discrete random variable $X$ as
$$\mathbb{E}[X]=\sum_{x\in S} x Pr(X=x)$$
This gives us a way of talking about the "average" outcome. For example, in rolling one fair six-sided die, the expected value of the outcome will be $3.5$. Of course, that doesn't mean that we will ever roll a $3.5$ exactly (it isn't even a side on the die), but it means that averaging out the results over "several" attempts, the average will approach $3.5$. In the case of either "success" or "failure" it is easier as we can represent a success as a $1$ and a failure as a $0$ in terms of value.
If we were to run an experiment with probability of success $p$ a total of $n$ times, you will find that the expected number of successes is $np$ (provable from definitions).
In this sense, yes indeed, a probability of $Pr(\text{rolling a six})=\frac{1}{6}$ implies that we expect that in six rolls, one $6$ will occur on average. It also implies that in one hundred rolls, $16.\overline{6}$ sixes will occur on average. Herein lies the usefulness of referring to things as a "percentage" (per: for each, cent: hundred). An event having probability $Pr(E)=72\%$ implies that if we were to run the experiment $100$ times, we expect a success $72$ of those times. It is not so much encoded in the number itself, as it is within the theory that these numbers can be interpreted in this way.
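To see this numerically, here is a quick simulation (mine, not part of the original answer; the number of rolls is arbitrary), written in R:
set.seed(1)
rolls <- sample(1:6, size = 60000, replace = TRUE)  # 60,000 fair die rolls
mean(rolls == 6)  # relative frequency of sixes, close to 1/6 (about 0.1667)
sum(rolls == 6)   # count of sixes, close to n*p = 60000*(1/6) = 10000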
-
The (long) answers are correct, but the OP wants a shorter one, so I will try.
When you roll a die, there are 6 possible things that could happen. If you assume the die is fair (not weighted, no rounded corners) then each of the six possibilities is equally likely. If you are betting on a "5" then you win with just one out of the six possibilities. "one out of six" is the fraction 1/6. If you are betting that the number will be even then you win in three of the six cases, so the probability is 3/6, or 1/2.
I think your problem starts with an idea of fractions that's just a little too narrow. You don't always think of 4/7 as dividing a pizza into 7 pieces and taking 4 of them. You might be thinking about 70 people in a room - 40 women and 30 men. Then 4/7 of the people are women. If you pick someone at random the probability that you pick a woman is 4/7.
I hope those paragraphs help. But probability is subtle, so I will write a little more about two possible misunderstandings.
The "equally likely" is important. If you think about trying to roll a total of 3 with two dice you can't just list the possible totals (2 through 12) and say that the probability of a total of 3 is 1/11. When you roll the two dice there are $6 \times 6 = 36$ (equally likely) things that can happen (for example, 4 on the first die and 2 on the second die). Of those 36, just two give a total of 3, so the probability of a total of 3 is 2/36, or 1/18. (@JMoravitz discusses this example in detail in his answer.)
For a single roll of one die the assumption that the die is fair means that in the long run you will see a "5" (for example) about 1/6 of the time - just the fraction you get by counting the possibilities. Making sense of "about" and "in the long run" is a hard problem. It took mathematicians centuries to figure it out.
-
The classical definition of probability is
"The probability of an event is the ratio of the number of favorable outcomes to the number of possible outcomes".
If the number of trials (n) goes to infinity, the probability of event A is
$$P(A)=\lim_{n \to \infty}\frac{N(A)}{n}$$
$N(A)$ is the number of favorable outcomes. In your case, for instance, 5 of 7 people are female. Therefore $\frac{5}{7}$ is just the ratio of the number of females (F) to the combined number of females (F) and males (M):
$P(A)=\frac{F}{F+M}$.
And for a sample size of n the expected value of females is $$E(x)=P(A)\cdot n=\frac{5}{7}\cdot n$$
This is only the expected value. That means on average $\frac{5}{7}$ of the sample are women. It is not certain that the number of females is 15 ($=21\cdot \frac {5}{7}=3\cdot 5$) if the sample size is $21$; it only has the highest probability. The number of females can be anything from $0$ to $21$. But you can say that for each single drawing of a person the chance is $\frac{5}{7}$ that this person is female, if your sampling is with replacement.
-
Where does the fraction say that for every $6$ rolls, I'll get on average one roll I wanted to get.
I don't think it's helpful to focus on it being $6$ rolls exactly, or to think in terms of a single roll. In probability, we have the law of large numbers which, in this case, means that the more rolls there are, the closer the ratio of $1$s rolled to the total number of rolls will be to the number $\frac{1}{6}$.
In other words, if you make $600$ rolls of the dice, the number of $1$s you're going to roll is very likely to be very close to $100$. (As will the number of $2$s, $3$s, etc.) Now, this isn't all the $\frac{1}{6}$ probability means, but it's one of its more intuitive consequences.
-
I think it may help to understand what we are looking for when we ask for the probability of some event occurring. Given a large number of tests $N$, the probability of a particular event $E$ occurring should be some number $p$ such that $p\times N$ produces the expected number of events $E$. We are looking for the number of times a certain outcome will occur out of the total number of outcomes. When we have 100 events, probability asks, "How many are we interested in out of those 100 events?"
For example, when rolling a die, the probability of rolling a 1 is $\frac{1}{6}\approx16.67\%$. This means that if we roll the die $6$ times, we can expect to roll a 1 about $p\times N=\frac{1}{6}\times6=1$ time. Similarly, if we roll the die $300$ times, we can expect to roll a 1 about $p\times N=\frac{1}{6}\times300=50$ times.
The reason why the probability of rolling a 1 is $1$ out of $6$, or $\frac{1}{6}$, has to do with the definition of probability. Let's ask the same question as before. "How many events are we interested in out of the total number of possible events?" Well, there are $6$ possible outcomes (we can roll a 1, 2, 3, 4, 5 or 6), and we are looking for only $1$ of those outcomes (rolling a 1). Therefore, the probability of rolling a 1 is exactly $1$ (what we're interested in) out of $6$ (the total), or $\frac{1}{6}$. This is the same as saying, "For every 6 events, we can expect about 1 of them to be what we're looking for."
- | 2016-07-24 22:32:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8267928957939148, "perplexity": 169.49627016528626}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824185.14/warc/CC-MAIN-20160723071024-00084-ip-10-185-27-174.ec2.internal.warc.gz"} |
http://leancrew.com/all-this/2012/03/photo-annotation-setup-script/ | # Photo annotation setup script
I’ve been using OmniGraffle to annotate photos for years. I love the results, but there’s always been more fiddly setup than I’d like. I recently wrote a script to handle some of the fiddling.
The kind of annotation I do regularly is similar to what I did in this photo from the update to my broken bracket post.
I know lots of people use Skitch, and I’d be happy to use it, too, if my annotations were all for images in my blog or email. But most of my annotated photos go into my reports for work, and Skitch’s arrows and text are too cartoony for that. I could use both—OmniGraffle for formal work and Skitch for the blog—but I’d rather focus on just one system.
My workflow for annotating photos in OmniGraffle goes like this:
1. Import photo.
2. Resize photo so it fits on one page. OmniGraffle initially sizes the photo according to its X Resolution and Y Resolution meta data. The images from my Canon G10 are 4416 pixels wide and have an X Resolution of 180 pixels/per inch, so they start out over 24 inches wide in OmniGraffle.
3. Lock the imported photo in place so I don't accidentally move, resize, or delete it.
4. Annotate the photo with arrows, text, and whatever other callouts it needs.
5. Export the resulting image as a JPEG with the proper resolution. What is the proper resolution? Well, that depends on the original size of the photo (in pixels) and the size I shrunk it to in Step 2 (in inches). The math is just the division of one number by another, but I do have to look up (or remember) the two numbers first.
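To make the arithmetic concrete (my numbers, using the G10 example above): a 4416-pixel-wide photo shrunk to 6 inches needs an export resolution of 4416 ÷ 6 = 736 pixels/inch, which is 4416 ÷ (6 × 72) ≈ 10.2 in the pixels-per-point units the script actually sets.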
Step 4 has to be done by hand, but the others can be automated. Here’s the AppleScript I use.
1: tell application "Finder"
2: set theItem to the selection as alias
3: set thePath to POSIX path of (theItem as text)
4: end tell
5:
6: tell application "Image Events"
7: set theImage to open thePath
8: set imgSize to dimensions of theImage
9: end tell
10:
11: set w to 6 * 72
12: set h to w * (item 2 of imgSize) / (item 1 of imgSize)
13: set res to (item 1 of imgSize) / w
14:
15: tell application "OmniGraffle 5"
16: activate
17: tell canvas of front window
18: make new solid at beginning of graphics with properties¬
19: {size:{w, h}, origin:{1 * 72, 1 * 72}, fill:no fill,¬
20: draws shadow:false, draws stroke:false, locked:true,¬
21: image:thePath}
22: end tell
23: set resolution of current export settings to res
24: set area type of current export settings to all graphics
25: set export scale of current export settings to 1.0
26: end tell
When I call it, I have OmniGraffle open to a blank document and the image I want to annotate selected in the Finder. The script imports the image, resizes it to 6 inches wide, locks it, and sets the output scale and resolution for the exported JPEG.
Although I normally have OmniGraffle using inches as the measurement unit when I’m working with it by hand, it wants points for the width and height when they’re being set via AppleScript. That’s why Line 11 sets the variable w to 6 * 72 instead of just 6. Similarly, the output resolution in Line 13 has to be set in pixels/point rather than pixels/inch.
Right now I have this script set up to be called via FastScripts1 while I’m working in OmniGraffle. I suppose it could be turned into a Service with a little work, but I generally don’t think in terms of Services.
1. Speaking of FastScripts, Daniel Jalkut was on a recent episode of the Mac Power Users podcast where he talked up Skitch as a great app/service for marking up screenshots. Again, I don’t doubt that, but I’m sticking with OmniGraffle. | 2017-05-25 03:14:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22845523059368134, "perplexity": 3229.6115907176577}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607963.70/warc/CC-MAIN-20170525025250-20170525045250-00435.warc.gz"} |
https://documentation.aimms.com/functionreference/elementary-computational-operations/financial-functions/securities/securityperiodicdurationmodified.html | Function SecurityPeriodicDurationModified(SettlementDate, MaturityDate, ParValue, Redemption, Frequency, CouponRate, Yield, Basis)
# SecurityPeriodicDurationModified
The function SecurityPeriodicDurationModified returns the modified Macaulay duration of a security that pays interest at the end of each coupon period.
SecurityPeriodicDurationModified(
SettlementDate, ! (input) scalar string expression
MaturityDate, ! (input) scalar string expression
ParValue, ! (input) numerical expression
Redemption, ! (input) numerical expression
Frequency, ! (input) numerical expression
CouponRate, ! (input) numerical expression
Yield, ! (input) numerical expression
[Basis] ! (optional) numerical expression
)
## Arguments
SettlementDate
The date of settlement of the security. SettlementDate must be in date format.
MaturityDate
The date of maturity of the security. MaturityDate must also be in date format and must be a date after SettlementDate.
ParValue
The starting value of the security at issue date. ParValue must be a positive real number.
Redemption
The amount repaid for the security at the maturity date. Redemption must be a positive real number.
Frequency
The number of coupon payments in one year. Frequency must be 1 (annual), 2 (semi-annual) or 4 (quarterly).
CouponRate
The annual interest rate of the security as a percentage of the par value. CouponRate must be a nonnegative real number.
Yield
The yield of the security. Yield must be a nonnegative real number.
Basis
The day-count basis method to be used. The default is 1.
## Return Value
The function SecurityPeriodicDurationModified returns the modified Macaulay duration of a security that pays interest at the end of each coupon period.
## Equation
The modified duration $$D_{\textit{mod}}$$ is computed through the equation
$D_{\textit{mod}} = \frac{D}{1+\frac{r_y}{f}}$
where $$D$$ is the Macaulay duration, $$r_y$$ is the yield, and $$f$$ is the frequency (the number of coupon payments per year).
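For illustration only — this is a small Python sketch of the equation above, not the AIMMS implementation, and the example inputs are made up:

```python
def modified_duration(macaulay_duration, yield_rate, frequency):
    # D_mod = D / (1 + r_y / f), with the yield expressed as a fraction per year
    return macaulay_duration / (1.0 + yield_rate / frequency)

# Hypothetical example: D = 7.4 years, 6% yield, semi-annual coupons (f = 2)
print(modified_duration(7.4, 0.06, 2))  # ~= 7.18
```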
Note
• This function can be used in an objective function or constraint and the input parameters ParValue, Redemption, CouponRate, and Yield can be used as a variable.
• The function SecurityPeriodicDurationModified is similar to the Excel function MDURATION.
## See also

The function SecurityPeriodicDuration, the day-count basis methods, and the general equations for securities with multiple coupons.
https://www.nature.com/articles/s41467-022-32604-6 | ## Introduction
Scientific discoveries are an emergent phenomenon of the collective actions of individual scientists and the scientific communities they construct. The composition of these communities can be highly heterogeneous, and often exhibit pervasive inequalities. These inequalities can be social in terms of who makes up the scientific workforce1,2 and what resources they receive for their research3,4,5, or epistemic in terms of which ideas spread further and receive more attention6,7. Understanding the origins of these inequalities and their effects on the pace and direction of scientific discovery would better inform efforts to support innovation, broaden participation in science, and accelerate new discoveries8.
The pervasiveness of inequalities in science, in representation, prestige, attention, resources, etc., likely reflects the combined and heterogeneous effects of many processes, including competition, cumulative advantage, systemic bias, pipeline effects, and discrimination. For instance, in the academic job market, faculty hiring committees tend to hire the graduates of prestigious doctoral programs1,9, which may allow scientists at a small group of elite institutions to effectively set the research agenda of the entire field. Scientists at elite institutions also receive disproportionately more funding than those at less prestigious institutions, which may enable greater scientific activity, larger doctoral training programs, and institutionalized hierarchy10. And, an elite affiliation provides a measurable advantage in peer review, which may play a role in the research of elite scientists being far more likely to appear in high-impact publication venues, compared to that of early-career or non-elite scientists11.
Biases related to gender, race, ethnicity, geography, language, and prestige are known to drive differences in scientific output and impact. A number of recent studies have shown that inequality in social networks and collaborations may relate to gender disparity and affect career outcomes for women12,13,14,15,16,17,18, particularly in science, technology, engineering, and mathematics (STEM) fields19,20,21,22. Moreover, women tend to receive less funding23, publish fewer papers, are more isolated in collaborations, and are often overlooked in favor of male collaborators24. As a result, it remains unclear the degree to which differences in individual scientific activity reflect genuine differences in scientific merit or biases caused by various non-meritocratic processes.
At its base, science is composed of networks of social interactions25,26,27,28. These interactions mediate most scientific activities, including scientific training, hiring, collaboration, teaching, citation, peer review, and debate. Hence, a scientist’s social relationships with other scientists may represent a form of persistent social capital that can be accumulated, used, and possibly transferred among scientists29,30,31. For instance, some evidence indicates that a single extremely strong connection to another scientist is sufficient to increase the productivity and career sustainability of individual researchers32. Via collaboration, networks correlate with the unequal provision of "scientific and technological human capital” across researchers20, shape the academic career of researchers33, and can conceal underlying inequalities in formal evaluations like tenure34. Even common but unadjusted measures of scientific productivity and impact, such as the number of papers a scientist publishes or the number of citations a paper receives, depend on networks, because discoveries are always situated within a broader, evolving conversation among scientists35,36,37,38.
In the sociology of science, there are both many measures of scholarly output and a wide variety of normalization schemes intended to help distill a collaborative publication record into individual-level contributions8. For instance, authorship may be fractionalized by the number of coauthors on a given paper39,40, or a paper’s citation count may be normalized by the impact factor of the venue in which it appeared41. Each measure sheds its own light on social and epistemic inequalities in science, and each normalization scheme comes with assumptions, with potentially uncertain external validity42. In this study, our analysis follows the long tradition in the sociology of science43,44 of using simple measures of scholarly productivity and prominence, which count the number of papers published by an individual and the number of publications in high-impact venues. This approach presents both advantages and limitations, which we discuss below, but is central to our analysis of network effects.
By mediating scientific attention, evaluation, and collaboration, social networks play a fundamental role both in shaping what scientific discoveries are made and what impact they have, and in shaping pervasive social and epistemic inequalities in science. Untangling the effects of these interactions would shed substantial light on the mechanisms that underlie scientific discovery, and may offer new solutions for making the scientific community more inclusive and innovative. For instance, is it more important for an early career scientist to have a prominent mentor or to train in an elite program? How does who a scientist knows shape what questions they study or what discoveries they make? How much are gender differences in productivity and prominence caused by gendered differences in collaboration networks?12 And, how much of a scientist’s productivity and prominence is explained by that of their collaborators?45 These questions cannot be clearly answered without considering the effects of social networks in science.
Here, we untangle the network effects of collaborations on the productivity and prominence of individual scientists by developing two network models. Applied to large-scale scientific publication and collaboration data, these models allow us to quantify the network’s effect on driving certain widespread and persistent inequalities across individual researchers. Using these models, we investigate the degree to which gendered collaboration patterns explain gendered differences in productivity—measured by the number of first- or last- authored publications—and prominence—measured by the number of high-impact publications that received the upper 8th percentile of citations as measured 2 years after publication for a given year and field, how network effects vary with institutional prestige, and the degree to which collaboration networks operate as a kind of moderately transferable form of social capital, by which successful senior scientists improve the long-term trajectory of their junior collaborators. Although the selected metrics of productivity and prominence are broadly mentioned and discussed in the scientific community, they should be used with considerations as they do not necessarily imply scientific utility46,47.
## Results
We begin by extracting pairs of coauthors defined across 20.0 million research articles in the Microsoft Academic Graph (MAG) database since 195048,49, across six STEM fields: biology, chemistry, computer science, mathematics, medicine, and physics. To better isolate the most important network connections, we focus on the coauthorship links defined by the first and last authors of each paper. Subsetting to only the first-last author pairs connections eliminates the network effects on productivity and prominence caused by variations in the number of coauthors per paper, middle-author contributions of all types, trends over time and across fields in team sizes, and other related confounds. This selection preserves and focuses our analysis on the most important collaboration links according to common coauthorship norms in STEM fields, e.g., traditional mentor-mentee relationships, where the junior scholar is typically the first author and their senior colleague is the last author.
The nature of coauthorship in scientific publications tends to confound direct measures of the productivity and prominence of individual scientists. Highly productive scientists tend to have many collaborators, often including each other, and the productivity of these individuals tends to lift the productivities of others by virtue of those collaborations. In the same way, highly cited scientists tend to increase the prominence of their collaborators, and often, the same collaborators are both highly productive and highly cited. Bibliometric normalization schemes, such as fractional authorship, can be viewed as paper-level adjustments for these network effects of collaboration.
However, untangling the network effects of collaborations over a scientific career to estimate each individual’s contributions within the interdependent context of coauthorship networks requires a generative network model. Here, we introduce two such models that can control for these collaboration network effects and allow us to quantify the latent productivity and prominence of individual researchers, and their relationship with social and epistemic inequalities in scientific careers.
We model the production of publications by a pair of coauthors as a stochastic outcome of their joint efforts, governed by a linear combination of their individual latent productivity parameters (Fig. 1a). Mathematically, the number of coauthored publications is the output of a pairwise Poisson process, parameterized by the sum of the latent individual productivities λi and λj for coauthor pair (i,j). Hence, the model parameter λi gives the expected number of publications per year for author i, and for an author pair (i,j), their joint productivity is a random variable of the form
$$P(N_{ij}, t_{ij} \mid \lambda_i, \lambda_j) = \frac{e^{-(\lambda_i + \lambda_j) t_{ij}} \, [(\lambda_i + \lambda_j) t_{ij}]^{N_{ij}}}{N_{ij}!},$$
(1)
where Nij is the observed number of papers coauthored by authors i and j over a total collaboration time period tij (see Methods).
Similarly, we model prominence, defined as the number of high-impact publications, as a joint function of individual latent parameters (Fig. 1a). Mathematically, researcher prominence is modeled by a Binomial distribution, parameterized by the sum of the latent individual prominences θi and θj of the coauthor pair (i,j). Hence, the model parameter θi gives the expected fraction of publications with i as an author that will be highly cited, and for an author pair (i,j), their joint prominence is a random variable of the form
$$P(N_{ij}, m_{ij} \mid \theta_i, \theta_j) = \binom{N_{ij}}{m_{ij}} (\theta_i + \theta_j)^{m_{ij}} [1 - (\theta_i + \theta_j)]^{N_{ij} - m_{ij}},$$
(2)
where mij is the observed number of highly cited papers coauthored by authors i and j over a total collaboration time period tij (see Methods). We note that both models assume conditional independence across publications, which may obscure some interesting temporal effects50. Applying these joint productivity and prominence models to all pairs of coauthors in a collaboration network yields joint likelihood functions whose independent maximization yields a set of individual productivity and prominence parameters that effectively control for the network effects of coauthorship on the variables of interest
$$L(\boldsymbol{\lambda}) = \sum_{i \ne j} \log P(N_{ij}, t_{ij} \mid \lambda_i, \lambda_j) \qquad L(\boldsymbol{\theta}) = \sum_{i \ne j} \log P(N_{ij}, m_{ij} \mid \theta_i, \theta_j).$$
(3)
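For concreteness, here is a minimal Python sketch of these pair-level likelihood terms and the summed objectives; it simply transcribes Eqs. (1)–(3) (it is not the authors' code), and the author indices, counts, and durations are placeholders:

```python
import numpy as np
from scipy.stats import poisson, binom

def productivity_loglik(lam_i, lam_j, N_ij, t_ij):
    # Eq. (1): joint papers N_ij over t_ij years ~ Poisson((lambda_i + lambda_j) * t_ij)
    return poisson.logpmf(N_ij, (lam_i + lam_j) * t_ij)

def prominence_loglik(theta_i, theta_j, N_ij, m_ij):
    # Eq. (2): m_ij highly cited papers out of N_ij ~ Binomial(N_ij, theta_i + theta_j),
    # which implicitly assumes theta_i + theta_j <= 1
    return binom.logpmf(m_ij, N_ij, theta_i + theta_j)

def total_loglik(lam, theta, pairs):
    # Eq. (3): sum the pairwise terms over all first/last-author coauthor pairs;
    # each pair is a tuple (i, j, N_ij, m_ij, t_ij) of author indices and counts
    L_lambda = sum(productivity_loglik(lam[i], lam[j], N, t) for i, j, N, m, t in pairs)
    L_theta = sum(prominence_loglik(theta[i], theta[j], N, m) for i, j, N, m, t in pairs)
    return L_lambda, L_theta
```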
Applied to our full dataset of 198,202 mid-career researchers across six STEM fields, defined as researchers with at least 15 years of scholarly publishing activity (see Supplementary Information), we find compelling evidence that these latent parameter models yield a useful individual decomposition of the observed joint productivities and prominences of collaborating scientists (Fig. 1b and Supplementary Fig. 3 for individual fields). Examining the marginal distributions, we find that the latent productivity and prominence variables are nearly orthogonal (Pearson's r = 0.09, p < $$10^{-3}$$), with λ following a Normal distribution and θ following a heavy-tailed distribution. That is, controlling for network effects, we find that individual productivity of mid-career researchers is low variance and concentrated around a central tendency of μλ = 0.39 first/last-authored papers per year (standard deviation σλ = 0.15), with only the top 0.02% of researchers exhibiting a latent productivity of $$\hat{\lambda} > 2$$ first/last-authored papers per year.
In contrast, controlling for network effects, individual prominence is highly variable, with an average prominence of μθ = 0.04 (on average, for publications written by two authors, 1 out of 12.5 will be highly cited), but a standard deviation twice as large (σθ = 0.08). That is, a large majority of researchers have low individual prominence, while a minority generate a long tail of much greater impact, much like measures of popularity and wealth in other complex social systems51. Furthermore, both of these estimated parameters have low correlation with a researcher’s career-wise raw productivity, with the Pearson correlation coefficients rλ,N = 0.21 and rθ,N = −0.02. This implies that after controlling for the network effects of collaboration, the latent parameters could indicate the productivity and prominence of individual researchers in a given unit time period. As a technical aside, we note that parameter estimates for these models are more stable for researchers with at least 10 papers, and appear to underestimate latent productivity λ and overestimate prominence θ for less productive authors (Supplementary Fig. 5). The distribution of θ does not qualitatively change when we alter the threshold of highly-cited papers (Supplementary Fig. 6).
If the estimated individual productivity and prominence parameters λ and θ are genuinely measuring individual-level characteristics, controlling for network effects from collaboration, then they should only loosely correlate with their corresponding network-confounded measures of raw productivity and raw prominence. We evaluate the efficacy of these two measures by characterizing their correlation with other “unadjusted” measures and time-related dynamics for individual researchers. We first select a cohort of minimally productive mid-career researchers who have published at least 10 papers by their 15th year, and tabulate a correlation matrix of estimated individual parameters and observed scholarly statistics, based on their publications through their mid-career (Fig. 1c). We define a researcher to be “high λ” or “high θ” if their individual estimated parameter is in the upper 10th percentile of same-field researchers for a given year. And, we define a high λ or θ coauthor as a collaborator who is themselves a high λ or θ author and has published at least three papers by the year of relevant collaboration. This correlation analysis reveals that a researcher’s individual λ and θ values correlate only moderately with their “unadjusted” productivity and prominence (λ with papers, Pearson’s r = 0.21; θ with citations, Pearson’s r = 0.36), indicating that the model parameters are capturing behavior above and beyond what the unadjusted counts provide. And, we find strong evidence of the network effects of collaborations in driving the observed productivity and prominence of individual researchers, because the number of high λ and high θ coauthors correlates more strongly with individual productivity and prominence (papers vs. high λ coauthors, Pearson’s r = 0.70; citations vs. high θ coauthors, Pearson’s r = 0.49) than do the individual’s own model parameters. Hence, these network models can shed new light on the substantial but often hidden role that social networks can play in determining individual career metrics.
Similarly, if the estimated individual latent parameters are measuring a researcher's underlying characteristics, they should remain relatively stable over an individual's career path, even as their collaboration network evolves. Compared to a fully randomized null model, we find that high λ or high θ researchers are more likely to remain in the same percentile group after 10 years (see Supplementary Information and Supplementary Figs. 7–9). Furthermore, researchers with high latent parameter values in their early-career (first 5 years of publishing) are also more likely by their mid-career to be in the upper 5th percentile of citations among researchers who publish in a given field in a given year. And, this pattern holds when we repeat the analyses in matched-pair experiments, in which we match researchers on their institutional prestige, productivity, and prominence in their early-career (Supplementary Fig. 10, Supplementary Tables 1–4). These results indicate that an individual researcher's estimated model parameters for productivity and prominence are relatively stable over a career, suggesting that they are capturing underlying scholarly behavior independent of changes in collaboration patterns over time, as intended.
In agreement with past studies, we find gendered inequalities in observed measures of both career-wise productivity (Fig. 2a) and prominence (Fig. 2d) among mid-career STEM researchers, in which men both publish more papers and receive more citations than women22,52,53. On average, men in these fields publish a total of 20.3 papers by the time they reach their mid-career (first 15 years) compared to 18.3 papers by women (t-test, t = 24.5, p < 0.001, Cohen’s d = 0.15 ± 0.01), and, on average, men’s past publications receive 346.0 total citations compared to 330.1 citations for women’s (t-test, t = 4.9, p < 0.001, Cohen’s d = 0.03 ± 0.01). In other words, men’s average total productivity is 11.0% greater and they receive 5.0% more citations than women by mid-career, and these disparities are stable over time. For researchers with at least three publications in the first 5 years of their publishing career, i.e., in their early career, the probability of persisting until mid-career is 20.6% for men but only 15.7% for women, in agreement with the well-known higher drop-out rate for early-career female scientists53. Despite these differences in observed scholarly metrics, controlling for collaboration via our network models reveals a different pattern: across fields, the average mid-career latent productivity parameter is $$\hat{\lambda }=0.39$$ for both men and women (t-test, t = 0.7, p = 0.51, Cohen’s d < 0.01), and the average mid-career latent prominence parameter $$\hat{\theta }=0.044$$ for men and 0.045 for women (t-test, t = 0.82, p = 0.41, Cohen’s d < 0.01). That is, men and women exhibit statistically indistinguishable individual latent productivities and latent prominences, implying that the differences in observed scholarly metrics are likely caused by gendered differences in the structure and composition of researcher collaboration networks (Fig. 2b, e).
Furthermore, we find that the gendered gaps for mid-career researchers can be largely explained by variation in the number of direct coauthors in their collaboration networks. Matching women and men researchers by institutional prestige, year of first publication, and field, we still find a gendered disparity in which women’s productivity and prominence is lower relative to matched men (Fig. 2c, f). However, additionally matching on the number of coauthors largely eliminates these gendered disparities in both productivity (10.5%, t-test, t = 24.5, p < 0.001, Cohen’s d = 0.15 ± 0.01 vs. 0.7%, t-test, t = 1.3, p = 0.20, Cohen’s d = 0.01 ± 0.01) and prominence (12.8%, t-test, t = 4.9, p < 0.001, Cohen’s d = 0.03 ± 0.01 vs. 2.3%, t-test, t = 2.0, p = 0.04, Cohen’s d = 0.02 ± 0.01). Hence, we find substantial evidence that the well-known gendered productivity and prominence inequalities among women and men researchers can be largely explained as a network effect, in which the composition and size of local collaboration networks differ between men and women, and these differences lead to the observed differences in scholarly metrics, rather than any inherent difference in the researchers themselves. We note that this analysis does not establish a causal relationship, and hence known causal factors, such as the gendered impact of parenthood on researchers that leads to productivity penalty for mothers as they undertake more childcare duties54, likely influence both productivity and collaboration networks. We also test the robustness of our findings by selecting mid-career researchers with at least 20 publications (Supplementary Fig. 13) and repeating the analysis by randomly sampling a tertile of researchers (Supplementary Fig. 14), showing that these different choices do not change the qualitative nature of our conclusions. Overall, these results suggest that collaboration networks can be viewed as a form of social capital that is distributed in unequal and gendered ways in STEM, which mediates or shapes the amount of scholarly contributions and their visibility.
If a researcher’s collaboration network acts like a form of social capital, we should expect key dynamics of social capital apply in collaboration networks as well. For instance, an author’s collaboration network capital should be “transferrable” to some degree between researchers. For example, collaboration by an early-career researcher with a high λ or high θ senior coauthor should enhance the junior researcher’s productivity or prominence in a way that persists into their own mid-career, compared to similar researchers without such a collaboration. For this analysis of junior-senior collaborations, we select pairs in which, at the time of collaboration, the early-career researcher is 5 or fewer years since their first publication, and the senior coauthor is 6 or more years since their first publication. Because the model estimates of individual latent parameters are more accurate for researchers with more papers, we restrict our analysis here to early-career coauthors and their senior coauthors that have at least three papers by the time of collaboration.
We find that early-career researchers are significantly more likely to collaborate with high λ or θ senior researchers if they are based at elite institutions, which we define as research institutions whose authoritative ranking is among the top 10 in a given field (see Methods), indicating that the composition of collaboration networks itself varies with environmental prestige55. This may be largely due to a selection effect that high λ or θ senior researchers are more likely to work at elite institutions, reflecting inequalities of having access to important social networks among early-career researchers. In particular, at pairwise coauthorships, the probability that an early-career researcher collaborates with a high λ (productivity) senior researcher is 0.177 at elite institutions vs. 0.145 at non-elite institutions (t-test, t = 19.3, p < 0.001, Cohen’s d = 0.09 ± 0.01), and the probability of collaborating with a high θ (prominence) senior researcher is 0.141 at elite institutions vs. 0.067 at non-elite institutions (t-test, t = 50.2, p < 0.001, Cohen’s d = 0.28 ± 0.01).
However, regardless of the institution, researchers who collaborated with high λ or high θ senior coauthors early in their career are significantly more likely to themselves be a highly prominent researcher in their mid-career, who have accrued the upper 5th percentile of citations among all active researchers in a given year and field (Fig. 3a, c). In particular, collaborating with at least one high λ senior coauthor in the first 5 years of a researcher’s career increases the probability of subsequently being a highly prominent researcher in the 15th career year from 16.2 to 29.5% (t-test, t = 65.0, p < 0.001, Cohen’s d = 0.34 ± 0.01; Fig. 3a). And, a high θ senior coauthor doubles that mid-career probability from 16.3 to 39.8% (t-test, t = 81.6, p < 0.001, Cohen’s d = 0.61 ± 0.01; Fig. 3c). For both types of collaboration patterns, junior researchers from elite institutions exhibit higher productivity and prominence in the mid-career than do peers at less prestigious institutions—a disparity that reflects the value of prestigious environments55. This institution-based gap is larger for early-career researchers that have collaborated with high θ coauthors than with high λ coauthors.
However, the early-career benefits of a high λ or high θ senior coauthor appear to decrease modestly with that coauthor’s career age (Fig. 3b, d). This finding contrasts with past studies of scientific mentorship56,57, which have typically relied on unadjusted citation counts that are naturally larger for more senior collaborators and which represent a stronger confounding network effect. By correcting for the network effect of collaboration, we find instead that the benefits of collaborating with highly productive or highly prominent senior coauthors do not increase with coauthor seniority. Rather, they decrease with career age of the senior coauthor, and decrease more for high λ coauthors, suggesting that the transfer of social capital from senior to junior researchers through collaboration is more effective earlier in the career of senior coauthors. We also test the robustness of our results by selecting senior collaborators with at least six publications and at least ten publishing career years by the time of relevant collaboration, (Supplementary Fig. 15), and we find that the different thresholds do not qualitatively change our findings.
Finally, we consider the impact of environmental prestige on latent productivity and prominence of mid-career researchers. Past work has shown that working at a more prestigious institution drives greater productivity and prominence among early-career researchers55. However, as with past work on the impact of mentorship, such insights were derived from scholarly measures that did not control for the network effects of collaboration, which increase as a career progresses. Across six STEM fields, researchers in our dataset affiliated with elite institutions on average publish a total of 21.8 papers up to their mid-career (first 15 years), which is 8.5% greater than the 20.1 for researchers at non-elite institutions (t-test, t = 11.5, p < 0.001, Cohen’s d = 0.11 ± 0.02, Fig. 4a). And, over the same career time, researchers at elite institutions receive on average 493.7 citations, which is 62.1% greater than the 304.5 citations received by researchers at non-elite institutions (t-test, t = 27.8, p < 0.001, Cohen’s d = 0.38 ± 0.02, Fig. 4d). Hence, in unadjusted scholarly metrics, researchers at elite institutions have marginally higher productivity and a substantially higher impact.
We find that these productivity and prominence advantages for researchers working in prestigious environments also appear in our estimated individual latent parameters. Researchers at elite institutions, on average, also exhibit a marginally greater latent productivity than those at non-elite institutions (λ = 0.394 vs. 0.387; 1.8% greater; t-test, t = 6.0, p < 0.001, Cohen’s d = 0.05 ± 0.02, Fig. 4b). And, these same researchers, on average, exhibit nearly double the latent prominence of researchers at non-elite institutions (θ = 0.071 vs. 0.037; 91.9% greater; t-test, t = 36.7, p < 0.001, Cohen’s d = 0.43 ± 0.02, Fig. 4d). Hence, controlling for the network effects of collaboration, we find smaller but still significant advantages in productivity but even larger advantages in prominence for researchers working at elite institutions, compared with raw scholarly metrics. The persistence of the advantages of elite environments after controlling for network effects suggests that other factors likely drive these differences55, e.g., differences in resources, the size of collaboration networks, or selection effects that apply primarily to mid-career researchers. In addition, we find that the results do not qualitatively change when we modify the number of selected elite institutions to the top 20 (Supplementary Fig. 16).
Some of this prestige advantage can be explained by differences in the composition of a mid-career researcher’s collaboration networks. Matching researchers in our sample by field and year of first publication, we find that researchers at non-elite institutions are only 6.8% less productive than those at elite institutions (Fig. 4c). However, further matching on variables that quantify the composition of a researcher’s collaboration network, and in particular, the number of coauthors, number of high λ coauthors, and number of high θ coauthors, we find that researchers at non-elite institutions are 2.8% more productive than those at elite institutions (t-test, t = 3.1, p < 0.01, Cohen’s d = 0.04 ± 0.02). These network effects are even stronger for the prominence of individual researchers. Matching researchers by field and year of first publication, researchers at non-elite institutions receive 39.9% fewer citations than those at elite institutions, while further matching on collaboration network variables shrinks this gap to only 19.9%. Hence, in contrast to gendered differences (Fig. 2), we find that the inequalities in productivity and prominence associated with environmental prestige cannot be explained entirely by differences in the structure of collaboration networks, suggesting that additional prestige-related variables play an important role in driving the greater scholarly impact of researchers at elite institutions.
In addition, we test the interaction effects of gender and institutional prestige on the performance of mid-career researchers. We find that the prestige of institutions has a relatively stronger effect on researchers’ productivity and prominence than gender, for both unadjusted measures and latent parameters (see Supplementary Fig. 12). In particular, both gender and institutional prestige have negligible effects on latent productivity λ, while institutions appear to have stronger influence than gender on latent prominence θ. The observation that prestige does not appear to drive latent productivity λ is supported by other recent studies, which show how the greater productivity of faculty at prestigious departments can be largely explained by a collaboration network effect: elite departments provide more available funded research labor, who then coauthor papers with the faculty members in their departments58.
## Discussion
By mediating scientific attention, evaluation, and collaboration, social networks play a fundamental role in shaping both the advancement of science and the pervasive social and epistemic inequalities that appear in most scientific communities. However, analyses of scholarly metrics associated with productivity and prominence, based on counts of publications and citations, even when normalized in some way, as in the case of fractional authorship or adjusting for journal impact factors, tend to be confounded by network effects that operate above the level of individual publications. Such network effects make it difficult to gain insight into the causes and consequences of these inequalities, particularly across the span of a scientific career. Here, we introduced two scholar-level generative network models that allow us to estimate parameters that represent individual researcher productivity and prominence, while controlling for the effects of collaborations with more or less productive or prominent collaborators over time (and those collaborators’ collaborations, etc). We then applied these models to a large dataset of 198,202 mid-career researchers and all of their first-last author collaborations across 70 years of time and six fields in STEM to investigate the effect of collaboration networks.
We find that the observed gendered gap in productivity and prominence can be largely explained by differences in social networks. We also show that social networks can behave like a form of social capital, with boosting effects on junior researchers that decay as their senior collaborators age. After controlling for network effects, our adjusted productivity and prominence parameters can explain a significant proportion, but not all, of the scholarly disparity related to environmental prestige. These results have implications for gendered and institutional differences in scholarship, which we discuss further in the following paragraphs.
Our estimated latent parameters reveal that women researchers who persist until mid-career (15 years since first publication) exhibit equal productivity and prominence to persisting men (Fig. 2). This finding suggests that the well-known gendered difference in “unadjusted” scholarly metrics like number of papers (productivity) and total citation counts (impact) can be explained by gendered differences in coauthorship networks. Although this result does not imply causality, it does indicate that known causal factors like the gendered impact of parenthood on researchers54, likely also shape collaboration networks. By providing new individual parameters after adjusting network effects, our findings highlight the importance of social networks in shaping scholarly gender differences among mid-career researchers, which contributes to the abundant literature on potential causes and effects of gender disparity in science, including academic culture19 and homophily15,16. More research is needed to identify the likely multiple reasons that women on average have fewer coauthors than men, and the degree to which those reasons relate to scholarly factors, preferences, or non-meritocratic factors.
These results also suggest that collaboration networks can be viewed as a form of social capital that is distributed in unequal and gendered ways in STEM. In this way, collaboration networks may serve as a common mediating variable for other social and epistemic inequalities, which may then drive differences in the amount or visibility of scholarly contributions, or other factors associated with scientific discovery. Efforts specifically aimed at expanding and supporting the collaboration networks of women researchers, e.g., formal support and advocacy organizations, women-in-science meetings, and fellowships for women that support intensive new collaborations, seem likely to help mitigate these gendered gaps in scholarly metrics, and to broadly support scientific discovery.
Supporting the view that collaboration networks act like a form of social capital, we find that early-career collaborations with elite senior researchers, as identified via their high latent parameters λ or θ, seem to raise the latent productivity and prominence of their junior coauthors, which supports the long-term development of their academic careers (Fig. 3). This effect appears regardless of the prestige of the affiliated institution, but is amplified in prestigious environments, which measurably catalyze the formation of collaboration ties with elite researchers. However, the boosting effect that early-career collaborations with elite senior coauthors have on mid-career productivity and prominence gradually declines as senior coauthors age, regardless of the senior authors’ latent parameter values. Further research is needed to understand the causal mechanisms through which these senior collaborations produce lasting influence on the productivity and prominence of early-career researchers, whether these effects are gendered, and what causes the age-related effect.
Many possibilities are plausible. The effect could reflect epistemic ossification, in which older scientists become progressively less well-connected to the dynamic core of their field. It could also reflect social saturation, in which the capacity of senior scientists' collaborators to form new collaborations with junior colleagues is gradually depleted. A particularly plausible possibility is that the effects are driven by prestige-correlated selection and social stratification. For instance, elite senior researchers are more likely to be based at prestigious, research-intensive institutions, and hence are more available to collaborate with students intent on pursuing academic research careers, who have enhanced prospects to do so, as a result of their prestigious pedigree. By the same token, talented students at a less prestigious institution will have fewer available elite researchers to collaborate with, and hence have lower access to the kinds of social capital that facilitate a successful early research career. Or, the advantage of mid-career researchers at elite institutions in their productivity and prominence may reflect the stratification of research resources, e.g., funding, research group size, computational or experimental facilities, etc., and early collaborations with elite senior researchers simply increase the likelihood of ultimately working at such an institution. Identifying the underlying causes of the long-term effects of these collaborations is an important direction of future research, with specific implications for efforts to mitigate social and epistemic inequalities in science.
Overall, our findings shed considerable new light on the fundamental role of collaboration networks in shaping scientific careers and mediating scholarly inequalities. Our results suggest that collaboration networks embody a form of unequally distributed social capital, which influences who makes what scientific and technological discoveries. In particular, collaboration network effects can explain both the persistent gendered inequalities among mid-career researchers in productivity and prominence, and a considerable portion of the observed inequalities between researchers working in more or less elite environments. While these results are not causal, they do suggest that a more detailed understanding of the factors that influence the size and composition of researcher collaboration networks is likely to bring us closer to a causal understanding of many social and epistemic inequalities in science. Collaboration networks may also play an important role in the domain of research and development efforts, particularly in the form of patent collaborations59. Studies focusing on cross-disciplinary effects thus are likely to shed further light on the dynamics and influence of social capital in scientific discovery, and the role of collaboration networks in shaping individual research careers.
There are several limitations to our analyses. By focusing only on first and last author collaborations, we neglect all collaborations with middle coauthors, regardless of the kind or size of their contributions. This categorical selection mitigates the confounding network effects of large author lists, but also neglects the value and influence of team science. Among the six STEM fields studied here, a common norm is that research tasks like data analysis, experiments, and visualization are performed by the first author, while the last author commonly plays the more supervisory role of research design, manuscript writing, and funding support. The specific and varied roles of and interactions with middle authors are omitted in order to simplify the model framework. Elaborating our modeling framework to incorporate the effects of middle-author collaborations, perhaps labeled using an author contribution taxonomy, may reveal additional nuance or secondary effects of interest. In addition, in order to produce reliable estimates of latent parameters, researchers with only a small number of collaborations were dropped from our analysis, which limits our insights to relatively productive mid-career researchers. Hence, we can say little about the degree to which our results hold for researchers with short track records. Our name-based gender classification used data from the US Social Security Administration, which is biased toward English names. Further studies that focus on gender disparity of other ethnic groups are needed to show if similar gendered network patterns persist. And, our analysis of environmental prestige used only a coarse dichotomous variable for elite or non-elite institution, which likely obscures the effects of gradations of prestige. Finally, our analyses depend on crude but easy-to-measure metrics of scholarly contributions, based on publication and citation counts, which can be useful in aggregate but should not be confused with measures of scientific utility.
Our results implicate a fundamental but complicated role for collaboration networks, and the kind of social capital they embody, in forming and perpetuating social and epistemic inequalities in the scientific processes of STEM fields. They also suggest that collaboration network effects could be leveraged to help mitigate some of those same inequalities, to better support scientific discovery and to broaden participation in science. For instance, targeted support of cross-institution, early-career collaborations with elite senior researchers, perhaps through specialized fellowships, may support the career advancement of promising young researchers who would otherwise leave research. Similarly, directly supporting the collaboration networks of women researchers may improve both retention and productivity, particularly at times when gendered impacts occur, e.g., at parenthood54. And, efforts to “correct” for collaboration network effects when evaluating candidates for faculty positions or applicants for funding is likely to help mitigate the multiple implicit biases that are known to favor elite-pedigree men researchers with prolific senior collaborators1,5,25. Network effects are a natural part of the social processes that underlie the scientific process, and are likely to be key components in any effort to mitigate social and epistemic biases, to make academia more meritocratic and less sensitive to the effects of cumulative advantages.
We note that our models are a general way to decompose observed data on repeated collaborative activities, such as technological inventions, business partnerships, and musical composition, into individual contributions. Applying similar models to other phenomena would be an interesting direction of future work, which may help illuminate individual differences and contributions to these group activities. As we have done in this paper, it can also shed new light on how those differences relate to other variables of interest and, in particular, the role of those differences in driving broader social inequalities.
## Methods
### Publication and citation data
We use the MAG dataset, containing journal articles and conference proceedings published between 1950 and 2019, inclusive. MAG provides a 5-level taxonomy of academic fields of study; the top level 0 divides all documents into 19 major fields. Among them, we select six scientific fields representative of the traditional science, technology, engineering and mathematics (STEM) domains: biology, chemistry, computer science, mathematics, medicine, and physics. These fields publish the majority of research papers in science and technology domains (see Supplementary Fig. 1). Following the publication norms in these fields, we include only journal articles in our analyses for all fields except computer science. For computer science, where conference proceedings are peer reviewed in the same way that journal articles are in other fields, we include both journal and conference articles.
Missing researcher affiliations are common in MAG, but difficult to impute. The MAG dataset includes 80.4 million papers that meet the above inclusion criteria. Among these, 36.0 million papers provide author affiliation information, and we consider only these in our analyses. These affiliations provide necessary information for assessing the environmental effects on coauthorship, career development, productivity, etc. for individual scientists. The reasons for missing affiliations for authors in MAG remain unclear.
Our analyses consider coauthorship only between first and last authors of each paper. In the six STEM fields we analyze, the first and last authorship positions are typically understood to denote the authors that made the greatest contributions to the research. There are circumstances where this norm does not apply, e.g., in specific subfields where authors are listed in the alphabetic order, or when there are multiple “first” or “last” authors due to equal contribution flags, as well as in some large collaborations. To account for this latter category, we exclude all papers with more than 10 listed authors. Applied together, our refined dataset contains 12.9 million unique authors and 20.0 million research articles. Our first-last author counting scheme eliminates the effects of large author lists and the relevance of fractional counting, at the expense of potentially under-counting contributions and effects of middle-authorship. Most authors are associated with very few publications, and our analyses focus on the mid-career trajectories of the 198,202 productive authors that published their first paper in 1975–2003 and have at least 10 publications in the 15th career year.
We define the highly cited papers to be those that receive the upper 8th percentile of citations among papers published in journals and computer science conferences, respectively, for a given year and level 0 field annotated in the MAG dataset. In MAG, a paper belongs to exactly one level 0 field but falls into several different fields at other fine-grained levels, making it difficult to operationalize the definition of highly-cited works based on these levels. The theoretical need to normalize citation counts at a fine-grained level only applies when authors are compared directly across such fields, and our models naturally account for such cross-field variability, since the model essentially estimates a researcher-specific parameter.
### Institutional prestige and elite institutions
For a specific discipline, we use the z-score of the number of total historical highly cited papers produced by each research institution to define its prestige score
$$p_i^{\mathrm{inst}} = \frac{N_i^{\mathrm{high}} - \langle N^{\mathrm{high}} \rangle}{\sigma / \sqrt{n^{\mathrm{inst}}}},$$
(4)
where $$N_i^{\mathrm{high}}$$ is the number of highly cited papers produced by institution i, $$\langle N^{\mathrm{high}} \rangle$$ is the average number of highly cited papers by all institutions, σ is the standard deviation of highly cited papers, and $$n^{\mathrm{inst}}$$ is the number of institutions. The institutional prestige score is discipline specific, but does not vary over time. We define the top 10 research institutions by this measure, within each field, to be elite institutions.
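A direct Python transcription of Eq. (4), assuming only a vector of per-institution counts of highly cited papers for one field (this mirrors the formula rather than any code used by the authors):

```python
import numpy as np

def prestige_scores(high_counts):
    # Eq. (4): z-score-style prestige, with the standard deviation scaled by sqrt(n_inst)
    counts = np.asarray(high_counts, dtype=float)
    n_inst = counts.size
    return (counts - counts.mean()) / (counts.std() / np.sqrt(n_inst))

# Elite institutions are then the top 10 by this score within each field.
```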
### Gender
We assign binary gender labels to authors according to a classifier based on U.S. Social Security Administration data, which records the historical gender associated with names of newborn babies in the United States of America60. Hence, our analysis of gender disparity is most applicable to researchers with origins in North America or other native English-speaking countries. Only first names that have at least 95% accuracy for a specific gender are retained for the matching. As such, we matched 126,805 productive authors for our analysis who published their first paper in 1975–2003, comprising 64.0% of all productive mid-career authors selected for our study, among whom 20.2% (25,666) are women.
### Latent variable estimation
For each network model and each field, we use all papers published within that field up to a given year, and estimate the latent parameter sets using convex optimization. We estimate yearly parameters with bootstrap-corrected pseudo-likelihood using 30 replications for every year from 1975 to 2017. For each year T, we construct the coauthorship network by using all publications from 1950 to T. In each round of bootstrap sampling, prior to estimating the network models, we prune all subgraphs of the coauthorship network that are trees, as model parameters become non-identifiable in such structures. Authors dropped as a result of pruning are assigned a latent variable of 0, and the final parameter estimates are the average values of all replications. In our analysis of patterns over time, authors receive latent parameter estimates in every year from their first appearance as either a first or last author until 2019.
Individual research latent parameters λ and θ are estimated using the convex optimization R package CVXR61. Within a given field, for each year from 1975 to 2017, we estimate the model parameters on a bootstrap of all papers published up to and including that year, using 30 replications. An individual researcher’s λ and θ parameters are recorded as the (bootstrap) average across replications.
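As an illustration only — the paper fits the models with bootstrapped estimates via CVXR in R, whereas this is a simplified Python analogue using a generic bounded optimizer; the pair list and author count below are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import poisson

def fit_lambdas(pairs, n_authors):
    """Maximize L(lambda) from Eq. (3) over all authors jointly.

    pairs : iterable of (i, j, N_ij, t_ij) for first/last-author coauthor pairs
    """
    def neg_loglik(lam):
        # negative summed log-likelihood; minimizing it maximizes L(lambda)
        return -sum(poisson.logpmf(N, (lam[i] + lam[j]) * t) for i, j, N, t in pairs)

    x0 = np.full(n_authors, 0.4)          # start near the reported mean latent productivity
    bounds = [(1e-6, None)] * n_authors   # latent rates must stay positive
    res = minimize(neg_loglik, x0, bounds=bounds, method="L-BFGS-B")
    return res.x                          # one estimated lambda per author
```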
In our network models, we assume that a pair of coauthors started their collaboration 1 year before they published their first paper together. Hence, the duration of collaboration for authors i and j is $$t_{ij} = \mathrm{Yr}_{ij}^{\text{last paper}} - \mathrm{Yr}_{ij}^{\text{first paper}} + 1$$.
We assess the bias induced from pruning authors in collaboration network trees by examining the differences in individual-level attributes such as institutional prestige and gender in the 2017 network for retained and dropped authors. There are 35.6% women authors in the retained population, while the proportion of women is 31.9% in the dropped population. And, the average institutional prestige score for retained authors is 5.51, and 3.41 for dropped authors, suggesting that authors in the tree subgraphs are usually from less prestigious institutions.
### Data manipulation and visualization
We used R package data.table version 1.14.0 for processing and manipulating publication and citation data62. All data visualization graphics in this study are made with the R package ggplot2 version 3.3.563.
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article. | 2022-10-03 21:34:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5356463193893433, "perplexity": 2434.1614024533687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00227.warc.gz"} |
https://socratic.org/questions/562220ea11ef6b7d835f158f | # Question #f158f
Dec 13, 2015
$f = 25 N$
#### Explanation:
Consider the following image, representing the situation
There is a rather easy method to solve this, but one must make a very simple assumption: that the two blocks move together under this acceleration, i.e., they have the same acceleration.
Under this assumption, the two blocks can be considered one composite (this is only true under the above assumption). Applying Newton's second law to the composite system:
$F = \left({m}_{1} + {m}_{2}\right) a$. Again, I cannot overstate how important this assumption is; without it, I could not have written that equation. Here $a$ is the acceleration of the system.
Therefore, with $F = 40 N$, ${m}_{1} = 5$ kg and ${m}_{2} = 3$ kg, we get
$a = 5 \frac{m}{s^2}$
But the only horizontal force on the $5$ kg block is the frictional force from the $3$ kg block, as is evident from the free-body diagram of the $5$ kg block. Therefore, for the $5$ kg block,
frictional force $f = {m}_{1} a = 5 \times 5 = 25 N$
Note: one could just as well have solved the problem by drawing free-body diagrams for both blocks, but that would be rather messy; taking advantage of the one assumption makes the problem much easier.
https://eprint.iacr.org/2017/138 | ### How (not) to Use Welch's T-test in Side-Channel Security Evaluations
François-Xavier Standaert
##### Abstract
The Test Vector Leakage Assessment (TVLA) methodology is a qualitative tool relying on Welch's T-test to assess the security of cryptographic implementations against side-channel attacks. Despite known limitations (e.g., risks of false negatives and positives), it is sometimes considered as a pass-fail test to determine whether such implementations are "safe" or not (without clear definition of what is "safe"). In this note, we clarify the limited quantitative meaning of this test when used as a standalone tool. For this purpose, we first show that the straightforward application of this approach to assess the security of a masked implementation is not sufficient. More precisely, we show that even in a simple (more precisely, univariate) case study that seems best suited for the TVLA methodology, detection (or lack thereof) with Welch's T-test can be totally disconnected from the actual security level of an implementation. For this purpose, we put forward the case of a realistic masking scheme that looks very safe from the TVLA point-of-view and is nevertheless easy to break. We then discuss this result in more general terms and argue that this limitation is shared by all "moment-based" security evaluations. We conclude the note positively, by describing how to use moment-based analyzes as a useful ingredient of side-channel security evaluations, to determine a "security order".
Available format(s)
Publication info
Published elsewhere. Proceedings of CARDIS 2018
Keywords
side-channel analysissecurity evaluations
Contact author(s)
fstandae @ uclouvain be
History
2018-10-15: revised
See all versions
Short URL
https://ia.cr/2017/138
CC BY
BibTeX
@misc{cryptoeprint:2017/138,
author = {François-Xavier Standaert},
title = {How (not) to Use Welch's T-test in Side-Channel Security Evaluations},
howpublished = {Cryptology ePrint Archive, Paper 2017/138},
year = {2017},
note = {\url{https://eprint.iacr.org/2017/138}},
url = {https://eprint.iacr.org/2017/138}
}
Note: In order to protect the privacy of readers, eprint.iacr.org does not use cookies or embedded third party content. | 2022-07-03 11:26:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3037031292915344, "perplexity": 3149.460059313413}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104240553.67/warc/CC-MAIN-20220703104037-20220703134037-00423.warc.gz"} |
http://mathoverflow.net/revisions/114161/list | If you have $y/(\log y)^{O(1)}\;$ integers, each with at most $(\log y)^{O(1)}\;$ bits, then you can find all the small prime factors of each integer in time $(\log y)^{O(1)}\;$ per integer. | 2013-05-22 08:02:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7145848274230957, "perplexity": 124.37504137288485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368701508530/warc/CC-MAIN-20130516105148-00097-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://gmatclub.com/forum/points-a-b-and-c-have-xy-coordinates-2-0-8-12-and-163408.html | Summer is Coming! Join the Game of Timers Competition to Win Epic Prizes. Registration is Open. Game starts Mon July 1st.
It is currently 23 Jul 2019, 04:36
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Points A, B, and, C have xy-coordinates (2,0), (8,12), and (
Author Message
TAGS:
Hide Tags
Intern
Joined: 14 Mar 2013
Posts: 44
Location: United States
GMAT Date: 12-03-2013
WE: General Management (Retail)
Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
19 Nov 2013, 03:19
1
3
00:00
Difficulty:
35% (medium)
Question Stats:
74% (02:08) correct 26% (02:01) wrong based on 189 sessions
HideShow timer Statistics
Points A, B, and, C have xy-coordinates (2,0), (8,12), and (14,0), respectively. Points X, Y, and Z have xy-coordinates (6,0), (8,4), and (10,0), respectively. What fraction of the area of triangle ABC is the area of triangle XYZ?
(A) 1/9
(B) 1/8
(C) 1/6
(D) 1/5
(E) 1/3
Manager
Joined: 25 Oct 2013
Posts: 143
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
19 Nov 2013, 03:33
If you notice, both triangles ABC and XYZ have a side on X axis. we can take these sides as bases for each triangle, therefore
Area of ABC is 1/2*12*12 (Height of ABC is the y coordinate of the third point (8,12))
similarly Area of XYZ is 1/2*4*4
dividing area of XYZ with that of ABC gives 1/9.
Hope it helps.
_________________
Click on Kudos if you liked the post!
Practice makes Perfect.
VP
Joined: 02 Jul 2012
Posts: 1153
Location: India
Concentration: Strategy
GMAT 1: 740 Q49 V42
GPA: 3.8
WE: Engineering (Energy and Utilities)
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
19 Nov 2013, 03:33
ABC is a triangle with base = 12 units and height = 12 units
Area = 0.5*base*height = 0.5*12*12 = 72
XYZ is a triangle with base = 4 units and height = 4 units
Area = 0.5*base*height = 0.5*4*4 = 8
Fraction = 8/72 = 1/9
_________________
Did you find this post helpful?... Please let me know through the Kudos button.
Thanks To The Almighty - My GMAT Debrief
GMAT Reading Comprehension: 7 Most Common Passage Types
Current Student
Status: Chasing my MBB Dream!
Joined: 29 Aug 2012
Posts: 1111
Location: United States (DC)
WE: General Management (Aerospace and Defense)
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
19 Nov 2013, 03:39
1
Points A, B, and, C have xy-coordinates (2,0), (8,12), and (14,0), respectively. Points X, Y, and Z have xy-coordinates (6,0), (8,4), and (10,0), respectively.
Area of triangle in coordinate plane is
$$1/2 *[x1(y2-y3)+x2(y3-y1)+x3(y1-y2)]$$...
The area of ABC will be 72 and area of xyz will be 8..
Fraction : 8/72= 1/9
_________________
Intern
Joined: 26 Aug 2014
Posts: 43
GMAT 1: 650 Q49 V30
GMAT 2: 650 Q49 V31
WE: Programming (Computer Software)
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
02 Sep 2015, 08:08
How do I get heights and bases of triangle ABC and XYZ as 12 and 4 respectively?
CEO
Joined: 20 Mar 2014
Posts: 2620
Concentration: Finance, Strategy
Schools: Kellogg '18 (M)
GMAT 1: 750 Q49 V44
GPA: 3.7
WE: Engineering (Aerospace and Defense)
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
02 Sep 2015, 08:15
1
ShristiK wrote:
How do I get heights and bases of triangle ABC and XYZ as 12 and 4 respectively?
If you plot the points in XY plane, you will see that the height of triangle ABC = y-coordinate of B = 12, the other 2 points lie on the X axis and thus can not give the height of the triangle.
Similarly, height of triangle XYZ = y-coordinate of Y = 4, the other 2 points lie on the X axis and thus can not give the height of the triangle.
Hope this helps.
Intern
Joined: 26 Aug 2014
Posts: 43
GMAT 1: 650 Q49 V30
GMAT 2: 650 Q49 V31
WE: Programming (Computer Software)
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
02 Sep 2015, 09:17
Thanks a lot!
I really need to read the question carefully. I've been plotting (14,0) and (10,0) as (0,14) and (0,10) all this time!
Current Student
Status: DONE!
Joined: 05 Sep 2016
Posts: 368
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
22 Oct 2016, 09:58
I'd recommend plotting this one. It's very easy to see that they are equilateral triangles & you can calculate the area of each and compare.
Area ABC = 36sqrt(3)
Area XYZ = 4sqrt(3)
Ratio of XYZ: ABC = 1/9
Non-Human User
Joined: 09 Sep 2013
Posts: 11749
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink]
Show Tags
18 Aug 2018, 19:04
Hello from the GMAT Club BumpBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
_________________
Re: Points A, B, and, C have xy-coordinates (2,0), (8,12), and ( [#permalink] 18 Aug 2018, 19:04
Display posts from previous: Sort by | 2019-07-23 11:36:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6627554297447205, "perplexity": 7982.5827539919055}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00117.warc.gz"} |
https://bitbucket.org/jeunice/show/src | # show /
Filename Size Date modified Message
docs
show
study
study0
test
343 B
180 B
752 B
7.7 KB
233 B
5.2 KB
2.8 KB
33 B
2.5 KB
169 B
959 B
from show import *
x = 12
nums = list(range(4))
show(x, nums)
yields:
x: 12 nums: [0, 1, 2, 3]
Output is self-labeled, so you don't spend time doing that yourself.
## Debug Printing
Logging, assertions, unit tests, and interactive debuggers are all great tools. But sometimes you just need to print values as a program runs to see what's going on. Every language has features to print text, but they're rarely customized for printing debugging information. show is. It provides a simple, DRY mechanism to "show what's going on."
Sometimes programs print so that users can see things, and sometimes they print so that developers can. show() is for developers, helping rapidly print the current state of variables in ways that easily identify what value is being printed, without a lot of wasted effort. It replaces the craptastic repetitiveness of:
print "x: {0!r}".format(x)
with:
show(x)
And if you have a lot of output flowing by, and it's hard to see your debugging output, try:
show(x, y, z, style='red')
And now you have debug output that clearly stands out from the rest.
But "debug printing is so very 1989!" you may say. "We now have logging, logging, embedded assertions, unit tests, ..." Yes, that's true. But wonderful as those things are, just showing your current program values is often what the doctor ordered.
## And Much More
While avoiding a few extra characters of typing and a little extra program complexity is nice (very nice, actually), show does much more. As just a taste, show.changed() displays local values that have changed since it was last run:
def f():
x = 4
show.changed()
x += 1
retval = x * 3
show.changed()
return retval
When run will display:
x: 4
x: 5 retval: 15
Functions decorated with @show.inout show you input parameters as the function is called, then the return value later.:
@show.inout
def g(a):
b = 3
a += b
show.changed()
return a
g(4)
Displays:
g(a=4)
a: 7 b: 3
g(a=4) -> 7
(If you want this terser, decorate with @show.inout(only='out').)
If you run show.prettyprint() after importing, or alternatively if you import with from show.pretty import *, the Pygments syntax highlighter will (if installed), be used to colorize data values. This can significantly help see complex lists and dictionaries.
Finally, show does normal output too, just like say (with all of its high-level text formatting):
wizard = "Gandalf"
show("You have no power here, {wizard}!")
Prints:
You have no power here, Gandalf!
Long story short, show is a strong debugging companion that prints the maximum amount of useful information with the minimum amount of fuss.
For more, see the full documentation at Read the Docs.
## New and Notable
Try from show.pretty import *.
IPython is now well-supported, either in a terminal window or a Jupyter Notebook. In other words, show now supports interactive usage. (The plain Python REPL is still only marginally supported, given significant weaknesses in its introspection support.)
A relatively new capability is to differentially set the formatting parameters on a method by method basis. For example, if you want to see separators in green and function call/return annotations in red:
show.sep.set(style='green')
show.inout.set(style='red')
You could long do this on a call-by-call basis, but being able to set the defaults just for specific methods allows you to get more formatting in with fewer characters typed. This capability is available on a limited basis: primarily for format-specific calls (blanklines, hr, sep, and title) and for one core inspection call (the inout decorator). It will be extended, and mapped back to underlying say and options features over time.
Warning
There are some outstanding issues with Windows. Also, when evaluating show, do so from a program file or from IPython, not from the plain interactive REPL. show depends on introspection, which the plain REPL simply doesn't provide with any quality. | 2018-04-23 03:05:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2361842542886734, "perplexity": 6540.030453298676}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945669.54/warc/CC-MAIN-20180423011954-20180423031954-00476.warc.gz"} |
http://physics.stackexchange.com/questions/30980/capillaries-in-series | # Capillaries in series
The velocity of fluid of viscosity $\eta$ through a capillary of radius $r$ and length $l$ at a distance $x$ from the center of the capillary is given by; $v=\frac{P}{4l \eta }(r^2-x^2)$ (where $P$ is the pressure difference at the two ends of capillary). With the help of this I can find the rate of flow of fluid out of the capillary equal to $\frac {dV_{out}}{dt} = \frac{\pi Pr^4}{8l \eta }$.
But what happens when the capillaries are in series with different radius and length?
-
add comment
## 1 Answer
Assuming the fluids are incompressible, the flow through each capillary must be the same. Also the sum of the pressures across each capillary must equal the total pressure. Therefore, you have the equations:
$P_1+P_2 = P$
$V_1 = \frac{\pi P_1 r_1^4}{8 l_1\eta} = V_2 = \frac{\pi P_2 r_2^4}{8 l_2\eta}$
Solve this system for $P_1$ and $P_2$ then plug back in to find the flow rate in terms of $P, r_1, l_1, r_2, l_2$.
-
To add to that: a system of pipes with laminar incompressible flow is (at least mathematically) extremely similar to an electrical circuit. You have a potential (pressure), you have a current (flow) and you have a resistance (hydrodynamic resistance). So all the stuff you learned about resistances in parallel and in series, also holds here and can make your life easy – Michiel Feb 13 '13 at 21:31
add comment | 2014-03-07 09:21:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7815465331077576, "perplexity": 229.9757467961753}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999639954/warc/CC-MAIN-20140305060719-00027-ip-10-183-142-35.ec2.internal.warc.gz"} |
https://math.stackexchange.com/questions/3180447/is-a-vector-space-a-subspace-of-itself | Is a vector space a subspace of itself?
We know that a subspace (of a vector space $$V$$) is a vector space that follows the same addition and multiplication rules as $$V$$, but is a vector space a subspace of itself?
Also, I'm getting confused doing the practice questions, on when we prove that something is a vector space by using the subspace test and when we prove V1 - V10, which are the ten axioms of vector spaces. So for example in $$\Bbb R^2$$, we have that $$\vec{x} + \vec{y} = \vec{y} + \vec {x}$$, etc..
• How do you define a subspace of a vector space? – Brian Apr 9 '19 at 2:02
• Is a set a subset of itself?? What’s V1-V10? – J. W. Tanner Apr 9 '19 at 2:16
• The term "proper" subspace is often used to denote a subspace space that is not the entire vector space. – Theo Bendit Apr 9 '19 at 2:19
• As other commenters have noted, your question lacks context. Please edit your question to include more context, lest your question be closed. Please give a definition of a subspace. Please explain what V1 - V10 means. If you are working from a particular text, a citation to that text would be helpful, too. – Xander Henderson Apr 9 '19 at 3:39
• @CompuChip That's not a usage of "proper set" that I'm accustomed to, but I can certainly imagine it being used by certain fields of maths. I would instead say "proper, non-trivial" to mean not the whole set/space, and not the empty set/$0$ subspace. – Theo Bendit Apr 9 '19 at 8:57
I'm guessing that V1 - V10 are the axioms for proving vector spaces.
To prove something is a vector space, independent of any other vector spaces you know of, you are required to prove all of the axioms in the definition. Not all operations that call themselves $$+$$ are worthy addition operations; just because you denote it $$+$$ does not mean it is (for example) associative, or has an additive identity.
There is a lot to prove, because there's a lot to gain. Vector spaces have a simply enormous amount of structure, and that structure gives us a really rich theory and powerful tools. If you have an object that you wish to understand better, and you can show it is a vector space (or at least, related to a vector space), then you'll instantly have some serious mathematical firepower at your fingertips.
Subspaces give us a shortcut to proving a vector space. If you have a subset of a known vector space, then you can prove just $$3$$ properties, rather than $$10$$. We can skip a lot of the steps because somebody has already done them previously when showing the larger vector space is indeed a vector space. You don't need to show, for example, $$v + w = w + v$$ for all $$v, w$$ in your subset, because we already know this is true for all vectors in the larger vector space.
I'm writing this, not as a direct answer to your question (which Jose Carlos Santos has answered already), but because confusion like this often stems from some sloppiness on the above point. I've seen many students (and, lamentably, several instructors) fail to grasp that showing the subspace conditions on a set that is not clearly a subset of a known vector space does not prove a vector space. The shortcut works because somebody has already established most of the axioms beforehand, but if this is not true, then the argument is a fallacy.
You can absolutely apply the subspace conditions on the whole of a vector space provided you've proven it's a vector space already with axioms V1 - V10.
• Ohh okay I think I understand. So when we prove that something is a subspace of say $\Bbb R^2$, we don't have to prove that $c(\vec{x} + \vec{y}) = c\vec{x} + c\vec{y}$ because we already know that's true inside of $\Bbb R^2$ and we know this subspace is a vector space INSIDE of $\Bbb R^2$ following all its rules. – ming Apr 9 '19 at 16:37
• But then for questions like "A sequence is a infinite list of real numbers. For example 1, 2, 3, 4, 5 is a sequence, and so is 1, -1, 2, -2, 4, -4. We define addition and scalar multiplication so that...." We actually don't have any known information/facts about this vector space so then we need to prove all ten axioms? – ming Apr 9 '19 at 16:39
• @ming Exactly. Couldn't put it better myself. – Theo Bendit Apr 9 '19 at 22:34
Yes, every vector space is a vector subspace of itself, since it is a non-empty subset of itself which is closed with respect to addition and with respect to product by scalars. | 2021-04-14 05:47:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 10, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7501230239868164, "perplexity": 244.39568807852928}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038076819.36/warc/CC-MAIN-20210414034544-20210414064544-00299.warc.gz"} |
http://en.wikipedia.org/wiki/Jacket_matrix | # Jacket matrix
In mathematics, a jacket matrix is a square matrix $A= (a_{ij})$ of order n if its entries are non-zero and real, complex, or from a finite field, and
Hierarchy of matrix types
$\ AB=BA=I_n$
where In is the identity matrix, and
$\ B ={1 \over n}(a_{ij}^{-1})^T.$
where T denotes the transpose of the matrix.
In other words, the inverse of a jacket matrix is determined its element-wise or block-wise inverse. The definition above may also be expressed as:
$\forall u,v \in \{1,2,\dots,n\}:~a_{iu},a_{iv} \neq 0, ~~~~ \sum_{i=1}^n a_{iu}^{-1}\,a_{iv} = \begin{cases} n, & u = v\\ 0, & u \neq v \end{cases}$
The jacket matrix is a generalization of the Hadamard matrix,also it is a Diagonal block-wise inverse matrix.
## Motivation
n .... -2, -1, 0 1, 2,..... logarithm 2^n ....$\ {1 \over 4},{1 \over 2},$ 1, 2, 4,..... Series
As shown in Table, i.e. in series, n=2 case, Forward[disambiguation needed]: $2^2=4$, Inverse : $(2^2)^{-1}={1 \over 4}$, then, $4*{1\over 4}=1$.
Therefore, exist an element-wise inverse.
## Example 1.
$A = \left[ \begin{array}{rrrr} 1 & 1 & 1 & 1 \\ 1 & -2 & 2 & -1 \\ 1 & 2 & -2 & -1 \\ 1 & -1 & -1 & 1 \\ \end{array} \right],$:$B ={1 \over 4} \left[ \begin{array}{rrrr} 1 & 1 & 1 & 1 \\[6pt] 1 & -{1 \over 2} & {1 \over 2} & -1 \\[6pt] 1 & {1 \over 2} & -{1 \over 2} & -1 \\[6pt] 1 & -1 & -1 & 1\\[6pt] \end{array} \right].$
or more general
$A = \left[ \begin{array}{rrrr} a & b & b & a \\ b & -c & c & -b \\ b & c & -c & -b \\ a & -b & -b & a \end{array} \right],$:$B = {1 \over 4} \left[ \begin{array}{rrrr} {1 \over a} & {1 \over b} & {1 \over b} & {1 \over a} \\[6pt] {1 \over b} & -{1 \over c} & {1 \over c} & -{1 \over b} \\[6pt] {1 \over b} & {1 \over c} & -{1 \over c} & -{1 \over b} \\[6pt] {1 \over a} & -{1 \over b} & -{1 \over b} & {1 \over a} \end{array} \right],$
## Example 2.
For m x m matrices, $\mathbf {A_j},$
$\mathbf {A_j}=diag(A_1, A_2,.. A_n )$ denotes an mn x mn block diagonal Jacket matrix.
$J_4 = \left[ \begin{array}{rrrr} I_2 & 0 & 0 & 0 \\ 0 & cos\theta & -sin\theta & 0 \\ 0 & sin\theta & cos\theta & 0 \\ 0 & 0 & 0 & I_2 \end{array} \right],$ $\ J^T_4 J_4 =J_4 J^T_4=I_4.$
## References
• Moon Ho Lee,The Center Weighted Hadamard Transform, IEEE Transactions on Circuits Syst. Vol. 36, No. 9, PP. 1247–1249, Sept.1989.
• K.J. Horadam, Hadamard Matrices and Their Applications, Princeton University Press, UK, Chapter 4.5.1: The jacket matrix construction, PP. 85–91, 2007.
• Moon Ho Lee, Jacket Matrices: Constructions and Its Applications for Fast Cooperative Wireless Signal Processing,LAP LAMBERT Publishing, Germany,Nov. 2012. | 2014-09-22 12:52:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 16, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8896588087081909, "perplexity": 4499.159395504175}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657137046.16/warc/CC-MAIN-20140914011217-00262-ip-10-234-18-248.ec2.internal.warc.gz"} |
http://nmrlipids.blogspot.fi/2015/03/mapping-scheme-for-lipid-atom-names-for.html | ## Wednesday, March 25, 2015
### Mapping scheme for lipid atom names for universal analysis scripts
During project we have generated unique collection of simulation data in the Zenodo community which can be openly used for different kind analyses. Analysis of this data is straighforward for expert, however generating scripts may be tedious since the atom naming convention is different between different models. To ease this I have generated a atom name mapping convention which should allow the usage of the same analysis script for different force fields with minimum effort. This has been done bearing in mind the analysis we have to do for the current activities in this blog, but also the wider usage beyond this project. In this post I describe the idea of the mapping and show some examples where I have used it. I hope that other people try to use this as well and tell if there are suggestions for improvement.
Mapping:
The mapping consist of a file which has the universal atom names in the first column and force field specific atom names in the second column. Here is example from the beginning of the mapping file for the CHARMM36 force field:
Universal name force field name
M_G1_M C3
M_G1H1_M HX
M_G1H2_M HY
M_G1O1_M O31
M_G1C2_M C31
M_G1C2O1_M O32
.
.
.
Universal atomnames always starts with "M_" flag and ends with "_M" flag to ensure that the scripts using grep command never confuses between them and force field specific names. In the actual naming convention between the flags, the first two characters define in which glycerol backbone chain the atoms attached (G1, G2 or G3). Then the naming goes like this:
M_G1_M ;glycerol backbone C1
M_G1H1_M ;hydrogen attached to glycerol backbone C1
M_G1H2_M ;hydrogen attached to glycerol backbone C1
M_G1O1_M ;first atom (oxygen) in sn-1 chain
M_G1C2_M ;second atom (carbon) in sn-1 chain
M_G1C2O1_M ;oxygen attached to second atom (carbon) in sn-1 chain
M_G1C3_M ;third atom (carbon) in sn-1 chain
M_G1C3H1_M ;first hydrogen attached to third atom (carbon) in sn-1 chain
M_G1C3H2_M ;second hydrogen attached to third atom (carbon) in sn-1 chain
.
.
.
So the third character tells the atom type and fourth character tells the counting number from the glycerol backbone carbon. If there are hydrogens or other atoms attached to the main chain, those will be added to the end of the naming, for example, in these cases:
M_G1C2O1_M ;oxygen attached to second atom (carbon) in sn-1 chain
M_G1C3H1_M ;first hydrogen attached to third atom (carbon) in sn-1 chain
The G3 atoms are the headgroup atoms, thus they are little bit more complicated. Examples of complete mapping files for CHARMM36 and MacRog models can be found from GitHub.
Usage:
I have used this mapping for couple of cases now and I think that it works.
Below is example script which calculates the order parameters for acyl chains for CHARMM36 model in low hydrated conditions. The main point is that only changes required to use the script for different systems are the file names for simulation files, output files, mapping file and the order parameter analysis script location. These are 7 lines after "#Define file names" comment. The gro_OP.awk and mapping files can be found from GitHub. It is tedius to generate the mapping files, however, once done the analysis of different properties becomes very straightforward.
Example script:
#!/bin/bash
wget https://zenodo.org/record/13945/files/popcRUN4.tpr
wget https://zenodo.org/record/13945/files/popcRUN4.trr
#Define file names
tprname=popcRUN4.tpr
trajname=popcRUN4.trr
trajgroname=analTMP.gro
sn1outname=OrderParamSN1lowHYD.dat
sn2outname=OrderParamSN2lowHYD.dat
mappingFILE=../MAPPING/mappingPOPCcharmm.txt
analFILE=../../nmrlipids.blogspot.fi/scripts/gro_OP.awk
#Make gro file which can be used to calculate the order parameters
echo System | /home/ollilas1/gromacs/gromacs465/bin/trjconv -f $trajname -s$tprname -o $trajgroname -pbc res -b 0 #This is loop over sn-1 carbon segments for(( j = 3 ; j <= 16; j=j+1 )) do #This greps the force field specific atom names using the mapping file Cname=$(grep M_G1C"$j"_M$mappingFILE | awk '{printf "%5s\n",$2}') H1name=$(grep M_G1C"$j"H1_M$mappingFILE | awk '{printf "%5s\n",$2}') H2name=$(grep M_G1C"$j"H2_M$mappingFILE | awk '{printf "%5s\n",$2}') #Calculate order parameters H1op=$(awk -v Cname="$Cname" -v Hname="$H1name" -f $analFILE$trajgroname)
H2op=$(awk -v Cname="$Cname" -v Hname="$H2name" -f$analFILE $trajgroname) #Print results to file echo$j $H1op$H2op >> $sn1outname done #This is loop over sn-2 carbon segments for(( j = 3 ; j <= 18; j=j+1 )) do #This greps the force field specific atom names using the mapping file Cname=$(grep M_G2C"$j"_M$mappingFILE | awk '{printf "%5s\n",$2}') H1name=$(grep M_G2C"$j"H1_M$mappingFILE | awk '{printf "%5s\n",$2}') H2name=$(grep M_G2C"$j"H2_M$mappingFILE | awk '{printf "%5s\n",$2}') #Calculate order parameters H1op=$(awk -v Cname="$Cname" -v Hname="$H1name" -f $analFILE$trajgroname)
H2op=$(awk -v Cname="$Cname" -v Hname="$H2name" -f$analFILE $trajgroname) #Print results to file echo$j $H1op$H2op >> \$sn2outname
done
The script and output files (OrderParamSN2lowHYD.dat ) can be found also from github.
The results from the above script together with the full hydration results are also shown in Fig. 1
Fig 1. Order parameters for acyl chains calculated from trajectories availabe in Zenodo collection using the mapping file and similar scripts as above. Also experimental results by Ferreira et al. for full hydration are shown. Dehydration induced ordering of acyl chains qualitatively agrees with experiments for DMPC [Dvinskikh et al., Mallikarjunaiah et al.].
I have also ran similar script to calculate acyl chain order parameters from different simulation data with different cholesterol concentrations available in the Zenodo collection. These scripts and produced results are available in GitHub (CHARMM36, MacRog). The results are summarized in Fig. 2
Fig 2. Order parameters for acyl chains calculated from some trajectories with cholesterol availabe in the Zenodo collection using the mapping file and similar scripts as above. The experimental data is from Ferreira et al. For more discussion about CHARMM36 and MacRog in GitHub.
This is quite technical issue, and I am not sure if this presentation is understandable, so please do not hesitate ask further clarifications. | 2017-06-24 22:21:27 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5132609605789185, "perplexity": 8817.997332717467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320362.97/warc/CC-MAIN-20170624221310-20170625001310-00141.warc.gz"} |
https://studyadda.com/question-bank/fractions_q24/3360/281769 | • # question_answer 24) The given list shows the names of the students who are in the Sports Club. Students in the Sports Club Ananya Saumya Shalu Shikha Mona Anushka Sonika Parul One student will be chosen at random. What fraction of names are starting with alphabet S? A) $\frac{2}{8}$ B) $\frac{3}{8}$ C) $\frac{1}{8}$ D) $\frac{4}{8}$
Total number of students = 8 Number of students whose name starts with 'S' = 4 $\therefore$ Required fraction $=\frac{4}{8}$ | 2019-06-26 22:02:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.503275990486145, "perplexity": 6239.733131216906}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000575.75/warc/CC-MAIN-20190626214837-20190627000500-00011.warc.gz"} |
https://zbmath.org/?q=an:1010.34021 | ## Small amplitude limit cycles and the distribution of zeros of families of analytic functions.(English)Zbl 1010.34021
The paper is concerned with the number of limit cycles of a planar polynomial system in a neighborhood of the origin. An estimation is given on their number. The main tool is a distributional inequality for the number of zeros of some families of univariate holomorphic functions depending analytically on a parameter.
### MSC:
34C05 Topological structure of integral curves, singular points, limit cycles of ordinary differential equations 34C10 Oscillation theory, zeros, disconjugacy and comparison theory for ordinary differential equations 60F05 Central limit and other weak theorems
### Keywords:
small amplitude limit cycles; distribution of zeros
Full Text: | 2022-05-20 19:55:25 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8869377970695496, "perplexity": 629.4518500471099}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662534669.47/warc/CC-MAIN-20220520191810-20220520221810-00793.warc.gz"} |
https://datascience.stackexchange.com/questions/13109/adoptivemachine-wise-ml-packages/13583 | What kind of Machine Learning/Data Mining packages are available that can easily be scaled on a cluster. For instance H2O is one of them because it's running in Java so it can be easily extended to other machines providing parallelism. What other are popular? or even for general Data munging purposes.
Language wise preferably Python or R. (if other please don't be discouraged to include)
For python (which is my language of preference) I can suggest one of two alternatives.
First, PySpark. As part of the Spark collection of utilities it is very mature and designed to work with hundreds and thousands of nodes. It is extremely stable and has decent API, especially when empowered with Spark-MLlib. Spark has loads of users and resources and comes highly recommended, Nearly the entire industry uses that.
If thousands of nodes is not your biggest concern, and you're familiar with Pandas, you might want to consider Dask for a native python solution, together with sklearn. This is a bit "risky" from a corporate's standpoint but I recommend giving it a try.
• Thank you very much, I did not know about Dask seems a nice framewrok! Aug 23 '16 at 11:45
• An upvote and an "accepted answer" are always a nice way to say "Thank you" ;). No pressure though Aug 23 '16 at 14:53
• I firmly agree with you, I was the one who upvoted you, the reason that I not accepted your answer is because I leave some room for more recommendations especially for the distributed ML part. Again thank you for your proposal! Aug 23 '16 at 15:40
SPARKR, http://spark.apache.org/docs/latest/sparkr.html, is designed to support parallel processing.
Additional resources are summarized here: https://cran.r-project.org/web/views/HighPerformanceComputing.html
• Thank you for the answer, except from these two are there any other ? It seems strange to me that only these two exist. Aug 1 '16 at 19:54 | 2022-01-25 23:04:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2145586758852005, "perplexity": 1125.796943781376}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304876.16/warc/CC-MAIN-20220125220353-20220126010353-00653.warc.gz"} |
https://www.asymptote-project.eu/all-tasks/?asymPagination=14 | #### Farmer Production
One farmer sold to four dealers the $\dfrac{2}{5}$, $\dfrac{2}{15}$, $\dfrac{1}{3}$ and $\dfrac{1}{10}$ of his production. How much of the production was not sold?
# Fractions
Training
6
#### Cyclists average weight
A group of 8 cyclist is composed by 2 women and 6 men. The average weight of the 8 cyclist is 80 kg. The average of men weight is 88 kg. Calculate how much is the average weight of women.
# Mean & median
Reasoning
8
#### Joseph rating problem
Joseph in the last 7 tests takes the following grades: 7.5 / 8 / 5.5 / 4 / 5.5 / 6 / 6.5. How much is his average rating? (round to the second decimal digit)
# Mean & median
Training
6
#### Faucets
A faucet fills up a tank in 10 minutes. Another faucet fills up the same tank in 15 minutes. How many minutes does it take for the tank to be full if we open the two faucets together?
# Fractions
Modeling
6
#### Sibling Survey
The 24 students in a class conducted a survey on the number of siblings every student has. On average, a student in this class has about 1,83 siblings. The number of students with 1 and 2 siblings is equal. Exactly 6 Students in the class have more than 2 siblings. Below, select the diagrams from the title image, that represent the collected data correctly.
# Standard diagrams
Training
7
#### Playing with cards and probabilities
A certain set of playing cards is made of twelve red cards and some black cards. A card is chosen at random from this set. It is known that the probability of that card being red is 75%. How many black cards are there in this set?
# One-stage random experiments
Reasoning
8 | 2022-09-27 17:36:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7223813533782959, "perplexity": 874.6211652533215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00442.warc.gz"} |
http://www.journaltocs.ac.uk/index.php?action=browse&subAction=subjects&publisherID=89&journalID=6743&pageb=1&userQueryID=&sort=&local_page=1&sorType=DESC&sorCol=2 | Subjects -> METALLURGY (Total: 58 journals)
Showing 1 - 10 of 10 Journals sorted alphabetically Acta Metallurgica Slovaca (Followers: 2) Advanced Device Materials (Followers: 6) American Journal of Fluid Dynamics (Followers: 44) Archives of Metallurgy and Materials (Followers: 9) Asian Journal of Materials Science (Followers: 4) Canadian Metallurgical Quarterly (Followers: 21) Complex Metals (Followers: 2) Energy Materials : Materials Science and Engineering for Energy Systems (Followers: 24) Graphene and 2D Materials (Followers: 6) Handbook of Ferromagnetic Materials (Followers: 1) Handbook of Magnetic Materials (Followers: 2) High Temperature Materials and Processes (Followers: 6) Indian Journal of Engineering and Materials Sciences (IJEMS) (Followers: 11) International Journal of Metallurgy and Alloys (Followers: 1) International Journal of Metals (Followers: 7) International Journal of Minerals, Metallurgy, and Materials (Followers: 11) International Journal of Mining and Geo-Engineering (Followers: 4) Ironmaking & Steelmaking (Followers: 5) ISIJ International - Iron and Steel Institute of Japan (Followers: 26) Izvestiya Vuzov. Poroshkovaya Metallurgiya i Funktsional’nye Pokrytiya (Proceedings of Higher Schools. Powder Metallurgy аnd Functional Coatings) (Followers: 2) JOM Journal of the Minerals, Metals and Materials Society (Followers: 35) Journal of Central South University (Followers: 1) Journal of Cluster Science Journal of Heavy Metal Toxicity and Diseases Journal of Iron and Steel Research International (Followers: 11) Journal of Materials & Metallurgical Engineering (Followers: 2) Journal of Materials Processing Technology (Followers: 21) Journal of Metallurgical Engineering (Followers: 4) Journal of Sustainable Metallurgy (Followers: 3) Materials Science and Metallurgy Engineering (Followers: 6) Metal Finishing (Followers: 20) Metallurgical and Materials Engineering (Followers: 7) Metallurgical and Materials Transactions A (Followers: 41) Metallurgical and Materials Transactions B (Followers: 32) Metallurgical and Materials Transactions E (Followers: 2) Metallurgical Research and Technology (Followers: 8) Metallurgy and Foundry Engineering (Followers: 2) Mining, Metallurgy & Exploration Powder Diffraction (Followers: 1) Powder Metallurgy (Followers: 36) Powder Metallurgy and Metal Ceramics (Followers: 8) Powder Metallurgy Progress (Followers: 5) Practical Metallography (Followers: 6) Rare Metals (Followers: 3) Revista de Metalurgia Revista del Instituto de Investigación de la Facultad de Ingeniería Geológica, Minera, Metalurgica y Geográfica Revista Remetallica (Followers: 1) Revue de Métallurgie Russian Metallurgy (Metally) (Followers: 4) Science and Technology of Welding and Joining (Followers: 7) Steel Times lnternational (Followers: 19) Transactions of the IMF (Followers: 14) Transactions of the Indian Institute of Metals (Followers: 5) Tungsten Universal Journal of Materials Science (Followers: 3) Welding in the World (Followers: 7) Welding International (Followers: 11) Вісник Приазовського Державного Технічного Університету. Серія: Технічні науки
Similar Journals
JOM Journal of the Minerals, Metals and Materials Society
Journal Prestige (SJR): 1.054; Citation Impact (citeScore): 2; Number of Followers: 35
Hybrid journal (it can contain Open Access articles)
ISSN (Print): 1543-1851; ISSN (Online): 1047-4838
Published by Springer-Verlag
• Evaluation and Modeling of Scrap Utilization in the Steelmaking Process
• Abstract: Abstract The study of scrap melting provides data for increasing the scrap utilization rate. Here, an evaluation model is established to analyze the effect of each factor on scrap melting using statistical methods for the first time. Subsequently, the quantitative relationship between the influencing factors and melting parameters is obtained. Back propagation (BP) neural networks and multiple regression are used for predictions. For scrap melting controlled by carbon mass transfer when the bath temperature range is 1573–1723 K, the relative contribution of each parameter was mixing power > bath temperature > specific surface area > carbon content. The predicted values of the BP neural network are more accurate than those of multiple regression. The relative errors of average melting rate, average mass melting speed, and mass transfer coefficient of training sets are 14.02%, 13.95%, and 7.19%, respectively, which decrease by 22.71%, 47.22%, and 69.46%, respectively, compared with those of the regression equations after outliers are removed.
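(Illustrative sketch only, not the authors' code or data.) The prediction step described above can be mimicked in Python with scikit-learn by fitting a back-propagation neural network and a multiple linear regression to a synthetic table of the four cited factors (mixing power, bath temperature, specific surface area, carbon content) and comparing their relative errors; every variable name and numeric value below is hypothetical.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical factors: mixing power, bath temperature (K), specific surface area, carbon content
X = rng.uniform([0.1, 1573.0, 0.5, 0.5], [2.0, 1723.0, 3.0, 4.5], size=(200, 4))
# Hypothetical melting-rate response with mild noise, for illustration only
y = 0.6 * X[:, 0] + 0.002 * (X[:, 1] - 1573.0) + 0.3 * X[:, 2] + 0.05 * X[:, 3] + rng.normal(0.0, 0.05, 200)

nn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, y)  # back-propagation network
lr = LinearRegression().fit(X, y)                                                    # multiple regression

def mean_relative_error(model, X, y):
    # Mean absolute relative error in percent
    return float(np.mean(np.abs(model.predict(X) - y) / np.abs(y)) * 100.0)

print("BP network mean relative error (%):", mean_relative_error(nn, X, y))
print("Regression mean relative error (%):", mean_relative_error(lr, X, y))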
PubDate: 2021-01-13
• Leaching of Antimony from Stibnite Ore in KOH Solution for Sodium Pyroantimonate Production: Systematic Optimization and Kinetic Study
• Abstract: Abstract This work aims to shed light on the leaching kinetics of antimony from stibnite ore in potassium hydroxide (KOH) solution. Response surface methodology based on central composite design was used to investigate the effect of time, temperature, solid to liquid ratio (S/L), and KOH concentration as independent parameters on the leaching efficiency of antimony. According to the results, time shows the most significant effect on the leaching yield of antimony, followed by temperature. The optimum leaching condition was obtained at a KOH concentration of 0.5 mol/L, temperature of 25°C, S/L of 100 g/L, and time of 133 min, with an antimony leaching yield of 56.5%. Kinetic studies based on the shrinking core model illustrated that the diffusion process through the ash layer is the rate-limiting step, with an activation energy of 4.97 kJ mol−1. Finally, antimony was recovered from the leach liquor in the form of NaSb(OH)6. This study can pave the way for the development of new hydrometallurgical processes for antimony recovery from the sulfide minerals.
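For context on the model invoked above (generic textbook relations, not equations reproduced from the paper): under ash-layer diffusion control the shrinking-core model relates the leached fraction X to time t through

$$1 - 3(1 - X)^{2/3} + 2(1 - X) = k_d\,t, \qquad k_d = A\exp\!\left(-\frac{E_a}{RT}\right) \;\Rightarrow\; \ln k_d = \ln A - \frac{E_a}{R}\cdot\frac{1}{T},$$

so a linear fit of ln k_d against 1/T over the tested leaching temperatures yields the activation energy from the slope; the value reported here is 4.97 kJ/mol.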
PubDate: 2021-01-13
• Observation of Fundamental Mechanisms in Compression-Induced Phase Transformations Using Ultrafast X-ray Diffraction
• Abstract: Abstract As theoretically hypothesized for several decades in group IV transition metals, we have discovered a dynamically stabilized body-centered cubic (bcc) intermediate state in Zr under uniaxial loading at sub-nanosecond timescales. Under ultrafast shock wave compression, rather than the transformation from α-Zr to the more disordered hex-3 equilibrium ω-Zr phase, in its place we find the formation of a previously unobserved nonequilibrium bcc metastable intermediate. We probe the compression-induced phase transition pathway in zirconium using time-resolved sub-picosecond x-ray diffraction analysis at the Linac Coherent Light Source. We also present molecular dynamics simulations using a potential derived from first-principles methods which independently predict this intermediate phase under ultrafast shock conditions. In contrast with experiments on longer timescale (> 10 ns) where the phase diagram alone is an adequate predictor of the crystalline structure of a material, our recent study highlights the importance of metastability and time dependence in the kinetics of phase transformations.
PubDate: 2021-01-12
• Comparison of Biomass and Coal in the Recovery Process of Silicon in an Electric Arc Furnace
• Abstract: Abstract Silicon recovery of silica ore (SiO2) has been studied with two types of carbon materials, charcoal as a biomass and coal as a fossil fuel, at elevated temperatures between 1800 and 2000°C in an electric arc furnace. The effects of porosity and electrical resistance of the carbon materials were investigated. To this end, recovery of silicon and ferrosilicon production were tested separately by charcoal and coal, and the products were investigated both qualitatively and quantitatively. A higher electrical resistance of charcoal was found in comparison with coal, with increased efficiency of the furnace and decreased electric energy consumption (per ton of product). The efficiency of the furnace using charcoal and coal was 92.13% and 77.4%, respectively. In addition, the higher porosity of charcoal facilitates the flow of SiO gas through the carbon material leading to a higher reactivity and reducing the electric energy consumption for each ton of FeSi.
PubDate: 2021-01-12
• Investigation of Microwave and Thermal Processing of Electrode Material of End-of-Life Ni-MH Battery
• Abstract: Abstract Ni-metal hydride (NiMH) batteries should be recycled, as they contain base metals (Ni, Co) and rare earth elements (La, Ce). In this study, thermal and microwave treatments are investigated as a pre-treatment method for decomposing electrode material (Ni(OH)2 and LaNi5). The kinetic analysis of thermal decomposition electrode material yields the activation energy of 41.3 kJ/mol. The maximum percentage of lanthanum nickel oxide phase and cerium oxide phase is obtained at 1000°C of thermal treatment and 15 min of microwave exposure. The formation of melted balls of Ni and its oxide (Ni ~ 72.8%) was observed in microwave exposure, and the tendency of ball formation decreased with increasing exposure time. The effect of microwave exposure and thermal treatment on the acid leaching (1 M HCl, at S/L-1:20, at 70°C for 2 h) was studied. The leaching results showed NiO and CeO2 phases in the leach residue of thermal and microwave treated products.
PubDate: 2021-01-11
• Thermodynamics and Synthesis of Cu Powder from CuO in Waste Tire-Derived Pyrolytic Gas Atmosphere
• Abstract: Abstract The present study aimed to investigate the reduction behavior of CuO particles under the gaseous atmosphere generated by waste tire pyrolysis. Thermodynamics of the reduction process indicated that CuO could be reduced to the metal via the tire (rubber) pyrolysis route in the temperature range 700–900 K. Oxide reduction experiments were conducted as a function of the reactant mass ratio (mtire/mCuO) and temperature (600–900 K). The extent of waste pyrolysis increased as the temperature was raised to 900 K. This was accompanied by an increase in the oxide reduction. A significant reduction was attained at mtire/mCuO = 1.28 when the reactants were heated to 800 K and 900 K. Adding a small amount of waste high-density polyethylene to the tire sufficed for full CuO reduction. CuO reduction reactions and morphological evolution of flower-type CuO particles to relatively equiaxed Cu particles were discussed in terms of experimental and theoretical findings.
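As a reminder of the feasibility criterion behind the thermodynamic statement above (a generic relation, not taken from the paper): reduction by a gaseous reductant such as H2 or CO, both typical tire-pyrolysis products assumed here purely for illustration, is favorable at temperature T when the reaction Gibbs energy is negative,

$$\Delta G^{\circ}_{r}(T) = \Delta H^{\circ}_{r} - T\,\Delta S^{\circ}_{r} < 0, \qquad \text{e.g. } \mathrm{CuO(s) + H_2(g) \rightarrow Cu(s) + H_2O(g)},$$

which is the kind of criterion evaluated over the 700–900 K window cited above.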
PubDate: 2021-01-08
• PubDate: 2021-01-07
• A Different Kind of MS&T: Virtual Meeting Featured Live Events and On-Demand Technical Talks
• PubDate: 2021-01-07
• TMS Members Selected for Presidential Subcommittee; Explore the Redesigned TMS Webinar Library
• PubDate: 2021-01-07
• Transitioning from an In-Person to Online Format Amidst the COVID-19 Pandemic as Discussed at the Judson Symposium
• PubDate: 2021-01-07
• Honoring the 2021 TMS Award Recipients
• PubDate: 2021-01-07
• in the final analysis
• PubDate: 2021-01-06
• In Case You Missed It: Business News from the Field
• PubDate: 2021-01-05
• Preview the TMS2021 Proceedings Volumes
• PubDate: 2021-01-05
• Textured Polymer Surfaces Mimicking the Tactile Friction Between Wood and Skin
• Abstract: Abstract Polymer-based furniture with wood-like visual printing is widely used in domestic and office applications. Although polymers could fulfil the high quality requirements of strength and appearance, they cannot mimic the feel of wood during touch. In this study, polymers with textured surfaces were designed to mimic the tactile friction and naturalness of wood. The influence of a series of factors on tactile friction was assessed. Textured polypropylene surfaces showed a 14.8% reduction in friction, and were more similar to wood compared to un-textured rough polypropylene surface, indicating the significant influence of surface texture on tactile friction. The touch perception test further proved that polymer samples were perceived as more natural with a rough or textured surface than with a smooth surface. This study suggests that, with a detailed design of the surface texture parameter, it is possible to mimic the tactile friction and naturalness of wood by using textured polymers.
PubDate: 2021-01-04
• Low-Temperature Molten Salt Synthesis and Characterization of
Nanowire-Like TaB 2 Powder
• Abstract: Abstract TaB2 nanopowder has been prepared by the molten salt synthesis (MSS) technique. Nanowire-like TaB2 nanopowder was successfully synthesized by reacting in the Ta2O5–MgB2 system using KCl/NaCl as reaction media. The impact of the firing temperature (800°C to 1000°C), firing time (1 h to 4 h), and ratio of reactants to salt (1:0, 1:2, 1:5, and 1:10) on the preparation of the TaB2 nanopowder was examined. The resultant powder samples were characterized by scanning electron microscopy (SEM), x-ray diffraction (XRD) analysis, transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), and surface area analysis (SAA). The results showed that the reaction involved in the formation of crystalline TaB2 nanopowder could be completed successfully at 1000°C after 4 h of firing when the mass ratio of reactants to salt was 1:5. The resulting TaB2 nanopowders exhibited nanowire-like morphology.
PubDate: 2021-01-04
• Production Strategy for Manufacturing Large-Scale AlSi10Mg Components by
Laser Powder Bed Fusion
• Abstract: Abstract The long production time required for large-scale parts fabricated by laser powder bed fusion (LPBF) tends to induce cracks, distortions, and overheating problems. In this work, to address these challenges, we explored and established a suitable strategy for producing large AlSi10Mg components. The platform temperatures to prevent cracks and distortions were firstly determined. Then, the in situ aging behavior was investigated for samples under various platform temperatures and holding times. Our results revealed that platform temperatures of 150°C and 200°C can effectively prevent cracks and minimize distortions. Besides, using 150°C, samples can reach peak hardness with a holding time less than 13 h. In comparison, those samples produced with a holding time longer than 13 h at 150°C and 200°C show obvious over-aging responses and thus lower hardness. However, such a hardness impoverishment can be recovered by using a T6 post-process heat-treatment.
PubDate: 2021-01-04
• Recovery of Indium from Hard Zinc Slag by Pressure Leaching and Solvent
Extraction
• Abstract: Abstract In this study, hydrometallurgical processes involving pressure acid leaching and solvent extraction were developed to aid recovery of indium from zinc slag, which is produced in the imperial smelting process. Four different acid leaching methods were studied, namely atmospheric leaching, atmospheric leaching with KMnO4, roasting-atmospheric leaching, and oxygen pressure leaching in a sulfuric acid medium. Oxygen pressure acid leaching is the most effective method for indium extraction, and 94.1% of indium was leached under the optimum conditions, i.e., 300 g/L H2SO4,oxygen pressure 0.4 MPa, liquid/solid ratio 10 mL/g, and temperature 100°C for 5 h. X-ray diffraction and scanning electron microscopy examination of the raw material and leaching residue samples indicated that the intermetallic compounds Cu5Zn8and Cu2Zn, metallic zinc, and iron in the raw material dissolved, leaving the insoluble components PbSO4 and Pb as the major compounds in the leaching residue. A 98.5% proportion of the indium in the leaching solution was selectively extracted with 30% bis(2-ethylhexyl) phosphate and 70% kerosene by three-stage counter-current extraction, and 99.5% of the indium in the loaded organic phase was stripped by 6 mol/L HCl through four-stage counter-current stripping. The overall recovery yield of indium through all processes was approximately 92%.
PubDate: 2021-01-04
• Investigations on Positive (Sm 3+ ) and Negative (Ho 3+ ) Association
Energy Ion Co-doped Cerium Oxide Solid Electrolytes for IT-SOFC
Applications
• Abstract: Abstract Novel compositions of positive (Sm3+) and negative (Ho3+) association energy ion co-doped cerium oxide solid electrolytes were synthesized and analyzed for intermediate-temperature solid oxide fuel cell (IT-SOFC) applications. Powder x-ray diffraction (XRD) and Raman studies confirmed the phase of pure cubic fluorite structure, while densely packed porous-structured morphology was affirmed with high-resolution scanning electron microscope (HR-SEM) micrographs. The formations of oxygen vacancies and association energies were analyzed through optical properties using ultraviolet (UV) and photoluminescence (PL) spectra. Thermal analysis revealed high thermal stability without any structural deformations and a high thermal expansion coefficient at the intermediate temperature range. The incorporation of Sm3+ ions acts as an oxygen vacancy generator which influences the ionic conductivity properties, and Ce0.8Sm0.1Ho0.1O2−δ solid electrolyte showed the high conductivity of 0.72 × 10−2 S/cm at 600°C specifying that this solid electrolyte might be an excellent candidate for IT-SOFC applications.
PubDate: 2021-01-04
JournalTOCs
School of Mathematical and Computer Sciences
Heriot-Watt University
Edinburgh, EH14 4AS, UK
Email: [email protected]
Tel: +00 44 (0)131 4513762 | 2021-01-15 15:30:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6611259579658508, "perplexity": 12942.667933711631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703495901.0/warc/CC-MAIN-20210115134101-20210115164101-00630.warc.gz"} |
http://mathoverflow.net/revisions/7314/list | 2 added 126 characters in body
Just to lend some context to the above question: the mapping class group of the two-torus is naturally isomorphic to GL(2, Z). If we restrict to orientation preserving homeomorphism the mapping class group is SL(2, Z). The periodic mapping classes (isotopy classes of homeomorphisms) are exactly those with trace less than two in absolute value. (Hmm, and +/- Id, I guess!) Now we need to count the number of conjugacy classes of periodic elements. There should be a cool algebraic way to do this. (Perhaps it would help to give a purely algebraic proof that the order of torsion is at most 6?)
I think that there is a geometric way to do this: every periodic element occurs as the symmetry of some flat torus (= parallelogram with opposite sides identified). All tori have have the hyperelliptic symmetry, corresponding to rotation by 180 degrees about any point. (These maps lie in the mapping class of the negative identity.) Other symmetries:
Rombic tori have a reflection symmetry as do rectangular tori.
The square torus has a rotation by 90 degrees.
The hexagonal torus has a rotation by 60 degrees.
So I count:
1. the identity, Id
2. the hyperelliptic = -Id = rotation by 180
3. rotation by 90
4. rotation by 60
5. rotation by 120
6. the reflection [[-1,0],[0,1]] (reflection in an axis) and
7. the reflection [[0,1],[1,0]] (exchange axes).
You can prove that the last two are distinct algebraically. Perhaps the lack of 45 degree rotation is a geometric proof.
Now, we could perform similar geometric tricks to obtain symmetries of $N_4$ and get at least all of the rotations... [Edit: For example, it is possible to build a copy of $\rm{Sym}_4$ by placing the cross-caps at the vertices of a tetrahedron.]
1
Just to lend some context to the above question: the mapping class group of the two-torus is naturally isomorphic to GL(2, Z). If we restrict to orientation preserving homeomorphism the mapping class group is SL(2, Z). The periodic mapping classes (isotopy classes of homeomorphisms) are exactly those with trace less than two in absolute value. (Hmm, and +/- Id, I guess!) Now we need to count the number of conjugacy classes of periodic elements. There should be a cool algebraic way to do this. (Perhaps it would help to give a purely algebraic proof that the order of torsion is at most 6?)
I think that there is a geometric way to do this: every periodic element occurs as the symmetry of some flat torus (= parallelogram with opposite sides identified). All tori have have the hyperelliptic symmetry, corresponding to rotation by 180 degrees about any point. (These maps lie in the mapping class of the negative identity.) Other symmetries:
Rombic tori have a reflection symmetry as do rectangular tori.
The square torus has a rotation by 90 degrees.
The hexagonal torus has a rotation by 60 degrees.
So I count:
1. the identity, Id
2. the hyperelliptic = -Id = rotation by 180
3. rotation by 90
4. rotation by 60
5. rotation by 120
6. the reflection [[-1,0],[0,1]] (reflection in an axis) and
7. the reflection [[0,1],[1,0]] (exchange axes).
You can prove that the last two are distinct algebraically. Perhaps the lack of 45 degree rotation is a geometric proof.
Now, we could perform similar geometric tricks to obtain symmetries of $N_4$ and get at least all of the rotations... | 2013-05-21 19:03:39 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7569057941436768, "perplexity": 695.2767079834888}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368700438490/warc/CC-MAIN-20130516103358-00018-ip-10-60-113-184.ec2.internal.warc.gz"} |
https://dsp.stackexchange.com/questions/60951/effect-of-origin-poles-on-stability | # Effect of origin poles on stability?
What will be stability if we have only one single pole at origin in s domain?? and what will be the case for multiple poles at origin in s domain?
A system with simple distinct poles on the imaginary axis (and note that the origin is on the imaginary axis) and no poles in the right half-plane is called marginally stable. If you have poles with multiplicity greater than $$1$$ on the imaginary axis, or if there are poles in the right half-plane, then the system is unstable.
• @abtj: It's just as I said, the origin $s=0$ is on the imaginary axis, so a pole at $s=0$ is on the imaginary axis. And, obviously, not all poles on the imaginary axis are necessarily at the origin. – Matt L. Sep 28 '19 at 11:20
• At the risk of stating the obvious, it is not about the number of poles on the unit circle, but about the multiplicity of the poles. Separate simple poles on the unit circle, no matter how many, result in a marginally stable system, poles on the circle with multiplicity greater than $1$ cause the system to be unstable. – Matt L. Sep 29 '19 at 7:35 | 2020-05-31 11:23:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8718882203102112, "perplexity": 189.1726280311598}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347413097.49/warc/CC-MAIN-20200531085047-20200531115047-00294.warc.gz"} |
https://www.physicsforums.com/threads/need-help-with-a-statically-indeterminate-system.324063/ | # Homework Help: Need help with a Statically Indeterminate System!
1. Jul 8, 2009
### TA1068
1. The problem statement, all variables and given/known data
Three steel bars are pin-connected to a rigid member K. Determine the force developed in each bar. Determine the load carried be each of the tension members and the elongation of each member
http://img3.imageshack.us/img3/6850/problemdiagram.jpg [Broken]
Known:
$$A_A_B$$ = 0.10 in^2
$$E_A_B$$ = 30E6 psi
$$A_C_D$$ = 0.20 in^2
$$E_C_D$$ = 15E6 psi
$$A_F_H$$ = 0.30 in^2
$$E_F_H$$ = 10E6 psi
2. Relevant equations
$$\delta$$ = (PL) / (AE)
3. The attempt at a solution
10$$P_C_D$$ + 20$$P_F_H$$ = 15(15000)
or
$$P_F_H$$ = 7500 - (1/3)$$P_C_D$$
The equation $$\delta$$ = (PL) / (AE) yields:
$$P_A_B$$ = 150,000$$\delta_A_B$$
$$P_C_D$$ = 200,000$$\delta_C_D$$
$$P_F_H$$ = 300,000$$\delta_F_H$$
And the sum of forces in the Y direction gives:
$$P_A_B$$ + $$P_C_D$$ + $$P_F_H$$ = 15000
This is where I'm stuck. If any point along K was fixed it would be easy; K is rigid, so then the distance from the fixed point can be turned into a ratio to find the other $$\delta$$ values. I think all 3 points (B, D, and H) are pulled downward, but I'm not sure what there relation is to each other. Any clues?
Last edited by a moderator: May 4, 2017
2. Jul 8, 2009
### TA1068
Yikes, I think my TEX tags are all messed up. Not sure how to fix it, let me know if you have any questions!
3. Jul 8, 2009
### TA1068
Now if it was something like this:
http://img194.imageshack.us/img194/8167/fixedpoint.jpg [Broken]
I would say sigma_CD = 10(theta) and sigma_FH = 30(theta) and all would be good.
But since it's like this:
http://img199.imageshack.us/img199/9023/nonfixedpoint.jpg [Broken]
I have another variable in there with x. Now sigma_AB = (x)(theta), sigma_CD = (10+x)(theta), and sigma_FH = (30+x)(theta)
Hmm...
Last edited by a moderator: May 4, 2017 | 2018-05-23 13:19:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.583935022354126, "perplexity": 3020.410440814285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865651.2/warc/CC-MAIN-20180523121803-20180523141803-00035.warc.gz"} |
http://gmatclub.com/forum/if-p-s-and-t-are-positive-prime-numbers-what-is-the-value-78410.html?fl=similar | Find all School-related info fast with the new School-Specific MBA Forum
It is currently 03 May 2015, 22:44
### GMAT Club Daily Prep
#### Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
# Events & Promotions
###### Events & Promotions in June
Open Detailed Calendar
# if p, s, and t are positive prime numbers, what is the value
Author Message
TAGS:
Director
Joined: 12 Oct 2008
Posts: 557
Followers: 2
Kudos [?]: 125 [0], given: 2
if p, s, and t are positive prime numbers, what is the value [#permalink] 10 May 2009, 21:09
00:00
Difficulty:
(N/A)
Question Stats:
100% (01:32) correct 0% (00:00) wrong based on 2 sessions
if p, s, and t are positive prime numbers, what is the value of p^3s^3t^3?
(1) p^3st=728
(2) t=13
Director
Joined: 27 Jun 2008
Posts: 547
WE 1: Investment Banking - 6yrs
Followers: 1
Kudos [?]: 48 [0], given: 92
Re: DS: Prime numbers [#permalink] 11 May 2009, 02:56
if p, s, and t are positive prime numbers, what is the value of p^3s^3t^3?
(1) p^3st=728
(2) t=13
(1) 728 = 2*2*2*7*13, so we know p=2, s=7or13, t=13or7. The values for s & t doesn't really matter because we need to find the value of (p*s*t)^3. So, its (2*7*13)^3
Suff
(2) nothing about s * t
Insuff
A
Manager
Joined: 16 Apr 2009
Posts: 246
Schools: Ross
Followers: 2
Kudos [?]: 38 [0], given: 10
Re: DS: Prime numbers [#permalink] 11 May 2009, 09:57
if p, s, and t are positive prime numbers, what is the value of p^3s^3t^3?
(1) p^3st=728
(2) t=13
Stat(1)
p^3st=728
728=2*2*2*7*13
Therefore suff
Stat(2)
t=13
but we don't know either s or p value so insuff
Hence A
_________________
Keep trying no matter how hard it seems, it will get easier.
Manager
Joined: 13 May 2009
Posts: 195
Followers: 5
Kudos [?]: 29 [1] , given: 1
Re: DS: Prime numbers [#permalink] 14 May 2009, 13:11
1
KUDOS
If p, s, and t are positive prime numbers, what is the value of p^3s^3t^3?
(1) p^3st=728
(2) t=13
Question:($$p^3s^3t^3$$)?
Question:( $$(pst)^3$$ )?
(1) $$p^3*s*t=728$$
Factorize 728
$$728 = 2(364)=2(2)(182)=2(2)(2)(91)=2^3(7)(13)$$
So, $$p^3st=2^3(7)(13)$$
This gives us that p=2, and s=7,t=13 or s=13,t=7.
But in either case it doesn't matter, because the order in which we multiply the primes has no effect on the answer. For example, $$(2)(7)(13) = (2)(13)(7))$$, yes? So $$((2)(7)(13))^3 = ((2)(13)(7))^3$$ as well.
Sufficient.
(2) By itself this is insufficient as it tells us nothing about $$p$$ or $$s$$.
_________________
Re: DS: Prime numbers [#permalink] 14 May 2009, 13:11
Similar topics Replies Last post
Similar
Topics:
2 If p is a prime number, what is the value of p ? (1) \sqrt 16 03 Oct 2010, 09:30
3 If p, s, and t are positive prime numbers, what is the value 7 22 Jan 2010, 23:18
if p, s, and t are positive prime numbers, what is the value 3 10 Aug 2008, 01:25
If p,s, and t are positive prime numbers, what is the value 5 04 Jun 2008, 12:19
I. If p is a prime number greater than 2, what is the value 9 10 Jun 2007, 17:44
Display posts from previous: Sort by | 2015-05-04 06:44:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6034896969795227, "perplexity": 3593.4163280775124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1430453791682.22/warc/CC-MAIN-20150501041631-00086-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/water-pressure-in-u-tube-arms.933390/ | # I Water pressure in U-Tube arms
Tags:
1. Dec 4, 2017
### Nikhil Rajagopalan
A liquid in a U-tube maintains the same level in both the arms. When another liquid which does not mix with the first one is poured in it, the air interface with either of the liquids in both the arms will be at different levels.
Considering a point at the top surface of liquid A, the pressure there should be atmospheric pressure. Another point in liquid B taken at the same level will have a pressure that amounts to atmospheric pressure plus the pressure due to the column of liquid B over it. These cannot be same. So how is the pressure at the same depth not being equal when we use that principle to solve many problems.
2. Dec 4, 2017
### jbriggs444
You have correctly reasoned that the pressures 20 cm above P and Q in the two tubes will not be the same.
Often we deal with tanks in which there is only one fluid and that fluid's density is constant. In these tanks we often take it for granted that there is a path through the fluid from any point to any other point. And finally, these situations are often static -- the fluid has been allowed to settle down into an equilibrium with no waves, no flow and no sloshing from side to side. In such circumstances, the pressure within the fluid is a simple function of depth: $p=\rho g h$ where $\rho$ is the fluid density, g is the local acceleration of gravity and h is the depth below the reference height. ["Gauge" pressure is taken to be zero at the reference height].
Clearly, in such circumstances, this means that the pressure at two points at the same depth will be identical. Break any one of the three conditions (same fluid density, connected path, motionless fluid) and the guarantee no longer holds. [Technically, a fourth condition is also required -- gravity must be uniform across the area of interest]
But let us discard the $p = \rho g h$ and try to derive the equal pressure guarantee from first principles. Suppose that you are at point A somewhere in the fluid. The pressure there is P. Now you trace a path through the fluid to point B.
[Note that here we used the "connected" condition]
As you trace the path, every time the path goes downward by an increment of $d h$, you will add $\rho g\ dh$ to the pressure. If this pressure increase were not present, the tiny volume of fluid at this point on the path would be subject to an unbalanced net force. It would accelerate. But we have assumed that the fluid is in equilibrium. Similarly, ever time the path goes upward by an increment of $dh$ you will subtract $\rho g\ dh$ from the pressure.
The concept of tracing the path and adding up little contributions from incremental path segments is known in vector calculus: a "path integral".
[Note that here we used the "motionless at equilibrium" condition]
If the start and end points are at the same height then when you finish tracing the path, the sum of $d h$ values must be zero. If the fluid has constant density and if gravity is constant then this means that the sum of the $\rho g\ dh$ must also be zero.
[Note that here we used the "fluid of constant density" condition]
To summarize: There are conditions that must hold before the "equal pressure at equal depths" principle can be known to be true.
3. Dec 4, 2017
### stockzahn
You assume that in the two columns the pressure gradient must be identical at each height. This statement only applies, if the two liquids have the same density. Start from the level P-Q. Since in both arms, up to this level, the liquid has the same density, your approach is correct - the pressure is identical. Liquid A now has a higher density, therefore the pressure to up to its free surface must decrease faster than in the liquid B ($\rho g \Delta h = \Delta p$). The pressure at the free surface must have the atmosphere's value.
If you demand the same pressure profile in the two arms, you deny the existence of liquids with different densities. | 2018-08-22 06:00:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7810372114181519, "perplexity": 391.99318716616835}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219495.97/warc/CC-MAIN-20180822045838-20180822065838-00386.warc.gz"} |
https://zbmath.org/?q=an%3A0794.17008 | # zbMATH — the first resource for mathematics
The crystal base and Littelmann’s refined Demazure character formula. (English) Zbl 0794.17008
Demazure’s character formula describes the weight multiplicities of the $$U({\mathfrak n}^ +)$$-module generated by an extremal vector of the irreducible highest weight $$U({\mathfrak g})$$-module, where $${\mathfrak g}$$ is a symmetrizable Kac-Moody Lie algebra. In his paper [Crystal graphs and Young tableaux (preprint)], P. Littelmann gives a conjecture of a generalization of the Demazure character formula which is described by crystal bases. In this paper the author proves this conjecture for any symmetrizable case.
[1] H. H. Andersen, Schubert varieties and Demazure’s character formula , Invent. Math. 79 (1985), no. 3, 611-618. · Zbl 0591.14036 [2] M. Demazure, Désingularisation des variétés de Schubert généralisées , Ann. Sci. École Norm. Sup. (4) 7 (1974), 53-88. · Zbl 0312.14009 [3] A. Joseph, On the Demazure character formula , Ann. Sci. École Norm. Sup. (4) 18 (1985), no. 3, 389-419. · Zbl 0589.22014 [4] M. Kashiwara, Crystalizing the $$q$$-analogue of universal enveloping algebras , Comm. Math. Phys. 133 (1990), no. 2, 249-260. · Zbl 0724.17009 [5] M. Kashiwara, On crystal bases of the $$q$$-analogue of universal enveloping algebras , Duke Math. J. 63 (1991), no. 2, 465-516. · Zbl 0739.17005 [6] M. Kashiwara, Global crystal bases of quantum groups , RIMS preprint 756, 1991. · Zbl 0774.17018 [7] P. Littleman, Crystal groups and Young tableaux , preprint, 1991. [8] O. Mathieu, Formules de caractères pour les algèbres de Kac-Moody générales , Astérisque (1988), no. 159-160, 267. · Zbl 0683.17010 [9] Shrawan Kumar, Demazure character formula in arbitrary Kac-Moody setting , Invent. Math. 89 (1987), no. 2, 395-423. · Zbl 0635.14023 [10] S. Ramanan and A. Ramanathan, Projective normality of flag varieties and Schubert varieties , Invent. Math. 79 (1985), no. 2, 217-224. · Zbl 0553.14023 | 2022-01-20 14:32:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8949093222618103, "perplexity": 737.1849661445422}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301863.7/warc/CC-MAIN-20220120130236-20220120160236-00154.warc.gz"} |
http://bmet.wikia.com/wiki/Life_expectancy | ## FANDOM
1,702 Pages
Life expectancy (LE) is the expected (in the statistical sense) number of years of life remaining at a given medical devices age. It is denoted by the average number of subsequent years of life for a device now aged x, according to experience.
## Calculate LE
Lets take for example that a piece of medical equipment is purchased for 1,000,000 and depreciates at a rate of \$150,000 per year, what is the estimated life expectancy (in years).
An example...
$1,500,000 / 150,000 = 10 (years)$ | 2017-08-23 08:16:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 1, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33938518166542053, "perplexity": 1839.746510162751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886117911.49/warc/CC-MAIN-20170823074634-20170823094634-00654.warc.gz"} |
https://en.wikipedia.org/wiki/Patterson_function | # Patterson function
The Patterson function is used to solve the phase problem in X-ray crystallography. It was introduced in 1935 by Arthur Lindo Patterson while he was a visiting researcher in the laboratory of Bertram Eugene Warren at MIT.[1]
The Patterson function is defined as
${\displaystyle P(u,v,w)=\sum \limits _{hkl}\left|F_{hkl}\right|^{2}\;e^{-2\pi i(hu+kv+lw)}.}$
It is essentially the Fourier transform of the intensities rather than the structure factors. The Patterson function is also equivalent to the electron density convolved with its inverse:
${\displaystyle P\left({\vec {u}}\right)=\rho \left({\vec {r}}\right)*\rho \left(-{\vec {r}}\right).}$
Furthermore, a Patterson map of N points will have N(N − 1) peaks, excluding the central (origin) peak and any overlap.
The peaks positions in the Patterson function are the interatomic distance vectors and the peak heights are proportional to the product of the number of electrons in the atoms concerned.
Because for each vector between atoms i and j there is an oppositely oriented vector of the same length (between atoms j and i), the Patterson function always has centrosymmetry.
## One-dimensional example
Consider the series of delta functions given by
${\displaystyle f(x)=\delta (x)+3\delta (x-2)+\delta (x-5)+3\delta (x-8)+5\delta (x-10);\,}$
then the Patterson function is
{\displaystyle {\begin{aligned}P(u)={}&5\delta (u+10)+18\delta (u+8)+9\delta (u+6)+6\delta (u+5)+6\delta (u+3)+18\delta (u+2)+45\delta (u)+\\&{}+18\delta (u-2)+6\delta (u-3)+6\delta (u-5)+9\delta (u-6)+18\delta (u-8)+5\delta (u-10).\end{aligned}}}
## References
1. ^ Patterson, A. L. (1935). "A direct method for the determination of the components of interatomic distances in crystals". Zeitschrift für Kristallographie. 90: 517. doi:10.1524/zkri.1935.90.1.517. | 2018-11-15 13:15:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8477769494056702, "perplexity": 2065.3084240623994}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742685.33/warc/CC-MAIN-20181115120507-20181115142507-00235.warc.gz"} |
https://www.gradesaver.com/textbooks/math/calculus/calculus-with-applications-10th-edition/chapter-1-linear-functions-1-1-slopes-and-equations-of-lines-1-1-exercises-page-14/39 | ## Calculus with Applications (10th Edition)
A line with a positive rises from left to right while a line with a negative slope falls from left to right. Notice that in the given line, from the point $(-2, 0)$ to the point $(0, 2)$, the change in $y$ is $2$ and the change in $x$ is also $2$. This means that the slope of the line is $\frac{2}{2}=1$ Thus, the answer is Option (a). | 2018-09-22 04:01:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8997989296913147, "perplexity": 73.99074071717544}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158011.18/warc/CC-MAIN-20180922024918-20180922045318-00171.warc.gz"} |
https://www.techwhiff.com/learn/consider-the-following-sample-data-for-the/200494 | # Consider the following sample data for the relationship between advertising budget and sales for Product A:...
###### Question:
Consider the following sample data for the relationship between advertising budget and sales for Product A:
Observation Advertising ($) Sales ($) 1 2 3 4 5 6 7 8 9 10 70,000 80,000 80,000 90,000 100,000 100,000 110,000 110,000 120,000 130,000 432,000 478,000 484,000 552,000 605,000 594,000 688,000 674,000 713,000 784,000
What is the slope of the "least-squares" best-fit regression line?
Note that the correct answer will be evaluated based on the full-precision result you would obtain using Excel.
#### Similar Solved Questions
##### Q.1- True or False? A growing threat to outdoor media is the ban being placed on...
Q.1- True or False? A growing threat to outdoor media is the ban being placed on them by some communities. True False Q.2- Compared to television stations, radio: A. has more production costs for ads. B. takes longer to have a commercial completed and placed. C. is far more costly to ...
##### As part of a survey, 18 college graduates and 19 non-graduates were asked, "How many hours...
As part of a survey, 18 college graduates and 19 non-graduates were asked, "How many hours did you spend at your job last week?" The results are shown in the stem-and-leaf display below. Answer the questions that follow. Number of hours at work Graduates Non-graduates 9 2 2 7 6 6 5 4 4 2 0 3...
##### Journalizing and Posting Transactions Instructions Chart Of Accounts General Journal T-Accounts Instructions Findlay Testing Inc. provides...
Journalizing and Posting Transactions Instructions Chart Of Accounts General Journal T-Accounts Instructions Findlay Testing Inc. provides water testing and maintenance services for owners of hot tubs and swimming pools. During September the following transactions occurred: September Transactions: S...
##### How we can integrate this ? ∫1/(x²-1)²dx
How we can integrate this ? ∫1/(x²-1)²dx...
##### Java Programming Part 1 File encryption is the science of writing the contents of a file...
Java Programming Part 1 File encryption is the science of writing the contents of a file in a secret code. Your encryption program should work like a filter, reading the contents of one file, modifying the data into a code, and then writing the coded contents out to a second file. The second file wi... | 2023-03-29 16:35:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1812126189470291, "perplexity": 2722.5887921653416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949009.11/warc/CC-MAIN-20230329151629-20230329181629-00703.warc.gz"} |
https://cefrexambot.com/part-2-listening-trinity-vs-cambridge/ | CEFR Language Exam Resource Centre
PART 2: LISTENING Trinity vs Cambridge
To determine the difficulty of the listening exams, I have measured the following aspects:
• Sentence complexity – Like with the reading exams, “Readability” score based on the word complexity (syllables and characters) and sentence complexity (average words per sentence) of the listening scripts.
• Vocabulary – Like with the reading exams, the percentage of basic vocabulary (CEFR: A1-A2)
• Speed – Words per minute
B1: ISE I vs PET
Trinity ISE I is easier than Cambridge PET.
B2: ISE II vs FCE
Cambridge FCE is easier than Trinity ISE II.
C1: ISE III vs CAE
Cambridge CAE is easier than Trinity ISE III.
B. CEFR Vocabulary Profile
English Profile is an online tool which can map the CEFR classifications of the words in a text (in this case, the listening script). If a script has a larger proportion of A1-A2 (low level) vocabulary, it can be fairly deemed to be easier to listen to.
As reference, if a 10’000 word sample from Ulysses, by James Joyce, is analysed, only 28% of the words are A1-A2.
B1: ISE I vs PET
The % of basic vocabulary is almost exactly the same.
B2: ISE II vs FCE
The Cambridge FCE contains over 10 % more basic vocabulary than ISE II and can there be considered easier in this respect.
C1: ISE III vs CAE
The % of basic vocabulary is almost exactly the same.
C. Words per minute
The average number of words per minute is a measure of how fast the speakers are speaking.
B1: ISE I vs PET
Trinity ISE I is considerably slower than Cambridge PET. In fact, the Cambridge PET is faster than either ISE II or ISE III!
B2: ISE II vs FCE
Trinity ISE II is slower than Cambridge FCE.
C1: ISE III vs CAE
Trinity ISE III is considerably slower than the Cambridge PET
Final Listening Exam Verdict
B1: ISE I vs PET
Trinity ISE I is easier than Cambridge PET.
Although the percentage basic of basic vocabulary is similar, the Cambridge PET complexity in terms of average word and sentence length is higher and the Cambridge PET is spoken 28 words per minute faster than Trinity ISE.
B2: ISE II vs FCE
Cambridge FCE is slightly easier than Trinity ISE II.
The readability formulas demonstrate that ISE II is more a bit more complicated while then CEFR vocabulary profiler states it has a lower percentage of basic vocabulary. This said, the Cambridge FCE is 10 words a minute faster than Trinity ISE II
C1: ISE III vs CAE
Trinity ISE III is easier than Cambridge CAE.
Although the readability scores suggest the scripts of Trinity ISE III are more complex, the sheer speed of Cambridge CAE makes it a lot more difficult than Trinity ISE. | 2020-08-11 01:34:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9121488332748413, "perplexity": 9849.466174649964}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738723.55/warc/CC-MAIN-20200810235513-20200811025513-00048.warc.gz"} |
https://stats.stackexchange.com/questions/508022/both-z-test-and-chi-squared-apply-in-testing-if-a-pill-makes-any-difference-in-a | # Both Z-test and chi-squared apply in testing if a pill makes any difference in avoiding the flu?
I want to test if a certain pill would make any difference in avoiding the flu.
I give the pill to group A and a placebo to group B. Then I count how many individuals got the flu in each group, and how many didn't.
I imagine two ways of testing if the pill made any difference in avoiding the flu:
Z test for proportion: I conduct a z test with the following hypoteshis: Null hypothesis: The proportion of people that got the flu in group A is equal to the proportion of people that got the flu in group B. Alternative Hypothesis: the proportions are different.
Chi-squared test: I create a contingency table with the sample and I conduct a chi-squared test for homogeneity.
What's the most adequate test in this case, and why?
• related to this question Feb 4 '21 at 14:46
There are various versions of both tests. If the error variance in the test of proportions is estimated from the two groups separately, and no continuity correction is used for either test, then the two tests may be equivalent.
Suppose there were 403 successes out of 500 subjects in the drug group and 366 out of 500 in the placebo group, then here are the two tests as computed in R. Notice that the methods of formatting the data and the methods of formatting the output differ between the two tests as implemented in R.
Test of binomial proportions. Data are numbers of successes and sample sizes. The continuity correction is declined (parameter cor=F).
prop.test(c(403,366), c(500,500), cor=F)
2-sample test for equality of proportions
without continuity correction
data: c(403, 366) out of c(500, 500)
X-squared = 7.7066, df = 1, p-value = 0.005502
alternative hypothesis: two.sided
95 percent confidence interval:
0.0219564 0.1260436
sample estimates:
prop 1 prop 2
0.806 0.732
Chi-squared test. Data are provided in a contingency table. The Yates correction is declined.
TBL
[,1] [,2]
[1,] 403 366
[2,] 97 134
chisq.test(TBL, cor=F)
Pearson's Chi-squared test
data: TBL
X-squared = 7.7066, df = 1, p-value = 0.005502
Notes: (1) Proportions test: If a pooled error variance is used in the test of binomial proportions, then the confidence intervals do not exactly match the significance test. That is, 95% confidence intervals excluding $$0$$ do not exactly match rejection at the 5% level. Also, the X-squared statistic shown in the R output for this test is the square of the z-statistic, shown by some other statistical software. Relevant Minitab output below:
Test and CI for Two Proportions
Sample X N Sample p
1 403 500 0.806000
2 366 500 0.732000
Difference = p (1) - p (2)
Estimate for difference: 0.074
95% CI for difference: (0.0219564, 0.126044)
Test for difference = 0 (vs ≠ 0): Z = 2.79 P-Value = 0.005
Fisher’s exact test: P-Value = 0.007
(2) Chi-squared test: If counts are sufficiently small for the (somewhat overly conservative) Yates correction to make a difference whether to reject, then perhaps it is best to use a Fisher exact test instead. The P-value of Fisher's test for my fake data is shown below.
fisher.test(TBL)\$p.val
[1] 0.006834361 | 2022-01-29 11:29:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6549873948097229, "perplexity": 567.4314481836865}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304883.8/warc/CC-MAIN-20220129092458-20220129122458-00381.warc.gz"} |
https://www.autoitscript.com/forum/topic/54687-cmd-cmd0/ | \$cmd = \$cmd[0]
Recommended Posts
```\$auth = StringRegExp(\$srcv, ':(.*?)!(.*?)@(.*?) (.*?) (.*) :(.*)', 1)
For \$i = 0 To UBound(\$auth) -1
Select
Case \$i = 0
\$nick_as_target = StringStripWS(\$auth[0], 8)
Case \$i = 2
\$foundhost = StringStripWS(\$auth[\$i], 8)
Case \$i = 4
\$cmdtarget = StringStripWS(\$auth[\$i], 8)
If (StringLeft(\$cmdtarget,1) <> "#") Then
\$cmdtarget = \$nick_as_target
EndIf
Case \$i = 5
\$cmdstring = \$auth[\$i]
EndSelect
Next
If (\$foundhost = \$hostmatch) Then
\$cmdl = StringRegExp(\$cmdstring, \$triggerchar & '(.*?) (.*)', 1)
\$cmdsplit = StringSplit(\$cmdl, " ")
\$countarray = UBound(\$cmdsplit)
For \$i = 0 To \$countarray - 1
Select
Case \$i = 1
\$cmdparam_1 = StringStripWS(\$cmdsplit[\$i], 8)
Case \$i = 2
\$cmdparam_2 = StringStripWS(\$cmdsplit[\$i], 8)
Case \$i = 3
\$cmdparam_3 = StringStripWS(\$cmdsplit[\$i], 8)
EndSelect
Next
If (\$cmdl <> 0) Then
\$cmd = \$cmdl[0]
_commands()
EndIf
EndIf```
i think there is a misstake somewhere in
```If (\$cmdl <> 0) Then
\$cmd = \$cmdl[0]
_commands()
EndIf```
because the script works without any problems when i run it as *.au3, by compiling it and running the .exe there is an error
at \$cmd = \$cmdl[0]
```\$cmd = \$cmdl[0]
\$cmd = \$cmdl^ ERROR
Error: Subscript used with non-Array variable.```
i think there is something wrong with the if (\$cmdl <> 0) then
any reason why it doesnt work as .exe but as *.au3? how can i fix this?
Share on other sites
where did you define \$cmdl[0] ????
`\$cmd = \$cmdl[0]`
this also defines \$cmdsplit[0] as a "total number" of how many splits were made
8)
Edited by Valuater
Share on other sites
where did you define \$cmdl[0] ????
\$cmdl = StringRegExp(\$cmdstring, \$triggerchar & '(.*?) (.*)', 1)
Create an account
Register a new account | 2018-12-13 17:29:23 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8113844394683838, "perplexity": 10696.559491971004}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825029.40/warc/CC-MAIN-20181213171808-20181213193308-00497.warc.gz"} |
https://web2.0calc.com/questions/finding-the-exact-perimeter-of-a-square-inscribed-in-a-circle | +0
# Finding the exact perimeter of a square inscribed in a circle?
0
87
1
Here's the problem:
I already solved the answer for 15.a) which is $$2{\sqrt{38}}$$ m, what I need help is with part b.
Here's my work:
but the answer is $${8\sqrt{19}}$$ m.
What did I do wrong?
Guest Jul 18, 2018
#1
+1
You haven't done anything wrong! You have just calculated ONE SIDE of the square!!.
So, all you have to do is multiply: 4 x 2sqrt(19) =8 sqrt(19).
Guest Jul 19, 2018 | 2018-10-21 20:01:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8703525066375732, "perplexity": 1263.026278313259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514314.87/warc/CC-MAIN-20181021181851-20181021203351-00295.warc.gz"} |
https://projecteuclid.org/euclid.aoms/1177729438 | ## The Annals of Mathematical Statistics
### A Generalization of the Neyman-Pearson Fundamental Lemma
#### Abstract
Given $m + n$ real integrable functions $f_1, \cdots, f_m, g_1, \cdots, g_n$ of a point $x$ in a Euclidean space $X$, a real function $\phi(z_1, \cdots, z_n)$ of $n$ real variables, and $m$ constants $c_1, \cdots, c_m$, the problem considered is the existence of a set $S^0$ in $X$ maximizing $\phi\big(\int_s g_1 dx, \cdots, \int_s g_n dx\big)$ subject to the $m$ side conditions $\int_s f_i dx = c_i$, and the derivation of necessary conditions and of sufficient conditions on $S^0$. In some applications the point with coordinates $\big(\int_s g_1 dx, \cdots, \int_s g_n dx\big)$ may also be required to lie in a given set. The results obtained are illustrated with an example of statistical interest. There is some discussion of the computational problem of finding the maximizing $S^0$.
#### Article information
Source
Ann. Math. Statist., Volume 23, Number 2 (1952), 213-225.
Dates
First available in Project Euclid: 28 April 2007
https://projecteuclid.org/euclid.aoms/1177729438
Digital Object Identifier
doi:10.1214/aoms/1177729438
Mathematical Reviews number (MathSciNet)
MR47993
Zentralblatt MATH identifier
0046.36701
JSTOR | 2019-11-22 07:47:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5184676051139832, "perplexity": 361.6113300829003}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671245.92/warc/CC-MAIN-20191122065327-20191122093327-00184.warc.gz"} |
https://physics.stackexchange.com/tags/antennas/hot?filter=all | # Tag Info
## Hot answers tagged antennas
33
You are right, every circuit possesses some unintended capacitance, which is called "stray" capacitance. Whether or not it affects the operation of the circuit depends on the frequencies that the circuit is intended to operate at. The amount of stray capacitance that a circuit has is typically tiny, but at high enough frequencies even a very tiny amount of ...
32
A result known as Birkhoff's theorem forbids spherical electromagnetic radiation. The statement of the theorem is that any spherically symmetric vacuum solution to Maxwell's equations must be static. It is rather simple to prove. In a spherically symmetric solution $\mathbf E$ and $\mathbf B$ must be radial. Make an Ansatz, \mathbf E = E_0 \exp(i(\mathbf k\...
17
Corresponding wavelength is 22.11 meters long, but we want also to emit our EM waves into the environment. This means if we get a nice half-wave dipole antenna we would need it about 11 meters in length, $\lambda/2$. Which is quite large for mobile device. Ok, lets reduce size by using quarter-wave antenna as in WiFi, based on ideas of quarter-wave ...
15
Some numbers come from a review paper by Cullers (2000), who discusses the SETI Phoenix project. There, it is claimed that the Arecibo dish is capable of detecting a narrow band, coherent signal of $f=10^{-27}$ W/m$^2$ given a 1000 second observation. Assuming that this is an isotropic signal, then the implied power at distance $d$ is $p=4\pi d^2 f$, which ...
14
This is a problem of impedance matching for even an infinitesimally small antenna, i.e., a Hertz dipole (or monopole above a ground plane) can perfectly transmit and be perfectly matched to a pure sine wave. What it cannot do is to be matched to a signal of finite bandwidth. The impedance of a short dipole of length $h$ and radius $a$ is approximately $Z_{... 13 You might want to have a look at Does light induce an electric current in a conductor?. It's probably impossible for a radio aerial to emit visible light as the frequency of light is around the plasma frequency of the metal that the aerial is made of. We're not really supposed to address hypothetical questions, but if you could find some material with a ... 13 My answer is completely different. Longwave antennas usually are quarter-wave antennas, also known as Marconi antennas from its inventor. As the name says, they have length about 1/4 the wavelength to be transmitted. Compare them to dipole antennas generally used for shorter wavelengths, both in transmission and in reception, which are one half-wave long. A ... 11 The idea behind the quarter wavelength antenna is that it is self-resonant: it is "tuned". You can however use an antenna of any size to pick off some electromagnetic energy - and you can tune the antenna by adding some inductance in series (or inductance and capacitance). The reason that you tune an antenna is simply this: you want it to have real impedance,... 11 The radiation pattern of any dipole antenna looks similar to what you are showing in the 2D plots - but in your interpretation of the 3D pattern you have the axes wrong. A dipole antenna with the main axis vertical will transmit power in the horizontal plane, with less and less power as you go further away (inverse square law). If you measure the power as a ... 10 http://www.antenna-theory.com/antennas/shortdipole.php is a website with useful info., including formulas. To oversimplify, it seems to say that once the antenna is a tenth or less of the wavelength, the exact ratios don't matter so much. The antenna is inefficient, but it works for both sending and receiving. If you can detect the signal, of course you can ... 10 Summary: The fact that the length of an antenna is of similar size to the wavelength of light is a coincidence due to the similarity of the speed of light in air and the speed of light in the antenna (which are usually copper wires). For other waves, this may not be the case. Different guitar strings, for example, resonate at different frequencies despite ... 9 Yes, this is correct. However, you should also keep in mind$-$particularly when you're describing any type of antenna$-$that any such residual capacitance may very well be competing with the inductance of the circuit, coming from nonzero interactions between the different currents in different parts of the circuit. For an actual antenna, where you're ... 7 This isn't hypothetical. There is nothing that a radio does that can't be done in other parts of the spectrum. Many FM/AM radios operate in the optical range too. Your TV remote control uses IR. Lasers are used for high bandwidth point to point communications. And don't forget fiber optics, these are all radios that just use optics for the communication ... 7 The key to the efficiency of an antenna (whether for transmitting or receiving - the two processes are essentially reciprocal) is resonance, and impedance matching with the source / receiver. The size also matters in terms of the relationship between power and current. 
A nice analysis of the impact of size of an antenna on the power/current relationship is ... 7 "If that is possible, how do you produce a spherical EM radiation?" A spherically symmetric transverse field is topologically impossible - if it is required to be coherent and linearly polarized everywhere. This is the case for usual dipole or higher multipole radiation, as has already been pointed out in another answer. On the other hand, an incoherent ... 7 When a metal antenna wire is put into the field of a propagating electromagnetic wave with time-varying fields, there will be an electric and magnetic field inside the wire and thus also a current but the penetration is exponentially damped. The penetration depth$\delta\$ is called the skin depth. In treating boundary conditions with metals for ...
7
... to transmit in the longwave region, we only need to make the circuit oscillate in the right frequency. That is the point. For lower frequencies the electrons - which are accelerated inside the antenna rod - need a longer distance inside the rod. If the disturbance of these electrons reached the end of the rod too quickly, the power of the antenna generator ...
7
A slight deviation from the actual question -- focus on antennas. Antennas can be capacitive or inductive in how they respond to RF energy, whether driven directly when transmitting or excited by electromagnetic waves when receiving. A purely resonant "ideal" antenna at a single frequency is neither capacitive nor inductive, as one definition of resonance ...
6
So here I am, answering my own question... Long story short -- I found the answer here, and on page 149 of 'Tools of Radio Astronomy' [Rohlfs/Wilson/Huttemeister 5/e 2009], and page 24 of 'Radio Astronomy' [Pawsey/Bracewell 1955] and I will now express that answer in my own words to save the reader a click! Interestingly, one part of this proof comes from ...
6
When you have a capacitor, current flows even though the "circuit" is not complete. This is because it's possible for electrons to bunch up - temporarily - in a conductor and generate a corresponding electric field. That is what happens in an antenna. An antenna is really a combination of an inductor (a straight wire) and a capacitor (when you put a net ...
6
Antenna performance is strongly affected by the presence of the ground nearby. The standard rule of thumb is to raise the antenna to a height above ground of about one half of the wavelength it will operate at, in order to minimize power loss in the ground and radiation directionality effects. At low frequencies- say, ~1 MHz- the wavelength is 300 meters ...
5
The requirements for transmitting antennae are much higher than for receiving antennas. Transmitting antennas must optimally radiate, so that the signal is not obscured by other stations with better antennas. If a receiver antenna is too short and far away from resonance, all received stations are uniformly weaker. What matters is that the desired signal is ...
5
1) Normally, the antenna isn't the only component that distinguishes between the various competing signals received. The antenna does have a bandwidth and will attenuate signals outside that band. A typical antenna on a cell phone mast for example, may receive in the range 1.8GHz - 2.4GHz (just an example, you would have to look up manufacturer data ...
5
(All images in this answer were made by me, for wikipedia! Links here) Let's start with the simplest question: Voltage relative to what? It's the voltage from one line to the other. Now let's look at an animation of a transmission line. This one is terminated with an impedance-matched resistor, so we don't have to think about reflections yet. The dots ...
5
With a resonant antenna, the reactance (capacitive and inductive) should be zero. Short antennas are usually capacitive so that capacitive reactance is offset using an inductor. Often for an AM radio a loop inductance is included. Also, some antennas are longer than they appear because the conductor is wrapped around the core of the antenna (sometimes you ...
5
This paper contains an important analysis of the different trade-off between bandwidth and energy efficiency. The interesting conclusion from that paper is that the most energy-efficient way to send and receive interstellar messages (over flat spacetime) that maximise the bit-rate requires making the bandwidth of transmission very large. In particular, this ...
5
I think that this is a very good question because it makes one think beyond a "standard" explanation. When you study electromagnetic induction you learn about the magnetic flux change through a closed loop, which produces an induced EMF. However, the loop does not have to be a conducting loop. If it was an ideal dynamo with no resistance, friction etc., ...
4
I didn't see the episode, but it may be referring to "Phreaking", by which the signals from a CRT monitor can be listened-in on (it uses high frequency changing currents to display the information, so these will inevitably result in some RF radiation from which this information can in principle be extracted). Wikipedia article has a bit more info.
4
The radio waves or microwaves that are used for communication don't contain just one photon. They contain a bunch. (Maybe someone will do the math for how many photons a standard radio broadcast antenna is producing each second; it'll blow your knee-high off even if you're wearing sandals over them.) Consider for example a frequency-modulated signal. The ...
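As a rough back-of-the-envelope version of that photon count (the 50 kW and 100 MHz figures below are assumed for illustration, not taken from the answer above):
h = 6.626e-34            # Planck constant, J*s
f = 100e6                # assumed carrier frequency, Hz (FM band)
P = 50e3                 # assumed radiated power, W
print(P / (h * f))       # ~7.5e29 photons emitted per second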
4
The electricity through the coil is probably coming directly from the network, so it oscillates at either 50 or 60 Hz. That would be the frequency your antenna radiates. This is very different from the frequencies your phone works at, around 1 GHz (a thousand million hertz, or twenty million times faster). So, essentially, your wave (weak, as Lemon pointed ...
Only top voted, non community-wiki answers of a minimum length are eligible | 2020-10-22 23:51:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6977129578590393, "perplexity": 682.1639985685576}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107880401.35/warc/CC-MAIN-20201022225046-20201023015046-00655.warc.gz"} |
https://www.physicsforums.com/threads/difficult-computational-statistics-problem.784482/ | # Difficult computational statistics problem
Tags:
1. Nov 27, 2014
### Bazzinga
I've got a tricky computational statistics problem and I was wondering if anyone could help me solve it.
Okay, so in your left pocket is a penny and in your right pocket is a dime. On a fair toss, the probability of showing a head is p for the penny and d for the dime. You randomly choose a coin to begin, toss it, and report the outcome (heads or tails) without revealing which coin was tossed. Then you decide whether to use the same coin for the next toss, or to switch to the other coin. You switch coins with probability s, and use the same coin with probability (1 - s). The outcome of the second toss is reported, again not revealing the coin used.
I have a sequence of heads and tails data based on these flips, so how would I go about estimating p, d, and s?
2. Nov 27, 2014
### Ray Vickson
What you are describing is a so-called Hidden Markov Model. Here, the underlying state (dime or penny) follows a Markov chain with transition probability matrix
$$\mathbb{P}= \pmatrix{1-s & s \\ s & 1-s}$$
However, the state is not observable---only the outcomes (H or T) of tossing the coins can be observed.
There are several useful tutorials available on-line: see, eg.,
http://di.ubi.pt/~jpaulo/competence/tutorials/hmm-tutorial-1.pdf or
http://www.cs.ubc.ca/~murphyk/Bayes/rabiner.pdf
This last source has a brief treatment of your problem, as an illustrative example.
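To make this concrete, here is a minimal numerical sketch (not taken from the tutorials above): it evaluates the hidden-Markov likelihood of an observed head/tail sequence with the forward algorithm and maximizes it over (p, d, s) with scipy. The toss sequence is made up, the starting coin is assumed to be chosen 50/50, and with so few tosses the estimates are rough; note also that p and d can swap labels.
import numpy as np
from scipy.optimize import minimize

obs = np.array([1, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0])  # 1 = heads

def neg_log_likelihood(params, obs):
    p, d, s = params
    emit = np.array([[1 - p, p],    # row 0: penny -> P(tails), P(heads)
                     [1 - d, d]])   # row 1: dime
    trans = np.array([[1 - s, s],
                      [s, 1 - s]])
    alpha = 0.5 * emit[:, obs[0]]   # starting coin chosen uniformly at random
    log_lik = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (trans.T @ alpha) * emit[:, o]
        c = alpha.sum()             # rescale each step to avoid underflow
        log_lik += np.log(c)
        alpha = alpha / c
    return -log_lik

res = minimize(neg_log_likelihood, x0=[0.6, 0.4, 0.3], args=(obs,),
               bounds=[(0.01, 0.99)] * 3, method="L-BFGS-B")
print(res.x)                        # rough estimates of (p, d, s)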
3. Nov 27, 2014
### Bazzinga
Great I'll take a look at those! Thanks! | 2018-01-17 13:59:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6796361804008484, "perplexity": 969.2769112317542}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084886939.10/warc/CC-MAIN-20180117122304-20180117142304-00789.warc.gz"} |
http://lists.gnu.org/archive/html/lilypond-user/2010-01/msg00371.html | lilypond-user
Hello, I'm sure this is in the docs somewhere; I've checked the index, sections 4.4 & 4.5, and the snippets list, but I can't figure out how to write some text (\markup?) above the clefs. I'd also like to be able to move the text around that area a little bit, for which I'm guessing that it's something like: `\override SOMETHING #'staff-position = #-8` Thanks for any help, Gerard | 2014-04-24 10:07:26 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8426759839057922, "perplexity": 550.1348760992536}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
https://www.cut-the-knot.org/do_you_know/GoldenRatioInWuXing.shtml | # Golden Ratio in Wu Xing
### Solution
Counting the angles, we can reduce the problem with the straight lines to the simplified diagram:
Here, $\angle BOC=18^{\circ},\,$ $\angle COF=2\cdot 60^{\circ}+12^{\circ}=132^{\circ},\,$ so that $\angle GOF=60^{\circ}.$
Assuming the unit circle, $\displaystyle GO=\sin 18^{\circ}=\frac{\sqrt{5}-1}{4},\,$ $FO=\frac{\displaystyle GO}{\displaystyle \cos 60^{\circ}}=\displaystyle \frac{\sqrt{5}-1}{2}.\,$ It follows that
$\displaystyle \frac{FO}{EF}=\frac{FO}{1-FO}=\frac{\sqrt{5}-1}{3-\sqrt{5}}=\frac{\sqrt{5}+1}{2},$
the Golden Ratio.
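A quick numerical check of the ratio above (a throwaway sketch, not part of the original solution):
import numpy as np
GO = np.sin(np.radians(18))      # (sqrt(5) - 1) / 4 on the unit circle
FO = GO / np.cos(np.radians(60))
print(FO / (1 - FO))             # 1.6180..., the Golden Ratio
print((np.sqrt(5) + 1) / 2)      # same value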
For the intersections with the center circle, the problem reduces to the following diagram.
The required ratio $\displaystyle\frac{DG}{GO}=\varphi$ is rather classical.
### Acknowledgment
The problem has been kindly posted on the CutTheKnotMath facebook page by Tran Quang Hung.
Disclaimer: Wu Xing is a five-element aspect of the Taoist thought, not necessarily represented by the above diagram. | 2017-09-22 01:10:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8290374875068665, "perplexity": 1492.874734730506}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688103.0/warc/CC-MAIN-20170922003402-20170922023402-00140.warc.gz"} |
https://www.physicsforums.com/threads/call-for-help-in-finding-approximate-inverse-matrix.726038/ | # Call for help in finding approximate inverse matrix
1. Dec 1, 2013
### genxium
I'm looking for solutions to this problem:
Matrices A(m,n) and B(n,m) satisfy AB=I(m,m) where n isn't equal to m.
Can I find a matrix S(n,m) such that SA=I(n,n) or SA approximates I(n,n)?
By "approximate" I don't have a preferred definition, hence any suggestion is welcome!
2. Dec 1, 2013
### Office_Shredder
Staff Emeritus
From the first line I'm assuming that m<n. In this case no, as the product of matrices has rank no larger than the rank of any individual matrix, so SA can have rank no larger than m. In particular I(n,n) has rank n, which is too large.
3. Dec 1, 2013
### genxium
Yes, that's why I say approximate; I'm interested in looking for an approximate & numerical solution to this problem.
4. Dec 1, 2013
### Office_Shredder
Staff Emeritus
You can find a matrix S such that SA is diagonal with m 1's and the rest 0 on the diagonal; do you consider that approximate?
It would help if you described why you want to find such a matrix S.
5. Dec 1, 2013
### genxium
Uhm... I'm afraid not, because I would like to use this result to perform image sequence compression & recovery, thus in practice m will be small (I'm talking about Principal Components actually). Can I get all positive elements in the diagonal for the product matrix?
6. Dec 1, 2013
### Office_Shredder
Staff Emeritus
You can basically pick S to get any nxn rank m matrix. So you could have a matrix that has all 1's everywhere for example, but I'm guessing that's not what you want either.
You could do something like find S such that
$$||SA-I||^2_{F}$$
the sum of squares of the differences of the entries is minimal. This is a least squares problem so it is easily solved numerically and probably algebraically as well. I would assume that S is the pseudo-inverse of A in this case because that's always the answer.
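A small numerical sketch of that suggestion (illustrative only; the dimensions are made up): numpy's pseudo-inverse gives the least-squares minimizer, and the product SA is the rank-m projector onto the row space of A, so it can never reach I(n,n).
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 7                                   # m < n, as in the question
A = rng.normal(size=(m, n))

S = np.linalg.pinv(A)                         # n x m pseudo-inverse of A
print(np.allclose(A @ S, np.eye(m)))          # True: S also plays the role of B with A B = I(m)
P = S @ A                                     # n x n but only rank m
print(np.linalg.matrix_rank(P))               # m
print(np.linalg.norm(P - np.eye(n), "fro"))   # smallest achievable Frobenius error, sqrt(n - m) here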
7. Dec 2, 2013
### genxium
I do think this approach is on the right track, yet I am just wondering "how approximate" I can get using least squares. Has anyone done research on this problem, and is there a state-of-the-art conclusion?
8. Dec 2, 2013
### D H
Staff Emeritus
Given that m<n, you *cannot* find a matrix S such that SA = In×n. Such a matrix does not exist.
9. Dec 2, 2013
### Office_Shredder
Staff Emeritus
Thinking about it a bit more I realize that you are likely trying to do compressed sensing:
http://en.wikipedia.org/wiki/Compressed_sensing
where what you really want to do is given y = Ax solve for what x is. There are known algorithms for calculating x assuming it has certain structures (such as sparsity) that images typically satisfy or come close to satisfying in the right basis.
The typical setup is a bit different from principal component analysis - once you've done principal component analysis the whole point is that the extra information is noise and you are totally throwing it away - there is no way to recover the information and there is no reason you should want to recover the information. Compressed sensing works a bit differently - you take a (typically) random projection of the data to get a lower dimension, and then the inversion problem is finding some principal components under a certain basis (which is usually not the standard basis as that would give weird looking stuff) that would project down to give the same projection as your original one.
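For a flavour of the inversion step, here is a small sketch (my own illustration, not from the thread) of greedy sparse recovery (orthogonal matching pursuit) recovering a k-sparse vector from m < n random measurements:
import numpy as np

def omp(A, y, k):
    # orthogonal matching pursuit: greedily build the support of a k-sparse x with y = A x
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))      # column most correlated with the residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                            # fewer measurements than unknowns
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)
y = A @ x_true
print(np.linalg.norm(omp(A, y, k) - x_true))    # near zero: the sparse vector is recovered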
10. Dec 2, 2013
### genxium
Yes you're right, actually I'm looking for some numerically approximate methods to handle this problem.
11. Dec 2, 2013
### genxium
Thank you very much Office_Shredder! You saved my day! The link you gave is tackling exact the problem I met. I'm reading it and seems it could provide useful information for my project :) | 2018-03-18 08:05:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6791155338287354, "perplexity": 890.98785919229}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645550.13/warc/CC-MAIN-20180318071715-20180318091715-00449.warc.gz"} |
https://neurips.cc/Conferences/2019/ScheduleMultitrack?event=15158 | Timezone: »
Adaptive Trust Region Policy Optimization: Convergence and Faster Rates of regularized MDPs
Lior Shani · Yonathan Efroni · Shie Mannor
Sat Dec 14 08:50 AM -- 09:10 AM (PST) @ None
Trust region policy optimization (TRPO) is a popular and empirically successful policy search algorithm in Reinforcement Learning (RL) in which a surrogate problem, that restricts consecutive policies to be close' to one another, is iteratively solved. Nevertheless, TRPO has been considered a heuristic algorithm inspired by Conservative Policy Iteration (CPI). We show that the adaptive scaling mechanism used in TRPO is in fact the natural RL version" of traditional trust-region methods from convex analysis. We first analyze TRPO in the planning setting, in which we have access to the model and the entire state space. Then, we consider sample-based TRPO and establish $\tilde O(1/\sqrt{N})$ convergence rate to the global optimum. Importantly, the adaptive scaling mechanism allows us to analyze TRPO in regularized MDPs for which we prove fast rates of $\tilde O(1/N)$, much like results in convex optimization. This is the first result in RL of better rates when regularizing the instantaneous cost or reward. | 2022-05-19 09:45:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17368553578853607, "perplexity": 1741.1775133505232}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662526009.35/warc/CC-MAIN-20220519074217-20220519104217-00703.warc.gz"} |
https://math.stackexchange.com/questions/1404534/why-does-positive-definite-matrix-have-strictly-positive-eigenvalue/1404539 | # Why does positive definite matrix have strictly positive eigenvalue?
We say $A$ is a positive definite matrix if and only if $x^T A x > 0$ for all nonzero vectors $x$. Then why does every positive definite matrix have strictly positive eigenvalues?
• Write down the definition! what does it mean for a matrix to be strictly positive definite? (assuming having positive eigenvalues is not the definition though!) – Ehsan M. Kermani Aug 21 '15 at 3:09
• What exactly is your definition of a positive definite matrix? – davidlowryduda Aug 21 '15 at 3:10
• @mixedmath A is a positive definite matrix, if and only if X'AX is greater than 0 for all the non zero entry of X..... – MathMA Aug 21 '15 at 3:12
Suppose our matrix $A$ has eigenvalue $\lambda$.
If $\lambda = 0$, then there is some eigenvector $x$ so that $Ax = 0$. But then $x^T A x = 0$, and so $A$ is not positive definite.
If $\lambda < 0$, then there is some eigenvector $x$ so that $Ax = \lambda x$. But then $x^T A x = \lambda \lvert x \rvert^2$, which is negative since $\lvert x \rvert^2 > 0$ and $\lambda < 0$. Thus $A$ is not positive definite.
And so if $A$ is positive definite, it only has positive eigenvalues.
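A quick numerical illustration of the statement (a sketch, not part of the original answer):
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4))
A = B.T @ B + np.eye(4)          # symmetric positive definite by construction

print(np.linalg.eigvalsh(A))     # all eigenvalues strictly positive
x = rng.normal(size=4)
print(x @ A @ x > 0)             # the quadratic form is positive for this nonzero x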
• Great explanation, can't be more descriptive and clear than this. – MathMA Aug 21 '15 at 3:29
• Why must an eigenvalue a real number? – user1551 Aug 21 '15 at 10:21
• It doesn't need to be, but complex eigenvalues fit into the second case too. – davidlowryduda Aug 21 '15 at 14:37
• The OP's definition of positive definiteness concerns only about $x^{\color{red}{T}}Ax$ for real vector $x$. For complex eigenvector we don't have $x^Tx=|x|^2$. While showing that $x^\ast Ax>0$ for all complex vectors $x$ is just a one-liner, in the context of the OP, I think this is the least obvious part. – user1551 Aug 22 '15 at 10:12
Hint: If $\lambda$ is an eigenvalue of $A$, let $x$ be the associated eigenvector, and consider $x'Ax$. | 2020-04-05 07:23:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9510361552238464, "perplexity": 249.85825861095043}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370529375.49/warc/CC-MAIN-20200405053120-20200405083120-00287.warc.gz"} |
https://indico.desy.de/event/28202/contributions/105443/ | # EPS-HEP2021 conference
26-30 July 2021
Zoom
Europe/Berlin timezone
## Underlying Event studies and search for jet modifications in pp and p-Pb collisions with ALICE at the LHC
Not scheduled
20m
Zoom
### Speaker
Ahsan Mehmood Khan
### Description
It is well-established that high-multiplicity pp and p–Pb collisions exhibit various signatures associated with the formation of QGP in heavy-ion collisions. In this contribution, we present results obtained using Underlying Event (UE) techniques, used to measure the average number density and the average total transverse momentum ($p_{\rm T}$) in the Toward, Transverse, and Away regions with respect to the leading trigger particle, but employed in novel ways. A conventional UE analysis is applied in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV to test the similarities between pp and p-Pb collisions. The charged particle multiplicity in the Transverse UE-dominated region, $N_{\rm T}$, is used as a multiplicity estimator to establish relations between particle production in pp, p-Pb and Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV. The results are compared with predictions from QCD-inspired Monte Carlo event generators. Finally, the UE studies are used to search for jet modification by subtracting the UE contributions measured in the Transverse region from the Toward and the Away regions. These studies in terms of $N_{\rm T}$ are powerful tools to search for jet modification patterns from the smallest systems, events with multiplicities lower than the mean for minimum-bias pp collisions, to the largest systems, central heavy-ion collisions, in a coherent way.
Collaboration / Activity ALICE | 2021-10-22 01:19:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7580738663673401, "perplexity": 2936.1528086350945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585449.31/warc/CC-MAIN-20211021230549-20211022020549-00113.warc.gz"} |
https://brilliant.org/discussions/thread/pre-rmo-2014/ | # This note has been used to help create the RMO Math Contest Preparation wiki
Pre-RMO was conducted on 12th October 2014 in Mumbai, India
I have posted the questions in this set. There are 20 questions in total (the questions are numbered according to set A).
If you haven't appeared for the pre-RMO, you can take this set as a test. Try to complete the paper within the time limt, that is 2 hrs 30 min.
Update: Answers have been added to the final note in this set. Be sure to check them out !!!!!
If you have found a nice way to solve any question, do post the worked solution in the comments below the question. Hoping to see some creative solutions !!!!!
For those who wrote the exam, how did it go?
Note by Pranshu Gaba
6 years, 9 months ago
even i gave pre-rmo. how many questions did you get correct?
- 6 years, 9 months ago
I didn't get the answer key, so I am not sure, but I am expecting around 17. What about you?
- 6 years, 9 months ago
Greater than 12. Did a silly mistake in the paper. Can I qualify with this score?
- 6 years, 9 months ago
How can you write pre-RMO now..? I think you are in college (from your age - 18)
- 6 years, 9 months ago
I am currently studying in 11th grade, and my actual age is 16, so I can write pre-RMO.
- 6 years, 9 months ago
Didntkno' Dat ...Ok :D
- 6 years, 9 months ago
How many are you getting correct? the answer key has been published on the website.
- 6 years, 9 months ago
I am getting 18. What is your score?
- 6 years, 9 months ago
11 (ashamed)
- 6 years, 9 months ago
How many marks do you think that the cut off will be for qualifying for RMO?
- 6 years, 9 months ago
I am sorry but I do not have any idea regarding that. Anyways, the results will out by 31 October.
- 6 years, 9 months ago
Hello there, It's good to find a community full of enthusiasts in Mathematics. I'm providing an analysis of this year's and last year's Pre-RMO papers here. Feel free to discuss the same here.
2014 Paper
(Please note that there's a small mistake in solution to Q18, I'll correct it as soon as possible.)
- 6 years, 9 months ago
In the 16th question, the diagram could be drawn such that P, Q and R lie between AI, BI and CI respectively. So the diagram won't be that messy.
- 6 years, 9 months ago
That's right, but I had taken the points to be on the extended lines AI, BI and CI (without taking into consideration the easier case). Now that raises a question which I missed out in the discussion. There are 8 possibilities for the points P, Q, R, as it is not explicitly mentioned in the question whether the points P, Q, R lie on the segments AI, BI, CI or on their extensions. The answer will vary accordingly. However, it should be safely assumed that one should take the three points dividing the segments either internally or externally. In both cases, the answer is $55^\circ$.
- 6 years, 9 months ago
And I think that in question number 18, it should be that f (n) = n for the rest of the prime numbers.
- 6 years, 9 months ago
That's correct. I had the same thing in mind, however missed out in typing.
- 6 years, 9 months ago
Ok, the mistake in Q.18 is corrected.
- 6 years, 9 months ago
Isn't it supposed to be 6 questions? How did it become 20?
- 6 years, 9 months ago
The pre-rmo has 20 questions to be solved in 2 and a half hours. If you qualify in the pre-rmo, you are eligible to appear for the RMO which has around 6 questions to be solved in 3 hours.
- 6 years, 9 months ago
Dude, but I am directly writing RMO. There is no pre-RMO and all, I'm from Bangalore
- 6 years, 9 months ago
The pre Rmo is for Mumbai region only. They introduced it here so that they don't have to waste their time correcting the RMO papers of students who get really very low marks.
- 6 years, 9 months ago
@Shiv Kumar I think you wrote the Association of Mathematics Teachers of India's NMTC Contest, and if you got selected for the second round then, irrespective of qualifying the second round, you can pay $Rs.~20$ and get to write RMO. This is what happens in Chennai, as far as I know. Isn't it, @Krishna Ar? And I forgot to write it this year, totally; well, next year is the only remaining year for me to write, isn't it? IDK.. can we write it after school too? Now IDK the matter in Bangalore.. just a guess... and a query..
Arya :)
- 6 years, 9 months ago
In Bangalore u have to just pay 120 and u can write. I think it's only for our school, FIITJEE integrated!! I didn't write anything and in Dec I'll write RMO
- 6 years, 9 months ago
@Shiv Kumar In mumbai you first have to write pre-rmo which consists of 20 questions. If you qualify, you are eligible to write RMO.
- 6 years, 9 months ago
@Arya Samanta From next year onwards standard 12th students aren't eligible to write pre-rmo.
- 6 years, 9 months ago
I can't write PRE-RMO and RMO too? if that is so..it then ..it..damn it!
- 6 years, 9 months ago
Should u show the solution as well in the paper? Pls help me I'm new to this and wat is pre Rmo?
- 6 years, 9 months ago
Only the final answer must be shown in the paper. You can refer to the HBCSE website for more info.
- 6 years, 9 months ago
Can you tell me how you'd prepared for RMO/pre-RMo
- 6 years, 9 months ago
I solved the last year's pre-RMO paper, as well as lots of questions on Brilliant !
- 6 years, 9 months ago
Wat is pre rmo and rmo? Is there a difference?
- 6 years, 9 months ago
I got 19 :)
- 6 years, 9 months ago
which one didn't you get.
- 6 years, 9 months ago
I made a silly mistake in the question where we were asked to find the number of solutions to xy = x + y + gcd(x, y), where x <= y. God knows why I included (3,2) as a solution.
- 6 years, 9 months ago
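For the record, a quick brute-force check of that count (assuming positive integers x <= y and a generous search bound):
from math import gcd

solutions = [(x, y) for x in range(1, 200) for y in range(x, 200)
             if x * y == x + y + gcd(x, y)]
print(solutions)        # [(2, 3), (2, 4), (3, 3)]
print(len(solutions))   # 3 -- so (3, 2) indeed should not have been counted separately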
did you include (2,3)? by the way are you from PACE
- 6 years, 9 months ago
Ya I am from PACE nerul :)
- 6 years, 9 months ago
you were the topper right?
- 6 years, 9 months ago
Yup.
- 6 years, 9 months ago
congratulations.
- 6 years, 9 months ago
Ru from pace? Isn't pace in bangalore?
- 6 years, 9 months ago
The results are out!!!!! The cutoff is only 40/100 lol !
- 6 years, 9 months ago
hey! I'm in the 10th grade and I qualified for RMO.. Presently I have only the past year papers to solve.. and I am clueless about which books to refer to!.. Please help..
- 6 years, 9 months ago
i didn't expect it to be so low.
- 6 years, 9 months ago
Anyways :p
- 6 years, 9 months ago
you started preparing for minor test 2?
- 6 years, 9 months ago
Kind of.
- 6 years, 9 months ago
Thanks @Pranshu Gaba this set helped me a lot..
- 6 years ago
You're welcome! I'm glad it helped. Are you appearing for the math olympiad this year?
- 6 years ago
Yes
- 6 years ago
All the best!
- 6 years ago
Thanks and keep updating such good stuff's
- 6 years ago
Sure!
I am in 12th now, so I am not eligible to apply for RMO this year. I wouldn't be able to share any RMO questions. However, I look forward to developing and sharing cool new problems!
- 6 years ago
Sure
- 6 years ago
hii pranshu
- 5 years, 10 months ago
What do you think the cut-off marks will be this year for qualifying for RMO 2014?
- 6 years, 9 months ago
i think it should be around 8 questions as last year's paper was simpler than this year and last year the cutoff was around 9 questions.
- 6 years, 9 months ago
I really felt that the last year's paper was slightly more difficult. Anyways, the cutoff probably wouldn't exceed 10 (but don't curse me if it does!), so you'd probably qualify.
- 6 years, 9 months ago
i felt the last year's paper to be simpler. I could solve around 15 questions.
- 6 years, 9 months ago | 2021-07-29 17:24:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 10, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9621747136116028, "perplexity": 2909.9178873118008}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153892.74/warc/CC-MAIN-20210729172022-20210729202022-00156.warc.gz"} |
https://stats.stackexchange.com/questions/252050/simulation-and-mathematical-notation-for-arima0-1-1-with-drift | # Simulation and mathematical notation for ARIMA(0,1,1) with drift
I am attempting to write the mathematical model for and also simulate an MA(1) process that has drift (in R).
I have referenced ARIMA (0,1,1) or (0,1,0) - or something else?, Simulation of forecasted values in ARIMA (0,1,1), and Fitting ARIMA with a drift on R. I am aware that the "forecast" package that I am using addresses the issues relating to mean reporting here: http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm.
My understanding (e.g. from example bottom of page on the very useful https://www.otexts.org/fpp/8/7 page on the otext site) is that an MA(1) model (on a series that for which the first difference is stationary) can be written as: $$(1-B)^d(y_t- \mu t) = (1 - \theta B) e_t$$ where $B$ is the backshift operator, $d$ is the order of differencing (here is 1), $y_t$ is the original series indexed by time $t$, $\theta$ is the MA1 parameter and $e_t$ is the error indexed at time $t$.
From the above otext site, I note:
the inclusion of a constant in a non-stationary ARIMA model is equivalent to inducing a polynomial trend of order $d$ in the forecast function.
So while my forecasts are correct, Q.1. How do I represent/differentiate this single order trend in the equation from the equation for an MA(1) model without drift? I ask this because expanding the above formula just produces the exact same equation/model I expect for an MA(1) without drift, i.e. (and apologies in advance if there is just something obviously wrong with my maths): \begin{aligned} y_t - \mu t - y_{t-1} + \mu t &= e_t - \theta e_{t-1} \\ y_t - y_{t-1} &= e_t - \theta e_{t-1} \\ \Delta y_t &= e_t - \theta e_{t-1} \\ \end{aligned}
In terms of simulating an MA(1) with drift I used the following, Q.2. Is this the correct approach or is there a more direct/accurate way to do this?:
library(lattice)
library(forecast)
library(tseries)   # kpss.test() used below comes from the tseries package
set.seed(325354)
temp <- arima.sim(n=100, list(order=c(0,1,1), ma=.75), sd = 2.7)
mean(temp)
ts.sim.orig <- ts(temp, start=c(2011,1), frequency=12)
xyplot(ts.sim.orig)
mean(ts.sim.orig)
# Stationary
kpss.test(ts.sim.orig)
# Drift term being 0.3
ts.sim <- ts(1:length(ts.sim.orig) * 0.3 + temp, start=c(2011,1), frequency=12)
xyplot(ts.sim)
mean(ts.sim)
# Not stationary
kpss.test(ts.sim)
# Stationary
kpss.test(diff(ts.sim, 1))
# Non-zero mean on first difference
mean(diff(ts.sim, 1))
# Plot is reasonable?
xyplot(diff(ts.sim, 1))
# First fit without drift:
summary(fit99 <- Arima(ts.sim, order=c(0,1,1), include.constant = F))
# Forecast is flatline as expected. Great.
plot(forecast(fit99))
# The drift is close to the mean from: mean(diff(ts.sim, 1))
summary(fit99 <- Arima(ts.sim, order=c(0,1,1), include.constant = T))
# Forecast includes trend
plot(forecast(fit99))
y <- ts.sim
x <- 1:length(ts.sim)
# Roughly equal to drift term as expected
summary(lm(y ~ x))
# Aside: this doesn't seem to recover the drift parameter too well...
summary(fit99 <- auto.arima(ts.sim, d = 1, trace = T, stepwise=FALSE, approximation=FALSE, seasonal = F))
plot(forecast(fit99))
Finally, does the approach to the simulation above answer my first question? That is, is the appropriate representation of the MA(1) with drift (and presumably assuming a zero intercept) equal to the following equation? $$\Delta y_t = \beta t + e_t - \theta e_{t-1}$$ Thank you. | 2019-03-18 22:08:30 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9971734881401062, "perplexity": 2896.186980894222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912201707.53/warc/CC-MAIN-20190318211849-20190318233849-00494.warc.gz"} |
http://travis.giggy.com/miki/113.034.html | Folder:
113 Math
File:
113.034 Statistics - Coefficient of determination and Squared error
# Coefficient of determination and Squared error
r^2 = coefficient of determination
- Always between zero and one: 0 <= r^2 <= 1
- r^2 is literally just r squared, it's that easy
- $$r^2 = 1 - \frac{SE_{regression}}{SE_\bar{y}}$$
- Reads: r squared = 1 minus the squared error of the regression line divided by the squared error of the average of all y values
- If squared error is small, it means that the residuals of all points along the regression line are small and they fit the line pretty well
- If squared error is big, it means that the residuals of the points along the regression line are big and they don't fit the line very well
- So, r^2 close to 1 is a good fit and r^2 close to zero is a bad fit
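A tiny worked example of the formula above (made-up data):
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

slope, intercept = np.polyfit(x, y, 1)                  # least-squares regression line
se_line = np.sum((y - (slope * x + intercept)) ** 2)    # squared error of the regression line
se_mean = np.sum((y - y.mean()) ** 2)                   # squared error of the mean of y
print(1 - se_line / se_mean)                            # r^2, close to 1 -> good fit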
• statistics | 2021-01-17 18:13:18 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6217700839042664, "perplexity": 972.0123646115505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513144.48/warc/CC-MAIN-20210117174558-20210117204558-00211.warc.gz"} |
http://scylla.ceas.uwm.edu/465/summary/new/sum_feb_1.html | Polarization and devices Feb. 1, 2018
• Polarization: vec e = hat x E_x cos ( omega t -k z ) + hat y E_y cos (omega t -k z + phi ), phi = phi_y - phi_x is the relative phase between the 2 components.
Unpolarized -phi is a random variable.
Linearly polarized - phi = 0 or pi
Circularly polarized - E_x = E_y , phi = pm pi/2
Elliptically polarized - E_x ne E_y , phi ne 0 or pi
Polarization devices - polarizer, birefringent polarizers, rotator.
Ref. 2 Sec. 6.1
Rotator: based on optical activity or Faraday effect
Put in terms of right circular unit vector hat e_R = {hat x - j hat y}/sqrt(2) and left circular unit vector hat e_L = {hat x + j hat y}/sqrt(2), vec E = E_R e^{-jk_R z} hat e_R + E_L e^{-jk_L z} hat e_L;
angle of rotation = rho z = (k_L - k_R ) z /2; rho is rotatory power. For example, Faraday effect rho = V H where H is magnetic field intensity and V is the Verdet constant, i.e. magneto-optics.
Ref. 2 Sec. 6.6
• Anisotropic materials: causes birefringence since refractive indices are different along different axes, i.e. directional dependent. P_i = sum_j epsilon_o chi_{ij} E_j
In general, the refractive indices are different in all 3 axes (x, y, z) in biaxial crystals. More common are uniaxial crystals which have one axis with refractive index different from those of the other two axes. This unique axis is called optic axis and refractive index along this axis is called n_e or n_{||} The refractive index along the other 2 axes is n_o or n_{_|_}
They are used to make polarization devices, e.g.
quarter wave plate causes a pi/2 phase difference between x and y components, i.e. Delta phi = pi/2 (2 m + 1 )
It can be a polarization converter.
Half wave plate causes a pi phase difference between x and y components, i.e. Delta phi = pi (2 m + 1 )
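A short numerical illustration of the wave-plate phase shifts just listed, written with Jones vectors (a formalism not used in these notes; purely a sketch):
import numpy as np

E_in = np.array([1.0, 1.0]) / np.sqrt(2)          # linear polarization at 45 degrees
qwp = np.diag([1.0, np.exp(1j * np.pi / 2)])      # quarter-wave plate: pi/2 shift on the y component
hwp = np.diag([1.0, np.exp(1j * np.pi)])          # half-wave plate: pi shift on the y component

print(qwp @ E_in)      # equal amplitudes, 90 deg out of phase -> circular polarization
print(hwp @ E_in)      # the 45-degree linear state is flipped to -45 degrees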
• Inhomogeneous media: refractive index is a function of position, P = epsilon_o chi (r) E
Applications - grade index fiber, mirage.
• Nonlinear media: P = epsilon_o ( chi ^{(1)} E + chi ^{(2)} E^2 + chi ^{(3)} E^3 )
Nonlinear effects result in harmonic generation.
Four-wave mixing (FWM), Stimulated Raman Scattering (SRS), and Stimulated Brillouin Scattering (SBS) rob energy from signal into different freq. channels.
• Dispersive medium: Different frequency components travel with different speeds, e.g. Prisms. | 2018-07-22 16:19:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6518781185150146, "perplexity": 6974.523026805916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676593378.85/warc/CC-MAIN-20180722155052-20180722175052-00285.warc.gz"} |
https://electronics.stackexchange.com/questions/32938/how-fast-should-i-clock-my-cpld-as-compared-to-my-spi-bus-speed | # How fast should I clock my CPLD as compared to my SPI bus' speed?
As I'm sure everyone here knows, in FPGA/CPLD design one often needs to synchronize a slower asynchronous signal (say, the SCK line of SPI) with a much faster clock signal that's directly fed to the FPGA/CPLD. My question is, how much faster does the FPGA/CPLD clock need to be relative to my asynchronous signal? Ten times? Twenty times?
In my case less than 10x doesn't work well. Specifically: I set my SCK speed to 4 MHz whereas my clock was 20 MHz. This didn't work at all. 2 MHz works, but occasionally I get some problems. At 1 MHz, it works very well - no issues so far.
VHDL Code for the CPLD:
library ieee;
use ieee.std_logic_1164.all;
entity PISO is
port(CLK, nCS, SCK, nRESET : in std_logic;
PI : in std_logic_vector(71 downto 0);
SO : out std_logic);
end PISO;
architecture archi of PISO is
signal tmp: std_logic_vector(PI'high downto PI'low);
signal bitOut: std_logic;
signal rise, fall : std_logic;
signal oscena: std_logic;
signal iCLK : std_logic;
signal SCK_rising, SCK_falling, SCK_sync, SCK_delay : std_logic;
signal CS_rising, CS_falling, CS_sync, CS_delay : std_logic;
component sync
generic(
RESET_STATE : std_logic := '0' -- '0' for active low sync
);
port(
clk : in std_logic;
rstN : in std_logic;
d : in std_logic;
q : out std_logic
);
end component;
begin
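-- two-stage synchronizers bring the asynchronous SCK and nCS inputs into the CLK domain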
sync1 : sync
generic map(
RESET_STATE => '0'
)
port map(
clk => clk,
rstN => nRESET,
d => sck,
q => SCK_sync
);
sync2 : sync
generic map(
RESET_STATE => '1'
)
port map(
clk => clk,
rstN => nRESET,
d => nCS,
q => CS_sync
);
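-- detect rising/falling edges of the synchronized SCK (held clear while the slave is deselected)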
process(clk, nRESET)
begin
if (nRESET = '0') then
sck_rising <= '0';
sck_falling <= '0';
sck_delay <= '0';
elsif rising_edge(clk) then
if cs_sync = '1' then
sck_delay <= '0';
sck_rising <= '0';
sck_falling <= '0';
else
sck_delay <= sck_sync;
sck_rising <= sck_sync and (not sck_delay);
sck_falling <= (not sck_sync) and sck_delay;
end if;
end if;
end process;
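-- detect rising/falling edges of the synchronized chip select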
process(clk, nRESET)
begin
if (nRESET = '0') then
cs_rising <= '0';
cs_falling <= '0';
cs_delay <= '0';
elsif rising_edge(clk) then
cs_delay <= cs_sync;
cs_rising <= cs_sync and (not cs_delay);
cs_falling <= (not cs_sync) and cs_delay;
end if;
end process;
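-- parallel-in/serial-out shift register: reload from PI while deselected, shift out MSB-first on SCK falling edges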
process(CLK, nRESET)
begin
if (nRESET = '0') then
tmp <= (others => '0');
elsif rising_edge(CLK) then
if CS_sync = '0' then
if SCK_falling = '1' then
tmp <= tmp(PI'high -1 downto PI'low) & '0';
end if;
elsif CS_sync = '1' then
tmp <= PI;
end if;
end if;
end process;
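-- tri-state the serial output whenever the slave is not selected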
SO <= tmp(PI'high) when nCS = '0' else 'Z';
end archi;
And here's the code for the sync component:
library ieee;
use ieee.std_logic_1164.all;
entity sync is
generic (
RESET_STATE : std_logic := '0' -- '0' for active low sync
);
port (
clk : in std_logic;
rstN : in std_logic;
d : in std_logic;
q : out std_logic
);
end entity;
architecture behavioral of sync is
signal d_meta : std_logic;
begin
process(clk, rstN)
begin
if (rstN = '0') then
d_meta <= RESET_STATE;
q <= RESET_STATE;
elsif (clk'event and clk = '1') then
d_meta <= d;
q <= d_meta;
end if;
end process;
end architecture;
Regarding simplicity, I know SPI is super simple but I'm a newbie so all of this is rather difficult for me. Only after weeks did it make sense to me that I do need to sync the signals in the CPLD/FPGA (initially I was just using the SCK as my clock and didn't even have a separate clock on my board. It worked fine for slower speeds but increasing the speed to even 1 MHz made the naivety of my approach obvious). I'm sure (in fact, I know because of your excellent posts around here) your approach is much simpler and more elegant; the issue is that I'll need to get my head around it first because as of now it just sounds like Greek to me!
• What's the SCK's pulse width? – stevenvh May 30 '12 at 13:33
• @stevenvh By pulse width, I assume you mean the length of time where the SCK is high? I believe that's half of the time period, so about 0.25 uS if my math is right. I'll measure it tomorrow at work and update here. – Saad May 30 '12 at 13:40
• The reason I'm asking is that clock frequency isn't everything. If your clock is 1MHz that's 1us period. So sampling at 10MHz would give you 10 samples over that, ideally 5 high and 5 low. If your clock produces only narrow pulses, say 100ns, that may be perfectly within specs (probably, since you're allowed to clock it much faster), but you'll only have 1 sample at 10MHz. So there's a chance of missing that one. – stevenvh May 30 '12 at 13:44
• I'm not sure what's naive about using SCK as a clock inside the CPLD, unless it's something CPLD-specific. FPGAs can have multiple clock domains inside. Was the clock trace suffering from transmission-line reflections, or was it too slow, or something else? – ajs410 May 30 '12 at 20:37
• @ajs410 I was told that async. design is a Bad Thing in the FPGA/CPLD world and it's better if I sycned my signals with an external clock. – Saad May 30 '12 at 21:02
There are so many things in this question that it is difficult to know where to start.
I am assuming that your FPGA logic is a SPI slave, not a master. If it is a master then you have a whole different set of issues which I'm going to avoid going into right now.
The simple direct answer to your question is that you need to sample an async signal at least two times the frequency of your signal. So if you have a 4 MHz clock then you need to sample it at 8 MHz or higher. Of course, nothing is simple or direct in this case.
You have things a little more difficult because you are not sampling one async signal, you are sampling three (CLK, CS, and MOSI). You also need to keep those three signals time-aligned with each other through the sampling process. And you have to spit out MISO in such a way as to not violate your setup/hold time at the master.
None of this is easy, but having a higher speed clock will make things much easier. How much higher depends on your code, and you didn't post your code. I think that I could write code to do it with an 8x clock, but that is just a guess. Honestly, however, I think this is the wrong approach.
SPI is a super simple interface, and it would be good if you kept it super simple. SPI has its own clock, and if you use it as a clock then everything becomes almost easy. Instead of changing clock domains on the serial SPI interface, change clock domains on the parallel data going in/out of your shift registers. If you look at those signals carefully you might even realize that you don't need to do anything special, or if you do then it's just a flip-flop per signal. Then you don't need to have your main clock be higher than your SPI clock. Your main clock could actually be slower!
I do this on my SPI FPGA/CPLD interfaces and I have no problems running SPI at 30+ MHz, with or without a second clock domain.
• "at least two times the frequency of your signal". You know that this is not quite correct. I mention in comment a 100ns pulse width at a 1MHz clock. Sampling at 2MHz will likely miss the pulse. You need twice the harmonic required to have a decent signal, amplitude-wise, and edge-wise. – stevenvh May 30 '12 at 13:56
• @stevenvh True, but I wasn't going to worry about the details too much because I went on to say that for this application you probably need 8x, or better yet do something else entirely. – user3624 May 30 '12 at 14:21
• Or use hardware that will hold the pulse and has to be actively reset so you receive it? – Kortuk May 30 '12 at 14:25
• @Kortuk - That sounded like a good idea, but on second though, what if it clocks on the falling edge? That will have to be accurate. IIRC SPI clocks in and out at opposite edges. – stevenvh May 30 '12 at 14:58
• @stevenvh, or operate in opposite direction(reset sends line high, or just invert the signal for all it matters). I am just saying if there is a certain signal you are looking for something like this can help. – Kortuk May 30 '12 at 14:59
I would suggest that, if practical, an SPI slave should have registers that are clocked from the SPI pins. Those registers, or signals that derive from them, should then be double-synchronized with other logic. Depending upon application requirements, signals from the other logic may either be double synchronized to the SPI clock or may be fed to the SPI's data output asynchronously (if the SPI data is going to be fed to a single data path which contains at least two latches before it splits, and if the application won't care whether an attempt to read something at the moment it changes yields high or low, async reporting may be just fine).
Even if one doesn't want to design an entire SPI slave interface to be SPI-clocked, one can still have a few key aspects of it clocked from the SPI wires. At minimum, I would suggest that the SPI clock wire should trigger a latch for the data and a toggle latch. Pass the data latch through one more level of synchronization than the toggle latch, and then assume that any time the toggle latch value differs from the previous one, new data exists on the clock wire. Some extra latching logic on the chip-select wire would allow reliable detection of short unselect/reselect events.
PS--Depending upon how the logic and software protocol are designed, it may be possible to have an SPI slave device work reliably even if the attached logic likes to sleep when idle and doesn't wake up "instantly". Such behavior could not be implemented if everything on the SPI bus had to be synchronized to the main-logic clock (which wouldn't be running while the device was asleep).
• ...would there be a danger that the FPGA might rewrite things so that transitions on that signal when it wasn't supposed to matter might cause glitches downstream? For example, given the equation Q := (Q and not E) or (D and E) [D FF with enable] would there be a danger that a compiler might implement that using a non-hazard-free 3-input mux, such that a transition on D near the clock edge could glitch Q even when synchronous signal E was low? Is there any nice way to handle such things? – supercat Jul 17 '14 at 15:35 | 2020-09-27 13:50:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3220379948616028, "perplexity": 3298.5370178731887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400279782.77/warc/CC-MAIN-20200927121105-20200927151105-00157.warc.gz"} |
https://www.bacchusbites.com/RCFOA/seminar_template_id_243.html | Existence results for some nonlinear elliptic equations with measure data in Orlicz-Sobolev spaces
Ge Dong (Ph.D, Tongji University)
15:00 to 16:00, May 12th, 2015, Science Building A1510
Abstract:
We prove the existence results in the setting of Orlicz spaces for the following nonlinear elliptic equation $A(u) + g(x, u, Du) = \mu$ , where A is a Leray-Lions operator defined on $D(A) \subset W_0^1 L_M(\Omega)$, while $g$ is a nonlinear term having a growth condition with respect to $Du$, but does not satisfy any sign condition. The right-hand side $\mu$ is a bounded Radon measure data. | 2020-09-21 02:20:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.862216055393219, "perplexity": 373.0693330820015}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400198887.3/warc/CC-MAIN-20200921014923-20200921044923-00574.warc.gz"} |
https://tmfujis.wordpress.com/2016/10/13/doing-mcmc-with-pymc-and-making-tr2-a-bit-bayesian/ | # Doing MCMC with PyMC and making tr2 a bit Bayesian
When you do a Bayesian inference, Markov chain Monte Carlo (MCMC) sampling is a common method to obtain a posterior probability of your model or parameter. So far, I have avoided using MCMC in my programs because I like simple and rapid algorithms. But, MCMC now looks unavoidable when I do more sophisticated Bayesian modeling.
There are many software platforms to do MCMC with reasonable learning curves, like Stan & Rstan or BUGS. These are definitely worth studying when you want to be a serious Bayesian.
But, for now, I choose a Python package called “PyMC“. This is because it is well integrated with Python language, and there is a nice introductory textbook to learn this package. (It is freely available online, and they sell hard copies too.)
After reading some chapters of the book, I tried to solve a simple problem with PyMC, which is related to phylogenetic inference and species delimitation.
The problem is estimation of λ in the model below.
$P(n_1,n_2,n_3|\lambda) = \frac{N!}{n_1!n_2!n_3!}(1-\frac{2}{3}e^{-\lambda})^{n_1}(\frac{1}{3}e^{-\lambda})^{N-n_1}$
Why is this related to delimitation? Because this is a model of distribution of tree topology when you sample 3 individuals from 3 species.
If you sample 3 individuals from 3 different species and reconstruct gene trees, you are more likely to see one particular topology, the same one as species tree, than others. On the other hand, if you sample 3 individuals from 1 species, 3 types of topology are evenly observed. Finding this skew / evenness of topology distribution is a basic idea of the tr2-delimitation.
N in the model above is the number of sampled loci, which is a sum of n1, n2 and n3, counts of three possible tree topology of 3 samples. As λ (a branch length of species tree) increases, you more frequently observe topology 1 (species tree topology). The distribution becomes more even when λ is close to zero.
With this model, a posterior distribution of λ when you observe topology counts [n1,n2,n3] is,
$P(\lambda|n_1,n_2,n_3) = \frac{P(n_1,n_2,n_3|\lambda)\pi(\lambda)}{P(n_1,n_2,n_3)}$
I tried to estimate this distribution by MCMC. Luckily, there is an analytical solution to posterior distribution of λ at least with a uniform prior. So, I can check if MCMC can actually work.
The code below simulates n1, n2 and n3 with a particular λ value and does MCMC sampling to estimate λ’s posterior with simulated values, then outputs 5000 samples.
import sys
import numpy
import pymc
##simulated frequencies of triplets
l = 0.598 #true lambda = 0.598
#l = 0.162 #or lambda = 0.162
prob = [1-2*numpy.exp(-l)/3, numpy.exp(-l)/3, numpy.exp(-l)/3]
count_obs = numpy.random.multinomial(100, prob)
print(l, prob, count_obs)
##Bayesian model
lambd = pymc.Uniform("lambda", lower=0.0, upper=5.0) #Uniform prior for lambda
#A pymc function translating lambda to 3 probabilities of triplets
@pymc.deterministic
def triplet_prob(lambd=lambd):
    p1 = 1-2*numpy.exp(-lambd)/3
    p2 = p3 = numpy.exp(-lambd)/3
    return [p1, p2, p3]
#observed values were associated with the multinomial model
obs = pymc.Multinomial("obs", n=sum(count_obs), p=triplet_prob, observed=True, value=count_obs)
#run MCMC
model = pymc.Model([obs, triplet_prob, lambd])
mcmc = pymc.MCMC(model)
mcmc.sample(100000, burn=50000)
with open("trace.lambda.%0.3f.txt"%l, "w") as f:
    for i in mcmc.trace("lambda")[::10]:
        f.write("%f\n"%i)
PyMC has a quite elegant and intuitive way to abstract Bayesian modelling.
You can easily define prior distributions of parameters and develop models by combining them. The observed data are connected to the model with the “observed=True” option. Dependency of variables can be traced by “parent” and “children” attributes. Then, you can run MCMC just by calling mcmc.sample.
The distribution of posterior probability of λ when λ = 0.598 was like this. (In this case, simulated numbers are [n1,n2,n3]=[60,21,19])
The left plot is the trace of MCMC, and the right the histogram of MCMC samples. The curve superimposed on the histogram is an analytical solution. As you can see, the MCMC distribution fitted the analytical solution surprisingly well. This is great. The 95% credible interval (CI) is (0.30, 0.78). So, I am quite sure that λ is larger than zero and topology distribution is skewed.
When λ is smaller (λ = 0.162,[n1,n2,n3]=[44, 33, 23]), estimation became more uncertain. The 95%CI is (0.035, 0.38). A bit difficult to say n1 is more frequent.
I think this credible interval approach is OK for this simple case to just estimate λ. But, if you seriously implement species delimitation, a model comparison with reversible jump MCMC is required. It looks much more laborious to write codes for it since PyMC doesn’t have rjMCMC functions.
Regardless, I think PyMC is a good package which is easy to learn and have nice, readable documentations. It is probably a reasonable option if you want to integrate a Bayesian inference in your Python codes. Also, I think it is a handy tool to prototype a Bayesian model before writing it from scratch with other fast languages. | 2018-10-19 22:29:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7310131192207336, "perplexity": 1893.428288457964}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512460.41/warc/CC-MAIN-20181019212313-20181019233813-00174.warc.gz"} |
https://cracku.in/blog/cat-ratio-and-proportion-questions-pdf/ | # CAT Ratio and Proportion Questions PDF [Most Important with Solutions]
Ratio and Proportions for CAT is an important topic in the Quant section. Over the past few years, Ratio and Proportion have made a recurrent appearance in the CAT Quants Section. You can expect around 2-3 questions in the 22-question format of the CAT Quant sections. You can check out these Ratio and Proportion CAT Previous year questions. In this article, we will look into some important Ratio & Proportions Questions for CAT. These are a good source for practice; If you want to practice these questions, you can download this CAT Ratios and Proportions Questions PDF, which is completely Free.
• CAT Ratios & Proportions – Tip 1: CAT Ratio and Proportion questions appear in the CAT and other MBA entrance exams every year. If you’re starting the prep, firstly understand the CAT Arithmetic Syllabus; It is one of the most important topics and hence should not be avoided. Based on our analysis of the Arithmetic CAT Previous Year Questions: 3-4 questions were asked on this topic.
• CAT Ratios & Proportions – Tip 2: In the CAT exam, CAT Ratio questions are not very tough. A strong foundation in this topic will aid a student in answering these questions. Practice these CAT Ratio and Proportion problems with solutions PDF. Getting yourself acquainted with the basics of these concepts will help you solve the problems. Learn all the major formulae from these concepts. You can learn all the Important CAT Ratio & Proportion Formulas here.
Question 1: If a certain weight of an alloy of silver and copper is mixed with 3 kg of pure silver, the resulting alloy will have 90% silver by weight. If the same weight of the initial alloy is mixed with 2 kg of another alloy which has 90% silver by weight, the resulting alloy will have 84% silver by weight. Then, the weight of the initial alloy, in kg, is
a) 3.5
b) 2.5
c) 3
d) 4
Solution:
Let the alloy contain x Kg silver and y kg copper
Now when mixed with 3Kg Pure silver
we get $\frac{\left(x+3\right)}{x+y+3}=\frac{9}{10}$
we get 10x+30 =9x+9y+27
9y-x=3 (1)
Now as per condition 2
silver in 2nd alloy = 2(0.9) =1.8
so we get$\frac{\left(x+1.8\right)}{x+y+2}=\frac{21}{25}$
we get 21y-4x =3 (2)
solving (1) and (2) we get y= 0.6 and x =2.4
so x+y = 3
Question 2: A tea shop offers tea in cups of three different sizes. The product of the prices, in INR, of three different sizes is equal to 800. The prices of the smallest size and the medium size are in the ratio 2 : 5. If the shop owner decides to increase the prices of the smallest and the medium ones by INR 6 keeping the price of the largest size unchanged, the product then changes to 3200. The sum of the original prices of three different sizes, in INR, is
Solution:
Let price of smallest cup be 2x and medium be 5x and large be y
Now by condition 1
we get $2x\ \times\ \ 5x\ \times\ y\ =800$
we get $x^2y\ =80$ (1)
Now as per second condition ;
$\left(2x+6\right)\times\ \left(5x+6\right)\ y\ =3200$ (2)
Now dividing (2) and (1)
we get $\frac{\left(\left(2x+6\right)\times\ \left(5x+6\right)\right)}{x^2}=40$
we get $10x^2+42x+36\ =\ 40x^2$
we get $\ 30x^2-42x-36=0$
$5x^2-7x-6=0$
we get x=2
So 2x=4 and 5x=10
Now substituting in (1) we get y =20
Now therefore sum = 4+10+20 =34
Question 3: One part of a hostel’s monthly expenses is fixed, and the other part is proportional to the number of its boarders. The hostel collects ₹ 1600 per month from each boarder. When the number of boarders is 50, the profit of the hostel is ₹ 200 per boarder, and when the number of boarders is 75, the profit of the hostel is ₹ 250 per boarder. When the number of boarders is 80, the total profit of the hostel, in INR, will be
a) 20200
b) 20500
c) 20800
d) 20000
Solution:
Profit per boarder = Total profit / Number of boarders.
Let the number of boarders be n.
Profit/boarder = 1600 – (Total cost/n)
Let the total cost be a + bn, where a = fixed, and b is the variable additional cost per boarder.
Profit/boarder = 1600 – (a + bn)/n
Profit/boarder = 1600 – a/n – b
1600 – a/50 – b = 200
1600 – a/75 – b = 250
Solving, we get a = 7500, and b = 1250
Hence, total profit with 80 people = 80 ( 1600 – 7500/80 – 1250) = 80 (350 – 7500/80) = 28000 – 7500 = Rs. 20500
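A quick numerical check of the fitted cost model (an added Python illustration, not part of the original solution):
a, b = 7500, 1250                 # fixed cost and cost per boarder
for n in (50, 75, 80):
    profit = 1600*n - (a + b*n)
    print(n, profit, profit/n)    # 10000 (200 each), 18750 (250 each), 20500 total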
Question 4: The arithmetic mean of scores of 25 students in an examination is 50. Five of these students top the examination with the same score. If the scores of the other students are distinct integers with the lowest being 30, then the maximum possible score of the toppers is
Solution:
Let sum of marks of students be x
Now therefore x = 25*50 =1250
Now to maximize the marks of the toppers
We will minimize the marks of 20 students
so their scores will be (30,31,32…..49 )
let score of toppers be y
so we get 5y +$\frac{20}{2}\left(79\right)$=1250
we get 5y +790=1250
5y=460
y=92
So scores of toppers = 92
Question 5: The total of male and female populations in a city increased by 25% from 1970 to 1980. During the same period, the male population increased by 40% while the female population increased by 20%. From 1980 to 1990, the female population increased by 25%. In 1990, if the female population is twice the male population, then the percentage increase in the total of male and female populations in the city from 1970 to 1990 is
a) 68.25
b) 68.75
c) 68.50
d) 69.25
Solution:
Let us solve this question by assuming values(multiples of 100) and not variables(x).
Since we know that the female population was twice the male population in 1990, let us assume their respective values as 200 and 100.
Note that while assuming numbers, some of the population values might come out as a fraction(which is not possible, since the population needs to be a natural number). However, this would not affect our answer, since the calculations are in ratios and percentages and not real values of the population in any given year.
Now, we know that the female population became 1.25 times itself in 1990 from what it was in 1980.
Hence, the female population in 1980 = 200/1.25 = 160
Also, the female population became 1.2 times itself in 1980 from what it was in 1970.
Hence, the female population in 1970 = 160/1.2 = 1600/12 = 400/3
Let the male population in 1970 be x. Hence, the male population in 1980 is 1.4x.
Now, the total population in 1980 = 1.25 times the total population in 1970.
Hence, 1.25 (x + 400/3) = 1.4x + 160
Hence, x = 400/9.
Population change = 300 – 400/9 – 400/3 = 300 – 1600/9 = 1100/9
percentage change = $\frac{\frac{1100}{9}}{\frac{1600}{9}}\times\ 100\ =\ \frac{1100}{16}\%=68.75\%$
Question 6: In a tournament, a team has played 40 matches so far and won 30% of them. If they win 60% of the remaining matches, their overall win percentage will be 50%. Suppose they win 90% of the remaining matches, then the total number of matches won by the team in the tournament will be
a) 80
b) 78
c) 84
d) 86
Solution:
Initially number of matches = 40
Now matches won = 12
Now let remaining matches be x
Now number of matches won = 0.6x
Now as per the condition :
$\frac{\left(12+0.6x\right)}{40+x}=\frac{1}{2}$
24 +1.2x=40+x
0.2x=16
x=80
Now when they won 90% of remaining = 80(0.9) =72
So total won = 84
Question 7: A person buys tea of three different qualities at ₹ 800, ₹ 500, and ₹ 300 per kg, respectively, and the amounts bought are in the proportion 2 : 3 : 5. She mixes all the tea and sells one-sixth of the mixture at ₹ 700 per kg. The price, in INR per kg, at which she should sell the remaining tea, to make an overall profit of 50%, is
a) 653
b) 688
c) 692
d) 675
Solution:
Considering the three kinds of tea are A, B, and C.
The price of kind A = Rs 800 per kg.
The price of kind B = Rs 500 per kg.
The price of kind C = Rs 300 per kg.
They were mixed in the ratio of 2 : 3: 5.
1/6 of the total mixture is sold for Rs 700 per kg.
Assuming the ratio of mixture to A = 12kg, B = 18kg, C =30 kg.
The total cost price is 800*12+500*18+300*30 = Rs 27600.
Selling 1/6 which is 10kg for Rs 700/kg the revenue earned is Rs 7000.
In order to have an overall profit of 50 percent on Rs 27600.
The selling price of the 60 kg is Rs 27600*1.5 = Rs 41400.
Hence he must sell the remaining 50 kg mixture for Rs 41400 – Rs 7000 = 34400.
Hence the price per kg is Rs 34400/50 = Rs 688
Question 8: In a football tournament, a player has played a certain number of matches and 10 more matches are to be played. If he scores a total of one goal over the next 10 matches, his overall average will be 0.15 goals per match. On the other hand, if he scores a total of two goals over the next 10 matches, his overall average will be 0.2 goals per match. The number of matches he has played is
Solution:
Let Total matches played be n and in initial n-10 matches his goals be x
so we get $\frac{\left(x+1\right)}{n}=0.15$
we get x+1 =0.15n (1)
From condition (2) we get :
$\frac{\left(x+2\right)}{n}=0.2$
we get x+2 = 0.2n (2)
Subtracting (1) and (2)
we get 1 =0.05n
n =20
So initially he played n-10 =10 matches
Question 9: A box has 450 balls, each either white or black, there being as many metallic white balls as metallic black balls. If 40% of the white balls and 50% of the black balls are metallic, then the number of non-metallic balls in the box is
Solution:
Let the number of white balls be x and black balls be y
So we get x+y =450 (1)
Now metallic black balls = 0.5y
Metallic white balls = 0.4x
From condition 0.4x=0.5y
we get 4x-5y=0 (2)
Solving (1) and (2) we get
x=250 and y =200
Now number of Non Metallic balls = 0.6x+0.5y = 150+100 = 250
Question 10: From a container filled with milk, 9 litres of milk are drawn and replaced with water. Next, from the same container, 9 litres are drawn and again replaced with water. If the volumes of milk and water in the container are now in the ratio of 16 : 9, then the capacity of the container, in litres, is
Solution:
Let initial volume be V, final be F for milk.
The formula is given by : $F\ =\ V\cdot\left(1-\frac{K}{V}\right)^n$ n is the number of times the milk is drawn and replaced.
so we get $F=\ V\left(1-\frac{K}{V}\right)^{^2}$
here K =9
we get
$\frac{16}{25}V\ =\ V\ \left(1-\frac{9}{V}\right)^{^2}$
we get $1-\frac{9}{V}=\ \frac{4}{5}or\ -\frac{4}{5}$
If considering $1-\frac{9}{V}=-\frac{4}{5}$
V =5, but this is not possible because 9 liters is drawn every time.
Hence : $1-\frac{9}{V}=\frac{4}{5},\ V\ =\ 45\ liters$
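A quick check of the answer (an added Python illustration, not part of the original solution):
V = 45.0                   # capacity in litres
milk = V
for _ in range(2):         # draw 9 litres of the mixture and replace with water
    milk -= 9 * milk / V
print(milk, V - milk)      # 28.8 litres milk, 16.2 litres water, i.e. a 16 : 9 ratio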
Check out the CAT Formula Handbook which includes the most important formulas you must know for CAT.
• So, these are some of the most important CAT Ratio questions PDF. Download these Ratio & Proportions Questions for CAT PDF, with detailed Answers. Also, practice the ratio and proportion concepts well; it is not a tough topic.
• Practising and solving these questions and answers is one of the key steps in acing CAT Arithmetic questions. You can also check out CAT Ratio & Proportions Previous year Questions with detailed solutions here.
• Try these 3 Cracku Free CAT Mocks, which come with detailed solutions and with video explanations. | 2022-08-15 16:03:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5555964112281799, "perplexity": 1591.8076155035349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572192.79/warc/CC-MAIN-20220815145459-20220815175459-00603.warc.gz"} |
http://www.solved-problems.com/tag/factors/ | # Solving Quadratic Equations I: Factoring (Grouping)
Sometimes, quadratic equations can be solved by factorizing, which is also called grouping. The solve-by-factoring process usually consists of the following major steps:
All terms should be moved to one side of the equation using addition or subtraction. Rewrite the equation so that the left side of the equation is set equal to 0. For example, if the original equation is $x^2 = 5x - 3$, it should be rewritten as $x^2 - 5x + 3 = 0$.
The equation should be factored completely. It will have two factors.
Each factor should be set equal to zero. Since the factors are of first degree, they are easy to solve.
All of the solutions should be combined to obtain the full solution set for the original equation.
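For instance (an added illustration, not from the original article): to solve $x^2 - 5x + 6 = 0$, factor the left side as $(x - 2)(x - 3)$, so the equation reads $(x - 2)(x - 3) = 0$; setting each factor to zero gives $x = 2$ or $x = 3$, and the full solution set is $\{2, 3\}$.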
The catch of this method is the finding of factors. Except some trivial cases, factoring quadratic equations is not easier than using other methods to solve the quadratic equations. Here are a few tips to help you factorize a quadratic equations: | 2018-06-18 20:37:02 | {"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7217846512794495, "perplexity": 194.00818516049938}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267861163.5/warc/CC-MAIN-20180618203134-20180618223134-00616.warc.gz"} |
http://www.logicmatters.net/latex-for-logicians/classroom/ | # 7. At the conference, in the classroom
Presentations.
• By far the most widely used/widely recommended LaTeX presentation package is beamer.cls (Till Tantau, 2003-07: maintained by Vedran Miletić and Joseph Wright). Every logic talk with slides that I’ve been to in the last few years has evidently used this package.
• For other LaTeX presentation options, see Screen Presentation Tools (Michael Wiedmann’s comprehensive comparison page, revised in 2014).
Problem sets
• probsoln.sty (Nicola Talbot, 2000-12): simple package for generating problems sheets — and answer sheets — by selecting from problems and solutions defined in another file).
• exercise.sty (Paul Pichaureau 2004-09): flexible package for setting out exercises/answers in various documents.
• exam.sty (Philip Hirschhorn 1994-2011): a nice option in you want to maintain a single document which, with a single toggle, both prints a preamble-and-exercises set, and a set of exercises-and-solutions. (This is what I have chosen to use for my Gödel book exercises). | 2017-06-26 22:35:03 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8199558854103088, "perplexity": 8515.367820847596}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320869.68/warc/CC-MAIN-20170626221252-20170627001252-00675.warc.gz"} |
https://www.computer.org/csdl/trans/tc/2010/10/ttc2010101392-abs.html | Issue No. 10 - October (2010 vol. 59)
ISSN: 0018-9340
pp: 1392-1401
Kwok-Wo Wong , The City University of Hong Kong
Fei Chen, Chongqing University, Chongqing
Xiaofeng Liao, Chongqing University, Chongqing
ABSTRACT
In this paper, the period distribution of sequences generated by Chebyshev polynomials over the finite field $Z_N$ is analyzed. It is found that the distribution is unsatisfactory if N (the modulus) is not chosen properly. Based on this finding, we present an attack on the public-key algorithm based on Chebyshev polynomials over $Z_N$. Then, we modify the original algorithm to make it suitable for practical purpose. Its security under some existing models is also discussed in detail.
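As an illustration only (not taken from the paper): the sequences in question can be generated with the standard three-term recurrence, and the semigroup property $T_r(T_s(x)) \equiv T_{rs}(x) \pmod N$ that underlies such public-key schemes is easy to check numerically.
def cheb_mod(n, x, N):
    # T_0(x) = 1, T_1(x) = x, T_{k+1}(x) = 2*x*T_k(x) - T_{k-1}(x)  (mod N)
    a, b = 1, x % N
    for _ in range(n):
        a, b = b, (2 * x * b - a) % N
    return a
x, N = 7, 101
assert cheb_mod(6, x, N) == cheb_mod(2, cheb_mod(3, x, N), N)   # T_2(T_3(x)) = T_6(x) mod N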
INDEX TERMS
Chaos, Chebyshev polynomials, period distribution, public-key cryptography, security analysis.
CITATION
Kwok-Wo Wong, Fei Chen, Xiaofeng Liao, "On the Security of Public-Key Algorithms Based on Chebyshev Polynomials over the Finite Field $Z_N$", IEEE Transactions on Computers, vol. 59, no. , pp. 1392-1401, October 2010, doi:10.1109/TC.2010.148 | 2016-09-27 10:55:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6833323240280151, "perplexity": 1897.4540958446366}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738661023.80/warc/CC-MAIN-20160924173741-00109-ip-10-143-35-109.ec2.internal.warc.gz"} |
http://math.stackexchange.com/questions/54679/is-there-an-infinite-number-of-primes-constructed-as-in-euclids-proof | # Is there an infinite number of primes constructed as in Euclid's proof?
In Euclid's proof that there are infinitely many primes, the number $p_1 p_2 ... p_n + 1$ is constructed and proved to be either a prime, or a product of primes greater than $p_n$.
Trivially, we could also use the number $R_n=p_1 p_2 ... p_n - 1$ to prove the theorem, for n>2.
Intuitively, as $n$ grows, the probability that $R_n$ is prime gets smaller. Is there a proof that $R_n$ is not prime for any $n$ greater than some integer $M$ ? Or conversely, that there is an infinite number of prime $R_n$ numbers?
A possibly equivalent question: Is there a prime number greater than $p_n$ and smaller than $R_n$ for any $n$ greater than some integer $M$ ?
-
Perhaps this is pedantry, but this isn't quite what Euclid's proof says. You seem to be using $p_i$ to mean "the $i$th prime", but this doesn't work: after all, if you only know $p_1, \dots, p_n$, then the number $R_n$ won't necessarily give rise to $p_{n+1}$ as a factor. Euclid does something different: he lets $\{p_1, \dots, p_n\}$ be any finite set of primes, then constructs a new one $p_{n+1}$. Here $p_i$ simply means "the $i$th prime that we have found". Your question makes sense in spite of this, but isn't very related to Euclid's proof as it stands. :) – Billy Jul 31 '11 at 0:57
Euclid did indeed consider arbitrary finite sets of primes, not just the set of the smallest $n$ primes. – Michael Hardy Jul 31 '11 at 4:30
So I think the answer to the title question is no, the proof isn't really constructive unless you pick all the primes. For example if you just knew that 3 and 5 were primes 3(5)+1=16 which isn't prime or the product of primes greater than ones in your list. – user9352 Aug 18 '11 at 16:42
The answer to the last question is yes. Much more is known: there is always a prime between $p$ and $2p$, by Bertrand's Postulate, which has long been a theorem.
The numbers $P_n=p_1p_2\cdots p_n +1\:$ have been looked at quite a bit, your $R_n$, for no clear reason, somewhat less. Very little of a general character is known about the $P_n$, and even less about the $R_n$. Prime numbers of either shape are called primorial primes.
It is not known whether there is an infinite number of primorial primes. More startlingly, it is not known whether an infinite number of the $P_n$ or $R_n$ are composite! There has been a great deal of computational work on these numbers, and primes do seem to become scarce among them as $n$ gets large. You can find some information here.
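(Added illustration, not from the original answer: a few lines of Python make the scarcity easy to see for small $n$.)
from sympy import prime, isprime
product = 1
for n in range(1, 13):
    product *= prime(n)                        # product of the first n primes
    print(n, isprime(product + 1), isprime(product - 1))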
-
For googling purposes, this is called Bertrand's postulate. :) – Billy Jul 31 '11 at 0:49
I guess you meant $P_n$ to be the one with a '+' ? – Adrian Jul 31 '11 at 1:12
@Adrian: Thank you, fixed. – André Nicolas Jul 31 '11 at 1:22
Euclid's proof says that if there were only finitely many primes, the product of these primes +1 would be a new prime. It arrives at a contradiction.
Hence, it really makes no assertion about the product of the first $n$ primes, plus one.
-
Actually, "the product of these primes $+1$" is not asserted to be a new prime; it is merely asserted that this product is divisible by some prime greater than any in the initial collection. – Nick Strehlke Jul 31 '11 at 1:54
Note the false premise. What I am saying is that were there finitely many primes, their product + 1 must be a new prime. This is a contradiction, so no assertion is made there. Only the fact that there is an infinity of primes is revealed. – ncmathsadist Jul 31 '11 at 2:12
Correct. there is no expectation it would be. – ncmathsadist Jul 31 '11 at 2:44
Here is my proof. Suppose there is a finite collection of primes. Let $P$ be its product. Then $P + 1$ is relatively prime to every prime. That is a contradiction, since we know every positive integer splits into a finite product of primes. – ncmathsadist Jul 31 '11 at 2:59
@Nick: He never said that the product of the first x many primes + 1 would be prime. He's saying the exact same thing as you. – mixedmath Jul 31 '11 at 3:19 | 2015-08-05 13:02:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8823665976524353, "perplexity": 217.57013516707923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438044271733.81/warc/CC-MAIN-20150728004431-00214-ip-10-236-191-2.ec2.internal.warc.gz"} |
https://brilliant.org/discussions/thread/educational-websites/ | # Educational Websites
What education websites do you people use other than Brilliant,AOPS,MITOCW,edX,Coursera,udacity??
2 years, 10 months ago
Khan Academy!!! · 2 years, 10 months ago
Also Function Space · 2 years, 10 months ago
Wow, function space is a gr8 site! Thanks for the recommendation! :) · 2 years, 10 months ago
I agree @Happy Melodies · 2 years, 10 months ago
me too · 2 years ago
Are you a member at Function space ? If so , can you explain how the environment there is ? · 2 years ago
i just got to know about function space
i haven't created an ID there yet
the 'me too' was for khan academy · 2 years ago
Ok · 2 years ago
Project Euler,Top Coder,uva,Talent Buddy and Code Chef for all you CS junkies. · 2 years, 10 months ago
Do you use checkio?? · 2 years, 10 months ago
No. I havent heard of it before. What is it? · 2 years, 10 months ago
It's a way to learn Python by exploring a gaming world.....The tasks are structured almost like Project Euler .... just google it and you'll find an option to make an account.... · 2 years, 10 months ago
Also try Quora. It's not an education site, but it really is awesome! · 2 years, 10 months ago
I use Codecademy for programming. · 2 years, 10 months ago
StackExchange · 2 years, 10 months ago
It's a Q & A forum right???I've looked up answers there but never opened an account...time to change that..... · 2 years, 10 months ago
Can't believe that I came across this note after such a long time !!! · 2 years ago
Thanks for starting this discussion.May God bless you. · 2 years, 10 months ago
Glad that you like it! · 2 years, 10 months ago
Like it!!I love it. · 2 years, 10 months ago
Try mathoverflow.com , physicsforidiots.com. P.S. - The latter website is really a good one.It is not a taunt because I also am one. · 2 years, 10 months ago
Thanks for your suggestion...My Physics is also not that good......... · 2 years, 10 months ago
Well it is much better than mine.{For instance just see my tag levels}.Can you just present some kind of a sequence for studying PCM so that I can plan my work better. I also started a discussion in this respect but it did not get any response .{see my feed you will get the started discussion named 'study sequence'. · 2 years, 10 months ago
Functionspace.org · 2 years, 10 months ago
Nobody mentioned wolfram alpha · 2 years ago
I'm surprised to see that nobody has mentioned AoPS. It is a great resource for learning and practicing maths problems! · 2 years, 10 months ago
I asked for websites other than AoPS because everybody knows about that... :) · 2 years, 10 months ago | 2017-03-30 22:38:08 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.921225368976593, "perplexity": 7400.310762931209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218203536.73/warc/CC-MAIN-20170322213003-00292-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://www.cacrs.com/question/which-of-the-following-is-most-true-regarding-corrections-to-crf-entries-according-to-ich-gcp/ | 1
• The monitor should ensure that appropriate corrections, additions, or deletions are made, dated and initialized by the investigator
• The monitor should ensure that appropriate corrections, additions, or deletions are made, dated, explained (if necessary), and initialed by the investigator or designate
• The monitor should ensure that appropriate corrections are initialized by the investigator or designate
• The monitor should ensure that appropriate corrections, additions, or deletions are made, dated, explained (if necessary), and initialized by the investigator | 2021-01-19 15:55:34 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8365338444709778, "perplexity": 7634.510377352009}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519395.23/warc/CC-MAIN-20210119135001-20210119165001-00772.warc.gz"} |
https://lemon.cs.elte.hu/hg/lemon/diff/ef200e268af2/doc/groups.dox | doc/groups.dox
changeset 1202 ef200e268af2 parent 1200 62ba43576f85 child 1204 dff32ce3db71
1.1 --- a/doc/groups.dox Sat Jan 08 22:51:16 2011 +0100
1.2 +++ b/doc/groups.dox Sun Jan 09 00:56:52 2011 +0100
1.3 @@ -561,8 +561,9 @@
1.4 the problem is to find a shortest possible tour that visits each node exactly
1.5 once (i.e. the minimum cost Hamiltonian cycle).
1.6
1.7 -These TSP algorithms should be used with a
1.8 -metric cost function. Otherwise, they could yield worse results.
1.9 +These TSP algorithms are intended to be used with a \e metric \e cost
1.10 +\e function, i.e. the edge costs should satisfy the triangle inequality.
1.11 +Otherwise the algorithms could yield worse results.
1.12
1.13 LEMON provides five well-known heuristics for solving symmetric TSP:
1.14 - \ref NearestNeighborTsp Neareast neighbor algorithm | 2022-05-27 15:49:22 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9142218828201294, "perplexity": 13144.06531424911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00015.warc.gz"} |
https://brilliant.org/discussions/thread/i-need-your-help-in-this-odd-problem/ | # I need your help in this 'odd' problem.
Sum of all 3-digit numbers whose digits are all odd.
options given are
A) 4,94,550
B) 4,04,595
C) 69,375
D) 62,581
It is a problem from the NEST exam 2007. I tried a lot but I am just unable to do it. A detailed solution is welcome! Thank you. :)
Note by Dhanashree J
3 years, 6 months ago
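(An added sketch of one possible approach, not from the original note: every digit must come from {1, 3, 5, 7, 9}, so there are 5^3 = 125 such numbers and each odd digit appears 125/5 = 25 times in each place. The digit total per place is 25(1+3+5+7+9) = 625, so the overall sum is 625(100 + 10 + 1) = 69,375, which is option C.)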
| 2017-12-16 09:04:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9993330240249634, "perplexity": 10621.419124084061}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948587496.62/warc/CC-MAIN-20171216084601-20171216110601-00203.warc.gz"}
https://msp.org/ant/2018/12-9/p06.xhtml | #### Vol. 12, No. 9, 2018
Dynamics on abelian varieties in positive characteristic
### Appendix: Robert Royals and Thomas Ward
Vol. 12 (2018), No. 9, 2185–2235
##### Abstract
We study periodic points and orbit length distribution for endomorphisms of abelian varieties in characteristic $p>0$. We study rationality, algebraicity and the natural boundary property for the dynamical zeta function (the latter using a general result on power series proven by Royals and Ward in the appendix), as well as analogues of the prime number theorem, also for tame dynamics, ignoring orbits whose order is divisible by $p$. The behavior is governed by whether or not the action on the local $p$-torsion group scheme is nilpotent.
##### Keywords
abelian variety, inseparability, fixed points, Artin–Mazur zeta function, recurrence sequence, natural boundary
##### Mathematical Subject Classification 2010
Primary: 37P55
Secondary: 11N45, 14G17, 14K02, 37C25, 37C30 | 2019-11-17 18:34:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37668585777282715, "perplexity": 2921.487848689308}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669225.56/warc/CC-MAIN-20191117165616-20191117193616-00396.warc.gz"} |
https://www.physicsforums.com/threads/math-stuff-that-hasnt-been-proven.519698/ | Math stuff that hasn't been proven
1. Aug 6, 2011
micromass
In elementary school or high school, we often use stuff that has never actually been proven (in that class). For example
- Pythagoras' theorem.
- Addition of natural numbers is associative.
- Every number can be uniquely (up to order) decomposed in prime factors.
Accepting such things really annoyed me; I would always ask why something is true. The answer that most teachers gave me was "can you find an example where it doesn't work," sigh. I had to wait until university to actually see a proof for such things...
So, were you ever annoyed that something wasn't proven in school?? And what would have liked to see a proof/reason of??
2. Aug 6, 2011
BloodyFrozen
- Every number can be uniquely (up to order) decomposed in prime factors.
This one bothers me a lot!
Also proof of quadratic formula (easy to derive though)
and...
$e^x = \lim_{x\rightarrow\infty} (1+1/x)^x$
The teacher would just say that's what ex is (and give the calculus notation:grumpy:), but not tell us how to get it, even in the most basic terms
And one last thing,
The SA and Volume formulas in the back of the book. Never knew how they got it until I read about rotational volume and Archimedes' way of proving some of them
Last edited: Aug 6, 2011
3. Aug 6, 2011
chiro
I found that to be pretty much the case for most (if not all) math taught in primary/high school.
Also most of the students would constantly remark why we even need to do an integral and that it has "no use in society".
In some ways I can empathize with those students because had they taken a few uni courses, they might have changed their perspective and maybe even enjoyed or appreciated what they were learning.
4. Aug 6, 2011
disregardthat
In statistics it has bothered me why we could use the normal distribution in certain situations. Even at basic university level it is not proved (at least where I study).
5. Aug 6, 2011
micromass
Exactly!!!! Every time when encountering a statistics problem, they assume a certain distribution. It was never very clear to me how we could ever know the distribution of an event. This has always bothered me!
6. Aug 7, 2011
HallsofIvy
In applications of mathematics, you have to start with some model. What model you use depends upon the situation. I don't know about you but when I first learned probability distributions I also learned why they would be useful for modeling specific situations. For example, you can develop the Poisson distribution as a model that "expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event" (http://en.wikipedia.org/wiki/Poisson_distribution)
The normal distribution that you mention is especially important because of one of the most important theorems in statistics, the "Central Limit Theorem":
If you have a large sample from any probability distribution (with finite moments) with mean $\mu$ and standard deviation $\sigma$ then the average value of the sample is approximately normally distributed with mean $\mu$ and standard deviation $\sigma/\sqrt{n}$, where $n$ is the sample size. And the larger the sample, the better the approximation.
In other words, no matter what the actual distribution is, the average of your sample will be at least approximately normally distributed.
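A quick numerical illustration (added here, not part of the original post): sample means from a skewed exponential distribution already look normal, with the spread the theorem predicts.
import numpy as np
rng = np.random.default_rng(0)
n = 100                                           # sample size
means = rng.exponential(scale=2.0, size=(10000, n)).mean(axis=1)
print(means.mean(), means.std())                  # close to mu = 2.0 and sigma/sqrt(n) = 0.2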
7. Aug 7, 2011
I like Serena
Nope. Those things never annoyed me.
I always knew these things were outside the scope of the class material and that none of the other students would be interested in it.
That didn't stop me from finding out for myself, elsewhere.
The nice thing about the class materials was that it gave me an overview of what there was, and that it triggered my curiosity to want to learn more!
By now I have discovered that there is simply too much to learn (or want to learn). :grumpy:
So some things I take for granted, and some things that pique my interest, I delve into wide and deep. :!!)
Last edited: Aug 7, 2011
8. Aug 7, 2011
Dr. Seafood
In high school I found it so silly and at times upsetting how much we take for granted.
As a math tutor, I try to prove almost all the results I use. I'm teaching really elementary calculus right now, but I'm trying to be as rigorous as possible without being silly -- "Silly" meaning that I go ahead and demonstrate existence and uniqueness of "0" with respect to ℝ when I'm just trying to teach a first-year science major what "derivative" means. I feel a good analysis book or course would probably have that expectation, but it's unnecessary as far as the scope of elementary calculus goes.
That said, I prove as much as possible. I present limits in their rigorous epsilon-delta form, and prove the useful so-called "limit laws" (not all of them because there are many, I leave some of them as exercises (lol math)). I take no differentiation rule for granted. I prove the squeeze theorem for real functions, Fermat's theorem, mean value theorem. I do not usually prove the extreme value theorem; I think that proof is too difficult and unnecessary for our purposes, so I omit it and present an intuitive geometrical sketch. That said, with one of my students I proved the intermediate value theorem, which I regret now because it's a pretty confusing proof (it relies on completeness, and I had to define supremum). I try not to stray too far from our chosen topic, but I also try to take for granted as little as possible.
Some rigour that I feel is necessary to present is when developing derivatives for transcendental functions: particularly exponential and trigonometric. I was tired of being told "without proof or development, there exists a number "e" such that e^x is its own derivative with respect to x." But I do this in my lessons at first, however, and then proceed to define the function "ln(x)" to be the inverse of e^x. From this definition I prove ln(xy) = ln(x) + ln(y), and ln(x^y) = y ln(x). I implicitly differentiate x = e^y to find the derivative of ln(x) wrt x. I use the fact that b = e^(ln(b)) for all b (by definition of ln) to finally find the derivative of b^x. I use continuity of the logarithm to show that e = lim (1 + 1/x)^x as x → ∞ and use this to approximate the decimal expansion of e.
Anyways, that's an example of the level of rigour I provide when I teach. It keeps the lessons interesting; I feel it becomes too laborious to just say "guess what, the derivative of the exponential is the log times the exponential" and then start using chain rule a billion times. I didn't prove, for instance, that the log is continuous for positive real arguments, and I only make an intuitive "stretching of base" argument to persuade that e exists. But I prefer this kind of lesson because it shows that the number which makes the exponential its own derivative is approximately 2.71828.
Last edited: Aug 7, 2011
9. Aug 7, 2011
HallsofIvy
How do you prove that if you do not assume that the derivative of $e^x$ is $e^x$?
10. Aug 7, 2011
Dr. Seafood
Well, I'm not assuming that e = 2.71828... . The thing that is assumed is that there exists a number e with the property that e^x is its own derivative. Developing the decimal approximation of e from that assumption works out pretty well, as I described before. Of course, side-stepping existence like that is not rigorous, but in defense of my teaching, it's the best I can do when presenting this material...
A limit of importance when developing the derivative of the exponential is L = (a^x - 1)/x, with x tending to 0. The specific assumption made is that there exists a number e such that, setting a = e, we have L → 1 as x → 0. This assumes that the limit exists.
I think it takes a whole different kind of rigour to show the existence of e. The most I can do at the elementary calculus level is make a "stretching" argument, i.e. consider a^(kx) = (a^k)^x, and show that we can change the base of an exponential (by "stretching") to fit the data points of another exponential. We want the derivative at x = 0 to be 1, and it seems you can choose the stretch factor k = 1/L (L defined as in the last paragraph) so that this is possible. This will "stretch" the base of the exponential to be the required number e. This is not rigorous at all; in fact, it's the exact thing micromass referred to in the OP. I just shrug and say "Oh, it exists, okay you better believe me." But I don't feel that the level of precision required here is necessary to teach this topic. I actually don't even know how to delve into rigor with this kind of argument, but the geometry usually makes this seem plausible enough for a student to believe me.
The point I'm trying to make is that at least from this perspective, you can see the motivation for the development of such a number e = 2.71828... . I think that's really important in a teaching setting. Omitting/ignoring the rigorous proof of the existence of e is much less annoying than just presenting the irrational number without providing motivation.
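(Added numeric check, not part of the post: both limits discussed above are easy to watch converge.)
import math
for x in (10, 1000, 100000):
    print(x, (1 + 1/x)**x)           # tends to e = 2.71828...
for h in (0.1, 1e-3, 1e-6):
    print(h, (math.e**h - 1)/h)      # the limit L equals 1 when the base is e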
Last edited: Aug 7, 2011
11. Aug 7, 2011
Bogrune
I've got one: How is π equal to the ratio of a circle's circumference to its diameter?
12. Aug 7, 2011
dalcde
We define pi to be the ratio. It's not a magic number that somehow is the ratio.
13. Aug 7, 2011
micromass
Yes, but why is the ratio a constant?? That seems nontrivial to me...
14. Aug 7, 2011
Dr. Seafood
^ Actually, that's not surprising to me really: all circles are similar to one another, so the ratio between circumference and diameter shouldn't change when we change the diameter. That the ratio happens to be close to 3.14 is something I'm interested in finding ...
15. Aug 8, 2011
Bogrune
Well, visually it looks as if the ratio of a circle's circumference to its diameter is a bit less than 3.141592653...
16. Aug 8, 2011
Dr. Seafood
^ How did you draw that??
17. Aug 8, 2011
Bogrune
I simply used a compass, and I gave it a radius of 0.5. I then drew a line through it to make its diameter, and then I cut a few pieces of string of its approximate size, and I "wrapped" them around the circle. Although I think I trimmed them inaccurately...
18. Aug 8, 2011
romsofia
Why ln(0) is undefined, without looking at the graph. Never understood it.
19. Aug 8, 2011
Dr. Seafood
ln(x) is the number y such that e^y = x, so ln(0) asks for y such that e^y = 0. But e^y > 0 for all y, so this is not possible in the real domain; i.e. ln(0) asks for a nonsense evaluation.
20. Aug 8, 2011
daveb
For quite some time I've been of the opinion that a rudimentary discussion of rings, fields and groups would be of great benefit to high school algebra students, so that they understand why they are learning what they are learning. | 2018-01-21 09:20:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8503824472427368, "perplexity": 574.778002473164}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084890394.46/warc/CC-MAIN-20180121080507-20180121100507-00503.warc.gz"}
https://zbmath.org/?q=an:1328.19012 | Equivariant $$K$$-Chevalley rules for Kac-Moody flag manifolds. (English) Zbl 1328.19012
The paper under review supplies $$K$$-theoretic versions of the equivariant Chevalley rule holding for the infinite dimensional spaces known as Kac-Moody thick flag manifolds, investigated in great detail by M. Kashiwara [in: Algebraic analysis, geometry, and number theory, Proc. JAMI Inaugur. Conf., Baltimore/MD (USA) 1988, 161–190 (1989; Zbl 0764.17019)], in the realm of infinite dimensional Lie algebras. Within this framework, the authors prove four cancellation-free Chevalley formulas (Theorem 3.4 and Theorem 4.8), i.e. expansions with all positive coefficients of the product of the equivariant class of an equivariant line bundle with an arbitrary Schubert variety.
In the finite-dimensional context, recall that if $$G$$ is a semisimple Lie group and $$B$$ a Borel subgroup, the homogeneous space $$G/B$$ is a projective variety, because $$B$$ is parabolic (indeed is the minimal parabolic subgroup of $$G$$). The stratification of $$G/B$$ in terms of closures of affine cells of a natural cellular decomposition was already studied by C. Chevalley [Proc. Symp. Pure Math. 56, 1–23 (1994; Zbl 0824.14042)]. For special choices of $$G$$ and $$B$$ one recovers more classical situations. Two main instances are provided by the complete flags of $${\mathbb C}^n$$ and the Grassmannian $$G(k,n)$$, parametrizing inclusions $$0\subseteq W\subseteq {\mathbb C}^n$$, where $$W$$ is a vector subspace of $${\mathbb C}^n$$ of dimension $$k$$. The Chevalley formula for $$G(k,n)$$ rules precisely the intersection $$\sigma_1\sigma_\lambda$$ of the generator of the Picard group of $$G(k, n)$$ with arbitrary Schubert cycles $$\sigma_\lambda$$ (parametrized by partitions contained in a $$k\times (n-k)$$ rectangle). For the manifold $$Fl({\mathbb C}^n)$$ parametrizing complete flags of subspaces of $${\mathbb C}^n$$, the Chevalley rule is known as Monk rule. It prescribes the intersection of the basis of the Picard group, generated by the $$n$$ classes corresponding to simple permutations, with natural defined Schubert varieties associated to some fixed reference flag. Equivariant versions of the Chevalley formula for Grassmannians are well known (see e.g. A. Knutson and T. Tao [Duke Math. J. 119, No. 2, 221–260 (2003; Zbl 1064.14063)]).
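For orientation, in the Grassmannian case the rule just described is the special case $$p=1$$ of the classical Pieri rule: $$\sigma_1 \cdot \sigma_\lambda = \sum_{\mu} \sigma_\mu$$, where the sum runs over the partitions $$\mu$$ obtained from $$\lambda$$ by adding a single box, subject to $$\mu$$ still fitting inside the $$k\times (n-k)$$ rectangle; for instance, in $$G(2,4)$$ one has $$\sigma_1\cdot\sigma_1 = \sigma_2 + \sigma_{1,1}$$.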
The transition to the equivariant setting is not straightforward, but it naturally demands to be considered and investigated. In fact, one can think of the equivariant cohomology, and thus of the corresponding equivariant K-theory, of $$G/B$$ with respect to the action of a maximal sub-torus of $$B$$. Equivariant Chevalley formulas in the $$K$$-theory of $$G/B$$ are also well known and yield the so-called Monk's formula in type A. So, the generalization to the Kac-Moody thick flag varieties, dealt with in the paper under review, is a truly important step, and it can be viewed as a natural continuation of previous investigations published in a couple of articles by the first author and A. Postnikov [Int. Math. Res. Not. 2007, No. 12, Article ID rnm038, 65 p. (2007; Zbl 1137.14037)] and [Trans. Am. Math. Soc. 360, No. 8, 4349–4381 (2008; Zbl 1211.17021)].
It is remarkable that the formulas the authors obtain specialize to those for the classical flag manifolds recalled above. This desirable feature enables the authors to detect and fix a gap in the proof of previous results by S. Griffeth and A. Ram [Eur. J. Comb. 25, No. 8, 1263–1283 (2004; Zbl 1076.14068)] and H. Pittie and A. Ram [Electron. Res. Announc. Am. Math. Soc. 5, No. 14, 102–107 (1999; Zbl 0947.14025)].
The main tools used in the paper are the theory of Lakshmibai-Seshadri paths, treated in Section 3.4, and the alcove model, described in the aforementioned papers by Lenart and Postnikov, together with further technicalities which are partly explained and partly referred precisely to the appropriate literature.
The authors devote the final Section 5 of the article to describing a few illuminating examples, including the very useful one concerning the affine Grassmannian. One should dutifully add that the various Kac-Moody Chevalley rules have been implemented in the software Sage and will soon be available for free public distribution. The paper ends with an essential bibliographical list. The references, on the other hand, are so precisely distributed along the exposition that the interested reader can find, although not without personal effort, his/her own path through the labyrinth of the many sophisticated mathematical techniques and background facts invoked and displayed in this truly fascinating article.
##### MSC:
19L47 Equivariant $$K$$-theory
17B67 Kac-Moody (super)algebras; extended affine Lie algebras; toroidal Lie algebras
22E67 Loop groups and related constructions, group-theoretic treatment
##### Software:
combinat; SageMath
https://datascience.stackexchange.com/questions/47824/understanding-youtube-recommender-candidate-generation-step | # Understanding Youtube recommender (candidate generation step)
I'm trying to understand Deep Neural Networks for YouTube Recommendations.
Their candidate generation step outputs the top N items:
• via softmax (with negative sampling) at training time.
• via nearest neighbor at serving time.
1. I guess $$v_j$$ represents (going from the softmax layer to the nearest-neighbor index)
the top-N videos you get via softmax, represented in the original encoding (the same encoding used for the input, i.e. for the embedded video watches).
Apparently, though, the $$v_j$$ are in a different encoding from the input encodings.
The softmax layer outputs a multinomial distribution over the same 1M video classes with a dimension of 256 (which can be thought of as a separate output video embedding)
I'm trying to understand what they mean by interpreting the softmax output as a separate output video embedding. I thought a softmax layer that outputs 1M classes has a dimension of 1M, so where does 256 come from? (It's the same question as How to create a multi-dimensional softmax output in Tensorflow? and I don't think it has been answered there.)
2. The user vector $$u$$ is the output of the final ReLU unit, although I'm not sure what this user vector is used for.
3. I guess at serving time, to pick the top N for a given user, the user vector $$u$$ is used for the nearest-neighbor search. But my understanding of nearest neighbor is that, for a given vector, it finds the nearest vectors in the same space (such as: given a movie, find the nearest movies). Here, however, you are given a user and need to find the top-N videos. How does that work?
My best guess is that, for a given user, you get a user vector as the ReLU output, then find user-user nearest neighbors, and combine their top-N items obtained at training time. But it's just a guess...
• did you ever figure this out? – Dan Scally Sep 18 '19 at 13:52
$$v_j$$ is the learned vector of weights (dimension 256) that connects the last hidden layer (dimension 256) to the corresponding output node for video class $$j$$. The last hidden layer is the user embedding vector $$u$$.
The paper uses a vocabulary $$V$$ of 1M video classes so the deep neural net learns 1M vectors of weights that connect the last hidden layer to each class: $$[v_1, \ldots, v_{1M}]$$. Negative sampling is used to train such a model with so many classes.
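To make the shapes concrete: the output layer is just a $$1M \times 256$$ weight matrix whose $$j$$-th row is $$v_j$$, and negative sampling avoids touching all 1M rows on every update. Below is a toy NumPy version of one common negative-sampling objective — only a stand-in for the candidate-sampling loss actually used in the paper, with made-up names and shapes:

```python
import numpy as np

def neg_sampling_loss(u, V, pos, negs):
    """u: (256,) user vector; V: (num_classes, 256) output embeddings v_j;
    pos: index of the watched video; negs: indices of sampled negative classes."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    loss = -np.log(sigmoid(V[pos] @ u))            # pull the true class toward u
    loss -= np.log(sigmoid(-(V[negs] @ u))).sum()  # push sampled negatives away
    return loss
```

Only the rows `V[pos]` and `V[negs]` receive gradient on such an update, which is what makes training a 1M-class softmax tractable.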
Below is the softmax function from page 2, Section 3.1, written with $$v_j$$ and $$u$$.
$$P(w_t=i \mid U,C) = \frac{e^{v_i u}}{\sum_{j \in V} e^{v_{j}u}}$$
I believe $$v_j$$ is "thought of as a separate output video embedding" because $$v_j$$ can be interpreted as a compressed representation of video j in a 256 dimensional vector space.
Since $$e^x$$ is monotonically increasing, we just care about the dot product $$v_i u$$ in the numerator of the equation above. This is why "the scoring problem reduces to a nearest neighbor search in the dot product space", where we simply find the nearest neighbors $$v$$ to $$u$$.
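A minimal NumPy sketch of that serving-time step (the sizes are made up for illustration; in production the brute-force scoring below is replaced by an approximate maximum-inner-product / nearest-neighbor index):

```python
import numpy as np

num_videos, dim, N = 10_000, 256, 5       # toy stand-ins for ~1M videos
V = np.random.randn(num_videos, dim)      # rows are the output embeddings v_j
u = np.random.randn(dim)                  # user vector from the last ReLU layer

scores = V @ u                            # dot product v_j . u for every video
top_n = np.argpartition(-scores, N)[:N]   # N best candidates, unordered
top_n = top_n[np.argsort(-scores[top_n])] # order them by score
print(top_n, scores[top_n])
```

Ranking by the dot product gives the same ordering as ranking by the softmax probability, since the denominator is shared across classes and $$e^x$$ is increasing.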
https://zenodo.org/record/4603494/export/csl | Dataset Open Access
# RP-VIO: Robust Plane-based Visual-Inertial Odometry (Dataset)
Ram, Karnik; Kharyal, Chaitanya; Harithas, Sudarshan; Krishna, Madhava
### Citation Style Language JSON Export
{
"DOI": "10.1234/TODO",
"language": "eng",
"author": [
{
"family": "Ram, Karnik"
},
{
"family": "Kharyal, Chaitanya"
},
{
"family": "Harithas, Sudarshan"
},
{
"family": "Krishna, Madhava"
}
],
"issued": {
"date-parts": [
[
2021,
3,
14
]
]
},
"abstract": "<p>Dataset accompanying our paper on 'RP-VIO: Robust Plane-based Visual-Inertial Odometry for Dynamic Environments'. Data description is available on our project page: <a href=\"https://github.com/karnikram/rp-vio\">https://github.com/karnikram/rp-vio</a><br>\n<br>\n<strong>Paper abstract</strong>: Modern visual-inertial navigation systems (VINS) are faced with a critical challenge in real-world deployment: they need to operate reliably and robustly in highly dynamic environments. Current best solutions merely filter dynamic objects as outliers based on the semantics of the object category. Such an approach does not scale as it requires semantic classifiers to encompass all possibly-moving object classes; this is hard to define, let alone deploy. On the other hand, many real-world environments exhibit strong structural regularities in the form of planes such as walls and ground surfaces, which are also crucially static. We present RP-VIO, a monocular visual-inertial odometry system that leverages the simple geometry of these planes for improved robustness and accuracy in challenging dynamic environments. Since existing datasets have a limited number of dynamic elements, we also present a highly-dynamic, photorealistic synthetic dataset for a more effective evaluation of the capabilities of modern VINS systems. We evaluate our approach on this dataset, and three diverse sequences from standard datasets including two real-world dynamic sequences and show a significant improvement in robustness and accuracy over a state-of-the-art monocular visual-inertial odometry system. We also show in simulation an improvement over a simple dynamic-features masking approach. Our code and dataset are publicly available.</p>",
"title": "RP-VIO: Robust Plane-based Visual-Inertial Odometry (Dataset)",
"type": "dataset",
"id": "4603494"
}
http://www.mpim-bonn.mpg.de/de/node/7064 | # Homotopical Morita theory for corings
Speaker:
Kathryn Hess
Affiliation:
EPF Lausanne
Date:
Mon, 2017-02-13 09:30 - 10:30
Location:
MPIM Lecture Hall
(Joint work with Alexander Berglund)
A coring $(A,C)$
consists of an algebra $A$ in a symmetric monoidal category and a
coalgebra $C$ in the monoidal category of $A$-bimodules. Corings and
their comodules arise naturally in the study of Hopf-Galois extensions
and descent theory, as well as in the study of Hopf algebroids. In this
lecture, I will address the question of when two corings $(A,C)$ and
$(B,D)$ in a symmetric monoidal model category $\mathcal V$ are
homotopically Morita equivalent, i.e., when their respective categories
of comodules $\mathcal V_{A}^{C}$ and $\mathcal V_{B}^{D}$ are Quillen
equivalent.
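To fix ideas (the standard definitions, paraphrased here for context): an $A$-coring is an $A$-bimodule $C$ equipped with $A$-bimodule maps $\Delta\colon C \to C \otimes_{A} C$ and $\varepsilon\colon C \to A$ satisfying coassociativity and counit axioms, and a $C$-comodule is an $A$-module $M$ in $\mathcal V$ together with a coassociative, counital coaction $\rho\colon M \to M \otimes_{A} C$; the category $\mathcal V_{A}^{C}$ above is the category of such comodules.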
The category of comodules over the trivial coring $(A,A)$ is isomorphic
to the category $\mathcal V_{A}$ of $A$-modules, so the question
englobes that of when two algebras are homotopically Morita equivalent.
I will begin by discussing this special case, extending previously known
results.
To approach the general question, I will introduce the notion of a
braided bimodule and show that adjunctions between $\mathcal V_{A}$ and $\mathcal V_{B}$ that
lift to adjunctions between $\mathcal V_{A}^{C}$ and $\mathcal V_{B}^{D}$ correspond precisely to braided
bimodules. I will describe descent-type criteria for when a braided
bimodule induces a Quillen equivalence between $\mathcal V_{A}^{C}$ and
$\mathcal V_{B}^{D}$. In particular, I will provide conditions under
which a morphism of corings induces a Quillen equivalence, providing a
homotopic generalization of results by Hovey and Strickland on Morita
equivalences of Hopf algebroids. As an illustration of the general
theory, I will describe in detail the homotopical Morita theory of
corings in the category of chain complexes over a commutative ring.