url | text | metadata
---|---|---
http://math.stackexchange.com/questions/25875/is-this-quantification-correct/25877 | # Is this quantification correct?
Let $Q(x,y)$ be the statement $y = 2x+1$. What are the truth values of the following? The universe is $\mathbb{Z}^+ = \{1,2,3,\dots\}$.
(a) $\forall x\exists y Q(x,y)$ This is true
(b) $\exists x \forall y Q(x,y)$ This is false.
-
That's correct. – Myself Mar 9 '11 at 1:38
## 1 Answer
The first is true. You are correct. For the second, truth would imply that $2x+1=1$ has a positive integer as a solution, which is false, so you are correct again.
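A quick empirical check over a truncated universe (my illustration, not part of the original answer; the cutoff `N` is arbitrary and only suggestive, since $\mathbb{Z}^+$ is infinite):

```python
N = 100
def Q(x, y):
    return y == 2 * x + 1

# (a) for every x there is a y with Q(x, y): true, since y = 2x + 1 always works
a = all(any(Q(x, y) for y in range(1, 2 * N + 2)) for x in range(1, N + 1))

# (b) some single x satisfies Q(x, y) for every y: false, since Q(x, y) forces y = 2x + 1
b = any(all(Q(x, y) for y in range(1, N + 1)) for x in range(1, N + 1))

print(a, b)  # True False
```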
-
It's worse than that for the second one, it would imply that for some $x\in\mathbf Z^+$ it holds that $2x-y = 1 = 2x-y'$ for all $y,y'\in\mathbf Z^+$, so it would imply that $y=y'$, thus $|\mathbf Z^+|=1$. – Myself Mar 9 '11 at 1:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 10, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.954059362411499, "perplexity_flag": "head"} |
http://physics.stackexchange.com/questions/24777/dissipation-when-the-temperature-is-not-constant?answertab=votes | # Dissipation when the temperature is not constant
Consider a process where some chemical species diffuses from one part of a system (which I'll call $A$) to another ($B$) at a rate $r$ $\text{mol}\cdot \mathrm s^{-1}$. If the system's temperature is constant and homogeneous, we say that energy is dissipated at a rate $$D = r(\mu_B - \mu_A),$$ where $\mu_A$ and $\mu_B$ are the chemical potentials of the particles in parts $A$ and $B$ of the system respectively. The dissipation is always positive, because $D=T d_i\!S/dt$, where $d_i\!S/dt$ is the rate at which entropy is produced within the system due to the transport process.
However, if the temperatures of the two parts of the system are different then the second law says $$\frac{d_i\!S}{dt} = r\left( \frac{\mu_B}{T_B} - \frac{\mu_A}{T_A} \right) \ge 0,$$ which means that the above expression for $D$ can be negative if $T_A>T_B$.
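For reference (this reduction is not in the original question, but follows directly from the two formulas above): setting $T_A=T_B=T$ gives $$T\,\frac{d_i\!S}{dt} = T\,r\left(\frac{\mu_B}{T}-\frac{\mu_A}{T}\right) = r(\mu_B-\mu_A) = D,$$ so the constant-temperature expression is recovered, and its non-negativity is only guaranteed in that homogeneous case.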
So my questions are
• Is the term "dissipation" generally thought to be meaningful in systems that don't have a constant, homogeneous temperature?
• If yes, what is the correct expression for it in the above scenario?
• Most importantly, does anyone know of a reference where the concept of dissipation in systems with a non-constant or non-homogeneous temperature is discussed?
-
I wonder if crossposting to a private beta is allowed.. Chem.SE would like this 'un, but you don't have an account there. I can do it, but I don't know if it's allowed.. – Manishearth♦ May 3 '12 at 17:20
I have an account there now :) (I'd committed to the proposal but oddly didn't get an email to say it'd gone live, so I didn't know until you mentioned it.) What's the best way to cross-post it - should I just post a new question there with the same text? – Nathaniel May 3 '12 at 17:38
#account: check your spam folder; if it's not there, then file a bug report on MSO. #crosspost: only if you can make the two questions distinct meta.stackoverflow.com/questions/71938/… Otherwise, we can migrate. On Phy.SE we have more activity, but fewer users who know this stuff. On chem, less overall activity but more users who can answer this. So the better site (for getting an answer) is debatable. I say the post is a better fit on chem, but maybe not for its ability to get an answer quickly. – Manishearth♦ May 3 '12 at 17:59
I'd say go ahead and migrate it. There seems to be an impressive knowledge of chemical thermodynamics over there, and it would be good to see more questions like this on that site. – Nathaniel May 3 '12 at 20:45
Flag it (for ♦ moderator attention) if you want a migration. Anyway, I already did that :) – Manishearth♦ May 4 '12 at 0:45
show 3 more comments | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 12, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9499592185020447, "perplexity_flag": "middle"} |
http://mathoverflow.net/questions/89819/finding-purely-transcendental-parts-of-field-extensions | ## Finding purely transcendental parts of field extensions
If we have a field $K$ such that $K\cong K(t)$ (i.e. it is isomorphic to the field you get if you adjoin one transcendental) then is there necessarily a subfield $L\lt K$ such that $L\ncong L(t)$ and $K$ is a purely transcendental extension of $L$?
The basic idea is that if we have something like $K={\mathbb Q}(u_i,t_i^{1/2^n})_{i,n\in\mathbb N}$ then we are looking for the field ${\mathbb Q}(t_i^{1/2^n})_{i,n\in{\mathbb N}}$.
-
## 1 Answer
I don't know if this is a counterexample or not, but what would you do if $K$ is the field of fractions of the monoid ring $R = \mathbb{Z}[\prod_{i=1}^\infty \mathbb{N}]$? ($R$ is an integral domain because of the lexicographic ordering on the monoid, and $R[t] \cong \mathbb{Z}[\mathbb{N}\times \prod_{i=1}^\infty \mathbb{N}] \cong R$. Note $K$ is the same as the field of fractions of the group ring $\mathbb{Z}[\prod_{i=1}^\infty \mathbb{Z}]$. Baer proved that $\prod_{i=1}^\infty \mathbb{Z}$ is not free, but I don't know if this rules out the possibility of a transcendence basis for $K$ over $\mathbb{Q}$, let alone over any other $L$ satisfying $L\not\cong L(t)$.)
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 19, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9139341115951538, "perplexity_flag": "head"} |
http://mathhelpforum.com/differential-geometry/156341-unit-sphere-r-3-arcwise-connected.html | # Thread:
1. ## The unit sphere in R^3 is arcwise connected
So I attempted to find the intersection of the unit sphere with a plane that contains any two arbitrary points $a, b$ on the sphere and the origin, but I only came up with a really convoluted expression that I am not even sure can be used to find a function $f: [0,1] \to \mathbb{R}^{3}$ such that $f(0) = a, f(1) = b$ and for all $t\in [0,1]$, $f(t)$ is on the unit sphere. Any help would be appreciated.
2. Originally Posted by Pinkk
So I attempted to find the intersection of the unit sphere with a plane that contains any two arbitrary points $a, b$ on the sphere and the origin, but I only came up with a really convoluted expression that I am not even sure can be used to find a function $f: [0,1] \to \mathbb{R}^{3}$ such that $f(0) = a, f(1) = b$ and for all $t\in [0,1]$, $f(t)$ is on the unit sphere. Any help would be appreciated.
You can parametrize the sphere with two angles $\vartheta, \varphi$ so that you get $x=r\sin\vartheta \cos\varphi, y = r\sin \vartheta \sin\varphi, z=r\cos\vartheta$.
This is a map $g:\, [0;2\pi)\times [0;\pi]\to \mathbb{R}^3, (\varphi,\vartheta)\mapsto (x,y,z)$.
Assuming that you have coordinates $(\varphi_a,\vartheta_a), (\varphi_b,\vartheta_b)$ for two given points $a,b$ on the unit sphere, you next parametrize the line segment connecting the points $(\varphi_a,\vartheta_a), (\varphi_b,\vartheta_b)$, which lie in the rectangular domain of these two parameters. This gives you a map $h:\, [0;1]\to [0;2\pi)\times [0;\pi]$.
Now a parametrization of the arc connecting a and b on the sphere is $f\,: [0;1]\to \mathbb{R}^3, t\mapsto g(h(t))$, I think.
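A small numerical sketch of this composition (my addition, not part of the original post; the angles follow the $z=\cos\vartheta$ convention with $\vartheta\in[0,\pi]$, and the endpoints are chosen arbitrarily for illustration):

```python
import numpy as np

def g(phi, theta):
    # spherical parametrization of the unit sphere
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def f(t, a_angles, b_angles):
    # f(t) = g(h(t)), where h is the straight segment between the two parameter pairs
    phi_a, th_a = a_angles
    phi_b, th_b = b_angles
    return g(phi_a + t * (phi_b - phi_a), th_a + t * (th_b - th_a))

a, b = (0.0, 0.1), (np.pi / 2, np.pi / 2)
for t in (0.0, 0.5, 1.0):
    p = f(t, a, b)
    print(t, p, np.linalg.norm(p))  # the norm stays 1, so f(t) stays on the sphere
```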
3. What would a parameterization of the line segment look like, simply $h(t) = (\varphi_a,\vartheta_a) + t(\varphi_b - \varphi_a,\vartheta_b - \vartheta_a)$?
4. Originally Posted by Pinkk
What would a parameterization of the line segment look like, simply $h(t) = (\varphi_a,\vartheta_a) + t(\varphi_b - \varphi_a,\vartheta_b - \vartheta_a)$?
Sure, why not? Since the domain of the parametrization of the sphere by $g:\, (\varphi,\vartheta)\to\mathbb{R}^3$ is a rectangle and thus convex, that parametrization h(t) of the line segment is sure to give you a line segment that remains completely within the domain of g. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 21, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9392900466918945, "perplexity_flag": "head"} |
http://quant.stackexchange.com/questions/2435/how-to-use-itos-formula-to-deduce-that-a-stochastic-process-is-a-martingale/2442 | # How to use Itô's formula to deduce that a stochastic process is a martingale?
I'm working through different books about financial mathematics and solving some problems I get stuck.
Suppose you define an arbitrary stochastic process, for example
$X_t := W_t^8-8t$ where $W_t$ is a Brownian motion.
The question is, how could I deduce that this stochastic process is a martingale or not using Itô's formula?
The only thing I know is:
Looking at the stochastic integral $\int K\, dM$, where $M=\{M_t\}$ is a martingale which is right-continuous with left limits, null at $0$, and satisfies $\sup_t E[M_t] < \infty$, and $K$ is a bounded, predictable stochastic process, then $\int K\, dM$ is a martingale too.
But I'm not sure if this is helpful in this situation. An example of how to solve such types of problems would be appreciated.
Just to be sure, I state Itô's formula which I know so far.
Let $\{X_t\}$ a general $\mathbb{R}^n$ valued semimartingale and $f: \mathbb{R}^n \to \mathbb{R}$ such that $f\in C^2$. Then $\{f(X_t)\}$ is again a semimartingale and we get Itô's formula (in differential form):
$$df(X_t) = \sum_{i=1}^n f_{x_i}(X_t)dX_{t,i} + \frac{1}{2}\sum_{i,j=1}^n f_{x_i,x_j}(X_t)d\langle X_i,X_j\rangle_t$$
-
## 3 Answers
In general, if you have a process that you can write in the form $F(B_t,t)$, where $F$ is $\mathcal{C}^{2,1}$, then Itô's lemma gives you the drift term and diffusion term of $dF$. If the resulting SDE has a null drift (that is where the Black-Scholes PDE comes from), you only get a local martingale. For it to be a proper martingale you need additional integrability conditions, such as those mentioned below.
But you have easier sufficient conditions, in particular if you only need the martingale property over finite time intervals. Those conditions are about the integrability of the quadratic variation process, but as I can't remember them exactly, I won't try to derive them here. They can be found in any book on stochastic integration with respect to Brownian motion.
Best regards
-
For Itô Processes $dX(t) = \mu(t) \mathrm{d}t + \sigma(t) \mathrm{d}W(t)$ you have the result that (under appropriate assumptions which ensure that the local martingale is a martingale, e.g. $E( (\int \sigma(t)^2 \mathrm{d}t )^{1/2} ) < \infty$, etc.): $X$ is a martingale $\Leftrightarrow$ $\mu(t) = 0$.
So in order to check whether a process $X$ is a martingale, use Itô to get its "$\mathrm{d}X = \ldots$" representation and check whether the coefficient of $\mathrm{d}t$ is zero.
(I believe the exact result can be found in Øksendal, Bernt K.: Stochastic Differential Equations: An Introduction with Applications)
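As a concrete check on the process from the question (my own working, not part of this answer): with $f(w)=w^8$, Itô's formula gives $$d(W_t^8) = 8W_t^7\,dW_t + \tfrac{1}{2}\cdot 56\,W_t^6\,dt = 8W_t^7\,dW_t + 28W_t^6\,dt,$$ so $$dX_t = d(W_t^8 - 8t) = 8W_t^7\,dW_t + (28W_t^6 - 8)\,dt.$$ The $dt$ coefficient $28W_t^6-8$ is not identically zero, so $X_t=W_t^8-8t$ is not a martingale; by contrast, the same computation for $W_t^2-t$ gives zero drift.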
-
Rather simply and generally: when you take the stochastic differential of a process and get no drift term but only an Itô integral, then this process is a martingale. From memory, that is how you retrieve some PDEs whose solutions lead to martingales (take the differential, look at the $dt$ term, then look for solutions that yield a vanishing $dt$ term).
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 25, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9238010048866272, "perplexity_flag": "head"} |
http://en.wikipedia.org/wiki/Controller_(control_theory) | # Controller (control theory)
In control theory, a controller is a device, possibly in the form of a chip, analogue electronics, or computer, which monitors and physically alters the operating conditions of a given dynamical system.[1]:p.21
## Input and Control Variables
A system can either be described as a MIMO system, having multiple inputs and outputs, therefore requiring more than one controller; or a SISO system, consisting of a single input and single output, hence having only a single controller. Depending on the set-up of the physical (or non-physical) system, adjusting the system's input variable (assuming it is SISO) will affect the operating parameter, otherwise known as the controlled output variable. Upon receiving the error signal that marks the disparity between the desired value (setpoint) and the actual output value, the controller will then attempt to regulate controlled output behaviour. The controller achieves this by either attenuating or amplifying the input signal to the plant so that the output is returned to the setpoint. For example, a simple feedback control system, such as the one shown on the right, will generate an error signal that's mathematically depicted as the difference between the setpoint value and the output value, r-y.
A simple feedback control loop illustrates that the error signal is received by controller C, which then either attenuates or amplifies the input signal to the plant.
This signal describes the magnitude by which the output value deviates from the setpoint. The signal is subsequently sent to the controller C which then interprets and adjusts for the discrepancy. If the plant is a physical one, the inputs to the system are regulated by means of actuators.
For example, the heating system of a house can be equipped with a thermostat (controller) for sensing air temperature (output variable) which can turn on or off a furnace or heater when the air temperature drops or exceeds a desired temperature, otherwise known as the setpoint.
In this example, the thermostat is the controller, receiving information of its surroundings which it then uses to regulate the activity of the heater. The heater is the processor that warms the air inside the house to the desired setpoint, usually room temperature. A number of other examples are given in the table below. [1]:p.19
| System | Controlled Outputs Include | Controller | Desired Performance Includes |
|---|---|---|---|
| Aircraft | Course, pitch, roll, yaw | Autopilot | Maintain flight path on a safe and smooth trajectory |
| Furnace | Temperature | Temperature controller | Follow warm-up temperature profile, then maintain temperature |
| Waste treatment | pH value of effluent | pH controller | Neutralize effluent to specified accuracy |
| Automobile | Speed | Cruise Controller | Attain, then maintain selected speed without undue fuel consumption |
The notion of controllers can be extended to more complex systems. In the natural world, individual organisms also appear to be equipped with controllers that assure the homeostasis necessary for survival of each individual. Both human-made and natural systems exhibit collective behaviors amongst individuals in which the controllers seek some form of equilibrium.
## Types of Controlling System
In control theory there are two basic types of control. These are feedback and feed-forward. The input to a feedback controller is the same as what it is trying to control - the controlled variable is "fed back" into the controller. The thermostat of a house is an example of a feedback controller. This controller relies on measuring the controlled variable, in this case the temperature of the house, and then adjusting the output, whether or not the heater is on. However, feedback control usually results in intermediate periods where the controlled variable is not at the desired set-point. With the thermostat example, if the door of the house were opened on a cold day, the house would cool down. After it fell below the desired temperature (set-point), the heater would kick on, but there would be a period when the house was colder than desired.
Feed-forward control can avoid the slowness of feedback control. With feed-forward control, the disturbances are measured and accounted for before they have time to affect the system. In the house example, a feed-forward system may measure the fact that the door is opened and automatically turn on the heater before the house can get too cold. The difficulty with feed-forward control is that the effect of the disturbances on the system must be accurately predicted, and there must not be any unmeasured disturbances. For instance, if a window were opened that was not being measured, the feed-forward-controlled thermostat might still let the house cool down.
To achieve the benefits of feedback control (controlling unknown disturbances and not having to know exactly how a system will respond to disturbances) and the benefits of feed-forward control (responding to disturbances before they can affect the system), there are combinations of feedback and feed-forward that can be used.
Some examples of where feedback and feed-forward control can be used together are dead-time compensation, and inverse response compensation. Dead-time compensation is used to control devices that take a long time to show any change to a change in input, for example, change in composition of flow through a long pipe. A dead-time compensation control uses an element (also called a Smith predictor) to predict how changes made now by the controller will affect the controlled variable in the future. The controlled variable is also measured and used in feedback control. Inverse response compensation involves controlling systems where a change at first affects the measured variable one way but later affects it in the opposite way. An example would be eating candy. At first it will give you lots of energy, but later you will be very tired. As can be imagined, it is difficult to control this system with feedback alone, therefore a predictive feed-forward element is necessary to predict the reverse effect that a change will have in the future.
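A toy simulation of the house example above makes the combination concrete (an illustrative sketch, not part of the original article; the heat-loss, heater, and gain numbers are invented):

```python
def simulate(steps=60, setpoint=21.0, outside=0.0, door_open_at=20):
    temp = setpoint
    history = []
    for k in range(steps):
        door_open = k >= door_open_at
        # plant: heat loss through the walls, plus extra loss while the door is open
        loss = 0.05 * (temp - outside) + (0.5 if door_open else 0.0)

        error = setpoint - temp            # feedback: proportional action on the error
        u_fb = 2.0 * error
        u_ff = 5.0 if door_open else 0.0   # feed-forward: sized to cancel the door loss
        u_0 = 10.5                         # bias that balances the wall loss at the setpoint

        heater = max(0.0, u_0 + u_fb + u_ff)
        temp += 0.1 * heater - loss        # each unit of heater power adds 0.1 degree per step
        history.append(round(temp, 2))
    return history

print(simulate())  # the temperature holds near the setpoint even after the door opens
```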
## Types of controllers
Most control valve systems in the past were implemented using mechanical systems or solid state electronics. Pneumatics were often utilized to transmit information and control using pressure. However, most modern industrial control systems now rely on computers for the industrial controller. Obviously it is much easier to implement complex control algorithms on a computer than using a mechanical system.
For feedback controllers there are a few simple types. The simplest is like the thermostat that just turns the heat on if the temperature falls below a certain value and off if it exceeds a certain value (on-off control).
Another simple type of controller is a proportional controller. With this type of controller, the controller output (control action) is proportional to the error in the measured variable.
In feedback control, it is standard to define the error as the difference between the desired value (setpoint) $y_s$ and the current value (measured) $y$. If the error is large, then the control action is large. Mathematically:
$u(t) = K_c \, e(t) + u_0$
where
$u(t)$ represents the control action (controller output),
$e(t)=y_s(t)-y(t)$ represents the error,
$K_c$ represents the controller's gain, and
$u_0$ represents the steady state control action (bias) necessary to maintain the variable at the steady state when there is no error.
It is important that the control action $u$ counteracts the change in the controlled variable $y$ (negative feedback). There are then two cases depending on the sign of the process gain.
In the first case the process gain is positive, so an increase in the controlled variable (measurement) $y$ requires a decrease in the control action $u$ (reverse-acting control). In this case the controller gain $K_c$ is positive, because the standard definition of the error already contains a negative sign for $y$.
In the second case the process gain is negative, so an increase in the controlled variable (measurement) $y$ requires an increase in the control action $u$ (direct-acting control). In this case the controller gain $K_c$ is negative.
A typical example of a reverse-acting system is control of temperature ($y$) by use of steam ($u$). In this case the process gain is positive, so if the temperature increases, the steam flow must be decreased to maintain the desired temperature. Conversely, a typical example of a direct-acting system is control of temperature using cooling water. In this case the process gain is negative, so if the temperature increases, the cooling water flow must be increased to maintain the desired temperature.
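As a minimal illustration of the proportional law above (an illustrative sketch, not part of the original article; the numeric values are invented):

```python
def proportional_control(y_setpoint, y_measured, Kc, u0=0.0):
    """Proportional control action u(t) = Kc * e(t) + u0, with e(t) = y_s - y."""
    error = y_setpoint - y_measured
    return Kc * error + u0

# Reverse-acting case (positive process gain, positive Kc): the measurement rising
# above the setpoint drives the control action below its bias value.
print(proportional_control(y_setpoint=21.0, y_measured=23.0, Kc=2.0, u0=5.0))  # 1.0
```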
Although proportional control is simple to understand, it has drawbacks. The largest problem is that for most systems it will never entirely remove error. This is because when error is 0 the controller only provides the steady state control action so the system will settle back to the original steady state (which is probably not the new set point that we want the system to be at). To get the system to operate near the new steady state, the controller gain, $K_c$, must be very large so the controller will produce the required output when only a very small error is present. Having large gains can lead to system instability or can require physical impossibilities like infinitely large valves.
Alternates to proportional control are proportional-integral (PI) control and proportional-integral-derivative (PID) control. PID control is commonly used to implement closed-loop control.
Open-loop control can be used in systems sufficiently well-characterized as to predict what outputs will necessarily achieve the desired states. For example, the rotational velocity of an electric motor may be well enough characterized for the supplied voltage to make feedback unnecessary.
The drawback of open-loop control is that it requires perfect knowledge of the system (i.e. one knows exactly what inputs to give in order to get the desired output), and it assumes there are no disturbances to the system.
## References
1. ^ a b Goodwin, Graham C.; Graebe, Stefan F.; Salgado, Mario E. (2001). Control System Design. Upper Saddle River, NJ: Prentice Hall. ISBN 0139586539.
http://math.stackexchange.com/questions/116658/how-to-work-rsa-encryption-decryption/116713 | # How to work RSA encryption/decryption
I need an array populated with characters and an integer key for each, and I want to use this set to encode messages and then decode them later on. Essentially I am trying to write an RSA algorithm for this. However, the maths of it is what I am still kind of lost on. In order to encode, I would have to encode pieces of the input string, after reading the corresponding number M for each character.
````
Ciphertext = 0
for i from 1 to length(message) do
    M := ltable[message[i]]   // read the number that corresponds to the character
    ciphertext := ciphertext * ((M^encryptionkey) mod modfactor)   // encode the returned number and add to ciphertext
Loop next
return ciphertext

// To convert all the input message at once, and then encode, I have this algorithm:
for i from 1 to length(message) do
    M := ltable[message[i]]          // get the number that corresponds to the character
    ciphertext := ciphertext * M     // perform operation to update
Loop next
return ((ciphertext^encryptionkey) mod modfactor)
````
i.e Ciphertext = M^e mod n where C is the resulting ciphertext, e is my encryption key and modfactor is the product of my primes.
The code above spews out my input message in an encrypted form. However, when I am to decode to resulting ciphertext, I run into some problem reversing the operations above to get the exact message that was sent as input to my encryption program. Essentially given a String of length 100, I encrypt each character of the input string with the above, but to reverse/decode, I don't seem to be getting the same message.
    cipher = cyphertext
    While cipher > 0 do
        rest = (cipher^decryptionkey) mod modfactor
        message = concatenate(ntable[rest], message)
        wrk = (cipher/rest)/modfactor
    End While
    Return message
The code/algorithm above does not seem to work. The test primes used are p=263 and q=911, and encryptionkey = 27. How do I accomplish this, please?
-
You have to compute $\text{cipher}^{\,d}$ mod $n$ where $d=e^{-1}$ mod $\varphi(n)$ is the decryption key. Could you point out more specifically what "some problem" is? – anon Mar 5 '12 at 12:31
I do have my encryption and decryption keys worked out. The problem is the how of encryption and then decryption the message input, I mean the algorithm for encryption 1 character at a time and then decryption to yield each character at a time. – Kobojunkie Mar 5 '12 at 12:35
The problem is that your message is way bigger than your `modfactor`, so even without encryption if you turned the message into a number and then reduced modulo pq and then turned back into a string of characters, you'd get something different. – anon Mar 5 '12 at 13:18
That is why I had it encoding my information one character at a time initially. If I have an input that hundreds of lines long, I still would like to be able to handle that without needing to keep looking for bigger and bigger prime numbers each time. – Kobojunkie Mar 5 '12 at 13:50
I see. I imagine you're supposed to break the message into pieces, each piece as big as possible without letting the corresponding number go over modfactor - though the primes involved still preferably large. I'm not the go-to guy for details on the protocol though, sorry. – anon Mar 5 '12 at 14:04
## 2 Answers
You're not supposed to encrypt one character at a time! You need to turn an entire message into a single number, and then perform the modulo exponentiation on that plaintext number to get the ciphertext number. Then you do the exponentiation with the private decryption key to reverse the process from the cipher to the plaintext, and then turn that number back into the message.
-
when I convert all my message at once, I get this cipher = 230958. How then do I decode this using my code above? That remains a problem. – Kobojunkie Mar 5 '12 at 12:56
@Kobojunkie: How are you turning strings of characters into numbers, first of all? Just concatenating bits together? If so, after you compute the cipher, the math in your code is correct in that you take it to the power of the decryption key modulo modfactor. After that, you would split the digits up into pieces and convert each piece into a character. I can't vouch for your code or what protocol you're using because I'm only familiar with the number theoretic aspects of RSA.. – anon Mar 5 '12 at 13:02
I have a table/array where each character has a number assigned. So whenever I look up the character in my array, I return the number that corresponds and then perform operations on the number that way. When I go to decode, I get the number, check the table for the character that corresponds to it and concatenate that to be decoded plaintext. – Kobojunkie Mar 5 '12 at 13:03
The problem is in mainly decoding the ciphertext so I can get back the code for the initial ciphertext at least. How do I get back M=14891505242871484726569672376320000000000000 from the ciphertext=230958. That is what I am having problems with, at the moment. – Kobojunkie Mar 5 '12 at 13:07
@Kobo: Okay. How do you put the numbers for each character of the message together? In RSA, the message as a whole becomes one single number. (Or are you doing some strange RSA-lite problem as an exercise you were given? Encrypting each character independently seems a rather pointless substitution cipher..) | Edit: How about you give the values for `modfactor` as well as `encryptionkey` and `decryptionkey` and if possible the message in the OP? (Plus the prime factors of `modfactor` wouldn't hurt either...) – anon Mar 5 '12 at 13:07
Well, here are the things you want to do:
1. Convert the plaintext string into a numeric plaintext.
2. Encrypt the numeric plaintext to produce a ciphertext.
3. Decrypt the ciphertext to produce the numeric plaintext again.
4. Convert the numeric plaintext back into a string.
Judging by your code, you are producing your numeric plaintext (which, for some reason, is called `ciphertext`) by taking all of the characters in the plaintext string and multiplying them together. That won't work, because multiplication is not injective: given `a*b`, it is impossible to tell what `a` and `b` are.
If you want to convert a string into a big number, do this:
````
Set num to 0.
For char in string:
Shift num left by 8 bits.
Add char to num.
End for.
Return num.
````
To convert it back:
````
While num > 0:
Set char to (num mod 2^8).
Prepend char to string.
Shift num right by 8 bits.
End while.
Return string.
````
This algorithm only works if all of your characters are less than 2^8. If some of them aren't, replace 8 with a bigger number.
By the way, if your algorithm still doesn't work, try isolating the problem by skipping steps 1 and 4 entirely. If you can successfully encrypt and decrypt a number, that means steps 2 and 3 are working correctly, and so the problem lies in steps 1 and 4. If you can't, that means steps 2 and 3 are not working correctly, and so the problem lies there.
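If it helps, here is a compact sketch of the whole round trip in Python using the primes from the question (this is my own illustration, not the asker's code; `pow(e, -1, phi)` requires Python 3.8+, and the message must encode to a number smaller than `n = 239593`, so longer messages would still need to be split into blocks):

```python
p, q, e = 263, 911, 27
n, phi = p * q, (p - 1) * (q - 1)      # n = 239593, phi = 238420
d = pow(e, -1, phi)                    # decryption key: e*d = 1 (mod phi)

def string_to_int(s):
    # steps 1 and 4: pack the message bytes into a single number
    return int.from_bytes(s.encode(), "big")

def int_to_string(m):
    return m.to_bytes((m.bit_length() + 7) // 8, "big").decode()

msg = "Hi"                             # short enough that its number (18537) is < n
m = string_to_int(msg)
c = pow(m, e, n)                       # step 2: encrypt
m_back = pow(c, d, n)                  # step 3: decrypt
print(c, int_to_string(m_back))        # prints the ciphertext and recovers "Hi"
```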
-
I am not working with the bits or able to shift bits. Like I explained, I have an array with random numbers assigned to unique characters, and using that information, I need to encrypt my string. And then decrypt afterwards. – Kobojunkie Mar 5 '12 at 20:03
Shifting left by $n$ bits is the same as multiplying by $2^n$; and shifting right by $n$ bits is the same as dividing by $2^n$ and rounding down. – Tanner Swett Mar 6 '12 at 4:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 8, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8986993432044983, "perplexity_flag": "middle"} |
http://physics.stackexchange.com/questions/15371/understanding-time-is-time-simply-the-rate-change/32094 | # understanding time: Is time simply the rate change?
Is time simply the rate of change?
If this is the case and time was created during the big bang would it be the case that the closer you get to the start of the big bang the "slower" things change until you essentially approach a static, unchanging entity at the beginning of creation?
Also, to put this definition in relation to Einstein's conclusions that "observers in motion relative to one another will measure different elapsed times for the same event." : Wouldn't it be the case that saying the difference in elapsed time is the same as saying the difference in the rate of change.
With this definition there is no point in describing the "flow" of time or the "direction" of time because time doesn't move forward but rather things simply change according to the laws of physics.
Edit: Adding clarification based on @neil's comments:
The beginning of the big bang would be very busy, but if time was then created if you go back to the very beginning it seems there is no time and there is only a static environment.
So it seems to me that saying time has a direction makes no sense. There is no direction in which time flows. There is no time; unless time is defined as change.
So we have our three dimensional objects: and then we have those objects interact. The interaction is what we experience as time. Is this correct or is time more complicated than this?
-
If you're concerned with the rate at which things change, shouldn't things go faster as you approach the Big Bang? The first hour of the universe was an extremely busy time. – Niel de Beaudrap Oct 4 '11 at 15:36
More generally and to the point: how do you determine "the rate of change" without a fixed standard for time, anyhow? Fast processes still happen now; just perhaps less frequently than before. That, and we're often more interested in glacially slow processes, such as human behaviour, and well, the movements of glaciers. It makes the most sense to establish a collection of commensurable standards of time reaching back to the Big Bang; but commensurability pretty much prevents any process of "time inflation" --- at least in how we measure time. – Niel de Beaudrap Oct 4 '11 at 15:38
Things may happen "faster" compared to things happening on earth now but wouldn't you eventually reach the beginning where nothing is happening and you reach a static/stable environment – coder Oct 4 '11 at 16:02
It depends on how you're trying to define a changing scale of time! If the "activity" (very vague) of the universe is getting slower with time in an exponential decay, then going backwards in time would look like watching a computer which performs one instruction in 1Gyr, a second instruction in .5Gyr, a third in .25Gyr, getting faster with time. If you "rescale time" so that each instruction takes one "operational time unit", what you find is not that things come to a rest but that you can squeeze in an infinite regress of activity immediately after the Big Bang. Very speculative of course! – Niel de Beaudrap Oct 4 '11 at 16:10
I admit how time apparently "flows" is a difficult problem and one of the most mysterious in physics. But reading one comment above I remember one of the famous quotes "Any intelligent fool can make things bigger and more complex... It takes a touch of genius - and a lot of courage to move in the opposite direction. " – user1355 Oct 7 '11 at 17:05
## 9 Answers
Since for some reason this question has resurfaced, I would like to point to a similar one posed later than this.
Observation of change is important to defining a concept of time. If there are no changes, no time can be defined. But it is also true that if space were not changing, no contours, we would not have a concept of space either. A total three dimensional uniformity would not register.
Our scientific time definition uses the concept of entropy to codify change in space, and entropy tells us that there exists an arrow of time.
In special relativity and general relativity time is defined as a fourth coordinate on par with the three space directions, with an extension to imaginary numbers for the mathematical transformations involved. The successful description of nature, particularly by special relativity, confirms the use of time as a coordinate on par with the space coordinates.
It is the arrow of time that distinguishes it in behavior from the other coordinates as far as the theoretical description of nature goes.
-
This question ("Is time simply the rate of change?") is too ambiguous to have any meaningful answer. I can think of interpretations in which the question is vacuous (begging the question: "what is meant by 'rate of change'?"), tautological ("rate of change" == d/dt), or in which the answer is 'no' (GR).
You might find the answer you seek in this book:
• From Eternity to Here: The Quest for the Ultimate Theory of Time by Sean Carroll.
-
to rephrase: is time a thing in itself or is time simply things changing? This is probably a hard question to articulate. Add that to my lack of understanding of physics :-) – coder Oct 10 '11 at 14:25
@Jeremy: most questions that are hard to articulate in this way are not meaningful, they are only philosophical words that make the brain go in circles. The questions about time which are meaningful are those that can be answered by observations. – Ron Maimon Dec 8 '11 at 5:47
Time is what is measured by clocks.
But how is time modelled in physical theories ?
In the Schrödinger equation time enters as an external parameter. How does this parameter correspond to the time measured by clocks ?
The following reference might be a good introduction to this and related questions concerning time and quantum mechanics : http://www.physedu.in/uploads/publication/1/7/28-1-3-The-challenging-concept-of-time-in-quantum-mechanics1401.pdf
-
There is no such notion as "time" in isolation from space. Since time is a measure of the entropy of space, time wouldn't exist if space were absolutely static.
Imagine that one will somehow manage to 'rollback' the matter & energy to a state in which it was yesterday. Would this be a time travel? I don't see reasons why it wouldn't.
There are things not affected by time - say, physical laws and regularities. Since we assume that they are the innate property of the universe, we also assume that they exist out of the scope of time and space. That is, time didn't exist before the BigBang, but the laws did.
Edit: it's rather difficult for me, though, to imagine a physical law existing in isolation from things that it governs.
-
Physics Law is the description of the thing that it governs. – Prathyush Oct 14 '12 at 14:45
Certainly time is intriguing, but there are two different things going on here: (1) there is (classically) the manifold, (2) and the zeroth component of the momentum 4-vector.
To start, the temporal part of the gravitational potential does have some weird geometry that we aren't used to in everyday life and this certainly plays a role in some of the strangeness surrounding "time", but a decomposition of the EFE demonstrates that actually $g_{00}$ and $g_{0i}$ don't have time derivatives. The temporal parts of the space-time manifold, are static, only the spatial parts, $g_{ij}$ are dynamic. So where is this notion of "flow" coming from?
Instead, think of the manifold as a landscape, with something like a "temporal" direction. Our movement through that direction, is determined by the zeroth component of the momentum 4-vector, energy, temporal momentum. Why are almost all things in everyday life moving in the same "direction" of time? its not because we are all in the same river, its because we are all made of the same stuff. If you want to relate "time" with a rate of change, a place to start looking is at the momentum 4-vector, not the spacetime manifold.
-
A clear understanding of time, in my opinion, still eludes us.
Within the scope of classical concepts there is a perfectly valid practical definition of time, which essentially is the correlation between the periodic behaviour of systems. For instance, the behaviour of a pendulum is correlated with the motion of the earth around the sun, as N periods of the pendulum correspond to one period of the earth's orbit around the sun. The property of periodicity in classical systems is essential in the definition of a clock.
The question about the arrow of time, in my opinion, boils down to our inability to prepare systems in precise initial conditions, which only allows the possibility of predicting their behaviour in a statistical sense. We are also limited to measuring only certain properties of a system, and we cannot acquire complete information. This is a limitation we must accept on our ability to perform experiments. In this sense, if we use the clock we defined using only classical concepts, then this implies that flows have a preferred direction, i.e., the direction of increasing entropy.
The question of time, in my opinion, will completely resolve itself if one understands what a memory impression is. A memory impression, being permanent, contains a record of the passing of time. I think it is very closely connected to the foundational issues that plague quantum measurement.
Saying this, coming to your question on time running slower closer to the big bang: in one sense one can say that there is no structure against which to measure the movement of time. But really, to answer this question we have to wait for the discovery of quantum gravity.
-
The concept of time is intimately related with the concept of causality. If we don't have the notion that something can cause some other thing then there is no objective meaning to the word "time". It's causality which enables us to decide and describe which event is past and which is present.
In relativity, as we know, space and time are intimately related to each other. What is just space to some observer may be a combination of space and time for another. It is therefore helpful to think of a $4$ dimensional space called spacetime whose points represent events. An event therefore needs $4$ independent numbers to be uniquely specified. Out of this $4$ numbers one is a little special. If you draw a light cone at any point in this space then all except one of the axes will be outside the light cone. This special axis is the direction of time and the numbers it represent is "time".
-
Thanks for not commenting on the reason for the down vote. That's a huge relief ;) – user1355 Oct 7 '11 at 17:08
+1 because this definitely doesn't deserve a -1 :-). You essentially rephrase my question in the first part. I could have rephrased as asking, "is it the case that time is simply causality?" If this is the case then it seems the notion of a "flow" of time only exists because we have a memory of the past events; when in fact there is no past, there is no future, there is only stuff which interacts. So the words "time", "causality" and "interaction" are interchangeable leaving us with only stuff that changes. – coder Oct 7 '11 at 17:44
Not my downvote, but the concept of time does not require a concept of causality, which is notoriously hard to pin down in the microrealm and probably doesn't make any sense. – Ron Maimon Oct 10 '11 at 5:30
I disagree with you Ron. First it is dangerous to say that causality doesn't make any sense in the micro realm. If it were so, then how can you ever trust QFT which is fully consistent with S.R. and which requires causality to hold strictly. Secondly, if two events are space like separated in the micro realm how do you decide which has taken place earlier? You can't, unless you seriously modify the existing theories or unless you are talking about some as yet unknown QG theory. – user1355 Oct 11 '11 at 16:02
@RonMaimon: However, it is true that in the microscopic world there may be processes which may not have any intrinsic "arrow of time". But that's an altogether different issue, right? – user1355 Oct 11 '11 at 16:09
Well, the way time should be conceived is the same way you should look at motion or any type of energy, kinetic or potential, and it should be treated as such. For example, when an object falls from a table, the time it takes to travel through the air coincides with the space around it ("space-time", to be precise, which the fine gent below me is proclaiming). So for your question, the answer would most likely be yes. But don't forget that time is also a unit of measurement, like length, width and depth, and we use it as such. That it is simply the rate of change is plausible under certain theoretical works of the past, which many have been trying to prove as fact in the present.
-
You seem to think time started with the big bang; how long was the matter there before the big bang? We are still only talking about half an equation: two points are still needed to justify either of our perspectives.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 6, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9550288319587708, "perplexity_flag": "middle"} |
http://physics.aps.org/articles/print/v5/47 | # Viewpoint: Rethinking the Neutrino
Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge MA 02139, USA
Published April 23, 2012 | Physics 5, 47 (2012) | DOI: 10.1103/Physics.5.47
The Daya Bay Collaboration in China has discovered an unexpectedly large neutrino oscillation.
To some, this may be the year of the dragon, but in neutrino physics, this is the year of $\theta_{13}$. Only one year ago, this supposedly “tiny” mixing angle, which describes how neutrinos oscillate from one mass state to another, was undetected, but the last twelve months have seen a flurry of results from experiments in Asia and Europe, culminating in the result from the Daya Bay Collaboration, now being reported in Physical Review Letters, that shows that $\theta_{13}$ is not small after all [1]. A not-so-tiny mixing angle forces us to rethink theory, calling for new explanations for why quarks and leptons are so different. It also opens the door to new experiments, potentially allowing the discovery of $CP$ violation—a difference between neutrinos and antineutrinos that may be related to the matter asymmetry of the early universe.
Neutrino oscillations are a simple idea that can be derived in any intermediate level quantum mechanics class. In the weak interaction, neutrinos are produced and detected in “flavor” eigenstates—electron ($e$), muon ($\mu$), and tau ($\tau$). However, the Hamiltonian depends on energy, which in turn depends on mass. In a simple two-neutrino model, with just the muon and electron flavors, the mass eigenstates ($\nu_1$ and $\nu_2$) can be rotated with respect to the flavor eigenstates,
$$\begin{pmatrix} \nu_e \\ \nu_\mu \end{pmatrix} = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} \nu_1 \\ \nu_2 \end{pmatrix}. \qquad (1)$$
Mixing between flavor and mass states sounds strange, but quarks do it. Why shouldn’t neutrinos do it too?
A result of this mixing is that the neutrino born via the weak interaction in one flavor state will evolve with time to have some probability of interacting as the other flavor. The probability of this oscillation is equal to $\sin^2 2\theta\,\sin^2(1.27\,\Delta m^2 L/E)$, in the (admittedly strange, but useful) units of $L$ (in meters, m), $E$ (in mega-electron-volts, MeV), and $\Delta m^2 = m_2^2 - m_1^2$ (in $\mathrm{eV}^2$). This formula depends on two fundamental parameters: the mixing angle, $\theta$, which sets the amplitude for the oscillation probability, and the squared mass splitting, $\Delta m^2$, which affects the wavelength of the oscillation. It also depends on two experimental parameters: $L$, the distance the neutrinos have traveled from source to detector, and $E$, the energy of the neutrino. While Eq. (1) is a simplified two-neutrino picture, expanding to the three known flavors, $\nu_e$, $\nu_\mu$, and $\nu_\tau$, it follows a similar line of thought; in this case the two-dimensional rotation matrix with one angle, $\theta$, becomes a three-dimensional matrix with three Euler angles: $\theta_{12}$, $\theta_{23}$, and $\theta_{13}$.
The story of neutrino oscillations measurements begins with an article by Davis on a search for solar neutrinos that appeared in Physical Review Letters in 1964 [2] (also a year of the dragon!). But despite Davis’ subsequent discovery of a very obvious signature, it took a long time for physicists to understand and correctly interpret the data as a problem with the underlying particle physics. In the standard model, neutrinos are massless. This assumption is necessary to explain parity violation, a $100\%$ asymmetry in the spin dependence of beta decay. But if neutrinos have zero mass, $\Delta m^2=0$ in Eq. (1), the oscillation probability goes to zero, so oscillations cannot occur. The standard model was so successful in all other aspects of particle physics that it took years and many careful follow-up experiments for physicists to accept the experimental reality: neutrinos have mass and do oscillate.
As we have continued to study oscillations, we have learned that neutrinos are not like the other particles in many ways. For one thing, the mass splittings are very small compared to those of their charged particle partners. For another, two of the mixing “Euler” angles, $\theta_{12}$ and $\theta_{23}$, are very large—exactly the opposite of the mixing matrix seen in quarks. But the last mixing angle, $\theta_{13}$, remained elusive, despite a dedicated search by two experiments: Palo Verde, in Arizona [3], and Chooz, in France [4]. Many people assumed that $\theta_{13}$ was very tiny, partially inspired by the phenomenology of so-called tri-bi-maximal mixing [5]. In its purest form, this idea, put forward in 2002 to explain the pattern of mixing angles, requires $\theta_{13}=0$. Various other theories gave order of magnitude predictions from $1\times10^{-5}$ to $1\times10^{-2}$. Experimentalists responded by designing for small $\sin^2 2\theta_{13}$ values. The present-generation experiments were designed to reach $\sin^2 2\theta_{13}=0.01$, but the Neutrino-Factory-of-Our-Dreams could potentially reach $1\times10^{-4}$.
Suddenly, this view has changed. The first challenges to expectation have come from accelerator-based “appearance” experiments, which look for interactions of a new flavor ($\nu_e$) in a sea of interactions of the original flavor ($\nu_\mu$). While this sounds like a search for a needle in a haystack, in a well-designed experiment, like T2K in Japan (see 18 July 2011 Viewpoint) [6] and MINOS [7] at Fermilab in Illinois, the needle sticks out. Nevertheless, these experiments had low statistics and confusing results. The errors were relatively large and the results were still consistent with $\theta_{13}=0$. But the central values were not tiny (see Fig. 1)—a first clue that $\theta_{13}$ was going to be larger than expected. Appearance experiments are problematic because, in the case of three neutrinos, they are sensitive to unknown parameters beyond $\theta_{13}$, including the ordering of the mass states (called “the mass hierarchy”) and the $CP$-violating parameter. On the other hand, disappearance experiments have a much simpler oscillation formula, which reduces to $P_{\rm disapp}=1-\sin^2 2\theta\,\sin^2(1.27\,\Delta m^2 L/E)$.
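As a rough numerical illustration of the disappearance formula (not from the article; the values $\sin^2 2\theta_{13}\approx0.09$ and $\Delta m^2\approx2.4\times10^{-3}\,\mathrm{eV}^2$, the 4 MeV energy, and the baselines are approximate, illustrative inputs):

```python
import math

def survival_prob(L_m, E_MeV, sin2_2theta13=0.09, dm2_eV2=2.4e-3):
    # P = 1 - sin^2(2*theta13) * sin^2(1.27 * dm2 * L / E), with L in m and E in MeV
    return 1.0 - sin2_2theta13 * math.sin(1.27 * dm2_eV2 * L_m / E_MeV) ** 2

for L in (360, 500, 1650):   # baselines in metres
    print(L, round(survival_prob(L, E_MeV=4.0), 3))
# the deficit grows from well under 1% near the reactors to several percent at ~1.6 km
```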
Daya Bay, which is located in China, is illustrative of the new generation of disappearance experiments. This experiment is housed at one of the new ultrapowerful reactor complexes that are coming online worldwide. Reactors are copious sources of antielectron neutrinos ($\bar{\nu}_e$), for free! The idea is to search for antineutrino disappearance as a function of distance, so experiments have multiple detectors. Daya Bay uses six (Fig. 2). These can be placed at varying distances ($L$) from the reactor source to trace the oscillation wave. The antineutrinos interact with free protons in the scintillator oil of the detector via an “inverse beta-decay” process that produces a positron and a neutron: $\bar{\nu}_e+p\to e^++n$. This interaction has a cross section known to high precision; the energy of the antineutrino can be fully reconstructed, and the positron followed by a subsequent neutron capture produces a coincidence signal that separates the signal from background. The development of long-lifetime gadolinium-doped scintillator was a crucial breakthrough in detecting $\theta_{13}$, since this improves the neutron capture time by more than an order of magnitude.
In fact, the first reactor experiment to present a result was Double Chooz, in France [8]. The central value was again relatively large (Fig. 1), and at that point, it began to sink in: $\theta_{13}$ must be big. Within five months, Double Chooz’s experiment was followed by the Daya Bay result, $\sin^2 2\theta_{13}=0.092\pm0.016\,({\rm stat})\pm0.005\,({\rm syst})$ [1]. In particle physics, a statistical significance of $5\sigma$ is needed to claim a discovery, and Daya Bay’s measurement was conclusive: $\theta_{13}$ is nonzero. The central value of their measurement was large and in excellent agreement with Double Chooz. Hot on Daya Bay’s heels was the result from the RENO experiment [9] in Korea, which yet again agreed with $\sin^2 2\theta_{13}\sim 0.1$.
What does large $\theta_{13}$ mean? It means that the neutrino community is suddenly busy organizing workshops to re-think the next steps. With a well-measured value of $\theta_{13}$ we can pursue the mass hierarchy and the $CP$-violation parameter in appearance experiments that should come online soon. We shall see if measurements of these new parameters also defy expectations.
In one year, we went from knowing nothing, to having a full picture of this mixing angle. We found the $\theta_{13}$ dragon, and it roared!
### References
1. F. P. An et al., Phys. Rev. Lett. 108, 171803 (2012).
2. R. Davis, Jr., Phys. Rev. Lett. 12, 303 (1964).
3. F. Boehm et al., Phys. Rev. Lett. 84, 3764 (2000).
4. M. Apollonio et al., Phys. Lett. B 466, 415 (1999).
5. P. F. Harrison, D. H. Perkins, and W. G. Scott, Phys. Lett. B 530, 167 (2002).
6. K. Abe et al. (T2K Collaboration), Phys. Rev. Lett. 107, 041801 (2011).
7. P. Adamson et al. (MINOS Collaboration), Phys. Rev. Lett. 107, 181802 (2011).
8. Y. Abe et al. (Double Chooz Collaboration), Phys. Rev. Lett. 108, 131801 (2012).
9. S.-B. Kim et al. (RENO Collaboration), arXiv:1204.0626 (hep-ex); Phys. Rev. Lett. (to be published).
### Highlighted article
#### Observation of Electron-Antineutrino Disappearance at Daya Bay
F. P. An et al.
Published April 23, 2012
http://quant.stackexchange.com/questions/tagged/distribution | # Tagged Questions
The distribution tag has no wiki summary.
### Transformation to reduce standard deviation without changing median
Consider some negative skew and high kurtosis return time-series $X_t$. I do not know the functional form of the pdf of $X_t$ and have about 150,000 data points. Suppose that I was to create an ...
### Fitting distributions to financial data using volatility model to estimate VaR
I want to fit a distribution to my financial data using a volatility model to estimate the VaR. So in case of a normal distribution, this would be very easy, I assume the returns to follow a normal ...
### Value at Risk Monte-Carlo using Generalized Pareto Distribution(GPD)
I have created a VBA program to calculate VaR by using Monte Carlo, I have simulated Brownian Motion. This method might be ok for 100% equity portfolio, but let's say this portfolio may have fixed ...
### What are $d_1$ and $d_2$ for Laplace?
What are the formulae for d1 & d2 using a Laplace distribution?
### Benfords law and quantitative finance
Benford's law has been applied in various ways for detecting fraud (e.g. elections or accounting). But what are the most useful applications of Benford in quantitative finance? Are there any? I have ...
### What distribution should I apply to estimate the likelihood of extreme returns?
Say I have a limited sample, a month of daily returns, and I want to estimate the 99.5th percentile of the distribution of absolute daily returns. Because the estimate will require extrapolation, I ...
### Distribution for High Kurtosis
Can you please advise which distribution to follow when your skewness is 0.28 and Kurtosis value is 51. Since it's leptokurtic and positively skewed I would like to fit distribution and also wanted to ...
### What is the relation between return volatility and return rank volatility, and how can I control the latter?
I have no experience in finance, but I've been playing around with a virtual portfolio. I'm trying to control the "rank volatility" distribution - that is, the volatility of a stock's daily rank in ...
### What distribution to assume for interest rates?
I am writing a paper with a case study in financial maths. I need to model an interest rate $(I_n)_{n\geq 0}$ as a sequence of non-negative i.i.d. random variables. Which distribution would you advise ...
### What are some common models for one-sided returns?
One typically models the log returns of a portfolio of equities by some unimodal, symmetric (or nearly symmetric) distribution with parameters like the mean and standard deviation estimated by ...
### Tools in R for estimating time-varying copulas?
Are there libraries in R for estimating time-varying joint distributions via copulas? Hedibert Lopes has an excellent paper on the topic here. I know there is an existing packaged called copula but ...
### How can I compare distributions using only mean and standard deviation?
I only have means and standard deviations of samples of two random variables. What technique can I use to determine how similar the distributions these describe are? Assume that the values are built ...
### Probability distributions in quantitative finance [closed]
What are the most popular probability distributions in quantitative finance and what are their applications?
### Getting the actual distribution of a stock price at time T using implied volatility [duplicate]
Possible Duplicate: How to derive the implied probability distribution from B-S volatilities? Let's assume a stock price S, with volatility $\sigma$ constant, no dividend, and risk free ...
### How to derive the implied probability distribution from B-S volatilities?
The general problem I have is visualization of the implied distribution of returns of a currency pair. I usually use QQplots for historical returns, so for example versus the normal distribution: ...
### How can I estimate the degrees of freedom for a Student's T distribution?
I am doing research estimating the value at risk for non-normally distributed assets. I need help in the process of estimating the parameters of Student's t distribution and which method to use. I ...
http://physics.stackexchange.com/questions/5226/how-slow-is-a-reversible-adiabatic-expansion-of-an-ideal-gas/5511 | # How slow is a reversible adiabatic expansion of an ideal gas?
A truly reversible thermodynamic process needs to be infinitesimally displaced from equilibrium at all times and therefore takes infinite time to complete. However, if I execute the process slowly, I should be able to get close to reversibility. My question is, "What determines when something is slow?"
For definiteness, let's take an insulating cylindrical piston with cross-sectional area $A$ and original length $L_0$. There is an ideal gas inside with $n$ molecules of gas with mass-per-molecule $m$. The temperature is $T_0$, and the adiabatic index is $\gamma$.
I plan to expand the piston adiabatically to length $L_1$, taking a time $t$ to do so. If I take $t$ to be long enough, the process will be nearly-reversible. However, $t$ being long does not mean "one minute" or "one year". It means $t >> \tau$ for some
$$\tau = f(A, L_0, L, n, m, T, k_B, \gamma)$$
What is $\tau$?
From purely dimensional considerations, I guess the relationship is something like
$$\tau = \sqrt{\frac{mLL_0}{k_bT}}f(n, L/L_0, A/L^2, \gamma)$$,
but I don't have a strong physical explanation.
Edit A meaningful answer should let me do the following: I take a certain example piston and try expanding it a few times, putting it in a box so I can measure the heat released to the environment. I calculate the entropy change in the universe for the expansions. After doing several expansions, each slower than the last, I finally get $\Delta S$ for the universe down to a number that I think is sufficiently small. Next, I plan to repeat the experiment, but with a new piston that has different dimensions, different initial temperature, etc. Based on my results for the previous piston, how can I figure out how long I should take to expand the new one to achieve a similar degree of reversibility on the first try?
For reference, the pressure is
$$P = \frac{nk_BT}{V}$$
and the speed of sound is
$$v = \sqrt{\frac{\gamma k_b T}{m}}$$,
and I'm happy to have an answer in terms of these or other derived quantities. Formulas for entropy and thermodynamic potentials can be found in the Wikipedia article.
-
""but with a new piston that has different dimensions, different initial temperature, etc"" Different dimensions are a method to explore some dissipation mechanisms. But what do You see in different temperatures? A adiabatic cylinder/piston does not exchange heat with the gas inside. So its temperature is irrelevant (its made from a wonder stuff in any case, so You can assume that it has zero specific heat) – Georg Feb 16 '11 at 18:35
@Georg Initial temperature affects the speed of sound. – Mark Eichenlaub Feb 16 '11 at 22:18
Is not $\tau$ simply $\tau=\frac{L_0}{v}$ – Martin Gales Feb 18 '11 at 7:23
@Martin I would guess that $L$ comes into it. We need to equilibrate with the entire cylinder, and if the cylinder is very long, that will take a longer time because the bottom needs to be able to talk to the top. – Mark Eichenlaub Feb 18 '11 at 7:27
## 4 Answers
I am a student so please point out in gory detail anything I did wrong.
For a process to be quasistatic, the time scales of evolving the system should be larger than the relaxation time. Relaxation time is the time needed for the system to return to equilibrium.
We have an adiabatic process, so equilibrium must be preserved at each point, that is to say
(Working within the validity of the kinetic theory for ideal gases and ignoring friction)
$(A L(t))^\gamma P(t) = (A L_0)^\gamma P(t_0)$
Momentum gained by the piston:
$\Delta p = 2 m v_x$
A molecule would impact the piston every
$\delta t = \frac{2(L_0+ \delta x) }{v_x}$
The force exerted on the piston is $F =\frac{\Delta p}{\delta t} = \frac{m v_x^2}{L_0+\delta x}$. The pressure would be $P = \frac{F}{A}$, and for $N$ such molecules
$P = \frac{N m <v_x>^2}{A (L_0+\delta x)} = \frac{N m <v>^2}{3A (L_0+\delta x)}$
So at the instant $t=t'$ where the piston has been displaced by $\delta x$, we have
$(A L(t))^\gamma P(t) = \frac{N m <v>^2}{3A^{1-\gamma}} (L_0+\delta x)^{\gamma -1}$
Expanding in series
$= \frac{N m <v>^2 L_0^{\gamma-1}}{3A^{1-\gamma}} (1 + \frac{(\gamma-1) \delta x}{L_0}+ O(\delta x^2) )$
Substituting $\frac{\delta x}{L_0} = \frac{\delta t v_x}{2 L_0} -1$
$(A L_0)^\gamma P(t_0) = (A L_0)^\gamma P(t_0) (1 + (\gamma-1) (\frac{\delta t v_x}{2 L_0} -1))$
If we want our process to be reversibly adiabatic at least to first order, we must have, from the above,
$\delta t = \frac{2 L_0}{<v_x>}$
Now, this is the time until collision for the starting case. Investigating second order:
$(A L_0)^\gamma P(t_0) = (A L_0)^\gamma P(t_0) (1 + \frac{(\gamma-1) \delta x}{L_0}+ \frac{1}{2} (\gamma-1)(\gamma-2) (\frac{\delta x}{L_0})^2 +O(\delta x^3) )$
Looking at just the series terms
$1 + (\gamma-1)\frac{\delta x}{L_0} (1 +\frac{1}{2} (\gamma -2) \frac{\delta x}{L_0}) \approx 1$
This would be true for
$\delta t = \frac{4 L_0}{<v_x>} (\frac{1}{2-\gamma} -1)$
Now, this is the "time until next collision" for a gas molecule hitting the piston. To maintain reversibility, at least to second order, the piston should be moved from $L_0$ to $L_0 + \delta x$ in time $\tau = \delta t$ so that the system variables follow the adiabatic curve.
The $<v_x>$ can be calculated from the Maxwell distribution.
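To put a rough number on this, here is a small sketch that evaluates the second-order estimate $\delta t = \frac{4 L_0}{<v_x>}\left(\frac{1}{2-\gamma}-1\right)$, using the mean of $|v_x|$ from the Maxwell distribution, $\sqrt{2k_BT/\pi m}$, as a stand-in for $<v_x>$ (whose strict mean is zero, as the comments below discuss). The gas, temperature, and piston length are my own illustrative assumptions.

```python
# Rough evaluation of delta_t = (4 L0 / <v_x>) * (1/(2-gamma) - 1),
# with <|v_x|> = sqrt(2 k_B T / (pi m)) taken from the Maxwell distribution.
# Helium at 300 K in a 10 cm piston is an assumed example, not from the question.
import math

k_B = 1.380649e-23      # J/K
m_He = 6.6464731e-27    # kg, helium-4 atom (assumed gas)
gamma = 5.0 / 3.0       # monatomic ideal gas
T = 300.0               # K (assumed)
L0 = 0.10               # m, initial piston length (assumed)

v_x_mean = math.sqrt(2.0 * k_B * T / (math.pi * m_He))   # mean of |v_x|
delta_t = (4.0 * L0 / v_x_mean) * (1.0 / (2.0 - gamma) - 1.0)

print(f"<|v_x|> ~ {v_x_mean:.0f} m/s")
print(f"delta_t ~ {delta_t*1e3:.2f} ms per displacement step delta_x")
```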
-
An issue with the solution is that I substituted, rather naively, $<v_x>$ for $v_x$ thinking that for a collection of N such molecules, the independent values would be replaced by the mean values. However, from the maxwell distribution, $<v_x>=0$ so the final expression tells us that $\delta t$ should be infinite, which is something we already know. – Approximist Feb 21 '11 at 5:44
The only way I can think around this difficulty is that the standard deviation of velocity is $\sigma_{v_x} = \sqrt{\frac{k_B T}{m}}$. So, remaining within one standard deviation of the velocity, we have the lower bound $4 L_0 \sqrt{\frac{m}{k_B T}} \frac{\gamma-1}{2-\gamma}<\delta t <\infty$. – Approximist Feb 21 '11 at 5:45
Thus, the answer, according to all this, is that if you want to wait long enough for all the velocity fluctuations to "smooth out", then for reversible expansion, your discrete steps should be infinitely spaced in time. Otherwise, depending on the precision with which you can verify that $PV^\gamma = const$, the smallest time interval in which your process will be reversible will be $\tau = 4 L_0 \sqrt{\frac{m}{k_B T}}\frac{\gamma-1}{2-\gamma}$ – Approximist Feb 21 '11 at 6:02
@Approximist The comments above seem to me to be critical to the final answer. I'd suggest you should include them in your actual answer? Great work, by the way. – kharybdis Feb 22 '11 at 21:16
It took me quite a while before I understood physically what your derivation was saying, but well done. I found this insightful. – Mark Eichenlaub Feb 24 '11 at 6:18
This is not a direct answer to the question but rather a slightly different perspective on this adiabatic expansion. I am not sure how correct it is.
So, let us assume that the piston moves (in the direction of the $x$-axis) infinitely slowly with velocity $\vec{v}_p$. Let a molecule fly toward the piston with velocity $\vec{v}_k$. With respect to the piston, its velocity will be $\vec{v}_{k_{rel}}=\vec{v_k}-\vec{v_p}$. The normal component (relative to the piston) of the relative velocity is $(v_{k_{rel}})_x=v_{kx}-v_p$. Let's denote by $\vec{v'}_{k_{rel}}$ the velocity of the molecule with respect to the piston after reflection. The tangential component of the relative velocity does not change as a result of the reflection, and the normal component changes sign. $$(v'_{k_{rel}})_x=-(v_{k_{rel}})_x=-v_{kx}+v_p$$ Let's denote by $v'_k$ the velocity of the molecule relative to the fixed cylinder walls after reflection. Its normal component is $v'_{kx}=(v'_{k_{rel}})_x+v_p=-v_{kx}+2v_p$ and the tangential component is the same as that of the velocity $\vec{v}_k$. As a result of the reflection from the piston, the kinetic energy of the molecule is incremented by:
$$\frac{1}{2}m(-v_{kx}+2v_p)^2-\frac{1}{2}mv_{kx}^2=-2mv_pv_{kx}+2mv_p^2$$ Let's denote by $n_k$ the number of molecules per unit volume whose velocities are approximately equal to $\vec{v}_k$. The number of hits by these molecules on the piston during the time $dt$ is $z_k=An_k(v_{kx}-v_p)dt$, where $A$ is the area of the piston. As a result, the kinetic energy of the molecules in this group will increase during $dt$ by:
$$(-2mv_pv_{kx}+2mv_p^2)An_k(v_{kx}-v_p)dt=-2mn_k(v_{kx}^2-v_p^2)dV$$ where $dV=Av_pdt$ is an increase of the volume of gas for the same time.
The increment of the kinetic energy of the whole gas is $$dE_{kin}=dU=-dV\sum_{v_{kx}>0}2mn_kv_{kx}^2+2dVv_p^2\sum_{v_{kx}>0}mn_k$$ Here $U$ is the internal energy of an ideal gas. The summation is only over those groups of molecules which move toward the piston. When summing over all groups of molecules, both those moving toward the piston and those moving away from it, the sum should be divided in half. In this case: $$dU=-dV\sum mn_kv_{kx}^2+dVv_p^2\sum mn_k$$ But by definition, the first sum is the pressure of the gas $P$ and the second sum is simply the density of the gas $\rho$. Thus we get a differential equation:
$$dU+PdV=\rho v_p^2dV$$ The internal energy of an ideal gas may be expressed as follows: $$U=\frac{f}{2}PV$$ where $f$ is the number of degrees of freedom of a molecule. Using the fact that the adiabatic index is $\gamma=\frac{f+2}{f}$, the differential equation can be rewritten:
$$\frac{dP}{dV}+\gamma\frac{P}{V}=(\gamma-1)\frac{\rho}{V}v_p^2$$ If $v_p=0$ then we get from it the adiabatic equation: $PV^{\gamma}=const$
Because $\rho$ depends on the pressure and temperature, it is impossible to integrate the differential equation directly. But for small shifts of the piston we can assume that the density is approximately constant, I think.
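Here is a quick numerical sketch of that equation. One way around the density issue is to note that the mass density is simply the fixed total mass divided by the volume, $\rho = \rho_0 V_0/V$; that closure and all the numbers below are my own choices for illustration, not part of the answer above.

```python
# Numerical sketch of the modified adiabat
#   dP/dV + gamma*P/V = (gamma - 1) * rho * v_p**2 / V
# with rho = rho0*V0/V (mass conservation), compared against P0*(V0/V)**gamma.
import numpy as np
from scipy.integrate import solve_ivp

gamma = 5.0 / 3.0
P0, V0 = 1.0e5, 1.0e-3     # Pa, m^3 (assumed initial state)
rho0 = 0.16                # kg/m^3, roughly helium at these conditions (assumed)
v_p = 1.0                  # m/s, piston speed (assumed slow compared to sound)

def dPdV(V, P):
    rho = rho0 * V0 / V
    return -gamma * P / V + (gamma - 1.0) * rho * v_p**2 / V

sol = solve_ivp(dPdV, (V0, 2.0 * V0), [P0], dense_output=True, rtol=1e-10, atol=1e-6)

V = np.linspace(V0, 2.0 * V0, 5)
P_mod = sol.sol(V)[0]
P_adiabat = P0 * (V0 / V) ** gamma
for v, pm, pa in zip(V, P_mod, P_adiabat):
    print(f"V/V0 = {v/V0:.2f}  modified P = {pm:10.2f}  ideal adiabat = {pa:10.2f}  rel. diff = {(pm-pa)/pa:.2e}")
```

For a piston speed far below the sound speed the correction term is tiny, which is consistent with the observation that the equation reduces to $PV^\gamma=const$ when $v_p=0$.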
-
I like your answer though it is a generalization of the adiabatic law and not a condition for the system to be in LTE. – Shaktyai Sep 15 '12 at 13:18
The right answer to your question is indeed a condition that the velocity of the piston be much lower than the average molecular velocity. To understand why, you need to study kinetic and fluid theories. From Boltzmann's equation one can deduce the fluid equations that give rise to classical thermodynamics. The passage from the kinetic scale to the fluid scale is only valid if the macroscopic time scale and the macroscopic gradient lengths are much greater than the microscopic relaxation time and the particle mean free path.
-
Do you mean there is no entropy production outside the kinetic scale? – Yrogirg Sep 18 '12 at 16:57
I do not see where I have suggested it is so. Entropy was first defined at the kinetic scale. As long as you know how to compute f(r,v,t) you can compute S. But out of equilibrium, you may be in difficulty to connect S to T or any other macroscopic parameter. Things have changed with the edits, the initial formula in the question is no longer present. – Shaktyai Sep 18 '12 at 18:41
The funny thing is that the answer to your particular question is not even "one minute" or "one year". Expansion/contraction of gases is effectively reversible for the regimes where the hydrodynamic description is valid, that is, where the gas motion is governed by the Navier-Stokes equations.
The easiest way to see this is to remember how the formula for the speed of sound you mention is derived:
$$v = \sqrt{\frac{\gamma k_b T}{m}}$$
You assume air to contract/expand adiabatically and assume no dissipation, that is the work done by pressure goes entirely to the internal energy and vice versa. And you come to the acoustic wave equation. So expansion/contraction of the gas is reversible to the extent the sound propagation is described by the usual wave equation.
The primary reason for the phenomenon is that for gases the second viscosity is zero under the assumptions of kinetic theory. The actual value is above zero, but it is neglected in practice. In fluid dynamics the second viscosity $\zeta$ measures entropy production due to expansions/contractions:
$$\sigma_\zeta = \frac{\zeta}{T} (\operatorname{div} \boldsymbol v)^2$$
Here $\boldsymbol v$ is the fluid velocity. So it's not the expansion that causes irreversibility for the gas in piston. There are two other sources of entropy in a fluid flow. The first one is heat conduction:
$$\sigma_{\lambda} = \frac{\lambda}{T^2} (\operatorname{grad} T)^2$$
If you have no temperature gradients, it is zero.
The other one arises from shear viscosity:
$$\sigma_\mu =\frac{2 \mu}{T} \left[\frac{1}{2}\left(\frac{\partial v_i}{\partial x_j} + \frac{\partial v_j}{\partial x_i}\right) - \frac{1}{3}\,\delta_{ij}\,\frac{\partial v_k}{\partial x_k} \right]^2$$
The expression above is written in Cartesian coordinates, repeating indexes mean summation, $a_{ij}^2 = a_{ij} a_{ij}$.
I think it is possible to construct a piston where no shear stresses near the wall will arise, so the entropy production would be zero.
To answer your question, we may assume gas expansion in piston reversible if the entropy produced is small compared to the overall entropy
$$\Delta S = \int_0^t \int_V \sigma(\boldsymbol r, t') \; d \boldsymbol r dt' \ll S$$
this is the condition for $t$.
Once again, the explanation above is true whenever hydrodynamic description is valid, if you have shock waves, continuum description is not applicable for part of the region.
Let's assume $\zeta$ is nonzero. Then the entropy produced would be
$$\Delta S = \frac{\zeta}{T} \left[\frac{|L_1 -L_0| / t}{L} \right]^2 t A L \sim \frac{1}{t}$$
So making the expansion slow really does reduce the entropy produced.
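For a sense of scale, here is a back-of-the-envelope sketch of that estimate. The bulk ("second") viscosity value and the geometry are assumed orders of magnitude of mine, not measured inputs; the point is only the $1/t$ scaling.

```python
# Order-of-magnitude estimate of Delta_S ~ (zeta/T) * ((L1-L0)/(t*L))^2 * t*A*L.
zeta = 1.0e-5        # Pa*s, assumed magnitude of bulk viscosity for a gas
T = 300.0            # K (assumed)
A = 1.0e-2           # m^2, piston area (assumed)
L0, L1 = 0.10, 0.20  # m, initial and final lengths (assumed)
L = L1               # characteristic length of the flow (assumed)

for t in (0.1, 1.0, 10.0, 100.0):   # expansion times in seconds
    dS = (zeta / T) * ((L1 - L0) / (t * L)) ** 2 * t * A * L
    print(f"t = {t:6.1f} s : Delta_S ~ {dS:.2e} J/K")
```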
-
http://math.stackexchange.com/questions/85172/fastest-way-to-compute-subfields-of-mathbbq-sqrt82-i-which-are-galois?answertab=oldest | # Fastest way to compute subfields of $\mathbb{Q}(\sqrt[8]{2},i)$ which are Galois over $\mathbb{Q}$?
I have the lattice of subfields of the splitting field $\mathbb{Q}(\sqrt[8]{2},i)$ of $x^8-2$ over $\mathbb{Q}$, and the corresponding lattice of subgroups of the Galois group $G$ of the splitting field.
I'm now interested in the finding the subfields which are Galois over $\mathbb{Q}$. What's the fastest way to find them?
I know that a subfield will be Galois over $\mathbb{Q}$ iff the automorphisms in $G$ fixing the subfield form a normal subgroup, but it seems difficult to go through and actively find all the normal subgroups. Is there a faster way?
-
## 1 Answer
On the contrary, (a small part of) the power of Galois theory is precisely that it reduces the difficult question of finding Galois subfields to the significantly easier question of finding normal subgroups. This is the faster way :)
You should figure out what the group $G=\text{Gal}(\mathbb{Q}(\sqrt[\large 8]{2},i)/\mathbb{Q})$ looks like - for starters, we know that $$|G|=[\mathbb{Q}(\sqrt[\large 8]{2},i):\mathbb{Q}]=[\mathbb{Q}(\sqrt[\large 8]{2},i):\mathbb{Q}(\sqrt[\large 8]{2})][\mathbb{Q}(\sqrt[\large 8]{2}):\mathbb{Q}]=2\cdot8=16.$$ What else do we know - for example, can you think of some elements you know will be in $G$? One that we know will be there is complex conjugation, which I will denote $\rho$, $$\rho:\mathbb{Q}(\sqrt[\large 8]{2},i)\to\mathbb{Q}(\sqrt[\large 8]{2},i),\qquad \rho:{\sqrt[\large 8]{2}\mapsto \sqrt[\large 8]{2}\atop i\mapsto -i}$$ Can you think of any others? Once we work out the elements and how they interact (i.e. the group structure), it actually isn't that bad of a problem to find the normal subgroups. This question may help you out, and if you're still having trouble, you could ask a separate question about how to find the normal subgroups of this particular group.
-
I know the Galois group is isomorphic to the quasidihedral group of order 16, and the other automorphism is $$\sigma\colon{ \sqrt[8]{2}\mapsto\zeta_8\sqrt[8]{2}\atop i\mapsto i}$$ and the Galois group is defined by the relations $\sigma^8=\rho^2=1$ and $\sigma\rho=\rho\sigma^3$. I've written out the whole lattice of subgroups of $G$ in terms of their generators, and I've found 3 groups of order 8 (hence normal with index 2), so their corresponding subfields are Galois. I've also found 5 subgroups of order 4, and 5 subgroups of order 2. Do I really have to brute force compute it all? – Evariste Nov 24 '11 at 10:36
I guess it boils down to how can I effeciently tell which of those 10 subgroups are normal in $G$? – Evariste Nov 24 '11 at 10:36
I think the advice in the linked question is probably as good as we can get in general - you could compute the conjugacy classes of the group, and use that $H<G$ is normal iff it is a union of conjugacy classes, but it is not much faster, I think. One freebie is that the center is normal, but the rest we just have to brute force. – Zev Chonoles♦ Nov 24 '11 at 10:54
– Zev Chonoles♦ Nov 24 '11 at 10:56
Thanks for the link, I might just trust it. There's no clever way to compute the conjugacy classes other than to just conjugate each element of the group by every element of the group for $16^2$ little computations, right? – Evariste Nov 24 '11 at 11:02
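For what it's worth, the brute force is tiny here. Below is a sketch of exactly that computation, taking the presentation quoted in the comments at face value ($\sigma^8=\rho^2=1$, $\sigma\rho=\rho\sigma^3$, so $\rho\sigma^c\rho^{-1}=\sigma^{3c}$) and encoding $\sigma^a\rho^b$ as the pair $(a,b)$; it enumerates every subgroup and tests normality directly.

```python
# Brute-force subgroup/normality check for the quasidihedral group of order 16,
# elements sigma^a * rho^b encoded as (a, b) with (a,b)*(c,d) = (a + c*3^b mod 8, b+d mod 2).
from itertools import combinations

def mult(x, y):
    (a, b), (c, d) = x, y
    return ((a + c * pow(3, b)) % 8, (b + d) % 2)

G = [(a, b) for a in range(8) for b in range(2)]
e = (0, 0)
inv = {g: next(h for h in G if mult(g, h) == e) for g in G}

def is_subgroup(H):
    Hs = set(H)
    return all(mult(x, y) in Hs for x in H for y in H)

def is_normal(H):
    Hs = set(H)
    return all(mult(mult(g, h), inv[g]) in Hs for g in G for h in H)

others = [g for g in G if g != e]
subgroups = []
for k in (1, 2, 4, 8, 16):                     # possible orders, by Lagrange
    for rest in combinations(others, k - 1):
        H = (e,) + rest
        if is_subgroup(H):
            subgroups.append(H)

normal = [H for H in subgroups if is_normal(H)]
print(f"{len(subgroups)} subgroups found, {len(normal)} of them normal")
for H in normal:
    print(len(H), sorted(H))
```

With this encoding it should report 15 subgroups in total, the normal ones being the trivial group, the centre $\{1,\sigma^4\}$, $\langle\sigma^2\rangle$, the three subgroups of order 8, and $G$ itself.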
http://mathhelpforum.com/differential-geometry/161515-continuous-section.html | # Thread:
1. ## continuous section
Hello,
I should show that a surjective, continuous map f:X->Y is an identification if it admits a section s:Y->X.
I don't understand the part: "it admits a section s:Y->X." what does this mean?
what is a section?
I believe that a section (see also here) is a right inverse of a given function. That is, $\displaystyle s$ is a section of $\displaystyle f$ if $f \circ s = \mathrm{id}$. Sometimes a section is defined as a function that has a left inverse (so it itself is a right inverse).
Correspondingly, "to admit a section" means to have a right inverse.
http://mathoverflow.net/questions/53266/values-where-infinite-products-of-primes-and-composites-are-equal/96151 | ## Values where infinite products of primes and composites are equal
Highly grateful for your help/steers on the following question (at the end):
Take the infinite product:
$$\displaystyle T(s) = \prod _{n=2}^{\infty } \left( \dfrac{{n}^{s}} {{n}^{s}-1}\right)$$
for $\Re(s) > 1$ it is equal to:
$$\displaystyle \prod _{primes}^{\infty } \left( \dfrac{{p}^{s}} {{p}^{s}-1}\right) * \prod _{composites}^{\infty } \left( \dfrac{{c}^{s}} {{c}^{s}-1}\right)$$
I.e. the Euler-product (equal to $\zeta(s)$) multiplied by its composite "equivalent" ( excluding 1 since that is a bit of a strange composite anyway).
Why my interest? I wanted to learn more about the composite infinite product (and see if it had a 'zeta' like version). Soon became clear to me that the only way to learn more about this product, is to concentrate on $T(s)$ and then divide it by $\zeta(s)$.
I searched the web but there is hardly anything known about $T(s)$. F.i. Wolfram math only shows (formula 20) two different solutions (note: both need to be raised to $^{-1}$ to get $T(s)$ !) for odd and even integers and by reading through some arxiv math pre-prints the best I could find was a single, but still integer only formula that is:
$$\prod _{k=1}^{s-1}\Gamma \left( 2- {{\rm e}} ^{{\frac {2 i \pi k}{s}}} \right), ( \Re(s) > 1, s \in \mathbb{N})$$
I then decided to explore ways to extend the domain for $s$ and derived the following formula:
$$\displaystyle \ln \left( T\left( s \right) \right) = \ln \prod_{n=2}^{\infty } \left( \left( -1+{n}^{-s} \right) ^{-1} \right) = \sum_{n=2}^{\infty } \ln \left( \left( 1-{n}^{-s} \right) ^{-1} \right)$$
$$\displaystyle = \sum_{m=1}^\infty \sum_{n=2}^{\infty } \frac{1}{mn^{ms}} = \sum_{n=1}^\infty \frac{\zeta(n s)-1}{n}$$
And this brings us to:
$$T(s)={\rm e}^{\left( \displaystyle \sum_{n=1}^\infty \frac{\zeta(n s)-1}{n} \right)}$$
Yep, there's always a $\zeta(s)$ hiding around the corner somewhere...
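As a sanity check, the identity is easy to test numerically. The sketch below (using mpmath, with ad hoc precision and a sample value of $s$) compares the zeta-sum expression with the defining product:

```python
# Numerical check of T(s) = exp( sum_{n>=1} (zeta(n*s) - 1)/n ) against
# the defining product prod_{n>=2} n^s/(n^s - 1) = exp( -sum_{n>=2} log(1 - n^-s) ).
from mpmath import mp, mpf, zeta, exp, log, nsum, inf

mp.dps = 25
s = mpf(3)   # sample point with Re(s) > 1

T_zeta_sum = exp(nsum(lambda n: (zeta(n * s) - 1) / n, [1, inf]))
T_product  = exp(nsum(lambda n: -log(1 - n**(-s)), [2, inf]))

print(T_zeta_sum)   # the two printed values should agree to many digits
print(T_product)
```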
So, let's see what the plot looks like for $s>0$ ($T(s)$ diverges for $s<0$).
$T(s)=\displaystyle {\rm e}^{\left( \displaystyle \sum_{n=1}^\infty \frac{\zeta(n s)-1}{n} \right)} \text{ blue}$
$\displaystyle \zeta(s) = \prod _{primes}^{\infty } \left( \dfrac{{p}^{s}} {{p}^{s}-1}\right) \text{purple}$
$\displaystyle \frac{T(s)}{\zeta(s)} = \prod _{composites}^{\infty } \left( \dfrac{{c}^{s}} {{c}^{s}-1} \right) \text{ brownish}$
(graph of the three functions above for $s > 0$)
For $s>1$ I could numerically solve the following equation:
$$\displaystyle \prod _{primes}^{\infty } \left( \dfrac{{p}^{s}} {{p}^{s}-1}\right) = \prod _{composites}^{\infty } \left( \dfrac{{c}^{s}} {{c}^{s}-1} \right)$$
giving this interesting number $s = 1.397737620...$ (there is only one for $\Re(s) > 1$ )
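For anyone who wants to reproduce that constant: the condition "prime product = composite product" is equivalent to $\zeta(s)^2 = T(s)$, and a root finder applied to that equation, with $T(s)$ evaluated through the zeta-sum above, should recover the quoted value. A minimal sketch:

```python
# Solve zeta(s)^2 = T(s) for s > 1, with T(s) = exp( sum_{n>=1} (zeta(n*s)-1)/n ).
from mpmath import mp, zeta, exp, nsum, inf, findroot

mp.dps = 20

def T(s):
    return exp(nsum(lambda n: (zeta(n * s) - 1) / n, [1, inf]))

root = findroot(lambda s: zeta(s)**2 - T(s), 1.4)
print(root)   # should reproduce s = 1.397737620... quoted above
```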
I obviously took a deep dive with this number on Google and Plouffe's inverter, but have not found anything 'beautiful' or related to other constants as yet.
Then the domain $0 < s < 1$. It is easy to see in the graph that $T(s)$, and therefore also $\dfrac{T(s)}{\zeta(s)}$, have 'trivial' poles for $s= \dfrac{1}{k}, k \in \mathbb{N}$ that are induced by the fact that for each $s= \dfrac{1}{k}$ there always is a $n s = 1$ that makes at least one term in the infinite sum equal to the pole $\zeta(1)$ (hence the whole sum turns into a pole).
But I'm actually mostly intrigued by what happens under the x-axis and especially where:
$$\zeta(s) = \dfrac{T(s)}{\zeta(s)}$$ or
$$|\zeta(s)| = {\rm e}^{\displaystyle \left(\frac12 \sum_{n=1}^\infty \frac{\zeta(n s)-1}{n} \right)}$$.
If I have done my analysis correctly, this result would imply that there are an infinite number of values for $0 < s < 1$, where the (analytically continued) infinite products of primes and composites are equal (since $\zeta(s)$ remains negative between $0 < s < 1$ and there are an infinite number of poles separating the intersection points). And that would imply/reveal an infinite amount of tiny bits of information about how the primes 'grow like weed between the composites'.
Of course I checked $T(s)$ also for $s \in \mathbb{C}$, however, any graph I've produced sofar for $s=a+bi$ of $T(s)={\rm e}^{\left(\displaystyle \sum_{n=1}^\infty \frac{\zeta(n s)-1}{n} \right)}$ did not reveal any non-trivial zeroes (nope, not even at $a=\frac12$...), although the curves do seem to be trending towards a large number of very chaotically distributed zeroes when $a \rightarrow 0$.
So, apologies for the relatively long intro to my question:
Since $\zeta(s)$ has been analytically continued throughout the entire complex domain, is it allowed to also analytically continue the division of $\dfrac{T(s)}{\zeta(s)}$ into the domain $s<1$? Or do the nominator and denominator each require an individual continuation and does the concept of division get 'lost in continuation'?
-
Why do you think this "composite product" will tell you anything interesting about anything? – David Hansen Jan 27 2011 at 2:16
David, Here's the thought. Riemann linked the Euler prime product, via the analytically continued Zeta-function, via its non-trivial zeroes (all allegedly lying on line a=1/2) to the prime counting function. Since the logarithmic prime counting function phi(x) = x - ln(2pi) - infinity sum(x^rho / rho), I wondered whether a Composite-counting function exists as well. Since such function requires the same non-trivial zeroes (i.e. = (ln(2pi) + infinity sum(x^rho / rho)), I conjectured that there should be a link back into the infinite composite product (and the Zeta). Hence the quest. – Agno Jan 27 2011 at 11:07
This is a pretty old question, but I just saw it for the first time: I think questions like this can be valuable -- asking your own questions rather than just trying to answer other people's is always valuable. But you might find helpful Tim Gowers's discussion of why the Zeta function is a 'natural' and useful thing to consider: dpmms.cam.ac.uk/~wtg10/zetafunction.ps – Brad Rodgers May 6 2012 at 21:44
Thanks Brad. A pretty old question indeed (actually my very first ever on MathOverflow. I even remember the excitement as well as the anxiety from throwing a pretty rough idea in front of so many sharp brains). Will check out the link on the 'naturalness' of the Zeta function. – Agno May 10 2012 at 19:22
## 4 Answers
I think that T is meromorphic on $\mathbb{C}$ just like $\zeta$, with a single pole at $s=0$. The ratio should be fine everywhere except at $s=1$, the negative integers, and the critical strip (or line, on the RH).
-
Do you have any reasons for thinking that $T$ can be extended to a meromorphic function on $\mathbb C$? – Greg Martin May 10 2012 at 22:47
@Greg: Unfortunately I did not make any notes and in the year-plus since I answered the question I do not recall my reasoning. If you have any contrary thoughts, please give a separate answer! – Charles May 11 2012 at 2:36
Just to elaborate a bit on my reaction to David Hansen's valid comment that I actually should have explained in my original question ("my intended response was too large to fit in the margin" ;-) ).
My interest in the infinite product of composites is based on the following idea.
Infinite products of the shape $\prod _{primes}^{\infty } \left( \dfrac{{p}^{s}} {{p}^{s}-1}\right)$ are only defined for $\Re(s) > 1$. The domain can be extended via analytical continuation as Riemann showed in his 1859 paper for the Euler product. He named the new function $\zeta(s)$ and proved it to be valid for all $s \in \mathbb{C}$ with the only exception a pole at $s=1$. He found $\zeta(s)$ had zeroes (trivial ones at -2, -4, ... and non-trivial ones that appear to all be on the line $s = 0.5 + bi$). And via further transformations he also established a direct connection between the zeroes and the prime counting function $\pi(x)$.
If we take the logarithmic version of the prime-counting function $\psi(x)$ (i.e. the sum over all prime powers less than $x$, each weighted by the natural logarithm of the prime), e.g.:
$\psi(10) = 3 \log(2) + 2 \log(3) + 1 \log(5) + 1 \log(7)$
then the exact prime counting function is ($\rho_k$ is a non-trivial zero):
$\psi(x) = x - \log(2\pi) - \frac12 \log(1- \frac{1}{x^2}) - \sum_{\rho} \dfrac{x^{\rho}}{\rho}$
Guess this is pretty standard stuff for the readers of this board and I also fully appreciate that the prime numbers are the atoms of the composites (what's in a name), but I wondered whether there could exist a Composite-counting-function that might be derived from $\prod _{composites}^{\infty } \left( \dfrac{{c}^{s}} {{c}^{s}-1}\right)$ in a similar fashion as Riemann did for the prime product. If so, one could use it as a sort of "detour" to approach the Riemann hypothesis from the other side. Let's just try a very small step backwards from the end result:
$C(x)$ = number of composites < x.
$\psi(x) = x - C(x)$
$C(x) = (\log(2\pi) + \frac12 \log(1- \frac{1}{x^2}) + \sum_{\rho} \dfrac{x^{\rho}}{\rho})$
A Composite-counting-function will therefore also be dependent on the non-trivial zeros. Since I couldn't find any way to obtain a zeta-like $C(x)$ function for the composite infinite product, I started exploring the route via $T(s)$ and got some success (I hope) by getting it expressed fully into $\zeta(s)$'s as:
$C(s) = \dfrac{e^{\sum_{n=1}^\infty \frac{\zeta(n s)-1}{n}}}{\zeta(s)}$
And before I even start dreaming about analytic continuation with contour integrals or further steps with Fourier/Mellin transforms, I'd be keen to understand whether the ratio can indeed be continued into $s \in \mathbb{C}$. If Charles' very encouraging response is indeed confirmed, then this would imply that the division $\dfrac{T(s)}{\zeta(s)}$ would induce a peak in $C(s)$ for each $s=\rho_k$. So, I'd need to count peaks rather than zeroes to compute the infinite sum of $\rho$'s in the Composite-counting-function.
P.S. After I plotted the graph for $T(s)$ with $s=0.5 + xi$, the $C(s)$-peaks and the $\zeta(s)$-zeroes at $s=\rho_k$ do not fully balance out and keep hovering between 0 and 1.
-
It's easy to see that:
$$\ln T(s)=\sum_{n=1}^{\infty}\frac{\zeta(ns)-1}{n}$$
using the integral definition of the zeta function, one can show that:
$$\ln T(s)=s\int_{0}^{\infty}\frac{E_{s}(x^{s})-1}{xe^{x}(e^{x}-1)}dx$$
where $E_{\alpha}(z)$ is the Mittag-Leffler function.
Now, following Riemann's trick, here is what I did:
$$I(s)=-s\oint_{c}\frac{E_{s}((-x)^{s})-1}{xe^{x}(e^{x}-1)}dx$$
The contour is the usual Hankel contour. Consider $I(-s)$:
$$I(-s)=s\oint_{c}\frac{E_{-s}((-x)^{-s})-1}{xe^{x}(e^{x}-1)}dx=-s\oint_{c}\frac{E_{s}((-x)^{s})}{xe^{x}(e^{x}-1)}dx$$
• the Mittag-Leffler function admits the beautiful continuation: $E_{\alpha}(z^{-1})=1-E_{-\alpha}(z)$
or
$$I(s)-I(-s)=s\oint_{c}\frac{dx}{xe^{x}(e^{x}-1)}=s\oint_{c}(-x)^{-1}e^{-x}dx-s\oint_{c}\frac{(-x)^{-1}dx}{e^{x}-1}$$
now :$$\oint_{c}(-x)^{-1}e^{-x}dx=\frac{-2\pi i}{\Gamma(1)}=-2\pi i$$ and the second integral could be thought of as:
$$\oint_{c}\frac{(-x)^{-1}dx}{e^{x}-1}=\lim_{z\rightarrow 0}\oint_{c}\frac{(-x)^{z-1}dx}{e^{x}-1}=-2i\lim_{z\rightarrow 0}\sin(\pi z)\Gamma(z)\zeta(z)=i\pi$$
or :
$$I(s)-I(-s)=-3\pi is$$
Let's go back to the first integral and expand the Mittag-Leffler function:
$$I(s)=-s\oint_{c}\frac{E_{s}((-x)^{s})-1}{xe^{x}(e^{x}-1)}dx=-s\sum_{n=1}^{\infty}\frac{1}{\Gamma(1+ns)}\oint_{c}\frac{(-x)^{sk-1}dx}{e^{x}(e^{x}-1)}$$ $$=s\sum_{n=1}^{\infty}\frac{2i \sin(k\pi s)\Gamma(ks)}{\Gamma(1+ns)}\left(\zeta(ks)-1\right)=2i\sum_{n=1}^{\infty}\sin(k\pi s)\frac{\zeta(ks)-1}{k}$$
Now the problem becomes finding a function of the variable $s$ (let's call it $A(s)$) such that:
$$\sum_{n=1}^{\infty}\sin(k\pi s)\frac{\zeta(ks)-1}{k}=A(s)\sum_{n=1}^{\infty}\frac{\zeta(ks)-1}{k}$$
if we define :
$$k(s)=\sum_{n=1}^{\infty}\frac{\zeta(ks)-1}{k}$$
then :
$$A(s)k(s)-A(-s)k(-s)=-\frac{3}{2}\pi s$$ and the problem becomes proving the existence of $A(s)$ for all $s$, and of course, finding it!
-
Thanks for your response, Mohammad. I like the angle you took, however also got a bit confused by the interchange between $n$ and $k$. Are these used correctly in all the sums? – Agno May 10 2012 at 19:13
@Agno ... sorry for the late response ... here is my email : [email protected] , you might be interested in my work on this problem . – mohammad-83 Jul 28 at 21:19
I am seeking a proof of $\lim_{m \rightarrow \infty} \frac{\Gamma(m)^m}{\prod_{j=0}^{n-1}(m)}$ that you need to use in the third formula to show that $\prod_{k=2}^{\infty}\frac{1}{1-k^{-n}}=\prod_{j=0}^{n-1}\Gamma(2-e^{\frac{2\pi j}{n}i})$
You talk about arxiv, do you remember the url?
Thanks.
Fortuna
-
http://mathoverflow.net/questions/21367/proof-that-pi-is-transcendental-that-doesnt-use-the-infinitude-of-primes/21389 | Proof that pi is transcendental that doesn’t use the infinitude of primes
I just taught the classical impossible constructions for the first time, and in finding my class a reference for the transcendence of pi, I found a dearth of distinct proofs. In particular, those that I read all require the existence of infinitely many primes, which strikes me as extraneous. Is there a known proof that requires only knowledge that I would "expect", namely, integral calculus to get your hands on the actual constant and algebraic properties of polynomials in connection with the assumption that the constant is algebraic?
-
The proof that there are infinitely many primes is both beautiful and easy---why avoid it? It seems especially appropriate if your students haven't seen it yet. – Joel David Hamkins Apr 14 2010 at 18:27
Two comments. First, transcendence theory as is commonly understood (incl. the transcendence of $\pi$) is number theory. Second, $\pi^2/6 = \prod_p (1-1/p^2)^{-1}$ so the irrationality of $\pi^2$ implies the infinitude of primes. – Felipe Voloch Apr 14 2010 at 20:59
@Vladimir and Felipe: We all lump algebraic/transcendental numbers under the number theory umbrella, but I do not accept that every pair of results under that umbrella must be seen as entwined. Example from algebra: Elliptic curves are related to the j-function, which are related to the monster group. If I said I was shocked that there was a connection between elliptic curves and the monster group, someone could say that they both fall under "algebra". I do not believe your comments are quite this extreme, but I still don't believe that saying "both are number theory" is a strong point. – Barry Apr 14 2010 at 22:51
Bit of a coincidence, the April 2010 M.A.A. Monthly has short proofs that $\zeta(2) = \pi^2 / 6,$ pages 352-353, and that $\pi^2$ is irrational, pages 360-362 – Will Jagy Apr 15 2010 at 1:56
I think it's awesome that a sentence can start with, "Second, $\pi ^2/6 = \prod _p (1 - 1/p^2)^{-1}$..." – Amit Kumar Gupta Dec 4 2010 at 10:16
3 Answers
The infinitude of primes (more precisely, the existence of arbitrarily large primes) might actually be necessary to prove the transcendence of $\pi$. As I explained in an earlier answer, there are structures which satisfy many axioms of arithmetic but fail to prove the unboundedness of primes or the existence of irrational numbers. Shepherdson presented a simple method for constructing such models, I will present such a model where $\pi$ is rational!
The Shepherdson integers $S$ consist of all Puiseux polynomials of the form $$a = a_0 + a_1T^{q_1} + \cdots + a_kT^{q_k}$$ where $0 < q_1 < \cdots < q_k$ are rationals, $a_0 \in \mathbb{Z}$, and $a_1,\dots,a_k \in \mathbb{R}$. This is a discrete ordered domain, where $a < b$ iff the most significant term of $b-a$ is positive; this corresponds to making $T$ infinitely large. This ring $S$ satisfies open induction axioms $$\phi(0) \land \forall x(\phi(x) \to \phi(x+1)) \to \forall x(x \geq 0 \to \phi(x))$$ where $\phi(x)$ is a quantifier free formula (possibly with parameters). So the ring $S$ satisfies the same basic axioms as $\mathbb{Z}$, but only a very limited amount of induction. In the field of fractions of $S$, $\pi$ is equal to the ratio $\pi T/T$. In other words, $\pi$ is a rational number!
Is $\pi T/T$ really $\pi$? The integers form a subring of $S$, and if $p,q \in \mathbb{Z}$ then $p/q < \pi T/T$ in $S$ if and only if $p/q < \pi$ in $\mathbb{R}$. So $\pi T/T$ defines the same Dedekind cut as $\pi$ does, which is a very accurate description of $\pi$. Indeed, any proof of the transcendence of $\pi$ must ultimately be based on the comparison of $\pi$ and its powers with certain rational numbers, which $\pi T/T$ will accomplish just as well as the real number $\pi$. However, the usual definitions of $\pi$ are not easily formalizable in this basic theory, so there is much room for debate here and I wouldn't claim that $\pi T/T$ satisfies all reasonable definitions of $\pi$. Shepherdson only presented this argument for real algebraic numbers like $\sqrt{2}$, which have a finitary description in this theory and leave little room for debate. In any case, the conclusion to draw from this is that basic arithmetic with open induction does not suffice to prove that $\pi$, or any other real number, is irrational (never mind transcendental).
What about primes? In the ring $S$, the only primes are the ones from $\mathbb{Z}$. Although there are infinitely many primes in $S$, it is not true that there are arbitrarily large primes. For example, there are no primes larger than $T$. Thus $S$ is a model where the unboundedness of primes fails and so does the irrationality of $\pi$. This only shows that basic arithmetic with open induction does not suffice to prove either result. A possible line of attack to show that the unboundedness of primes is necessary to prove the transcendence of $\pi$ would be to show that the minimum amount of induction necessary to prove that $\pi$ is transcendental also suffices to prove the unboundedness of primes. Unfortunately, I do not know how much induction is necessary to prove the transcendence of $\pi$. (And the minimum amount of induction necessary to prove the unboundedness of primes is still an open problem.)
Well, here is a partial answer, which is a bit of a bummer. There is another Shepherdson domain $S_0$ similar to the above where $\pi$ is transcendental over $S_0$ and $S_0$ does not have arbitrarily large primes. This shows that the transcendence of $\pi$ does not imply the unboundedness of primes over basic arithmetic with open induction. The ring $S_0$ is the subring of $S$ where the coefficients of the Puiseux polynomial are restricted to be algebraic numbers. The unboundedness of primes fails in $S_0$ because the real algebraic numbers form a real closed field just like $\mathbb{R}$. The number $\pi$ is transcendental over $S_0$ because it is transcendental over the field of real algebraic numbers.
This is not entirely surprising since open induction is a very weak base theory and the Shepherdson type rings are very pathological. To constrain such pathologies Van Den Dries suggested requiring that the domain is integrally closed in its field of fractions; he called such domains normal but I don't know if this is standard terminology. Neither $S$ nor $S_0$ are normal. More convincing examples would be normal discrete ordered domains. The methods of Macintyre and Marker (Primes and their residue rings in models of open induction, MR1001418) suggest that normal analogues of $S$ and $S_0$ might exist.
The conclusion that I draw from this is that open induction is probably too weak a base theory to study this question. Stronger base theories run into the difficulty that it is still not known just how little induction is necessary to prove the unboundedness of primes. The next reasonable candidate is bounded-quantifier induction (IΔ0), which is not known to imply the unboundedness of primes. Using the Euler product $\pi^2/6 = \prod_p (1-p^{-2})^{-1}$ looks promising, but so far I can only make sense of this product in IΔ0 + Exp which is known to prove the unboundedness of primes.
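As a small numerical illustration of that Euler product (nothing deeper than a truncation check, with ad hoc cutoffs of my own choosing):

```python
# Truncations of prod_p (1 - p^-2)^-1 over the primes below N, compared with pi^2/6.
from math import pi

def primes_below(n):
    sieve = [True] * n
    sieve[:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

target = pi**2 / 6
for N in (10, 100, 1000, 10000):
    prod = 1.0
    for p in primes_below(N):
        prod *= 1.0 / (1.0 - p**-2)
    print(f"primes < {N:6d}: truncated product = {prod:.10f}   (pi^2/6 = {target:.10f})")
```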
-
As Felipe pointed out, the fact that $\pi^2 = 6\prod_p(1-p^{-2})^{-1}$ suggests that the transcendence of $\pi$ does imply the infinitude of primes over a relatively weak base theory. – François G. Dorais♦ Apr 14 2010 at 21:21
Thank you François. I can only report that I got the gist of your entire response, but I take from it that you believe it possible that the infinitude of primes is necessary in the proof and have a potential explanation for this. If this is indeed the case, then I would find it fascinating. This has been bugging me the way needing analysis to prove the fundamental theorem of algebra bugs some people, but at least there, I understand why getting your hands on $\mathbb{C}$ without analysis is difficult. – Barry Apr 14 2010 at 23:06
This may be the most interesting and surprising answer I have read on MO so far! :-D – Andrea Ferretti Apr 15 2010 at 16:12
Dedekind cut, eh? Since this model is nonarchimedean, a cut corresponds to many different elements; some rational and some irrational. So you need a more convincing reason that this particular element is pi. – Gerald Edgar Dec 4 2010 at 1:01
(That is precisely the conclusion of my post.) – François G. Dorais♦ Feb 22 2011 at 17:19
You can accept an answer to one of your own questions by clicking the check mark next to it. This awards 15 reputation points to the person who answered and 2 reputation points to you.
I recommend the book Irrational Numbers by Ivan Niven, one of the M.A.A. Carus Monographs and available as a paperback. He proves irrationality of $\pi$ and $\pi^2$ much earlier, then shows that $\pi$ is also transcendental with the Lindemann theorem, chapter 9. I really like this book.
As you know from teaching your class, impossibility of the compass and straightedge constructions does not need anywhere near the full weight of transcendence, merely that the associated constant not lie in a tower of fields that expresses the idea of taking square roots, see
http://mathoverflow.net/questions/14950/
or Appendix C in Galois Theory by Joseph Rotman, where he uses "only elementary field theory; no Galois theory is required." However, in line with your complaint, I should admit that I do not personally know of any proof that shows $\pi$ is not among the "constructible numbers" except for proofs of transcendence.
-
Thank you. I'll check out Niven's proof. Do you know if he uses the infinitude of primes? I'm guessing his proof is similar to that in his Monthly article, in which case he does. – Barry Apr 14 2010 at 19:20
I recommend the article "A rational approach to Pi" by Frits Beukers staff.science.uu.nl/~beuke106/Pi-artikel.ps . Although this is concerning irrationality measures it occurred to me that it might be possible to prove transcendence of pi by showing that its irrational measure is $> 2$, and then using Roth's theorem (only half kidding)! – Victor Miller Apr 15 2010 at 2:39
That's a nice paper, Victor, and not an area I knew about. – Will Jagy Apr 15 2010 at 3:56
Yes, an entertaining paper. – Barry Apr 15 2010 at 15:23
Frits needs the estimate for the least common multiple of $1,2,\dots,n$. It's hard without primes. A proof of the irrationality of $\pi$ which I consider "most elementary" and which does not touch the primes in an obvious way is in Robert Breusch's note in the Amer. Math. Monthly 61 (1954) 631-632. The transcendence proofs for $\pi$ all require primes... – Wadim Zudilin Apr 16 2010 at 10:45
My feeling is similar to Barry's: the infinitude of primes may not be necessary. For example, in Chapter 2 of Niven's Irrational Numbers, he also used the infinitude of primes to prove that cos(r) is irrational for nonzero rational r. But our recent proof (Monthly, April/2010, 360-362, mentioned by Will Jagy earlier) does not need this at all. By the way, our proof (half-page long) can replace more or less the entire Chapter 2 of Niven's book (except the transcendence of e).
-
I'm now more convinced that the infinitude of primes is not essential. It's used in the proof of the transcendence of pi for very similar purposes as in the proof of the irrationality of cos(r) by Niven, thus may be replaced by recurrences as in our Monthly paper (also available at arxiv.org/abs/0911.1933). The recurrences, however, will be very very messy. – Li Zhou Dec 4 2010 at 16:20
http://mathhelpforum.com/pre-calculus/73982-finding-intersections-adding-2-equations.html | # Thread:
1. ## Finding intersections by adding 2 equations
For each of the following pairs of equations, find the point of intersection by adding the two equations together. Remember: you might need to change the coefficients and/or signs of the variables before adding.
Help!
x-y = -1
x+y= 9
2. Originally Posted by TaylorTeamEdward324
For each of the following pairs of equations, find the point of intersection by adding the two equations together. Remember: you might need to change the coefficients and/or signs of the variables before adding.
Help!
x-y = -1
x+y= 9
If you add two equations, you add their left-hand sides together and their right-hand sides together.
So you'd get:
$x-y + (x+y) = -1+9$
$x-y+x+y = 8$
$2x = 8$
$x = 4$
Then just put x = 4 into one of the original equations to get y!
$x + y = 9$
$4+y = 9$
$y = 9-4$
$y = 5$
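(If you ever want to double-check an answer like this with a computer, a couple of lines will do; this is only a sanity check, not a replacement for the elimination method you are practicing.)

```python
# Solve the pair of equations x - y = -1 and x + y = 9 numerically.
import numpy as np

A = np.array([[1.0, -1.0],    # x - y = -1
              [1.0,  1.0]])   # x + y =  9
b = np.array([-1.0, 9.0])
print(np.linalg.solve(A, b))  # -> [4. 5.], i.e. x = 4 and y = 5
```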
3. ## Can you break it down?
Okay? What I don't understand is how you got the -1 and why you have the x - y (x+y)... I don't understand what the reasoning is behind that. Could you explain that to me?
Thanks so much for helping me... I AM getting the same answer but not really understanding the way I'm getting it.. make sense? LOL
Thanks again!!!
http://physics.stackexchange.com/questions/2602/experiments-that-measure-the-time-a-gas-takes-to-reach-equilibrium | # Experiments that measure the time a gas takes to reach equilibrium
If you take two ideal gases at different temperatures, and allow them to share energy through heat, they'll eventually reach a thermodynamic equilibrium state that has higher entropy than the original. The time evolution of that entropy increase is easy to predict, because we know the time evolution of $T$ and $U$ for each of the gases (assuming we have the necessary constants). This is OK.
Now take a system beyond the scope of thermodynamics. A single box containing a gas that is not in thermodynamic equilibrium (doesn't follow the Boltzmann distribution). One would expect that gas to quickly thermalise, and reach said equilibrium. Entropy can still be defined using statistical mechanics, and the final state will have higher entropy than the initial state.
I'm looking for quantitative experimental evidence of this effect.
Ideally, I'd like references measuring the time a gas takes to thermalise.
Obviously this time depends on many factors and is not always doable. I wasn't more specific because I don't want to be picky, I'm looking for any experiments that verify it.
-
(I'm not happy about the title, so I'm open to suggestions.) – Bruce Connor Jan 7 '11 at 16:24
It's really hard to find a good title for this. Maybe something like "Looking for a measurement of the time it takes the gas to attain equilibrium"? – Marek Jan 7 '11 at 19:31
Are you aware that equilibration in classical gases is set by microscopic timescales and that doing such a measurement is much more an engineering challenge (in which case I don't have any helpful references), or are you simply interested in what kind of timescales arise (in which case these scales are very much not universal; they will depend on the kind of gas, it's density, the specific interaction potential between the particles, etc.) – wsc Jan 8 '11 at 1:09
@wsc: I understand these scales vary greatly. I'm interested in any sort of reference to them being measured, not one case specifically. Obviously the experiment has to be fine tuned to make this scale measurable, but I'm guessing it's possible. – Bruce Connor Jan 8 '11 at 17:53
I know a guy who was stymied in one attempt to measure the temperature (which they wanted as a proxy for something else, but my memory is shaky) of a gas immediately after exiting a super-sonic aperture because the longitudinal and transverse modes had not yet thermalized. I don't recall the whole story, but it might be possible to work in a context like that where fine time distinctions can be made by measuring distances. – dmckee♦ Jan 8 '11 at 23:14
## 2 Answers
"The time a gas takes to reach equilibrium" isn't really well-defined -- and even trying to interpret it in terms of an average sense can be tricky: in particular, there may very well be states which never reach equilibrium (consider very carefully placed and aimed gas moleculres in a box). As far as physics is concerned though, the Boltzmann equation is what's believed to govern the approach to equilibrium, but one must put in some assumed "collision term" to actually put it to use.
In terms of where your question may have been studied in current research, thermalization is obviously going to be a bigger issue in fields of physics where one deals with numbers of particles near the limits of validity of thermodynamics, so that's a sign that you might look for papers in the field of "cold atoms". See for example this 2001 paper by the group of Alain Aspect, which carefully measures a "thermalization time" $\tau_{th}$ defined via a Boltzmann-equation-inspired model from a theory paper of Luiten et al. These papers might be a good place to start to see how working physicists think about thermalization.
You frame your question in terms of "ideal gases" but as you'll notice from these papers, when you get down to numbers of particles and temperatures which are manageable directly in these experiments, quantum effects play a nontrivial role in the population dynamics. (It's also worth giving some thought to the following: how can an ideal gas equilibrate if the particles are defined not to interact with each other? Your answer will probably help you figure out better what exactly you're looking for.)
If you're more interested in what happens to a large number of "classical" particles and how they thermalize purely through collisions, this is fairly easy to simulate on the computer with molecular dynamics code, and then you can measure whatever you like with the computed particle trajectories. Of course this isn't strictly a physical "experiment" but it is in my opinion a real test of the kinetic theory predictions all the same.
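To make the last point concrete, here is a minimal sketch of such a numerical experiment, assuming NumPy. It is a zero-dimensional toy model, not real molecular dynamics: equal-mass particles with 3D velocities undergo random elastic pair collisions with isotropic scattering, which conserves momentum and kinetic energy and drives the velocity distribution toward a Maxwellian. There are no positions, potentials, or physical collision rates, so it only illustrates relaxation qualitatively.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 2000
v = np.zeros((N, 3))
v[:, 0] = rng.choice([-1.0, 1.0], size=N)   # far-from-equilibrium start: all speeds equal

def collide(v, n_pairs):
    """Random elastic pair collisions with an isotropic post-collision direction."""
    for _ in range(n_pairs):
        i, j = rng.choice(len(v), size=2, replace=False)
        v_cm = 0.5 * (v[i] + v[j])               # center-of-mass velocity (conserved)
        g = np.linalg.norm(v[i] - v[j])           # relative speed (conserved)
        u = rng.normal(size=3)
        u /= np.linalg.norm(u)                    # random direction on the unit sphere
        v[i] = v_cm + 0.5 * g * u
        v[j] = v_cm - 0.5 * g * u
    return v

for step in range(6):
    # excess kurtosis of one velocity component: -2 at the start, 0 for a Maxwellian
    vx = v[:, 0]
    kurt = np.mean(vx**4) / np.mean(vx**2)**2 - 3.0
    print(f"after {step} sweeps: excess kurtosis of v_x = {kurt:+.3f}")
    v = collide(v, n_pairs=N)
```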
-
I did some coding with molecular dynamics a few years ago, but that doesn't really solve it. You get to see the molecules scramble up, but the entropy doesn't increase. In fact, both classical and quantum mechanics are ruled by reversible laws (even if you include collisions) and neither can explain why a box of particles would irreversibly converge to equilibrium (unless you include a non-deterministic term somewhere). And I'll start looking into the references, thanks. – Bruce Connor Jan 16 '11 at 19:16
I unfortunately don't have too much time to continue to research this question, but I can offer some direction for your own research. I was somewhat intrigued by the question because the thrust of it is to imply that macroscopic physical laws are somehow invalid. To your point, classical laws do somewhat break down in the quantum regime due to phenomena such as Bose-Einstein condensates: //cua.mit.edu/ketterle_group/popular_papers/physics%20today%20v2.pdf which is obviously an area of active research.
However, in general your question can alternatively be understood as a test of diffusivity governed by the Arrhenius equation. The Science and Engineering of Materials Second Edition by Donald Askelund, chapter 5 on the Atom Movement of Materials, discusses the Arrhenius equation in the context of the movement of defects and imperfections in materials. In the context provided, though, one would treat the vacuum surrounding an initial lump of gas as being a material (no, I do not subscribe to an aether, but this is a convenient assumption). So the question then is one of what the activation energy is to cause the atoms to move into the vacuum versus the potential energy available for such a reaction.
A quick look for tests of the Arrhenius equation will reveal some interesting results.
As you move further into this question you find discussion on reaction and diffusivity: //www.scholarpedia.org/article/Reaction-diffusion_systems
There are multiple examples of diffusion theory underlying the processes used in several areas of technology. So I'm not sure about the context and intent of your question.
-
http://mathoverflow.net/questions/66241/is-a-particular-element-of-a-particular-ring-a-nonzerodivisor/66254 | ## Is a particular element of a particular ring a nonzerodivisor?
Let $A$ be the ring $\Bbbk[\alpha_0, \alpha_1, \alpha_2, x_0, x_1, x_2]$ (where $\Bbbk$ is an infinite field, algebraically closed if it matters). Let $g \in \Bbbk[\alpha_0, \alpha_1, \alpha_2]$ be a homogeneous polynomial of degree at least one, such that $\alpha_0$ does not divide $g$. Let $$I = (g, \sum_i \alpha_i x_i).$$
Is $\alpha_0$ necessarily a nonzerodivisor in $A/I$?
A little bit of motivation: I've shown that $I$ has a certain nice property that I would like to carry over to its localization $I[\alpha_0^{-1}] \subset A[\alpha_0^{-1}]$. This will hold automatically if $I[\alpha_0^{-1}] \cap A = I$, which is true iff $\alpha_0$ is a nonzerodivisor in $A/I$. (Each of these is clearly equivalent to the statement: if $\alpha_0 f \in I$, then $f \in I$.)
I can show that at least $\alpha_0$ is not nilpotent, by showing that $(A/I)[\alpha_0^{-1}]$ is not the zero ring.
-
## 1 Answer
I think the answer is yes.
Let's try to think of the question geometrically. The polynomial $g$ defines a curve $C$ in $\mathbb{P}^2$ with coordinates $\alpha_i$ (I'd prefer these to be $x$'s and your $x$'s to be $\alpha$'s, but we won't change your notation). The bihomogeneous polynomial $\sum \alpha_ix_i=0$ defines the codimension $1$ locus in $\mathbb{P}^2\times (\mathbb{P}^2)^\ast$ of pairs $(p,\ell)$ where $p$ is a point in $\mathbb{P}^2$ and $\ell$ is a line containing $p$. Then $I$ defines the codimension $2$ locus consisting of pairs $(p,\ell)$ with $p\in C$ and $p\in \ell$. The irreducible components of this locus are in bijective correspondence with the irreducible components of $C$, and a polynomial $f\in k[\alpha_i]$ will be a zero divisor in $A/I$ if and only if $gcd(f,g)\neq 1$, so that $f$ vanishes on a component of the locus defined by $I$.
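For a specific choice of $g$, ideal membership can be spot-checked with Gröbner bases. A minimal sketch, assuming SymPy, with the variables renamed to plain ASCII and a sample homogeneous $g$ not divisible by $\alpha_0$; this only tests a few sample polynomials and is not a proof:

```python
from sympy import symbols, groebner

a0, a1, a2, x0, x1, x2 = symbols('a0 a1 a2 x0 x1 x2')

# Sample homogeneous g of degree 2, not divisible by a0
g = a1**2 + a2**2 + a0*a1
I = [g, a0*x0 + a1*x1 + a2*x2]
G = groebner(I, a0, a1, a2, x0, x1, x2, order='grevlex')

f = x1*x2 + a2*x0             # an arbitrary test polynomial
print(G.contains(g))          # True:  g lies in I
print(G.contains(f))          # False: f is not in I ...
print(G.contains(a0*f))       # False: ... and neither is a0*f, consistent with the claim
```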
-
An additional note on embedded points: $C$ has no embedded points since it is a projective hypersurface, and the locus defined by $I$ is flat over $C$, hence maps associated points to associated points, hence has no embedded points by a dimension count. (All the fibers over $C$ are isomorphic to $\mathbb{P}^1$.) – Charles Staats May 28 2011 at 20:19
Thanks! After seeing this answer, I was not surprised to learn you are a student of Joe Harris. (By which I mean, of the limited number of authors I have read, he was the only one who gave the impression of having this level of facility with incidence loci.) – Charles Staats May 28 2011 at 20:21
http://www.physicsforums.com/showthread.php?t=411880 | Physics Forums
## Open sets and metric spaces...
I'm reading Analysis on Manifolds by Munkres and in the section Review of Topology Munkres states the following theorem without proof:
Let X be a metric space; let Y be a subspace. A subset A of Y is open in Y if and only if it has the form A = U ∩ Y where U is open in X.
All he has defined is a metric space, subspace, B(y;ε) = {x|d(x,y) < ε} which is the ε neighborhood of y. And he defined open sets to be: A subset U of X is said to be open in X if for each y ∈ U there is a corresponding ε > 0 such that B(y;ε) is contained in U.
I have never taken Topology or even read about it, so I wrote a proof for it which I'm not sure is correct. Here it is:
Since A is open, for each x0 ∈ A there exists an ε > 0 such that B(x0;ε) is in A. Further, B(x0;ε) is open (I've proved this already). Now taking the union of all such ε neighborhoods of x's in A also produces an open set (I've also proved this). Therefore, letting U = ∪ B(x;ε) proves this direction since, clearly, A = (∪ B(x;ε)) ∩ Y.
(⇐)
A = U ∩ Y
This means that any x ∈ A is also in U and Y. Therefore, since x ∈ U, which is open, there exists B(x; ε) in U. But I want the open ball to be in A so letting ε' = ε and then B'(x;ε') = B(x;ε)∩Y. However since B(x;ε) is in U, then B'(x;ε') is in U∩Y which is A.
Am I on the right track? Btw, what if I have a metric X, and a closed set Y in X and then I choose U to be open and not entirely in Y but entirely in X. I define A to be U∩Y, but then A isn't open since it contains part of the boundary of Y. What am I not understanding? And if X is a metric space and Y a subspace, if A is open in Y, does it necessarily imply that A is open in X also?
I'd appreciate the help!
Quote by Buri I'm reading Analysis on Manifolds by Munkres and in the section Review of Topology Munkres states the following theorem without proof: Let X be a metric space; let Y be a subspace. A subset A of Y is open in Y if and only if it has the form A = U ∩ Y where U is open in X.
It may be intended as a definition. Given a subset Y of a metric space X there are two routes to defining a topology on it.
(i) metric space X -> topological space X -> topological subspace Y.
(ii) metric space X -> metric subspace Y -> topological subspace Y.
In (i) the open sets for the topological subspace Y are defined as the sets U ∩ Y where U is open in the topological space X.
The final topology on Y is the same via either route which is what you're proving. The proof looks OK (but would be clearer if you made a separate notation such as BY(y;ε)=B(y;ε)∩Y for the open spheres in Y).
The open spheres in the metric space X are not necessarily open spheres in the metric subspace Y nor vice versa. Also if B(x;ε) means the open sphere in X with centre x and radius ε, then B(y;ε)∩Y is an open sphere in Y if y∈Y, but B(x;ε)∩Y is not necessarily an open sphere in Y.
I just read your penultimate paragraph more carefully. What you seem to be misunderstanding is that open sets in the topological space X are not necessarily open sets in the topological space Y nor vice versa. The example you give is open in Y but not in X. (Which also answers your last question).
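For a concrete instance of this (added for illustration): take X = ℝ and Y = [0,1]. The set A = (1/2, 1] equals (1/2, 2) ∩ Y, so A is open in Y even though it contains the boundary point 1 of Y and is not open in X. This is exactly the situation described in the penultimate paragraph above.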
Thanks for your help. I was reading up on topologies on a set X a bit and I feel like I understand how it is possible for something to be open in Y but not in X (if Y is a subspace of X). So open sets are relative to their universe so to speak. However, it seems like the balls B(y;ε) = {x|d(x,y) < ε} are not really dependent on the set they're in. As the proof that B(y;ε) is open (Munkres asks the reader to prove it) follows from the triangle inequality which is part of the definition of a metric and so, they're always open. So why is it that B(y;ε) seems to be special? Is there a reason why we prove sets are open by using these open balls?
Sorry if these are just dumb questions...
Consider $\mathbb{R}$ as a metric space (with $d_{\mathbb{R}}(x,y)=|x-y|$ for $x,y\in\mathbb{R}$). Then $\mathbb{Q}$ is a metric subspace (with metric $d_{\mathbb{Q}}(p,q)=d_{\mathbb{R}}|_{\mathbb{Q}\times\mathbb{Q}}(p,q)=| p-q|$). But $\frac{\sqrt{2}}{2}\in B_\mathbb{R}(0;1)$ and $\frac{\sqrt{2}}{2}\notin \mathbb{Q}$, so $B_\mathbb{R}(0;1)$ is not an open ball in $\mathbb{Q}$. Similarly $B_\mathbb{Q}(0;1)$ can't be $B_\mathbb{R}(x,r)$ for any $x\in \mathbb{R}$ and $r\in \mathbb{R}^+$, because it would have to be $$B_\mathbb{R}(\frac{\text{sup }B_\mathbb{Q}(0;1)+\text{inf }B_\mathbb{Q}(0;1)}{2};\frac{\text{sup }B_\mathbb{Q}(0;1)-\text{inf }B_\mathbb{Q}(0;1)}{2})=B_\mathbb{R}(0;1)$$ but $\frac{\sqrt{2}}{2}\in B_\mathbb{R}(0;1)$ and $\frac{\sqrt{2}}{2}\notin B_\mathbb{Q}(0;1)$. Further $B_\mathbb{R}(\sqrt{2};1)\cap\mathbb{Q}$ is also not an open ball in $\mathbb{Q}$. We don't so much prove sets are open using open balls as define open sets in metric spaces using open balls, though of course to prove any particular set in a metric space is open we have to show the definition is satisfied. Open balls don't exist in a topological space where the topology is not derived from a metric.
Quote by Martin Rattigan Open balls don't exist in a topological space where the topology is not derived from a metric.
I don't understand what you mean. Could you please explain?
Just another question: I was trying to understand what a topology on a set X is, and if I take X = R and let $\tau$ contain all subsets of X, then this would be a topology for X, right? Munkres' Topology defines an open set in the following way: a subset U of X with topology $\tau$ is open in X if U is in $\tau$. So with my example above, sets which we normally call closed (like [a,b]) would then be called open? This is all because when we normally say (a,b) is open and [a,b] is closed we're using the $\tau$ which only has open sets in the "normal" sense?
You pretty much explained it yourself in the second paragraph. If $S$ is a set of cabbages then $\mathcal{P}(S)=\{O:O\subset S\}$ is a topology on $S$. It is unnecessary to define a distance function $d_S(b,c)$ between cabbages $b,c\in S$ in order to define the topology $\mathcal{P}(S)$. If no such distance function is defined then the normal definition of $B_S(c;r)$ as $\{b\in S:d_S(b,c)<r\}$ becomes meaningless. If $S$ is any set and $B\subset\mathcal{P}(S)$, then $B$ can be used to generate a topology on $S$ in the same way that the set of open balls generates a topology in a metric space. The open balls are meaningful only in metric spaces.
Ahh I get it now. Thanks a lot for your help. I really appreciate it :)
http://mathhelpforum.com/algebra/151356-dividing-rational-expressions.html | # Thread:
1. ## Dividing Rational Expressions
Once again I have no one to turn to for help so I must put my fate in your hands.
These problems are for a study guide for my midterm tonight and I have no idea how to do them. Any help is appreciated, thank you.
2. For Problem 10, you should find the values of x which make the denominator zero. Can you see why that would be the thing to do?
For problem 11, what do you suppose would be a good first step?
For problem 13, there might be an easier way than multiplying everything out. What do you suppose that might be?
3. The answer for problem 10 is 5 but I don't see how that equals 0.
Problem 11, factor the top and bottom?
Problem 13, factor again?
4. 10. 5, -1/2
11. -(x+3)/2
12. (x+3)/(x-3)
5. Would it be too much trouble to ask how you solved those?
6. Originally Posted by juvenilepunk
Once again I have no one to turn to for help so I must put my fate in your hands.
These problems are for a study guide for my midterm tonight and I have no idea how to do them. Any help is appreciated, thank you.
Hi juvenilepunk,
In order for the expression to be undefined, the denominator would have to equal zero.
Now how can that happen? Two ways:
[1] $x-5=0$
[2] $2x+1=0$
Now solve these two equations and you will find the two values of x that make the original rational expression undefined.
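The two values can also be read off with a computer algebra system (a quick sketch, assuming SymPy):

```python
from sympy import symbols, solve

x = symbols('x')

# Values of x that make each denominator factor zero
print(solve(x - 5, x))      # [5]
print(solve(2*x + 1, x))    # [-1/2]
```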
7. Originally Posted by masters
Hi juvenilepunk,
In order for the expression to be undefined, the denominator would have to equal zero.
Now how can that happen? Two ways:
[1] $x-5=0$
[2] $2x+1=0$
Now solve these two equations and you will find the two values of x that make the original rational expression undefined.
Thank you for the explanation, got it. Thanks.
8. Originally Posted by juvenilepunk
Once again I have no one to turn to for help so I must put my fate in your hands.
These problems are for a study guide for my midterm tonight and I have no idea how to do them. Any help is appreciated, thank you.
Now, for [12], you have to do a little factoring in the numerator and denominator.
$\frac{x^3-9x}{6x-2x^2}=\frac{x(x^2-9)}{2x(3-x)}$
Notice that now in the numerator you have the difference of two squares.
Remember how to factor this type?
$\frac{x(x-3)(x+3)}{2x(3-x)}$
Now, it gets a little tricky. Notice that one of the factors in the denominator is almost like one of the factors in the numerator, but not quite.
The signs are reversed (3 - x) instead of (x - 3).
It would be nice if we could do something to remedy that. And we can. We just factor out a -1 and it switches. (3 - x) = -(x - 3).
Now we have: $\frac{x(x-3)(x+3)}{-2x(x-3)}$
I'll bet you can finish this up now.
Try to tackle [13], and let us know where you get stuck.
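For reference, the simplification worked in the last post can be confirmed with a computer algebra system (a quick sketch, assuming SymPy):

```python
from sympy import symbols, cancel, simplify

x = symbols('x')

expr = (x**3 - 9*x) / (6*x - 2*x**2)
simplified = cancel(expr)
print(simplified)                              # an expression equal to -(x + 3)/2
print(simplify(simplified + (x + 3)/2) == 0)   # True: the two forms agree
```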
http://mathhelpforum.com/differential-geometry/81986-sequence-differentiable-functions-non-differentiable-limit.html | Thread:
1. Sequence of differentiable functions, non-differentiable limit
I am trying to find a sequence of differentiable functions which converge uniformly on [-1,1] but such that the uniform limit is NOT differentiable on (-1,1).
I figure I'd like to make something converge to f(x)=|x| (which is not differentiable at 0, so not differentiable on (-1,1) ). I have an idea, but haven't been able to hack through the details. The idea is to define each member of the sequence such that outside of [-1,1] each function is identically |x|, but on (-1,1) I want to mutate x^2 somehow so that the sequence converges uniformly to |x|...and them's the details I haven't hacked through.
2. Try $f_n(x)=|x|^{\frac{n+1}n}.$
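A quick numerical illustration of the uniform convergence (a small sketch, assuming NumPy): the sup-distance between $f_n$ and $|x|$ on $[-1,1]$ shrinks as $n$ grows, while each $f_n$ is differentiable at $0$.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 200001)
for n in (1, 5, 25, 125):
    fn = np.abs(xs) ** ((n + 1) / n)
    sup_err = np.max(np.abs(fn - np.abs(xs)))
    print(f"n = {n:4d}   sup |f_n - |x||  ~ {sup_err:.4f}")
```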
3. Thank you very much! That is so much nicer than my attempt. I'm going to apply that sequence for sure, but still I am curious to know if my attempt could be made to work.
Hopefully I'll have time to give it some mind and if I do I'll put up my efforts (but I'm a few weeks behind in coursework so for now I'll just move on).
4. Here is another possibility. For all n, $f_n(x)= 0$ for x< 0, $f_n(x)= nx$ if $0\le x\le 1/n$, $f_n(x)= 1$ for x> 1/n, is continuous for all x (but NOT differentiable at x= 0 or x= 1/n). The sequence converges to f(x)= 0 for $x\le 0$, 1 for x> 0, which is not continuous at x= 0.
So integrate that: $F_n(x)= 0$ for x< 0, $F_n(x)= \frac{n}{2}x^2$ for $0\le x\le 1/n$, $F_n(x)= x - \frac{1}{2n}$ for x> 1/n is differentiable for all x (in particular, its derivative at 0 is 0, and at 1/n is 1) but its limit, F(x)= 0 for x< 0, x for $x\ge 0$, is not differentiable at x= 0.
http://mathoverflow.net/questions/19126/theory-of-cones/21842 | ## Theory of cones
Hi all,
Can anyone point me to some references to the theory of finitely-generated cones in euclidean space? I'd like to know in particular if there is a notion of basis/dimension/linear dependence or so for such cones.
Appreciate any help.
-
Since you're asking for a list of references, you might want to make the question community wiki – jc Mar 23 2010 at 18:54
I guess the 1-skeleton of the cone is the closest thing to a 'basis' for cones: picking a representative vector for each 1-simplex, any vector in the cone can be written (non-uniquely) as a positive linear combination of these vectors. – J.C. Ottem Apr 19 2010 at 16:19
If you care about the integer points in the cone, there is the notion of Hilbert basis: en.wikipedia.org/wiki/…) which might be what you're looking for. – Steven Sam Apr 19 2010 at 21:04
## 8 Answers
People who study toric varieties in algebraic geometry are interested in these kinds of notions, since an "affine toric variety" can be completely described by a cone in a Euclidean space. Properties of the given cone translate into properties of the variety.
One book about the subject is Fulton, Introduction to toric varieties.
-
Fulton defines the dimension of the convex polyhedral cone $C=[\sum_{i=1}^m\alpha_i v_i| \alpha_i\geq0]\subset\mathbb{R}^n$ as the dimension of the linear space $C+(-C)$. He does not define any notion of "basis" for the cone $C$. Does anybody know if there is some notion of basis for a cone? – Shake Baby Mar 23 2010 at 21:22
There is a concept of a generating set for a cone (take positive linear combinations of the vectors in that set to generate the cone) and accordingly, a concept of conic independence. See 2.5 in the book in my answer for one fairly detailed source. – jc Mar 25 2010 at 15:05
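To make the notions of a generating set and membership concrete (an added illustration, not from the thread): deciding whether a vector lies in the cone generated by finitely many vectors is a linear-programming feasibility problem. A sketch assuming NumPy and SciPy, with made-up generators:

```python
import numpy as np
from scipy.optimize import linprog

# Columns of V are the generators of the cone (hypothetical example data).
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

def in_cone(b, V):
    """Is there lam >= 0 with V @ lam = b?  Cone membership as an LP feasibility check."""
    k = V.shape[1]
    res = linprog(c=np.zeros(k), A_eq=V, b_eq=b,
                  bounds=[(0, None)] * k, method="highs")
    return res.status == 0

print(in_cone(np.array([2.0, 1.0, 1.0]), V))   # True:  1*(1,0,0) + 0*(0,1,0) + 1*(1,1,1)
print(in_cone(np.array([-1.0, 0.0, 0.0]), V))  # False: would need a negative coefficient
```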
My first recommendation would be Chapter 1 of Fulton's Introduction to Toric Varieties: Google Books
If you need more material I would suggest taking a look at 'Convex cones' by Fuchssteiner and Lusky which is rather good: Google Books
-
You could take a look at Cones and duality by Aliprantis and Tourky. Specifically Sections 1.6 and 1.7 may have some results that could be of interest to you. Google Books
The authors define a notion of basis of a cone (a set $B$ in the cone so that every vector in the cone is a positive real multiple of an element in $B$; think unit sphere intersected with the cone), but I wonder whether that is what you have in mind. If you mean that every vector in the cone can be written uniquely as a positive linear combination of (extremal) vectors in the cone, then you might want to take a closer look at what are called lattice cones (in finite dimensional spaces, or equivalently, finite dimensional Riesz spaces). Introduction to operator theory in Riesz spaces by Zaanen gives a very gentle introduction to the subject (so don't let the term "operator theory" scare you off). Google Books
-
When I was looking for references on related topics a few years ago I found the following book online, which was helpful for picking up terminology, etc. CONVEX OPTIMIZATION & EUCLIDEAN DISTANCE GEOMETRY (In particular, chapter 2 covers linear independence and cones at a pretty basic level with plenty of pictures and examples). However, you might wish to look for more standard references on convex geometry and convex analysis. Various books on polytopes also cover related material.
-
Another classical reference is Oda's Convex Bodies and Algebraic Geometry (no Google Books preview, unfortunately). You might find especially useful the appendix, entitled Geometry of Convex Sets.
-
Grunbaum, convex polytopes: http://books.google.co.uk/books?id=ISHO86XJ1CsC&printsec=frontcover#v=onepage&q&f=false
-
In some sense this is (part of) the theory of linear programming. If you want a reference for that, check out Bertsimas and Tsitsiklis' Introduction to Linear Optimization.
-
If you are interested in the lattice point enumeration aspect, I'd also suggest Computing the Continuous Discretely by Beck and Robins. There's a version of the book available on their website which you can use to preview it.
-
http://www.quantummatter.com/the-origin-of-instantaneous-action-in-natural-laws/
# The Origin of Instantaneous Action in Natural Laws
September 10th, 2010 by Milo Wolff
### ABSTRACT:
In the last millennium we learned that objects obey fixed laws of nature. Until the last decade, these laws have been entirely empirical; that is, the laws were measured properties of nature, and no theoretical or physical origin was known. These measurements indicated that the movement of the energy and information needed to carry out the laws travels consistently at the speed of light. This motion satisfied our rule of causality; that is: Events always occur after their causes.
However, some events have annoyingly seemed to violate the rule of causality. Certain forces and events seem to be transmitted instantaneously. These events are the transmission of energy and information which are related to the gravitational force, the magnetic force, inertial force, and relatively new phenomena termed “The EPR Effect" (Einstein, Podolsky and Rosen) and the Mossbauer Effect.
It is the purpose of this article to explain the origin and cause of the strange instantaneous events associated with these laws. We will show that causality is not actually being violated. Instead, the strange events are merely appearances, “shaumkommen” in the words of Erwin Schrödinger. They were created by our former incomplete knowledge of the Wave Structure of Matter and of the energy exchange mechanism of quantum wave structures. All communication is actually at velocity c.
In order to understand this it is first necessary to review the origin of the natural laws and the newly developed Wave Structure of Matter, because the cause of these events lies in this wave structure and the medium of the waves. Without this preliminary review, instantaneous action cannot be explained. The Wave Structure of Matter is an exciting frontier of science which reveals the connectedness of all matter in the universe. It provides new understanding of quantum events and unravels many puzzles, including that of instantaneous action.
## A. Wave Structure, Instantaneous Action, and the Natural Laws
### INTRODUCTION
The origins of the natural laws from the Wave Structure of Matter are new topics in science. To study them you must first reject the ancient Democritus particle made of ‘substances' and replace it with the correct quantum wave structure of matter. The rules of quantum waves are simple and easy to visualize. The hard part is getting rid of old thinking habits, particularly 'matter substance,’ and replacing it with ‘wave structure.’ One major fault of the particle ‘matter substance' concept was that it did not provide answers to fundamental questions like: How are the basic units of time, length, and mass formed? What is the mechanism of energy exchange? What is the origin of the natural laws? What is a photon? What is a particle? The review below describes how the Wave Structure of Matter provides these answers.
The key to understanding 'instantaneous action' is recognition of two ways of energy transfer between quantum structures. One way is direct, source and receiver undergo a resonant exchange. In the other way, the quantum wave medium acts as an intermediate 'broker' in the exchange. It is the broker behavior which leads to the appearance of instantaneous energy transfer. These two will be discussed below when explaining 'instantaneous action.'
The extraordinary revelation of the quantum universe is that the laws of physics are properties of the quantum wave medium which itself is formed from waves of all other matter. Thus, all science grows out of the medium's properties. As we learn more about it, prepare yourself for a fascinating adventure.
## B. History of the Wave Structure of Matter
The search for the structure of the electron started over a century ago in H.A. Lorentz's book, "Theory of the Electron" (1909)[1]. In 1876, the famous geometer-mathematician Clifford suggested that all physical laws were the result of undulations (waves) in the fabric of space[2].
Ernst Mach convinced Einstein that any theory of the structure of the universe must contain his inertia principle[3], but Einstein could not incorporate it into Relativity because Relativity has no medium of communication. Einstein knew this was the weakness of Relativity and suggested that matter was a communicating wave structure. In 1924, Einstein’s friend, Hans Tetrode, was the first to propose that energy transfer required two-way communication between particles[4].
Louis Duc de Broglie proposed[5] a wavelength $\lambda = \frac{h}{p}$ for the quantum waves of an electron containing an oscillator of frequency $f = \frac{mc^{2}}{h}$. Nobel laureate Paul Dirac, who developed much of the theory describing the quantum waves of the electron, was never satisfied with the point-particle electron because the Coulomb force required a mathematical correction termed "renormalization". In 1937 he wrote[6]:
"This is just not sensible mathematics. Sensible mathematics involves neglecting a quantity when it turns out to be small — not neglecting it because it is infinitely large and you do not want it!"
Weyl, Clifford, Einstein, and Schrödinger agreed that the puzzle of matter would be found in the structure of space, not in point-like bits of matter[7]. They speculated:
“What we observe as material bodies and forces are nothing but shapes and variations in the structure of space. The complexity of physics and cosmology is just a special geometry.”
This idea had an enduring appeal because of its economy of concepts and simplicity of design.
In 1945, Wheeler and Feynman represented a charged particle by assuming a pair of spherical inward and outward electromagnetic waves[8]. Their use of advanced (inward) waves is an apparent violation of the principle of causality, "Events cannot occur before their causes". Wheeler and Feynman showed that the puzzling inward waves do not violate causality because they are not directly observable. Their work pioneered a key concept that every particle sends outward quantum waves and receives a ‘Response from the Universe,’ as described later.
Phipps hypothesized that the electron-positron is the fundamental particle of the universe[9]. He reasoned that the infinite extent of charge forces were more fundamental than local effects of baryons. Cramer used an analogy of the inward and outward waves of the Wheeler-Feynman electron to interpret the waves of classical quantum theory as real, in contrast to the older unreal “probability wave.”[10] He named them an offer-wave (outward) and a response-wave (inward). In 1990-98, Wolff expanded these ideas and showed the origin of the natural laws[11-26]. In 1996, he pointed out that the Wave Structure of Matter may have anti-particles with anti-gravity. This may remove objections to Hannes Alfvén’s book, "Worlds and Anti-Worlds", and suggests a solution for the redshift and missing matter paradoxes.
## C. Questioning the Natural Laws
As recently as ten years ago we did not know where natural laws come from or even that it was possible to find out. Some scientists believed, in a religious fashion, that we were not allowed to know, that we must just accept the empirical laws given to us by nature. Still others believed that the natural laws were already complete and to obtain further understanding all we needed to do was manipulate them mathematically. Now the origin is found in the behavior of the Wave Structure of Matter. Let’s review the basic requirements of the laws by asking questions about their behavior[19, 21, 24].
### Particles, Laws, and the Universe are Mutually Dependent.
What is the connection between particles and the universe? Without particles the physical universe is undefined because our definition of universe is a collection of particles or objects and their distribution. Similarly, the natural laws are meaningless without particles because laws require particles upon which to operate. The converse is also true: we cannot identify a particle and its properties without the force laws to locate and measure it. Thus the cosmos, particles, and laws form a trilogy, each dependent on the others for its properties. This trilogy of laws, particles and the cosmos can prevail only if there exists a medium of communication linking each particle to all other particles in that universe. The communication link must establish a uniform measure of time and length for all matter.
### Mach’s Principle.
The above concept, that laws and particles were dependent on the universe, had its first birth with Ernst Mach and Bishop Berkeley 100 years ago, who explained Newton’s law of inertia, $F = ma$. At that time, the unknown origin of Newton's law of inertia attracted frequent attention. Mach boldly suggested that inertia depends upon the existence of the distant stars[4]. His reason arose from two fundamentally different methods of measuring a body's rotational inertia. First, without looking at the sky, one can measure the centrifugal force on a rotating mass $m$ and use Newton's Law in the form $F = \frac{mv^{2}}{r}$ to find circumferential speed, $v$. The second method compares the object's angular positions with the distant stars. Mysteriously, both methods give exactly the same result. Mach reasoned that there must be a causal connection between the distant matter in the universe and inertia. He asserted:
The laws of inertia are established by all the matter of the universe.
It is now known that not only was Mach correct, but his concept applies to all the other laws as well.
### Scales of Measurement.
Consider two particles in space, Figure 1. They obey natural laws interacting with each other. We know that the laws involve scales of time, length, and mass. How are scales established and communicated between two particles? What process measures the distance between them, establishes the force, and guides each particle to the vector of acceleration it must undergo? Consider the length scale used by the particles. Every particle must have access to the same length scale, otherwise interactions would be chaotic, not the orderly laws we observe obeyed by all particles. But we know if no other matter is present, length scales are meaningless since length is a relative measure. Thus the length scale and the laws which use it must depend on the existence of other matter.
Figure 1. How do Force Laws Operate? Two particles move towards each other obeying a law. How is the direction found? How is the separation measured? What establishes length scales?
Similar questions apply to time. Every particle, everywhere, has to have access to the same clock in order to carry out orderly laws in the cosmos. Where is the universal clock? How is time communicated among all particles? They cannot behave independently, so there must exist a common clock related to all the matter of the universe.
### The Cosmic Clock.
The quantum wave medium pervading all space is common to all particles and establishes the cosmic clock. Such clocks are alike because the homogeneity of the medium of the waves produces a fixed wave frequency. As suggested by de Broglie, the cosmic clock is the well-known frequency of the electron $f = \frac{mc^{2}}{h}$[5]. This frequency is a property of the quantum wave medium and, thus, it is the same for all particles. Similarly the uniform quantum wave medium also provides a measure of length - the electron wavelength.
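For orientation, the frequency $f = \frac{mc^{2}}{h}$ quoted above works out to roughly $10^{20}$ Hz for the electron. A small numeric aside (added, using standard values for the constants):

```python
# Electron "clock" frequency f = m c^2 / h, with standard values of the constants
m = 9.109e-31      # electron mass, kg
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck constant, J s

f = m * c**2 / h
print(f"f = {f:.3e} Hz")   # ~1.24e+20 Hz
```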
### Finding Range and Location.
The spherical wave structure of particles provides range and location information for the force laws (see Figure 3). Everyone who has learned nautical navigation knows that the curvature of a wave front is sufficient to determine the range and position of the center of the source of the wave fronts. This is the simple mechanism available to two particles to find their relative range and position.
## D. Properties of the Quantum Wave Medium[19,21,24]
### Property 1. Dimension scales are a property of the ensemble of matter.
The scales of length and mass, for any particle if alone in the universe, would be meaningless because scales of measure can only be defined by comparison with other matter. For example, at least six separated particles are necessary to crudely define length in a 3D space. Thus the scale of length requires the existence of an ensemble of particles. There is no way to choose a special ensemble, thus the required ensemble must include all observable matter. This ensemble creates the quantum wave medium observed in Mach’s Principle. The ensemble has to be assigned great importance because time, length and mass are the basic scales used to describe all science and engineering.
### Property 2. Interacting particles must communicate with each other.
Force laws between two particles cannot operate unless they are aware of each other's location. Continual two-way communication between particles is required to execute the laws of nature. This communication takes place by spherical waves in the space (quantum wave medium) between the particles.
### Property 3. The scale of time requires a cosmological clock.
Laws cannot operate if particles have no reference to a cosmological clock. Each particle must have a way to relate its own time-related behavior with other particles. Nature’s cosmological clock is the frequency of quantum waves in the uniform quantum wave medium common to all particles.
### Property 4. Mach’s Extended Principle.
The only possible reference for changing motion (acceleration or rotation) is the entire ensemble of matter in a universe, as proposed by Ernst Mach in 1883. Not only inertia, but other laws of physics must similarly depend on the matter of the universe mediated by the quantum wave medium.
### Property 5. Natural constants.
The extension of Mach’s Principle shows that the natural constants such as $c$, $h$, $G$, and $e$ also depend on the quantum wave medium. These constants determine measurable properties of solids and electromagnetism: For example, the solid crystal array, shown in Figure 2, is a space matrix of atoms held rigidly in space. How are the atoms suspended in space? We must conclude that the crystal’s rigidity derives from fixed standing quantum waves propagating in a rigid quantum wave medium. Calculations for diamonds and nuclear structure yield an enormous rigidity.
### Property 6. Particles are wave structures.
The wave structures are found below as solutions of Principle I - the Quantum Wave Equation. As we will show, wave structures, like the electron, are produced by properties of the quantum wave medium. In the laboratory, these wave structures have measured properties identical to what we call ‘mass’ and ‘charge’. Accordingly we realize that mass and charge ‘substances', as such, do not exist.
Figure 2. The Presence of the Quantum Wave Medium. The nuclei of the atoms in a solid crystal are suspended in a lattice of the quantum waves of their surrounding electrons. The lattice consists of three sets of standing waves forming the boundary of the crystal. Suspension of the lattice, in apparently empty space, reveals the presence of the quantum wave medium.
### Discussion of the Quantum Wave Medium.
The rule of simplicity of nature is remarkably true for the quantum wave medium. This single entity underlies all the laws of nature. The entire content of physics, the structure of matter, space and time, and the natural laws, arise from only three mathematical principles which govern the behavior of the quantum wave medium.
### The Next Scientific Revolution.
The quantum wave medium is a new revolution in science! Because the quantum wave medium, or “space” or “vacuum,” is the basis of the structure of matter as well as the natural laws, it is the biggest topic in basic science. The task of studying the quantum wave medium is NOT a minor dissident issue. It is the source of nature’s laws and it is the frontier adventure.
## E. Energy Transmission by the Quantum Wave Medium
About 1900, an Ether concept arose to explain the transmission of light and other electromagnetic (e-m) waves. Scientists sought to find a propagation medium similar to air, the medium of sound waves. No such Ether medium for e-m waves exists. Instead the wave phenomena involved is actually quantum waves and a quantum wave medium supports these quantum waves. What appeared as e-m waves were large numbers of energy exchanges between quantum states, acting in concert. Insensitive apparatus does not see single exchanges. Quantum waves and the quantum wave medium are real physical entities, whereas e-m waves are calculations of our subjective impressions. The failure to recognize that e-m waves are a representation of many quantum exchanges has been the root cause of much confusion.
## F. The Three Principles of the Quantum Wave Medium
Only three basic principles are needed to describe the properties of the quantum wave medium and the structure of the electron and other particles. The first principle determines the form of the quantum waves which propagate in the medium. The second principle describes how the medium is formed by all the matter in the universe. The third principle describes how waves combine to minimize amplitudes in space.
### Principle I - The Quantum Wave Equation.
The following Wave Equation from Wolff[11,12,13] determines the form and character of waves propagating in the quantum wave medium:
Formula 1
$\nabla^{2} \Psi - \dfrac{1}{c^{2}} \dfrac{\partial^{2} \Psi}{\partial t^{2}} = 0$
where $\nabla$ is the differential operator, $\Psi$ is a scalar quantum wave amplitude and $c$ is the wave velocity. The solutions of this equation are spherical waves whose centers are the measured location of particles. The waves extend everywhere, but energy exchanges appear as if there were a point of charge.
## G. The Wave Structure of the Electron
### Structure of the Electron.
Most people, even some scientists, prefer to imagine that the electron is a "particle" like a baseball or a bullet. Laboratory evidence does not support this idea. Instead, an electron is a quantum wave structure, shown in Figure 4, whose spherical waves travel with fixed velocity $c$ in the quantum wave medium. Its nominal location is the spherical center. The electron is composed of two solutions of the Medium Wave Equation: an inward and an outward spherical quantum wave traveling at light speed, $c$.
Formula 2
$\Psi_{electron} = \Psi_{IN} - \Psi_{OUT} = \dfrac{\Psi_{0}\, e^{i(\omega t + kr)}}{r} - \dfrac{\Psi_{0}\, e^{i(\omega t - kr)}}{r} = \dfrac{2i\,\Psi_{0}\, e^{i \omega t} \sin(kr)}{r}$
The exponential factor is an oscillator of frequency $\omega$ and wave number $k$. The sine function modulates the oscillator with a standing wave of wavelength $\lambda = \frac{2\pi}{k}$, the Compton wavelength of the electron. $\Psi_{0}$ contains the numerical constants (the factor $2i$ above can be absorbed into it). The inward wave rotates phase at the center, reverses direction, and becomes the outward wave. The factor $\frac{1}{r}$ causes the wave amplitude to decrease inversely as the radius increases. As $r \rightarrow 0$, the amplitude of each separate wave is infinite. But when the two solutions are combined the opposite signed infinities cancel and a finite standing wave results.
Figure 4. Electron Structure. The IN and OUT waves combine to form a standing wave. The amplitude of the continuous quantum waves is a scalar number, not an electromagnetic vector. At the high density center, the standing wave amplitude is finite, not infinite. At the center, the IN wave rotates 180 degrees converting it to an OUT wave. This fixed rotation creates the + or - quantum spin. It becomes an electron or positron, depending on the rotation direction.
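As a consistency check (added; a minimal sketch assuming SymPy), one can verify symbolically that the standing-wave factor $\sin(kr)/r$ times the oscillator $e^{i\omega t}$ satisfies Formula 1 when $\omega = ck$, using the spherically symmetric form of the Laplacian:

```python
import sympy as sp

r, t, k, w, c = sp.symbols('r t k omega c', positive=True)
Psi = sp.sin(k*r)/r * sp.exp(sp.I*w*t)

# Spherically symmetric Laplacian: (1/r^2) d/dr ( r^2 dPsi/dr )
lap = sp.diff(r**2 * sp.diff(Psi, r), r) / r**2
residual = lap - sp.diff(Psi, t, 2) / c**2

print(sp.simplify(residual.subs(w, c*k)))   # 0, so the wave equation is satisfied
```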
### Formation of the IN Waves.
It is mathematically convenient to envision a particle alone, comprised of its IN and OUT waves, separated from other particles in the universe. This simple structure allows us to examine and study the particle, uncluttered by waves of other particles. But this simplest representation does not allow us to understand the origin of the inward waves. We are puzzled because we have ignored other waves from the universe.
More than three hundred years ago Christiaan Huygens, a Dutch mathematician, found that if the spherical wavelets from multiple sources in a flat surface were examined at some distance away, the wavelets combined their separate amplitudes to create another larger wave-front. This plane wave is said to be a ‘Huygens combination’ of the separate spherical wavelets. Huygens combinations can take many forms; for example, a line of sources will form a cylindrical wave-front.
Apply Huygens discovery to the single electron and consider each particle’s waves as composed of the Huygens components of all other waves. The electron’s IN-wave is a Huygens combination of wavelets, formed from 'reflections' of its own OUT-wave after encountering other particles in the universe. At each encounter, a signature of the initial particle is transferred to the OUT-waves of the other particles. These outward wave signatures return as a ‘response from the universe’[8]. When the response waves arrive at the initial particle center, they form its IN-wave. You can envision the entire structure of the electron as an enormous spherical standing wave, obscured in a sea of other waves. The electron waves travel between the particle center and other matter of the universe.
### We Are Part of the Universe!
A particle cannot exist without all the other particles in the universe. Each particle depends on all other particles to create its IN-wave. Thus, in a very real sense, the substance of our bodies is part of the universe and the universe is part of us. We are totally inter-dependent. Take a breath now! ....... The forced conclusion is awesome. We have to think of ourselves, our bodies, our brain and its mind, every atom and molecule within us, as inextricably joined with other matter of the universe. If the rest of the universe did not exist we could not exist.
At this point we have to take a hard look and ask, "Is this crazy? Is this science fantasy? Or is there evidence to prove that the universe really behaves this way?" The answer is: "Yes, it is more real than the old physics.” The indisputable proof is that this wave structure correctly predicts the origin (below) of the empirical natural laws which had never been known before.
### Spin of the Electron.
The nature and cause of spin was unknown before the Wave Structure of Matter. Quantum spin occurs when the returned inward wave rotates at the wave center with a phase shift to become an outward wave. The phase shift requirement is similar to light reflecting at a mirror. Phase shift requires a 180 degree rotation of the wave, either clockwise or counter-clockwise. The two rotation choices produce angular momentum of $\frac{+h}{2\pi}$ or $\frac{-h}{2\pi}$. One choice is an electron with +spin, the other is a positron with -spin. Thus the electron is the mirror-image of the positron.
Rotation of the inward waves involves an astonishing property of 3D space called spherical rotation. This allows the electron to retain spherical symmetry while imparting a quantized "spin" along an arbitrary axis. The rotation property is described by Misner, Thorne & Wheeler[27]. Batty-Pratt & Racey show how this property leads to the famous Dirac Equation[28]. Wolff shows how the spin rotation operators multiply Equation 2[23]. Then the electron wave becomes:
Formula 3
$\Psi_{total} = ROT_{IN} \Psi_{IN} - ROT_{OUT} \Psi_{OUT}$
where $ROT_{IN}$ and $ROT_{OUT}$ are the Dirac operators which rotate the waves. Equation 3 is the link between the Wave Structure of Matter and the Dirac theory.
### Principle II - Source of the Quantum Wave Medium.
Both logic and observation have told us that the presence of all matter of the universe must determine local natural laws, such as inertia, according to Mach’s Principle. A quantum wave medium has to be created throughout all of space by all the matter of the universe. Wolff[11,12,13] expresses this as:
Formula 4
$mc^{2} = hf = k' \sum\limits_{n=1}^{N} \dfrac{\Psi_{n}^{2}}{r_{n}^{2}}$
That is, the mass and frequency of an electron are proportional to the sum of the intensities of waves from all the particles in the universe, including the waves of the electron itself. The variables $m$ and $f$ are the mass and frequency of that electron, $c$ is the velocity of light and $h$ is Planck's constant.
This basic principle is a prescription for the density of the quantum wave medium. The density is proportional to the sum of wave intensities from all matter, producing Mach’s Principle. The principle yields a density nearly uniform everywhere as required by Property 3 of the quantum wave medium, uniform because there are an enormous number of particles contributing to the density.
### Testing Principle II.
How do we know that the quantum waves are really dense at the electron center? This is not obvious. We can test by comparing the wave intensity from the universe with the local intensity of the electron’s wave. The local intensity at some small radius, say $r_{0}$, must equal the total intensity of the waves from the other $N$ particles in a universe of radius $R$. This condition results in:
Formula 5
$r_{0}^{2} = \dfrac{R^{2}}{3N}, \quad \text{or equivalently} \quad r_{0} = \dfrac{R}{\sqrt{3N}}$
Is the test correct? We find out by putting in the usual values from cosmology: $R = 10^{26}$ meters and $N = 10^{80}$ particles. Then the radius $r_{0}$ equals $6 \times 10^{-15}$ meter. This radius must be similar to an electron center. It is! It almost matches the classical radius which is $2.8 \times 10^{-15}$ meters. Since $R$, $N$ and $r_{0}$ are vastly different sized numbers, this match is not a coincidence. It is proof that dense electron centers really exist, confirming Principle II. Formula 5 is also a relation between the size $r_{0}$ of an electron and the size of the universe, $R$. The size of an electron depends on all the matter contained within the universe! This fulfills Properties 4 and 5 of the quantum wave medium.
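The arithmetic is easy to reproduce (an added check, using the round numbers quoted in the text):

```python
from math import sqrt

R = 1e26      # radius of the observable universe, m (value used in the text)
N = 1e80      # number of particles (value used in the text)

r0 = sqrt(R**2 / (3 * N))
print(f"r0 = {r0:.2e} m")               # ~5.8e-15 m
print("classical electron radius ~ 2.8e-15 m")
```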
### Mass and Charge Depend on the Parameters of the Universe.
The Equation of the Cosmos also indicates how matter-waves from the universe produce the rest energy, $mc^{2}$, of each electron: Combine Formula 5 with the Compton wavelength $r_{0} = \frac{h}{mc}$ to get:
Formula 6
$mc^{2} = \dfrac{hc \sqrt{3N}}{R}$
We see that $mc^{2}$ depends on $R$ and $N$.
### Charge is conserved.
The electric charge $e^{2}$ also depends on the total $N$ particles. Combine the Equation of the Cosmos with the classical electron radius $r_{0} = \dfrac{e^{2}}{mc^{2}}$ to get:
Formula 7
$e^{2} = \dfrac{mc^{2}R}{\sqrt{3N}}$
The appearance of the factor $e^{2}$ is noteworthy! Recall that charge $e$ never occurs alone in physical laws, but always occurs as the product $e^{2}$. Thus $e^{2}$ is the meaningful constant. We see that charge $e^{2}$ is a property of the quantum wave medium (through $R$ and $N$). This is why there is only one value of charge in nature. Stop and think a moment about Equations 6 and 7. They state that the basic constants $e$, $m$, and $h$ depend only on the matter in the universe, $R$ and $N$. Thus the quantum wave medium underlies the natural constants of science as well as the laws!
## H. Energy Exchange Mechanism of Particle Wave Structures
Energy transfers between quantum states in atoms and molecules are the fundamental basis of scientific measurement and knowledge. Calculations and thought processes cannot take place without energy transfers. Every measurement and observation is a transfer of energy between a quantum source and receiver. Storage of information, whether on a computer disk or in our brain, always requires an energy transfer.
### Mechanism of Energy Transfer.
An energy transfer usually occurs between two atomic or molecular quantum states, a source and a receiver. In the source, an energy shift occurs downward. In the receiver, an equal shift occurs upward. This equality is the origin of the conservation of energy. But before the shifts occur, the IN/OUT quantum waves of source and receiver must exchange information to determine that the energy exchange is possible. The preliminary information exchange is accomplished by the IN/OUT waves. At the end of the process, the two final shifts can be observed in our lab. No ‘photon’ travels from source to receiver. The photon concept is very useful for calculation, but is not a reality. Note that an energy shift, $dE$, is equivalent to a frequency difference, $df = \frac{dE}{h}$. Energy, mass, and frequency are equivalents.
Frequency mixing of the IN/OUT waves and information comparison can occur because wave propagation in the dense centers is non-linear. Mixing is similar to AC signals flowing through a non-linear element, like a diode in an electronic circuit. That is, if two signals are inputs, the output will contain the two signals plus the sum and difference frequencies of the two signals. If the frequency of one wave matches another, resonance occurs. An example is a tuned radio receiver.
Principle II provides the non-linear element because of the large density ‘bump’ of the quantum wave medium near the center of charged particles. This bump corresponds to the mechanism we call "charge." The electron wave structure looks like a point particle because energy exchanges take place at the tiny non-linear central bump. No mass or charge substance is needed.
### Principle III - The Minimum Amplitude Principle (MAP)
Principles I and II provide particle structure and the energy exchange mechanism. But a third Principle is needed to determine the direction in which energy exchanges proceed. This Principle governs particle energy exchanges similar to the entropy law which always decreases available heat within a thermodynamic system. It makes water remain level in a lake. The Minimum Amplitude Principle is:
The total amplitude of particle waves at each point in space always seeks to minimize itself
or:
Formula 8
$\int \left(\Psi_{1} + \Psi_{2} + \Psi_{3} + ... + \Psi_{n}\right)^{2}\,dx\,dy\,dz = \mbox{a minimum}$
$dx\,dy\,dz$ is a small volume of space. How does this principle work? First, note that the principle sums wave amplitudes, not intensities. The MAP minimizes the total wave amplitude by moving the wave centers. For example, consider two identical electrons which have identical wave patterns. The two electrons will move apart (repulsion of like charges) in order to reduce the total wave amplitude. But if one of them is a positron with amplitude opposite to the electron, they move together (attraction). Then their amplitudes partly cancel, satisfying the MAP. If their opposite centers move together and coincide, they annihilate each other and their energy, $2mc^{2}$ is transferred to other particles.
Another example is the Pauli Exclusion Principle. In this case, MAP prevents two identical electron resonances (fermions) from occupying the same state because their total amplitude would be a maximum rather than a minimum. Also, electrons in atomic "shells" always take the pattern of the lowest level. Again we see the awesome conclusion that quantum waves establish and rule the universe.
## I. Explanations of Instantaneous Action
### Energy Exchanges to the Medium.
Principles I and II are basic to the instantaneous action phenomena. They determine the process of all energy exchanges whose speed of transfer was the heart of the action controversy. Instantaneous action occurs when there is an exchange of energy, weak compared to the much stronger electric force exchange. Energy transfer does not take place directly between two objects; it occurs between one object and the surrounding quantum wave medium. This is possible because of the very high density of the medium. Afterwards, the imbalanced medium will readjust to a MAP-determined equilibrium and transfer energy to the second object. Nothing happens instantaneously.
When an exchange takes place, between, for example, a moving planet and the quantum wave medium surrounding it, the exchange appears instantaneous to us because we think the exchange occurs immediately to a distant object, the Sun. It only appears so because we had a false notion that particles were ‘mass substance’ in totally empty space without a quantum wave medium. The legacy of Democritus has haunted us for 3000 years!
In hindsight, the only reason this phenomenon appeared puzzling was because we were unaware of the presence of the quantum wave medium as an intermediate agency. Knowing it is there, everything follows the usual rules of mechanics and nothing travels at velocities other than $c$. Four cases are discussed below:
### 1. Instantaneous Inertial Forces.
Inertial forces were once regarded as mysterious because the quantum wave medium was unknown. It was thought that an instant reaction occurred between an object and somewhere else. Mach’s Principle, that inertia depends on the presence of all the matter in the universe, was compelling, but the implied instant action only deepened the mystery. This was dramatized by words attributed to Mach:
“When the subway jerks, it is the universe which throws you down!”
Now, after understanding the Wave Structure of Matter, we see that the ‘instantaneous’ forces are local exchanges to the local quantum wave medium. No mystery.
#### Let’s calculate inertia.
The forces of inertia, $F = ma$, are tremendously smaller (by a factor of roughly $10^{-40}$) than electric charge forces. Therefore they can be perturbations of the electric forces. Acceleration causes a change of the electrons' wavelengths in the quantum wave medium. This wavelength change disturbs the amplitude balance with local waves of the quantum wave medium. The MAP corrects the imbalance by forces to move the accelerated resonance with respect to the quantum wave medium. Forces are energy exchanges which take place between the accelerated electron and local Ether waves of all other matter. The recoil energy exchanges are eventually transmitted to other masses of the universe, seeking a MAP equilibrium.
Compute a perturbation, using a force on the accelerated mass analogous to electric force on an accelerated charge:
Formula 9
${\bf\sf\ F_{e}} = e'\,{\bf\sf E}$
where $F_{e}$ = electric force, ${\bf\sf E}$ = electric field and $e'$ are electric charges. In analogy:
Formula 10
${\bf\sf\ F_{m}} = m'\,{\bf\sf M}$
where ${\bf\sf\ F_{m}}$ = mass force (inertial force), ${\bf\sf M}$ = mass field and $m'$ are masses.
The ${\bf\sf E}$ field of an accelerated charge $e$ depends on the magnetic vector potential ${\bf\sf A}$. That is,
Formula 11
${\bf\sf\ E} = \dfrac{\partial{\bf\sf A}}{\partial t} = \dfrac{e' {\bf\sf a}}{4 \pi \varepsilon_{0}\ c^{2}\,r}$
where ${\bf\sf A}$ is vector potential and ${\bf\sf a}$ is acceleration.
For the particle $m$, assume a mass field derived from an analogous acceleration potential:
Formula 12
$\mbox{mass field} = {\bf\sf\ M} = \dfrac{m' {\bf\sf a} G}{c^2\,r}$
The tiny gravity constant $G$, has replaced the large electric constant $\frac{1}{4 \pi \varepsilon_{0}}$, which determines the perturbation magnitude. To find the force on the masses $m'$, set $m'$ equal to the mass of the universe:
Formula 13
$m' = d_{u} V_{u} = d_{u}\,\frac{4}{3}\pi\,R^{3}$
where $d_{u}$ = mass density of the universe and $V_{u}$ is the Hubble volume of the universe. Choose the average distance $R$ of masses $m'$ as half the Hubble radius, $R = \frac{c}{2H}$. The force between the particle $m$ and masses $m'$ becomes:
Formula 14
${\bf\sf F} = m'{\bf\sf M} = \dfrac{d_{u}\,\frac{4}{3}\pi (\frac{c}{2H})^{3}\,G m {\bf\sf a} }{c^{2}\,r} = \left( \dfrac{8 \pi G\ d_{u}}{3H^{2}} \right) m {\bf\sf a}$
Choose $d_{u}$ equal to the density of a flat (critical) universe $d_{c} = \dfrac{3H^{2}}{8\pi G}$. Insert it into Equation 14. Nearly everything cancels, so the factor in parentheses above becomes one and the remainder is Newton's Law of inertia, $F = ma$. This surprising result shows that inertial mass equals gravitational mass as observed, predicts a flat universe, and reaffirms Mach's Principle.
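Spelling out that final substitution (just the algebra the sentence above describes):

$\left( \dfrac{8 \pi G\ d_{c}}{3H^{2}} \right) m {\bf\sf a} = \left( \dfrac{8 \pi G}{3H^{2}} \cdot \dfrac{3H^{2}}{8\pi G} \right) m {\bf\sf a} = m {\bf\sf a}$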
This result may clarify another controversial paradox: the cause of the redshift and the Big Bang. Note that the Hubble radius, $R = \frac{c}{2H}$ above, is proportional to the presumed age of the object $m$. If $m$ were younger than the matter near us, as suggested by the quasar studies of Halton Arp, then the mass forces would be less[29]. This is exactly the requirement used by Jayant Narlikar and Halton Arp to explain the origin of quasars and the redshift without an expanding universe or the big bang[30].
### 2. Instantaneous Gravity Forces.
Gravity forces were regarded as mysterious because they appear to act instantaneously. Astronomers and spacecraft navigators get the correct answers for the motions of the planets and the spacecraft by assuming gravity acts instantaneously. Why? Using the 'exchange to the medium' mechanism above, we can assume that if energy is exchanged between a particle and the nearby medium, the space or the medium must be moving or changing. The only possible motion we know is the ‘redshift’ which appears as if space were expanding. We try that below by calculating gravity force using redshift measurements and the Wave Structure of Matter. It is not necessary to choose a cause of the redshift.
Gravity energy transfer is small (of relative order $10^{-40}$), so treat it as a perturbation of the electric force. If there is a particle motion relative to the quantum wave medium due to space expanding, the inward waves of the electron are not exactly the same length as outward waves because the IN wave at a point precedes its companion OUT wave at the same point. The space expansion causes an imbalance of the wavelengths which is proportional to the time and the distance from the center. The MAP will correct the imbalance by movement due to gravity force.
#### Estimate the ratio of the gravity force to the electric force.
Define: $dF$ = gravity force, $F$ = electric force, $T$ = time, $dT$ = a time interval, $R$ = radius to a point. The fractional expansion of space during a time interval $dT$ equals $\frac{dL}{L} = H\,dT$ (the Hubble relation), and the fractional force perturbation is taken in the same proportion, $\frac{dF}{F} = H\,dT$. The time interval to traverse $R$ is $dT = \frac{R}{c}$. The measure of distance for a charged-particle wave is its wavelength, so approximate $R = \frac{h}{mc}$.
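Chaining the relations just listed (an intermediate step, made explicit here for clarity):

$\dfrac{dF}{F} = H\,dT = \dfrac{H R}{c} = \dfrac{hH}{mc^{2}}, \qquad \mbox{so} \qquad \dfrac{F}{dF} = \dfrac{mc^{2}}{hH}$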
Using these relations, Wolff obtains the ratio of the gravity and electric forces between a proton and an electron[13]:
Formula 15
$\dfrac{\mbox{electric force}}{\mbox{gravity force}} = \dfrac{F}{dF} = \dfrac{mc^{2}}{hH} = 5.8 \times 10^{39}$
Compare this with the measured ratio $\dfrac{e^{2}}{4 \pi \varepsilon_{0}\ G\ m_{e}\,m_{p}} = 2.3 \times 10^{39}$. They agree within Hubble error. This match is not likely to be a coincidence because these are very large numbers.
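For reference, the quoted measured ratio can be recomputed from standard constants (a small sketch; the constant values below are the usual textbook ones, not taken from this article):

```python
# Electric-to-gravitational force ratio for a proton-electron pair.
e   = 1.602e-19   # elementary charge, C
k_e = 8.988e9     # Coulomb constant 1/(4*pi*eps0), N m^2 / C^2
G   = 6.674e-11   # Newton's constant, N m^2 / kg^2
m_e = 9.109e-31   # electron mass, kg
m_p = 1.673e-27   # proton mass, kg

ratio = k_e * e**2 / (G * m_e * m_p)
print(ratio)      # ~2.3e39, the measured value quoted above
```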
This result helps understand the origin of gravity and the origin and growth of matter in the universe. For example, continuous creation of matter may produce the redshift, which in turn creates the gravity forces above. This perturbation is like an induction of a gravity force by the changing space. The cause of the redshift is also the cause of gravity. Such an interpretation is more satisfactory than the Big Bang.
### 3. Instantaneous Magnetic Forces.
Peter Graneau has done experiments to show that energy exchanged by magnetism appears instantaneous, not according to the Poynting vector ${\bf\sf E}\!\times\!{\bf\sf H}$[31]. His results are explained as an instantaneous perturbation of the electric force, where the perturbing factor is the relative velocity law and special relativity. Lorrain and Corson rewrite this little-known, 90-year old derivation with the well-known result[32]:
Formula 16
$\mbox{Magnetic force on a moving charge} = {\bf\sf F} = q ({\bf\sf v}\!\times\!{\bf\sf B})$
where $q$ is the charge with relative velocity ${\bf\sf v}$, and ${\bf\sf B}$ is the magnetic field.
#### Importance of the IN Waves.
The inward waves are just as real as their symmetrical partners, the outward waves. Neglect of the IN waves often gives an incorrect result because the inward and outward waves contribute equally when quantum properties are involved. Both the IN and OUT waves together are necessary to communicate between particles. Difficulties are not encountered when using full quantum theory since it implicitly contains both waves. This is one reason why quantum mechanics has stood the test of time.
### 4. The EPR Paradox.
Quantum theory puzzles are often created by incomplete knowledge of the Wave Structure of Matter and the energy exchange mechanism involving the IN/OUT waves. This puzzle arose because of neglect of the role of the IN/OUT waves which transfer information between the source and the receivers. The particles involved must fulfill physical boundary conditions in order for their wave sets to resonate and initiate an energy shift upward in one atomic state and a shift downward in the other. Exchange of information of the boundary conditions must take place, unseen by us, before the final energy shifts which we actually observe. Not knowing the precursor information exchange has taken place, one feels the energy exchange is instantaneous and mysterious.
In 1935, Einstein, Podolsky and Rosen (EPR) proposed a gedanken (thought) experiment which they thought could not be accounted for by quantum theory[33]. They trusted the causality idea that “Events cannot occur before their causes” and concluded that quantum theory was not always right. The experiment would prove it. In their experiment, polarized photon pairs are emitted from a central source, pass through the adjustable polarization filters on the left and right, and enter two coincidence detectors on each side. Simultaneous detection (coincidences) are recorded and plotted as a function of the angular difference between the filter settings.
The central source simultaneously emits paired photons which always have parallel polarization. The polarization filters at each of the oppositely-located detectors can be set at any angle with respect to each other. If the filters are at right angles, there will be no coincident photons detected. If filters are set parallel, all the photon pairs will be detected. The plot of coincident detections versus the angular difference of the filter settings was output of the experiment. The shape of this plot became a controversy.
The EPR paper predicted a straight line plot. The prediction was based upon the belief that two independent ‘photons’ traveled from source to detectors. They reasoned that since ‘photons’ cannot travel faster than the speed of light, neither photon detector could have advance knowledge of the polarization of the photon entering the other detector. Therefore the plot would be linear.
Einstein also knew that quantum theory predicted the plot of the experiment to be a somewhat curved line, in violation of the photon concept. No one tried this experiment for 37 years because everyone trusted the EPR argument! Then the experiment was carried out many times and quantum theory was verified. This was a big surprise because the failure of the photon concept, and apparently causality, suggested that communication was taking place at speeds greater than the velocity of light, perhaps instantaneously.
The most recent experiment by Aspect et al. used acoustical-optic switches at a 50 MHz rate to shift the polarizers during the supposed flight of the photons to eliminate effects of one detector on the other[34]. They reported that the EPR assumption was violated by five standard deviations, whereas quantum theory was verified.
Interestingly, a contemporary of Einstein, the Dutch physicist Hugo Tetrode, took another view of causality even more puzzling at the time. In 1922, he made the remarkable proposal that a particle never emits radiation except to another particle[4]. He said:
“The Sun would not radiate if it were alone in space and no other bodies could absorb its radiation.”
and
“..a star in my telescope, 100 light years away, already knew 100 years ago that I would observe it tonight.”
It now appears that Tetrode was correct! Tetrode appeared to understand the advance information exchange process but never wrote his ideas in detail.
#### Explanation of the EPR Paradox.
Causality is not violated in the EPR experiments if one understands the process of radiation in terms of the inward and outward waves. The IN/OUT waves contain information which identifies the atoms, their energy and polarization states. Before the actual energy shifts can occur in the detectors, the IN/OUT waves determine if source and detector have matching boundary conditions. If suitable, the final exchange process is a coupling of two resonant oscillators, the source and detector. One increases its frequency by $df = \frac{dE}{h}$, and the other decreases frequency by $df$. The exchange proceeds in five stages:
1. Before changes of state occur, the IN/OUT waves of source and detectors contain information of their own energy state (frequency) and polarization, including the filter settings.
2. Before transition, all three devices ‘learn’ the wave state of the other two, otherwise the exchange could not begin because it must conform to the MAP. This precursor information exchange is not observable by humans because there are no frequency shifts which we can detect. It travels at velocity $c$ of the waves.
3. In a transition stage, if an exchange is possible, the IN and OUT waves begin a resonant exchange of their wave frequencies and amplitudes to minimize the total amplitudes, following the MAP.
4. In a final stage, the source atom (S), which has the higher energy level, shifts its energy state downward. This event is observed by us and interpreted as a “photon” leaving the S atom. Information of the lowered energy state of S arrives later at the detector atom D. We interpret this as if a “photon” particle had arrived at D with the velocity $c$ of the quantum waves.
5. Atom D changes its energy state upward, thus satisfying MAP. We observe this event and imagine a “photon” had gone from S to D. MAP is satisfied and both atoms remain in their now stable states.
We see that both detectors receive information via the IN-OUT waves concerning the frequency and polarization of the future photon pair before the change of energy state of the detector. This precursor information was conveyed by quantum waves and not moving 'photons'. Communication traveled between the atoms at the speed $c$ of quantum waves. Thus there was no violation of causality even though it was unobservable to us. We can only observe the final energy state changes at the source and detector.
## J. Proof of the Wave Structure of Matter
Before the Wave Theory of Matter, all laws of physical phenomena were obtained from empirical observations. They were experimentally observed to be true but were not predicted from any underlying physical cause. Their existence was a matter of faith in nature. Nature became a god. Now, using the quantum wave description of the electron, the natural laws are predicted as observed. They are not god-given but are results of the quantum wave structure of the universe. The prediction of the natural laws is overwhelming evidence that the quantum wave description is correct. It is not possible to review all the laws but several interesting examples are mentioned below. The remainder can be found in the literature.
### Conservation of Energy
In an energy exchange, a source shifts frequency downward and a receiver shifts frequency upward. These exchanges must occur between resonant states with identical frequencies, resulting in equal and opposite energy changes. This is the origin of conservation of energy.
### Spin of the Electron:
Spin is a result of rotation of the inward (advanced) quantum waves of an electron at the electron center in order to become the outward (retarded) waves. Rotation is required to maintain proper phase relations of the two wave amplitudes, similar to mirror reflection of e-m waves. The spherical rotation, which is a unique property of 3D space, can be described using SU(2) group theory algebra. In SU(2), the IN and OUT waves of the charged particle are the elements of a Dirac spinor wave function. Thus all charged particles satisfy the Dirac Equation[23, 28].
### Origin of Quantum Mechanics and Special Relativity
Quantum mechanics and special relativity have one feature in common: Both laws depend on the relative velocity between two particles. Noticing this, we immediately ask: What happens to the waves of two Space Resonances (SR) in relative motion with velocity $\beta = \frac{v}{c}$? One SR is a source and the other SR is an observer (detector). To answer this, write out the Equation 2 of a space resonance modified by the Doppler effect:
Formula 17
$\Psi_{received} = \mbox{Doppler shifted}(\Psi_{IN} + \Psi_{OUT}) = \dfrac{2 \Psi_{0} e^{i k \gamma (ct + \beta r)} \cdot sin[k \gamma (\beta c t + r)] }{r}$
where the wave-number $k = \frac{2\pi}{\mbox{wavelength}}$, $r$ = radius, $\beta = \frac{v}{c}$, and $\gamma = \dfrac{1}{\sqrt{1 - \left( \frac{v}{c} \right)^{2}}}$.
Equation 17 gives the waves seen by either SR. Both see an exponential oscillator of the quantum wave frequency, modulated by a sinusoid containing the deBroglie wavelength. Each SR receives identical Doppler waves from the other because the two SRs are symmetrical compared to each other. Because of this symmetry, the resulting quantum and relativistic effects do not depend on the direction of relative motion, exactly as observed.
The Doppler effect of relative motion also creates a deBroglie wavelength, $\frac{h}{mv}$, in the modulation of the electron waves in Equation 17. Recalling that the deBroglie wavelength is the experimental basis of quantum theory, we understand how quantum theory arises from the space resonance wave structure. Only a wave structure of matter could produce this behavior.
The relativistic mass increase factor in Equation 17 is $\gamma = \dfrac{1}{\sqrt{1 - \left ( \frac{v}{c} \right )^{2}}}$. This result is remarkable! The Doppler effect miraculously changes the combined IN and OUT wave frequencies in exactly the right way so that every momentum and mass (frequency) increases by the observed relativistic factor, $\gamma$. This is the origin of the relativistic mass increase of moving particles - often sought but found only in the Wave Structure of Matter.
## K. Conclusions
### 1. Instantaneous Action is Misunderstood.
Causality is not actually being violated in the puzzling events termed ‘Instantaneous Action’. Instead, the strange events are merely appearances. They were created by our incomplete knowledge of the Wave Structure of Matter and of the energy exchange mechanism. Actually, all energy and information transfer is at $c$, the speed of light. The cause of these events lies in the wave structure of the charged particles and the universal quantum wave medium. The Wave Structure of Matter is itself a new exciting forefront of science which displays the inter-dependence of all matter in the universe and restores some of the original adventurous spirit of natural philosophy.
### 2. An Inter-dependent Universe.
The most extraordinary conclusion is that the laws of physics and the structure of matter ultimately depend upon waves established by the matter itself. Every particle communicates its wave state with all other matter so that energy exchange and the laws of physics are properties of the entire ensemble. Mach's Principle is just one of a family of inter-dependent principles.
### 3. Two Views of our Universe.
Depending upon whether we observe with our human senses or with laboratory quantum-logic, we see different worlds. One world that we see with five senses is our familiar 3D environment governed by the natural laws. Electromagnetic energy exchanges stimulate our senses to form mental images of this world. These images create our sense of human reality. This is termed the "Energy World" since energy-exchange allows us to observe it.
"The "Quantum World," composed of unseen quantum waves which form the structure of the fundamental particles (electrons, protons, neutrons), can only been seen with laboratory instruments. We cannot observe these waves directly although they fill the empty space around us. We know of their existence when two particles change their quantized wave states (energy levels), and one of the two particles is in a human sense organ. Quantum waves and the quantum wave medium are the hidden fountainheads of both worlds.
### 4. Theories of Everything.
There is a story about a Newtonian physicist who challenged the idea that the Earth was supported on Atlas's shoulders. He asked, "What is Atlas standing on?" The reply, "On a turtle". "And what is the turtle standing on?" "On another turtle." It was turtles all the way down. The physicist scoffed and thought he had won the argument.
Have we finally found the theory of everything? In my opinion the search has no end. True, wave-structure has identified length, time, and mass. It explains the origins of the laws, resolves the wave-particle duality and other paradoxes, and provides a new tool for science. This might convince us that we finally understand everything. But do we? Inevitably a new question pops up: "How do the properties and structure of the quantum wave medium arise?" This new question is created by the old question which we thought we had so cleverly explained. Again it is, "Turtles all the way down." The next frontier is to learn and understand the quantum wave medium. Wolff has begun a world-wide-web public forum to investigate the Quantum Wave Structure of Matter[25].
## L. References and Further Reading
1. Hendrik Lorentz, "Theory of Electrons", Leipzig (1909), Dover Books (1952).
2. William Clifford (1876), "The World of Mathematics", p 568, Simon & Schuster, NY (1956).
3. Ernst Mach, "The Science of Mechanics", (German, 1883, Engl: London, 1893).
4. H. Tetrode, "Z. Physik", 10, 317 (1922).
5. Louis de Broglie, PhD Thesis "Recherches sur la Théorie des Quanta", U. of Paris (1924).
6. P. A. M. Dirac, Nature, 174, 321, p 572 (1937).
7. Walter Moore, "Life of Schrödinger", Cambridge University Press (1989).
8. J. Wheeler and R. Feynman, "Interaction with the Absorber as the Mechanism of Radiation", Rev. Mod. Phys., 17, p 157 (1945).
9. T. Phipps, "Towards a Fundamental Mechanics- II", Found. Phys., 6, No. 1, pp 71-82 (1976).
10. John Cramer, “The Transactional Interpretation of Quantum Mechanics," Rev. Mod. Phys., 58, pp 647-687 (1986).
11. Milo Wolff, "Exploring the Physics of the Unknown Universe", ISBN 0-9627787-0-2. Technotran Press, CA (1991).
12. Milo Wolff, "Microphysics, Fundamental Laws and Cosmology", Proc. 1st Int'l Sakharov Conf. Phys., Moscow, May 21-31, 1991, pp 1131-1150, Nova Sci. Publ., NY. ISBN: 1560720735, editors: L. Keldysh & V. Fainberg, (1992).
13. Milo Wolff, "Fundamental Laws, Microphysics and Cosmology", Physics Essays, 6, pp 181-203 (1993).
14. Milo Wolff, "Exploring the Universe and the Origin of its Laws", Int'l. Conf. on Grav. and Cosmology, IUCAA, Pune, India, Dec 13-19, (1995).
15. Milo Wolff, "Beyond the Point Particle - A Wave Structure for the Electron", Galilean Electrodynamics, 6, No. 5, pp 83-91, (1995).
16. Milo Wolff, "Beyond the Point Particle - The Origin of Quantum Phenomena", Conf. Hon. Prof. J. Vigier, York U., 24 June 1995.
17. Milo Wolff, "The Logical Origins of Physical Laws from Cosmology", AAAS Conf., U. Ariz., Flagstaff, June, 1996.
18. Milo Wolff, "Worlds and Anti-worlds Revisited", AAAS Conf., U. Ariz., Flagstaff, June, 1996.
19. Milo Wolff, "The Eight-fold Way of the Universe", Apeiron, 4, No. 4, Oct. 1997.
20. Milo Wolff, "Properties of the Quantum Ether and its Relationship to Natural Laws", NPA Region. Conf., U of Conn., June 9-13, 1997.
21. Milo Wolff, "Exploring the Universe and the Origin of its Laws", Frontier Perspectives, Temple U., 6, No 2, (1997).
22. Milo Wolff, "Relativistic Mass Increase and Doppler Shift Without Special Relativity", Galilean Electrodynamics, 8, No. 4, (1997).
23. Milo Wolff (1997), "Origin of the Spin of the Electron", Amer. Phys. Soc. web site: http://publish.aps.org/eprint/gateway/eplist/aps1997nov22_006.
24. Milo Wolff (1998), "The Eightfold Way of the Universe", Amer. Phys. Soc. web site: http://publish.aps.org/eprint/gateway/eplist/aps1998may13_003.
25. Milo Wolff (1998), Web site: "Milo Wolff's Quantum Corner", http://www.quantummatter.com/.
26. Milo Wolff, “Matter Waves and Human Consciousness”, The Noetic Journal, 2, No. 1 (1999). In press.
27. C. W. Misner, K. Thorne, and J.A. Wheeler, "Gravitation", (W. H. Freeman Co.), p 1149 (1973).
28. E. Batty-Pratt, and T. Racey, "Geometric Model for Fundamental Particles", Intl. J. Theor. Phys., 19, pp 437-475 (1980).
29. H. Arp (1993), "Fitting Theory to Observation - From Stars to Cosmology", in Progress in New Cosmologies, pp 1-28, Plenum Press, NY, 1993.
30. J. Narlikar and H. Arp, “Flat Space time Cosmology”, Astro. Phys. J., 405, 51-56, (1993).
31. P. Graneau and N. Graneau, "Newton vs. Einstein", Carlton Press, ISBN 0800624514X, (1993).
32. P. Lorrain and D. Corson, "Electromagnetic Fields and Waves", W. H. Freeman Co. (1970).
33. A. Einstein, B. Podolsky and N. Rosen, "Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?", Phys. Rev., 47, 777 (1935).
34. A. Aspect, J. Dalibard, and G. Roger, Phys. Rev. Lett., 49, 1804 (1982).
http://mathoverflow.net/revisions/48864/list | Return to Question
3 added 191 characters in body; edited title
Is Grothendieck a computer?
I can't resist asking this companion question to the one of Gowers. There, Tim Dokchitser suggested the idea of Grothendieck topologies as a fundamentally new insight. But Gowers' original motivation is to probe the boundary between a human's way of thinking and that of a computer. I argued, therefore, that Grothendieck topologies might be more natural to computers, in some sense, than to humans. It seems Grothendieck always encouraged people to think of an object in terms of the category that surrounds it, rather than its internal structure. That is, even the most lovable mathematical structure might be represented simply as a symbol $A$, and its special properties encoded in arrows $A\rightarrow B$ and $C\rightarrow A$, that is, a grand combinatorial network. I'm tempted to say that the idea of a Grothendieck topology is something of an obvious corollary of this framework. It's not something I've devoted much thought to, but it seems this is exactly the kind of reasoning more agreeable to a computer than to a woolly, touchy-feely thinker like me.
So the actual question is, what other mathematical insights do you know that might come more naturally to a computer than to a human? I won't try here to define computers and humans, for lack of competence. I don't think having a deep knowledge of computers is really a prerequisite for the question or for an answer. But it would be nice if your examples were connected to substantial mathematics.
I see that this question is subjective (but not argumentative in intent), so if you wish to close it on those grounds, that's fine.
Added, 11 December: Being a faulty human, I had an inexplicable attachment to the past tense. But, being weak-willed on top of it all, I am bowing to peer pressure and changing the title.
http://math.stackexchange.com/questions/tagged/continuity+metric-spaces | # Tagged Questions
### Proving Continuity with Open Sets
I have a doubt about how to prove continuity using the definition in terms of open sets. The $\epsilon$-$\delta$ definition of continuity is not very pleasant to work with, however, I know what must ...
### Finding a bounded, non-compact set of functions $f:[0,1]\to\Bbb R$
Consider the metric space $(X, d)$ given by $$X = \{\text{all continuous functions}\,f:[0,1]\to\Bbb R\}$$ with $$d(f,g)=\sup_{t\in[0,1]}|f(t)-g(t)|.$$ Find with proof a set $A \subseteq X$ with ...
### Find a convergent function in metric space
Let $C[−1, 1]$ be the space of continuous functions equipped with the metric $p(f,g) = \max\{|f(x)−g(x)| \mid x \in [−1, 1]\}$. Then the sequence of functions $(f_n):[−1,1]\rightarrow \mathbb{R}$ ...
### Intuitive explanation of ball-based definition for continuity of functions in metric spaces
First of all, hat tip to @Fayz for providing this definition. Backstory: I broke my glasses several days ago and, in the meantime, this important definition was written on a board I could not see. ...
### Need to confirm: Sup Metric $C[0,1]$, question about boundary
For the sup metric, $C[0,1]$. Let $S \subset C[0,1]$ be given by: $$S=\left\{f:[0,1]\to \mathbb{R} \ : \ 0 \leq f\left(\frac{1}{2}\right)<1\right\}$$ The question is simple: is this set open or ...
### Continuity in metric space, TRUE or FALSE?
Let $(X,d)$ and $(Y,e)$ be metric spaces , and let $f: X \to Y$ be a function. True or false ? Give a proof or a counterexample as appropriate. $(a)$ If $d$ is the discrete metric on ...
### Continuous map between metric spaces
Suppose $X,Y$ are metric spaces, let $A \subset X$ be a bounded subset of $X$ and $f: A \to Y$ to be a continuous bjection. Prove or disprove that $f^{-1}$ is continuous. Remark: If each closed ...
### Let $f:(X, d) \mapsto (Y,d)$ be an mapping such that $Graph (f)$ is connected. [duplicate]
Where $X$ is connected. Does it imply $f$ to be continuous?
### How to show that a continuous map on a compact metric space must fix some non-empty set.
Suppose $(X,d)$ is a compact metric space and $f:X\to X$ a continuous map. Show that $f (A)=A$ for some nonempty $A\subseteq X.$ I start this by supposing that $A_0:=X$ and $A_{n+1}:=f(A_n)$ for ...
### Is a continuous function like a homomorphism/isomorphism for metric spaces?
If I had to define a notion of a homomorphism/isomorphism on metric spaces, I'd say something like this. Let $A$ and $B$ be metric spaces with norms $\| \cdot \|_A$ and $\| \cdot \|_B$ respectively. ...
### Why $f(x) = \frac{d(x,A)}{d(x,A)+d(x,B)}$ is uniform continuous?
Let $X$ be a metric space, $A$ and $B$ are two subsets of $X$. $d(x, A) = \inf_{z \in A}d(x,z)$ and $\inf_{x \in A,y \in B}d(x,y) = \delta > 0$ We define $$f(x) = \frac{d(x,A)}{d(x,A)+d(x,B)}$$ ...
### Compact metric space: proof $\text{diam}(K)$
I am to assume that $K$ is a compact metric space. I must prove that there are two points $x,y$ contained in $K$ such that $d(x,y)=\text{diam}(K)$. Recall \$\text{diam}(K)= \sup \{ d(x,y) \mid x,y ...
### $f:X\rightarrow X$ be a continuous map, we need to show $f(\cap A_n)=\cap f(A_n)$
let $X$ be a complete metric space with metric $d$ and $A_{i}$'s are nested sequence of closed sets in $X$ i.e $[A_1\supseteq A_2\dots]$ such that $\sup\{d(x,y):x,y\in A_n\}\to0$ as $n\to\infty$ ...
### equivalent metric
Let $(X; d)$ and $(Y; d')$ be metric spaces, and let $f : X \to Y$ be continuous. Define $df (x; y) = d(x; y) + d'(f(x); f(y))$ for $x, y \in X$. Show that $df$ is a metric on $X$ that is equivalent ...
### Consequence of Invariance of Domain
The Invariance of Domain theorem states that Given a continuous injection $f : U \to \mathbb{R}^n$, where $U$ is a nonempty open subset of $\mathbb{R}^n$, $f$ is an open map. These slides (see ...
### Continuous linear functionals
Let L be a continuous linear functional on a metric linear space X. Prove: L(S) is a bounded set for any bounded subset S of X. The metric is translation invariant.
### Bounded functions on subsets of Euclidean space
It is known that given any closed and bounded $X \subseteq \mathbb{R}^n$ and a bounded continuous function $f : X \to \mathbb{R}$, $f(X)$ has a minimum value and maximum value. This can be proved by ...
### Extension of continuous function
The question is: Let $(K,\rho)$ be compact metric space. $F\subset K$ closed. $f:F\rightarrow \mathbb{R}$ continuous. Is there a continuous extension of $f$ on $K$? Attempt: Suppose there exists ...
### Continuity of metric space of integrals of continuous functions
Let $R$ be the real line with the standard metric $d:R \times R \to R$ be defined by $d(x,y) = |x-y|$. Let $X$ be the set of continuous functions $f:[a,b] \to R$ of an arbitrary closed interval ...
### pointwise limit on a complete metric space
Let $\{f_n: X\rightarrow \mathbb{R}\}$ be a sequence of continuous real-valued functions on a complete metric space, $X$. Suppose this sequence has a pointwise limit, $f$. How easy is it to see that ...
### Characterising continuous maps between metric spaces
Let $f:(X,d)\to (Y,\rho)$. Prove that $f$ is continuous if and only if $f$ is continuous restricted to all compact subsets of $(X,d)$. I could do the left to right implication but couldn't do the ...
### Metric Of A Graph
The following is question 6 from page 99 of Walter Rudin's Principles Of Mathematical Analysis. I'm having trouble understanding what the metric of the graph might be (which, as far as I can tell, is ...
### Finding a continuous function with specified properties
This is a homework question in my analysis class: Let $A$ and $B$ be two nonempty closed subsets of a metric space $X$ that do no intersect. Show that there is a continuous function \$f:X\rightarrow ... | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 90, "mathjax_display_tex": 4, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9188490509986877, "perplexity_flag": "head"} |
http://mathhelpforum.com/advanced-applied-math/65415-physics-easy-torque-question.html | # Thread:
1. ## Physics easy torque question
A beam is held in place by two wires, one at each end. A monkey with a mass of 50kg is hanging somewhere on the beam. There is a force of 180 N on one of the wires. Draw the picture, find the force on the 2nd wire and where the monkey is hanging if the beam is 2m long.
Any insight? T = F(d) will be used, I am guessing, and so far I have a line drawn labeled 2 m with two vertical lines on the ends, and an arrow on the left vertical line to show the 180 N of force. Thanks in advance.
2. Well, assuming the beam is in equilibrium, there must be zero net moment and also zero total force in all directions. In this case there are only forces in one direction, the vertical.
So, yes. You have a beam which is 2m long. Let's imagine that the far left is A, and the far right is B. So length AB = 2m.
Imagine that the wire at B is the one with a force of 180N.
Then the wire at A will have an upwards force, $F_A$ which creates a torque $T = -F_{A}(2)$ about point B. This torque will be clockwise, and that's why it's negative.
Then there's the monkey. The monkey has mass 50kg, and hence it has a weight $F_{monkey} = 9.81(50)$.
The weight of the monkey will cause an anticlockwise torque about B, given by $T_{monkey} = 9.81(50) \times d$ where d is the distance between the monkey and B.
Now we know that the sum of the moments about B must be zero, hence:
$\Sigma T = -2F_{A} + 9.81(50)d = 0$
And we also know that the sum of the forces is zero, hence:
$\Sigma F_{y} = 180 + F_{A} - 50(9.81) = 0$
Using the last equation, you can get a value for $F_{A}$, and if you substitute that value into the 2nd to last equation, you can get a value for $d$, and that's everything!
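If it helps, here is a tiny numeric version of those last two steps (the variable names are mine, not part of the original problem):

```python
g = 9.81          # m/s^2
m = 50.0          # monkey mass, kg
F_B = 180.0       # given tension in the wire at B, N
L = 2.0           # beam length AB, m

# Vertical equilibrium: F_A + F_B - m*g = 0
F_A = m * g - F_B             # about 310.5 N

# Moments about B: -L*F_A + m*g*d = 0
d = L * F_A / (m * g)         # monkey is about 1.27 m from B

print(F_A, d)
```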
One major assumption here is that the beam itself has zero/negligible weight.
3. Excellent, thanks. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9491866230964661, "perplexity_flag": "head"} |
http://www.physicsforums.com/showthread.php?t=517120 | Physics Forums
## Dynamical systems notation
I have some questions about what I think is a fairly standard and common short-hand notation used in physics.
Today I watched lecture 2 in the nptelhrd series Classical Physics by Prof. V. Balakrishnan. In it, he models a kind of system called a simple harmonic oscillator, I think using $TC = C \times \mathbb{R} = \mathbb{R}^2$ for a state space (He calls it phase space, but I'll use the more general name, as phase space is said elsewhere to have a coordinate called "momentum" whereas he calls the corresponding coordinate "velocity".), where $C$ is the configuration space of the system, and $TC$ the tangent bundle thereon. He labels points in state space with $q$ and $\dot{q}$, thus $(q,\dot{q}) \in \mathbb{R}^2$. So far so good. Then he writes some equations:
$$\ddot{q}=-\omega^2 q, \enspace\enspace\enspace V(q)=\frac{1}{2}m\omega^2q^2, \enspace\enspace\enspace m\ddot{q}=-\frac{\mathrm{d} V}{\mathrm{d} q}(q);$$
$$\dot{q}=v, \enspace\enspace\enspace \dot{v}=-\frac{V'(q)}{m}, \enspace\enspace\enspace\frac{\mathrm{d} v}{\mathrm{d} q}=-\frac{\omega^2}{v}q.$$
I'm not satisfied that I understand all of these symbols.
I think $\omega = \sqrt{k/m}$ and $m$ are constants (angular velocity and mass). I think $\ddot{q}$ should mean the value at $t$ of the second derivative of some function whose value at $t$ is labelled $q$. I'm guessing this implicit function is the first component function, $\gamma_1$, of a curve function, $\gamma : \mathbb{R} \rightarrow \mathbb{R}^2 \; |\; t \mapsto (\gamma_1(t),\gamma_2(t))$, whose image is a trajectory in state space, and that this is an arbitrary element of the set of trajectories defined by the differential equation(s). I think $V : \mathbb{R} \rightarrow \mathbb{R}$ is a scalar field on the configuration space $C = \mathbb{R}$.
Does $\dot{q}=v$ mean $\gamma_2(t)=f\circ\gamma_1(t)$ for some unknown function $f:\mathbb{R}\rightarrow \mathbb{R}$?
If so, does $\dot{v}$ mean $(f\circ\gamma_1)'(t)$ or $f'\circ\gamma_1(t)$? I'm guessing the latter.
Is $-\frac{V'(q)}{m}$ to be read as $-\frac{(V\circ\gamma_1)'(t)}{m}$ or $-\frac{V'\circ\gamma_1(t)}{m}$? Again, I'd guess the latter.
$$\frac{\mathrm{d} v}{\mathrm{d} q}=-\frac{\omega^2}{v}q$$
Is it
$f'\circ\gamma_1(t)=-\frac{\omega^2}{f\circ\gamma_1(t)}\gamma_1(t) \enspace ?$
I think I've got it now. He writes points in the image of the curve as (q(t),v(t)), meaning $\gamma[t]=(\gamma_1[t],\gamma_2[t])$. (I'll use square brackets here around the arguments of a function, to disambiguate them from the rounded brackets used to show order of operations.) His equations $\dot{q}=v$ and $\dot{v}=-V'(q)/m$ mean $$\gamma'[t]:=(\gamma_1'[t],\gamma_2'[t])=\left ( \gamma_2[t],-\frac{V'\circ\gamma_1[t]}{m} \right )$$ So, although the "position" coordinate $q$ and the "velocity" coordinate $\dot{q}$ are independent variables for functions whose domain is the state space, the particle's "position" $\gamma_1$ and "velocity" $\gamma_2 = \gamma_1'$ are functions related in the familiar way, the latter being always the derivative of the former, in any dynamical system. It just happens that the same names, and often the same symbols, are used for both concepts. Finally, $$\frac{\mathrm{d} v}{\mathrm{d} q}$$ can be analysed, with the Leibniz notation for the single-variable chain rule in mind, as $$\frac{\mathrm{d} v}{\mathrm{d} t}\frac{\mathrm{d} t}{\mathrm{d} q}$$ which, all being well, means $$(\gamma_2\circ(\gamma_1^{-1}))'\circ\gamma_1[t]$$ $$=(\gamma_2'\circ(\gamma_1)^{-1}\circ\gamma_1)[t]\cdot((\gamma_1^{-1})'\circ\gamma_1)[t]$$ $$=\gamma_2'[t]\cdot((\gamma_1^{-1})'\circ\gamma_1)[t]$$ $$=\frac{\gamma_2'[t]}{\gamma_1'[t]},$$ so that, in this context, an expression like $\mathrm{d}f$ can be read as another notation for $f'$.
Proof of the identity used in the final step. Let $y=f[x]$ such that $x=g[y]$, where $x$ and $y$ are arbitrary real numbers. Then $$(g\circ f)'[x]=((g'\circ f)[x])\cdot(f'[x]).$$ But $g\circ f[x]=g[y]=x$, so $g\circ f$ is the identity function on $\mathbb{R}$. So $$(Id)'[x]=1=((g'\circ f)[x])\cdot(f'[x]),$$ so, for $f'[x]\neq 0$, $$g'[y]=\frac{1}{f'[x]}.$$ That is: $$(f^{-1})'\circ f[x]=\frac{1}{f'[x]}.$$ Hence the notation $$\frac{\mathrm{d} x}{\mathrm{d} y}=\frac{1}{(\frac{\mathrm{d} y}{\mathrm{d} x})}.$$
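As a sanity check on this reading, here is a small numerical experiment with an explicit harmonic-oscillator solution (the particular solution $\gamma_1[t]=\cos\omega t$ is my own choice, purely for illustration):

```python
import numpy as np

w = 2.0                        # the constant omega of the system
t = 0.7                        # any time with sin(w*t) != 0

q = np.cos(w * t)              # gamma_1[t]
v = -w * np.sin(w * t)         # gamma_2[t] = gamma_1'[t]

dq_dt = -w * np.sin(w * t)     # gamma_1'[t]
dv_dt = -w**2 * np.cos(w * t)  # gamma_2'[t] = -omega^2 * q

lhs = dv_dt / dq_dt            # dv/dq read as gamma_2'[t] / gamma_1'[t]
rhs = -w**2 * q / v            # the phase-plane equation dv/dq = -omega^2 q / v

print(lhs, rhs)                # the two numbers coincide
assert np.isclose(lhs, rhs)
```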
## Dynamical systems notation
Quote by Rasalhague angular velocity
Oops, no, not angular velocity, just a parameter, a constant of the system.
I think my interpretation of the ideas is right, but from this thread (see especially Fredrik's post #19), it seems I may have misunderstood what role the notation $q$ and $\dot q$ play, and that they're really synonymous with $\gamma_1$ and $\gamma_2=\gamma_1'$, but are traditionally used sloppily also to denote the value of these functions. However, I'm still troubled by the widespread insistence that they stand for "independent variables" (which obviously isn't the case if they're defined as $\gamma_1$ and $\gamma_2=\gamma_1'$, or even the values of these functions). Balakrishnan talks about them as independent variables, and Roger Penrose calls them independent variables, in the quote in #23 of the thread I linked to. Penrose also seems to be treating them as (natural?) coordinate functions on the state space. But perhaps he's simultaneously letting them denote the coordinate representations of curves through the state space...
http://mathoverflow.net/questions/81034/compact-embeddings-of-sobolev-spaces-a-counterexample-showing-the-rellich-kondra/81044 | ## Compact Embeddings of Sobolev Spaces: A Counterexample Showing The Rellich-Kondrachov Theorem Is Sharp
Let $U$ be an open bounded subset of $\mathbb{R}^n$ with $C^{1}$ boundary. Let $1 \leq p < n$ and $p^{\ast} = pn/(n-p)$. Then the Sobolev space $W^{1,p}(U)$ is contained in $L^{p^{\ast}}(U)$ and there is a constant $C$, depending only on $p$, $n$, and $U$, such that $$||u||_{L^{p^{*}}(U)} \leq C ||u||_{W^{1,p}(U)}$$ for every $u \in W^{1,p}(U)$ (cf. Theorem 2 in Section 5.6.1 of Partial Differential Equations by Evans).
The Rellich-Kondrachov Compactness Theorem says that $W^{1,p}(U)$ is compactly embedded into $L^{q}(U)$ for every $1 \leq q < p^{*}$. This means two things:
(i) There is a constant $C$, depending only on $p$, $n$, and $U$, such that $$||u||_{L^q(U)} \leq C||u||_{W^{1,p}(U)}$$ for every $u \in W^{1,p}(U)$.
(ii) Every bounded sequence $(u_k)$ in $W^{1,p}(U)$ has a subsequence $(u_{k_j})$ that converges in $L^q(U)$.
Is there a standard counterexample that shows we cannot take $q=p^{\ast}$ in the Rellich-Kondrachov Compactness Theorem? In other words, I am asking for a sequence $(u_k)$ that is bounded in the $W^{1,p}(U)$ norm but has no convergent subsequence in ${L^{p^{\ast}}(U)}$. Note that such a sequence would have a subsequence that converges in $L^q(U)$ for every $1 \leq q < p^{\ast}$ but diverges in ${L^{p^{*}}(U)}$.
Thanks.
-
Isn't the name Kondrashev? – timur Mar 8 2012 at 17:49
The Russian is В.И.Кондрашов. Therefore, according with the BGN/PCGN romanization of Russian, the English transliteration should be "Kondrashov". Nevertheless, for some reason, "Rellich-Kondrachov theorem" gets more Google results than "Rellich-Kondrashov theorem" (small numbers, anyway) – Pietro Majer May 25 at 17:54
## 2 Answers
Yes, there is a standard way. Take any nonzero $u\in W^{1,p}(U)$ with support in a ball, say w.l.o.g. $\operatorname{supp}(u)\subset B(0,r)\subset U$, and consider $$u_\epsilon(x):= u\Big(\frac{x}{\epsilon}\Big).$$ Under this action by dilations, the $L^q$ norm of $u$ and the $L^{p}$ norm of $\nabla u$ rescale with the same powers exactly for $q=p^*$: $$\|u_\epsilon\|_q = \epsilon^{n/q}\,\|u\|_q,$$ $$\|\nabla u_\epsilon\|_p = \epsilon^{\frac{n-p}{p}}\,\|\nabla u\|_p.$$ This means that the normalized family
$$\epsilon^{-\frac{n}{p^*}}\, u\Big(\frac{x}{\epsilon}\Big), \qquad 0<\epsilon\le r,$$
is bounded in $W^{1,p}$ and has constant non-zero norm in $L^{p^*}$; of course it has no convergent subsequence there as $\epsilon\to0$, since it converges a.e. to zero. Note also that it converges to $0$ in $L^q$ for all $q<p^*$, as it must.
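(Added remark, just unwinding the change of variables $x=\epsilon y$ behind the two scalings above:
$$\|u_\epsilon\|_q^q=\int \Big|u\Big(\tfrac{x}{\epsilon}\Big)\Big|^q dx=\epsilon^{n}\int |u(y)|^q\,dy, \qquad \|\nabla u_\epsilon\|_p^p=\int \epsilon^{-p}\Big|(\nabla u)\Big(\tfrac{x}{\epsilon}\Big)\Big|^p dx=\epsilon^{\,n-p}\,\|\nabla u\|_p^p,$$
so the normalization $\epsilon^{-n/p^*}$ keeps the $L^{p^*}$ norm fixed while the $W^{1,p}$ norm stays bounded, because $-\tfrac{n}{p^*}+\tfrac{n-p}{p}=0$.)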
-
This question has cropped up in a work of mine. When the $p^*$ norm is computed without using the gradient value of the sequence, the Gagliardo-Nirenberg "default condition" $1 - n/p + n/p^* = 0$ appears to impose a restriction on the sequence index epsilon, since the sequence index disappears. This has the "effect" that the $W^{1,p}$ bound condition the sequence has to compulsorily verify is ignored, and this sounds illogical, especially when the Rellich-Kondrachov theorem tacitly makes use of such a bound in its proof. On the other hand, since the gradient values of the sequence have to be used in integration to verify the $W^{1,p}$ bound, and keeping in mind that there is a formula expressing a $C^1$ function of compact support in terms of its gradient (by integration by parts against the fundamental solution of the Laplacian), it is possible to obtain estimates for the $p^*$ norm in terms of the $L^\infty$ norm of the defining test function $u$; what is more, such an estimate involves positive powers of the sequence index epsilon even though the Gagliardo-Nirenberg default condition is used, showing that the sequence actually converges to zero in $p^*$ norm! The actual meaning therefore is that the two $p^*$ norm computations, one without using the gradient of the sequence and the other using the gradient and its implied value in integration, are different, and if forcibly compared they lead to the contradiction that $u$ vanishes identically. We have worked out the proof, which relies on nontrivial facts such as strong bounds for the Hardy-Littlewood maximal function. It is to be noted further that the condition $q < p^*$ is only a sufficient condition to establish the Rellich-Kondrachov theorem by interpolation, and there is nothing to support that the theorem cannot be established by a procedure that does not require interpolation. In summary, the counter-example seems vacuous, showing that compactness at $p^*$ may still be an open problem!
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 45, "mathjax_display_tex": 5, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9123409390449524, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/53300/locally-compact-hausdorff-space-that-is-not-normal/112221 | Locally compact Hausdorff space that is not normal
What is a good example of a locally compact Hausdorff space that is not normal. It seems to be well-known that not all locally compact Hausdorff spaces are normal (and only a weaker version of Urysohn's Lemma holds in general in the locally compact Hausdorff case), but I can't seem to think of any examples that demonstrate this, and I have tried all of the "standard" topological counterexamples such as the long line, etc.
-
4 Answers
I think the Tychonoff plank serves as an example. It is obtained by taking the product of the two ordinals $\omega_1+1$ and $\omega+1$, each with the order topology, and removing the corner point $(\omega_1,\omega)$. The product is a compact Hausdorff space, so the plank, as an open subspace, is locally compact. But it is not normal, because the "edges" $\omega_1\times\{\omega\}$ and $\{\omega_1\}\times\omega$ don't have disjoint neighborhoods.
-
I think this is also the standard example of a non-normal subspace of a normal space. – Andreas Blass Jan 26 2011 at 0:34
See also related mathoverflow.net/questions/30662/… – Joel David Hamkins Jan 26 2011 at 0:55
Also, Counterexamples in Topology by Steen and Seebach is good for this type of question. The Tychonoff plank is mentioned in there. – Todd Trimble Jan 26 2011 at 1:03
The example below is slightly "nicer", in that it is first countable as well, while points like $(\omega_1, 0)$ do not have countable local base. – Henno Brandsma Feb 27 2011 at 17:31
Another nice elementary example is the rational sequence topology. For every irrational number $x$ we pick a sequence $q(x)_n$ of rational numbers, all different, that converge to $x$ (in the usual topology on $\mathbb{R}$). A topology on $\mathbb{R}$ is then defined by specifying basic neighbourhoods: a rational number $q$ has $\{ q \}$ as a basic neighbourhood (it is isolated), while an irrational number $x$ has basic neighbourhoods of the form $\{ q(x)_n : n \ge k \}$, $k \in \mathbb{N}$. One checks that this defines a topology in which the irrationals are closed and discrete (in itself), $\mathbb{Q}$ is dense (and open), and $X$ is Hausdorff and zero-dimensional (basic open sets are clopen, this uses that 2 sequences that converge to 2 different irrationals only have at most finitely many terms in common, so no basic neighbourhood can have another irrational in its closure), so Tychonov, and locally compact, as all basic neighbourhoods are compact (finite or convergent sequences). But by Jones' lemma (in a normal space, with dense set $D$ and closed discrete set $A$ we have that $2^{|A|} \le 2^{|D|}$), a proof of which can be found here, e.g.) we have that this space is not normal.
-
This is an excellent example! Is there an explicit construction of two closed, disjoint sets that cannot be separated in this space? – mathahada Feb 27 2011 at 11:57
The Jones' lemma argument is a nice counting argument: in a separable space there are at most $\mathfrak{c} = 2^{\aleph_0}$ many continuous real-valued functions (as each is determined by its restriction to the countable dense subset), and for every subset $B$ of the closed discrete set $A$ we need a distinct continuous real-valued function $f_B$ that is 1 on $B$ and 0 on $A \setminus B$, both of which are closed and disjoint in the whole space. This does need Urysohn functions and some set theory, but you cannot really do topology without these anyway, so it's useful in a course, I think. – Henno Brandsma Feb 27 2011 at 13:06
So to recap (comments can only be so long...), I don't know of any direct argument (2 closed sets that cannot be separated), but the counting argument is nice enough, and can also be used to show non-normality of $S\times S$, where $S$ is the Sorgenfrey line, or for Mrowka's $\Psi$-space (I think the rational sequence topology is called $\Psi$-like by some authors, as it's very similar; that first space is also pseudocompact and non-countably compact, and needs a MAD family on $\mathbb{N}$; see the paper at matwbn.icm.edu.pl/ksiazki/fm/fm41/…). – Henno Brandsma Feb 27 2011 at 13:13
[continuing] Such spaces are non-normal already by the theorem that normal pseudocompact spaces are countably compact. So if a course would cover these notions, I'd include $\Psi$-space as well. – Henno Brandsma Feb 27 2011 at 13:15
I don't think there is a constructive proof like for the Sogenfrey line, at least if the rational sequences are not given beforehand. If we merely consider two disjoint subsets of the irrationals we can arrange for sequences with even denominators to converge to the first set, and for sequences with odd denominators to converge to the other and then basic neighborhoods suffice to separate the sets. – mathahada Feb 27 2011 at 14:18
My example of a locally compact space which is not normal is the Katetov space. This space is defined as follows:
$K = \beta\mathbb{R} \setminus (\operatorname{cl}_{\beta\mathbb{R}}\mathbb{N} \setminus \mathbb{N})$. This space has the countable subset $\mathbb{N}$ as a closed subset without any accumulation point in $K$; that is why this space is not countably compact. But this space is pseudocompact (you can easily check this claim). Hence this space is not normal, because a normal pseudocompact space is countably compact.
-
Spacebook, a database-driven version of Steen and Seebach's Counterexamples in Topology, lists the following locally compact Hausdorff (i.e. T2) spaces that are not normal. (Some of these appear already in other answers.)
Deleted Tychonoff Plank
Open Uncountable Ordinal Crossed with Uncountable Cartesian Product of Unit Interval
Rational Sequence Topology
Thomas’s Plank
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 35, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9427332282066345, "perplexity_flag": "head"} |
http://math.stackexchange.com/questions/61031/dimension-of-a-vector-subspace-of-f-2n/181804 | # Dimension of a vector subspace of $F_2^n$
Let $W$ be a vector subspace of $F_2^n$. Then, $|W| = 2^k$ for $k \leq n$. Is it always true that $\text{dim}(W) = k$? If it is, where can I find a proof?
-
It may be easier to go the other way. If $W$ has a basis consisting of $k$ vectors, how many elements does it have? Try it this way, if you are stuck! – Jyrki Lahtonen Aug 31 '11 at 21:09
## 1 Answer
Let $W$ be a vector subspace of $F_2^n$. Let $k$ = dim($W$). Let $w_1,\dots,w_k$ be a basis of $W$ over $F_2$. Every element $x$ of $W$ can be uniquely written as $x = a_1w_1 + \cdots + a_kw_k$, where $a_i \in F_2$. Hence as a vector space over $F_2$, $W$ is isomorphic to $F_2^k$. Hence $|W| = 2^k$.
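(Added illustration, not part of the original answer: the count can be seen concretely by enumerating all $2^k$ sums of a basis. The basis below is a made-up example in $F_2^5$ with $k=3$.)

```python
from itertools import product

n, k = 5, 3
# three linearly independent vectors in F_2^5 (a hypothetical example basis)
basis = [(1, 0, 0, 1, 0), (0, 1, 0, 0, 1), (0, 0, 1, 1, 1)]

def add(u, v):
    # coordinate-wise addition modulo 2
    return tuple((a + b) % 2 for a, b in zip(u, v))

span = set()
for coeffs in product((0, 1), repeat=k):
    vec = tuple(0 for _ in range(n))
    for c, b in zip(coeffs, basis):
        if c:
            vec = add(vec, b)
    span.add(vec)

print(len(span))  # 8 == 2**3: distinct coefficient tuples give distinct sums, so |W| = 2^k
```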
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 23, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8898283243179321, "perplexity_flag": "head"} |
http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.em/1268404806 | previous :: next
### Minimal Permutation Representations of Nilpotent Groups
Ben Elias, Lior Silberman, and Ramin Takloo-Bighash
Source: Experiment. Math. Volume 19, Issue 1 (2010), 121-128.
#### Abstract
A minimal permutation representation of a finite group $G$ is a faithful $G$-set with the smallest possible size. We study the structure of such representations and show that for certain groups they may be obtained by a greedy construction. In these situations (except when central involutions intervene) all minimal permutation representations have the same set of orbit sizes. Using the same ideas, we also show that if the size $d(G)$ of a minimal faithful $G$-set is at least $c|G|$ for some $c>0$, then $d(G) = |G|/m + O(1)$ for an integer $m$, with the implied constant depending on $c$.
Primary Subjects: 20B35, 20D15
Secondary Subjects: 20D30, 20D60 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 9, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.8433521389961243, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/11488?sort=votes | ## Varieties cut by quadrics
Is there a characterization of the class of varieties which can be described as an intersection of quadrics, apart from the tautological one?
Lots of varieties arise in this way (my favorite examples are the Grassmannians and Schubert varieties and some toric varieties) and I wonder how far one can go.
-
A Google search turned up Remark 1 on page 131 here: books.google.com/… – Qiaochu Yuan Jan 12 2010 at 1:45
## 5 Answers
In fact the answer is in some sense tautological: every projective variety can be realized as a scheme-theoretic intersection of quadrics! See e.g.
D. Mumford, "Varieties defined by quadratic equations", Questions on algebraic varieties, C.I.M.E. Varenna, 1969 , Cremonese (1970) pp. 29–100,
for quantitative refinements of this question.
-
Ah! Quite interesting. I knew hypersurfaces always arise this way, but this is considerably more than what I would have wanted :P – Mariano Suárez-Alvarez Jan 12 2010 at 1:56
As Pete already indicated, Mumford's theorem says that for any projective variety $X\subset \mathbb P^n$, its Veronese embedding $v_d(X)\subset \mathbb P^N$ is cut out by quadrics, for $d\gg0$. So a reasonable question is for the variety with a fixed projective embedding (such as Grassmannians in the Plucker embedding and not in some other random embedding).
For this latter more meaningful question, many "combinatorial" rational varieties, such as Grassmannians, Schubert varieties (as you pointed out), flag varieties, determinantal varieties, etc., are cut out by quadrics.
For the "non-combinatorial", non-rational varieties, the most classical result is Petri's theorem: a smooth non-hyperelliptic curve of genus $g\ge 4$ in its canonical embedding is cut out by quadrics, with the exceptions of trigonal curves and plane quintics.
There is a vast generatization of this property: $X\subset \mathbb P^n$ satisfies property $N_p$ if the first syzygy of its homogeneous ideal $I_X$ is a direct sum of $\mathcal O(2)$, the second syzygy is a direct sum of $\mathcal O(3)$, etc., the $p$-th syzygy is a direct sum of $\mathcal O(p+1)$. In this language, $X$ is cut out by quadrics is equivalent to the property $N_1$.
Green's 1984 conjecture is that a smooth nonhyperelliptic curve satisfies $N_p$ for $p=\mathrm{Cliff}(X)-1$, where $\mathrm{Cliff}(X)$ is the Clifford index of $X$. This has been proved for generic curves of any genus by Voisin (in characteristic 0; it is false in positive characteristic).
Another notable case: the ideal of $2\times 2$ minors of a $p\times q$ matrix has property $N_{p+q-3}$.
-
I meant canonical curves; I made the correction, thanks. – VA Jan 12 2010 at 2:31
By the way, the canonical image of hyperelliptic curves, being rational normal curves, are also cut out by quadrics. – jvp Jan 12 2010 at 2:33
As Pete L. Clark says, if you can Veronese your line bundle then the answer is "all of them".
So a more interesting question may be: for which varieties M does every ample line bundle give an embedding defined by quadrics?
The best sufficient condition I know is that MxM have a Frobenius splitting w.r.t. which the diagonal is compatibly split. See Brion and Kumar's book about Frobenius splitting, and Sam Payne's article, which discusses the toric case. The first shows that it's true for flag manifolds, and the second that it's not true for all Schubert varieties.
-
That is an embedding by a COMPLETE linear system, right? And isn't a singular toric variety compatibly split (but the ideal may not be generated by quadrics?) – VA Jan 12 2010 at 19:59
I had never seen Veronese used as a verb. Cute :) – Mariano Suárez-Alvarez Jan 12 2010 at 20:05
"And isn't a singular toric variety compatibly split?" No. A singular TV M is indeed split, but the issue is not splitting M itself, but splitting MxM with the diagonal M being split. If I understood right, Sam shows you can't do this for M = F_1, which occurs as a Schubert variety. – Allen Knutson Jan 12 2010 at 20:23
BTW there's another variant of this question: let M > N be a pair of varieties, and ask that M be cut out by quadratics, and N further cut from M by linear conditions. Again, this occurs for any (M,N) once one Veroneses enough, and occurs for (M = flag manifold, N = Schubert variety) for any line bundle by Frobenius-splitting results of Ramanathan. – Allen Knutson Jan 12 2010 at 23:52
It seems impossible to give a better answer than Pete L. Clark's but those passing by might appreciate a sketchy proof.
If $X \subset \mathbb P^n$ is a projective variety then it is the intersection of finitely many hypersurfaces of degree at most $k$. If we consider the natural morphism from $\mathbb P^n$ to $\mathbb P H^0(\mathbb P^n,\mathcal O_{\mathbb P^n}(k))^{\ast}$ then the image of $X$ will be the intersection of the image of $\mathbb P^n$ with a finite number of hyperplanes.
Being the image of $\mathbb P^n$ itself an intersection of quadrics, it follows that any projective variety can be expressed as the intersection of quadrics and hyperplanes.
A more condensed version of the argument above can also be found here.
-
Here is an easy hands-on way to define any variety as being cut out by quadrics. Write your variety as the zero locus of a bunch of polynomials. Mark each monomial that occurs in any one of these polynomials and which has degree higher than two. We are going to add new variables and new quadratic equations to turn that monomial into a degree two monomial. For example, if the monomial $x y^2$ occurs in one of your polynomials, add a new variable $z$ and add the equation $z = xy$. Then rewrite all occurrences of $xy^2$ in any of your polynomials as $zy$. To take another example, if the monomial $x^5$ occurs, then add two new variables, say $u$ and $v$, together with the two new quadratic equations $u = x^2$, $v = u^2$, and then replace all occurrences of $x^5$ by $xv$. Just keep on tacking on new variables and new equations until you've turned all of your original monomials into either quadratic or linear monomials. Now you've got your variety sitting in a much bigger space due to all the new variables, but its defining polynomials are now all quadratic plus linear.
(If you don't like the linear terms arising, play the homogenization game.)
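For concreteness, a made-up illustration of the recipe: the single equation $x^5 + xy^2 - 1 = 0$ becomes the system $$u = x^2,\qquad v = u^2,\qquad w = xy,\qquad xv + wy - 1 = 0,$$ in which every equation has degree at most two (here $x^5 = x\cdot v$ and $xy^2 = w\cdot y$), and the original hypersurface is recovered by projecting away the auxiliary variables $u$, $v$, $w$.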
I learned this trick in some two-page paper by a Russian showing every compact manifold is diffeomorphic to one defined by quadratic equations. That paper in turn is related to a theorem attributed to Milnor (or Thurston) asserting that every manifold can be realized by planar linkages. Sorry I don't have the references.
-
Skolem used this trick to show that to decide the solvability of arbitrary polynomial equations in integers, it would suffice to do it for those of total degree 4: see page 3 of math.mit.edu/~poonen/papers/h10_notices.pdf . (Skolem's work was in the 1920s, which presumably predates the two-page paper you are referring to. And it wouldn't surprise me if the trick were known even before Skolem!) – Bjorn Poonen Feb 8 2010 at 7:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 39, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9404060244560242, "perplexity_flag": "middle"} |
http://medlibrary.org/medwiki/Cross_correlation | # Cross correlation
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It also has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology.
For continuous functions, f and g, the cross-correlation is defined as:
$(f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(t+\tau)\,d\tau,$
where f * denotes the complex conjugate of f.
Similarly, for discrete functions, the cross-correlation is defined as:
$(f \star g)[n]\ \stackrel{\mathrm{def}}{=} \sum_{m=-\infty}^{\infty} f^*[m]\ g[n+m].$
[Figure: visual comparison of convolution, cross-correlation and autocorrelation.]
The cross-correlation is similar in nature to the convolution of two functions.
In an autocorrelation, which is the cross-correlation of a signal with itself, there will always be a peak at a lag of zero unless the signal is a trivial zero signal.
In probability theory and statistics, correlation is always used to include a standardising factor in such a way that correlations have values between −1 and +1, and the term cross-correlation is used for referring to the correlation corr(X, Y) between two random variables X and Y, while the "correlation" of a random vector X is considered to be the correlation matrix (matrix of correlations) between the scalar elements of X.
If $X$ and $Y$ are two independent random variables with probability density functions f and g, respectively, then the probability density of the difference $Y - X$ is formally given by the cross-correlation (in the signal-processing sense) $f \star g$; however this terminology is not used in probability and statistics. In contrast, the convolution $f * g$ (equivalent to the cross-correlation of f(t) and g(−t) ) gives the probability density function of the sum $X + Y$.
## Explanation
As an example, consider two real valued functions $f$ and $g$ differing only by an unknown shift along the x-axis. One can use the cross-correlation to find how much $g$ must be shifted along the x-axis to make it identical to $f$. The formula essentially slides the $g$ function along the x-axis, calculating the integral of their product at each position. When the functions match, the value of $(f\star g)$ is maximized. This is because when peaks (positive areas) are aligned, they make a large contribution to the integral. Similarly, when troughs (negative areas) align, they also make a positive contribution to the integral because the product of two negative numbers is positive.
With complex-valued functions $f$ and $g$, taking the conjugate of $f$ ensures that aligned peaks (or aligned troughs) with imaginary components will contribute positively to the integral.
In econometrics, lagged cross-correlation is sometimes referred to as cross-autocorrelation[1]
## Properties
• The cross-correlation of functions f(t) and g(t) is equivalent to the convolution of f *(−t) and g(t). I.e.:
$f\star g = f^*(-t)*g.$
• If f is Hermitian, then $f\star g = f*g.$
• $(f\star g)\star(f\star g)=(f\star f)\star (g\star g)$
• Analogous to the convolution theorem, the cross-correlation satisfies:
$\mathcal{F}\{f\star g\}=(\mathcal{F}\{f\})^* \cdot \mathcal{F}\{g\},$
where $\mathcal{F}$ denotes the Fourier transform, and an asterisk again indicates the complex conjugate. Coupled with fast Fourier transform algorithms, this property is often exploited for the efficient numerical computation of cross-correlations (see circular cross-correlation); a small numerical check of this identity is sketched just after this list.
• The cross-correlation is related to the spectral density. (see Wiener–Khinchin theorem)
• The cross correlation of a convolution of f and h with a function g is the convolution of the cross-correlation of f and g with the kernel h:
$(f * h) \star g = h^{*}(-t)*(f \star g)$
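The Fourier-transform property in the list above can be checked numerically for the discrete, circular case; the sketch below (synthetic data, added for illustration) compares a direct computation of the circular cross-correlation with the FFT route.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
f = rng.normal(size=N) + 1j * rng.normal(size=N)
g = rng.normal(size=N) + 1j * rng.normal(size=N)

# direct circular cross-correlation: c[n] = sum_m conj(f[m]) * g[(m + n) mod N]
direct = np.array([sum(np.conj(f[m]) * g[(m + n) % N] for m in range(N))
                   for n in range(N)])

# via the DFT: transform of (f cross-correlated with g) equals conj(F{f}) * F{g}
via_fft = np.fft.ifft(np.conj(np.fft.fft(f)) * np.fft.fft(g))

print(np.allclose(direct, via_fft))  # True
```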
## Time series analysis
In time series analysis, as applied in statistics and signal processing, the cross correlation between two time series describes the normalized cross covariance function.
Let $(X_t,Y_t)$ represent a pair of stochastic processes that are jointly wide sense stationary. Then the cross correlation is given by
$\rho_{xy}(\tau) = \dfrac{\operatorname{E}[(X_t - \mu_x)(Y_{t+\tau} - \mu_y)]}{\sigma_x \sigma_y},$
where $\mu_x$ and $\mu_y$ are the means, and $\sigma_x$ and $\sigma_y$ the standard deviations, of $X_t$ and $Y_t$ respectively.
The cross correlation of a pair of jointly wide sense stationary stochastic process can be estimated by averaging the product of samples measured from one process and samples measured from the other (and its time shifts). The samples included in the average can be an arbitrary subset of all the samples in the signal (e.g., samples within a finite time window or a sub-sampling of one of the signals). For a large number of samples, the average converges to the true cross-correlation.
## Time delay analysis
Cross-correlations are useful for determining the time delay between two signals, e.g. for determining time delays for the propagation of acoustic signals across a microphone array.[2][3] After calculating the cross-correlation between the two signals, the maximum (or minimum if the signals are negatively correlated) of the cross-correlation function indicates the point in time where the signals are best aligned, i.e. the time delay between the two signals is determined by the argument of the maximum, or arg max of the cross-correlation, as in
$\tau_{delay}=\underset{t}{\operatorname{arg\,max}}((f \star g)(t))$
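A small illustrative sketch of this recipe (synthetic data, added for illustration): build a delayed, noisy copy of a signal and read the delay off the arg max of the cross-correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_delay = 500, 37
f = rng.normal(size=n)
g = np.roll(f, true_delay) + 0.1 * rng.normal(size=n)  # (circularly) delayed copy of f plus noise

# np.correlate(g, f, 'full')[i] is the cross-correlation (f cross g) at lag i - (n - 1)
corr = np.correlate(g, f, mode="full")
lags = np.arange(-(n - 1), n)
print(lags[np.argmax(corr)])  # 37, the true delay
```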
## Normalized cross-correlation
For image-processing applications in which the brightness of the image and template can vary due to lighting and exposure conditions, the images can be first normalized. This is typically done at every step by subtracting the mean and dividing by the standard deviation. That is, the cross-correlation of a template, $t(x,y)$ with a subimage $f(x,y)$ is
$\frac{1}{n} \sum_{x,y}\frac{(f(x,y) - \overline{f})(t(x,y) - \overline{t})}{\sigma_f \sigma_t}$.
where $n$ is the number of pixels in $t(x,y)$ and $f(x,y)$, $\overline{f}$ is the average of f and $\sigma_f$ is standard deviation of f. In functional analysis terms, this can be thought of as the dot product of two normalized vectors. That is, if
$F(x,y) = f(x,y) - \overline{f}$
and
$T(x,y) = t(x,y) - \overline{t}$
then the above sum is equal to
$\left\langle\frac{F}{\|F\|},\frac{T}{\|T\|}\right\rangle$
where $\langle\cdot,\cdot\rangle$ is the inner product and $\|\cdot\|$ is the L² norm. Thus, if f and t are real matrices, their normalized cross-correlation equals the cosine of the angle between the unit vectors F and T, being thus 1 if and only if F equals T multiplied by a positive scalar.
Normalized correlation is one of the methods used for template matching, a process used for finding incidences of a pattern or object within an image. It is also the 2-dimensional version of Pearson product-moment correlation coefficient.
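A minimal sketch of 1-D template matching with this normalized cross-correlation (the signal and template are made-up examples; the best window scores exactly 1, the cosine of a zero angle).

```python
import numpy as np

def ncc(f, t):
    # the formula above: subtract the means, divide by the (population) standard deviations
    f, t = np.asarray(f, dtype=float), np.asarray(t, dtype=float)
    return np.mean((f - f.mean()) * (t - t.mean())) / (f.std() * t.std())

signal = np.array([0, 1, 3, 7, 5, 2, 0, 1, 2, 1, 0, 0], dtype=float)
template = np.array([3, 7, 5], dtype=float)

scores = [ncc(signal[i:i + len(template)], template)
          for i in range(len(signal) - len(template) + 1)]
print(int(np.argmax(scores)))  # 2: the window signal[2:5] matches the template exactly
```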
## References
1. Campbell, Lo, and MacKinlay 1996: The Econometrics of Financial Markets, NJ: Princeton University Press.
2. Rhudy, Matthew; Brian Bucci, Jeffrey Vipperman, Jeffrey Allanach, and Bruce Abraham (November 2009). "Microphone Array Analysis Methods Using Cross-Correlations". Proceedings of 2009 ASME International Mechanical Engineering Congress, Lake Buena Vista, FL.
3. Rhudy, Matthew (November 2009). "Real Time Implementation of a Military Impulse Classifier". University of Pittsburgh, Master's Thesis.
Content in this section is authored by an open community of volunteers and is not produced by, reviewed by, or in any way affiliated with MedLibrary.org. Licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License, using material from the Wikipedia article on "Cross correlation", available in its original form here:
http://en.wikipedia.org/w/index.php?title=Cross_correlation
http://mathhelpforum.com/advanced-statistics/85404-conditional-probability.html | # Thread:
1. ## Conditional probability
Suppose you have an urn with 2 yellow balls, 3 red balls, 6 white balls, and 12 blue balls. You randomly draw 4 balls from the urn without replacement. Given that at least one of the balls is yellow, what is the probability that all 4 balls will be different colors?
I know that you have to use Bayes' rule, with P(B|A) = 1 because if you have 4 colors, one of them must be yellow. But I'm not sure about the different probabilities.
I think P(A) might be 2 choose 1 + 3 choose 1 + 6 choose 1 + 12 choose 1 all divided by 23 choose 4, but I'm not sure. And I don't know how to get P(>= 1 yellow). Anyone have any idea?
Thanks!
2. Hello, horan!
Suppose you have an urn with 2 yellow balls, 3 red balls, 6 white balls, and 12 blue balls.
You randomly draw 4 balls from the urn without replacement.
Given that at least one of the balls is yellow,
what is the probability that all 4 balls will be different colors?
Let $4DC$ = "4 different colors" . . .and $AL1Y$ = "at least one yellow"
According to Bayes' Theorm, we want . $P(4DC\,|\,AL1Y) \;=\;\frac{P(4DC \wedge AL1Y)}{P(AL1Y)}$ .[1]
There are: . ${23\choose4} \:=\:8855$ possible drawings.
Numerator
As you pointed out: $(4DC \wedge AL1Y) \:=\:4DC$
The number of ways to get 4 colors is: . ${2\choose1}{3\choose1}{6\choose1}{12\choose1} \:=\:432$ ways.
Hence: . $P(4DC \wedge AL1Y) \:=\:\frac{432}{8855}$
Denominator
The opposite of "at least 1 yellow" is "no yellow."
There are: . ${21\choose4} \:=\:5985$ ways to get no yellows.
So there are: . $8855-5985 \:=\:2870$ ways to get at least one yellow.
Hence: . $P(AL1Y) \:=\:\frac{2870}{8855}$
Substitute into [1]: . $P(4DC\,|\,AL1Y) \;=\;\frac{\:\frac{432}{8855}\:}{\:\frac{2870}{885 5}\:} \;=\;\frac{432}{2870} \;=\;\frac{216}{1435}$ | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9317854046821594, "perplexity_flag": "head"} |
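(Not part of the original thread: a quick exact check of these numbers in Python.)

```python
from fractions import Fraction
from math import comb

total        = comb(23, 4)        # all ways to draw 4 of the 23 balls: 8855
no_yellow    = comb(21, 4)        # draws avoiding both yellow balls:   5985
at_least_one = total - no_yellow  # 2870
four_colors  = 2 * 3 * 6 * 12     # one ball of each colour: 432

print(Fraction(four_colors, at_least_one))  # 216/1435
```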
http://mathhelpforum.com/differential-equations/212975-bernoulli-equation.html | # Thread:
1. ## Bernoulli equation
I'm trying to figure out how to do this problem and we've just covered bernoulli equations in class so I know I have to use that. This is the problem:
Solve the equation
$y' = (X cos(t) + T)y - y^3$
where X and T are constants.
So far I have this:
$dy/dt - (X cos(t) + T)y = -y^3$
then using integrating factor:
$e^{\int\left(X\cos(t) + T\right)dt} = e^{X\sin(t) + Tt + C} = e^{Tt}\, e^{X\sin(t)}\, e^{C}$
But where do I go from here?
2. ## Re: Bernoulli equation
You can't go straight to the integrating factor, because you haven't done the substitution required to turn the DE into something first order linear.
Let $\displaystyle \begin{align*} v = y^{1 - 3} = y^{-2} \end{align*}$. Then we have $\displaystyle \begin{align*} y = v^{-\frac{1}{2}} \implies \frac{dy}{dt} = -\frac{1}{2}v^{-\frac{3}{2}}\, \frac{dv}{dt} \end{align*}$. Substituting into the DE gives
$\displaystyle \begin{align*} \frac{dy}{dt} - \left( X \cos{(t)} + T \right) y &= -y^3 \\ -\frac{1}{2}v^{-\frac{3}{2}}\,\frac{dv}{dt} - \left( X \cos{(t)} + T \right) v^{-\frac{1}{2}} &= -v^{-\frac{3}{2}} \\ \frac{dv}{dt} + 2 \left( X \cos{(t)} + T \right) v &= 2 \end{align*}$
This is now first order linear, so now you can apply an integrating factor.
3. ## Re: Bernoulli equation
Alright, thanks a lot. One questions though: How did you get $-v^{-\frac{3}{2}}$ on the right side in the 2nd line?
4. ## Re: Bernoulli equation
We know $\displaystyle \begin{align*} y = v^{-\frac{1}{2}} \end{align*}$ so $\displaystyle \begin{align*} y^3 = \left( v^{-\frac{1}{2}} \right)^3 = v^{-\frac{3}{2}} \end{align*}$.
5. ## Re: Bernoulli equation
So for the integrating factor I get $e^{2Xsin(t) + Tt + C}$, is that correct? Do I then multiply the entire equation by this? I've kind of forgotten how to do the rest of the steps .
6. ## Re: Bernoulli equation
Not quite. Your integrating factor is $\displaystyle \begin{align*} e^{\int{2 \left[ X\cos{(t)} + T \right] dt}} = e^{2 \left[ X\sin{(t)} + Tt \right]} \end{align*}$. Note that any of the family of solutions to this integral will work, so for simplicity we disregard the +C. Anyway, yes we multiply both sides of the DE by this integrating factor to give
$\displaystyle \begin{align*} e^{2\left[ X\sin{(t)} + Tt \right]}\,\frac{dv}{dt} + 2e^{2\left[ X\sin{(t)} + Tt \right]} \left[X \cos{(t)} + T \right] v &= 2e^{2\left[ X\sin{(t)} + Tt \right]} \\ \frac{d}{dt} \left[ e^{2\left[ X\sin{(t)} + Tt \right] }\, v \right] &= 2e^{2\left[ X\sin{(t)} + Tt \right]} \\ e^{2\left[ X\sin{(t)} + Tt \right]}\, v &= \int{2e^{2\left[ X\sin{(t)} + Tt \right]}\,dt} \end{align*}$
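(An aside, not part of the original reply: both the substitution and the integrating-factor step can be sanity-checked symbolically, for instance with SymPy.)

```python
import sympy as sp

t, X, T = sp.symbols('t X T')
v = sp.Function('v')

# the substitution y = v^(-1/2) turns the Bernoulli equation into the linear one
y = v(t)**sp.Rational(-1, 2)
bernoulli = sp.diff(y, t) - (X*sp.cos(t) + T)*y + y**3
linear = sp.diff(v(t), t) + 2*(X*sp.cos(t) + T)*v(t) - 2
print(sp.simplify(-2*v(t)**sp.Rational(3, 2)*bernoulli - linear))  # 0

# the integrating factor collapses the left-hand side into an exact derivative
mu = sp.exp(2*(X*sp.sin(t) + T*t))
lhs = sp.diff(mu*v(t), t)
rhs = mu*(sp.diff(v(t), t) + 2*(X*sp.cos(t) + T)*v(t))
print(sp.simplify(lhs - rhs))  # 0
```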
Can you go from here? | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 12, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9057661890983582, "perplexity_flag": "middle"} |
http://math.stackexchange.com/questions/226121/rates-of-change-with-trigonometry | # rates of change with trigonometry
I really need some help with a question...
A hot air balloon leaves the ground at a point that is a horizontal distance of 190 metres from an observer and rises vertically upwards. The observer notes that the rate of increase of the angle of inclination between the horizontal and the observer's line of sight (the hypotenuse) is a constant 0.045 radians per minute.
What is the rate of increase in metres per minute of the distance between the balloon and the observer when the balloon is 180 metres above the ground?
To solve it I visualised a right angled triangle with angle 0.045radians, a base (adj) of 190 and opposite of 180m (position of balloon) so I'm guessing an element of trig is involved. I denoted the rate of increase in metres per minute of the distance ds/dt but I don't know if s is the hypotenuse and what to do to calculate ds/dt.
Can someone please break it down for me?
Thanks in advance
## Edit:
I did, but I hit a brick wall. If I call the side opposite to $\theta$ $x$, then $\tan\theta = x/170$ and $dx/d\theta = 170\sec^2\theta$ (which can be expressed in terms of $\cos$), but I don't know what to do next.
-
I think using the tangent is the most straightforward way. Write an equation using $\tan(\theta)$ and then take the derivative of both sides with respect to time. Then plug in what you know and solve for the rate of change of the opposite side. – Todd Wilcox Oct 31 '12 at 16:12
I thought that and got $dx/dt = 170\sec^2\theta \; d\theta/dt$ (don't know if that's right), but then you don't make use of the 200 metres given in the question – Ricky Rozay Oct 31 '12 at 16:20
## 2 Answers
Let $y$ represent the height above ground at any moment, $x$ represent the horizontal distance from the observer to the launch point, $h$ represent the distance from the observer to the balloon, and $\theta$ represent the angle above the horizontal at which the observer sees the balloon at any moment. Then,$$\sec(\theta)=\dfrac{h}{x}$$Since $x$ is fixed we can re-write that as,$$\sec(\theta)=\dfrac{h}{190}$$ Taking the time derivative of both sides we get,$$\sec(\theta)\tan(\theta)\frac{d\theta}{dt}=\frac{1}{190}\frac{dh}{dt}$$ Now find $\theta$ using the instantaneous height and fixed distance given, plug in $\theta$ and $\frac{d\theta}{dt}$ and solve for $\frac{dh}{dt}$.
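(Added for illustration, not part of the original answer: plugging the question's numbers, 190 m, 180 m and 0.045 rad/min, into this relation gives roughly 11.16 metres per minute.)

```python
import math

x, y, dtheta = 190.0, 180.0, 0.045   # fixed horizontal distance, instantaneous height, dtheta/dt
theta = math.atan2(y, x)             # angle above the horizontal at this moment
dh = x * (1 / math.cos(theta)) * math.tan(theta) * dtheta   # dh/dt = x * sec(theta) * tan(theta) * dtheta/dt
print(round(dh, 2))                  # roughly 11.16 metres per minute
```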
-
perhaps express the elevation of balloon as a function of $\cos \theta$, where $\theta$ the angle to the observer. Taking derivatives in time will get you to speed of elevation on one side and $\sin \theta \cdot \frac{d \theta}{dt}$ on the other side. Note that you have $d \theta/dt$.
- | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 16, "mathjax_display_tex": 3, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9360210299491882, "perplexity_flag": "head"} |
http://mathoverflow.net/questions/15367?sort=newest | ## How many ways can we characterize gamma function?
First let's state a well-known characterization of the gamma function.
If $f$ is a positive function on the positive real numbers such that: (1) $f(x+1)=xf(x)$; (2) $f(1)=1$; (3) $\log f$ is convex, then $f$ is the gamma function.
Now I'm wondering: in how many ways can we characterize the gamma function like the above? Especially if we consider it as a function on the complex plane with poles.
ps: I'm not asking for different ways to express the gamma function explicitly, but for abstract characterizations of it.
-
You seem to be gathering a list. Such questions are traditionally community wiki, so people can vote for favorites and against nonfavorites without penalty. – Charles Siegel Feb 15 2010 at 21:31
I think this question would be more liekly to attract answers if you gave some reason for wanting such a list. The gamma function is interesting, but abstract characterizations of it are mostly interesting if you're using for something. If you are, we'd be curious to know what. – Ben Webster♦ Feb 16 2010 at 17:57
I'd like to point out this this is a terrible question, because it admits the awful answer: "many". More seriously, see Ben's comment. – Scott Morrison♦ Feb 18 2010 at 7:17
## 2 Answers
Have a look here: http://dlmf.nist.gov/5/
-
Maybe I can give you some help. The gamma function is also called the second Euler integral.
Here are some characterizations.
(a) $\Gamma(s)=\displaystyle\int_{0}^{+\infty} t^{s-1}e^{-t}\,dt$, for $s>0$;
(b) $\Gamma(s)=\displaystyle\lim_{n\rightarrow +\infty}\frac{n!\,n^s}{s(s+1)\cdots(s+n)}$;
(c) $B(p,q)=\dfrac{\Gamma(p)\Gamma(q)}{\Gamma(p+q)}$, for $p>0$, $q>0$;
(d) $\Gamma(2s)=\dfrac{2^{2s-1}\Gamma(s)\Gamma(s+1/2)}{\sqrt{\pi}}$, for $s>0$ (Legendre's duplication formula);
(e) $\Gamma(s)\Gamma(1-s)=\dfrac{\pi}{\sin(s\pi)}$, for $0<s<1$.
May it help!
-
actually ,I don't think there is a particularly difference between definitions and characterizations. That is my opinion! – Gu Yejun Mar 7 2010 at 16:38
I disagree. A characterization would be a set of properties that turned out to be uniquely satisfied by the gamma function. It's like the difference between defining the reals as Dedekind cuts and characterizing them as the unique complete ordered field. There is a characterization of the gamma function as the unique function that satisfies f(n)=(n-1)f(n-1) and has some other smoothness property. Unfortunately, I can't remember what that other property is. – gowers Mar 7 2010 at 20:57
Maybe it is true.But the definition is also unique. – Gu Yejun Mar 8 2010 at 13:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 6, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.920028030872345, "perplexity_flag": "middle"} |
http://mathhelpforum.com/algebra/168497-quadratic-problem.html | # Thread:
1. ## Quadratic problem
Thanks, that was pretty simple
Find two quadratic functions f and g such that $f (1) = 0, g(1) = 0$ and $f (0) = 10, g(0) = 10$ and both have a maximum value of 18.
So if $f(x) = ax^2 + bx + c$ then $c= 10$ for both f and g.
I also got $0=a+b+10$ for both f and g
what do I do from here?
2. Originally Posted by jgv115
Thanks, that was pretty simple
Find two quadratic functions f and g such that $f (1) = 0, g(1) = 0$ and $f (0) = 10, g(0) = 10$ and both have a maximum value of 18.
So if $f(x) = ax^2 + bx + c$ then $c= 10$ for both f and g.
I also got $0=a+b+10$ for both f and g
what do I do from here?
the question basically is that how many quadratic functions $f(x)=ax^2+bx+c$ are there satisfying
$f(1)=0, f(0)=10$ and the maximum value attained by f is 18.
since 1 is a root of $f$, we can write
$f(x)=k(x-1)(x-p)$; p being the other root.
given that $f(0)=10$, we have $kp=10$.
Also, the maximum value of $f$ is 18, and we know that the maximum occurs at $x=(p+1)/2$, since $p$ and $1$ are the roots of $f$. So we have
$f((p+1)/2)=18=(-k/4)((1-p)^2)$, which simplifies to,
$k((1-p)^2)+72=0$. now multiply both sides by k to get,
$(k-kp)^2+72k=0$. we already have the value of $kp$ which is 10, so we have,
$(k-10)^2+72k=0$, which gives two values of $k$: $k=-2$ and $k=-50$. Both are acceptable, as $k$ has to be negative (if $k$ is not negative then the maximum of $f$ does not exist).
The corresponding values of $p$ are $-5$ and $-0.2$.
So we have two such quadratic functions, which can be found by putting the values of $k$ and $p$ into $f(x)=k(x-1)(x-p)$.
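(Not in the original thread: a quick SymPy check that both candidates satisfy $f(1)=0$, $f(0)=10$ and have maximum value 18.)

```python
import sympy as sp

x = sp.symbols('x')
for k, p in [(-2, -5), (-50, sp.Rational(-1, 5))]:
    f = k*(x - 1)*(x - p)
    vertex = (1 + p)/sp.S(2)   # the maximum sits midway between the roots 1 and p
    print(sp.expand(f), f.subs(x, 1), f.subs(x, 0), f.subs(x, vertex))
    # prints the expanded quadratic, then 0, 10 and 18
```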
3. Originally Posted by jgv115
Thanks, that was pretty simple
Find two quadratic functions f and g such that $f (1) = 0, g(1) = 0$ and $f (0) = 10, g(0) = 10$ and both have a maximum value of 18.
So if $f(x) = ax^2 + bx + c$ then $c= 10$ for both f and g.
I also got $0=a+b+10$ for both f and g
what do I do from here?
To proceed from here you have to exploit the information that the maximum of $f$ is 18.
You must already be aware that the maximum (or minimum) occurs at $x=(r_1+r_2)/2$, where $r_1$ and $r_2$ are the roots of the quadratic equation.
You must also know that the sum of the roots can be expressed in terms of the coefficients of the powers of $x$, viz., $r_1+r_2=-b/a$.
So you will get that the maximum of $f$ occurs at $x=-b/(2a)$.
so use $f(-b/2a)=18$. from this you will get one more relation in a and b. knowing that $a+b+10=0$ your problem can be solved. | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 38, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "github"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "english_confidence": 0.9729998111724854, "perplexity_flag": "head"} |
http://unapologetic.wordpress.com/2008/01/30/darboux-integration/?like=1&source=post_flair&_wpnonce=420e8d64d7 | # The Unapologetic Mathematician
## Darboux Integration
Okay, defining the integral as the limit of a net of Riemann sums is all well and good, but it’s a huge net, and it seems impossible to calculate with. We need a better way of getting a handle on these things. What we’ll use is a little trick for evaluating limits of nets that I haven’t mentioned yet: “cofinal sets”.
Given a directed set $(D,\preceq)$, a directed subset $S$ is cofinal if for every $d\in D$ there is some $s\in S$ with $s\succeq d$. Now watch what happens when we try to show that the limit of a net $x_d$ is a point $x$. We need to find for every neighborhood $U$ of $x$ an index $d_0$ so that for every $d\succeq d_0$ we have $x_d\in U$. But if $d_0$ is such an index, then there is some $s_0\in S$ above it, and every $s\in S$ above that is also above $d_0$, and so $x_s\in U$. That is, if the limit over $D$ exists, then the limit over $S$ exists and has the same value.
Let’s give a cofinal set of tagged partitions by giving a rule for picking the tags that go with any partition. Then our net consists just of partitions of the interval $\left[a,b\right]$, and the tags come for free. If the function $f$ is Riemann-integrable, then the limit over this cofinal set will be the integral. Here’s our rule: in the closed subinterval $\left[x_{i-1},x_i\right]$ pick a point $t_i$ so that $\lim\limits_{x\rightarrow t_i}f(x)$ is the supremum of the values of $f$ in that subinterval. If the function is continuous it will attain a maximum at our tag, and if not it’ll get close or shoot off to infinity (if there is no supremum).
Why is this cofinal? Let’s imagine a tagged partition $x=((x_0,...,x_n),(t_1,...,t_n))$ where $t_i$ is not chosen according to this rule. Then we can refine the partition by splitting up the $i$th strip in such a way that $t_i$ is the maximum in one of the new strips, and choosing all the new tags according to the rule. Then we’ve found a good partition above the one we started with. Similarly, we can build another cofinal set by always choosing the tags where $f$ approaches an infimum.
When we consider a partition $x$ in the first cofinal set we can set up something closely related to the Riemann sums: the “upper Darboux sums”
$\displaystyle U_x(f)=\sum\limits_{i=1}^n M_i(x_i-x_{i-1})$
where $M_i$ is the supremum of $f(x)$ on the interval $\left[x_{i-1},x_i\right]$, or infinity if the value of $f$ is unbounded above here. Similarly, we can define the “lower Darboux sum”
$\displaystyle L_x(f)=\sum\limits_{i=1}^n m_i(x_i-x_{i-1})$
where now $m_i$ is the infimum (or negative infinity). If the function is Riemann-integrable, then the limits over these cofinal sets both exist and are both equal to the Riemann integral. So we define a function to be “Darboux-integrable” if the limits of the upper and lower Darboux sums both exist and have the same value. Then the Darboux integral is defined to be this common value. Notice that if the function ever shoots off to positive or negative infinity we’ll get an infinite value for one of the terms, and we can never converge, so such functions are not Darboux-integrable.
We should notice here that given any partition $x$, the upper Darboux sum must be larger than any Riemann sum with that same partition, since no matter how we choose the tag $t_i$ we’ll find that $f(t_i)\leq M_i$ by definition. Similarly, the lower Darboux sum must be smaller than any Riemann sum on the same partition. Now let’s say that the upper and lower Darboux sums both converge to the same value $s$. Then given any neighborhood of $s$ we can find a partition $x_U$ so that every upper Darboux sum over a refinement of $x_U$ is in the neighborhood, and a similar partition $x_L$ for the lower Darboux sums. Choosing a common refinement $x_R$ of both (which we can do because partitions form a directed set) both its upper and lower Darboux sums (and those of any of its refinements) will be in our neighborhood. Then we can choose any tags in $x_R$ we want, and the Riemann sum will again be in the neighborhood. Thus a Darboux-integrable function is also Riemann-integrable.
So this new notion of Darboux-integrability is really the same one as Riemann-integrability, but it involves taking two limits over a much less complicated directed set. For now, we’ll just call a function which satisfies either of these two equivalent conditions “integrable” and be done with it, using whichever construction of the integral is most appropriate to our needs at the time.
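As a concrete illustration (a small numerical sketch, with $f(x)=x^2$ on $\left[0,1\right]$ chosen just as an example), the upper and lower Darboux sums over uniform partitions squeeze toward the integral $\frac{1}{3}$:

```python
def darboux_sums(f, a, b, n):
    # uniform partition of [a, b] into n strips; since the example f is increasing,
    # the supremum/infimum on each strip sit at its right/left endpoint
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    upper = sum((xs[i] - xs[i - 1]) * f(xs[i]) for i in range(1, n + 1))
    lower = sum((xs[i] - xs[i - 1]) * f(xs[i - 1]) for i in range(1, n + 1))
    return lower, upper

for n in (10, 100, 1000):
    print(n, darboux_sums(lambda x: x * x, 0.0, 1.0, n))
# both sums converge to 1/3 as the partitions refine
```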
Posted by John Armstrong | Analysis, Calculus, Orders
## Comments
1. Darboux is turning in his grave! No pictures for his integral while 2 for Riemann’s!
Comment by | January 30, 2008 | Reply
http://physics.stackexchange.com/questions/17433/renormalization-group-different-fixed-points/17456 | # Renormalization Group: Different fixed points
Extending the Gaussian model by introducing a second field and coupling it to the other field, I consider the Hamiltonian
$$\beta H = \frac{1}{(2\pi)^d} \int_0^\Lambda d^d q \frac{t + Kq^2}{2} |m(q)|^2 + \frac{L}{2} q^4 |\phi|^2 + v q^2 m(q) \phi^*(q)$$
Doing a Renormalization Group treatment, I integrate out the high wave-numbers above $\Lambda/b$ and obtain the following recursion relations for the parameters: $$\begin{aligned}t' &= b^{-d} z^2 t & K' &= b^{-d-2}z^2 K & L' &= b^{-d-4}y^2 L \\ v' &= b^{-d-2}yz v & h' &= zh \end{aligned}$$ where $z$ is the scaling of field $m$ and $y$ is the scaling of field $\phi$.
One way to obtain the scaling factors $z$ and $y$ is to demand that $K' = K$ and $L' = L$, i.e., we demand that fluctuations are scale invariant.
But apparently, there is another fixed point if we demand that $t' = t$ and $L' = L$ which gives rise to different scaling behavior, and I wonder
a) why I can apparently choose which parameters should be fixed regardless of their value ($K$ and $L$ in one case, $t$ and $L$ in the other case)
b) what the physical meaning of these two different fixed points is...
(My exposure to field theory/RG is from a statistical physics approach, so if answers could be phrased in that language as opposed to QFT that'd be much appreciated)
-
## 1 Answer
The short answer is that if you have coefficients for all the terms, you have two independent exact fake scale invariances for the fields $\phi$ and $m$, which just rescale the fields and the coefficients appropriately to keep the Hamiltonian exactly the same. This is not a real invariance of the action, since it changes the parameters of the action; it is best thought of as choosing the dimensional scale of the two fields. You usually do this by fixing the terms "L" and "K", but you get a different scaling if you fix the "t" and "L" terms, which is physical in different limits.
I should point out that this model is exactly solvable, there are no real interactions in this model, so the Renormalization Group analysis is just dimensional analysis in disguise. There are two rotated q-modes mixing $m(q)$ and $\phi(q)$ which are completely free.
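Since every recursion relation above is a pure power of $b$, one quick way to see where the two choices of fixed point lead (a small check added here for illustration, not part of the original answer; writing $z=b^{\zeta}$ and $y=b^{\upsilon}$ is just bookkeeping) is to treat the exponents as linear equations and solve them with sympy:

```python
from sympy import symbols, solve, simplify

d, zeta, upsilon = symbols('d zeta upsilon')   # z = b**zeta, y = b**upsilon

# Power of b appearing in each recursion relation  X' = b**expo[X] * X
expo = {
    't': -d + 2*zeta,
    'K': -d - 2 + 2*zeta,
    'L': -d - 4 + 2*upsilon,
    'v': -d - 2 + zeta + upsilon,
    'h': zeta,
}

def scaling(fixed):
    """Demand the couplings named in `fixed` stay put and report the power
    of b with which every coupling then rescales (0 = fixed, >0 = relevant)."""
    sol = solve([expo[name] for name in fixed], [zeta, upsilon])
    return {name: simplify(e.subs(sol)) for name, e in expo.items()}

print("fix K, L:", scaling(['K', 'L']))   # t: 2 (relevant), v: 1, h: d/2 + 1
print("fix t, L:", scaling(['t', 'L']))   # K: -2 (irrelevant), v: 0 (marginal), h: d/2
```

With $K$ and $L$ held fixed, $t$ rescales as $b^{2}$ and $v$ as $b^{1}$, so both are relevant; holding $t$ and $L$ fixed instead makes $K$ irrelevant ($b^{-2}$) and $v$ marginal, which is one concrete way of seeing the "different scaling behavior" the question refers to.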
-
http://math.stackexchange.com/questions/261622/a-homomorphism-from-mathbbq-rightarrow-mathbbq | # A homomorphism from $(\mathbb{Q},+)\rightarrow (\mathbb{Q},+)$
I claim that any group homomorphism will be of the form $f(x)=qx$ for some $q\in\mathbb{Q}$. Now, as $f$ is non-zero, $q\neq 0$, so clearly the kernel of $f$ is $\{0\}$ only, hence $f$ is injective. Next, let $f(1)=p\neq 0$; then for any $y\in\mathbb{Q}$ we have $f(y/p)=y$, so $f$ is surjective too, hence bijective. Am I right?
-
You'd have to prove that $f(y/p)=y$ from $f(1)=p$. Not hard, but not one step. – Thomas Andrews Dec 18 '12 at 19:04
Though I knew the proof but thank you very much for the response :) – Taxi Driver Dec 18 '12 at 19:16
Yes, it just wasn't clear from the post if you thought it was "obvious" that if $f(1)=p$ then $f(y/p)=y$ for all $y$, or if you just were skipping the proof because you were done with that part. – Thomas Andrews Dec 18 '12 at 19:20
I am extremely sorry for my writing and for skipping that part without mentioning it in the post; I must not do that in future posts. – Taxi Driver Dec 18 '12 at 19:23
## 1 Answer
You are right. To see that all group homomorphisms have the form you claim is not completely trivial, as $\mathbb Q$ is not even finitely generated. But note that for $n\in\mathbb Z$ you have $f(n\cdot x)=n\cdot f(x)$, hence for $q=\frac ab$ with $b\ne 0$ it follows indeed that $$f(q)=\frac1b\cdot f(b\cdot q)=\frac1b\cdot f(a)=\frac1b\cdot a\cdot f(1)=q\cdot f(1).$$ From this your observations about a,b,c follow just as you write.
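As a small sanity check of the computation (a sketch added here for illustration, not part of the answer; the value chosen for $f(1)$ is arbitrary), Python's `fractions` module can spot-check both additivity of $q\mapsto q\cdot f(1)$ and the step $b\cdot f(a/b)=f(a)=a\cdot f(1)$ on sample rationals:

```python
from fractions import Fraction as Q

p = Q(3, 5)          # stand-in for f(1); any nonzero rational would do

def f(x):            # the map q |-> q * f(1) from the answer
    return x * p

samples = [Q(2, 7), Q(-5, 3), Q(0), Q(11, 4), Q(1)]

# additivity, and the step  b * f(a/b) = f(a) = a * f(1)  used above
assert all(f(x + y) == f(x) + f(y) for x in samples for y in samples)
assert all(x.denominator * f(x) == x.numerator * f(Q(1)) for x in samples)
print("spot checks pass")
```

This is only a numerical spot check on finitely many rationals, of course, not a substitute for the argument above.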
-
Though I knew the proof but thank you very much for the response :) – Taxi Driver Dec 18 '12 at 19:15
http://math.stackexchange.com/questions/tagged/examples-counterexamples?sort=votes&pagesize=15 | # Tagged Questions
Examples and counterexamples are great ways to learn about the intricacies of definitions in mathematics. Counterexamples are especially useful in topology and analysis where most things are fairly intuitive, but every now and then one may run into borderline cases where the naive intuition may ...
23answers
7k views
### Examples of apparent patterns that eventually fail
Often, when I try to describe mathematics to the layman, I find myself struggling to convince them of the importance and consequence of 'proof'. I receive responses like: "surely if the Collatz ...
28answers
8k views
### A challenge by R. P. Feynman: give counter-intuitive theorems that can be translated into everyday language
The following is a quote from Surely you're joking, Mr. Feynman . The question is: are there any interesting theorems that you think would be a good example to tell Richard Feynman, as an answer to ...
18answers
8k views
### Nice examples of groups which are not obviously groups
I am searching for some groups, where it is not so obvious that they are groups. In the lectures script there are only examples like $\mathbb{Z}$ under addition and other things like that. I ...
23answers
5k views
### Can't argue with success? Looking for “bad math” that “gets away with it”
I'm looking for cases of invalid math operations producing (in spite of it all) correct results (aka "every math teacher's nightmare"). One example would be "cancelling" the 6s in $$\frac{64}{16}.$$ ...
3answers
1k views
### Is it possible for a function to be in $L^p$ for only one $p$?
I'm wondering if it's possible for a function to be an $L^p$ space for only one value of $p \in [1,\infty)$ (on either a bounded domain or an unbounded domain). One can use interpolation to show that ...
1answer
1k views
### How discontinuous can a derivative be?
There is a well-known result in elementary analysis due to Darboux which says if $f$ is a differentiable function then $f'$ satisfies the intermediate value property. To my knowledge, not many ...
11answers
1k views
### Examples of results failing in higher dimensions
A number of economists do not appreciate rigor in their usage of mathematics and I find it very discouraging. One of the examples of rigor-lacking approach are proofs done via graphs or pictures ...
3answers
1k views
### What is the spectral theorem for compact self-adjoint operators on a Hilbert space actually for?
Please excuse the naive question. I have had two classes now in which this theorem was taught and proven, but I have only ever seen a single (indirect?) application involving the quantum harmonic ...
5answers
1k views
### False beliefs about Lebesgue measure on $\mathbb{R}$
I'm trying to develop intuition about Lebesgue measure on $\mathbb{R}$ and I'd like to build a list of false beliefs about it, for example: every set is measurable, every set of measure zero is ...
17answers
3k views
### Examples of mathematical induction
What are the best examples of mathematical induction available at the secondary-school level---totally elementary---that do not involve expressions of the form $\bullet+\cdots\cdots\cdots+\bullet$ ...
5answers
1k views
### An example of an easy to understand undecidable problem
I am looking for an undecidable problem that I could give as an easy example in a presentation to the general public. I mean easy in the sense that the mathematics behind it can be described, well, ...
3answers
804 views
### If every continuous $f:X\to X$ has $\text{Fix}(f)\subseteq X$ closed, must $X$ be Hausdorff?
Given a function $f:X\to X$, let $\text{Fix}(f)=\{x\in X\mid x=f(x)\}$. In a recent comment, I wondered whether $X$ is Hausdorff $\iff$ $\text{Fix}(f)\subseteq X$ is closed for every continuous ...
2answers
255 views
### Is every function with the intermediate value property a derivative?
As it is well known every continuous function has the intermediate value property, but even some discontinuous functions like f(x)=\left\{ \begin{array}{cl} \sin\left(\frac{1}{x}\right) & x\neq ...
17answers
988 views
### Accidents of small $n$
In studying mathematics, I sometimes come across examples of general facts that hold for all $n$ greater than some small number. One that comes to mind is the Abel–Ruffini theorem, which states that ...
2answers
709 views
### If a continuous function is positive on the rationals, is it positive almost everywhere?
I made up this question, but unable to solve it: Let $f : \mathbb R \to \mathbb R$ be a continuous function such that $f(x) > 0$ for all $x \in \mathbb Q$. Is it necessary that $f(x) > 0$ ...
16answers
2k views
### An example for a calculation where imaginary numbers are used but don't occur in the question or the solution.
In a presentation I will have to give an account of Hilbert's concept of real and ideal mathematics. Hilbert wrote in his treatise "Über das Unendliche" (page 14, second paragraph. Here is an English ...
2answers
312 views
### What is a metric for $\mathbb Q$ in the lower limit topology?
A useful source of counterexamples in topology is $\mathbb R_\ell$, the set $\mathbb R$ together with the lower limit topology generated by half-open intervals of the form $[a,b)$. For example this ...
6answers
809 views
### Uncountable closed set of irrational numbers
Could you construct an actual example of a uncountable set of irrational numbers that is closed (in the topological sense)? I can find countable examples that are closed, like \$\{ \sqrt{2} + ...
4answers
432 views
### Is the closure of a Hausdorff space, Hausdorff?
$(X,\mathcal T)$ is a topological space which has a dense Hausdorff subspace. Is $X$ Hausdorff?
3answers
2k views
### Is every Lebesgue measurable function on $\mathbb{R}$ the pointwise limit of continuous functions?
I know that if $f$ is a Lebesgue measurable function on $[a,b]$ then there exists a continuous function $g$ such that $|f(x)-g(x)|< \epsilon$ for all $x\in [a,b]\setminus P$ where the measure of ...
3answers
322 views
### Why does the Hilbert curve fill the whole square?
I have never seen a formal definition of the Hilbert curve, much less a careful analysis of why it fills the whole square. The Wikipedia and Mathworld articles are typically handwavy. I suppose the ...
2answers
341 views
### Basic counterexample re: preimages of ideals
I'm trying to think of an example of a homomorphism of commutative rings $f:A\rightarrow B$ and ideals $I,J$ of $B$ such that $f^{-1}(I)+f^{-1}(J)$ is not a preimage of any ideal of $B$. I can't seem ...
1answer
1k views
### An example of a division ring $D$ that is **not** isomorphic to its opposite ring
I recall reading in an abstract algebra text two years ago (when I had the pleasure to learn this beautiful subject) that there exists a division ring $D$ that is not isomorphic to its opposite ring. ...
3answers
669 views
### Examples of math contest problems that can be solved in a 'cheap' way?
What are some examples of math contest problems that can be solved by using a nonrigorous, 'cheap' shortcut? For instance, a problem on the 2011 AMC went: A raft and a motorboat left dock A and ...
1answer
1k views
### Isomorphic quotients by isomorphic normal subgroups
In this recent question, Iota asked if, given a finite group $G$ and two isomorphic normal subgroups $H$ and $K$, it would follow that $G/H$ and $G/K$ are isomorphic. This is not true (a simple ...
5answers
566 views
### Can two topological spaces surject onto each other but not be homeomorphic?
Let $X$ and $Y$ be topological spaces and $f:X\rightarrow Y$ and $g:Y\rightarrow X$ be surjective continuous maps. Is it necessarily true that $X$ and $Y$ are homeomorphic? I feel like the answer to ...
4answers
559 views
### Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that it has intermediate value property and discontinuous everywhere?
Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that it has intermediate value property and discontinuous everywhere? I think it is probable because we can consider y ...
2answers
783 views
### examples of measurable functions on $\mathbb{R}$
Could give some examples of nonnegative measurable function $f:\mathbb{R}\to[0,\infty)$, such that its integral over any bounded interval is infinite?
2answers
807 views
### How useless can the Mayer-Vietoris sequence be in general?
In an algebraic topology course I'm taking we are often asked to compute the homology groups of a space $X = A \cup B$ using the Mayer-Vietoris sequence, and it happens in all of the examples I've ...
2answers
553 views
### Clarifying the relationship between outer measures, measures and measurable spaces: the converse direction
This is related to my measure theory class, but it's not homework. The motivation behind this post is to understand the exact relationship between the three objects mentioned in the title. ...
5answers
204 views
### Looking for Cover's hubris-busting ${\mathbb R}^{N\gg3}$ counterexamples
In lecture 4 of his Introduction to Dynamical Linear Systems course, right after interpreting the inner product in ${\mathbb R}^N$ in terms of the cosine of the subtended angle, Stanford's Stephen ...
3answers
395 views
### Weird subfields of $\Bbb{R}$
I found this problem, and I can't get an answer to it: Prove that there are subfields of $\Bbb{R}$ that are a) non-measurable. b) of measure zero and continuum cardinality. I can't ...
2answers
212 views
### Construct an example of set $A$ for which $A+A=A$ but $0∉cl(A)$
How to prove that convexity is necessary condition in this question? Need to construct an example of set $A$ for which $A+A=A$ but $0 \notin cl(A)$. From the linked question follows that $A$ must be ...
2answers
281 views
### Is every group the automorphism group of a group?
Suppose $G$ is a group. Does there always exist a group $H$, such that $\operatorname{Aut}(H)=G$, i. e. such that $G$ is the automorphism group of $H$? EDIT: It has been pointed out that the answer ...
1answer
534 views
### A counterexample in topology
Semi-local simple connectedness is a property that arises in Algebraic Topology in the study of covering spaces, namely, it is a necessary condition for the existence of the universal cover of a ...
9answers
522 views
### Examples of nonabelian groups.
Can anybody provide some examples of finite nonabelian groups which are not symmetric groups or dihedral groups?
2answers
625 views
### “Pseudo-Cauchy” sequences: are they also Cauchy?
I tried to prove something but I could not, I don't know if it's true or not, but I did not found a counterexample. Let $(a_n)$ be a sequence in a general metric space such that for any fixed \$k ...
4answers
569 views
### Is a vector space over a finite field always finite?
Definition of a vector space: Let $V$ be a set and $(\mathbb{K}, +, \cdot)$ a field. $V$ is called a vector space over the field $\mathbb{K}$ if: V1: $(V, +)$ is a commutative group V2: \$\forall ...
3answers
1k views
### Discontinuous linear functional
I'm trying to find a discontinuous linear functional into $\mathbb{R}$ as a prep question for a test. I know that I need an infinite-dimensional Vector Space. Since $\ell_2$ is infinite-dimensional, ...
4answers
437 views
### Example of a rational function such that : $(f(x))^{3} + (g(x))^{3} + (h(x))^{3}=x$
Can any one give me example of: rational functions $f, g$ and $h$ with rational coefficients such that $$(f(x))^{3} + (g(x))^{3} + (h(x))^{3}=x$$ Also, if anyone knows a procedure for constructing ...
1answer
210 views
### Measurable subset of $\mathbb{R}$ with a specific property
Let $A$ be a subset of $\mathbb{R}$ such that its intersection with every finite segment is Lebesgue measurable. I am looking for an example of such an $A$ with the additional property that the ...
2answers
374 views
### Why is $T_1$ required for a topological space to be $T_4$?
Let's say we have some topological space. Axiom $T_1$ states that for any two points $y \neq x$, there is an open neighborhood $U_y$ of $y$ such that $x \notin U_y$. Then we say that a topological ...
3answers
383 views
### Why does the Continuum Hypothesis make an ideal measure on $\mathbb R$ impossible?
On the page 43 of Real Analysis by H.L. Royden (1st Edition) we read: "(Ideally) we should like $m$ (the measure function) to have the following properties: $m(E)$ is defined for each subset $E$ of ...
1answer
316 views
### Complete space as a disjoint countable union of closed sets
It is a consequence of Baire's theorem that a connected, locally connected complete space cannot be written $$X = \bigcup_{n \geq 1}\ F_n$$ where the $F_n$ are nonempty, pairwise disjoint closed ...
1answer
192 views
### Examples of universal constructions in probability theory
I am looking for examples of universal constructions in probability theory. Like the construction a of Gaussian space from a real Hilbert space, or a Poisson jump process from a measurable space with ...
2answers
844 views
### Comparing the Lebesgue measure of an open set and its closure
Let $E$ be an open set in $[0,1]^n$ and $m$ be the Lebesgue measure. Is it possible that $m(E)\neq m(\bar{E})$, where $\bar{E}$ stands for the closure of $E$?
4answers
271 views
### Counterexamples in complex analysis
In contrast to other topics in analysis such as functional analysis with its vast amount of counterexamples to intuitively correct looking statements (see here for an example), everything in complex ...
2answers
360 views
### A (non-artificial) example of a ring without maximal ideals
As a brief overview of the below, I am asking for: An example of a ring with no maximal ideals that is not a zero ring. A proof (or counterexample) that $R:=C_0(\mathbb{R})/C_c(\mathbb{R})$ is a ...
3answers
271 views
### Nasty examples for different classes of functions
Let $f: \mathbb{R} \to \mathbb{R}$ be a function. Usually when proving a theorem where $f$ is assumed to be continuous, differentiable, $C^1$ or smooth, it is enough to draw intuition by assuming ...
2answers
523 views
### Set of zeroes of the derivative of a pathological function
For a continuous function $f : [0,1] \to {\mathbb R}$, let us set $$X_f=\lbrace x \in [0,1] \bigg| f'(x)=0 \rbrace$$ (for a $x\not\in X_f$, $f'(x)$ may be a nonzero value or undefined). There ...
http://www.physicsforums.com/showthread.php?s=2212a82a4682ccb83ad1f1ed87b56820&p=4026885 | Physics Forums
## What is the determinant of a matrix?
Okay so I'm a first year engineering student and I'm taking linear algebra.
I understand how to take determinants of nxn matrices, and I know how to do co-factor expansion. But I still don't understand what a determinant is.
I don't like learning algorithms on how to do certain things in math without knowing why it works. I hope someone can answer the "Why?" question for me.
To keep it as simple as possible: What the hell is a determinant?
best to read about it: http://en.wikipedia.org/wiki/Determinant or in your textbook. the determinant is an operator that maps a square matrix to a number. sometimes this class of operator is called a "functional". this number sorta-kinda roughly corresponds to what might be considered a "magnitude" of the matrix. sorta like the length of a vector, but this magnitude is raised to the nth power where n is the number of rows or columns of the square matrix.
Hi KingKai! It's how much larger the coordinate system is. The matrix transforms from one set of coordinates to another. The determinant is the ratio of the "volumes" of a "unit box" before and after the transformation. (and if you're doing an integral, eg ∫∫∫∫ f(w,x,y,z) dwdxdydz, once you've transformed the function and the limits, you also have to multiply by the determinant of the "Jacobian" matrix)
## What is the determinant of a matrix?
Quote by tiny-tim Hi KingKai! It's how much larger the coordinate system is. The matrix transforms from one set of coordinates to another. The determinant is the ratio of the "volumes" of a "unit box" before and after the transformation.
that is an excellent answer, tiny. and a fact i hadn't thunked of before. but you don't need both the ratio of volumes and a unit box, do you? if you have any box (that has some volume) and transform it to another box by multiplying each corner coordinates by the transforming matrix, the volume of the resulting box is the volume of the box you start with times the determinant of the transforming matrix. ain't that so?
Quote by tiny-tim Hi KingKai! It's how much larger the coordinate system is. The matrix transforms from one set of coordinates to another. The determinant is the ratio of the "volumes" of a "unit box" before and after the transformation. (and if you're doing an integral, eg ∫∫∫∫ f(w,x,y,z) dwdxdydz, once you've transformed the function and the limits, you also have to multiply by the determinant of the "Jacobian" matrix)
This is wayy above my level of understanding. I only just learned how to do an integral like a month ago lol.
τ∏αηκ∫ anyways haha.
- ℝγαη
King, leave the "Jacobian" thing alone for a couple of years. do you understand that if you multiply an n×n matrix with a length-n vector (otherwise known as a 1×n matrix or "column vector"), what you get as a result (or "product") is another length-n vector. you got that, right? now, imagine in 3-dimensional space (so n=3), and you have a box anywhere in this "3-space". there are eight corners of this box and each corner has coordinates of some ( x, y, z) values that you can represent as a vector. specifically as a column vector: $$\begin{bmatrix} x_1 \\ y_1 \\ z_1 \end{bmatrix}$$ and you map each corner of that box to a corresponding corner of another box by use of some 3×3 square matrix. it's the same matrix used to transform each corner of the first box to the new box. the new box will have a volume equal to the volume of the first box times the determinant of the mapping matrix. does that make sense to you?
Quote by rbj … you don't need both the ratio of volumes and a unit box, do you? if you have any box (that has some volume) and transform it to another box by multiplying each corner coordinates by the transforming matrix, the volume of the resulting box is the volume of the box you start with times the determinant of the transforming matrix. ain't that so?
hi rbj!
yes, you're right of course, but it usually crops up when dealing with things like dwdxdydz, so i decided it would be easier to understand if i kept to boxes
on the other hand …
Quote by KingKai This is wayy above my level of understanding. I only just learned how to do an integral like a month ago lol.
Hi Ryan!
ok, just consider a 1x1x1 box at the origin (so its volume is 1)
a 3x3 matrix transforms it into a parallelepiped (a partially-collapsed box with sloping faces), and the volume of the new box is the determinant
(and if the box has been turned inside-out, then the determinant is negative)
same for a general n x n matrix, only less easy to visualise!
I kinda sorta get it, I'll probably have to do an example of this "volume change" problem. Math is really starting to get weird lol... thanks a bunch rbj and tiny tim. You both get a check mark √ √
These are great explanations. Can someone do the same for the trace of a tensor?
hi cosmik debris! so far as i know, the trace of a tensor has no particular physical visualisation. it is an invariant … it is the same under any unitary change of basis … so it does turn up in formulas like einstein's field equations. but in eg the moment of inertia tensor, the trace is the sum of the principal moments of inertia, which does not seem to have any physical application
Quote by tiny-tim hi cosmik debris! so far as i know, the trace of a tensor has no particular physical visualisation
Thanks.
Quote by tiny-tim Hi KingKai! It's how much larger the coordinate system is. The matrix transforms from one set of coordinates to another. The determinant is the ratio of the "volumes" of a "unit box" before and after the transformation. (and if you're doing an integral, eg ∫∫∫∫ f(w,x,y,z) dwdxdydz, once you've transformed the function and the limits, you also have to multiply by the determinant of the "Jacobian" matrix)
Thank you. I never understood what it meant, but now I do. But how do you go from that to computing it? Also, why do books so rarely cover the definition of it?
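Picking up the "how do you go from that to computing it?" question: here is a short numerical illustration (a sketch added for illustration, not part of the thread; the matrix is an arbitrary example) of the volume picture discussed above. The columns of a 3×3 matrix are the images of the unit cube's edge vectors, so the volume of the image parallelepiped (a scalar triple product) should equal the determinant up to sign:

```python
# Quick numerical check of "determinant = volume scaling factor" for a 3x3 example.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 3.0]])

# The unit cube at the origin is spanned by e1, e2, e3; the matrix sends them
# to its columns, which span the image parallelepiped.
u, v, w = A[:, 0], A[:, 1], A[:, 2]

# Volume of the parallelepiped spanned by u, v, w: |u . (v x w)|
volume = abs(np.dot(u, np.cross(v, w)))

print(volume)            # 7.0
print(np.linalg.det(A))  # ~7.0 up to rounding; the sign tracks orientation
```

The sign of `np.linalg.det(A)` records whether the box has been turned inside-out, as described earlier in the thread.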
http://mathhelpforum.com/calculus/4157-help-newton-print.html | # help with Newton
• July 15th 2006, 11:22 AM
dan
help with Newton
Is there someone who would get a kick out of explaining Newton's method to me? I've read some different things about it but I didn't understand the calculus :confused: Is it possible to explain it without using calculus?
dan
• July 15th 2006, 12:08 PM
galactus
What seems to be the trouble? Newton's Method is one of the easiest things in calculus.
It's just a matter of iterations.
Let's give an example:
Use Newton to find the 'real' solutions of $x^{3}-x-1=0$
Let $f(x)=x^{3}-x-1\;\ and\;\ f'(x)=3x^{2}-1$
Graph the function, $x^{3}-x-1$
You can see that y=0 when x is between 1 and 2.
Make something in that region your initial guess.
Try 1.5
$x_{2}=1.5{-}\frac{(1.5)^{3}-1.5-1}{3(1.5)^{2}-1}=1.34782609$
Now use the result you get from this and sub back into the equation:
What is wrong with this website that I keep getting these errors saying the image is too big?. I've never seen this on another site. The code is fine as far as I can tell..
$x_{3}=1.34782609-\frac{(1.34782609)^{3}-(1.34782609)-1}{3(1.34782609)^{2}-1}=1.32520040$
Continue in this fashion until you arrive at approximations which are so close they are virtually unchanged.
We end up with:
$x_{1}=1.5\;\ x_{2}=1.34782609\;\ x_{3}=1.32520040\;\ x_{4}=1.32471817\;\ x_{5}=1.32471796\;\ x_{6}=1.32471796$
See, the last two are the same.
No need to continue. The solution is $\approx{1.32471796}$
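For anyone who prefers to let a machine do the iterating, here is a short Python sketch (added for illustration, not part of the original thread) that reproduces the hand computation above for $x^{3}-x-1=0$:

```python
# Newton's method for f(x) = x^3 - x - 1, starting from the same guess as above.
def newton(f, fprime, x, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        step = f(x) / fprime(x)   # x_{n+1} = x_n - f(x_n) / f'(x_n)
        x -= step
        if abs(step) < tol:       # stop once the update is negligible
            break
    return x

f = lambda x: x**3 - x - 1
fprime = lambda x: 3 * x**2 - 1

print(newton(f, fprime, 1.5))     # 1.3247179572447..., i.e. ~1.32471796 as above
```

Starting from the same initial guess 1.5 it settles on the same value after a handful of steps, matching the hand computation.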
• July 15th 2006, 01:11 PM
CaptainBlack
Quote:
Originally Posted by galactus
What is wrong with this website that I keep getting these errors saying the image is too big?. I've never seen this on another site. The code is fine as far as I can tell..
The TeX system seems to not like long (not that long) strings of TeX.
Placing pairs of [ /math] and [ math] (without extra spaces) seems to
fix the problems usually (as I have done in your post).
RonL
• July 15th 2006, 01:49 PM
galactus
Thank you Cap'N. I will keep that in mind. I have not run into that problem before, regardless of the length of the string. Thanks for the fix up.
• July 15th 2006, 06:17 PM
dan
help with newton
Ok,
What seems to be the problem is...I know practically no calculus and my dad wont let me take a course until I finish my college algebra book :( so... I don't even know what f'(x) means and how you get there from f(x) :confused: :eek:
Dan
• July 15th 2006, 06:22 PM
ThePerfectHacker
Quote:
Originally Posted by dan
Ok,
What seems to be the problem is...I know practically no calculus and my dad wont let me take a course until I finish my college algebra book :( so... I don't even know what f'(x) means and how you get there from f(x) :confused: :eek:
Dan
I would be more strict and tell you to study "Pre-calculus", or however they call the course.
• July 16th 2006, 03:25 AM
galactus
Quote:
Originally Posted by dan
Ok,
What seems to be the problem is...I know practically no calculus and my dad wont let me take a course until I finish my college algebra book :( so... I don't even know what f'(x) means and how you get there from f(x) :confused: :eek:
Dan
Poor Dan. I didn't realize that. Math is done in steps; you need to finish your algebra or pre-calculus and then go to Calc I. BTW, f'(x) is the derivative of a function f(x). What brought up Newton anyway? You need to learn differentiation before you can use Newton's Method. Do a Google search.
Good luck.
• July 16th 2006, 04:26 AM
malaygoel
How will you explain the working of Newton's Method? I can't figure out why it works!!
Keep Smiling
Malay
• July 16th 2006, 04:35 AM
Soroban
Sorry, dan!
Quote:
I don't even know what $f'(x)$ means . . .
You're saying, "Can someone explain a $B\flat\text{ minor}$ chord without using Music Theory?
. . You see, I don't read music."
Answer: $No.$
• July 16th 2006, 04:45 AM
galactus
Most any calc book will explain how it works. It's not that complicated.
Off the top of my head.
The solutions of f(x)=0 are the values of x where the graph crosses the x-axis.
Suppose that x=c is some solution we are looking for. Even if we can't find c exactly, it is usually possible to approximate it by graphing and using the Intermediate Value Theorem.
If we let, say, $x_{1}$ be our initial approximation, then we can improve on it by following the tangent line to $y=f(x)$ at $x_{1}$ until it meets the x-axis at a point $x_{2}$.
Repeat.
One thing we have to do is derive some sort of formula so we can use ol' Newton.
$y-f(x_{1})=f'(x_{1})(x-x_{1})$
If $f'(x_{1})\neq{0}$, then this line is not parallel to the x-axis and crosses it at some point $(x_{2},0)$.
Sub this in our point-slope form:
$-f(x_{1})=f'(x_{1})(x_{2}-x_{1})$
Solve for $x_{2}$
$x_{2}=x_{1}-\frac{f(x_{1})}{f'(x_{1})}$
We keep going until we see that the approximation is
$x_{n+1}=x_{n}-\frac{f(x_{n})}{f'(x_{n})}$, n=1,2,3,......
http://mathoverflow.net/questions/37610/demonstrating-that-rigour-is-important/63913 | ## Demonstrating that rigour is important
Any pure mathematician will from time to time discuss, or think about, the question of why we care about proofs, or to put the question in a more precise form, why we seem to be so much happier with statements that have proofs than we are with statements that lack proofs but for which the evidence is so overwhelming that it is not reasonable to doubt them.
That is not the question I am asking here, though it is definitely relevant. What I am looking for is good examples where the difference between being pretty well certain that a result is true and actually having a proof turned out to be very important, and why. I am looking for reasons that go beyond replacing 99% certainty with 100% certainty. The reason I'm asking the question is that it occurred to me that I don't have a good stock of examples myself.
The best outcome I can think of for this question, though whether it will actually happen is another matter, is that in a few months' time if somebody suggests that proofs aren't all that important one can refer them to this page for lots of convincing examples that show that they are.
Added after 13 answers: Interestingly, the focus so far has been almost entirely on the "You can't be sure if you don't have a proof" justification of proofs. But what if a physicist were to say, "OK I can't be 100% sure, and, yes, we sometimes get it wrong. But by and large our arguments get the right answer and that's good enough for me." To counter that, we would want to use one of the other reasons, such as the "Having a proof gives more insight into the problem" justification. It would be great to see some good examples of that. (There are one or two below, but it would be good to see more.)
Further addition: It occurs to me that my question as phrased is open to misinterpretation, so I would like to have another go at asking it. I think almost all people here would agree that proofs are important: they provide a level of certainty that we value, they often (but not always) tell us not just that a theorem is true but why it is true, they often lead us towards generalizations and related results that we would not have otherwise discovered, and so on and so forth. Now imagine a situation in which somebody says, "I can't understand why you pure mathematicians are so hung up on rigour. Surely if a statement is obviously true, that's good enough." One way of countering such an argument would be to give justifications such as the ones that I've just briefly sketched. But those are a bit abstract and will not be convincing if you can't back them up with some examples. So I'm looking for some good examples.
What I hadn't spotted was that an example of a statement that was widely believed to be true but turned out to be false is, indirectly, an example of the importance of proof, and so a legitimate answer to the question as I phrased it. But I was, and am, more interested in good examples of cases where a proof of a statement that was widely believed to be true and was true gave us much more than just a certificate of truth. There are a few below. The more the merrier.
-
There's a clear advantage to knowing a 'good' proof of a statement (or even better, several good proofs), as it is an intuitively comprehensible explanation of why the statement is true, and the resulting insight probably improves our hunches about related problems (or even about which problems are closely related, even if they appear superficially unrelated). But if we are handed an 'ugly' proof whose validity we can verify (with the aid of a computer, say), but where we can't discern any overall strategy, what do we gain? – Colin Reid Sep 3 2010 at 13:53
What kind of person do you have in mind who would suggest proofs are not important? I can't imagine it would be a mathematician, so exactly what kind of mathematical background do you want these replies to assume? – KConrad Sep 3 2010 at 15:33
Colin Reid- I think one can differentiate between a person understanding and a technique understanding. The latter applies even if we cannot understand the proof. We know that the tools themselves "see enough" and "understand enough", and that in itself is a significant advance in our understanding. But we still want a "better proof", because a hard proof makes us feel that our techniques aren't really getting to the heart of the problem- we want techniques which understand the problem more clearly. – Daniel Moskovich Sep 3 2010 at 16:26
Concerning the Zeilberger link that Jonas posted, sorry but I think that essay is absurd. If Z. thinks that the fact that only a small number of mathematicians can understand something makes it uninteresting then he should reflect on the fact that most of the planet won't understand a lot of Z's own work since most people don't remember any math beyond high school. Therefore is Z's work dull and pointless? He has written other essays that take extreme viewpoints (like R should be replaced with Z/p for some unknown large prime p). – KConrad Sep 5 2010 at 1:39
Every proof has its own "believability index". A number of years ago I was giving a lecture about a certain algorithm related to Galois Theory. I mentioned that there were two proofs that the algorithm was polynomial time. The first depended on the classification of finite simple groups, and the second on the Riemann Hypothesis for a certain class of L-functions. Peter Sarnak remarked that he'd rather believe the second. – Victor Miller Sep 6 2010 at 15:56
## 37 Answers
This is not strictly an answer to the question since it is a hypothetical rather than an example, but perhaps relevant nonetheless: Suppose that you have some computer program-say for keeping an airliner stable under gusts of wind-and it relies on a numerical algorithm, proved to converge under reasonable assumptions about air pressure and wind velocity, so on. A faster and more stable algorithm is developed for which no proof of convergence is known, though all the researchers in the field assure you that it always converges and that they are certain that it will always converge given the plausible scenarios your code is likely to be used for. I think it is clear that you should not trust their judgment, but rather retain the old code, despite the clear desirability of having a faster numerical algorithm.
So from the perspective of the researchers, a proof might not be all that important; it may have only told them what they already knew and generated no new insights in the process. But from the perspective of the consumers of mathematics, knowledge that a proof exists may lead to incremental improvements in technology that would otherwise not happen. Of course, at this point we've come full circle and it becomes important to the researchers to supply a proof.
My second point again distinguishes between consumers of mathematics and researchers: It is sometimes much easier to become 100% certain of something than 99% certain. 99% certainty that a given statement is true requires thinking about many concrete examples and developing intuition, whereas 100% certainty requires logically assenting to the statements contained in a proof. By this standard, I would say that I am 100% certain about the bulk of my mathematical knowledge, and not 99% certain. Perhaps this is a lamentable state of affairs, but time is finite. We cannot hope to develop intuition about all the statements we wish to use while working on problems we do wish to develop intuition about. In that sense, proofs encode in a few kb's the vast amount of information stored as the intuitions of all the researchers working on a particular problem. Again under this model, the purpose of proofs is for the convenience of non-researchers.
-
Mathematics wasn't that rigorous before N. Bourbaki: in the Italian school of Algebraic Geometry of the beginning of the XXth century the standard procedure was Theorem, Proof, Counterexample. Also at the time of Cauchy some theorems in analysis began like "If the reader doesn't choose a specially bad function we have..."
The use of rigour in analysis, which Cauchy began, avoided that by making it possible to explain what a "good function" was in each case: analytic, $C^{\infty}$, one whose series expansions can be differentiated term by term...
-
@Gabriel : This seems like a delicate historical claim. I would actually be interested to see some sources on the evolution of standards of rigor within the mathematical community. – Andres Caicedo Dec 6 2010 at 23:11
@Andres www-history.mcs.st-and.ac.uk/Biographies/… "Nicolas Bourbaki, a project they began in the 1930s, in which they attempted to give a unified description of mathematics. The purpose was to reverse a trend which they disliked, namely that of a lack of rigour in mathematics. The influence of Bourbaki has been great over many years but it is now less important since it has basically succeeded in its aim of promoting rigour and abstraction." – Gabriel Furstenheim Dec 7 2010 at 8:26
The unified description of mathematics was not the initial intent of the Bourbaki project. By all accounts, it was to write an up to date analysis textbook (losing a whole generation to World War 1 had left a gap). Of course, the whole thing got out of hand pretty quickly... – Thierry Zell Mar 19 2011 at 5:15
The fundamental lemma is an example that most believed and on whose truth several results depend. According to Wikipedia, Professor Langlands has said
... it is not the fundamental lemma as such that is critical for the analytic theory of automorphic forms and for the arithmetic of Shimura varieties; it is the stabilized (or stable) trace formula, the reduction of the trace formula itself to the stable trace formula for a group and its endoscopic groups, and the stabilization of the Grothendieck–Lefschetz formula. None of these are possible without the fundamental lemma and its absence rendered progress almost impossible for more than twenty years.
and Michael Harris has also commented that it was a "bottleneck limiting progress on a host of arithmetic questions."
-
Fundamental lemma is not even 50% obvious by any stretch of imagination. Like most of the Langlands program, it is not a specific result that admits a precise, uniform statement; rather, it is a guiding principle that needs to be fine-tuned in order to be compatible with other things that we, following Langlands, would like to believe in $-$ then, and only then, does it becomes meaningful to talk about proving it. – Victor Protsak Sep 6 2010 at 21:35
Thank you, Victor. I'm not proposing that the Fundamental Lemma is obvious, but it seems that is was accepted as likely to be true, because others based new work on it. PC below gives the example of Skinner and Urban, and Peter Sarnak says here time.com/time/specials/packages/article/…, that "It's as if people were working on the far side of the river waiting for someone to throw this bridge across," ... "And now all of sudden everyone's work on the other side of the river has been proven." – Anthony Pulido Sep 6 2010 at 22:45
I was under the impression that the fundamental lemma, at least, is a set of statements clear enough to be amenable to proof attempts. I think we have on page 3 here arxiv.org/abs/math/0404454 and Theorems 1 and 2 here arxiv.org/abs/0801.0446 precise statements for the various fundamental lemmas... It's possible I'm misunderstanding you. – Anthony Pulido Sep 6 2010 at 22:55
The fundamental lemma had a precise formulation, due to Langlands and Shelstad, in the 1980s (following earlier special cases). It is a collection of infinitely many equations, each side involving an arbitrarily large number of terms (i.e. by choosing an appropriate case of the FL, you can make the number of terms as large as you like). It was universally believed to be true because otherwise the theory of the trace formula (some of which was proved, but some of which remained conjectural), as developed by Langlands and others, would be internally inconsistent, something which no-one ... – Emerton Sep 7 2010 at 4:05
... believed could be true. This a typical phenomenon in the Langlands program, I would say: certain very general principles, which one cannot really doubt (at least at this point, when the evidence for Langlands functoriality and reciprocity seems overwhelming), upon further examination, lead to exceedingly precise technical statements which in isolation can seem very hard to understand, and for which there is no obvious underlying mechanism explaining why they are true. But one could note that class field theory (which one knows to be true!) already has this quality. – Emerton Sep 7 2010 at 4:13
In my experience, the two greatest difficulties in mathematics are:
1. The obvious is not always true.
2. The truth is not always obvious.
Rigour is the essence of mathematics. A rigorous proof provides an explanation of why a particular mathematical statement is true, and, at the same time, takes care of all the "Yes, but what if"s.
Rigour and proof provide the guarantee of correctness and reliability.
Rigour and proof refine our mathematical insights and instincts so that the superficially "obvious" misleads us less frequently.
When I pose the problem "1, 2, 3, x Find x." the initial response is usually a derisory laugh, of disbelief that I am serious, because "the answer is obviously 4". It is easy to demonstrate using practical examples that this statement is, as it stands, nonsense. A rigorous analysis is required.
-
Michael Atiyah's discussion of the "proof" and it role seems to relevant to be posted here.
Taken from: "Advice to a Young Mathematician in the Princeton Companion to Mathematics." http://press.princeton.edu/chapters/gowers/gowers_VIII_6.pdf This link was provided by "mathphysicist" in answer on another MO question: http://mathoverflow.net/questions/2144/a-single-paper-everyone-should-read-closed
Quotation from M. Atiyah:
"In fact, I believe the search for an explanation, for understanding, is what we should really be aiming for. Proof is simply part of that process, and sometimes its consequence."
"... it is a mistake to identify research in mathematics with the process of producing proofs. In fact, one could say that all the really creative aspects of mathematical research precede the proof stage. To take the metaphor of the “stage” further, you have to start with the idea, develop the plot, write the dialogue, and provide the theatrical instructions. The actual production can be viewed as the “proof”: the implementation of an idea.
In mathematics, ideas and concepts come first, then come questions and problems. At this stage the search for solutions begins, one looks for a method or strategy. Once you have convinced yourself that the problem has been well-posed, and that you have the right tools for the job, you then begin to think hard about the technicalities of the proof."
-
Allow me to quote part of the introduction of chapter 9 of Lovász: Combinatorial Problems and Exercises.
The chromatic number is the most famous graphical invariant; its fame being mainly due to the Four Color Conjecture, which asserts that all planar graphs are 4-colorable. This has been the most challenging problem of combinatorics for over a century and has contributed more to the development of the field than any other single problem. A computer-assisted proof of this conjecture was finally found by Appel and Haken in 1977. Although today chromatic number attracts attention for several other reasons too, many of which arise from applied mathematical fields such as operations research, attempts to find a simpler proof of the Four Color Theorem is still an important motivation of its investigation.
So here it's not so much the proof but the search for a proof that has given something extra over just believing the theorem. Does that still count as an answer to this question?
-
I think Cosmonut's mention of Stokes' theorem (in response to Gowers's refinement of the question) should be generalized to a wider statement that can't be bypassed here: in fact, exactly the impossibility of making Calculus rigorous can be considered a cause of why Analysis appeared.
In modern language the problem is the following. One could expect that the operations on what are called "elementary functions" in Calculus can be axiomatized independently of the axioms of the real numbers, so that one gets a closed, purely algebraic system, where the equalities of elementary functions are derived from a list of "axiomatic identities" between $x^y$, $\sin x$, etc., and operations like the derivative and the integral are conceived in a purely algebraic way; the formulation of the problem can be found at page 197 in my textbook in arxiv: http://arxiv.org/abs/1010.0824 (unfortunately, in Russian).
But as it turns out, this is impossible, at least for the complete list of elementary functions: even the equality of elementary functions can't be defined axiomatically. And, what is amazing, this is not a classical result; it is quite new. However, I can't give a reference: what I write here is what Sergei Soloviev from Toulouse told me not long ago.
-
http://unapologetic.wordpress.com/2007/11/19/nets-part-i/?like=1&source=post_flair&_wpnonce=a34a077b61 | # The Unapologetic Mathematician
## Nets, Part I
And now we come to my favorite definition of a topology: that by nets. If you’ve got a fair-to-middling mathematical background you’ve almost certainly seen the notion of a sequence, which picks out a point of a space for each natural number. Nets generalize this to picking out more general collections of points.
The essential thing about the natural numbers for sequences is that they’re “directed”. That is, there’s an order on them. It’s a particularly simple order since it’s total — any two elements are comparable — and the underlying set is very easy to understand. We want to consider more general sorts of “directed” sets, and we define them as follows: a directed set ${D}$ is a preorder so that for any two elements $a\in D$ and $b\in D$ we have some $c\in D$ with $c\geq a$ and $c\geq b$. That is, we can always find some point that’s above the two points we started with. $c$ doesn’t have to be distinct from them, though — if $a\geq b$ then $a$ is just such a point.
In categorical terms this is not quite the same as saying that our preorder has coproducts, since we don’t require any sort of universality here. We might say instead that we have “weak” binary coproducts, but that might be inessentially multiplying entities, and Ockham don’t play that. However, if we also throw in the existence of “weak” coequalizers — for a pair of arrows $f:A\rightarrow B$ and $g:A\rightarrow B$ there is at least one arrow $h:B\rightarrow C$ so that $h\circ f=h\circ g$ — we get something called a “filtered” category. Since there’s no such thing as a pair of distinct parallel arrows in a preorder, this adds nothing in that case. However, filtered categories show up in the theory of colimits in categories. In fact originally colimits were only defined over filtered index categories $\mathcal{J}$.
Anyhow, let’s say we have such a directed set ${D}$ at hand. If it helps, just think of $\mathbb{N}$ with the usual order. A net in a set $X$ is just a function $\Phi:D\rightarrow X$. Now we have a bunch of definitions to talk about how the image of such a function behaves. Given a subset $A\subseteq X$, we say that the net $\Phi$ is “frequently” in $A$ if for any $a\in D$ there is a $b\geq a$ with $\Phi(b)\in A$. We say that the net is “eventually” in $A$ if there is an $a\in D$ so that $\Phi(b)\in A$ for all $b\geq a$. For sequences, the first of these conditions says that no matter how far out the sequence we go we can find a point of the sequence in $A$. The second says that we will not only land in $A$, but stay within $A$ from that point on.
Next let’s equip $X$ with a topology defined by a neighborhood system. We say that a net $\Phi:D\rightarrow X$ converges to a point $x\in X$ if for every neighborhood $U\in\mathcal{N}(x)$, the net is eventually in $U$. In this case we call $x$ the limit of $\Phi$. Notice that if ${D}$ has a top element $\omega$ so that $\omega\geq a$ for all $a\in D$ then the limit of $\Phi$ is just $\Phi(\omega)$. In a sense, then, the process of taking a limit is an attempt to say, “if ${D}$ did have a top element, where would it have to go?”
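As a concrete non-sequence example (an illustration added here, not part of the original post): let ${D}$ be the collection of finite subsets of $\mathbb{N}$, ordered by inclusion; the union of two finite sets lies above both of them, so this is a directed set. Define the net $\Phi(F)=\sum_{n\in F}2^{-n}$. This net converges to $2$, since for the neighborhood $(2-\epsilon,2+\epsilon)$ the element $a=\{0,1,\dots,N\}$ with $2^{-N}<\epsilon$ is a witness: every finite $F\supseteq a$ has $\Phi(F)$ within $2^{-N}$ of $2$. A few lines of Python make the bookkeeping explicit (probing, of course, only finitely much of the directed set):

```python
# Illustrative sketch: the directed set of finite subsets of N (ordered by
# inclusion) and the net F |-> sum_{n in F} 2^-n, which converges to 2.
from itertools import combinations

def phi(F):
    return sum(2.0 ** -n for n in F)

def eventually_within(eps, N, probe=range(40), extras=3):
    """Probe the definition of 'eventually': every finite F containing the
    witness {0,...,N} should have phi(F) within eps of the limit 2."""
    witness = frozenset(range(N + 1))
    rest = [n for n in probe if n > N]
    for k in range(extras + 1):
        for extra in combinations(rest, k):
            if abs(phi(witness | set(extra)) - 2.0) >= eps:
                return False
    return True

# 2 - phi({0,...,N}) = 2**-N, and adding further elements only shrinks the gap,
# so the witness {0,...,25} works for eps = 1e-6:
print(eventually_within(1e-6, N=25))   # True
```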
Now, a net may not have a limit. A weaker condition is to say that $x\in X$ is an “accumulation point” of the net $\Phi$ if for every neighborhood $U\in\mathcal{N}(x)$ the net is frequently in $U$. For instance, a sequence that jumps back and forth between two points — $\Phi(n)=x$ for even $n$ and $\Phi(n)=y$ for odd $n$ — has both $x$ and $y$ as accumulation points. We see in this example that if we just picked out the even elements of $\mathbb{N}$ we’d have a convergent sequence, so let’s formalize this concept of picking out just some elements of ${D}$.
For sequences you might be familiar with finding subsequences by just throwing out some of the indices. However for a general directed set we might not be left with a directed set after we throw away some of its points. Instead, we define a final function $f:D'\rightarrow D$ between directed sets to be one so that for all $d\in D$ there is some $d'\in D'$ so that $a'\geq d'$ implies that $f(a')\geq d$. That is, no matter “how far up” we want to get in ${D}$, we can find some point in $D'$ so that the image of everything above it lands above where we’re looking in ${D}$. For sequences this just says that no matter how far out the natural numbers we march there’s still a point ahead of us that we’re going to keep. Then, given a net $\Phi:D\rightarrow X$ and a final function $f:D'\rightarrow D$ we define the subnet $\Phi\circ f:D'\rightarrow X$.
Now the connection between accumulation points and limits is this: if $x$ is an accumulation point of $\Phi$ then there is some subnet of $\Phi$ which converges to $x$. To show this we need to come up with a directed set $D'$ and a final function $f:D'\rightarrow D$ so that $\Phi\circ f$ is eventually in any neighborhood of $x$. We’ll let the points of $D'$ be pairs $(a,U)$ where $a\in D$, $U\in\mathcal{N}(x)$, and $\Phi(a)\in U$. We order these by saying that $(a,U)\geq(b,V)$ if $a\geq b$ and $U\subseteq V$.
Given $(a,U)$ and $(b,V)$ in $D'$, then $U\cap V$ is again a neighborhood of $x$, and so $\Phi$ is frequently in $U\cap V$. Thus there is a $c$ with $c\geq a$, $c\geq b$, and $\Phi(c)\in U\cap V$. Thus $(c,U\cap V)$ is in $D'$, and is above both $(a,U)$ and $(b,V)$, which shows that $D'$ is directed. We can easily check that the function $f:D'\rightarrow D$ defined by $f(a,U)=a$ is final, and thus defines a subnet $\Phi\circ f$ of $\Phi$. Now if $N\in\mathcal{N}(x)$ is any neighborhood of $x$, then there is some $\Phi(b)\in N$. If $(a,U)\geq(b,N)$ then $\Phi(f(a,U))=\Phi(a)\in U\subseteq N$. Thus $\Phi\circ f$ is eventually in $N$.
Conversely, if $\Phi$ has a limit $x$ then $x$ is clearly an accumulation point of $\Phi$.
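To see the construction at work on the earlier example (this unpacking is added here, not part of the original post), take the sequence with $\Phi(n)=x$ for even $n$ and $\Phi(n)=y$ for odd $n$, and suppose some neighborhood of $x$ misses $y$. The directed set $D'$ consists of the pairs $(n,U)$ with $\Phi(n)\in U$, so any neighborhood of $x$ that misses $y$ can only be paired with even $n$. Given a neighborhood $N\in\mathcal{N}(x)$, pick any even $b$, so that $\Phi(b)=x\in N$; then every $(a,U)\geq(b,N)$ has $\Phi(a)\in U\subseteq N$, and the subnet is eventually in $N$. This is essentially the "pick out the even elements" idea from before, repackaged as a subnet converging to $x$.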
Posted by John Armstrong | Point-Set Topology, Topology
## 13 Comments »
3. We say that the net is “eventually” in A if there is an a\in D so that \Phi(b)\in A for all b\geq A.
It seems that it should be “b\geq a”.
Comment by passby | December 10, 2007 | Reply
4. Thanks for catching that.
Comment by | December 10, 2007 | Reply
5. [...] of Functions Okay, we know what it is for a net to have a limit, and then we used that to define continuity in terms of nets. Continuity just says [...]
Pingback by | December 19, 2007 | Reply
6. [...] can now see that the collection of all the tagged partitions of an interval form a directed set! We say that a tagged partition is a “refinement” of a tagged partition if every [...]
Pingback by | January 29, 2008 | Reply
7. [...] infinite sequence would have to hit one point infinitely often. Here instead, we’ll have an accumulation point in our compact metric space so that for any and point in our sequence there is some with . [...]
Pingback by | February 1, 2008 | Reply
8. [...] the sequence of partial sums of this series is a subsequence of the . [...]
Pingback by | May 6, 2008 | Reply
9. [...] joins, so that is directed (any two elements have an upper bound), just like the elements of a net (or more pedantically, the domain of a net). I call an ultrafilter a “fat net” because [...]
Pingback by | July 18, 2008 | Reply
10. [...] this topology in terms of open sets as we usually do, we define this topology in terms of which nets converge to which points. In fact, we’ll make do with sequences, since the extension to [...]
Pingback by | September 2, 2008 | Reply
11. [...] like in the one-dimensional case, the collection of all tagged partitions of an interval form a directed set, where we say that if is a refinement of . And again we define nets on this directed set; given a [...]
Pingback by | December 1, 2009 | Reply
12. [...] in both and , it’s above both of them in our partial order, which makes this poset a directed set, and the oscillation of is a [...]
Pingback by | December 7, 2009 | Reply
13. [...] of them. When we dealt with topology, we were able to recast the basic foundations in terms of nets. That is, a function is continuous if and only if it “preserves limits of convergent [...]
Pingback by | April 26, 2010 | Reply
http://physics.stackexchange.com/questions/44365/cauchy-problem-in-convex-neighborhood | # Cauchy Problem in Convex Neighborhood
While reading the reference
Eric Poisson, Adam Pound and Ian Vega, The Motion of Point Particles in Curved Spacetime, available here,
there is something that I don't quite understand. Eq. (16.6) is an evolution equation for the Green functional. Then in Eq. (16.7) Poisson et al. look for a specific solution, and they state that the separation of the Green functional is valid only in the convex neighborhood of a field point $x$. I assume that is because the Cauchy problem is valid only in that neighborhood... My question is why? Why is the Cauchy problem related to the requirement that the two points must be connected by a unique geodesic?
-
Thank you for the editing, although nobody seems to know the answer... – PML Nov 18 '12 at 15:29
http://crypto.stackexchange.com/questions/282/what-alphanumeric-string-length-can-be-used-to-guarantee-no-hash-collisions-from/286 | # What alphanumeric string length can be used to guarantee no hash collisions from CRC-64?
If I'm hashing alphanumeric strings (chars in the set `0`-`9`, `a`-`z`, and `A`-`Z`) with a CRC-64 hashing function, how long can my strings be while guaranteeing no hash collisions?
Stated differently: If I have a set of all CRC-64 checksums computed from alphanumeric strings $0$ to $N$ bytes, how large can $N$ be before I see a collision in the set?
-
## 4 Answers
With 62 possible characters ($10+26+26$), each letter can be encoded in roughly a 6-bit number. CRC is guaranteed to be a unique mapping (an injective function) as long as the input is shorter than the output – you can have at most 10 letters, but not 11, since $62^{10} < 2^{64} < 62^{11}$.
The same goes for most other hash functions. Let's say that you have a hash function that takes only 64 bits, produces a 64-bit output, and is bijective (like a symmetric cipher). That means you will not have a collision as long as your input is shorter than 64 bits.
If you have already mapped all 64-bit inputs, you are 100% sure that each new input will produce a collision (because there is no output value that is unused).
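A quick sanity check of the arithmetic (my own addition, not part of the original answer), in Python:

```python
# 10 alphanumeric symbols fit injectively into 64 bits; 11 do not
print(62**10 < 2**64 < 62**11)   # True
print(62**10, 2**64, 62**11)
```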
-
wow... you blew my mind, is it true that CRC is an injective function? – brandx Aug 3 '11 at 7:34
CRC is guaranteed to be injective there since it uses a primitive polynomial. Primitive polynomials are generators of a finite field, and that means the CRC can have any value between 1 and 2^N – ralu Aug 3 '11 at 7:38
Assuming a standard encoding of characters as 8 bit bytes you can guarantee uniqueness only up to 8 characters. – starblue Aug 3 '11 at 11:07
Attention: This 10-character limit is only valid in this form if you recode your 10 characters to 8 bytes (using something like base64?) before calculating the checksum. – Paŭlo Ebermann♦ Aug 3 '11 at 19:35
@ralu's answer is correct only if the input is encoded in a special format, so that the encoding of the 10 letters fits within 64 bits. That is not how inputs will normally be encoded, so ralu's answer is misleading and unlikely to apply to most practical situations. – D.W. Aug 11 '11 at 2:36
I have a different take on ralu's accepted answer and some of the comments thereafter.
Consider two $N$-bit data sequences which we think of as polynomials $$D^{(1)}(x) = \sum_{i=0}^{N-1} D_i^{(1)}x^i ~~\text{and}~~D^{(2)}(x) = \sum_{i=0}^{N-1} D_i^{(2)}x^i$$ where each $D_i^{(1)}$ and $D_i^{(2)}$ is $0$ or $1$. Let $M(x)$ of degree $64$ denote the CRC polynomial. Actual CRC implementations for data communications have many bells and whistles but let us assume that for hashing purposes, the simplest form of CRC is used so that the CRC check sums (or hashes) $R^{(1)}(x)$ and $R^{(2)}(x)$ of degree $63$ or less (and thus having $64$ bits) are the remainders obtained by dividing $x^{64}D^{(1)}(x)$ and $x^{64}D^{(2)}(x)$ by $M(x)$. Remember that this is polynomial division over the binary field $\{0,1\}$ where addition (and subtraction) is the Exclusive-OR operation $\oplus$. We thus have $$\begin{align*} x^{64}D^{(1)}(x) &= Q^{(1)}(x)M(x) \oplus R^{(1)}(x)\\ x^{64}D^{(2)}(x) &= Q^{(2)}(x)M(x) \oplus R^{(2)}(x) \end{align*}$$ where $Q^{(1)}(x)$ and $Q^{(2)}(x)$ are the quotients. Adding these two equations, we have that $$x^{64}\left[D^{(1)}(x)\oplus D^{(2)}(x)\right] = \left[Q^{(1)}(x) \oplus Q^{(2)}(x)\right]M(x) \oplus \left[R^{(1)}(x) \oplus R^{(2)}(x)\right]$$ It follows that if $R^{(1)}(x) = R^{(2)}(x)$, so that $R^{(1)}(x) \oplus R^{(2)}(x) = 0$, then it must be that $D^{(1)}(x)\oplus D^{(2)}(x)$ is a multiple of $M(x)$. Conversely, if $D^{(1)}(x)\oplus D^{(2)}(x)$ is a multiple of $M(x)$, then so is $$x^{64}\left[D^{(1)}(x)\oplus D^{(2)}(x)\right] \oplus \left[Q^{(1)}(x) \oplus Q^{(2)}(x)\right]M(x) = \left[R^{(1)}(x) \oplus R^{(2)}(x)\right]$$ a multiple of $M(x)$, and therefore $R^{(1)}(x) \oplus R^{(2)}(x)$ of degree $63$ or less is a multiple of $M(x)$ of degree $64$. Since this can happen only if $R^{(1)}(x) \oplus R^{(2)}(x) = 0$, that is, $R^{(1)}(x) = R^{(2)}(x)$, we have the following.
$D^{(1)}(x)$ and $D^{(2)}(x)$ hash to the same check sum, that is, $R^{(1)}(x) = R^{(2)}(x)$, if and only if $D^{(1)}(x)$ and $D^{(2)}(x)$ differ by a multiple of $M(x)$
This result holds even if $D^{(1)}(x)$ and $D^{(2)}(x)$ are of different degrees if we zero-pad the shorter sequence with zeroes at the high-order end to make the sequences of equal length. But if the afore-mentioned bells and whistles are included (e.g. complement the high-order two bytes before commencing CRC calculations), then the result still holds for equal length data sequences, but should not be applied blindly when $D^{(1)}(x)$ and $D^{(2)}(x)$ are of different degrees: some care is necessary.
For the simple case considered here, a collision requires the nonzero polynomial $D^{(1)}(x)\oplus D^{(2)}(x)$, whose degree is at most $N-1$, to be a multiple of $M(x)$ of degree $64$, which forces $$N-1 \geq \deg M(x) = 64,$$ and so if $N \leq 64$, we are guaranteed that no two sequences hash to the same checksum.
Turning to further specifics and ralu's answer, each alphanumeric symbol can have one of $62$ different values, and while it is possible to map $10$ such symbols to $62^{10}$ different bit sequences of length $64$ or less, it is much more convenient to implement a symbol-by-symbol mapping into $6$-bit bytes; for $11$ symbols this creates a degree-$65$ data sequence $D(x)$ of $66$ bits to be hashed. The downside is that four sequences $D(x), D(x)\oplus M(x), D(x) \oplus xM(x)$, and $D(x)\oplus (1\oplus x)M(x)$ will have the same hash, and this is the price paid for simplicity of the mapping algorithm: we have to restrict ourselves to $10$ alphanumeric symbols to avoid collisions. On the other hand, compressing $10$ alphanumeric symbols to $64$ bits or less is a messy task.
An even simpler method is to use the password as entered by the user (say as a sequence of ASCII-encoded $8$-bit bytes) to create the data sequence by concatenation. Now, $8$ symbols guarantees no collisions as per the simplified analysis above, but the actual picture is somewhat different. Although with $9$ bytes and $72$ bits, collisions can occur, it is not immediately obvious that collisions will occur. For example, $D(x)\oplus M(x)$ might well be a sequence of bytes that cannot be entered by the user as a password because some of the ASCII characters are control characters that cause the computer to take other actions than to simply pass the character on to the application to be processed.
I doubt there is a simple answer to the question of what is the maximum password length for which collisions are guaranteed not to occur. The answer depends on the choice of $M(x)$ also. For example, Wikipedia's page on CRCs says that CRC-64-ISO $x^{64}+x^4+x^3+x+1$ is weak for hashing purposes, the basis of which claim the diligent reader of the above will have no difficulty understanding.
-
I will assume that the question is "If we take the CRC-64 function, and consider inputs that consist only of the ASCII characters in the specified range, what's the longest inputs we can have without having a collision". The other answers assumed some mapping between the string and the CRC-64 function (and try to answer 'what sort of mapping would be best'); I'll assume that there is no such mapping.
Well, the full answer depends on the CRC polynomial, if we assume the CRC-64-ISO polynomial of $x^{64} + x^4 + x^3 + x + 1$, then I believe that the longest is 8.
We know that it is at least 8, because (as mentioned earlier) CRCs are guaranteed to have an output differential if the input differential is no longer than the CRC.
However, if we are allowed inputs of length 9, then we can find collisions.
If the CRC-64 processes MSBits first, then:
CRC64(BxxxxxxxP) == CRC64(CxxxxxxxK)
This is because $B \oplus C = 0x42 \oplus 0x43 = 0x01$, which corresponds to the polynomial $x^{64}$ and $P \oplus K = 0x50 \oplus 0x4B = 0x1B$ which corresponds to the polynomial $x^{4} + x^{3} + x + 1$, and hence the bit-wise difference between the two inputs is exactly a multiple of the CRC polynomial (in fact, is precisely the CRC polynomial).
If the CRC-64 processes LSBits first, then:
CRC64(AxxxxxxxP) == CRC64(QxxxxxxxK)
This is because $A \oplus Q = 0x41 \oplus 0x51 = 0x10$ which corresponds to the polynomial $x^{67}$ and $P \oplus K = 0x50 \oplus 0x4B = 0x1B$ which corresponds to the polynomial $x^7 + x^6 + x^4 + x^3$ (this differs from the first example because we're counting bits from the other side of the byte), and hence the bit-wise difference between the two inputs is again an exact multiple of the CRC polynomial (in this case, $x^3$ times the CRC polynomial).
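To illustrate the MSBit-first case concretely, here is a small demonstration I added (it is not part of the original answer). It uses a plain bitwise CRC-64 over the ISO polynomial with zero initial value and no final XOR; those parameters are chosen for simplicity, and since the two messages have equal length, adding a nonzero initial value or a final XOR would not break the collision.

```python
# Minimal MSBit-first CRC-64 over the ISO polynomial x^64 + x^4 + x^3 + x + 1
# (zero initial value, no final XOR) -- an illustrative toy, not a standard variant.
POLY = 0x1B  # the low 64 bits of the ISO polynomial

def crc64(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 56
        for _ in range(8):
            if crc & (1 << 63):
                crc = ((crc << 1) ^ POLY) & 0xFFFFFFFFFFFFFFFF
            else:
                crc = (crc << 1) & 0xFFFFFFFFFFFFFFFF
    return crc

m1, m2 = b"BxxxxxxxP", b"CxxxxxxxK"   # the 9-byte pair from the MSBit-first case
print(hex(crc64(m1)), hex(crc64(m2)), crc64(m1) == crc64(m2))  # equal checksums
```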
-
This is a classical birthday paradox problem, and the linked Wikipedia article has a nice set of approximations that help you evaluate the chance of collisions. Of course, this applies to a random choice of strings only. It's not possible to guarantee against collisions even when only two strings are involved - there will be some very low probability of a collision even for two strings.
-
I can estimate that a collision is probable when N is greater than log(2^32)/log(62), or 5.374 characters, however I was looking for a guarantee. For example, if I take all single characters in this set and a null string, I can verify that there are no collisions. I'm guessing the only way to answer this is a brute-force test, but I'm hoping there's some literature out there that talks about collisions with a reduced input domain. any ideas? – brandx Aug 3 '11 at 7:16
@brandx: You have two paths. Path 1 is to search for analysis of CRC64 specifically - maybe one can strictly prove something given how CRC64 works under the hood and how it treats data. Path 2 is to assume that CRC64 is like a crypto hash - hashes anything into random-looking stuff and then you can't have a guarantee unless you test an exact string set. – sharptooth Aug 3 '11 at 7:24
@sharptooth, the issue is that CRC does not actually behave like a crypto hash, so Path 2 is not valid in this context; it will give you inaccurate results. – D.W. Aug 11 '11 at 2:37
http://physics.stackexchange.com/questions/36430/reason-for-the-gaussian-wave-packet-spreading?answertab=active | # Reason for the Gaussian wave packet spreading
I have recently read how the Gaussian wave packet spreads while propagating. see: http://en.wikipedia.org/wiki/Wave_packet#Gaussian_wavepackets_in_quantum_mechanics
Though I understand the mathematics I don't understand the physical explanation behind it. Can you please explain?
-
## 4 Answers
(From my book http://physics-quest.org/Book_Chapter_Klein_Gordon.pdf)
Spreading of the free field wave packet
The speed of the wave packet is given by the derivative of the Hamiltonian against the momentum.
\begin{equation} v ~~=~~ \frac{\partial H}{\partial p} ~~=~~ \frac{\partial E}{\partial p} ~~=~~ \frac{p c^2}{\sqrt{(pc)^2+(mc^2)^2}} ~~=~~ \frac{p c^2}{E~} \end{equation}
The wave-packet would not spread if there were only a single $v$, as in the case of a massless particle, which is represented by a wave-function that moves unchanged at the speed of light.
However, a localized field with a Gaussian shape has (via the Fourier transform) a Gaussian distribution of $p$ in momentum space. The relation
\begin{equation}E = \sqrt{(pc)^2+(mc^2)^2}\end{equation}
means that there will be a range of speeds instead of a single one and so, in general, the wave-packet will spread.
figure 1.
The variation is approximately given by.
\begin{equation} \frac{\Delta v}{\Delta p}~\approx~\frac{\partial v}{\partial p} ~~\longrightarrow~\Delta v ~\approx~ \frac{\partial^2 E}{\partial\,p^2}~\Delta p \end{equation}
Given that Heisenberg's uncertainty relation $\Delta x \Delta p \geq \hbar/2$ can be derived by Fourier analysis, which in the case of a Gaussian shaped wave-function becomes $\Delta x \Delta p = \hbar/2$, the minimum value, we can write.
\begin{equation} \Delta v ~\approx~ \frac{\partial^2 E}{\partial\,p^2}~\frac{\hbar}{2\Delta x} \end{equation}
Here $\Delta x$ is the width. One can reason that the overall shape of a wave-function changes faster if $\Delta x$ is smaller for a given speed variation $\Delta v$. We can define a dimensionless quantity, the shape, whose time derivative gives us an approximation of the relative spreading of the wave-function in time.
\begin{equation} \frac{\partial}{\partial t}\Big\{ \mbox{shape} \Big\} ~~ \approx ~~ \frac{\Delta v}{\Delta x} ~~ \approx ~~ \frac{\partial^2 E}{\partial\, p^2}~\frac{\hbar}{2(\Delta x)^2} \end{equation}
Working out the second order derivative gives us.
\begin{equation} \frac{\partial^2 E}{\partial\, p^2} ~=~ \frac{(mc^2)^2~c^2}{~\big(~(pc)^2+(mc^2)^2~\big)^{3/2}~} ~=~ \frac{E_o^2\,c^2}{E^3} ~=~\frac{c^2}{E\gamma^2} \end{equation}
Which leads us to our final expression here.
\begin{equation} \frac{\partial}{\partial t}\Big\{ \mbox{shape} \Big\} ~~ \approx ~~ \frac{\hbar\,c^2}{2E(\gamma\Delta x)^2} \end{equation}
If we remove the gamma's then we get the expression for the rest frame.
\begin{equation} \frac{\partial}{\partial t}\Big\{ \mbox{shape} \Big\} ~~ \approx ~~ \frac{\hbar\,c^2}{2mc^2(\Delta x)^2} \end{equation}
We can summarize the results as:
• The spreading of the wave-function is inversely proportional to the frequency (the phase change rate in time) of the particle; higher-mass particles spread more slowly.
• The spreading of the wave-function is proportional to the square of the momentum spread. The smaller the initial volume in which the initial wave-function was contained the faster it spreads and keeps spreading.
figure 2
From figure 2 we can read off the mathematical mechanism which leads to spreading. The variation $\Delta p$ of the momentum stays the same over time. It is the dependence of the frequency on the momentum
$E=\sqrt{(pc)^2+(mc^2)^2}$
which leads to a phase change over $p$.
The phase change is opposite at the two sides of the center momentum. These phase changes lead to (opposite) translations of the wave-function in position space; this is the spreading. The value $\Delta x$ in our expression stays constant because $\Delta p$ stays constant; it is the initial $\Delta x$ corresponding to the pure Gaussian at $t=0$.
Some actual spreading rate numbers
We can work out a few numerical examples to get an idea of the spreading rates. From the wide range of wavelength sizes, we can classify the Compton radius as the small size limit, although there is in principle no real barrier to going to even smaller sized wave-packets.
If we replace $\Delta x$ with twice the Compton radius $r_c=\hbar/mc$, then, assuming that our approximation is still reasonably valid in this range, we get
\begin{equation} \frac{\partial}{\partial t}\Big\{ \mbox{shape} \Big\} ~~ \approx ~~ \frac{c}{8\,r_c} \end{equation}
If we recall the rest-frequency of the particle, $f_o=c/(2\pi\,r_c)$ (in the case of the electron $f_o=1.2355899729\times10^{20}$ Hz), then we see that the spreading speed approaches the speed of light in this range. The spread in momentum is so large that it includes velocities from close to $-c$ up to $+c$.
To confine an electron field to a Compton-radius-like volume one needs a positive charge of ${\sim}137e$. The inner-most electrons of heavy elements come close to being confined into such a small area. The Compton radius for electrons is $3.861592696\times10^{-13}$ meter.
More commonly, electrons freed from a bound state take off with a much larger radius, comparable to the Bohr radius ($5.291772131\times10^{-11}$ meter). This means that the spreading speed is much lower, $v < 0.01c$, but still quite high.
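The quoted numbers are easy to reproduce (a quick check I added, not part of the original answer, using standard SI values of the constants):

```python
import math

hbar = 1.054571817e-34    # J s
c    = 2.99792458e8       # m/s
m_e  = 9.1093837015e-31   # kg

r_c = hbar / (m_e * c)          # reduced Compton radius ~ 3.8616e-13 m
f_o = c / (2 * math.pi * r_c)   # rest frequency m_e c^2 / h ~ 1.2356e20 Hz
print(r_c, f_o)
```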
The size of the wave-packet will grow fast. For instance, the famous single-electron interference experiment of Akira Tonomura (see figure 3), which demonstrated the single-electron build-up of an interference pattern, shows that the electron fields in the experiment must be at least several micrometers wide. This is a factor of 100,000 wider than the confinement of the Bohr radius.
figure 3
Hans
-
Though I understand the mathematics I don't understand the physical explanation behind it.
I'll take a stab at it.
For a free particle, momentum eigenstates are also energy eigenstates and thus have a simple time dependence, a time dependent phase with a frequency proportional to the energy of the state.
A free particle with a gaussian wave function is then a continuous superposition of momentum, and thus energy, eigenstates.
Since the phase of the different momentum eigenstates evolve at a different rate, the way the various components constructively/destructively add evolves in time.
When all the phases "line up" just so, we get the minimum uncertainty wave packet. As time evolves, the wave packet spreads since the phases evolve at different rates.
-
Ron's answer is (as always :-) definitive, and if you're going to accept an answer you should accept his. However I thought it was worth attempting a more general explanation.
Remember that the gaussian packet describes the probability distribution of the particle. When the packet spreads it doesn't mean the particle is in some sense swelling up and spreading out, it means the probability of finding the particle is spreading out.
The reason for this is that the gaussian has a spread of momentum related to the uncertainty principle $\Delta p\Delta x \ge \hbar/2$. That means there is a spread of velocities and this means the gaussian has to spread out. I'm reluctant to say that different bits of the packet are moving at different velocities because this is attempting a classical analogy that is misleading, but it hopefully gives you some physical intuition as to what is going on.
-
The Gaussian wavepacket only spreads in the free Schrodinger equation. It doesn't spread in the case where you have a harmonic oscillator, and the Gaussian width is equal to that of the ground state wavefunction. For other Gaussians in an oscillator, the width oscillates, growing and shrinking.
Spreading in the free Schrodinger equation is easiest to understand from the analyticity properties. The Schrodinger equation is analytically linked to the diffusion equation
$${d\over dt} \rho = {1\over 2} \nabla^2 \rho$$
The fundamental solution of the diffusion equation for initial conditions a delta function at $t=0$ and $x=0$ is the spreading Gaussian:
$$\rho(x,t) = {1\over \sqrt{2\pi t}} e^{-{x^2\over 2t}}$$
You can see this works by substituting, but it is also obvious from a path integral. To get the spreading Schrodinger packet, substitute $it$ for t.
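As a concrete illustration of the rates involved (my own addition — the closed-form width of a free Gaussian packet is a standard result, not derived in the answers above): the position-space width grows as $\sigma(t)=\sigma_0\sqrt{1+\left(\hbar t/2m\sigma_0^2\right)^2}$, so narrow packets spread dramatically faster.

```python
import math

hbar, m_e = 1.054571817e-34, 9.1093837015e-31  # SI units

def width(t, sigma0, m=m_e):
    """Standard deviation of a free Gaussian packet at time t."""
    return sigma0 * math.sqrt(1.0 + (hbar * t / (2.0 * m * sigma0**2))**2)

for sigma0 in (5.29e-11, 1e-9):            # ~Bohr radius vs. 1 nm
    print(sigma0, width(1e-15, sigma0))     # width after one femtosecond
```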
-
http://mathhelpforum.com/calculus/143135-max-min-problem.html | # Thread:
1. ## Max/Min Problem
Hey, I really need help solving this problem. Help would be greatly appreciated.
1. A rectangle is to be inscribed in a semicircle with radius 4, with one side on the semicircle's diameter. What is the largest area this rectangle can have?
This problem was under the "Maxima and Minima" section of a math book, but I can't seem to figure out how to solve it.
2. I always find drawing it first to get better idea of the problem is a good start.
Make the length of the rectangle 2x. Look at the attached picture and you can see that the length from the centre of the diameter to a top corner of the rectangle is the radius, 4. The length from the centre to a bottom corner is x. So the height of the rectangle will be $\sqrt{16-x^2}$ (using Pythagoras).
Area of rectangle is $A = 2x (\sqrt{16-x^2})$
differentiate that and solve for x to get the answer. You should get two answers, one of which will be zero and so obviously not the max answer!
hint: To make it easier I'd square both sides: $A^2 = 4x^2 (16-x^2) = 64x^2 - 4x^4$
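If you want to check the algebra (this snippet is my addition, not part of the original reply), SymPy confirms the critical point and a maximum area of 16:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
A = 2*x*sp.sqrt(16 - x**2)             # area of the inscribed rectangle
xc = sp.solve(sp.diff(A, x), x)[0]     # critical point: 2*sqrt(2)
print(xc, sp.simplify(A.subs(x, xc)))  # maximum area: 16
```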
Attached Thumbnails
http://nrich.maths.org/6675/note | ### GOT IT Now
For this challenge, you'll need to play Got It! Can you explain the strategy for winning this game with any target?
### Is There a Theorem?
Draw a square. A second square of the same size slides around the first always maintaining contact and keeping the same orientation. How far does the dot travel?
### Reverse to Order
Take any two digit number, for example 58. What do you have to do to reverse the order of the digits? Can you find a rule for reversing the order of digits for any two digit number?
# Christmas Chocolates
### Why do this problem?
Many students are accustomed to using number patterns in order to generalise. This problem offers an alternative approach, challenging students to consider multiple ways of looking at the structure of the problem. The powerful insights from these multiple approaches can help us to derive general formulae, and can lead to students' appreciation of the equivalence of different algebraic expressions.
### Possible approach
This problem might follow on nicely from Picturing Triangle Numbers, Mystic Rose, and Handshakes.
Display this image of a full size $5$ box of chocolates, and ask students to work out how many chocolates there are, without speaking or writing anything down. Compare solutions and share approaches.
Mention that mathematicians like to find efficient methods which can be used not only for simple cases but also when the numbers involved are very large. Explain that the pictures of Penny's, Tom's and Matthew's partially-eaten chocolates could be used by a mathematician as a starting point for finding an efficient method for counting the total number of chocolates.
Hand out these chocolate box templates and ask students to show how the images of the partially-eaten chocolates can be used to calculate the total.
Ask students to report back, explaining the methods which have emerged.
Then ask students to use all three methods, along with any methods they devised for themselves, to work out the number of chocolates in a size $10$ box, and verify that all methods agree.
Challenge students to express each method for finding the number of chocolates in any size of box, perhaps introducing some algebra and the idea of a size $n$ box if appropriate.
Bring the class together to share findings. Compare the different "formulae" which have emerged, and ask students to explain why they are equivalent.
### Key questions
How does each image help you to count the total number of chocolates quickly?
Can you demonstrate the equivalence of different algebraic expressions?
### Possible extension
The problems Summing Squares and Picture Story lead to formulae for some intriguing sequences through analysis of the structure of the contexts.
### Possible support
The problem Seven Squares gives lots of simple contexts where formulae emerge by looking at structure rather than number sequences.
http://mathoverflow.net/questions/112335/representation-ring-of-sun | ## Representation ring of SU(n)?
What is the structure of the representation ring of SU(n)?
Which representations are its generators?
-
If $V$ is the standard representation, it's a free ring generated by $V$, $\wedge^2 V$, ..., $\wedge^{n-1}V$. This is very standard. – Will Sawin Nov 14 at 1:59
I think the underlying question is fine (speaking as a non-expert) but that it would be better for you to provide some more motivation, some explanation of what you already know, etc. – Yemon Choi Nov 14 at 1:59
@Will: to be fair it may be standard to many people but it might not be standard to everyone. My Mileage Does Vary. – Yemon Choi Nov 14 at 2:01
## 2 Answers
For a connected compact group, the representation ring is isomorphic to the subring of the representation ring of the maximal torus that is invariant under the action of the Weyl group.
The maximal torus of $SU(n)$ consists of the diagonal matrices in $SU(n)$. We can identify its representation ring with $Z[x_1,...,x_n]/(x_1...x_n-1)$, where $x_i$ is the representation that sends a diagonal matrix to the $i$th diagonal entry.
The Weyl group action of $S_n$ just permutes $(x_1,...,x_n)$. Thus, the representation ring is the ring of symmetric polynomials, generated by the elementary symmetric polynomials $s_1,...,s_n$, under the relation $s_n=1$.
It remains to show that the elementary symmetric polynomial $s_k$ corresponds to $\wedge^k$ of the standard representation. This is clear if we write the symmetric polynomial as a sum of monomials, and $\wedge^k$ of the standard representation of the maximal torus as a direct sum of $1$-dimensional representations. The decompositions are identical.
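To make this concrete in the smallest case (an illustration I am adding; it is not in the original answer, but it is standard): for $n=2$ the maximal torus has representation ring
$$\mathbb{Z}[x_1,x_2]/(x_1x_2-1)\;\cong\;\mathbb{Z}[x,x^{-1}],$$
the Weyl group $S_2$ acts by $x\mapsto x^{-1}$, and the invariant subring is generated by $s_1=x+x^{-1}$, the character of the standard $2$-dimensional representation $V$. Hence $R(SU(2))\cong\mathbb{Z}[V]$, the polynomial ring on the class of $V$.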
-
Will has summarized concisely the classical structure theory for the representation ring of `$SU(n)$`, but it's worth emphasizing that all of this is found in textbooks on compact Lie groups. Doing any kind of research involving such older parts of representation theory requires some acquaintance with this kind of literature, to avoid re-inventing the wheel. It's also a good idea to place the special example in the context of simply connected semisimple compact Lie groups.
A typical source is the Springer GTM 98 Representations of Compact Lie Groups (1985) by Brocker and tom Dieck. They have a clear discussion of representation rings in section II.7, along with the general version of Will's answer in the setting of highest weights and fundamental representations in section VII.2. They also provide a lot of concrete details about classical groups, etc. (All of this material goes back to much older work of Weyl and others, and is treated in multiple sources.)
-
This commentary on the literature is very welcome for interlopers like me who never learned any representations of Lie groups at "grad school level", beyond a sketchy look at SU(2), and consequently have to waste a lot of time poring through books in search of two or three pages that give what we need – Yemon Choi Nov 14 at 17:38
http://mathhelpforum.com/calculus/164177-prove-following-function-differentiable.html | # Thread:
1. ## Prove the following function is differentiable
Let $f(t)$ be a continuous function of one variable, and define
$g(u,v)=\int_{v^2-u^2}^{v^2+u^2} f(t)\,dt.$
prove that g(u,v) is differentiable in (u,v).
How do I approach? Should I use the Leibniz rule for integrals?
Thanks.
$\displaystyle g(u,v)=\int_{v^2-u^2}^{v^2+u^2} f(t)\text{ d}t$
Did you try using the Fundamental Theorem to compute $g_u$ and $g_v$?
Fundamental theorem of calculus - Wikipedia, the free encyclopedia
3. Which is, essentially, the Leibniz formula GIPC refers to.
$\frac{\partial g}{\partial u}= f(v^2+ u^2)(2u)- f(v^2- u^2)(-2u)$
$\frac{\partial g}{\partial v}= f(v^2+ u^2)(2v)- f(v^2- u^2)(2v)$
so not only do the partial derivatives exist but, since we are given that f is continuous, they are continuous functions and so g is differentiable.
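A quick numerical sanity check of the Leibniz formula (added by me, not part of the thread), taking an arbitrary continuous $f$ — here $\cos$ — and comparing a finite-difference derivative of $g$ against the formula above:

```python
import math

def f(t):                 # any continuous function will do
    return math.cos(t)

def g(u, v, n=20000):     # midpoint-rule approximation of the integral
    a, b = v*v - u*u, v*v + u*u
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

u, v, eps = 0.7, 1.3, 1e-5
numeric = (g(u + eps, v) - g(u - eps, v)) / (2 * eps)
leibniz = 2*u*f(v*v + u*u) + 2*u*f(v*v - u*u)
print(numeric, leibniz)   # the two values agree closely
```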
http://sbseminar.wordpress.com/2007/08/09/questions-about-the-moy-relations/ | ## Questions about the MOY relations August 9, 2007
Posted by Joel Kamnitzer in link homology, Pictorial Algebra, quantum groups.
I have some rather specific questions about the MOY knot invariant which have come up as Sabin Cautis and I have been thinking about Khovanov-Rozansky homology. I’ll start by explaining the following “theorem” and then I’ll ask some questions about it. Hopefully someone (for eg. Scott) will be able to answer them.
Consider closed crossingless Reshetikhin-Turaev diagrams for sl(n) where all strands are labelled by 1 or 2. This means that we look at oriented planar graphs with all edges labelled either 1 or 2, and each vertex trivalent, with two single edges and one double edge, and matching orientations.
We can think of such a graph as being related to the representation theory of (quantum) sl(n). The single strands correspond to the standard representation $\mathbb{C}^n$ and the double strands to $\Lambda^2 \mathbb{C}^n$. The vertices correspond to morphisms of representations (for example $\mathbb{C}^n \otimes \mathbb{C}^n \rightarrow \Lambda^2 \mathbb{C}^n$). Within this representation theory context, each graph G can be evaluated to a Laurent polynomial c(G) (an element of the trivial representation of quantum sl(n)).
Now, ignore the representation theory interpretation for a moment and just consider the set of all possible graphs. Let us impose the "MOY relations" on such graphs, by which I mean the relations found on page 4 of Khovanov-Rozansky's link homology paper.
Theorem.
Modulo these MOY relations, any graph G is equivalent to c(G). Namely, c(G) is the only Laurent polynomial one can assign to each graph which is consistent with the MOY relations.
My first question is whether this theorem is correct as stated and if so where can I find a proof. It does not seem to be stated (or proven) in the MOY paper. It is stated on page 3 of the above Khovanov-Rozansky paper.
Now, come some more substantial questions:
1. Is there any theory to explain how such a graph is equivalent to c(G)? Is there a sequence of moves that one can do in order to reduce any graph G to c(G) and to what extent is that sequence of moves unique? The last move involving sums on both sides is the most confusing in this way of thinking. I guess I am asking for a theory of “invertible foams”, which should be the “simple” part of the theory being developed by Mackaay, et al. (and Scott).
2. Is there an extension of the above theorem to “open graphs” (ie graphs with open edges which represent invariants in a tensor product)? I am mostly interested in the case where there are just 4 open edges, all of them single with opposite orientations, so that in this case I know a basis for the corresponding invariant space $Inv(V \otimes V \otimes V^\star \otimes V^\star)$ (it is just two dimensional).
## Comments»
1. Ben Webster - August 9, 2007
The theorem is true. You can find most of the proof in Jake Rasmussen’s paper “Some Differentials…” That only deals with braid-like link diagrams, but applying Vogel’s algorithm to an arbitrary diagram shows that it is congruent to a sum of braid-like ones (in MOY language, this is using the 3rd line in Khovanov and Rozansky’s list to remove badly oriented Seifert circles). Once in braid-land, we can induct on the braid index and the number of intersection points with the inner-most circle, using the 4th line to move intersection points outward, and the relations of the second line to reduce the number of intersections on the inner circle to one, and then make it disappear.
I can explain this in person if you're coming to Berkeley some time in the next week or so.
2. Ben Webster - August 9, 2007
Oh, and for your second question, the braids modulo these moves are just the Hecke algebra. so if your question was whether it surjects onto the invariant space, the answer is yes.
http://math.stackexchange.com/questions/241692/how-to-solve-this-recurrence-relation-tn-4-cdot-t-sqrtn-n/242639 | # How to solve this recurrence relation: $T(n) = 4\cdot T(\sqrt{n}) + n$
I was trying to solve this recurrence $T(n) = 4T(\sqrt{n}) + n$. Here $n$ is a power of $2$.
I tried to solve it like this:
So the question now is how deep the recursion tree is. Well, that is the number of times that you can take the square root of n before n gets sufficiently small (say, less than $2$). If we write: $$n = 2^{\lg(n)}$$ then on each recursive call $n$ will have its square root taken. This is equivalent to halving the above exponent, so after $k$ iterations we have: $$n^{1/2^{k}} = 2^{\lg(n)/2^{k}}$$ We want to stop when this is less than $2$, giving:
\begin{align} 2^{\lg(n)/2^{k}} & = 2 \\ \frac{\lg(n)}{2^{k}} & = 1 \\ \lg(n) & = 2^{k} \\ \lg\lg(n) & = k \end{align} So after $\lg\lg(n)$ iterations of square rooting the recursion stops. For each recursion we will have $4$ new branches, the total of branches is $4^\text{(depth of the tree)}$ therefore $4^{\lg\lg(n)}$. And, since at each level the recursion does $O(n)$ work, the total runtime is: \begin{equation} T(n) = 4^{\lg\lg(n)}\cdot n\cdot\lg\lg(n) \end{equation}
But it appears that this is not the correct answer...
Edit:
$$T(n) = \sum\limits_{i=0}^{\lg\lg(n) - 1} 4^{i} n^{1/2^{i}}$$
I don't know how to get further than the expression above.
-
Does it help to simplify $4^{\lg\lg n}$ as $(\lg n)^2$? – alex.jordan Nov 21 '12 at 2:16
Can you explain "For each recursion we will have 4 new branches" to me? I don't understand this, but I have little to no experience with computational complexity. – alex.jordan Nov 21 '12 at 2:18
The first call of T(n) will generate 4 branches, each one of this branches will call 4 new branches and so on – dreamcrash Nov 21 '12 at 2:26
That's what I don't understand. If I call $T(16)$, then what I see in your recursion makes me call $T(4)$, and that's it. I'm only seeing "one branch" as opposed to a recursion like $T(n)=T(\sqrt{n})+T(n/4)$, where I would see one call as branching into two. – alex.jordan Nov 21 '12 at 3:07
You are right. I just edit. – dreamcrash Nov 21 '12 at 14:55
## 2 Answers
For every $k\geqslant0$, let $U(k)=T(2^k)$, then $U(k)=4U(k/2)+2^k$ hence $U(k)\geqslant2^k$ for every $k$.
Choose $C\geqslant2$ so large that $U(k)\leqslant C 2^k$ for every $k\leqslant5$. Let $k\geqslant6$. If $U(j)\leqslant C2^{j}$ for every $j<k$, then in particular $U(k/2)\leqslant C2^{k/2}$, hence $U(k)\leqslant 4C2^{k/2}+2^k$. Since $k\geqslant6$, $2^{k/2}\leqslant 2^k/8$, hence $U(k)\leqslant (C/2+1)2^k$. Since $C/2+1\leqslant C$, the induction is complete.
Finally, $U(k)=\Theta(2^k)$.
-
We want to find a simple upper bound for $$T(n) = \sum_{i=0}^{\lg\lg{n}-1} 4^i n^{1/2^i}.$$
Note that the first summand is $n$ and the last summand is less than $$4^{\lg\lg{n}} n^{1/2^{\lg\lg{n}}} = \log^2{n} \cdot n^{1/\lg{n}}.$$
The first term is significantly larger and probably doing most of the work, so we will aim to show that $T(n) = O(n)$. Indeed, \begin{align*} T(n) &= \sum_{i=0}^{\lg\lg{n}-1} 4^i n^{1/2^i}\\ &\le n + \sum_{i=1}^{\lg\lg{n}} 4^i n^{1/2^i}\\ &\le n + \sum_{i=1}^{\lg\lg{n}} 4^i \sqrt{n}\\ &= n + \sqrt{n}\sum_{i=1}^{\lg\lg{n}} 4^i\\ &\le n + \sqrt{n}\cdot\frac{4\lg^2{n} - 4}{4 - 1}\\ &= n + O\!\left(\sqrt{n}\lg^2{n}\right)\\ &= O(n). \end{align*}
Also, clearly $T(n) \ge n$, so we have $T(n) = \Theta(n)$.
-
http://mathoverflow.net/questions/21382?sort=oldest | ## If “tensor” has an adjoint, is it automatically an “internal Hom”?
Let `$\mathcal C,\otimes$` be a monoidal category, i.e. `$\otimes : \mathcal C \times \mathcal C \to \mathcal C$` is a functor, and there's a bit more structure and properties. Suppose that for each `$X \in \mathcal C$`, the functor `$X \otimes - : \mathcal C \to \mathcal C$` has a right adjoint. I will call this adjoint (unique up to canonical isomorphism of functors) `$\underline{\rm Hom}(X,-) : \mathcal C \to \mathcal C$`. By general abstract nonsense, `$\underline{\rm Hom}(X,-)$` is contravariant in `$X$`, and so defines a functor `$\underline{\rm Hom}: \mathcal C^{\rm op} \times \mathcal C \to \mathcal C$`. If `$1 \in \mathcal C$` is the monoidal unit, then `$\underline{\rm Hom}(1,-)$` is (naturally isomorphic to) the identity functor.
Then there are canonically defined "evaluation" and "internal composition" maps, both of which I will denote by `$\bullet$`. Indeed, we define "evaluation" `$\bullet_{X,Y}: X\otimes \underline{\rm Hom}(X,Y) \to Y$` to be the map that corresponds to `${\rm id}: \underline{\rm Hom}(X,Y) \to \underline{\rm Hom}(X,Y)$` under the adjuntion. Then we define "composition" `$\bullet_{X,Y,Z}: \underline{\rm Hom}(X,Y) \otimes \underline{\rm Hom}(Y,Z) \to \underline{\rm Hom}(X,Z)$` to be the map that corresponds under the adjunction to `$\bullet_{Y,Z} \circ (\bullet_{X,Y} \otimes {\rm id}) : X \otimes \underline{\rm Hom}(X,Y) \otimes \underline{\rm Hom}(Y,Z) \to Z$`. (I have supressed all associators.)
Question: Is `$\bullet$` an associative multiplication? I.e. do we have necessarily equality of morphisms `$\bullet_{W,Y,Z} \circ (\bullet_{W,X,Y} \otimes {\rm id}) \overset ? = \bullet_{W,X,Z} \circ ({\rm id}\otimes \bullet_{X,Y,Z})$` of maps `$\underline{\rm Hom}(W,X) \otimes \underline{\rm Hom}(X,Y) \otimes \underline{\rm Hom}(Y,Z) \to \underline{\rm Hom}(X,Z)$`? If not, what extra conditions on `$\otimes$` are necessary/sufficient?
-
## 2 Answers
It is associative. Consider the evaluation cube drawn here. Four of the faces commute by definition of the composition map, and one by functoriality of the tensor product. The commutativity of these five faces implies that any of the maps $W \otimes \operatorname{Hom}(W, X) \otimes \operatorname{Hom}(X, Y) \otimes \operatorname{Hom}(Y, Z) \to Z$ are equal, so by adjunction, the two composites of compositions are equal.
-
In S. Eilenberg and G. M. Kelly, "Closed categories", in Proc. Conf. Categorical Algebra (La Jolla, 1965),
there is a comprehensive study of monoidal and closed structures on a category, and of the relations and equivalences between them.
-
http://physics.stackexchange.com/questions/40804/barrier-in-an-infinite-double-well?answertab=active | Barrier in an infinite double well
I am stuck on a QM homework problem. The setup is this:
(To be clear, the potential in the left and rightmost regions is $0$ while the potential in the center region is $V_0$, and the wavefunction vanishes when $|x|>b+a/2$.) I'm asked to write the Schrödinger equation for each region, find its solution, set up the BCs, and obtain the transcendental equations for the eigenvalues.
Where I'm at: I understand the infinite potential well easily and I have done a free particle going over a finite barrier before (which I understood less well, but I can deal with it it).
• The problem asks me to make use of "a symmetry" in the problem, which is a vague hint. Are they trying to get me to make $\psi$ an even function?
• I am supposed to find the condition for there to be one and only one bound state with $E<V_0$. How do I go about that?
-
2 Answers
You seem to have trouble understanding the basic approach. Actually there is a systematic way to solve the Schrödinger equation for piecewise constant potentials. Maybe this will give you some basic idea of how to solve your problem:
Let the potential be given by $$V(z) = \begin{cases} \infty & z < z_1 \\ V_1 & z_1 \leq z < z_2 \\ V_2 & z_2 \leq z < z_3 \\ ... \end{cases}$$
• For the above potential the wavefunction for energy eigenvalue $E_n$ is given by $$\Psi_n(z) = \begin{cases} 0 & z < z_1 \\ A_1\exp(-i k_1 z) + B_1\exp(+i k_1 z) & z_1 \leq z < z_2 \\ A_2\exp(-i k_2 z) + B_2\exp(+i k_2 z) & z_2 \leq z < z_3 \\ ... \end{cases}$$ with $k_i = 2\pi/h \sqrt{2 m e (E_n-V_i)}$ and some (yet to be determined) constants $A_i$ and $B_i$. This is easily verified by plugging in. (In fact each "segment" is the solution to the Schrödinger equation with constant potential). Note that the $k_i$ can be real or imaginary, in which case the wavefunction in the respective segment is either sinusoidal or exponential.
• As required by physics the wavefunction must be continuous and continuously differentiable everywhere. Hence the constants $A_i$ and $B_i$ must be chosen so that this is fulfilled at each point where this possibly is violated (i.e. the points $z_i$).
• The above results in a linear equation system for the $A_i$ and $B_i$. This equation system now only contains the energy $E_n$ as remaining unknown. If you do it correctly the equation system contains as many unknowns as equations.
• Now you compute the determinant of the equation system and set it to zero to find the $E_n$ values for which it is solvable. This is the transcendental equation for the eigenvalues. This equation has in your case infinitely many discrete solutions $E_n$ (each solution denoted by the running index $n$). For each $E_n$ there are sets of $A_i$ and $B_i$ (which solve the equation system) which give you the wavefunction. In case there is more than one set of linearly independent $A_i$ and $B_i$, you have more than one wavefunction for the same eigenvalue $E_n$. In that case the state is degenerate. (You have degenerate states in your problem!)
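As a cross-check of the analytic procedure (this sketch is mine, not part of the answer, and it assumes a concrete geometry: a barrier of height $V_0$ on $|x|<a/2$, wells on $a/2<|x|<b+a/2$, hard walls at $|x|=b+a/2$, and units $\hbar=m=1$), one can simply diagonalize the finite-difference Hamiltonian and see the nearly degenerate pairs of levels below $V_0$:

```python
import numpy as np

a, b, V0, N = 1.0, 1.0, 40.0, 2000
L = b + a/2                                   # hard walls at x = +/- L
x  = np.linspace(-L, L, N + 2)[1:-1]          # interior grid points, psi = 0 at the walls
dx = x[1] - x[0]
V  = np.where(np.abs(x) < a/2, V0, 0.0)

# H = -(1/2) d^2/dx^2 + V  (finite differences)
H = (np.diag(1.0/dx**2 + V)
     + np.diag(-0.5/dx**2 * np.ones(N - 1), 1)
     + np.diag(-0.5/dx**2 * np.ones(N - 1), -1))
E = np.linalg.eigvalsh(H)
print(E[:4])   # levels below V0 come in nearly degenerate even/odd pairs
```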
Regarding symmetry: The wavefunctions do not need to have the same symmetry as the potential. Of course if you have a solution wavefunction, then the mirrored wavefunction must be a solution as well (if the potential is symmetric as in your case). It needs to belong to the same energy eigenvalue.
Regarding the single bound state: Once you have calculated the $E_n$ you will see that there are conditions where $E_1 < V_0$ and $E_2 > V_0$ ($E_2$ being the second-lowest eigenvalue). This depends on the geometry, i.e. the widths of your barrier and wells. Generally speaking, the energy states have larger spacing if the well is smaller. So probably the single-bound-state condition will display itself as a range specification for $a$ and $b$.
-
Very good, thanks. Quite helpful. – Alexander Nikolas Gruber Oct 15 '12 at 23:37
The parity operator commutes with the Hamiltonian because of the symmetry in your potential. This says that all eigenstates of the Hamiltonian are eigenstates of the parity operator. Therefore, the only possible eigenstate solutions to the system are ones with even or odd parity. This fact will allow you to simplify the process of applying the boundary conditions mentioned by Andreas, as you can immediately conclude several things regarding the unknown coefficients.
-
http://math.stackexchange.com/questions/149656/prove-that-f-n-converges-to-f-in-l-1-norm-given-int-f-n-to-int-f/149675 | # Prove that $f_n$ converges to $f$ in $L_1$ norm given $\int f_n \to \int f$
A homework question from a measure and integration theory course.
Suppose $f_n \in L_1(\mathbb R^d)$ for each $n\in\mathbb N$, $f_n\geq 0$, $f_n\to f$ a.e. and $\int f_n\to\int f<\infty$.
Prove that $\int|f_n-f| \to 0$
(Hint: $(f_n - f)_-\leq f$. Use the dominated convergence theorem.)
I am thinking that since $|f_n -f | \to 0$ a.e. and $|f_n-f|=(f_n - f)_+ + (f_n-f)_-$, if I can show $(f_n-f)_+ \leq f$ and $(f_n-f)_-\leq f$, then since $f_n$ and $f$ are integrable, by DCT, $\int|f_n-f| \to \int 0 =0$. I think my approach might be wrong...
-
This looks like homework. Please read the homework FAQ at meta.math.stackexchange.com/questions/1803/… and update your question accordingly, using the homework tag and explaining your work so far. – Nate Eldredge May 25 '12 at 13:53
@AsafKaragila Thumbs up – AD. May 25 '12 at 13:55
I tried to make it more readable by using $\LaTeX$. If there is something wrong in how I edited the post, please correct it. – Asaf Karagila May 25 '12 at 13:55
Removed my hint since the question keeps changing... – AD. May 25 '12 at 14:34
## 2 Answers
Thomas E.'s approach is fine, but you can get this somewhat more easily with a different use of the triangle inequality, namely the inequality $0 \leq |a - b| + |a| - |b| \leq 2|a|$. The proof of this is straightforward: we have $|a-b|+|a|-|b|\leq2|a|$ since $|a-b|\leq|a|+|b|$. Further, $||a|-|b||\leq|a-b|$, so that if $|a|\leq|b|$ then $0\leq|a-b|+|a|-|b|$. Similarly, if $|b|\leq|a|$ then we have $0 \leq |a-b| + |a|-|b|$.
Apply the dominated convergence theorem to $0 \leq |f_n - f| + f - f_n \leq 2f$ (since in your case everything is positive) and the result follows.
-
Hint:
Going to positive and negative parts is not really necessary. Here's another type of approach that is also quite straight-forward.
Denote $g_{n}=|f|+|f_{n}|-|f-f_{n}|$ for all $n$, which are non-negative (by triangle-inequality) and measurable. Apply Fatou's Lemma and use the fact that $\liminf(-a_{n})=-\limsup(a_{n})$.
If I calculated them correctly, then what you should get is $$\liminf_{n\to\infty}\int g_{n}=2\|f\|-\limsup_{n\to\infty} \|f-f_{n}\|$$ and $$\int \liminf_{n\to\infty} g_{n} =2\|f\|,$$
where $\|\cdot\|$ is the $L^{1}$-norm. Do you see how this implies your result?
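To spell out the last step (under the reading above, using that $f_n\ge 0$ and $\int f_n\to\int f$, so $\|f_n\|\to\|f\|$ and $g_n\to 2|f|$ a.e.): Fatou's lemma gives
$$2\|f\| = \int \liminf_{n\to\infty} g_n \le \liminf_{n\to\infty}\int g_n = 2\|f\| - \limsup_{n\to\infty}\|f-f_n\|,$$
hence $\limsup_{n\to\infty}\|f-f_n\|\le 0$, i.e. $\int|f_n-f|\to 0$.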
-
You can use `\|` for norm, as it has a correct spacing. In large quantities it even improves readability by quite a lot. – Asaf Karagila May 25 '12 at 15:23
Sure, didn't know that. Thanks. – Thomas E. May 25 '12 at 15:24
http://mathoverflow.net/questions/72931?sort=votes | ## What reasonable choices of morphisms are there for the category of Poisson algebras?
The first definition of the category of Poisson algebras that comes to mind is that a morphism between Poisson algebras is an algebra homomorphism that is also a Lie algebra homomorphism with respect to the Poisson bracket. This definition does not seem to be easily compatible with how people actually use Poisson algebras (in particular rings of functions on Poisson manifolds):
• A Poisson-Lie group is not a group object in the opposite of the category of Poisson algebras because inversion negates the Poisson bracket.
• The standard choice of bracket on the tensor product of two Poisson algebras is not a categorical coproduct (if I have the correct general definition: it's defined by the requirements that it restricts to the given brackets on two Poisson algebras $A, B$ and that every element of $A$ Poisson-commutes with every element of $B$).
This suggests to me that if we used a different choice of morphisms, we might get actual group objects and an actual categorical coproduct. So are there any nice choices that do this?
I read somewhere on MO that the correct definition of a morphism between Poisson manifolds is a Lagrangian submanifold of their product. How does this generalize to Poisson algebras? Does it fix the two issues above? (I'm a little more pessimistic about the second issue, so if there's a different general principle that leads to the standard choice, I would be interested in hearing about that as well.)
Edit: The discussion in my previous question about Poisson-Lie groups seems relevant, and perhaps it shows that the above point of view is misguided. Any Poisson algebra $A$ admits an "opposite" $A^{op}$ given by negating the Poisson bracket, and then inversion in a Poisson-Lie group is a "contravariant morphism" rather than a morphism. This suggests to me that it might make more sense to look for a bicategory of Poisson algebras similar to the bimodule bicategory.
-
"I read somewhere on MO that the correct definition of a morphism between Poisson manifolds is a Lagrangian submanifold of their product." Does this have something to do with coisotropic calculus? – Todd Trimble Aug 15 2011 at 15:43
As for the reference to bicategories, some was done, in the geometric case, by Landsmann, some years ago: arxiv.org/pdf/math-ph/0008003v2 – Nicola Ciccoli Aug 16 2011 at 6:50
Just to add a small comment on the categorical side: what happens here is quite common. A group object in the category of groups, for example, is an abelian group. This is exactly due to the fact that inversion is an antihomomorphism. I wonder whether there is a notion of category+involutive functor in which a "group-like" object can be defined clarifying the situation. – Nicola Ciccoli Aug 16 2011 at 8:51
@Nicola: There has been some discussion of just that here on MO. Anyone remember enough of the titles to find the posts? – Theo Johnson-Freyd Aug 16 2011 at 12:24
@Nicola: yes, see the link in the part after "Edit:" and also the follow-up question at mathoverflow.net/questions/66675/… . – Qiaochu Yuan Aug 16 2011 at 15:41
## 2 Answers
As you suggest in your question and Todd Trimble mentions in a comment, one interesting choice of morphism between Poisson manifolds is that of a coisotropic correspondence: if $M, M'$ are Poisson manifolds, depending on exactly how you work you either think about coisotropic submanifolds in $\bar M \times M'$, or maps $N \to \bar M \times M'$ with coisotropic image, where $\bar M$ is the same manifold as $M$ but with the opposite Poisson structure (and I give $\bar M \times M'$ the product Poisson structure that you're rightly not fond of). Then it is a straightforward fact that a correspondence $N \subseteq M\times M'$ which is the graph of a smooth map $M \to M'$ is coisotropic in $\bar M \times M'$ iff the map is a Poisson map.
Note that this all generalizes the category in which objects are symplectic manifolds and morphisms are Lagrangian correspondences --- then a correspondence that is the graph of a smooth function is the graph of a symplectomorphic open embedding iff it is Lagrangian. It also has just as many bad properties. Notably, only composition between generic morphisms is defined, as in the non generic case some intersections may not be transverse. So to make it into a category requires the same kind of $A_\infty$ work (or Wehrheim-Woodward method, or...). I know that some of Alan Weinstein's recent papers discuss this category.
This category generalizes easily to the algebraic case that you ask about. Recall that an ideal in a Poisson algebra is coisotropic if it is a Lie subalgebra for the bracket (not necessarily a Lie ideal!), and that a submanifold of a Poisson manifold is coisotropic iff its vanishing ideal is coisotropic. So what I'm suggesting is that if $P,P'$ are Poisson algebras, and writing $\bar P$ for $P$ with the opposite Poisson structure, then one interesting notion of "morphism" $P \to P'$ is a coisotropic ideal in $\bar P \otimes P'$.
Dima Shlyakhtenko has suggested more or less the same category in another answer. There is the following philosophy: Poisson manifolds / algebras are a sort of "infinitesimal" piece of noncommutative algebra, and under this rough relationship coisotropic submanifolds are supposed to correspond to (left, say) modules. Then coisotropic correspondences are roughly the same as bimodules. Recall that from an algebra point of view, bimodules are a fairly natural notion of morphism: they are precisely the left adjoints (say, or right adjoints, or adjunctions) between the corresponding categories of modules. The module theory of an algebra knows a lot about the algebra, including its Hochschild homology and cohomology (and hence its center, its perturbative deformation theory, and so on).
Of course, it is far from the case that the tensor product of algebras has much to do with the (co)product in any category. Rather, remembering only the Morita theory of algebras helps to explain what is their tensor product: it is the tensor product in the 2-category of (nice) categories with left-adjoints as morphisms, in the sense of being universal for "bilinear" maps. One can be quite precise about this: the 2-category of algebras and bimodules is a categorification of the 1-category of abelian groups. Actually, if you remember the underlying algebra, then that's the same as remembering its module theory along with the data of a "rank-1 free module", and so this is a categorification of the 1-category of abelian groups with a distinguished element. (Morita theory is like linear maps that ignore the distinguished element.)
Incidentally, it is now straightforward to invent the notion of "sesquialgebra", which is an algebra object in the 2-category of algebras and bimodules, or equivalently a closed monoidal category structure on the module theory of said algebra. The same notion in Poisson manifolds is an algebra object in the category of Poisson manifolds and coisotropic correspondences, so this includes the Poisson Lie monoids. Alan Weinstein and collaborators a few years ago tried to write down a good notion of "Hopfish algebra" for controlling when this map would be invertible, but my opinion is that their paper doesn't quite get it right. What you should do is the following. Recall that a functor between monoidal categories is strong-monoidal if it comes equipped with a natural isomorphism between the two ways of composing the functor and the corresponding monoidal structures (and maybe extra data for associativity, etc.). A strong monoidal functor between closed monoidal categories also determines a natural transformation between "inner homs", which need not be a natural iso. If it is, call the monoidal functor "hopfish" or "strongly closed". A bialgebra is a sesquialgebra with a marked right adjoint to Vect (equivalently, a marked "rank 1 free algebra", the image of the 1-dimensional vector space under the corresponding left adjoint) which is strong monoidal; a Hopf algebra is a bialgebra in which the strong monoidal functor is hopfish.
-
Thanks! This sounds like a pretty good choice. Can you explain briefly what the composition law is in this category? – Qiaochu Yuan Aug 15 2011 at 20:08
@Qiaochu: At its most basic, the composition is just that of correspondences: if you have $N \to M \times M'$ and $N' \to M' \times M''$, then you can form $N \times_{M'} N' \to M \times M' \times M'' \to M \times M''$. Except that in manifolds, fibered products like this are not well-defined. Spelling it out, what has to happen is that $N \times N' \to M \times M' \times M' \times M''$ must intersect transversally with the diagonal map $M \times M' \times M''\to M \times M' \times M' \times M''$. If the intersection is transverse, then the composition is coisotropic if $N,N'$ are. – Theo Johnson-Freyd Aug 15 2011 at 21:00
Actually, the proof of coisotropy should have something to do with the correct version of "Poisson reduction" akin to symplectic reduction. I'll try to dig up the appropriate Alan Weinstein paper. – Theo Johnson-Freyd Aug 16 2011 at 12:22
@Theo: I don't have a good sense of what that looks like in the greater generality of Poisson algebras. (Does it still make sense in that generality?) – Qiaochu Yuan Aug 16 2011 at 2:49
In Todd Trimble's comment to the main question, he recalls the correct buzzword "coisotropic calculus". The foundational paper seems to be Alan Weinstein, Coisotropic calculus and Poisson groupoids, J. Math. Soc. Japan Vol. 40, No. 4, 1988, projecteuclid.org/euclid.jmsj/1230129807 . – Theo Johnson-Freyd Aug 16 2011 at 12:22
I believe your objections could as well be raised for the category of algebras (note that Poisson algebras are in a certain way infinitesimal to algebras, since they encode first-order information about the deformation of the product of an algebra). For example, you will note that the inversion of the group reverses the order of products. There is also no natural notion of tensor products of algebra representations.
One way to fix this in the category of algebras is to replace an algebra $M$ by its "enveloping algebra" $M\otimes M^o$ where $M^o$ is the opposite of $M$; in other words, to go from the category of $M$-modules to the category of $M\otimes M^o$ modules (i.e., $M,M$-bimodules). The same idea has been carried out to some extent in the Poisson category. See e.g. the book "Geometric models for noncommutative algebras" by Ana Cannas da Silva, Alan Weinstein.
For example, the idea that morphisms between Poisson manifolds are related to Lagrangian submanifolds of a certain symplectic manifold is parallel to the idea from algebra that morphisms can be generalized by considering a bimodule with a preferred vector (the symplectic manifold in question is a bit more complicated than the product of the two Poisson manifolds, since that in general need not have any symplectic structure; it is related to the point that any Poisson manifold can be viewed as the quotient of a symplectic manifold by an action of a groupoid).
However, this does not fix the problems you mention above (since, as I wrote, they are somehow the same in the category of algebras).
-
http://math.stackexchange.com/questions/232220/optimizing-response-times-of-an-ambulance-corp-short-term-versus-average/232432 | # Optimizing response times of an ambulance corp: short-term versus average
Background: I work for an Ambulance service. We are one of the largest ambulance services in the world. We have a dispatch system that will always send the closest ambulance to any emergency call. There is a belief that this results in the fastest response time as an average across the system.
Example: Suppose we have the following simplified scenario. We have 3 ambulances available (labelled A, B and C). At any given point in time, there is an random chance of an emergency call originating equally anywhere inside this box:
If an emergency job appears (labelled 1), we will send the closest ambulance (in this case Ambulance C)
You will notice that Ambulances A and B are very close together on the left side of the box (lets pretend they are just leaving a hospital after dropping off their patients). There is now a large gap on the right. Suppose a small amount of time passes, and another emergency call drops in (labelled 2).
Ambulance C is no longer available, so you must use Ambulance A or B. They have a significantly longer distance to travel to reach incident 2. In this case we have sent Ambulance A to the job.
Hypothesis: If you always send the closest ambulance to an emergency call, you will have the quickest response time to that specific call, but the overall average response time of the ambulance service is not optimised.
Using that hypothesis - it would seem to be better to send Ambulance A or B to the original incident 1. This would mean the the next incident to happen, there will be "on average" a significantly shorter distance for the next ambulance to travel.
Question: How can I "prove" this? Is there a mathematical theory or formula? This is obviously a simplifed scenario, there are some other issues - but I just need to prove the fundamental issue that the "closest ambulance being sent to the next incident" actually results in a non-optimal response time across the system as a whole.
To answer some generic concerns:
"Just move A to C's location after C goes on the job" Yes - we already try to do this - however there is often another "incident" before A gets into the new position. And for other reasons {which are beyond this question} it is not always an option.
"Why are A+B so close together? Perhaps B should be elsewhere?" There are lots of reasons for ambulances to be 'clumped' together. The main reason is they have probably just offloaded a patient at a hospital - this often causes uneven dispersal of ambulances resources. Other reasons include the physical locations of our stations/depots.
"Volume of work" We are one of the largest ambulance services in the world. In my scenario I have 3 ambulances. In reality there are approx 100 ambulances, and we service approx 2000 "incidents" per day in our main metropolitan area. At some points in time we run at very high capacity - i.e. almost every ambulance is on an 'incident' - so applying a optimal response strategy across the system will have a significant impact to our response times.
"Ethics of not sending closest ambulance" Yes - this is not just a mathematical issue. But for the purpose of this Stack Exchange, please limit responses to a mathematical answer. In regards to ethics, I would suggest that if we can lower the "overall" response time (i.e. from 12mins to 10min average) - then "overall" we are ethically providing the best service. If we use a sub-optimal response, and "overall" we provide a slower response, then is not a bad ethical decision? Also - there would probably be exceptions to the rule (i.e. a heart attack or choking would ALWAYS get the closet ambulance - because seconds matter. But for this scenario lets not make it complicated)
-
The problem I can see, is time. If you know of both position 1 and 2 at the same time, then the best decision is to send C to 2 and A to 1. However, unless you have precogniscient dolphins, you don't have this information. The best decision you can make, with the information available, is that which you have in your 3rd diagram. – Korgan Rivera Nov 7 '12 at 16:24
The strategy of always sending the closest ambulance is akin to using a greedy algorithm for solving optimization problems, as in trying to solve the traveling salesman problem by always going to the closest unvisited location. In general, not an optimal situation. – Harald Hanche-Olsen Nov 7 '12 at 16:28
@TheShiftExchange the problem is that where events happen would be somewhat random, so you cannot expect to have a strategy that always optimises. What you can expect is a strategy that optimises in the long run (on average). But to actually prove whether a strategy is or is not optimal, we need to know about the typical distance traveled by an ambulance and the typical interval between incidences. If on average events happen infrequently enough then no strategy can beat the "closest responder" one. – Willie Wong♦ Nov 7 '12 at 16:29
This kind of problem is known as an online job scheduling problem in the unrelated machines model, where the jobs are emergency calls and the machines are ambulances. You'll probably want to cross-post this to the theoretical computer science stack exchange. – Perce Nov 7 '12 at 16:35
On a side note, in your scenario it might make sense to send A across to the right as soon as C goes out on its call, in order to lower the maximal waiting time for a new emergency. But this sort of consideration will of course complicate the issue further … – Harald Hanche-Olsen Nov 7 '12 at 16:40
## 16 Answers
Operations problems like this are tricky, because they always include a large element of uncertainty.
As Willie Wong points out, it is possible to construct a scenario in which "closest-first" is a non-optimal strategy. However, proving that there exists a non-optimal strategy does not mean that globally such a strategy is non-optimal.
Wait, what?
In these cases, you have to consider the probability distribution of events as a spatio-temporal process. In such a case, you may construct an algorithm for dispatching emergency services that is almost never optimal for any isolated case, but is globally optimal over all possible cases!
Here is an example: say that your region is circular, and that events only happen on the circumference. Your job is to determine where inside the circle to position your only ambulance. If the distribution of emergency events is uniformly distributed over the circumference, the solution is clearly to position the ambulance at the center of the circle. However, this solution is not optimal for any individual event.
Your question is to prove that the closest-ambulance-first scheme is not optimal "across the system as a whole." In order to prove that, you need to establish that the probability of a second event happening closer to $C$'s original position than to any other ambulance, within the time it takes for $C$ to complete its call, is higher than not.
If the process is truly Poisson (i.e. completely spatially random in a 2-D region), then this calculation is simple -- the area of $C$'s Voronoi cell would need to be greater than half of the area of your district, assuming that the process intensity $\lambda$ is such that the expected call time is short enough that only one further event happens on average.
Let's explain that last paragraph.
Let's say that you run an ambulance service in Squareville, Australia. This city is completely flat, square, has no roads, and is one unit wide by one unit high.
Say emergencies happen randomly, anywhere in the city. This can be described by a completely uniform spatial 2D point process; the distribution of such a process is called a Poisson distribution. In a Poisson 2D point process, existing points have no influence on the location of the occurrence of subsequent points -- indeed, the location is truly random.
Let's say in an hour, $n$ events occur. The area of Squareville is 1 square unit. This leaves us with an average spatial intensity of $n$ events/unit area. This is known as the intensity factor, or Poisson parameter, often called $\lambda$ and is equivalent to both the mean and variance of the distribution.
In other words, $$\lambda = n.$$
Now, let's take a connected subset of Squareville, called Squareville Heights. This subset has some area less than 1, let's call it $a < 1$. Because the area of Squareville is 1 square unit, Squareville Heights covers exactly $100a$ percent of Squareville. (For example, if $a = .5$, the region covers 50% of the city).
If you look at the number of events that occur in this region, the average number will be $\lambda a$ -- the global intensity times the area of Squareville Heights. This turns out to be exactly $100a$ percent of the events.
We can look at this another way.
Let's take an hour, and chop it into small intervals $dt$ that are small enough that in any finite region of the city, no more than one emergency will occur. The probability of this emergency happening in Squareville Heights is exactly $a$.
Let's say we have $k$ ambulances distributed somehow throughout Squareville. We want to investigate the probabilities of events happening closer to one ambulance than to another. It turns out this is very easy.
Let $A$ be the co-ordinates of your ambulances. The Voronoi tessellation of a point pattern, $V(A)$, is a set of 2D regions such that each $v_i$ in $V(A)$ is the subset of Squareville such that every point in $v_i$ is closer to ambulance $a_i$ than to any other ambulance.
If an emergency happens in $v_i$, then $a_i$ will be the closest ambulance to the event.
We can inspect the probabilities. Let's say that an emergency happens in $v_1$, and $a_1$ responds. Now let a second event occur while $a_1$ is still responding. The probability of this happening in $a_1$'s original Voronoi tile, $v_1$, is exactly the area of $v_1$, denoted $|v_1|$.
As a result, we can then compute -- in this trivially simplified case -- the probability that sending $a_2$ to the first call would be better than $a_1$.
In this case, there is a probability of $|v_1|$ (a $100|v_1|$ percent chance) that holding $a_1$ and sending $a_2$ to the first call will result in a response-time reduction for the second call.
If the ambulances are also randomly distributed, the mean areas of the tiles will be about .33; however, operationally, the distribution of ambulances is not Poisson, so you will probably have some ambulances that possess relatively large Voronoi tiles.
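A small Monte Carlo sketch (Python) of the key quantity above -- the probability that a second call lands in $C$'s Voronoi tile, which is just the area of that tile. The ambulance positions below are invented to mimic the figure in the question ($A$ and $B$ clumped on the left, $C$ on the right):

```python
import numpy as np

rng = np.random.default_rng(42)
ambulances = np.array([[0.10, 0.50],    # A
                       [0.15, 0.50],    # B
                       [0.80, 0.50]])   # C

calls = rng.random((200_000, 2))        # uniform calls in the unit square
d = np.linalg.norm(calls[:, None, :] - ambulances[None, :, :], axis=2)
nearest = d.argmin(axis=1)

area_C = (nearest == 2).mean()          # fraction of calls whose nearest ambulance is C
print("area of C's Voronoi tile ≈", area_C)
# If this exceeds 1/2, a second call is more likely than not to land in the
# region C has just vacated -- the situation the question is worried about.
```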
-
Could you clarify this last paragraph for the layman? :-) – deed02392 Nov 8 '12 at 13:53
@deed02392 Sure, it will take me some time, but I will attempt to do so today :) – Arkamis Nov 8 '12 at 15:12
@deed02392 I have made an attempt at an explanation. – Arkamis Nov 8 '12 at 21:44
That is fantastic, thank you very much for your time. – deed02392 Nov 9 '12 at 8:27
I have little familiarity with ambulances and am not sure if they are stationed when on call, as a fire truck is, but if they are not, then you could simply position them such that the distance from any point in the region to the nearest ambulance is minimized. When a call comes in, the nearest ambulance could respond and the entire formation would reposition immediately such that the distance from any point to any current ambulance was again minimized.
Not sending the nearest available ambulance to an emergency sounds like a liability nightmare.
(Edit) If the ambulances must be stationed, I would propose the same solution with the only placement positions for the ambulances being the available stations.
In your example, I would still respond with C, and as soon as C leaves I would send A to C's station.
-
Indeed, the true objective of any real-world optimization problem is not "compute the maximum of performance" but rather "compute the minimum loss due to litigation"... – Arkamis Nov 7 '12 at 17:44
In the US anyway, an ambulance is stationed at the base of the agency that owns it, and dispatched only by that agency. Pre-emptive shuffling of vehicles and personnel isn’t generally done outside of a large-scale emergency. – Jon Purdy Nov 7 '12 at 21:23
Thanks Tyler. We already do this (I've just added an FAQ). The problem is due to the significant volume of 'incidents' - we often see that before A can even get close to C's old position, a new incident arrives. The whole situation is very dynamic and changes every minute. – TheShiftExchange Nov 7 '12 at 22:27
A thought comes to mind. Your solution of distributed servers (ambulances) makes it easier to define a Quality of Service metric and strive to uphold it, e.g. a maximum time to arrive of less than X minutes. – Vorac Nov 8 '12 at 11:35
+1 The best of course is to station ambulances like a game of tennis as you describe. Whenever one "player" moves, the other(s) move to be in an optimal position. The thing is I don't think it is feasible to always have ambulances on the move, and gas is not free. – lc. Nov 9 '12 at 5:46
I actually see this problem a lot, believe it or not, in World of Warcraft.
The problem is vastly simplified, granted, as it comes in the context of a wargame with fixed points to defend and attack. But the general idea is the same: one may have, say, three points to defend. If one base comes under attack, the natural thing to do is to reinforce it--forces not engaged in combat are wasted, after all, but one still has to have adequate protection for the other two bases to deter an opportunistic attack. What I would do is move forces from the closest unthreatened base to the base being attacked, and then reinforce that base with forces from the furthest base. So lots of people are in motion, but each has a relatively short distance to go, compared to sending from the furthest base.
So, I suspect--though I certainly couldn't say I can prove--that the optimal strategy here is to still use ambulance C to respond to the initial call and then move ambulance A to a position such that, for any random point in the box, the expected best response time of A and B is minimized. When C becomes available again, move A back to the configuration that is optimal for 3 ambulances.
By no means do I want to suggest this as the overall strategy without some level of rigor. I think the things you would want to consider are the usual timescales involved: the time to respond to a call, the time to tend to a call, and the time between calls.
-
This sounds like approximately what I would start with -- effectively send BOTH C and A. C gets there first, but shortly thereafter, A is in the area (certainly faster than if you wait for call 2 to come in before dispatching A). The goal being to establish and maintain an even distribution (and thus reduce response times and save lives). – jmoreno Nov 8 '12 at 5:40
+1 Never thought WoW could help in life & death situations :) – Laurent Couvidou Nov 8 '12 at 9:56
WoW units don't require fuel. – Frantisek Kossuth Nov 9 '12 at 9:05
What I propose is this: Before assigning an ambulance to a call, check the coverage of that ambulance's current vicinity by OTHER ambulances. If you find the coverage to be poor, run the same algorithm on the SECOND nearest ambulance, and so forth. (Fundamentally, what we're trying to do is not just assign an ambulance to a call, but to assign an ambulance whose absence will not drag average times down too much.)
To prove its usefulness, I think you can use real-life statistical data and compare real-time response times vs. simulated response times (i.e. the calls are real-time data, but ambulance assignments are hypothetical).
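To make the comparison concrete, here is a minimal simulation-harness sketch (Python). Everything numeric in it is a made-up placeholder (base positions, call rate, service time, the 0.15 closeness slack, straight-line distance at unit speed); with access to real dispatch logs you would replace the synthetic call generator with logged call times and locations, and the distance function with actual travel-time estimates:

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def simulate(policy, n_calls=20000, rate=1.0, service=0.3, seed=0):
    rng = random.Random(seed)
    bases = [(0.10, 0.5), (0.15, 0.5), (0.80, 0.5)]   # A, B clumped left, C right
    free_at = [0.0] * len(bases)                      # time each unit is next available
    t, total_response = 0.0, 0.0
    for _ in range(n_calls):
        t += rng.expovariate(rate)                    # Poisson call arrivals
        call = (rng.random(), rng.random())           # uniform call location
        available = [i for i in range(len(bases)) if free_at[i] <= t]
        if available:
            i, wait = policy(available, bases, call), 0.0
        else:                                         # all busy: queue on the soonest-free unit
            i = min(range(len(bases)), key=lambda j: free_at[j])
            wait = free_at[i] - t
        travel = dist(bases[i], call)                 # unit speed, straight-line
        total_response += wait + travel
        # crude model: unit is busy for travel + on-scene time, then is back at its base
        free_at[i] = t + wait + travel + service
    return total_response / n_calls

def closest_first(available, bases, call):
    return min(available, key=lambda i: dist(bases[i], call))

def coverage_aware(available, bases, call, slack=0.15):
    # among units nearly as close as the closest, prefer the one whose base is
    # best backed up by another available unit (i.e. the cheapest one to pull away)
    best = min(dist(bases[i], call) for i in available)
    candidates = [i for i in available if dist(bases[i], call) <= best + slack]
    def backup(i):
        others = [dist(bases[i], bases[j]) for j in available if j != i]
        return min(others) if others else float("inf")
    return min(candidates, key=backup)

print("closest-first  :", simulate(closest_first))
print("coverage-aware :", simulate(coverage_aware))
```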
-
+1 for using real-life statistical data: Simulate with call data over the past year, "What would the average (simulated) response times be with the nearest-responder algorithm?" (Maybe you can check these against actual response times for calibration.) "What would the average (simulated) response times be with improved algorithm A? algorithm B?" Then maybe also, "How many individual incidents would have suboptimal response times using algorithm A or B? How suboptimal?" – LarsH Nov 7 '12 at 20:59
Thanks Vinod - I like your approach - I was thinking of something similar. I was hoping for some formula or mathematical 'proof' that this is the optimal strategy, apart from just running simulations. Either way, I'm looking into how I might be able to simulate this and see the results. – TheShiftExchange Nov 7 '12 at 23:14
I like this approach, because it's relatively simple. Obviously, you'd have to define what "poor" coverage is sufficiently rigorously to make this practical to apply, but simulations based on real-life data should help pin that down. – Bobson Nov 8 '12 at 0:55
When using real-life data, you need not only the travel times from ambulances to sites, but also to consider the distribution of calls. For example, an old-folks home might generate two calls per day, whereas Daycare Acres generates one a month. Using this data, you would first determine where the optimal starting stations are in the no-calls situation, and then you can start distributing ambulances based on usage. – John Deters Nov 8 '12 at 21:11
Let's simplify the situation to the limit so that you'll see clearly what is going on without heavy computations. Assume that you have two stations at $A$ and $B$, which are $1$ mile apart on a straight road and you get calls from random places in between independently with the uniform distribution. Assume also that the total number of calls is exactly the same as the total number of vehicles.

Start with the simplest case when you have $a$ vehicles at $A$ and nothing at $B$. Then you have no choice, so the average travel distance is $\frac a2$. The same is true if your vehicles are all at $B$. Now suppose that you have $1$ vehicle at $A$ and $1$ vehicle at $B$. Then for every position $x$ of the first call ($A=0$ and $B=1$), no matter which vehicle you send, the second call will be at the average distance $1/2$ from the remaining vehicle. Thus, your best bet for the first call is to send the nearest vehicle to it, averaging the distance $1/4$ with the total average $3/4$ for two calls.

Suppose now that you have $2$ vehicles at $A$ and $1$ at $B$. Now it gets interesting. Let $x$ be the position of the first call. If you send the vehicle at $A$, you'll travel $x$ and will be left with the $1-1$ situation, which averages to $3/4$. So, your total is $x+\frac 34$. If you send the $B$-vehicle, you travel $1-x$ but after that you are left with $2-0$ and average $1$. So $A$-vehicle should be sent if $x+\frac 34<1-x+1$, i.e., if $x<5/8$. The average will be then $\frac 58(\frac 5{16}+\frac 34)+\frac 38(\frac 3{16}+1)=\frac{71}{64}$. For comparison, the nearest vehicle strategy will give $\frac 12(\frac 14+\frac 34)+\frac 12(\frac 14+1)=\frac 98$. There is a small gain of $\frac 1{64}$ on three calls if you use the optimal strategy.

Now let us assume that we have $3$ vehicles at $A$ and $1$ at $B$. Let's run the same analysis. Let $x$ be the place of the first call. If you send an $A$-vehicle, you get $x+\frac{71}{64}$ for the expectation. If you send the $B$-vehicle, you get $1-x+\frac 32$. Now the cutoff is at $x$ satisfying $2x=\frac 52-\frac{71}{64}=\frac{89}{64}$, so $x=\frac{89}{128}$, i.e., now it makes sense to use the better staffed station more than $2/3$ of the way on the first call.

I guess you see now what's going on: the better staffed stations (or the places with more stations per unit area) should sometimes be given preference even when the distance is larger (but not too much larger). It is not hard to make a full analysis of this two station model even when the number of calls is random and you have $a$ vehicles at $A$ and $b$ at $B$ and I'll gladly do it for you if you get interested and if I find free time (if you got the idea, you can play with it yourself too now). How much will it tell about the real life? Well, as much as any overly simplified math. model: the general conclusion is correct but the exact cutoffs, etc. are not very realistic.
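A quick Monte Carlo sanity check (Python) of the $2$-at-$A$, $1$-at-$B$, three-call numbers above, under the same idealised assumptions -- calls uniform on $[0,1]$, each vehicle serves exactly one call and never returns:

```python
import random

def trial(rng, threshold=None):
    # threshold=None: always send the nearest remaining vehicle.
    # threshold=5/8 : on the first call send an A-vehicle (at 0) iff x < 5/8,
    #                 then revert to nearest-remaining for later calls.
    vehicles = [0.0, 0.0, 1.0]
    total = 0.0
    for n, x in enumerate(rng.random() for _ in range(3)):
        if n == 0 and threshold is not None:
            v = 0.0 if x < threshold else 1.0
        else:
            v = min(vehicles, key=lambda p: abs(p - x))
        total += abs(v - x)
        vehicles.remove(v)
    return total

N = 200_000
rng = random.Random(1)
print("nearest-first :", sum(trial(rng) for _ in range(N)) / N)       # ~ 9/8   = 1.1250
rng = random.Random(1)
print("5/8 threshold :", sum(trial(rng, 5/8) for _ in range(N)) / N)  # ~ 71/64 ≈ 1.1094
```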
-
wow - that is a really good starting point and "proves" it for two stations. Thanks - I'll look at this more closely... – TheShiftExchange Nov 8 '12 at 2:03
Nice account, but I believe you have an error in calculating the expected travel in the 2 : 1 case. The expected value of x is 0.5, which is greater than 5/8. Hence, the closest ambulance should be dispatched. Please see my answer, which uses the same formalization (endpoints on a unit interval, uniform distribution), and shows that dispatching the closest ambulance is better over all. – alexis Nov 8 '12 at 18:32
---The expected value of x is 0.5, which is greater than 5/8. Hence, the closest ambulance should be dispatched.--- I don't really understand this phrase. Since I showed that your initial approach was incomplete, now it is your turn to pinpoint an error in my argument if you see any :). – fedja Nov 9 '12 at 15:49
Agreed; plus I misspoke, since 0.5 is less than 5/8! :-) Let me go over this again (what is the 5/8 factor in the next step?) – alexis Nov 9 '12 at 16:43
The 5/8 cutoff is only for the first move. After you are left with 2 vehicles, send the nearest for the second call (if you still have a choice). – fedja Nov 9 '12 at 17:07
If you just want an explicit example, make your second incident happen exactly where C is stationed. Then in the scenario where
A picks up incident 2, C picks up incident 1
the total distance traveled is "distance between 1 and C" plus "distance between A and C" (since 2 is where C is stationed).
In the scenario where
A picks up incident 1, C picks up incident 2
the total distance traveled is "distance between A and 1" plus "distance between C and 2" (which is zero).
So we draw a picture with A, C, and 1 on it. The total distance traveled in the first scenario is two legs of the triangle formed by the three points. The total distance traveled in the second scenario is only one leg of the triangle. By the triangle inequality, the sum of lengths of two legs of one triangle must be bigger than the length of the third. This shows that the total distance traveled in scenario one, which is the "closest responder" scenario, can be non-optimal.
-
This is a neat problem and can be approached using stochastic calculus and non-linear optimization. I don't believe it's explicitly solvable unless trivial assumptions are made. I'll first describe a way to optimize the initial positions of the ambulances. We will then use cost functions developed from this optimization to discuss how sending a different ambulance affects response time for future visits.
Things you will need:
EventPdf(x,y): A probability density function that describes the probability that an emergency event will occur over an area. Some models might be equal probability within a bounded range, or probability proportional to population density; the best may be a PDF constructed from real-world emergency occurrences.
AverageEmergencyTime: A constant or PDF describing how long an ambulance will be unavailable when going to a call. You can probably get away with a normal distribution or constant on this one.
Cost function:
You'll need a cost function that you will attempt to minimize. You'll probably express this in units of time. It will probably include:
Standard Movement Cost: The average cost in time to move from one position to another. You'll achieve this by integrating, over the service area, the distance from each point to the ambulance positions (x_i, y_i), weighted by the frequency of calls in that area. For each grid element you'll always take the cost of the least expensive ambulance.
````ElementCost(x,y,x_i,y_i) = distanceFrom(x,x_i,y,y_i)*EventPdf(x,y);
LeastElementCost(x,y) = Min( AmbulanceList.ElementCost(x,y) );
TotalStandardCost = integral(serviceAreaBounds) {(LeastElementCost(x,y))};
````
This will give you a baseline for computing average response time.
ModifiedCost_i - Removed Ambulance Movement cost: You'll need to then compute the cost if one or more of these ambulances is removed.
W_i: You'll weight these proportionally to the standard cost, based on how likely the ambulances are to be removed and how long they will be removed for. If you have a lot of idle time, the standard cost will be weighted heavily. If one ambulance is almost always busy, you'll find the others weighted heavily. I'm not going to model this, but it can be modeled. You might want to throw out unlikely scenarios.
Cost: Now you construct a total cost function to be minimized.
````Cost = W_0*TotalStandardCost + W_1*ModifiedCost_1 ... W_N*ModifiedCost_N
````
Now you put this into a non-linear optimizer, with x_i,y_i for each ambulance as input variables, and your Cost variable as an output and you tell it to minimize your cost by selecting x_i and y_i for the ambulances.
Anyway, this is a good starting point and is a somewhat realistic model, but it needs a lot of work. You may want to weight your costs more heavily for long response times (e.g. getting sued for taking too long). You may also want to account for ambulances being able to move to new positions when one is busy.
Now that we have a good model we can address your question of sending a different ambulance rather than the closest ambulance. Now we assume a predefined set of x_i,y_i for each ambulance. We can make changes to the function:
````LeastElementCost(x,y) = Min( AmbulanceList.ElementCost(x,y) );
````
To pick out a different ambulance than the closest one for a response. This will create a different PDF for each of the weighted costs. Integrating over the new PDF will compute a new total cost that can be compared to the unmodified one; it will tell you whether you benefit from sending a different ambulance.
This is a discrete case and can't be handled with a continuous solver. If you have a small number of ambulances (N is not large), you can search over the entire solution space. I'm betting it'd take a few seconds for 10 ambulances if it's coded right.
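As a sketch of the placement step only (Python): it optimises just a TotalStandardCost analogue, using a uniform stand-in for EventPdf and a Monte Carlo integral; the weighted ModifiedCost terms and any real-world constraints are left out.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
demand = rng.random((5000, 2))        # Monte Carlo sample from EventPdf (uniform stand-in)

def total_standard_cost(flat_positions):
    pos = flat_positions.reshape(-1, 2)                          # (n_ambulances, 2)
    d = np.linalg.norm(demand[:, None, :] - pos[None, :, :], axis=2)
    return d.min(axis=1).mean()       # expected distance to the nearest ambulance

x0 = rng.random(3 * 2)                # 3 ambulances, random initial positions
result = minimize(total_standard_cost, x0, method="Nelder-Mead")
print(result.x.reshape(-1, 2))
print("expected nearest-ambulance distance:", result.fun)
# Nelder-Mead only finds a local optimum; in practice restart from several x0 values.
```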
-
Scheduling algorithms have only access to information about the past, and are making guesses about what will optimize the unknown future. You have no clue when or where the next emergency will occur. By not sending the closest ambulance to a current emergency, you're risking somebody for a potential gain, which seems unethical. What if you send an ambulance from 10 miles away instead of from 2 miles away, and someone dies? And after that incident, there is no emergency for another 45 minutes?
The problem of keeping the ambulances geographically distributed can be solved without doing away with the closest ambulance policy.
If an ambulance is dispatched somewhere, what you can do is instruct other ambulances to move around in order to keep the ambulances evenly distributed.
-
Thanks Kaz. I've added a FAQ which might clear some of it up. Note - I would suspect that the most "correct" answer would factor in a weighted response - so the scenario you describe would not occur, or would be factored in. I was more thinking about instances where the difference is only 1-2 miles, not 10. In the case of "10 miles" - the weight formula would mean that the closest ambulance is still sent. – TheShiftExchange Nov 7 '12 at 23:13
I would approach this by first making the assumptions of your model clear.
if you say that
1) all points in the square are equiprobable for an incident
2) all incidents must be responded to by immediately sending an ambulance
3) the cost (time) can be measured as the simple distance between two points
=> you would always try to have the ambulances distributed across the area (here a square) as uniformly as possible.
=> you can always send the nearest ambulance immediately
=> directly after, you send the remaining ambulances to the new points for optimal coverage
=> if two ambulances are equidistant from an incident, you send the one which makes the other ambulances travel as little as possible to reach the new coverage points.
I am sure that this can be proven, however I am not sure if these assumptions are acceptable for you...
Cheers Fab
-
I think first you need to prove that better assignments are possible with hindsight. Get a log with, for each call, the time the ambulance was sent out, the positions of the ambulances and of the incident, the ambulance sent, and the time of arrival. If the availability of ambulances varies, also log the times that they are in service. If this information isn't available, make it available. Collect these logs for a year (the density of incidents and maximum traveling speed are likely to fluctuate).
Now you can make a model of travel times between any two points in the area on any time of the day.
Based on this model, determine, per day in the log, all alternative assignments of ambulances to incidents and for each assignment compute the sum of the expected travel time from assigned ambulances to incidents. Plot the results. Show how much of a difference it makes. Translate into numbers of lives lost or similar. Do the circumstances matter? (Is the difference lower or higher when the density of incidents is higher?)
Once you've done this, you've established that a different assignment policy may exist that will improve the average response time in the long run. The next step is to propose such an algorithm and to compare the assignments it produces against the actual assignments in the log. One idea is to estimate the probability of an incident occurring per area and per time of day, and somehow use this in deciding which ambulance to send.
-
I think we should discuss the modeling of the scenario a bit before discussing optimizing the strategies handling it.
First thing is, we have an area relevant for the scenario that has a probability for an emergency to occur. This probability surely is directly proportional to the density of inhabitants and to the dangers connected with living there. In a big city that is only partly covered by your "area", the emergency probability is likely to be equally high everywhere. Compare this to having ambulances A and B inside a medium town, and ambulance C in a small town, with nothing but outback in between - the emergency probability would have some hot spots around each ambulance with nothing in between.
Second thing is: How urgent can every emergency call be? Not every emergency call is the same. While several situations require medical assistance, only some of them require immediate attention because lives are in danger. This directly influences the dispatching of any ambulance: if the second emergency needs more immediate attention, then unless ambulance C has already reached incident 1, it can be rerouted to incident 2 while ambulance A or B is sent to handle incident 1. But this topic is one of the solving strategies, so let's save it for later.
Third thing is the positioning of the ambulances. Is having them stationary on defined spots required, or might they be located anywhere? Is every ambulance connected to its home spot, or are they able to occupy ANY defined spot - at least temporarily?
Fourth: The thing missing is the fact that an emergency evolves in several phases. 1. Emergency call is received. 2. Ambulance is dispatched. 3. Ambulance is on its way to the destination. 4. Ambulance arrives and provides first aid. 5. Ambulance leaves emergency and transports casualty to a hospital. 6. Ambulance delivers casualty at hospital. 7. Ambulance leaves hospital and returns. 8. Ambulance arrives at home base. (Instead of the visit to the hospital, the ambulance might directly return home.) At each of these phases the ability of the ambulance to react to a new emergency call might be different, so this might offer an option to change plans if incident 2 becomes known. But even if the plan for an already dispatched ambulance cannot be changed, the relevant factor is how long it usually takes to fully handle an emergency call.
I'm pretty sure reality inflicts plenty more factors that have to be considered, like rush hour, quality of the roads, average and maximum speed, etc., but they should be irrelevant for this discussion.
Edge cases: The time it takes for ambulances A or B to arrive at incident 2 has to be less than it takes for ambulance C to handle incidents 1 and 2 one after another, otherwise it's useless to think about dispatching them.
Now about strategies. If we construct a scenario where the two incidents happen within a few minutes of each other, and it is impossible to reroute an already dispatched ambulance even though the second emergency would be more life threatening, this ambulance will take way longer to handle its incident than it takes the second ambulance to arrive, and the positions for all the ambulances are fixed, and no ambulance should be sent elsewhere because of the first incident happening, then there really is nothing left to change to improve the situation.
Having the incident handling set in stone allows you to calculate the likelihood of such an unfortunate situation occurring, based on the factors I described above: a) How likely is it that two incidents occur shortly after one another? b) How likely is it that the second incident occurs in a place that is located nearer to the now-dispatched ambulance? Think about the emergency probability distribution on the map. c) How likely is it that the second incident is more urgent than the first, and the first is so non-urgent that the proper order would have been to handle the second one first had both emergencies been reported at the same time, thus allowing you to choose the order?
If you think about it, it might be easy to construct a scenario where dispatching ambulance A to the first emergency makes sense: if ambulance C is located in an area with high emergency probability, thus a high probability of two emergencies occurring, both being equally urgent, and ambulance A is close enough that it can reach emergency 1 in time. But in such a scenario it might be a better option to permanently relocate ambulance A next to ambulance C to improve the situation, because the probability distribution for an emergency would make it somehow predictable that such a situation will occur.
I would conclude that it is in general the best strategy to send the nearest available ambulance to any emergency that occurs. Deviating from such strategy can only be justified if the initial location of ambulances is suboptimal compared to the probability of an emergency.
-
Considering that there is no real way to know where or when the next call comes in, there is no real way to pick an ambulance other than the currently closest one.
However, you can alter the situation a bit in order to optimize response times.
To do this, rebalance the equation. What I mean by this is that once ambulance C has responded to the first call, you should reposition ambulance A to provide equal coverage across the given area.
This means that ambulances must be constantly manned and they will be in motion quite a bit moving from one part of the area to another. This will result in increased costs (for example gas). However, if you pay attention and properly place your ambulances while constantly rebalancing the target coverage you will end up with reduced response times.
-
I'd like to work the solution in a slightly different way, by refining the model to conform a bit more to reality, and also decrease the problem complexity a bit.
In reality there isn't a continuum of points across the space which are being dealt with, but rather a discrete set of vertices (addresses) connected by (or placed along) edges (roads) of some length (weights) (in other words, the ambulances can't travel through space freely, like a helicopter, but have to use roads to get between points). Additionally the roads are directed in most cities (one-way streets) so we can take lanes of traffic flow to each be a unique edge in a weighted digraph. Additionally each "incident location" can be stated to occur at the intersection closest to the address of the event along the edge (since that's just a shortest distance along a line problem and is easily found).
So to be clear my assumptions are the following:
• The incidents occur at intersections only (as they are aggregated to the closest intersection to the actual location of incident)
• The ambulances must travel along the roads between intersections
• Roads have lengths (which can also be thought of as travel times) between intersections (weights), and directions (for one-way travel).
This means that the problem can be reduced to a weighted digraph within the space. The objective then becomes to minimize the average time taken for an ambulance to get from its current location to any other location after a sequence of random events.
There is some additional information we can get from this model however, mainly the fact that a vertex will have a probability of incident relative to the number of people within its range of effect. As a result the probability distribution need not be considered uniform anymore (Which allows much easier optimization).
Additionally the graph can be thought of as a planar graph in most cases due to the way roads in any service area are likely considered. This gives a very restricted model (of less mathematical complexity) compared to the original (which is also a better approximation of reality).
Now, for optimization of the response times, let's take another thing into consideration: that a vertex is only considered "in" the graph with probability equal to the probability that an incident will occur at the vertex (a fuzzy set). There is then a "maximal" fuzzy dominating subset of the vertices which will have the highest probability of forming a dominating set over the graph (a set for which every vertex is adjacent to at least one vertex in the dominating set). The ambulances will then attempt to move towards this arrangement constantly unless they are "used" (going to or unloading at a hospital).
To be mathematically clear: given a fuzzy graph $G = (V,E)$ in which a vertex $v$ belongs to $G$ with probability $\mu_v$,
for $D \subseteq V$, $D$ dominates $V$ with some probability $P(D)$.
The probability of a set dominating the graph is then the aggregate probability of all of the vertices existing, and the probability that their removal from the graph will cause a non-dominated graph (the expression is a complicated dependent events probability over the entire graph).
By taking only the set which maximizes this as the objective we will minimize the distance from an event that an ambulance is likely to have to travel (which optimizes positioning). The ambulance which should be sent to any location is then the one for which a ratio of the difference in dominating probability should that ambulance go, to the distance of the ambulance from the event:
So given the set of ambulances $A = \{a_1, \ldots, a_n\}$ (each of which can also be thought of as the vertex closest to its location), the one which should go would then be: $$\max\left(\frac{|P(D)-P(D\setminus\{a_i\})|}{\delta(a_i,v)} \;\Big|\; a_i \in A\right) = \min\left(\frac{P(D\setminus\{a_i\})}{\delta(a_i,v)} \;\Big|\; a_i \in A\right)$$ where $\delta(a_i,v)$ is the minimum sum of edge weights between the ambulance $a_i$ and the vertex $v$ where the problem happened (all of these factors will vary depending on the particular situation in question).
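A loose illustration of this rule (Python/networkx), reading it as "dispatch the unit with the best ratio of remaining coverage to travel distance". The graph, weights, incident probabilities $\mu_v$, coverage radius, and ambulance positions are all invented for the example, and the coverage function is a crude stand-in for $P(D\setminus\{a_i\})$:

```python
import networkx as nx

# Toy one-way road network: weighted directed edges are travel times (invented).
G = nx.DiGraph()
G.add_weighted_edges_from([("a", "b", 2), ("b", "a", 2), ("b", "c", 3), ("c", "b", 3),
                           ("c", "d", 1), ("d", "c", 1), ("a", "d", 7), ("d", "a", 7)])

mu = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}   # incident probability per vertex
ambulances = {"amb1": "a", "amb2": "c"}          # current vertex of each unit
incident = "b"

def delta(u, v):
    return nx.shortest_path_length(G, u, v, weight="weight")

def coverage(vertices, radius=4):
    # probability mass of vertices within `radius` of some remaining unit
    # (a crude stand-in for the dominating probability of D without a_i)
    return sum(p for v, p in mu.items() if any(delta(s, v) <= radius for s in vertices))

def score(name):
    remaining = [v for n, v in ambulances.items() if n != name]
    return coverage(remaining) / delta(ambulances[name], incident)

print("dispatch:", max(ambulances, key=score))
```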
-
It seems your (mathematical) intuition is wrong, at least under the constraints you specified. Let's firm them up as follows:
1. The goal is to optimize the average time it takes to reach an incident.
2. The starting locations of the ambulances are fixed. The question is what order to dispatch them in.
3. Once an ambulance is dispatched, it is permanently unavailable: It cannot serve other incidents during the same trip, or return to service before the end of the calculation.
Note that after the second ambulance is dispatched, there is only one ambulance left, and it no longer matters where, timewise: There's a 50% chance that the third incident will be in the opposite side of the map. So, the question is: Starting with the 2 : 1 configuration you present, which dispatch strategy will result in the quickest response, on average, over the first two incidents?
The result can be generalized to larger sizes, but the two-incident case is instructive. The answer will surprise you: The nearest-ambulance strategy is better!
````A, B ----------------------'---------------------- C
````
Let's draw a line from A/B to C, and label it as the interval [0, 1]. An incident is a uniformly distributed random variable t in this interval. We can equate response time to distance: If there are ambulances on both sides, the response time to an incident at point t is min(t, 1-t). If ambulance A must be dispatched, the response time is simply t.
The expected value of t, E(t), is 0.5. Obviously t is closer to the left if t < 0.5. It's easiest to consider scenarios by grouping incidents into two random events: Let's call an event L if t is in the left half of the range (t < 0.5), and R if it is in the right half. Then R and L occur with probability 0.5 each.
There can be four two-incident patterns, occurring with equal probability (0.25): LL, LR, RL, RR. If you could always dispatch from the closest side, the expected travel is 0.25 per incident. You incur a cost when you have to dispatch from the wrong side of the map. The expected length of a "long" trip is 0.75.
So, the payoff matrix is:
````incidents dispatch closest dispatch A
L L A, B A, B
L R A, C A, C
R L C, A A*, B
R R C, A* A*, C
````
The right column is your proposed strategy: dispatch A first, then whichever is closest. A star means that the ambulance had to make a long trip (from the wrong side of the map) to get there. Note that there are more stars (penalties) on the right column.
The fourth row in the table is the one you're worried about: What if there's another incident on the right side of the map, after you've dispatched C? The answer is, don't worry! The bottom row has exactly the same cost under the two strategies: When you dispatched A to the first incident, you already incurred the cost you were trying to avoid in the second incident.
But the third case is one you didn't factor in: What if you send A on a long trip, and then C is not needed after all? In this case, dispatch-A will take longer than the dispatch-closest algorithm.
Since dispatch-A is not better under any scenario, dispatch-closest is superior on average.
Informal summary: If C is closer but you dispatch A, as you propose, you incur a penalty equal to the benefit you will reap if the next incident is nearer to C. So you would break even at best. But the next incident might be closer to A, and you'll never get to collect on your investment. Conclusion: Dispatching the closest ambulance is the better strategy. You can only do better by improving the distribution of ambulances before the second call comes in, as various others suggested. Once the phone rings, sending the closest car is in fact globally optimal.
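For anyone who wants to check the arithmetic, here is a quick Monte Carlo sketch of the two-incident model above (two ambulances at position 0, one at position 1, incidents uniform on [0, 1]); the function name and structure are mine:

```
import random

def expected_travel(first_dispatch, trials=200_000):
    """Average total travel over two incidents in the 1-D model above.

    first_dispatch: "closest", or "A" (always send the co-located ambulance A first).
    """
    total = 0.0
    for _ in range(trials):
        left, right = 2, 1                      # ambulances at 0 and at 1
        for i, t in enumerate((random.random(), random.random())):
            if i == 0 and first_dispatch == "A":
                go_left = True                  # proposed strategy: A goes first regardless
            else:
                go_left = (right == 0) or (left > 0 and t < 0.5)   # nearest available side
            total += t if go_left else 1 - t
            if go_left:
                left -= 1
            else:
                right -= 1
    return total / trials

print(expected_travel("closest"))   # about 0.625
print(expected_travel("A"))         # about 0.75
```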
-
Thanks alexis - I'm just as "open" to proving that dispatch-closest is the correct answer. The whole point of this question is proving something, one way or the other - because at the moment it is only done this way because we "think" it is best - I want proof :) – TheShiftExchange Nov 8 '12 at 23:27
@Alexis. Yes, for the first two incidents sending the nearest car gives better average. However, for the three incidents it gets worse. If the third incident doesn't occur, then the strategy I proposed is inferior, but if it occurs, it is superior. Funny, isn't it? That's why I said "sometimes" in my conclusion. And no, there is no mistake in my computations; just the assumptions are different (3 incidents instead of two for 3 vehicles). – fedja Nov 9 '12 at 0:46
@fedja, thanks for checking but I don't think that's right. Once two vehicles are gone, the situation is completely symmetric: It doesn't matter where the last vehicle is, there's a 50% probability that the third incident will be near it. So there's no way for one strategy to perform better than the other in the third step. – alexis Nov 9 '12 at 11:52
@ShiftExchange, thanks. I'm confident my proof is correct, but I could be deluded of course. Let me know if you have any concerns. And feel free to upvote my answer so that more people can find it and weigh in. (Congrats for having such a hit as your first question, by the way :-) – alexis Nov 9 '12 at 11:58
Yeah, you are right about the third step. This means you are wrong about the first two. Note that after the first step, if you are left with 1-1, you have a great advantage over being left with 2-0 (1/4 against 1/2 for the expected distance on the second call). That's what I exploited when saying that it is beneficial to send the A-vehicle a bit further. I'll have to go over your argument carefully to see where the mistake is... – fedja Nov 9 '12 at 13:50
Do you have some legal obligations you must comply with? Some countries have laws that guarantee citizens (to some degree, of course) that an ambulance arrives within some specified time frame after an incident has been reported. That is (probably) actually more important than arriving as soon as possible, since some physiological processes are simply time-constrained, and had help arrived later it would just have wasted resources.
In such a case you might want to take into account how big an unserviceable area (in the sense of the time constraint) would be created by sending C to 1, and compare it with the same area created by sending A there (because A is either not located at exactly the same place as B, or the probability of an incident there is much higher). You "just" need to find the least wrong metric that would take into account at least the most important factors.
Of course - as already mentioned - real-life data will help you a lot. It can even show, that the current distribution of stations is suboptimal with respect to the long-term statistical distribution of incidents.
-
downvoters, would you mind explaining the downvote please? Thanks. – peterph Nov 8 '12 at 13:17
Downvoting is anonymous on Stack Exchange; also, asking for explanation of downvotes is considered noise and should not take place in comments (it shouldn't be asked at all). – casperOne Nov 8 '12 at 15:10
If I had to conjecture, it would be because the answer provides no mathematical content. – Arkamis Nov 8 '12 at 23:48
Ignoring ethics, and going only for the optimal solution in terms of outcomes, how about applying the Kelly Criterion? (Anyway, Kelly will probably tell you to go ahead and send the closest ambulance, allowing you to avoid all of those legal/ethical complications in court, but most importantly it can tell you where to place all unoccupied ambulances at a given time.)
While the stock market versions would probably be best, since they are geometric rather than arithmetic and almost everything in the natural world is geometric, the basic version can provide an example; the distribution of your probabilities should be your guide.
So, as a quick and dirty example, calculate the probability of the call "succeeding" if an ambulance takes it, for each ambulance position relative to each point on the map, and likewise the probability of the call "failing".
"b" in the basic model might be harder to come up with, but I bet you and others in the industry probably already have some ideas. Maybe there are more experienced ambulance crews who can provide better outcomes for clients/patients?
You can play with this any way you want and make it as complex as you want, but I think you get the picture. Kelly will stabilize your results and improve outcomes if applied properly.
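For reference, the "basic version" alluded to here is presumably the standard discrete Kelly fraction, with p the success probability, q = 1 - p, and b the net odds received on a win; a minimal sketch (the function name is mine):

```
def kelly_fraction(p, b):
    """Discrete Kelly criterion: fraction of the bankroll to commit."""
    q = 1.0 - p
    return (b * p - q) / b

# e.g. a 99% success chance at even odds suggests committing almost everything:
print(kelly_fraction(0.99, 1.0))   # 0.98
```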
Good luck!
-
http://physics.stackexchange.com/questions/33604/voltage-and-resistance-in-series-connection

# Voltage and resistance in series connection
In a series connection with n elements it is true that (voltage):
$$V = V_1 + V_2 + ... +V_n$$
and (resistance):
$$R = R_1 + R_2 + ... +R_n$$
If I know one of these I can infer the other. But is it possible to prove either of them without the other?
-
Do you know the definition of $U$ in terms of the more elementary electric field? – Fabian Aug 7 '12 at 8:04
I know the definition as energy per charge. – Lucy Brennan Aug 7 '12 at 8:28
If you now imagine having two islands in series. You need the energy (per charge) $U_1$ to bring a unit charge in the first island and the energy $U_2$ to bring it from the first to the second. Then because energy itself is additive you need the energy $U_1+U_2$ to bring it directly to the second island (so the first equation is more elementary and derives from the additivity of energy). – Fabian Aug 7 '12 at 10:13
## 2 Answers
But is it possible to prove any of them without the other?
(1) By KVL, the voltage across the N resistors is:
$V = V_{R_1} + V_{R_2} + ... + V_{R_N}$
(2) For a series connection, by definition, there is only one current, $I$.
By Ohm's Law, the voltage across any of the series resistors is:
$V_{R_n} = I \cdot R_n$
By KVL:
$V = I \cdot R_1 + I \cdot R_2 + ... + I \cdot R_N = I \cdot (R_1 + R_2 + ... + R_N) = I \cdot R$
$R = R_1 + R_2 + ... + R_N$
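A quick numeric sanity check of the two identities (the component values below are arbitrary examples, not from the question):

```
R = [10.0, 20.0, 50.0]           # three series resistors, in ohms
I = 0.5                          # the single series current, in amperes

V_each = [I * r for r in R]      # Ohm's law applied to each resistor
V_total = sum(V_each)            # KVL: the voltages add
R_total = sum(R)                 # equivalent series resistance

assert V_total == I * R_total
print(V_each, V_total, R_total)  # [5.0, 10.0, 25.0] 40.0 80.0
```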
-
Here is a very slow derivation of $V = V_1 + ... + V_n$
The energy transformed in the whole series connection must equal the sum of the energy transformed in each of its elements, so the same is true of the power:
$P = P_1 + P_2 + ... + P_n$
According to the definition of electric power:
$P = IV$
By combining the two:
$I*V = I_1*V_1 + I_2*V_2 + ... + I_n*V_n$
But since the current is the same at any point in a series connection, $I$ can be crossed out on both sides.
$V = V_1 + V_2 + ... + V_n$
-
http://mathoverflow.net/questions/81321?sort=votes

## Growth of groups versus Schreier graphs
This question is motivated by this one http://mathoverflow.net/questions/32899/what-is-the-relation-between-the-number-syntactic-congruence-classes-and-the-num which essentially asks one to compare the growth of the syntactic monoid with the growth of the minimal automaton. A special case is the following. Let G be an infinite finitely generated group and H a subgroup such that G acts faithfully on G/H. How different can the growth of G and the Schreier graph of G/H be?
I know the Grigorchuk group of intermediate growth has faithful Schreier graphs of polynomial growth.
Are there groups of exponential growth with faithful Schreier graphs of polynomial growth?
Schreier graphs of non-elementary hyperbolic groups with respect to infinite index quasi-convex subgroups are non-amenable, so this case should be avoided.
-
Every regular graph of even degree is a Schreier graph of a free group. Basically you can make any growth you like. Faithfulness of a Schreier graph is not a problem usually. If you want a particular example in the spirit of the Grigorchuk group, you can take the Basilica group: it has exponential growth, acts faithfully on its Schreier graphs corresponding to stabilizers of infinite sequences, and these Schreier graphs have polynomial growth. – Ievgen Bondarenko Nov 19 2011 at 12:50
I believed a free group was no problem, but most of the examples I tried to draw in my head were not faithful. I was sure somebody knew an example off the top of their head. Basilica is good because the polynomial growth is from contracting, like in the Grigorchuk group. – Benjamin Steinberg Nov 19 2011 at 18:27
## 1 Answer
This holds true, for example, for free groups. Actually, take $G$ to be a free product of three copies of $Z/2Z$, which has an index two subgroup which is rank 2 free. The Cayley graph for this group (which has undirected edges) is just a trivalent tree, with edges colored 3 colors by the generators, so that every vertex has exactly 3 colors (this is known as a Tait coloring). Any cubic graph with a Tait coloring corresponds to a Schreier graph of a (torsion-free) subgroup $H$ of $G$, which is the quotient of the Cayley graph of $G$ by the subgroup $H$ (one may choose a root vertex to correspond to the trivial coset). Closed paths starting from the root vertex correspond to elements of the subgroup $H$.
Choose a cubic graph with a Tait coloring which has linear growth and corresponds to a subgroup $H$ satisfying your condition ($G$ acts faithfully on $G/H$). This is equivalent to $\cap_{g\in G} gHg^{-1}=\{1\}$. For example, take a bi-infinite ladder, labeling the two stringers with matching sequences of colors, which then determine the colors of the rungs. By making these stringer sequences aperiodic, you can guarantee that $\cap_{g\in G} gHg^{-1}=\{1\}$. Changing the root vertex corresponds to changing the conjugacy class. In fact, we may choose stringer sequences which contain any word in $G$. Then putting a root at the endpoint of such a word, we guarantee that it is not in the corresponding conjugate subgroup of $H$.
-
http://physics.aps.org/synopsis-for/print/10.1103/PhysRevLett.108.097403

# Synopsis:
Optical Device is More Than 100% Efficient
#### Thermoelectrically Pumped Light-Emitting Diodes Operating above Unity Efficiency
Parthiban Santhanam, Dodd Joseph Gray, Jr., and Rajeev J. Ram
Published February 27, 2012
Physicists have known for decades that, in principle, a semiconductor device can emit more light power than it consumes electrically. Experiments published in Physical Review Letters finally demonstrate this in practice, though at a small scale.
The energy absorbed by an electron as it traverses a light-emitting diode is equal to its charge times the applied voltage. But if the electron produces light, the emitted photon energy, which is determined by the semiconductor band gap, can be much larger. Usually, however, most electrons create no photon, so the average light power is less than the electrical power consumed. Researchers aiming to increase the power efficiency have generally tried to boost the number of photons per electron. But Parthiban Santhanam and co-workers from the Massachusetts Institute of Technology in Cambridge took a gentler approach, achieving power enhancement even though less than one electron in a thousand produced a photon.
The researchers chose a light-emitting diode with a small band gap, and applied such small voltages that it acted like a normal resistor. With each halving of the voltage, they reduced the electrical power by a factor of $4$, even though the number of electrons, and thus the light power emitted, dropped by only a factor of $2$. Decreasing the input power to $30$ picowatts, the team detected nearly $70$ picowatts of emitted light. The extra energy comes from lattice vibrations, so the device should be cooled slightly, as occurs in thermoelectric coolers.
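To see the scaling described here, note that in this regime the diode acts like a resistor, so the current (and with it the photon rate and emitted light power) scales roughly like V, while the electrical input I·V scales like V²; a toy illustration only (the numbers are made up, not the experiment's):

```
# Toy model: ohmic regime, I proportional to V, light power proportional to I.
V0, I0, light0 = 1.0, 1.0, 0.5        # arbitrary reference units

for halvings in range(4):
    V = V0 / 2**halvings
    I = I0 * (V / V0)                  # ohmic: current halves when the voltage halves
    p_in = I * V                       # electrical input drops by 4x per halving
    p_light = light0 * (I / I0)        # emitted light drops by only 2x per halving
    print(V, p_in, p_light, p_light / p_in)
# the efficiency p_light/p_in grows like 1/V and eventually exceeds 1;
# the extra energy is drawn from lattice heat, as described above
```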
These initial results provide too little light for most applications. However, heating the light emitters increases their output power and efficiency, meaning they are like thermodynamic heat engines, except they come with the fast electrical control of modern semiconductor devices. – Don Monroe
http://math.stackexchange.com/questions/222033/arc-transitivity-of-the-complete-graph

Arc transitivity of the complete graph
Recall that a graph $G$ is arc transitive if the natural action of $\mathrm{Aut}(G)$ on $A(G) = \{ (u,v) | \{u,v\} \in E(G)\}$ is transitive.
In other words, given $(u,v),(u',v') \in A(G)$ one finds a $g \in \mathrm{Aut}(G)$ such that $g (u,v) := (g(u),g(v)) = (u',v').$
Our lecturer said it is not completely straightforward to show that $K_n$ is an arc transitive graph. As far as I can see since $\mathrm{Aut}(K_n) = S_n$ one can always find a suitable permutation that maps a pair of vertices into another pair of vertices thus showing that $K_n$ is indeed arc transitive.
Am I missing something? I recall the lecturer talked about considering some cases (i.e if the arcs are not disjoint) but I don't see how this could create any real obstacle in finding the suitable permutation.
Am I missing something crucial here?
-
It might help if you told us what $K_n$ is. Is it the complete graph on $n$ vertices? If so, the problem amounts to showing that $S_n$ acts 2-transitively, which I would agree is not very taxing (assuming $n \ge 2$). – Derek Holt Oct 27 '12 at 13:03
1 Answer
$S_n$ acting transitively on $K_n$ only gives you that for each two vertices $v,v'\in V(K_n)$, there is a $\sigma\in S_n$ so that $v\sigma=v'$. You want to show that for any four vertices $v,w,v',w'\in V(K_n)$ (with $v,w$ distinct, as well as $v',w'$), you can find a $\sigma$ so that $v\sigma = v'$ and $w \sigma = w'$. This means $S_n$ has to be $2$-transitive, which is stronger than just transitive. For example, $D_n$ is transitive but not $2$-transitive for any $n> 3$, which is easy to see visually by its action on the $n$-cycle.
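To make the 2-transitivity completely explicit, given distinct $v,w$ and distinct $v',w'$ one can build a suitable permutation from at most two transpositions; a small sketch (the helper is my own, using dicts for permutations of $\{0,\dots,n-1\}$):

```
def two_transitive_map(n, v, w, vp, wp):
    """Return a permutation sigma of {0,...,n-1} (as a dict) with sigma[v] = vp, sigma[w] = wp."""
    assert v != w and vp != wp
    sigma = {i: i for i in range(n)}

    def swap_values(a, b):
        inv = {val: key for key, val in sigma.items()}
        i, j = inv[a], inv[b]
        sigma[i], sigma[j] = b, a

    if sigma[v] != vp:
        swap_values(sigma[v], vp)   # first transposition: send v to vp
    if sigma[w] != wp:
        swap_values(sigma[w], wp)   # second transposition fixes vp, since wp != vp and sigma[w] != vp
    return sigma

print(two_transitive_map(5, 0, 1, 3, 2))   # {0: 3, 1: 2, 2: 1, 3: 0, 4: 4}
```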
-
http://theoreticalatlas.wordpress.com/2009/05/13/smooth-ottawa/

# Theoretical Atlas
He had bought a large map representing the sea, / Without the least vestige of land: / And the crew were much pleased when they found it to be / A map they could all understand.
May 13, 2009
## Conference: Smooth Structures in Ottawa
Posted by Jeffrey Morton under category theory, conferences, geometry, groupoids, moduli spaces, smooth spaces
I’ve been to two conferences in the past two weeks, and seen a lot of interesting talks. A couple of weekends ago, I was in Ottawa at the Fields Institute workshop on “Smooth Structures in Logic, Category Theory, and Physics“. There were quite a few interesting talks, on a fairly wide range of points of view, and I had some interesting conversations as well. A good workshop overall. Another report on it by Alex Hoffnung is here on the n-Category Cafe. Then this past weekend, I attended the conference “Connections in Geometry and Physics“, at the Perimeter Institute in Waterloo (also jointly sponsored by the U of Waterloo math department, and the Fields Institute), where I gave a version of my Extended TQFT talk (if you’ve seen a previous version, this is similar, but references a few more recent variations, but is mainly distinguished as the first time I caved in and decided to use Beamer – frankly I find the graphical wingdings distracting and unhelpful, but it made sense given the facilities in the venue).
One personally interesting thing was that one of the talks in Ottawa was given by John Baez, who was my advisor for my Ph.D at UCR, and one of the talks at PI was given by Niky Kamran, who was my advisor for my M.Sc. at McGill, so I got to touch base with both of them in the space of a week. It also reminded me that I’ve worked on a fairly eclectic sampling of things, since John was talking about cartesian closed categories of smooth spaces, and Niky was talking about the long-time dynamics of the Dirac equation in the neighborhood of black holes.
In the near future I’ll make a post on the Connections conference but the following was getting long enough already…
Smooth Structures
So, as the title suggests, the Fields workshop addressed the topic of “smoothness” from several points of view, with the three mentioned being only the most obvious. To begin with, “smooth” carries the connotation of “infinitely differentiable”. So for example the space $C^{\infty}(\mathbb{R})$ of smooth functions on the real line has the property that, if $f$ is any function in it, you can take the derivative $f'$, and you get another smooth function, hence can take the derivative again, and so on. So one way to characterize the space $C^{\infty}(\mathbb{R})$ is that it has a differential operator $D$ (which satisfies some algebraic properties like the product rule etc.), and is closed under $D$.
One theme explored at the workshop has to do with finding a nice general notion of “smooth space”. The smooth structure on $\mathbb{R}$ that makes this possible is the model for smooth structures on other spaces. The most familiar way goes: (1) first extend the concept to $\mathbb{R}^n$ via partial derivatives, so we know what a smooth map $f : \mathbb{R}^n \rightarrow \mathbb{R}^n$ is (or between open subsets of these), and (2) define a “smooth manifold” as a (topological) space $M$ equipped with “charts” $\phi : U \rightarrow M$ for $U$ an open subset of some $\mathbb{R}^m$, satisfying a bunch of conditions. Then we can tell smooth functions on $M$ by pulling them back to $\mathbb{R}^m$ and using the familiar concept there. We can tell smooth maps $f : N \rightarrow M$ by composing with charts and their inverses and seeing that the resulting map $\phi^{-1} \circ f \circ \psi$ is smooth. This concept is great, and underlies, just to name a couple of examples, General Relativity and gauge theory, which are the basis of 20th century physics. It’s rather brittle, though, because simple operations like taking function spaces $Man(M,N) = \{ f : M \rightarrow N | f \text{ smooth } \}$, or subspaces $A \subset M$, or quotients $M/G$, give objects which are not manifolds.
John Baez talked about some of these issues in categorical language. Those three operations illustrate the general categorical constructions of exponentials, equalizers and coequalizers (or, generally, limits and colimits). The category of differentiable manifolds has important objects, but since it lacks these constructions in general, it’s not a nice category. The idea is to find a nice category – a cartesian closed one – which contains it. John described roughly how some approaches to this problem work – there were more detailed talks by Alex Hoffnung about the Diffeological spaces of Souriau, and by Andrew Stacey summarizing how the different categories are related and making a case for Frölicher spaces – but mostly focused on examples where these categories would be useful.
One interesting case deals with orbifolds (John suggested checking out this paper of Eugene Lerman, “Orbifolds as Stacks”), which were also the subject of Dorette Pronk’s talk. Dorette described some orbifolds – sometimes they arise by taking quotients of manifolds under the action of finite groups, and sometimes they only look locally as if they did. She also talked about the right way to think about maps between orbifolds, which is basically in terms of spans. That is, the right kind of “map” between orbifolds $X$ and $Y$ consists of (certain) maps into each of them from some common orbifold $Z$.
Under “logic” (and overlapping with “categories”), the first two days started off with talks by Anders Kock about Kähler differentials and synthetic differential geometry (regarding which see e.g. Mike Schulman’s intro, or the book by Anders Kock himself). SDG is a generalization of differential geometry to be internal to some topos – in particular, getting rid of the assumption that the geometry is based on some space which has an underlying set of points (which is special to the topos $\mathbf{Sets}$), and doing everything in the internal language of the topos. Kock introduced some work with Eduardo Dubuc – describing a “Fermat Theory” (an abstract characterization of a ring with a concept of partial derivatives) and showed how it fits into SDG.
Some other talks in the “logic” world included those of Rick Blute, Robin Cockett, and Thomas Erhard, about linear logic and differential categories. The basic idea is that a category can be associated to any logic, by taking formulas as objects, and (equivalence classes under rewriting rules) of proofs as morphisms. Linear logic, which is topical to quantum computation, is interesting from this point of view because it has some standard logical facts (like the deMorgan laws, which the intuitionistic logic of toposes do not entirely have) and also good categorical properties (one has a nice monoidal category). It’s a little strange if you’re used to thinking of classical logic: “propositions” are replaced by “resources” whose truth can be consumed (think of a quantum computer with a bit stored somewhere – you can move the bit, but not read and copy it); there are both “additive” and “multiplicative” versions of connectives which in classical logic go by the names “AND” and “OR” (correspondingly, there are additive and multiplicative versions of “TRUE” and “FALSE”). The relation to smoothness is that a new variation on this adds an operator to the category which behaves like differentiation. This is formally very interesting, though I haven’t really grokked what it’s good for, but apparently the derivative acts like a quantifier. Really!?
Among other talks, Konrad Waldorf spoke about parallel transport for extended objects – basically, this is a roundabout way of studying nonabelian gerbes (a kind of categorification of bundles), not by looking at the gerbes themselves but by directly looking at parallel transport for connections on them. Kristine Bauer gave two talks about "Functor Calculus" (in particular the Goodwillie calculus), which has to do with constructing something like a "Taylor series" for a functor into topological spaces, approximating the functor by "polynomials", and which shows up in homotopy theory. I also have the sense that the Goodwillie calculus generalizes to topological spaces a lot of what Joyal's species (related to "analytic" functors in a similar sense of being representable by "polynomials") do, but I don't understand this well enough at the moment to say just how.
### 5 Responses to “Conference: Smooth Structures in Ottawa”
1. May 13, 2009 at 9:06 am
[...] theory, geometry, moduli spaces, physics, talks No Comments As promised in the previous post, here is a little writeup of the second conference I was at [...]
2. Toby Bartels Says:
May 16, 2009 at 8:01 am
You wrote in part:
>(like the deMorgan laws, which the intuitionistic logic of toposes has)
But you meant to write:
>(like the deMorgan laws, which the intuitionistic logic of toposes lacks)
Actually, intuitionistic logic has up to 87.5% of the de Morgan laws, depending on how you count them, but it does not have ~(A and B) => ~A or ~B.
1. Jeffrey Morton Says:
May 20, 2009 at 6:35 pm
I would generally say that toposes don’t have the law of the excluded middle, but it’s true that is one of de Morgan’s laws… I guess it’s just a selling point to be able to say “This logic has ALL of de Morgan’s laws”.
3. John Baez Says:
May 17, 2009 at 8:59 pm
Nice summary!
“at one of the talks at PI” should probably be “and one of the talks at PI”.
4. April 1, 2010 at 9:13 pm
[...] this problem by moving to the category of diffeological spaces. As I mentioned in a previous post, this is one of a number of attempts to expand the category of smooth manifolds , to get a category [...]
http://www.physicsforums.com/showthread.php?t=587545

Physics Forums
## What exactly is an electron?
HI,
Firstly I'd like to open with I know what an electron is and I know all about its charge and the role it plays in electricity, current, free electron model etc etc.
My question is: what is an electron 'made' out of? My reasoning is that it can't be made out of anything physical, as its charge would distribute evenly throughout itself and it would fly apart, since every part of the electron would repel every other part of the electron.
In physics the electron is thought of as a mathematical point particle, but in a universe with 3 spatial dimensions a zero-dimensional object can't physically exist, so that rules that out.
If I could magically enlarge an electron to the size of a car, what would I physically see?
Or is there even any credence to asking a question like that?
I'd like to know, too. We don't have a model of elementary particles which gives them any structure. Many people have tried to build such a model (Lorentz, Poincare, Feynman ...), but no one has succeeded. Modern Quantum Field Theory assumes that elementary particles are pointlike entities with no internal structure. Whether this is true or whether this is only an approximation is an open question. "The Feynman Lectures on Physics" Vol 2, Chapter 28 gives a very readable history of these attempts.
What exactly is a lion? If I pointed at one and said "that's a lion", wouldn't that be an acceptable answer? What is unacceptable about pointing at an electron and saying "that's an electron?" Until you've answered that question, it will be difficult to write an answer that will satisfy you.
To conclude: To the best of our knowledge today (i.e., in this case the standard model of particle physics) the electron is an elementary spin-1/2 Dirac particle with one negative elementary charge and a mass of about $511 \; \mathrm{keV}/c^2$. It's a lepton, i.e., it participates only in the electroweak interaction (apart from gravitation, which acts universally on anything that has energy and momentum).
When you ask what something is, the most accurate description is detailing the physical properties of it, such as mass, charge, etc. Asking what it "really" is simply doesn't make any sense, as there is no more available information. Any answer is simply speculation.
Quote by CF.Gauss My question is what is an electron 'made' out of? [...] In physics the electron is thought of as a mathematical point particle [...] If i could magically enlarge an electron to the size of a car what would i physically see? or is there even any credence to asking a question like that?
In enlarging quantum objects one makes their quantum properties disappear. Macroscopic objects behave classically.
The electron is an elementary particle, hence not composed of anything but itself. But it is not a point - only pointlike (which means, the formal, unobservable, bare electron in the defining action is a point). Due to radiative corrections stemming from the renormalization procedure for relativistic quantum field theories, an observable, renormalized electron has a positive charge radius (though far too small to be probed experimentally with current methods).
What is the charge radius of the electron predicted to be? Order of magnitude.
CF.Gauss why do electrons not look like sparks/lightning?
Quote by nitsuj CF.Gauss why do electrons not look like sparks/lightning?
I think the lightning you see is actually emission from partially ionized nitrogen and oxygen plasma.
Yes, and when I "see" anything else, what am I "seeing"? Say fire, for example: am I seeing fire or what? Similar to what Vanadium 50 said, "What exactly is a lion? If I pointed at one and said "that's a lion", wouldn't that be an acceptable answer?" An electron looks like a bzzt, and feels like a bzzt, so it must be a bzzt.
This question is similar to asking what is a photon? Photons and electrons and other elementary particles are not actually little billiard balls that are flying around at high speeds. They are both quantum excitations of their respective fields. The entire universe is filled with a photon field, and it's mostly empty. You can think of it as an empty EM field as well. At every point in space there is a quantum harmonic oscillator for each possible spatial frequency, and the thing about quantum harmonic oscillators is that the allowed energy levels come in steps of hw. The minimum energy of the oscillator is 3/2 hw in 3 dimensions, and then it goes up to 5/2 hw, then 7/2 hw, etc. One step above the zero-point level is considered one photon at that spatial frequency. The photon could have a range of frequencies, and be localized in some way, or be more spread out and less localized. Just think of the field as an infinite set of harmonic oscillators at every point in space, and think of the particles as quantum vibrations of this field.
In a similar way, there is an electron field that fills all of space with a zero-point energy, and it has certain linearly quantized energy levels above the zero level that indicate the number of electrons. This explains why every electron has exactly the same mass, charge, spin, and g-factor. Saying "an electron" is the same thing as saying "a quantum vibration of the electron field", but the latter is too wordy. The electron vibration can be localized, as in a vibration around an atom, or more spread out like a free particle, or an electron in a double slit experiment. The big difference between the electron field and the photon field is that electron vibrations can't stack directly on top of each other. This is described as the Pauli exclusion principle. The electron field is a fermion field, described by the Dirac equation. Two electron vibrations can be in almost the same state very close to each other, but they can never occupy the same exact state.
I like to visualize all quantum particles, whether they are photons or electrons, as 3-dimensional fuzz balls, and those fuzz balls oscillate and move around and sometimes disappear according to the probabilistic laws of QFT. It's the sudden collapse of the fuzz balls that's most shocking to me (wavefunction collapse is mysterious).
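Just to illustrate the "energy levels in steps of hw" picture numerically (this is only the textbook 3-D harmonic-oscillator ladder in arbitrary units, not a QFT calculation):

```
hbar_omega = 1.0   # energy quantum of a single mode, arbitrary units

# Energy of one 3-D oscillator mode holding n quanta ("n photons in this mode"):
for n in range(4):
    E = (n + 1.5) * hbar_omega
    print(n, E)    # 1.5, 2.5, 3.5, 4.5 -> equally spaced steps of one quantum
```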
Fastman, while your explanation seems to make sense, I am hesitant to really accept it, as I've never heard of "photon fields" or "electron fields" and the like. What model is this from?
You've heard of fermionic fields though, right? I just sort of made up the terms "photon field" and "electron field" on the spot. http://en.wikipedia.org/wiki/Fermionic_field The electron field is just a type of fermionic field governed by the Dirac equation. That's my definition anyway. It's what helps me envision quantum field theory better. The most disappointing aspect of the field theory is that it predicts a large zero-point energy. It's been dismissed before, but now that dark energy is around, we need some explanation for why there is a negative energy field permeating the entire universe and causing cosmic acceleration. The zero-point energy of the quantum fields was a candidate, but the calculations were done and it's 120 orders of magnitude larger than the measured value! That's a terrible model error. http://en.wikipedia.org/wiki/Zero-po..._and_cosmology
Quote by Fastman99 You've heard of fermionic fields though right? I just sort of made it up the terms "photon field" and "electron field" on the spot.
Actually no. My knowledge of QFT is severely lacking. Thanks for the links by the way!
Quote by Bill_K What is the charge radius of the electron predicted to be? Order of magnitude.
$$10^{-16}cm$$ according to
http://adsabs.harvard.edu/cgi-bin/np...hDT.......130L
False. There is no prediction for the charge radius of the electron. There are experimental limits suggesting that any charge radius must be smaller than some number, but the number that A. Neumaier posted is neither a prediction nor a measurement.
If you could magnify an electron to the size of a car, you would have to slow it down as well, so as to observe the individual oscillations.
http://mathoverflow.net/questions/66473/group-cohomology-of-an-abelian-group-with-nontrivial-action/66481

## Group cohomology of an abelian group with nontrivial action
How do I compute the group cohomology $H^2(G,A)$ if G is a finite abelian group acting nontrivially on a finite abelian group A?
-
Type it into a computer. Seriously. Magma will definitely do it. – Kevin Buzzard May 30 2011 at 19:44
GAP too........ – Fernando Muro Sep 14 2011 at 12:10
## 3 Answers
If $G$ is any group and $A$ is any $G$-module, then $H^2(G,A)$ can be identified with the set of the equivalence classes of extensions $$1\to A\to H\to G\to 1$$
such that the action of $G$ on $A$ is the given action. Two extensions $H_1,H_2$ are said to be equivalent if there is an isomorphism $H_1\to H_2$ that makes the extension exact sequences commute. See K. Brown, Group cohomology, chapter 4.
-
You can compute it using the Bar resolution, see [Weibel, H-book].
-
I know exactly how it is related to extensions and how to compute it explicitly if the actions is trivial (using Kunneth formula to translate the problem to cyclic groups). I was just trying to find out if there is some relatively easy procedure that gives you an explicit answer for nontrivial actions (perhaps by somehow using the long exact sequence to translate the problem to that of trivial actions). – Mitja May 30 2011 at 21:06
I don't think it's a good idea to use the bar resolution here since it's way too large. You'll get a projective resolution of minimal complexity by tensoring the periodic resolutions of the cyclic summands of $G$. – Ralph May 31 2011 at 4:14
One can do the calculation using Kunneth theorem and the cohomology of cyclic group.
See eqn J18 and appendix J.6 and J.7 in a physics paper http://arxiv.org/pdf/1106.4772v2
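For orientation, the trivial-coefficient statement being used here (and, as the comments below stress, the coefficients must indeed be trivial for it to apply directly) is the Künneth formula for group cohomology together with the cohomology of cyclic groups, e.g. for finite groups $G_1,G_2$: $$H^n(G_1\times G_2;\mathbb{Z})\cong\bigoplus_{i+j=n}H^i(G_1;\mathbb{Z})\otimes H^j(G_2;\mathbb{Z})\;\oplus\bigoplus_{i+j=n+1}\mathrm{Tor}\bigl(H^i(G_1;\mathbb{Z}),H^j(G_2;\mathbb{Z})\bigr),$$ $$H^i(\mathbb{Z}_m;\mathbb{Z})\cong\begin{cases}\mathbb{Z}, & i=0,\\ 0, & i\ \text{odd},\\ \mathbb{Z}/m, & i>0\ \text{even}.\end{cases}$$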
-
I don't think this works so easily with non-trivial coefficients. – Fernando Muro Sep 14 2011 at 12:12
Dear Fernando: Eqn. J60 - eqn. J70 in the above paper give some explicit results for non-trivial coefficients, for some simple Abelian groups. But do your suggest that I cannot use Kunneth theorem for non-trivial coefficients? – Xiao-Gang Wen Sep 14 2011 at 16:02
I agree with Fernando. At least the derivation of (J54) is doubtful: It's based on (J43), but in (J43) one has $H^i(G_1;M)\otimes_M H^{n-i}(G_2;M)$ while by setting $G_1 := Z_2^T, G_2 := Z_n$, (J54) reads: $H^i(G_1;Z_T) \otimes_Z H^{d-i}(Z_n;Z)$, i.e. in both components of the tensor product in (J43) the coefficients are equal, while in (J54) they differ! Also be aware of the wikipedia references for Künneth formulas: They require trivial coefficients (otherwise wikipedia would have to use (co)homology with local coefficients, which they don't). – Ralph Sep 14 2011 at 20:58
Dear Ralph: Thanks for the comments. I agree with you that eqn. J43 from a webpage is intended for trivial coefficients. But if the Künneth formula only depends on the cohomological structure algebraically, should it also apply to non-trivial coefficients, provided that the group action "splits" in some way? Here $Z_T$ is the same as $Z$. Just that $G_1$ has a non-trivial action on $Z_T$. In fact, $G_1\times G_2$ acts "naturally" on $H^i(G_1,Z_T)\otimes_Z H^{d-i}(G_2,Z)$. Let $a\in H^i(G_1,Z_T)$ and $b\in H^{d-i}(G_2,Z)$. We have a group action $(g_1,g_2) \cdot (a\otimes b)=(g_1\cdot a)\otimes b$. – Xiao-Gang Wen Sep 14 2011 at 22:31
In the case that I am interested in, the action does not split in any way at all. To make things more precise, in my case $G$ is a transitive abelian subgroup of $S_n$ acting on $A=(Q/Z)\times\ldots \times (Q/Z)$ by permuting factors (in the concrete problem I actually have the multiplicative group of complex numbers without $0$ instead of $Q/Z$). – Mitja Sep 15 2011 at 12:27
http://mathoverflow.net/revisions/71939/list
# What is the strongest, most natural, conjectural form of Langlands?
This is inspired by my previous question: http://mathoverflow.net/questions/71743/what-is-the-precise-relationship-between-langlands-and-tannakian-formalism
As well as the excellent link that Tom Leinster put in a comment to that thread: http://golem.ph.utexas.edu/category/2010/08/what_is_the_langlands_programm.html
It seems that people are reluctant to say a form of Langlands that is too strong, but as consequence the statement is less natural, and more convoluted. So here I prefer that the statement be natural and bold rather than unnatural (for example, I consider the statement that each $L$ function coming from Galois representations is the $L$ function of some automorphic form to be unnatural).
### Question
What is the strongest, most natural statement of Langlands? It would be nice if you can give a short definition of the words you use, but I am mostly interested in the narrative (each this has a blah, to each blah is a this, this is associated to this category by blah, and this is conjectured to be an equivalent category to blah, and so forth)
Words like: stack, motive, Tannakian, motivic Galois group, L-packets are encouraged. (of this list $L$-packets are by far the thing I know the least about)
This is subjective, so community wiki it is.
http://physics.stackexchange.com/questions/51489/rotate-vector-in-spherical-coordinates?answertab=oldest

Rotate vector in spherical coordinates
I have two arbitrary vectors $\vec{x}$ and $\vec{x}'$ given in spherical coordinates $(|\vec{x}|=x,\theta,\phi)$ (as convention I take the "physics notation" given on Wikipedia http://en.wikipedia.org/wiki/Spherical_coordinate_system). I now want to rotate the coordinate system so that it's $z$-direction points along $\vec{x}$. That means, $\vec{x}$ would have the values $(0, 0, x)$. I now need to compute the angles of $\vec{x}'$. The absolute value does not change, but the angles do. I need to figure out how the angles are in the new coordinate system. With the help of rotation matrices, one is able to get:
$\vec{x}' = x' (\sin(\theta')\cos(\phi'-\phi)\cos(\theta)-\sin(\theta)\cos(\theta'),\sin(\theta')\sin(\phi'-\phi),\sin(\theta')\cos(\phi'-\phi)\sin(\theta)+\cos(\theta)\cos(\theta')) \equiv x' (\sin(\alpha')\cos(\beta'),\sin(\alpha')\sin(\beta'),\cos(\alpha'))$
Now $\alpha', \beta'$ are the angles in the normal sense but in the new coordinate system. I need a converting rule $\theta', \phi' \to \alpha', \beta'$. Anyone a hint?
Edit (some further explanations): I need this to compute an integral of the form $\int \mathrm{d}^3x' g(\theta',\phi')f(|\vec{x}-\vec{x}'|)$ and I converted $\mathrm{d}^3x'=x'^2\mathrm{d}x'\mathrm{d}\phi' \sin(\theta')\mathrm{d}\theta'$ to spherical coordinates. The problem is that $|\vec{x}-\vec{x}'|^2=x^2+x'^2-2xx'[\sin(\theta')\cos(\phi'-\phi)\sin(\theta)+\cos(\theta)\cos(\theta')]$ contains angles of both vectors and I need to get rid of the unprimed angles (which is possible by transforming the coordinate system under the integral to point with its z-direction along $\vec{x}$).
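For what it's worth, here is a small numerical sketch of the conversion being asked for (my own construction, not from the question): build the rotation $R=R_y(-\theta)R_z(-\phi)$ that takes $\hat z$ onto $\hat x$, apply it to the unit vector of $\vec x'$, and read off the new angles with arccos/arctan2.

```
import numpy as np

def new_angles(theta, phi, theta_p, phi_p):
    """Angles (alpha', beta') of x' in the frame whose z-axis points along x."""
    def Rz(a):
        return np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0, 0.0, 1.0]])
    def Ry(a):
        return np.array([[np.cos(a), 0.0, np.sin(a)],
                         [0.0, 1.0, 0.0],
                         [-np.sin(a), 0.0, np.cos(a)]])
    u = np.array([np.sin(theta_p) * np.cos(phi_p),
                  np.sin(theta_p) * np.sin(phi_p),
                  np.cos(theta_p)])
    v = Ry(-theta) @ Rz(-phi) @ u          # components of the unit vector of x' in the rotated frame
    alpha_p = np.arccos(np.clip(v[2], -1.0, 1.0))
    beta_p = np.arctan2(v[1], v[0])
    return alpha_p, beta_p

# sanity check: the direction of x itself must come out with alpha' = 0
print(new_angles(0.7, 1.2, 0.7, 1.2))
```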
-
are you sure your transformation is right? One check I did was plug in $\theta' = \theta$, $\phi' = \phi$, $x' = x$, i.e. see how $\vec{x}$ itself is described in this new coordinate system in which $\vec{x}$ is in the z-direction. The answer should be $x(0,0,1)$. But I get $x(\sin(2\theta),0,1)$. – nervxxx Jan 17 at 21:49
Correct, I made two typos. Corrected it and then it fits. – DaPhil Jan 18 at 8:25
I think this should be migrated to math, it is not really a physics problem. – daaxix Jan 18 at 20:16
@daaxix, but it is a very frequent problem one finds in physics. I think it is best placed here. In math.se they are usually interested in questions of a more abstract nature. – Eduardo Guerras Valera Jan 18 at 20:49
http://mathhelpforum.com/pre-calculus/154984-problem-involving-complex-numbers-polar-form.html
1. ## A problem involving complex numbers in polar form
I'm having some problems with this question. It is as follows;
A and B are two points on a computer screen. A program produces a trace on the screen to execute the following algorithm.
Step 1 - Start at any point P on the screen.
Step 2 - From the current position describe a quarter circle about A.
Step 3 - From the current position describe a quarter circle about B.
Step 4 - Repeat step 2.
Step 5 - Repeat step 3, and stop.
Show that the trace ends where it began.
The section of my text book that this question is in deals with spiral enlargements of complex numbers but I am not sure how to do this question using them.
I have started by equating A to the origin. From here the point P may be described as $P = |p|(\cos\alpha + i\sin\alpha)$. The point P' reached at the end of step 2 may be described as $P' = |p|(\cos(\alpha + \tfrac{1}{2}\pi) + i\sin(\alpha + \tfrac{1}{2}\pi))$
From this it can be established that if $P' = sP$ then $s = \cos\tfrac{1}{2}\pi + i\sin\tfrac{1}{2}\pi$
It is after this that I don't know where to go next. It looks like if P is multiplied by s four times then the trace will arrive back at P but that doesn't seem to fit with describing quarter circles about B.
Any pointers would be appreciated!
2. We'll obviously have to assume that all the quarter-turns are described in the same direction.
A single (anticlockwise) rotation through a quarter turn about the origin corresponds to multiplication by i. But for this problem there are two different centres of rotation, A and B. We can't put them both at the origin, so we need a more general setup.
Represent A, B, P by the complex numbers a, b, z. If you rotate z (anticlockwise) a quarter turn about a, it takes z to the point $a + i(z-a) = (1-i)a + iz$. If you rotate this new point a quarter turn about b, it takes it to the point $b + i\bigl((1-i)a + iz - b\bigr) = (1+i)a + (1-i)b - z$.
If you now repeat both operations then you get the same formula again, except that z must be replaced by $(1+i)a + (1-i)b - z$. So the final position of z is given by $(1+i)a + (1-i)b - \bigl((1+i)a + (1-i)b -z\bigr) = z$. In other words, z ends up where it started from.
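A quick numeric check with Python's built-in complex numbers (arbitrary sample points; the helper name is mine):

```
def quarter_turn(z, centre):
    """Rotate z anticlockwise by a quarter turn about centre."""
    return centre + 1j * (z - centre)

a, b = 2 + 1j, -1 + 3j            # the two centres A and B
p = 0.5 - 2j                      # an arbitrary starting point P

z = p
for centre in (a, b, a, b):       # steps 2-5 of the algorithm
    z = quarter_turn(z, centre)

print(abs(z - p) < 1e-12)         # True: the trace ends where it began
```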
http://quant.stackexchange.com/questions/4823/easy-one-step-option-replication/4830

# easy one step option replication
I have the following question. Consider that you have a stock currently trading at 100\$. In one month it can jump to 120\$ with probability 99% and go to 80\$ with probability 1%. How much is a call option with strike at 110\$ worth now?
So when you do the replication, a portfolio consisting of 0.25 shares of stock and -20\$ in cash gives exactly the same payoff as the option, hence the option should be worth 5\$. But again consider someone who wants to sell you a security that in 99% of cases gives you 10\$ and in the remaining 1% gives you 0\$. How much would you be willing to pay for it? My whole intuition says it should be close to 10\$, but again replication says 5\$.
Can someone comment on this? I feel really annoyed by this, as I tend to believe the replication model (because it replicates option with certainty), but it contradicts my intuition.
-
– Alexey Kalmykov Dec 23 '12 at 8:39
@AlexeyKalmykov good paper I like it! – SRKX♦ Dec 25 '12 at 0:17
## 4 Answers
While another user touched on the hedging argument in order to reconcile your intuition with the correct value of the option he went off track (imho). I like to focus entirely on the hedging issue because it is key in understanding the differences in intuition and the fair price of such option. Unfortunately I have hardly ever found a simple 1-2 paragraph explanation in any of the PhD level papers. (should grad school students not start with the most basic understanding and be able to communicate such basics?). Anyway, here it goes:
2 important concepts apply that make all the difference:
1) In order to apply the fundamentals of derivatives pricing in a discrete space several conditions have to be met. One such condition is the condition of complete markets (or in the binomial case, completeness of the binomial model). The theorem of completeness states that "every derivative security can be replicated by trading in the underlying stock and money market. In a complete market, every derivative security has a unique price that precludes arbitrage" (Shreve, Stochastic Calculus for Finance I, p. 14, ed. 2004).
So, as long as those conditions are met, especially the one that requires the ability to trade in the underlying asset then you can price a derivative security as the discounted expected value applying risk-neutral probabilities. Thus, in your example the price of the option should be `$5`.
2) Now, why should the price be `$5` regardless of the real-world probabilities of attaining each future stock node? I think this is exactly where the intuition leads most astray. The answer is that if you use real-world probabilities to price an option then you also must discount the future expected payoffs at the appropriate real-world discount rate. An option on a stock that has a probability of 99% of going from 100-> 120 and 1% of going from 100-> 80 requires a much higher return, and thus the option on such an asset must be discounted at a much higher rate. Obviously it is hard to estimate the required returns investors demand on specific investment opportunities, and even harder to estimate the required return, used to discount future expected values, for an option on such an underlying asset. That is the whole point of trying to price derivatives with the construct of risk-neutral probabilities, because the discounting will be done through the risk-free interest rate one can receive as yield by investing in the money market/bond market (much can even be discussed about what risk-free nowadays means...). The finding that has been rewarded with a Nobel Prize has exactly been that: The proof that using risk-neutral probabilities and constructing risk-less portfolios that make one indifferent about future outcomes leads to the same solution as going the hard way by using real-life probabilities and discounting future expected cash flows at investors' required returns.
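To put numbers on this, here is a tiny one-step binomial sketch for the example in the question (it assumes a zero risk-free rate, which the question leaves implicit; the variable names are mine):

```
S0, Su, Sd, K = 100.0, 120.0, 80.0, 110.0
payoff_up, payoff_down = max(Su - K, 0.0), max(Sd - K, 0.0)   # 10 and 0

# Replication: delta shares of stock plus cash B reproduce the payoff in both states.
delta = (payoff_up - payoff_down) / (Su - Sd)                 # 0.25
B = payoff_up - delta * Su                                    # -20 (borrowed cash, r = 0)
price_by_replication = delta * S0 + B                         # 5.0

# The same number via the risk-neutral probability q (r = 0):
q = (S0 - Sd) / (Su - Sd)                                     # 0.5, not the real-world 0.99
price_risk_neutral = q * payoff_up + (1 - q) * payoff_down    # 5.0

print(price_by_replication, price_risk_neutral)               # 5.0 5.0
```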
-
+1. Excellent answer. – SRKX♦ Dec 24 '12 at 12:00
The reason the derivative is worth that much is that it is replicated with the stock. In your example, you have a security that in 99% of cases gives you \$10 and else \$0. However, what is the underlying that you use to hedge? Nothing. Therefore it must be worth the expected value, or \$9.90.
-
So imagine we know that this derivative is an option on this stock. I still don't find it intuitive that the option is worth 5\$ when with 99% chance you get 10\$. – user3378 Dec 22 '12 at 21:16
The reason is that we can hedge out the uncertainty by using the stock. – Andrew Dec 23 '12 at 4:33
I understand the hedging argument, but it conflicts with my intuition. Think of it this way: I want to sell you this option that gives you 10\$ in 99% of cases and 0\$ otherwise. Would you pay 6\$ for it? But please don't answer no just because the hedging argument tells me that 5\$ is the real price. – user3378 Dec 23 '12 at 4:36
please read the new answer below – Andrew Dec 23 '12 at 5:07
Andrew why didn't you simply edit this answer? – SRKX♦ Dec 24 '12 at 11:39
The reason is that we can hedge out the uncertainty by using the stock.
In our case, let's say a stock is trading at 100, with 75% chance it will be 110, and with 25% chance it will be 90. How much would you sell a 100 strike call for?
In 75% of cases, the payoff is 10 else 0. You would think the value of this should be 7.50. Wrong.
However, now view it from the following perspective:
The second I buy the call, I can short the stock. We can derive what the risk-neutral probabilities are, and in our case they are 50% up and 50% down. Now we can evaluate the value of the call as:
$$(S_{up} -K)^+ * p_{up} + (S_{down} -K)^+ * p_{down}$$
In our case, it will be $(110-100)*.5 = 5$. After I sell this asset for 5 dollars, I do the following: borrow half a share of stock.
In the case that it goes up to 110, I will lose exactly 5 dollars because I now have to buy back half a share of stock at 110 when I sold it at 100. $(110-100)*.5 = 5$, but I sold it to you for 5. I break even. Now in the case that the stock goes down to 90, I make exactly 5 dollars from the stock, but I have to pay you 10 dollars. However, I sold it to you originally for 5 and I also made 5. Again I break even. Therefore, ALL uncertainty is lost and the value of this option MUST be 5 or else, if you are willing to buy it for 7.50, I will sell it to you all day (quite literally :) ).
-
If you sell the call, you have to buy the stock. Otherwise you're not hedging anything. If you buy the call then your reasoning is correct. – SRKX♦ Dec 24 '12 at 11:50
You should correct your answer... – SRKX♦ Dec 25 '12 at 15:20
You simply have to think about this enough to become convinced; it is unintuitive but correct. Next you convince yourself that this does not mean you cannot believe options to be cheap or expensive, if you have a different opinion about the future return distribution of the underlying.
-
http://csgillespie.wordpress.com/tag/acceptance-rejection/
December 2, 2010
Random variable generation (Pt 2 of 3)
Filed under: AMCMC, R — Tags: acceptance-rejection, AMCMC, MCMC, R, random-numbers — csgillespie @ 5:44 pm
Acceptance-rejection methods
This post is based on chapter 1.4 of Advanced Markov Chain Monte Carlo.
Another method of generating random variates from distributions is to use acceptance-rejection methods. Basically, to generate a random number from $f(x)$, we generate an RN from an envelope distribution $g(x)$, where $\sup_x f(x)/g(x) \le M < \infty$. The acceptance-rejection algorithm is as follows:
Repeat until we generate a value from step 2:
1. Generate $x$ from $g(x)$ and $U$ from $Unif(0, 1)$
2. If $U \le \frac{f(x)}{M g(x)}$, return $x$ (as a random deviate from $f(x)$).
Example: the standard normal distribution
This example illustrates how we generate $N(0, 1)$ RNs using the logistic distribution as an envelope distribution. First, note that
$\displaystyle f(x) = \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \quad \mbox{and} \quad g(x) = \frac{e^{-x/s}}{s(1+ e^{-x/s})^2}$
On setting $s=0.648$, we get $M = 1.081$. This method is fairly efficient and has an acceptance rate of
$\displaystyle r = \frac{1}{M}\frac{\int f(x) dx}{\int g(x) dx} = \frac{1}{M} = 0.925$
since both $f$ and $g$ are normalised densities.
R code
This example is straightforward to code:
```myrnorm = function(M){
  while(1){
    u = runif(1); x = rlogis(1, scale = 0.648)
    if(u < dnorm(x)/M/dlogis(x, scale = 0.648))
      return(x)
  }
}
```
To check the results, we could call `myrnorm` a few thousand times:
```hist(replicate(10000, myrnorm(1.1)), freq=FALSE)
lines(seq(-3, 3, 0.01), dnorm(seq(-3, 3, 0.01)), col=2)
```
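We can also check the theoretical acceptance rate $r = 1/M \approx 0.925$ empirically (a quick sketch along the same lines):

```
x <- rlogis(1e5, scale = 0.648)
mean(runif(1e5) < dnorm(x) / (1.081 * dlogis(x, scale = 0.648)))  # should be close to 0.925
```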
Example: the standard normal distribution with a squeeze
Suppose the density $f(x)$ is expensive to evaluate. In this scenario we can employ an easy-to-compute function $s(x)$, where $0 \le s(x) \le f(x)$; $s(x)$ is called a squeeze function. In this example, we’ll use a simple rectangular function, where $s(x) = 0.22$ for $-1 \le x \le 1$ (and $s(x) = 0$ elsewhere). This is shown in the following figure:
The modified algorithm is as follows:
Repeat until we generate a value from step 2:
1. Generate $x$ from $g(x)$ and $U$ from $Unif(0, 1)$
2. If $U \le \frac{s(x)}{M g(x)}$ or $U \le \frac{f(x)}{M g(x)}$, return $x$ (as a random deviate from $f(x)$).
Hence, when $U \le \frac{s(x)}{M g(x)}$ we don’t have to compute $f(x)$. Obviously, in this example $f(x)$ isn’t that difficult to compute.
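A possible implementation of the squeeze version, along the same lines as the code above (just a sketch; the rectangular squeeze and the function name are ad hoc):

```
myrnorm_squeeze = function(M){
  while(1){
    u = runif(1); x = rlogis(1, scale = 0.648)
    g = M * dlogis(x, scale = 0.648)
    # cheap squeeze test first: s(x) = 0.22 on [-1, 1] and 0 elsewhere
    if(abs(x) <= 1 && u < 0.22/g)
      return(x)              # accepted without evaluating dnorm(x)
    if(u < dnorm(x)/g)
      return(x)              # full acceptance-rejection test
  }
}
```

Since $s(x) \le f(x)$, the extra test never accepts a value the original algorithm would have rejected; it only skips the evaluation of $f(x)$ when the cheap bound already guarantees acceptance.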
http://en.m.wikibooks.org/wiki/Cg_Programming/Unity/Silhouette_Enhancement | # Cg Programming/Unity/Silhouette Enhancement
A semitransparent jellyfish. Note the increased opaqueness at the silhouettes.
This tutorial covers the transformation of surface normal vectors. It assumes that you are familiar with alpha blending as discussed in Section “Transparency” and with shader properties as discussed in Section “Shading in World Space”.
The objective of this tutorial is to achieve an effect that is visible in the photo to the left: the silhouettes of semitransparent objects tend to be more opaque than the rest of the object. This adds to the impression of a three-dimensional shape even without lighting. It turns out that transformed normals are crucial to obtain this effect.
Surface normal vectors (for short: normals) on a surface patch.
### Silhouettes of Smooth Surfaces
In the case of smooth surfaces, points on the surface at silhouettes are characterized by normal vectors that are parallel to the viewing plane and therefore orthogonal to the direction to the viewer. In the figure to the left, the blue normal vectors at the silhouette at the top of the figure are parallel to the viewing plane while the other normal vectors point more in the direction to the viewer (or camera). By calculating the direction to the viewer and the normal vector and testing whether they are (almost) orthogonal to each other, we can therefore test whether a point is (almost) on the silhouette.
More specifically, if V is the normalized (i.e. of length 1) direction to the viewer and N is the normalized surface normal vector, then the two vectors are orthogonal if the dot product is 0: V·N = 0. In practice, this will rarely be the case. However, if the dot product V·N is close to 0, we can assume that the point is close to a silhouette.
### Increasing the Opacity at Silhouettes
For our effect, we should therefore increase the opacity $\alpha$ if the dot product V·N is close to 0. There are various ways to increase the opacity for small dot products between the direction to the viewer and the normal vector. Here is one of them (which actually has a physical model behind it, which is described in Section 5.1 of this publication) to compute the increased opacity $\alpha'$ from the regular opacity $\alpha$ of the material:
$\alpha'=\min\left(1, \frac{\alpha}{\left\vert\mathbf{V}\cdot\mathbf{N}\right\vert}\right)$
It always makes sense to check the extreme cases of an equation like this. Consider the case of a point close to the silhouette: V·N ≈ 0. In this case, the regular opacity $\alpha$ will be divided by a small, positive number. (Note that GPUs usually handle the case of division by zero gracefully; thus, we don't have to worry about it.) Therefore, whatever $\alpha$ is, the ratio of $\alpha$ to a small positive number will be larger than $\alpha$. The $\min$ function will take care that the resulting opacity $\alpha'$ is never larger than 1.
On the other hand, for points far away from the silhouette we have V·N ≈ 1. In this case, α' ≈ min(1, α) ≈ α; i.e., the opacity of those points will not change much. This is exactly what we want. Thus, we have just checked that the equation is at least plausible.
### Implementing an Equation in a Shader
In order to implement an equation like the one for $\alpha$ in a shader, the first question should be: Should it be implemented in the vertex shader or in the fragment shader? In some cases, the answer is clear because the implementation requires texture mapping, which is often only available in the fragment shader. In many cases, however, there is no general answer. Implementations in vertex shaders tend to be faster (because there are usually fewer vertices than fragments) but of lower image quality (because normal vectors and other vertex attributes can change abruptly between vertices). Thus, if you are most concerned about performance, an implementation in a vertex shader is probably a better choice. On the other hand, if you are most concerned about image quality, an implementation in a pixel shader might be a better choice. The same trade-off exists between per-vertex lighting (i.e. Gouraud shading, which is discussed in Section “Specular Highlights”) and per-fragment lighting (i.e. Phong shading, which is discussed in Section “Smooth Specular Highlights”).
The next question is: in which coordinate system should the equation be implemented? (See Section “Vertex Transformations” for a description of the standard coordinate systems.) Again, there is no general answer. However, an implementation in world coordinates is often a good choice in Unity because many uniform variables are specified in world coordinates. (In other environments implementations in view coordinates are very common.)
The final question before implementing an equation is: where do we get the parameters of the equation from? The regular opacity $\alpha$ is specified (within a RGBA color) by a shader property (see Section “Shading in World Space”). The normal vector `gl_Normal` is a standard vertex input parameter (see Section “Debugging of Shaders”). The direction to the viewer can be computed in the vertex shader as the vector from the vertex position in world space to the camera position in world space `_WorldSpaceCameraPos`, which is provided by Unity.
Thus, we only have to transform the vertex position and the normal vector into world space before implementing the equation. The transformation matrix `_Object2World` from object space to world space and its inverse `_World2Object` are provided by Unity as discussed in Section “Shading in World Space”. The application of transformation matrices to points and normal vectors is discussed in detail in Section “Applying Matrix Transformations”. The basic result is that points and directions are transformed just by multiplying them with the transformation matrix, e.g. with `modelMatrix` set to `_Object2World`:
``` output.viewDir = normalize(_WorldSpaceCameraPos
- float3(mul(modelMatrix, input.vertex)));
```
On the other hand normal vectors are transformed by multiplying them with the transposed inverse transformation matrix. Since Unity provides us with the inverse transformation matrix (which is `_World2Object * unity_Scale.w` apart from the bottom-right element), a better alternative is to multiply the normal vector from the left to the inverse matrix, which is equivalent to multiplying it from the right to the transposed inverse matrix as discussed in Section “Applying Matrix Transformations”:
``` output.normal = normalize(float3(
mul(float4(input.normal, 0.0), modelMatrixInverse)));
```
Note that the incorrect bottom-right matrix element is no problem because it is always multiplied with 0. Moreover, the multiplication with `unity_Scale.w` is not necessary since the scaling doesn't matter because we normalize the vector.
Now we have all the pieces that we need to write the shader.
### Shader Code
```Shader "Cg silhouette enhancement" {
Properties {
_Color ("Color", Color) = (1, 1, 1, 0.5)
// user-specified RGBA color including opacity
}
SubShader {
Tags { "Queue" = "Transparent" }
// draw after all opaque geometry has been drawn
Pass {
ZWrite Off // don't occlude other objects
Blend SrcAlpha OneMinusSrcAlpha // standard alpha blending
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
uniform float4 _Color; // define shader property for shaders
// The following built-in uniforms are also defined in
// "UnityCG.cginc", which could be #included
uniform float4 unity_Scale; // w = 1/scale; see _World2Object
uniform float3 _WorldSpaceCameraPos;
uniform float4x4 _Object2World; // model matrix
uniform float4x4 _World2Object; // inverse model matrix
// (all but the bottom-right element have to be scaled
// with unity_Scale.w if scaling is important)
struct vertexInput {
float4 vertex : POSITION;
float3 normal : NORMAL;
};
struct vertexOutput {
float4 pos : SV_POSITION;
float3 normal : TEXCOORD;
float3 viewDir : TEXCOORD1;
};
vertexOutput vert(vertexInput input)
{
vertexOutput output;
float4x4 modelMatrix = _Object2World;
float4x4 modelMatrixInverse = _World2Object;
// multiplication with unity_Scale.w is unnecessary
// because we normalize transformed vectors
output.normal = normalize(float3(
mul(float4(input.normal, 0.0), modelMatrixInverse)));
output.viewDir = normalize(_WorldSpaceCameraPos
- float3(mul(modelMatrix, input.vertex)));
output.pos = mul(UNITY_MATRIX_MVP, input.vertex);
return output;
}
float4 frag(vertexOutput input) : COLOR
{
float3 normalDirection = normalize(input.normal);
float3 viewDirection = normalize(input.viewDir);
float newOpacity = min(1.0, _Color.a
/ abs(dot(viewDirection, normalDirection)));
return float4(float3(_Color), newOpacity);
}
ENDCG
}
}
}
```
The assignment to `newOpacity` is an almost literal translation of the equation
$\alpha'=\min\left(1, {\alpha} / {\left\vert\mathbf{V}\cdot\mathbf{N}\right\vert}\right)$
Note that we normalize the vertex output parameters `output.normal` and `output.viewDir` in the vertex shader (because we want to interpolate between directions without putting more nor less weight on any of them) and at the begin of the fragment shader (because the interpolation can distort our normalization to a certain degree). However, in many cases the normalization of `output.normal` in the vertex shader is not necessary. Similarly, the normalization of `output.viewDir` in the fragment shader is in most cases unnecessary.
### More Artistic Control
While the described silhouette enhancement is based on a physical model, it lacks artistic control; i.e., a CG artist cannot easily create a thinner or thicker silhouette than the physical model suggests. To allow for more artistic control, you could introduce another (positive) floating-point number property and take the dot product |V·N| to the power of this number (using the built-in Cg function `pow(float x, float y)`) before using it in the equation above. This will allow CG artists to create thinner or thicker silhouettes independently of the opacity of the base color.
### Summary
Congratulations, you have finished this tutorial. We have discussed:
• How to find silhouettes of smooth surfaces (using the dot product of the normal vector and the view direction).
• How to enhance the opacity at those silhouettes.
• How to implement equations in shaders.
• How to transform points and normal vectors from object space to world space (using the transposed inverse model matrix for normal vectors).
• How to compute the viewing direction (as the difference from the camera position to the vertex position).
• How to interpolate normalized directions (i.e. normalize twice: in the vertex shader and the fragment shader).
• How to provide more artistic control over the thickness of silhouettes.
### Further Reading
If you still want to know more
• about object space and world space, you should read the description in Section “Vertex Transformations”.
• about how to apply transformation matrices to points, directions and normal vectors, you should read Section “Applying Matrix Transformations”.
• about the basics of rendering transparent objects, you should read Section “Transparency”.
• about uniform variables provided by Unity and shader properties, you should read Section “Shading in World Space”.
• about the mathematics of silhouette enhancement, you could read Section 5.1 of the paper “Scale-Invariant Volume Rendering” by Martin Kraus, published at IEEE Visualization 2005, which is available online.
Unless stated otherwise, all example source code on this page is granted to the public domain.
http://mathoverflow.net/questions/65352/model-category-structures-on-the-category-of-l-infty-algebras | ## Model category structures on the category of $L_\infty$-algebras
Let $k$ be a characteristic zero field. Then it is known that the forgetful functor $dgla(k)\to chain(k)$ from differential graded Lie algebras (over $k$) to cochain complexes induces a model category structure on $dgla(k)$ with "the same" fibrations and weak equivalences as on $chain(k)$, i.e., fibrations are surjective dgla morphisms and weak-equivalences are quasi-isomorphisms.
Also on the category of $L_\infty$-algebras over $k$ there is a forgetful functor $L_\infty(k)\to chain(k)$, which picks the linear part of an $L_\infty$-morphism. Then, on $L_\infty(k)$ we have two natural functors: the forgetful functor $L_\infty(k)\to chain(k)$ and the embedding $L_\infty(k)\hookrightarrow dgcu(k)$, where $dgcu(k)$ is the category of differential graded counitary cocommutative coalgebras over $k$.
This suggests we could have two natural model category structures on $L_\infty(k)$, and my question is: how are they related? do they coincide? in particular, is a morphism of $L_\infty$-algebras whose linear part is surjective a fibration in the $dgcu(k)$ model structure? is a quasi-isomorphism of $L_\infty$-algebras (i.e., a morphism of $L_\infty$-algebras whose linear part is a quasi-isomorphism) a weak-equivalence in the the $dgcu(k)$ model structure?
-
How do you define your model structure on dgcu(k)? – Kevin Lin May 18 2011 at 18:06
Hi Kevin, the model structure on $dgcu(k)$ is a bit less explicit than the one on $dgca(k)$. It is described, for instance, in section 3 of Vladimir Hinich's "DG coalgebras as formal stacks", but I guess it actually dates back to Dan Quillen's "Rational homotopy theory" paper. – domenico fiorenza May 18 2011 at 18:37
mathoverflow.net/questions/40772/… maybe useful to understand weak equivalences for coalgebras. – Daniel Pomerleano May 19 2011 at 7:08
1
To make sure that weak equivalences agree is exactly the reason the notion of weak equivalences for coalgebras is stronger than quasi-isomorphism. In the simply-connected, finite type case that Quillen was considering, rational homotopy theory tells you that weak equivalence is enough... – Daniel Pomerleano May 19 2011 at 7:21
## 3 Answers
I've been discussing this with Jonathan Pridham, who pointed my attention to his Unifying derived deformation theories, where a model category structure is described on a suitable subcategory $DG_{\mathbb{Z}}Sp(k)$ of $dgcu(k)$. (actually, the definition of $DG_{\mathbb{Z}}Sp$ is more general, but on a characteristic zero field $k$ it is naturally a subcategory of $dgcu(k)$).
The category $DG_{\mathbb{Z}}Sp(k)$ has a few remarkable properties: on the one hand it is Quillen equivalent to the larger category $dgcu(k)$ (endowed with the Hinich's model structure); on the other hand $L_\infty$-algebras over $k$ are precisely the fibrant objects in $DG_{\mathbb{Z}}Sp(k)$ and a morphism $\varphi$ between $L_\infty$-algebras is a fibration (resp. a weak equivalence) in $DG_{\mathbb{Z}}Sp(k)$ if its image via the "linearization" functor $L_\infty(k)\to chains(k)$ is a fibration (resp. a weak equivalence) in $chains(k)$, i.e. if the linearization of $\varphi$ is surjective (resp. a quasi-isomorphism).
-
1
Dear Domenico, you are answering your own question ? I am right ? ;) – Bruno V. May 26 2011 at 21:53
Eager to get the self-learner badge :) (no, really: that was to report here Jon's suggestion, I thought that was best suited as an answer rather than as a comment to my question) – domenico fiorenza Jun 7 2011 at 12:11
I am collecting some of that material from Jonathan Pridham's arcticle here: ncatlab.org/nlab/show/… – Urs Schreiber Feb 6 at 18:52
If you start with the category of $L_\infty$-algebras with "strict morphisms" ($L_\infty$-morphisms such that the higher components vanish), then you can put a model category structure by the classical operadic means: this category is the category of algebras over the operad $L_\infty:=\Omega( \text{Koszul dual of}\ Lie)$.
Now if you consider the category of $L_\infty$-algebras with $L_\infty$-morphisms, this is not encoded by an operad, but rather by the Koszul dual cooperad of Lie, which is equal to the linear dual of $Com$ up to suspension. One can prove that it is the category of fibrant-cofibrant objects of a certain model category on dg cocommutative coalgebras.
In this case, an $L_\infty$-morphism is an $L_\infty$-quasi-isomorphism if and only if its image under the "bar construction" between $L_\infty$-algebras and dg cocommutative coalgebras is a weak equivalence.
[You can find all the constructions and functors in Chapter 11 of http://math.unice.fr/~brunov/Operads.html. For the model category structure on dg coalgebras over the Koszul dual cooperad of an operad, please wait a little bit; I am typing this these days. :) ]
-
Dear Bruno, thanks a lot for this answer and for the pointer to your book! – domenico fiorenza Jun 7 2011 at 12:13
Wow, the first two paragraphs just cleared something up for me which I'd been quite confused by over on this thread: mathoverflow.net/questions/62386/…. Thanks for writing such a clear answer two years ago! – David White Feb 26 at 21:40
I think there is no model structure on the category of $L_\infty$-algebras. The category of $L_\infty$-algebras is nevertheless a category of fibrant objects.
Concerning your last question, yes, $L_\infty$-quasi-isomorphisms are weak equivalences between fibrant objects in $dgcu(k)$.
-
2
yes, it is the category of fibrant objects in $DG_\mathbf{Z}Sp(k)$, as I'm saying in my self-answer above. Actually in that answer I'm leaving quite implicit the fact that "category of fibrant objects" rather than "model category" is the right answer to my original question: thanks for having stressed this. – domenico fiorenza May 20 2011 at 14:01
http://math.stackexchange.com/questions/244302/quaternions-and-rotations-in-bbb-r3?answertab=oldest | # Quaternions and rotations in $\Bbb R^3$
I'm having some trouble understanding the following example in Armstrong's Groups and Symmetry:
"An element of $\mathbb{H}$ of the form $bi + cj + dk$ is called a 'pure quaternion'. Identify the set of all pure quaternions with $\mathbb{R}^3$ via the correspondence $bi + cj + dk \rightarrow (b,c,d).$ If $q$ is a non-zero quaternion, conjugation by $q$ sends the pure quaternions to themselves and induces a rotation of $\mathbb{R}^3$. This construction provides a homomorphism from $\mathbb{H} \smallsetminus\{0\}$ to $SO_3$. Its image is all of $SO_3$, its kernel is $\mathbb{R} \smallsetminus\{0\}$ and therefore $\bigl(\mathbb{H} \smallsetminus\{0\}\bigr) /\bigl(\mathbb{R} \smallsetminus\{0\}\bigr)$ is isomorphic to $SO_3$."
So suppose that $v$ is a pure quaternion. How exactly does $qvq^{-1}$ induce a rotation of $\mathbb{R}^3$?
-
## 4 Answers
Quaternions don't really capture the full structure of 3d space, but one can abuse the mathematical properties of it to hack the system and get a solution.
Geometric algebra is a system that subsumes quaternions in a straightforward way. To do this, it uses a "geometric product" of basis vectors, which is written by juxtaposition:
Under the geometric product:
$$e_i e_j = \begin{cases} 1, & i = j \\ -e_j e_i, & i \neq j\end{cases}$$
The geometric product is associative, and this allows one to build chains of such products in an unambiguous way. For example, $e_1 e_2 e_1 e_3 e_1 = -e_2 e_3 e_1$ (just switch the first $e_1$ with $e_2$ at the cost of a minus sign).
How does this connect with rotations and quaternions? Bear with me. We can use the geometric product to build linear operators. One such linear operator is the reflection:
$$\underline N(v) = -e_1 v {e_1}^{-1}$$
The effect of this linear operator is to take the $e_1$ component of any vector and negate it. This is a reflection.
Chaining two separate reflections together allows us to build up a rotation. This is because while a single reflection is not orientation preserving (it has determinant $-1$), two of them put together will have determinant $+1$. Let's consider another reflection that changes the $(e_1+e_2)/\sqrt{2}$ component. The result is
$$\underline R(v) = \frac{e_1 + e_2}{\sqrt{2}} e_1 v e_1 \frac{e_1 + e_2}{\sqrt{2}} = \frac{1}{2}(1 + e_1 e_2) v (1 - e_1 e_2)$$
This describes a particular rotation. The quantity $(1 + e_1 e_2)/\sqrt{2}$ is called a spinor or rotor, and $(1-e_1 e_2)/\sqrt{2}$ is its multiplicative inverse.
In general, though, a spinor or rotor can take the following form:
$$q = a + b e_1 e_2 + c e_2 e_3 + d e_3 e_1$$
which allows us to write any general rotation as
$$\underline R(v) = q v q^{-1}$$
This is the basic idea of quaternions, but instead of an abstract mathematical field, we have derived the form of rotations geometrically, built from reflections. The rotors themselves are linear combinations of scalars and "bivectors", which represent oriented planes in 3d space.
Now, why does using a "pure" quaternion for $v$ give us rotations of vectors? Well, the rotation operator I have described works for pure bivectors (which correspond to pure quaternions), so we can set $a=0$ for $v$ and carry out the proper multiplications. The result will then be
$$\underline R(v) = \underline R(b e_1 e_2 + c e_2 e_3 + d e_3 e_1) = f e_1 e_2 + g e_2 e_3 + h e_3 e_1$$
for some numbers $f, g, h$. One then converts this back to a vector through duality, multiplying on the left or right by $e_1 e_2 e_3$, which we call $i$. The result is
$$(e_1 e_2 e_3) (f e_1 e_2 + g e_2 e_3 + h e_3 e_1) = f e_3 + g e_1 + h e_2$$
This converts the result back to a vector. But the nice thing about geometric algebra is that you don't have to do this hijinks of going back and forth between vectors and bivectors. The rotation operator actually works on vectors--and moreover, this entire approach to rotations works in dimensions other than 3! This generalizes how we use complex exponentials in 2d, and you can use it in 4d to describe rotations, too. This is why I consider the GA approach to be far superior to quaternions, and far easier to connect the concepts to geometric interpretations.
-
You can just explicitly calculate $qvq^{-1}$ for a given $q=w+xi+yj+zk$, read off the matrix of the transformation of the coordinates $b$, $c$, $d$, check that it has determinant 1, and determine the kernel of the resulting homomorphism by direct calculation.
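For instance, a quick numerical sanity check (a rough sketch in R; `qmul` and `qconj` are just ad-hoc helper names):

```
# quaternion w + x i + y j + z k stored as c(w, x, y, z)
qmul <- function(p, q) c(
  p[1]*q[1] - p[2]*q[2] - p[3]*q[3] - p[4]*q[4],
  p[1]*q[2] + p[2]*q[1] + p[3]*q[4] - p[4]*q[3],
  p[1]*q[3] - p[2]*q[4] + p[3]*q[1] + p[4]*q[2],
  p[1]*q[4] + p[2]*q[3] - p[3]*q[2] + p[4]*q[1])
qconj <- function(q) c(q[1], -q[2], -q[3], -q[4])

# q = cos(t/2) + sin(t/2) k should rotate vectors by the angle t about the z-axis
t <- pi/2
q <- c(cos(t/2), 0, 0, sin(t/2))
v <- c(0, 1, 0, 0)                  # the pure quaternion i, i.e. the vector (1,0,0)
qmul(qmul(q, v), qconj(q))          # ~ c(0, 0, 1, 0): the vector (0,1,0), as expected
```

Repeating this with $v = j$ and $v = k$ gives the three columns of the rotation matrix, whose determinant is indeed 1.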
-
You're not supposing some particular pure quaternion $v$. Instead, fix some nonzero quaternion $q$ and consider the map $f:\mathbb R^3\to\mathbb R^3$ given by $$f(v) = \alpha(q\alpha^{-1}(v)q^{-1})$$ where $\alpha$ is the correspondence $bi + cj + dk \mapsto (b,c,d)$.
The book then claims that $f$ is a rotation. This really consists of two subclaims:
1. $f$ is well-defined, which requires that $q\alpha^{-1}(v)q^{-1}$ is always in the domain of $\alpha$ -- i.e., that $qxq^{-1}$ is a pure quaternion whenever $x$ is.
2. The function $f:\mathbb R^3\to \mathbb R^3$ happens to be a rotation.
Proving these two subclaims seems to be a "hidden exercise". Since everything is quite clearly linear in $v$, as well as symmetric in the three imaginary directions, it suffices to prove for an arbitrary $q$ that
1. $qiq^{-1}$ is a pure quaternion.
2. $qiq^{-1}$ has length 1.
3. $qiq^{-1}$ and $qjq^{-1}$ are orthogonal.
4. The matrix for $f$ has determinant 1. (From 2 and 3 you know that it must be $\pm 1$; it is also clearly a continuous function of $q$, and since the determinant is $1$ for $q=1$, it must be $1$ everywhere since $\mathbb H\setminus\{0\}$ is path connected).
It may help first to prove that $qvq^{-1}=pvp^{-1}$ if $p$ and $q$ are real multiples of each other, and thus without loss of generality you can assume $\|q\|=1$ and so $q^{-1}=q^*$.
-
The book also claims that any rotation is generated in this way, which is harder to prove it seems. – Peter Nov 25 '12 at 15:33
@Peter: Yes, unless one already knows a simple family of generators of SO(3). But if we do, all that is needed is to display some particular quaternions that map to them, such as $\cos\frac\theta2+i\sin\frac\theta2$ and $\cos\frac\theta2+j\sin\frac\theta2$. – Henning Makholm Nov 25 '12 at 16:05
The book does not mention generators of $SO_3$ in the chapters preceeding this part. The group $SO_3$ is simply introduced as the subgroup of $GL_n(\mathbb{R})$ consisting of the orthogonal matrices with determinant +1. Yet it tells the reader to check that the construction is a surjective homomorphism. However, I don't see any way to check this without finding a set of generators first. – Peter Nov 25 '12 at 16:16
The map $v\rightarrow qvq^{-1}$ is a rotation of $R^3$, viewed as the set of pure quaternions.
-
http://physics.stackexchange.com/questions/tagged/forces?sort=votes&pagesize=30 | # Tagged Questions
This tag is for the classical concept of forces, i.e. the quantities causing an acceleration of a body. It expands to the strong/electroweak force only insofar as they act comparable to ‘classical’ forces. Use [tag:particle-physics] for decay channels due to forces and [tag:newtonian-mechanics] or ...
7answers
3k views
### Does juggling balls reduce the total weight of the juggler and balls?
A friend offered me a brain teaser to which the solution involves a $195$ pound man juggling two $3$-pound balls to traverse a bridge having a maximum capacity of only $200$ pounds. He explained that ...
3answers
514 views
### Is gecko-like friction Coulombic? What is the highest known Coulombic $\mu_s$ for any combination of surfaces?
Materials with large coefficients of static friction would be cool and useful. Rubber on rough surfaces typically has $\mu_s\sim1-2$. When people talk about examples with very high friction, often ...
4answers
531 views
### Particles for all forces: how do they know where to go, and what to avoid?
Here's an intuitive problem which I can't get around, can someone please explain it? Consider a proton P and an electron E moving through the electromagnetic field (or other particles for other ...
2answers
650 views
### Why can't Humans run any faster?
If you wanted to at least semi-realistically model the key components of Human running, what are the factors that determine the top running speed of an individual? The primary things to consider would ...
8answers
5k views
### How exactly does curved space-time describe the force of gravity?
I understand that people explain (in layman's terms at least) that the presence of mass "warps" space-time geometry, and this causes gravity. I have also of course heard the analogy of a blanket or ...
3answers
600 views
### Why doesn't a fly fall off the wall?
Pretty simple question, but not an obvious answer at least not to me. I mean you can't just place a dead fly on the wall and expect it to stay there, he will fall off due to gravity. At first I ...
5answers
4k views
### Is there an equation for the strong nuclear force?
The equation describing the force due to gravity is $$F = G \frac{m_1 m_2}{r^2}.$$ Similarly the force due to the electrostatic force is $$F = k \frac{q_1 q_2}{r^2}.$$ Is there a similar equation ...
2answers
1k views
### Did the researchers at Fermilab find a fifth force?
Please consider the publication Invariant Mass Distribution of Jet Pairs Produced in Association with a W boson in $p\bar{p}$ Collisions at $\sqrt{s} = 1.96$ TeV by the CDF-Collaboration, ...
7answers
2k views
### What does it mean to say “Gravity is the weakest of the forces”?
I can understand that on small scales (within an atom/molecule), the other forces are much stronger, but on larger scales, it seems that gravity is a far stronger force; e.g. planets are held to the ...
1answer
368 views
### Do modern Formula One cars produce enough down-force to drive upside-down?
For example, if they were driving at top speed through a long tunnel, could they transition to and stay on the ceiling?
2answers
1k views
### Why isn't Higgs coupling considered a fifth fundamental force?
When I first learned about the four fundamental forces of nature, I assumed that they were just the only four kind of interactions there were. But after learning a little field theory, there are many ...
2answers
818 views
### Why do ships lean to the outside, but boats lean to the inside of a turn?
Small vessels generally lean into a turn, whereas big vessels lean out. Why do ships lean to the outside, but boats lean to the inside of a turn?
4answers
873 views
### How to sail downwind faster than the wind?
Recently a group set a record for sailing a wind-powered land vehicle directly down wind, and a speed faster than wind speed. Wikipedia has a page talking about it, but it doesn't explain exactly how ...
4answers
797 views
### Helicopter in an Elevator
You buy one of those remote control toy helicopters. You bring it into an elevator. The elevator goes up. Does the helicopter hit the floor or does the floor of the elevator push the air up into the ...
1answer
209 views
### Can a fly pierce itself onto a cactus needle?
Somebody on reddit posted a ridiculous picture today of a fly pierced onto a needle of a cactus: http://www.reddit.com/r/pics/comments/xarue/what_are_the_odds_of_this_accident/ Whilst the OP claims ...
9answers
5k views
### What's the core difference between the electric and magnetic forces?
I require only a simple answer. One sentence is enough... (It's for high school physics)
3answers
363 views
### Whether $m$ in $E=mc^{2}$ and $F=ma$ are both relativistic mass?
I know that $m$ in $E=mc^{2}$ is the relativistic mass, but can $m$ in $F=ma$ can also be relativistic? If the answer is yes, then can you tell me whether this equation is valid $E=\frac{F}{a}c^{2}$? ...
2answers
237 views
### What IS Color Charge?
This question has been asked twice already, with very detailed answers. After reading those answers, I am left with one more question: what is color charge? It has nothing to do with colored light, ...
1answer
788 views
### Are all central forces conservative? Wikipedia must be wrong
It might be just a simple definition problem but I learned in class that a central force does not necessarily need to be conservative and the German Wikipedia says so too. However, the English ...
4answers
663 views
### How many fundamental forces could there be?
We’re told that ‘all forces are gauge forces’. The process seems to start with the Lagrangian corresponding to a particle-type, then the application of a local gauge symmetry leading to the emergence ...
4answers
5k views
### Can the coefficient of static friction be less than that of kinetic friction?
I was recently wondering what would happen if the force sliding two surfaces against each other were somehow weaker than kinetic friction but stronger than static friction. Since the sliding force is ...
4answers
773 views
### Is the EmDrive, or “Relativity Drive” possible?
In 2006, New Scientist magazine published an article titled Relativity drive: The end of wings and wheels1 [1] about the EmDrive [Wikipedia] which stirred up a fair degree of controversy and some ...
1answer
248 views
### Is acceleration an average?
Background I'm new to physics and math. I stopped studying both of them in high-school, and I wish I hadn't. I'm pursuing study in both topics for personal interest. Today, I'm learning about ...
4answers
403 views
### Why do we still need to think of gravity as a force?
Firstly I think shades of this question have appeared elsewhere (like here, or here). Hopefully mine is a slightly different take on it. If I'm just being thick please correct me. We always hear ...
3answers
305 views
### Force through quantum mechanics
In classical physics force is: $$F=\frac {dp}{dt}$$ How about quantum mechanics? In Old Quantum Mechanics momentum is: $p=\hbar \cdot k$ so force will be: $$F=\hbar \frac {dk}{dt}$$ what does \$\frac ...
4answers
443 views
### Which direction will the yoyo move?
This question has been around the net for a while, and I haven't seen a good explanation for it: A yo-yo is initially at rest on a horizontal surface. A string is pulled in the direction shown in ...
2answers
1k views
### Tension in a curved charged wire (electrostatic force) - does wire thickness matter?
Consider a conducting wire bent in a circle (alternatively, a perfectly smooth metal ring) with a positive (or negative) electric charge on it. Technically, this shape constitutes a torus. Assume ...
3answers
488 views
### How can we make an order-of-magnitude estimate of the strength of Earth's magnetic field?
The source of Earth's magnetic field is a dynamo driven by convection current in the molten core. Using some basic physics principles (Maxwell's equations, fluid mechanics equations), properties of ...
1answer
172 views
### How would nucleosynthesis be different if the neutron were stable?
If the strong nuclear force were just 2% stronger, the neutron would be a stable particle instead of having a half life of about 13 minutes. What difference would that have made to Big Bang ...
4answers
790 views
### How far does a trampoline vertically deform based on the mass of the object?
If a baseball is dropped on a trampoline, the point under the object will move a certain distance downward before starting to travel upward again. If a bowling ball is dropped, it will deform further ...
3answers
534 views
### Is the normal force a conservative force?
Most of the time the normal force doesn't do any work because it's perpendicular to the direction of motion but if it does do work, would it be conservative or non-conservative? For example, consider ...
5answers
272 views
### Why don't we use the concept of force in quantum mechanics?
I'm a quarter of the way towards finishing a basic quantum mechanics course, and I see no mention of force, after having done the 1-D Schrodinger equation for a free particle, particle in an ...
2answers
2k views
### What is the force on the arms in a pushup?
What force do the arms have to generate to do a pushup? Let us look a this simplified model: The body can be represented by the green plank of mass B. Its angle to the ground is $\theta$. This ...
4answers
2k views
### What is the cause of the normal force?
I've been wondering, what causes the normal force to exist? In class the teacher never actually explains it, he just says "It has to be there because something has to counter gravity." While I ...
0answers
281 views
### Measurement of Tangential Momentum Accomodation?
(this question is a crosspost from theoretical physics.) I am using atomic force microscopy (AFM) for characterizing pores of the size of nanometers for application in gas flow. For this, knowing ...
3answers
415 views
### How does $F = \frac{ \Delta (mv)}{ \Delta t}$ equal $( m \frac { \Delta v}{ \Delta t} ) + ( v \frac { \Delta m}{ \Delta t} )$?
That's how it's framed in my Physics school-book. The question (or rather, the explanation) is that of the thrust of rockets and how the impulse is equal (with opposite signs) on the thrust-gases and ...
6answers
530 views
### Derive $\frac{\mathrm{d}}{\mathrm{d}t}(\gamma m\mathbf{v}) = e\mathbf{E}$ from elementary principles?
It is experimentally known that the equation of motion for a charge $e$ moving in a static electric field $\mathbf{E}$ is given by: \frac{\mathrm{d}}{\mathrm{d}t} (\gamma m\mathbf{v}) = ...
2answers
8k views
### How much force is in a keystroke? (estimated, of course)?
I'm a software developer, and I need to calculate the estimated amount of force expended typing stored text. Preferrably in some interesting way. (i.e. the force exerted on keys thus far is enough to ...
3answers
1k views
### Physics of a skateboard ollie
Does anyone have a good explanation of the physics and vectors of force involved in the skateboarding trick the ollie (where the skater jumps and causes the skateboard to rise off the ground with ...
3answers
4k views
### How does the 'water jet pack' work?
So I was cruising around at YouTube and saw this sweet vid, and as I was watching started to wonder: "How is this possible?". For a little bit of background, in case you decide to not watch the ...
2answers
547 views
### What physical forces pull/press water upwards in vegetation?
Each spring enormous amounts of water rise up in trees and other vegetation. What causes this stream upwards? Edit: I was under the impression that capillary action is a key factor: the original ...
2answers
301 views
### Why don't we consider centrifugal force on a mass placed on earth?
Let us say a block of mass is placed on the surface of earth. Then while drawing the forces on that body, we say: Force F = mg acting towards the center of earth Normal reaction N offered by the ...
3answers
226 views
### Tidal force on far side
I have a question about tidal forces on the far side of a body experiencing gravitational attraction from another body. Let's assume we have two spherical bodies $A$ and $B$ whose centers are $D$ ...
2answers
150 views
### Is it possible to use a balloon to float so high in the atmosphere that you can be gravitationally pulled towards a satellite?
A recent joke on the comedy panel show 8 out of 10 cats prompted this question. I'm pretty sure the answer's no, but hopefully someone can surprise me. If you put a person in a balloon, such that ...
1answer
358 views
### Degeneracy Pressure, What is it?
There has been numerous question, some violent even in physics@SE regarding PEP and EM forces. But what baffles me is what is degeneracy pressure? I know there are 4 fundamental forces- EM, gravity, ...
1answer
622 views
### How does placing objects in liquids affect the mass?
I was dazing off in my physics class when I came up with this question and I was wondering about it all day. I could not provide myself with an adequate solution, so here I am asking the forum about ...
1answer
272 views
### Is Pauli-repulsion a “force” that is completely separate from the 4 fundamental forces?
You can have two electrons that experience each other's force by the exchange of photons (i.e. the electromagnetic force). Yet if you compress them really strongly, the electromagnetic interaction ...
2answers
584 views
### Can a force in an explicitly time dependent classical system be conservative?
If I consider equations of motion derived from the pinciple of least action for an explicilty time dependend Lagrangian $$\delta S[L[q(\text{t}),q'(\text{t}),{\bf t}]]=0,$$ under what ...
1answer
462 views
### Ping-pong ball pontoon
Imagine a vertical pipe (both ends opened) in the water. Drop several ping-pong balls into pipe and cover them with a cylinder. When you have enough balls, the cylinder will float. Now start adding ...
4answers
4k views
### Rope tension question
If two ends of a rope are pulled with forces of equal magnitude and opposite direction, the tension at the center of the rope must be zero. True or false? The answer is false. I chose true though and ...
http://www.physicsforums.com/showthread.php?s=0e9396f867ed4a7661124cc1417d0819&p=4252441
## Imposing Klein-Gordon on Dirac Equation
Hey,
My question is on the Dirac equation, I am having a little confusion with the steps that have been taken to get from this form of the Dirac equation:
$$i\frac{\partial \psi}{\partial t}=(-i\underline{\alpha}\cdot \underline{\nabla}+\beta m)\psi$$
to
$$-\frac{\partial^2 \psi}{\partial t^2}=[-\alpha^{i}\alpha^{j}\nabla^{i}\nabla^{j}-i(\beta\alpha^{i}+\alpha^{i}\beta)m\nabla^{i}+\beta ^{2}m^{2}]\psi$$
I believe we are imposing the Klein-Gordon equation (maybe not) on the Dirac equation to determine the conditions required for a free-particle description via the Dirac equation; however, I cannot see how this is done in the steps above.
I'm not exactly sure what the ∇^i and the α^i mean... We are told we apply the 'operator' to both sides of the top equation - I'm not sure what operator this is - I'm guessing it's the Klein-Gordon operator though.
Any help would be appreciated on how to get from equation 1 to equation 2,
Thanks,
SK
Ok I've just realised we must square it to attain the second equation, though I'm still unsure what the ∇^i's represent and ditto for the alpha's. I'll keep having a look.
Actually I've figured it now, it's just an index to sum over, I think! SK
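For reference, here is the step spelled out (standard textbook algebra, with repeated indices $i, j$ summed over): apply $i\,\partial/\partial t$ to both sides of the first equation, and use the equation once more on the right-hand side,
$$-\frac{\partial^2 \psi}{\partial t^2}=(-i\alpha^{j}\nabla^{j}+\beta m)(-i\alpha^{i}\nabla^{i}+\beta m)\psi=[-\alpha^{j}\alpha^{i}\nabla^{j}\nabla^{i}-i(\beta\alpha^{i}+\alpha^{i}\beta)m\nabla^{i}+\beta^{2}m^{2}]\psi$$
Demanding that this reproduce the Klein-Gordon equation $-\partial_t^2\psi=(-\nabla^2+m^2)\psi$ then forces
$$\alpha^{i}\alpha^{j}+\alpha^{j}\alpha^{i}=2\delta^{ij},\qquad \alpha^{i}\beta+\beta\alpha^{i}=0,\qquad \beta^{2}=1$$
which is why the $\alpha^{i}$ and $\beta$ cannot be ordinary numbers and must be matrices (4×4 at the smallest, in 3+1 dimensions).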
It is treated in the last chapter of Baym's 'Quantum Mechanics'.
http://mathhelpforum.com/calculus/88361-sampling-smooth-function.html
1. ## sampling a smooth function
Hi everybody,
First, I am sorry if it is not the right place to ask my question. Could someone please help me with the following question:
After sampling a smooth function (or at least a continuously differentiable function) like f(.) with a small enough sampling time, is it a correct assumption to say that f(n+1) (the sample at the (n+1)th instant) is approximately equal to f(n) (the sample at the nth instant)? Is there any theorem in this respect? Thank you very much for your kind help.
2. ## Understanding
Can we have an example please?
I think what you are asking is: given a function $f$ that is differentiable at $x=a$ and a maximum discrepancy $\delta>0$, find a corresponding $\epsilon>0$ such that $|f(a+\epsilon)-f(a)|<\delta$, and yes, this will always be the case, but it will be different for each function and $\delta$.
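To make the answer above quantitative: if $f$ is continuously differentiable and sampled with period $T_s$, the mean value theorem gives $|f((n+1)T_s)-f(nT_s)| \le T_s \cdot \max|f'|$, where the maximum is taken over the sampling interval. So whenever $f'$ is bounded, consecutive samples can be made uniformly close by choosing $T_s$ small enough; for a merely continuous $f$ you still get closeness at each point, but not necessarily at a uniform rate.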
http://physics.stackexchange.com/questions/6220/can-a-car-get-better-mileage-driving-over-hills/7222 | # Can a car get better mileage driving over hills?
Two towns are at the same elevation and are connected by two roads of the same length. One road is flat, the other road goes up and down some hills. Will an automobile always get the best mileage driving between the two towns on the flat road versus the hilly one, if operated by a perfect driver with knowledge of the route? Is there any hilly road on which a better mileage can be achieved?
-
## 8 Answers
"Is there any hilly road on which a better mileage can be achieved?"
The answer is: YES.
Let's just use three simple, reasonable assumptions:
1) There is rolling friction.
2) There is air friction, which increases with speed.
3) In the graph of power vs. gas consumption, there is a peak (a maximum).
note: (power)/(gas/time) has the same units as energy/gas. If there were no air friction and rolling friction were constant, we'd want to run the engine at this peak until we could just coast the rest. However, because there is air friction, it may actually be better for gas mileage to run below this power output. That is the trade-off on the flat road.
An appropriately shaped hill lets us beat this trade-off, because we can go into a gear such that we move very slowly, essentially putting all our engine power into gravitational energy. We then just coast to the hill (or after the hill), with the engine off.
Viewed this way, the answer is obvious, since we are essentially using the hill as a gravitational battery. It lets us beat the flat straight away, because we can give that energy back at 100% efficiency compared to the < 100% efficiency of the motor in the "air friction with speed" vs "power output vs gas consumption" tradeoff we are forced into with the flat road.
The best road, I believe, would then have just enough slope to keep the car rolling against friction all the way, with a steep hill right at the end.
EDIT:
Some of the comments and other answers to this question are quite bizarre. To make it clear, I am not claiming that all hilly roads are better. The question asked "Is there any hilly road on which a better mileage can be achieved?" The answer is YES. Nor am I claiming that this depends only on the road, as it clearly depends on how the driver decides to run the engine for speeds along the route. Again, the question seems clear on this, as it says to consider the car "operated by a perfect driver with knowledge of the route". So I am not sure where the confusion is coming from. So I am extending discussion here, in hopes to clear that confusion up.
There are three places where stored or mechanical energy is lost to heat: the engine performance curve, the rolling friction of the tires on the road, and the air friction. The tire friction is to good approximation a constant, while the air friction increases with speed. Putting this all together:
the total mechanical energy used to get from A to B: $E = mgh|_A^B + \int_A^B (F_{air} + F_{rolling}) ds$
Since A and B are at the same elevation in this problem, the gravitational potential energy terms sum to zero and are the same for both routes. To good approximation the rolling friction is a constant, then if the length of the road is L: $E = L F_{rolling}+ \int_A^B F_{air}(v) \ ds$
So the rolling friction is the same between the two routes. The only thing left then is the engine performance curve (the efficiency at which we can get the mechanical energy from the gas) and the air friction. The answers neglecting the engine performance curves, or air friction, are thus neglecting the real difference between these routes. I hope I made that abundantly clear now.
It is easy to see that if the engine performance curve was flat (a constant), then on a flat road, we'd want to go very slow (the limit v->0 is the best driving strategy for gas mileage in this unrealistic case). However, in realistic cases (and as I took as one of my three assumptions in my answer above) the engine performance curve will have a peak. There is now a trade off between how far off peak performance we run the engine, and how much mechanical energy we waste on air friction. The details of solving this require detailed knowledge of the engine performance curve, but the general result that there is a trade off is clear regardless. The issue is that on a flat road: the engine can only generate mechanical energy in the form of kinetic energy, and kinetic energy in turn causes more energy loss in air friction.
Now consider the case where the road slopes downward just enough that we can coast all the way to a hill at the end, and we only need to use the engine to get up that hill. (Or, alternatively, as another poster suggested, a hill at the start, and then coast the rest of the way.) When running the engine now, we can generate mechanical energy in the form of kinetic energy and gravitational energy. So a hill allows us to run the engine closer to its peak performance, since we can put the engine output into gravitational energy (which suffers no loss over the trip) as opposed to just kinetic energy (from which we lose energy to air friction).
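To make this concrete, here is a minimal numerical sketch of the trade-off. None of it comes from the answer itself: the vehicle mass, the drag and rolling-resistance numbers, and the Gaussian-shaped efficiency curve are all assumptions picked purely for illustration, and the hilly route is modelled as the "coast on a shallow downgrade, then climb steeply near peak-efficiency power" road described above.

```python
# Toy comparison of fuel energy on a flat route vs. a downgrade-then-climb route.
# All numbers (mass, drag, rolling resistance, efficiency curve) are assumptions.
import math

m, g = 1200.0, 9.81          # vehicle mass [kg] and gravity (assumed)
L = 10_000.0                 # route length [m]
C_rr = 0.01                  # rolling-resistance coefficient (assumed)
rho, Cd, A = 1.2, 0.3, 2.0   # air-drag parameters (assumed)

def drag(v):                 # aerodynamic drag force at speed v
    return 0.5 * rho * Cd * A * v * v

def efficiency(power):       # crude engine curve: efficiency peaks near 30 kW (assumed)
    return 0.33 * math.exp(-((power - 30e3) / 25e3) ** 2)

def fuel_flat(v):
    """Fuel energy for the flat route driven at constant speed v."""
    force = C_rr * m * g + drag(v)
    return force * L / efficiency(force * v)

def fuel_hilly(v_climb, climb_len=500.0):
    """Shallow downgrade (engine off; slope just balances rolling friction,
    drag during the slow coast is neglected) followed by a steep climb of
    length climb_len back up to the starting elevation.  Fuel is burnt only
    on the climb, where the engine can run near its efficiency peak."""
    h = C_rr * (L - climb_len)                    # height lost while coasting
    force = m * g * h / climb_len + C_rr * m * g + drag(v_climb)
    return force * climb_len / efficiency(force * v_climb)

best_flat = min(fuel_flat(v) for v in range(3, 40))
print("flat route, best constant speed: %.1f MJ of fuel energy" % (best_flat / 1e6))
print("downgrade-then-climb route:      %.1f MJ of fuel energy" % (fuel_hilly(13.0) / 1e6))
```

With these made-up numbers the downgrade-then-climb route uses roughly a quarter of the fuel of the best constant-speed run on the flat road, entirely because the engine only ever operates near its efficiency peak; replace the efficiency curve with a constant and the advantage disappears, which is exactly the point being argued.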
-
I'm absolutely shocked by the level of misunderstanding of basic physics concepts in this answer and even more shocked that three people chose to upvote it. – delete Mar 21 '11 at 15:38
@Master of Disaster: Can you please be specific in what you believe is a mistake. Currently your comment is not helpful, for I've reread this answer and it sounds correct to me. – John Mar 22 '11 at 2:06
"essentially giving all our engine power into gravitational energy" . At that point you are in the less economic part of the car engine, high rpms slow mileage, not peak performance that you assume. Most of the gas is consumed in heating up the engine so it is not just against gravitational energy. This heat you will not get back downhill, you can only get the gravitational potential to kinetic back. The gas saved from having 0 rpms on the engine versus the economic rpms on the flat will not compensate for this. – anna v Mar 22 '11 at 8:57
@Edward I drive very hilly roads half the week. I am not talking equal speed here. One incline I have to do in 1st gear, the rpms hit the roof, and there is one economic rpm range for cars. The engine gets hot. If I go to lower rpms the engine will stall. – anna v Mar 22 '11 at 11:47
@Edward, your claim that the gravitational potential energy sums to zero can be challenged on earth. After ascending a hill, you can not get all that stored up energy back as it will be consumed by increased air friction or brake heat on the way down, thus not being used fully to propel the car. You will be overcoming more gravity on the way up than you can consume on the way down. I would say that calling it a zero sum is valid only in a vacuum. This may be a large enough discrepancy in your logic to sway the conclusion. – Steve H Mar 22 '11 at 15:21
Unfortunately the answer very much depends on the poorly specified "an automobile" portion of the question.
We'll use a riding lawnmower to prove the hilly case:
• Assume the flat route is flat.
• Assume the "hilly" route is not more than perhaps a dozen degrees going up to the midpoint, then similar angle going down to the endpoint - just barely steep enough that gravity is sufficient to overcome the rolling friction of the lawnmower.
• Assume that at the speed the lawnmower is traveling, about 5-10MPH, there is no significant air friction.
A riding lawnmower's engine and gear train is so inefficient that it will consume the same amount of gas riding on the specified slight incline as it will riding on a flat surface.
For the first half of both routes, the same amount of fuel is consumed. For the second half, the flat route requires more fuel, but the hill route does not. Therefore with the efficiency of the engine assumed, the hill route could take up to 50% less fuel than the flat route.
If we scale this up to a regular car, then we need only pay attention to the following:
• Does air resistance come into play
• Is the engine efficiency different between the two routes
Internal combustion engines have a lower bound for energy output. You can't get an automobile engine to put out some fraction of this lower bound when it's running - it's spending the same amount of fuel whether you're drawing 100 watts or 500 watts. Once you get into the higher end, the engine consumes fuel at a rate that corresponds to output energy. Fuel consumption only increases from this lower bound.
This is why the riding lawnmower is an easy case - the total range of efficiency of the riding lawnmower is so small, that there's no point in that range where going faster (therefore spending less fuel due to time spent driving) will use less total fuel.
However, some cars will have such low air resistance, and such high efficiency at higher speeds, that the flat route will consume less than the hilly route, because even the slight slope we're positing will make a 2x difference in the energy required to drive the whole route.
Therefore the question is inadequately specified to answer for the general case.
-
If the hill permits hypermiling to be done. In the most ideal case, at the top of the hill the driver would turn off the ignition and shift into neutral, and descend unpowered. [Don't try this, as your power brakes and power steering won't work as expected.] I assume the hill is just the right grade to maintain speed unpowered. The advantage comes from avoiding engine braking (losses within the engine/transmission) during the downhill leg. Of course this also requires the engine to not lose much efficiency during the climb. Hybrid cars work like this: charging the battery resembles the uphill climb, and using the battery in "stealth" mode resembles the downhill portion. So hypermiling is simply using gravity as a low-tech (but very efficient) hybrid. If you leave the car in gear on the downhill, and thus suffer from the engine braking during the descent, I doubt you get the benefit. So if you are a legal driver, you may only see the mileage improvement if you drive a hybrid, which is designed to shut the internal combustion motor off when it doesn't need the power. [Of course the hybrid has the additional advantage that, if the hill is too steep, the excess energy (within limits) can be captured by the battery.]
I'm assuming here that speed is constant. It is almost a trivial exercise to show that we can trade off travel time versus fuel consumption. But modern society rarely gives us the option (especially with other cars on the road). If we assume linear engine braking, then the total engine braking losses for a given trip are proportional to the total number of revolutions the engine made. Running at a fixed RPM, but at higher torque, and then in neutral with the engine either idling (or turned off) minimizes the total number of engine revolutions for the trip.
-
damn, I can't believe you beat me writing this! I saw this after submitting mine... – Dov Mar 19 '11 at 15:23
I've had similar things happen to me. I think I had one case, where I and another SE responder must have been editing/submitting similar answers nearly simultaneously. I wish stack exchange had a merge mechanism. Trying to improve a given explanation with comments is pretty imperfect. – Omega Centauri Mar 19 '11 at 19:44
This is the only practical answer. – Carl Brannen Mar 22 '11 at 0:27
I am not sure whether this is primarily a "car" question or a "physics" question. As a physics question there are a few points to note: the fact that the roads are the same length and that one is hilly means that the flat road is necessarily curved between the two towns. Also the time to travel is not relevant here.
So the flat drive must use petrol/gas for the entire journey between the two towns. However the hilly route only needs to use petrol/gas for the uphill portions: it can glide downhill. So the hilly route will use less fuel if the following inequality is satisfied:
Fuel used uphill < Fuel used on Flat (at any speed)
EDIT: (With more explanation of the physics.)
The physics of this situation involves the use of internal power, for which the elementary mechanics examples are misleading. Elementary mechanics examples consider objects moving in frictionless planes. In order to set such an object moving, it requires some impulse I for a short time to reach a velocity $v$. Thereafter the object can travel from A to B without further energy. Indeed it can travel to any C (collinear with AB) without any further energy. This fact, together with the elementary "conservation of energy" equation in the form
E = T + V
is the basis of both elementary mechanics examples and also fundamental physics examples.
What is different about the "internal power" scenario is that, for reasons to be discussed below, movement from A to B requires energy. Furthermore movement from A to C will require a different amount of energy in general. In elementary physics examples this energy comes from the external potential V, but in "internal power" systems it comes from some compact fuel. For these reasons, although all the basic laws of physics still apply, simplistic applications of "conservation of mechanical energy" do not apply.
The reason why movement from A to B requires energy is that the movement is not over a frictionless plane, but over a friction bearing road surface. The friction contributions are from:
(a) Tyre against road - probably a constant
(b) Internal friction in vehicle - proportional to $v$
(c) Air resistance - proportional to $v^2$
For these reasons a constant use of fuel is required to continue motion; with no additional use of fuel the vehicle will stop (on a flat road).
So the equation has to take account of the amount of fuel used (=amount of energy used), and some approximations need to be made:
So if a journey from A to B uses F units of fuel, a journey of twice the length uses 2F fuel.
The solution proposed here uses 2F fuel for the flat journey, and the uphill journey will use F + mgH units of fuel (counting fuel in energy units). This needs to be less than 2F. However there is still energy transfer in the downhill part resulting from the mgH potential energy transferred to kinetic energy: but this does not use fuel / internal power in the extreme case.
Recently I have just seen a TV programme showing "gravity racer" cars: no fuel on the downhill and speeds of 40mph+ were reached without much braking - lots of energy transfer of course.
-
It is both, and I agree with all of your assertions. Suppose it is possible to coast downhill at a 10% grade without using any fuel. Does it require twice as much fuel to climb uphill at the same grade as it takes to drive flat for the same distance? – Dan Brumleve Mar 17 '11 at 11:26
One point is that it is not the same distance, but only (perhaps) half the distance. Still I admit that one could develop some more equations to improve the model description. – Roy Simpson Mar 17 '11 at 11:37
In my previous comment I was supposing that the uphill and downhill parts of the hilly road are of the same length (each part half the length of the flat road): "/\". This is not necessarily the case but I wonder if such a road could satisfy the question positively for some real car? – Dan Brumleve Mar 17 '11 at 11:56
This is complete nonsense, absolute balderdash. – delete Mar 21 '11 at 15:40
I have updated the physics description here, to explain that aspect more. – Roy Simpson Mar 21 '11 at 22:54
Answer: YES
It is possible under the following circumstances.
1) The energy spent on itself by the engine (internal friction of the engine) is constant. (This is true when the engine runs at constant speed regardless of the speed of the vehicle, maybe by using an infinitely variable gear.) Assume the energy spent per second on internal friction is X and the (external) work done is Y. Total fuel consumed is (X + Y).
2) The efficiency of the engine is very low under normal working conditions. That is Y/(X+Y) is very low (say 10%) at 50 km per hour speed.
3) Total frictional force is very high at higher speed. Lower at low speed.
Condition (1) and (3) means that there will be point of maximum efficiency - say at 20 km/hour speed.
Now construct a road with a downward slope such that the vehicle just rolls down without the engine (engine off) till an intermediate point. From that intermediate point the road slopes upward with the maximum steepness (depending on the grip of the tyres). That means the intermediate point is much closer to the upward-sloping end.
Now the vehicle runs till the intermediate point without any fuel cost, and the remaining section with only a marginal increase in fuel cost compared with a vehicle running on a flat road (as most of the energy spent is on internal friction).
In this case the efficiency of the downhill/uphill road is higher than that of a flat road.
I assume the mathematical calculation behind my arguments are obvious. If not let me know I can elaborate that further.
-
In a simple minded way I would say that the flat road for the same miles and the same speed would be more economical:
Moving the car uphill against gravity takes extra energy which is not being gained coming downhill because of braking (constant speed) and not turning off the engine for safety.
For each car there is an economic rpm, mine is at 2500 rpm. One can use that on a flat road, but a hill needs lower gears going up, not the economic rpms, and this is not gained back due to braking downhill, so I cannot see how a hilly road can be more economical in any case.
edit: Looking at the preferred answer which claims gains by the method of driving the car I searched for a curve of rpms to fuel consumption and power output. Surprisingly there are not many out in the links. Here is the only one I found probably scanned from an engineering textbook. I observe that consumption goes up with rpm. Going uphill needs more rpm. I will also add that most of the inefficiencies are in heating the engine, and the higher rpm the hotter the engine.
Let me simplify the problem: if I pump 100 litres of water uphill and let it run back down, will the kinetic energy of the running water be greater than the energy utilized in pumping it up? In the best of conditions, without the inefficiencies of the pump, it will break even.
In the car analogue, then, one is playing with the inefficiencies of flat versus hill, and maybe a computer program with real car data would give a definitive answer.
-
"I cannot see how a hilly road can be more economical in any case." - yes, absolutely right. – delete Mar 21 '11 at 15:41
You are restricting yourself to a constant speed. That defeats the point of the question. So you are oversimplifying the question and then mistakenly concluding that in general "I cannot see how a hilly road can be more economical in any case". As one can see from Master of Disaster's answer, he is restricting himself to an even more limited case (no friction) and trying to conclude something general. These in my opinion are horrific mistakes in logic. – John Mar 22 '11 at 2:03
@John I cannot think of a more limiting case than energy conservation, which is what I am talking about. I just simplified the description to show up this. All the rest is dressing, like my uncle trying to make a perpetual motion machine with bicycle wheels and gears. Seems car enthusiasts do not believe in energy conservation !! – anna v Mar 22 '11 at 5:34
@Anna No one is claiming you can make a perpetual motion machine. The issue is that you are making a mistake in logic and essentially arguing against a straw man, and coming to an incorrect conclusion. Energy conservation doesn't let you make the general claim you are making. As just one (of many) simple examples: consider a hill shaped such that the friction is enough that you don't need to use a brake like you assume. The main issue you are missing is the effect of the efficiency curve for an engine. See Edward's and some of the other answers for more information. – John Mar 22 '11 at 6:09
@John if you do not need to use the brake, you will just break even. On this hill you will be spending more gas going up than on the level, and you will be on the inefficient rpms of your car. You cannot get more energy than you put in. Nobody seems to consider that the hilly road is a smaller distance than on the straight, for the same miles, to increase those miles per gallon for the straight part. – anna v Mar 22 '11 at 7:16
absolutely.
If you assume what Edward assumed, I can give you the optimal path. I'm assuming a gas engine, which has an optimal fuel curve and which uses fuel whenever it's on, even in neutral.
Climb a hill immediately, at a speed governed by the ratio of gas used to climb it/gas used by idling. There will be some optimal climb rate. Example: if you burn 1 gal/hr, and climbing the hill at 200mph costs you 1 gal in 2 minutes (purely imaginary example) then the right answer is somewhere less than an hour, but more than two minutes, whatever gives you the minimum fuel use. Then, turn off the engine, and crawl downhill toward the goal. The optimal shape is the minimum hill that will overcome rolling friction at a crawl.
-
Do the uphill and downhill parts of the road have the same slope, or equivalently (because of the same-elevation constraint), do they have the same length? If not, what determines the ratio? Also, are you saying that the most efficient mileage on the flat road is obtained by driving at the slowest possible speed? (I see that this minimizes air resistance but I just want to confirm). Favorite answer so far but having some trouble working out a specific example. – Dan Brumleve Mar 20 '11 at 6:02
This answer tries to handle the question with a concrete example, but despite including numbers doesn't actually do so. You essentially just say "There will be some optimal climb rate", and the end with "Then, turn off the engine, and crawl downhill toward the goal." The phrase "at a speed governed by the ratio of gas used to climb it/gas used by idling" is also troubling as it doesn't even give a speed unit wise, nor does it make sense why that ratio is useful (the important issue is the power vs gas efficiency peak ... not the gas idling rate). – Edward Mar 20 '11 at 12:14
I am curious in all these answers, where is energy conservation? Going uphill, no matter what speed, requires extra energy then on a flat road. This could be gained back going downhill, if one could turn the engine off and ignore speed limits, which cannot be the case. Anyway at most one would just break even as far as energy goes. Am I wrong that mileage is gas consumed, i.e. energy per mile? – anna v Mar 20 '11 at 14:08
@anna: The gravitational energy is the same at the beginning and end of the trip, since the beginning and end are at the same elevation. So of course the energy put into gravitational energy can be retrieved going down the hill. I don't understand why you are worried about speed limits, for in the ideal case the slope would be small enough that the car is creeping down it. Remember, we're trying to answer if the hilly route can EVER be better, or whether the flat route is always better. It turns out that the hilly route can in at least some cases beat out the flat route. – John Mar 20 '11 at 16:15
This is a question which might be best for “Click and Clack” on “Car Talk.” The question is a bit unclear on a couple of things. So I will assume that the path length on the straight road and the hilly road is the same. I will also assume that the average velocity of the drive on the two roads is the same. Clearly if the straight road is much longer then the hilly road would be more economical. So suppose you drove the hilly road and you take your foot off the fuel on the descending portions of the drive. The idea is then that the potential energy you “banked” in driving up is converted to kinetic energy, and your gas mileage on the trip down is near “infinity.” If you use the brakes a lot on the way down, which might be advised for safety, then some of that energy is dissipated as heat.
Would the two be equal? I don’t think so. The reason is thermodynamics, for a good chunk of the energy used to climb the hills is lost as heat, so you recover a rather small amount of that energy as kinetic energy on the way down. We might think of the thermodynamics of the hilly drive as the thermodynamic losses of the straight drive plus the thermodynamic losses incurred in raising the potential energy of the car on the various hills. This is even more so if the engine idles on the way down, which is advised with modern cars. An old VW bug might save a bit of this energy loss with old-fashioned brakes.
-
Reference to reality and to thermodynamics is not appreciated in this thread. Good luck! – Georg Mar 21 '11 at 18:40
So let me be sure what you're assuming. The two roads run between two cities. One road is straight and flat. The other is curved. And yet they are the same length. Hmmmmmmm. Somehow I can't picture this in my mind's eye. – Carl Brannen Mar 21 '11 at 21:37
One is curved in the vertical direction, the other straight. The distance of the two paths on the tangent plane of the Earth is the same. – Lawrence B. Crowell Mar 22 '11 at 12:24
Lawrence, the flat road must be curved because it has the same length as the hilly road and connects the same endpoints. Neither road is straight. – Dan Brumleve Mar 22 '11 at 21:04
We can assume the two roads arc around or something, where one remains flat and the other goes up and down a mountain. I am not sure what the difficulty is with this. One thing that seems apparent is that not that many people have driven in mountains. It eats up gas --- big time, even if you try to use physics (gravity etc) to economize. – Lawrence B. Crowell Mar 23 '11 at 1:44
http://en.m.wikipedia.org/wiki/Nine-point_circle | # Nine-point circle
The nine points
In geometry, the nine-point circle is a circle that can be constructed for any given triangle. It is so named because it passes through nine significant concyclic points defined from the triangle. These nine points are:
• The midpoint of each side of the triangle
• The foot of each altitude
• The midpoint of the line segment from each vertex of the triangle to the orthocenter (where the three altitudes meet; these line segments lie on their respective altitudes).
The nine-point circle is also known as Feuerbach's circle, Euler's circle, Terquem's circle, the six-points circle, the twelve-points circle, the n-point circle, the medioscribed circle, the mid circle or the circum-midcircle.
## Significant nine points
The diagram above shows the nine significant points of the nine-point circle. Points D, E, and F are the midpoints of the three sides of the triangle. Points G, H, and I are the feet of the altitudes of the triangle. Points J, K, and L are the midpoints of the line segments between each altitude's vertex intersection (points A, B, and C) and the triangle's orthocenter (point S).
For an acute triangle, six of the points (the midpoints and altitude feet) lie on the triangle itself; for an obtuse triangle two of the altitudes have feet outside the triangle, but these feet still belong to the nine-point circle.
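A quick way to convince yourself that these nine points really are concyclic is to compute them numerically. The sketch below is not part of the article: the triangle A=(0,0), B=(4,0), C=(1,3) is an arbitrary choice, and the helper functions and the vector identity H = A + B + C − 2O (with O the circumcenter) are standard facts used only for this check.

```python
# Numerical check that the nine points of an arbitrary triangle are concyclic.
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])

def foot_of_altitude(P, Q, R):
    """Foot of the perpendicular from P onto line QR."""
    d = R - Q
    t = np.dot(P - Q, d) / np.dot(d, d)
    return Q + t * d

def circumcenter(P, Q, R):
    """Intersection of the perpendicular bisectors of PQ and PR."""
    mid1, mid2 = (P + Q) / 2, (P + R) / 2
    d1, d2 = Q - P, R - P
    M = np.vstack([d1, d2])
    b = np.array([np.dot(d1, mid1), np.dot(d2, mid2)])
    return np.linalg.solve(M, b)

O = circumcenter(A, B, C)
H = A + B + C - 2 * O          # orthocenter, via the identity H = A + B + C - 2O
N = (O + H) / 2                # nine-point center

nine = [
    (A + B) / 2, (B + C) / 2, (C + A) / 2,                                   # side midpoints
    foot_of_altitude(A, B, C), foot_of_altitude(B, C, A), foot_of_altitude(C, A, B),
    (A + H) / 2, (B + H) / 2, (C + H) / 2,                                   # vertex-orthocenter midpoints
]
radii = [np.linalg.norm(P - N) for P in nine]
print([round(float(x), 10) for x in radii])   # all equal: half the circumradius
```

All nine printed distances agree (they equal half the circumradius), and moving the vertices around does not change that.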
## Discovery
Although he is credited for its discovery, Karl Wilhelm Feuerbach did not entirely discover the nine-point circle, but rather the six point circle, recognizing the significance of the midpoints of the three sides of the triangle and the feet of the altitudes of that triangle. (See Fig. 1, points D, E, F, G, H, and I.) (At a slightly earlier date, Charles Brianchon and Jean-Victor Poncelet had stated and proven the same theorem.) But soon after Feuerbach, mathematician Olry Terquem himself proved the existence of the circle. He was the first to recognize the added significance of the three midpoints between the triangle's vertices and the orthocenter. (See Fig. 1, points J, K, and L.) Thus, Terquem was the first to use the name nine-point circle.
## Tangent circles
The nine-point circle is tangent to the incircle and excircles
In 1822 Karl Feuerbach discovered that any triangle's nine-point circle is externally tangent to that triangle's three excircles and internally tangent to its incircle; this result is known as Feuerbach's theorem. He postulated that:
... the circle which passes through the feet of the altitudes of a triangle is tangent to all four circles which in turn are tangent to the three sides of the triangle... (Feuerbach 1822)
The point at which the incircle and the nine-point circle touch is often referred to as the Feuerbach point.
## Other properties of the Nine-point circle
• The radius of a triangle's circumcircle is twice the radius of that triangle's nine-point circle.
• A nine-point circle bisects a line segment going from the corresponding triangle's orthocenter to any point on its circumcircle.
• The center of any nine-point circle (the nine-point center) lies on the corresponding triangle's Euler line, at the midpoint between that triangle's orthocenter and circumcenter.
• The nine-point center lies at the centroid of four points comprising the triangle's three vertices and its orthocenter.
• Of the nine points, the three midpoints of line segments between the vertices and the orthocenter are reflections of the triangle's midpoints about its nine-point center.
• The center of all rectangular hyperbolas that pass through the vertices of a triangle lies on its nine-point circle. Examples include the well-known rectangular hyperbolas of Keipert, Jeřábek and Feuerbach. This fact is known as the Feuerbach conic theorem.
• If an orthocentric system of four points A, B, C and H is given, then the four triangles formed by any combination of three distinct points of that system all share the same nine-point circle. This is a consequence of symmetry: the sides of one triangle adjacent to a vertex that is an orthocenter to another triangle are segments from that second triangle, and a third midpoint lies on their common side. (Since the same midpoints define the separate nine-point circles, those circles must coincide.)
• Consequently, these four triangles have circumcircles with identical radii. Let N represent the common nine-point center and P be an arbitrary point in the plane of the orthocentric system. Then $NA^2+NB^2+NC^2+NH^2 = 3R^2$ where R is the common circumradius, and if $PA^2+PB^2+PC^2+PH^2 = K^2$, where K is kept constant, then the locus of P is a circle centered at N with a radius $\scriptstyle \frac{1}{2} \sqrt{K^2-3R^2}$. As P approaches N, the locus of P for the corresponding constant K collapses onto N, the nine-point center. Furthermore the nine-point circle is the locus of P such that $PA^2+PB^2+PC^2+PH^2 = 4R^2$ (checked numerically in the sketch after this list).
• The centers of the incircle and excircles of a triangle form an orthocentric system. The nine-point circle created for that orthocentric system is the circumcircle of the original triangle. The feet of the altitudes in the orthocentric system are the vertices of the original triangle.
• If four arbitrary points A, B, C, D are given that do not form an orthocentric system, then the nine-point circles of ABC, BCD, CDA and DAB concur at a point. The remaining six intersection points of these nine-point circles each concur with the midpoints of the four triangles. Remarkably, there exists a unique nine-point conic, centered at the centroid of these four arbitrary points, that passes through all seven points of intersection of these nine-point circles. Furthermore because of the Feuerbach conic theorem mentioned above, there exists a unique rectangular circumconic, centered at the common intersection point of the four nine-point circles, that passes through the four original arbitrary points as well as the orthocenters of the four triangles.
• If four points A, B, C, D are given that form a cyclic quadrilateral, then the nine-point circles of ABC, BCD, CDA and DAB concur at the anticenter of the cyclic quadrilateral. The nine-point circles are all congruent with a radius of half that of the cyclic quadrilateral's circumcircle. The nine-point circles form a set of four Johnson circles. Consequently the four nine-point centers are cyclic and lie on a circle congruent to the four nine-point circles that is centered at the anticenter of the cyclic quadrilateral. Furthermore the cyclic quadrilateral formed from the four nine-point centers is homothetic to the reference cyclic quadrilateral ABCD by a factor of −1/2 and its homothetic center (N) lies on the line connecting the circumcenter (O) to the anticenter (M) where ON = 2NM.
• The orthopole of lines passing through the circumcenter lie on the nine-point circle.
• Trilinear coordinates for the nine-point center are cos (B − C) : cos (C − A) : cos (A − B)
• Trilinear coordinates for the Feuerbach point are 1 − cos (B − C) : 1 − cos (C − A) : 1 − cos (A − B)
• Trilinear coordinates for the center of the Kiepert hyperbola are (b² − c²)²/a : (c² − a²)²/b : (a² − b²)²/c
• Trilinear coordinates for the center of the Jeřábek hyperbola are cos A sin²(B − C) : cos B sin²(C − A) : cos C sin²(A − B)
• Letting x : y : z be a variable point in trilinear coordinates, an equation for the nine-point circle is
x² sin 2A + y² sin 2B + z² sin 2C − 2(yz sin A + zx sin B + xy sin C) = 0.
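As a small sanity check of the locus property mentioned in the list above (PA² + PB² + PC² + PH² = 4R²), here is a short numerical sketch; the concrete triangle, its orthocenter H = (1, 1), nine-point center N = (1.5, 1) and circumradius R = √5 are my own worked example, not data from the article.

```python
import numpy as np

# Triangle A=(0,0), B=(4,0), C=(1,3): orthocenter H=(1,1), circumradius^2 = 5,
# nine-point center N=(1.5,1) with nine-point radius sqrt(5)/2 (all precomputed).
A, B, C, H = map(np.array, [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0), (1.0, 1.0)])
N, r9 = np.array([1.5, 1.0]), np.sqrt(5.0) / 2.0

for ang in np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False):
    P = N + r9 * np.array([np.cos(ang), np.sin(ang)])    # a point on the nine-point circle
    total = sum(np.dot(P - Q, P - Q) for Q in (A, B, C, H))
    print(round(float(total), 10))                        # always 4*R^2 = 20
```

Every printed value is 20 = 4R², matching the stated locus.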
## References
• Feuerbach, Karl Wilhelm; Buzengeiger, Carl Heribert Ignatz (1822), Eigenschaften einiger merkwürdigen Punkte des geradlinigen Dreiecks und mehrerer durch sie bestimmten Linien und Figuren. Eine analytisch-trigonometrische Abhandlung (Monograph ed.), Nürnberg: Wiessner .
http://mathoverflow.net/questions/20548/how-to-solve-diophantine-equations-in-f-p | ## How to solve Diophantine equations in $F_{p}$?
For example, how to solve the equation $\sum_{i=1}^{p-1}x_{i}^{2}=0$ in $F_{p}$? This is not a homework problem. I think it should have a definite answer, so not an open problem. I just don't know how to solve it.
-
Thanks for the answers! I want to ask: is there any good bound for the number of solutions? (Sorry, I forgot to ask this in the first place.) – Changwei Zhou Apr 6 2010 at 21:57
The number of solutions up to simultaneous scaling by a non-zero element of $\mathbb{F}_p$ (if p>3) should be $\frac{p^{p-2}-1}{p-1}$ if $p=1 \pmod{4}$ and $\frac{p^{p-2}-1}{p-1}+p^{\frac{p-3}{2}}$ if $p=3 \pmod{4}$. – damiano Apr 6 2010 at 22:15
damiano, how can you show that? I did some numerical work and conjectured it but I have no idea how such things are proven. – Michael Lugo Apr 6 2010 at 22:22
The way I computed it, was by using the Weil conjectures. In this case you can argue your way through using the fact that as soon as you have a solution you can find all the remaining ones by projection away from it (like in Bjorn's answer). What remains to show is what happens when the projection never contains a line (case 1) or when it does contain at least one (and hence two) (case 2). – damiano Apr 6 2010 at 22:28
The number of solutions was first worked out by Victor Amande Lebesgue (the number theorist, not the Lebesgue famous for his integration theory) in 1838 and was used for proving the quadratic reciprocity law. A simplified version of his proof was recently published by W. Castryck, A shortened classical proof of the quadratic reciprocity law, Amer. Math. Monthly (2007). – Franz Lemmermeyer Apr 7 2010 at 9:14
## 3 Answers
There is a deterministic polynomial-time algorithm for finding solutions to diagonal equations of degree less than or equal to the number of variables over finite fields. See Christiaan van de Woestijne's thesis.
(A solution of your example equation can be found much more simply, however: try small integers, not necessarily distinct... . And for quadratic forms, the other solutions can be found by drawing lines through the point and intersecting with the quadric hypersurface: there will either be one more intersection point, or a whole line of points.)
-
Hi! I really thanks for the link of the paper. I will read it. – Changwei Zhou Apr 6 2010 at 22:03
Sorry for the impertinent remark, but a deterministic algorithm for finding all solutions to an equation in the variables $x_1,\ldots,x_n$ over $\mathbb{F}_q$ is obtained by plugging in all $q^n$ possible values and counting how many give solutions. I can well believe the van de Woestijne's algorithm is better than this, but can you say exactly how? – Pete L. Clark Apr 6 2010 at 23:07
I don't really understand the part "the other solutions can be found by drawing lines through the point and intersecting with the quadric hypersurface: there will either be one more intersection point, or a whole line of points", I will read the paper. – Changwei Zhou Apr 6 2010 at 23:35
@Pete: Good point! I forgot to write the words "polynomial time" (not to mention "over finite fields"!) It's fixed now. – Bjorn Poonen Apr 7 2010 at 0:00
To amplify on what Bjorn said -- if you have one solution you can use it to rationally parametrize all others via pencils of lines. So the real problem is: is there a fast (i.e. polynomial time in the log of the size of the finite field) algorithm to find one point? It turns out that if you are given a quadratic non-residue (generator of the 2-Sylow subgroup of $F^*$) then there is a deterministic poly time algorithm to find a solution. This is still an open problem but van de Woestijne found a clever way around this for diagonal forms. – Victor Miller Apr 7 2010 at 1:08
You want to know if the sum of $p-1$ squares can be equal to 0 mod $p$. I'll assume that you don't want to allow the trivial (all-zeroes) solution.
If $k$ is a quadratic residue mod $p$, not equal to $1$, then this is simple; take $x_1$ such that $x_1^2 = k$, take $x_2 = \ldots = x_{p-k+1} = 1$, and take $x_{p-k+2} = \ldots = x_{p-1} = 0$.
So the equation $x_1^2 + \cdots + x_{p-1}^2 = 0$ has solutions mod $p$ as long as there exists a quadratic residue mod $p$ which is not equal to $1$. The number of quadratic residues mod $p$ is $\phi(p)/2$, where $\phi$ is Euler's totient function; if $\phi(p)/2 \ge 2$, or $\phi(p) \ge 4$, then there is at least one non-$1$ quadratic residue mod p. Now for a prime, $\phi(p) = p-1$, so that means your equation has solutions when $p-1 \ge 4$, i. e. when $p \ge 5$. We can check by brute force that $x_1^2 = 0 \mod 2$ and $x_1^2 + x_2^2 = 0 \mod 3$ have only the trivial solutions. So the equation $x_1^2 + \cdots + x_{p-1}^2 = 0 \mod p$ has nontrivial solutions for all primes $p \ge 5$.
(Basically, this is a more explicit version of the second paragraph of Bjorn Poonen's answer.)
-
In fact, what I was trying to hint at by "small integers" was your solution with k=4 (for p at least 5); i.e., 2^2 + 1^2 + 1^2 + ... + 1^2 + 0^2 + 0^2 = p. (No need to count quadratic residues!) – Bjorn Poonen Apr 6 2010 at 22:56
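To spell that comment out in code, here is a tiny sketch (my own, not from the answers) that builds the solution with one coordinate equal to 2, p−4 coordinates equal to 1 and two coordinates equal to 0, and checks it for a few primes:

```python
# For p >= 5 the tuple (2, 1, ..., 1, 0, 0) with p-4 ones has p-1 coordinates
# and sum of squares 4 + (p - 4) = p, which is 0 mod p.
def nontrivial_solution(p):
    assert p >= 5
    xs = [2] + [1] * (p - 4) + [0, 0]
    assert len(xs) == p - 1
    assert sum(x * x for x in xs) % p == 0
    return xs

for p in (5, 7, 11, 13):
    print(p, nontrivial_solution(p))
```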
This answer is tangential in the sense that it is speaking of the existence of solutions rather than counting them all. But I rather suspect that you would find this interesting.
Suppose you have a quadratic form in at least three variables over $\mathbb F_p$. Then the Chevalley-Warning Theorem would tell you that it has a nontrivial solution.
If you want to check out more, I refer you to the first chapter of J.-P. Serre's "A Course in Arithmetic", rather than the wikipedia page linked above.
-
Hi! I am reading the book "A course in arithmetic". I finished about 4 chapters by now. Thanks! – Changwei Zhou Jul 21 2010 at 3:23
http://mathhelpforum.com/calculus/151295-surface-integral-solved-directly-using-gauss-theorem.html | # Thread:
1. ## Surface integral solved directly and using Gauss' Theorem.
To find the flow of the field $F\left( {x,y,z} \right) = \left( {x^3 ,y^3 ,z^3 } \right)$
across the surface of the cone $x^2 + y^2 = z^2$ with 0 < z < H
a) Directly
b) Applying Gauss's theorem
2. What have you tried?
I will give you this much: we can write the cone in parametric coordinates using polar coordinates: $x= r cos(\theta)$, $y= r sin(\theta)$, $z= r$ with $r$ from 0 to H and $\theta$ from 0 to $2\pi$.
With that you can write the "position vector" of a point on the surface of the cone as $\vec{p}(r, \theta)= r cos(\theta)\vec{i}+ r sin(\theta)\vec{j}+ r \vec{k}$.
The derivatives of that with respect to r and $\theta$:
$\vec{p}_r= cos(\theta)\vec{i}+ sin(\theta)\vec{j}+ \vec{k}$ and
$\vec{p}_\theta= -r sin(\theta)\vec{i}+ r cos(\theta)\vec{j}$
are in the tangent plane to the cone and their cross product is
$-r cos(\theta)\vec{i}- r sin(\theta)\vec{j}+ r \vec{k}$.
The (upward oriented) "vector differential of area" is $(-r cos(\theta)\vec{i}- r sin(\theta)\vec{j}+ r \vec{k})drd\theta$
Take the dot product of your field with that and integrate.
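If it helps to see the two computations agree, here is a rough numerical cross-check (my own sketch, not part of the thread). It takes H = 2, uses the parameterisation and the upward-oriented normal given above for the direct surface integral, and then applies the divergence theorem to the solid cone: the volume integral of div F = 3(x² + y² + z²) minus the outward flux through the top disk z = H gives the outward flux through the cone wall, which is minus the directly computed (upward-oriented) value.

```python
import numpy as np

H, n = 2.0, 400

# --- direct surface integral over the cone wall, dS = (-r cosθ, -r sinθ, r) dr dθ ---
r  = (np.arange(n) + 0.5) * H / n
th = (np.arange(n) + 0.5) * 2.0 * np.pi / n
R, T = np.meshgrid(r, th, indexing="ij")
x, y, z = R * np.cos(T), R * np.sin(T), R
Fdn = x**3 * (-R * np.cos(T)) + y**3 * (-R * np.sin(T)) + z**3 * R
direct = Fdn.sum() * (H / n) * (2.0 * np.pi / n)

# --- divergence theorem: ∫∫∫ div F dV over the solid cone (ρ ≤ z ≤ H) ---
rho = (np.arange(n) + 0.5) * H / n
zz  = (np.arange(n) + 0.5) * H / n
P, Z = np.meshgrid(rho, zz, indexing="ij")
inside = (P < Z) + 0.5 * (P == Z)                     # half-weight cells on the boundary ρ = z
integrand = 3.0 * (P**2 + Z**2) * P * inside          # ρ dρ dz, θ integral contributes 2π
volume = integrand.sum() * (H / n) ** 2 * 2.0 * np.pi

top_disk = np.pi * H**2 * H**3                        # flux of (x³, y³, z³) through z = H
lateral_outward = volume - top_disk

print(direct)               # flux with the upward/inner normal used above
print(-lateral_outward)     # same number obtained via the divergence theorem
print(np.pi * H**5 / 10)    # what the dr dθ integral evaluates to analytically
```

The two numerical values land on top of each other (up to discretisation error), which is a decent way to check both halves of the exercise.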
And what is "Gauss's theorem"?
http://math.stackexchange.com/questions/tagged/integral?sort=faq&pagesize=15 | # Tagged Questions
Questions on the evaluation of definite and indefinite integrals
13answers
4k views
### Proving $\int_{0}^{+\infty} e^{-x^2} dx = \frac{\sqrt \pi}{2}$
How to prove $$\int_{0}^{+\infty} e^{-x^2} dx = \frac{\sqrt \pi}{2}$$
4answers
707 views
### Simpler way to compute a definite integral without resorting to partial fractions?
I found the method of partial fractions very laborious to solve this definite integral : $$\int_0^\infty \frac{\sqrt[3]{x}}{1 + x^2}\,dx$$ Is there a simpler way to do this ?
2answers
897 views
### Evaluating $\int P(\sin x, \cos x) \text{d}x$
Suppose $\displaystyle P(x,y)$ a polynomial in the variables $x,y$. For example, $\displaystyle x^4$ or $\displaystyle x^3y^2 + 3xy + 1$. Is there a general method which allows us to evaluate the ...
1answer
1k views
### Will moving differentiation from inside, to outside an integral, change the result?
I'm interested in the potential of such a technique. I got the idea from Moron's answer to this question, which uses the technique of differentiation under the integral. Now, I'd like to consider ...
7answers
3k views
### Proof for an integral involving sinc function
I am looking for a short proof that $$\int_0^\infty \left(\frac{\sin x}{x}\right)^2 dx=\frac{\pi}{2}.$$ What do you think? It is kind of amazing that $$\int_0^\infty \frac{\sin x}{x} dx$$ is also ...
4answers
1k views
### Calculating the integral $\int_{0}^{\infty} \frac{\cos x}{1+x^2}\mathrm{d}x$ without using complex analysis
Suppose that we do not know anything about the complex analysis (numbers). In this case, how to calculate the following integral in closed form? $$\int_0^\infty\frac{\cos\;x}{1+x^2}\mathrm{d}x$$
5answers
1k views
### Nonzero $f \in C([0, 1])$ for which $\int_0^1 f(x)x^n dx = 0$ for all $n$
As the title says, I'm wondering if there is a continuous function such that $f$ is nonzero on $[0, 1]$, and for which $\int_0^1 f(x)x^n dx = 0$ for all $n \geq 1$. I am trying to solve a problem ...
3answers
3k views
### $\int_{-\infty}^{+\infty} e^{-x^2} dx$ with complex analysis
Inspired by this recently closed question, I'm curious whether there's a way to do the Gaussian integral using techniques in complex analysis such as contour integrals. I am aware of the calculation ...
4answers
2k views
### Explain $\iint \mathrm dx\mathrm dy = \iint r \mathrm d\alpha\mathrm dr$
It is changing the coordinate from one coordinate to another. There is an angle and radius on the right side. What is it? And why? I got: \$2\mathrm dy\mathrm dx = r(\cos^2\alpha-\sin^2\alpha)\mathrm ...
2answers
283 views
### Methods to evaluate $\int _{a }^{b }\!{\frac {\ln \left( tx + u \right) }{m{x}^{2}+nx +p}}{dx}$
Today I saw a question with an answer that made me rethink of the following question, since it's not the first time I try to find an answer to it. If you look at the answer of Mhenni Benghorbal here ...
6answers
3k views
### Ways to evaluate $\int \sec \theta \, d \theta$
The standard approach for showing $\int \sec \theta \, d \theta = \ln |\sec \theta + \tan \theta| + C$ is to multiply by $\frac{\sec \theta + \tan \theta}{\sec \theta + \tan \theta}$ and then do a ...
4answers
2k views
### Prove: $\int_{0}^{\infty} \sin (x^2) dx$ converges.
$\sin(x^2)$ is an example for a function which its limit when $x \to \infty$ is not $0$, and still its integral from $0$ to $\infty$ is finite. I'd like your help with understanding why and a ...
5answers
701 views
### Help solving $\int {\frac{8x^4+15x^3+16x^2+22x+4}{x(x+1)^2(x^2+2)}dx}$
$\displaystyle\int {\frac{8x^4+15x^3+16x^2+22x+4}{x(x+1)^2(x^2+2)}\,\mathrm{d}x}$ I used partial fractions, solved $A = 2, C = 3$. \frac{A}{x} + \frac{B}{x+1} + \frac{C}{(x+1)^2} ...
3answers
7k views
### Why is the area under a curve the integral?
I understand how derivatives work based on the definition, and the fact that my professor explained it step by step until the point where I can derive it myself. However when it comes to the area ...
3answers
327 views
### A log improper integral
Evaluate : $$\int_0^{\frac{\pi}{2}}\ln ^2\left(\cos ^2x\right)\text{d}x$$ I found it can be simplified to $$\int_0^{\frac{\pi}{2}}4\ln ^2\left(\cos x\right)\text{d}x$$ I found the exact value in the ...
3answers
534 views
### $f$ uniformly continuous and $\int_a^\infty f(x)\,dx$ converges imply $\lim_{x \to \infty} f(x) = 0$
Trying to solve $f(x)$ is uniformly continuous in the range of $[0, +\infty)$ and $\int_a^\infty f(x)dx$ converges. I need to prove that: $$\lim \limits_{x \to \infty} f(x) = 0$$ Would ...
4answers
329 views
### Evaluating $\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1}{1-2t\cos\theta +t^2}d\theta$
I need solve this integral, and I tried various methods of solving and did not get it. The integral is: $$\frac{1}{2\pi}\int_{0}^{2\pi}\frac{1}{1-2t\cos\theta +t^2}d\theta,$$ where $t$ is a ...
5answers
693 views
### Evaluating $\int_0^\infty \sin x^2\, dx$ with real methods?
I have seen the Fresnel integral $$\int_0^\infty \sin x^2\, dx = \sqrt{\frac{\pi}{8}}$$ evaluated by contour integration and other complex analysis methods, and I have found these methods to be the ...
4answers
906 views
### Evaluate the integral: $\int_{0}^{1} \frac{\ln(x+1)}{x^2+1} dx$
Compute $$\int_{0}^{1} \frac{\ln(x+1)}{x^2+1} dx$$
2answers
413 views
### Frullani proof integrals
How can i prove the theorem of Frullani? I did not even know all the hypothesis that f must satisfy, but I think that this are Let $f:\left[ {0,\infty } \right] \to \mathbb R$ be a a continuously ...
1answer
380 views
### Summing over General Functions of Primes and an Application to Prime $\zeta$ Function
Along the lines of thought given here, is it in general possible to substitute a summation over a function $f$ of primes like the following: $$\sum_{p\le x}f(p)=\int_2^x f(t) d(\pi(t))\tag{1}$$ and ...
3answers
190 views
### Indefinite integral of secant cubed
I need to calculate the following indefinite integral: $$I=\int \frac{1}{\cos^3(x)}dx$$ I know what the result is (from Mathematica): $$I=\tanh^{-1}(\tan(x/2))+(1/2)\sec(x)\tan(x)$$ but I don't ...
2answers
725 views
### How to integrate $\int e^{-t^{2}} \space dt$ using introductory calculus methods
Earlier today I stumbled across this when I was doing some practice questions for a physics course: $$\int e^{-t^2} \space dt$$ To expand, the limits of integration were something like $1$ and $4$ ...
6answers
484 views
### Integral of $\int e^{2x} \sin 3x\, dx$
I am suppose to use integration by parts but I have no idea what to do for this problem $$\int e^{2x} \sin3x dx$$ $u = \sin3x dx$ $du = 3\cos3x$ $dv = e^{2x}$ $v = \frac{ e^{2x}}{2}$ From this I ...
5answers
797 views
### Evaluating $\int\limits_0^\infty \! \frac{x^{1/n}}{1+x^2} \ \mathrm{d}x$
I've been trying to evaluate the following integral from the 2011 Harvard PhD Qualifying Exam. For all $n\in\mathbb{N}^+$ in general: $$\int\limits_0^\infty \! \frac{x^{1/n}}{1+x^2} \ \mathrm{d}x$$ ...
4answers
545 views
### Evaluation of the integral $\int_0^1 \frac{\ln(1 - x)}{1 + x}dx$
How can I evaluate the integral $$\int_0^1 \frac{\ln(1 - x)}{1 + x}dx$$ I tried manipulating the known integral $$\int_0^1 \frac{\ln(1 - x)}{x}dx = -\frac{\pi^2}{6}$$ but couldn't do anything with ...
4answers
572 views
### How can I compute the integral $\int_{0}^{\infty} \frac{dt}{1+t^4}$?
I have to compute this integral $$\int_{0}^{\infty} \frac{dt}{1+t^4}$$ to solve a problem in a homework. I have tried in many ways, but I'm stuck. A search in the web reveals me that it can be do it ...
4answers
1k views
### Computing the integral of $\log (\sin x)$
How to compute the following integral $$\int \log(\sin x)\,\mathrm dx?$$
2answers
1k views
### integral with exponential function and logarithm
$$\int_0^{\infty } \frac{\log (x)}{e^x+1} \, dx = -\frac{1}{2} \log ^2(2)$$ Anyone an idea on how to prove this?
3answers
734 views
### Name of this identity? $\int e^{\alpha x}\cos(\beta x) \space dx = \frac{e^{\alpha x} (\alpha \cos(\beta x)+\beta \sin(\beta x))}{\alpha^2+\beta^2}$
Again: $$\int e^{\alpha x}\cos(\beta x) \space dx = \frac{e^{\alpha x} (\alpha \cos(\beta x)+\beta \sin(\beta x))}{\alpha^2+\beta^2}$$ Also the one for $\sin$: \int e^{\alpha x}\sin(\beta x) ...
2answers
181 views
### Complex Fourier series
I need to find the complex Fourier series of this function, and I'm having problems calculating these integers: $$|a|<1$$ $$x\in [-\pi,\pi]$$ $$f(x)=\frac{1-a\cos(x)}{1-2a\cos(x)+a^2}$$ ...
3answers
910 views
### Integrate square of the log-sine integral: $\int_0^{\frac{\pi}{2}}\ln^{2}(\sin(x))dx$
$\displaystyle \int_{0}^{\frac{\pi}{2}} \ln(\sin(x))dx=-\frac{\pi}{2}\ln(2)$ is an integral that is common. But, how can we show ...
2answers
475 views
### $\int_{0}^{\infty} \frac{\cos x - e^{-x^2}}{x} \ dx$ Evaluate Integral
Evaluate $$\int_{0}^{\infty} \frac{\cos x - e^{-x^2}}{x} \ dx$$
1answer
419 views
### A nice log trig integral
Show that : \int_{0}^{\frac{\pi }{2}}{\frac{{{\ln }^{2}}\cos x{{\ln }^{2}}\sin x}{\cos x\sin x}}\text{d}x=\frac{1}{4}\left( 2\zeta \left( 5 \right)-\zeta \left( 2 \right)\zeta \left( 3 \right) ...
2answers
184 views
### Laplace's method
I'm still having a little trouble applying Laplace's method to find the leading asymptotic behavior of an integral. Could someone help me understand this? How about with an example, like: ...
3answers
200 views
### A generalized integral need help
I was thinking this integral : $$I(\lambda)=\int_0^{\infty}\frac{\ln ^2x}{x^2+\lambda x+\lambda ^2}\text{d}x$$ What I do is use a Reciprocal subsitution, easy to show that: ...
5answers
2k views
### If $\int_0^x f \ dm$ is zero everywhere then $f$ is zero almost everywhere
I have been thinking on and off about a problem for some time now. It is inspired by an exam problem which I solved but I wanted to find an alternative solution. The object was to prove that some ...
1answer
190 views
### A few improper integral
\displaystyle \begin{align*} & \int_{0}^{+\infty }{\frac{\text{d}x}{1+{{x}^{n}}}} \\ & \int_{-\infty }^{+\infty }{\frac{{{x}^{2m}}}{1+{{x}^{2n}}}\text{d}x} \\ & \int_{0}^{+\infty ...
1answer
112 views
### Integral calculus proof
If $f(x)$ is continuous in $[a,b]$, prove that $\displaystyle \lim_{n \to \infty} \dfrac{b-a}{n} \displaystyle \sum^n _{k=1} f\left( a + \dfrac{k(b-a)}{n} \right) = \displaystyle \int_a ^ b f(x)dx$ ...
3answers
67 views
### How to solve a definite integral of the floor value function?
the question is, how to proof that this integral: (the integral is definite from 0 to n^2) $$\int_0^{n^2}\lfloor\sqrt{t}\rfloor dt = \frac{1}{6}n(n-1)(4n+1)$$ i'd very much appreciate your help on ...
1answer
337 views
### Insidious exponential integral
I hope that someone's up for the challenge; I'm attempting to solve this via computer: \begin{equation} \int_{-\pi}^\pi{\displaystyle \frac{e^{i\cdot a\cdot t}(e^{i\cdot b\cdot t}-1)(e^{i\cdot c ...
3answers
5k views
### The Integral that Stumped Feynman?
In "Surely You're Joking, Mr. Feynman!," Nobel-prize winning Physicist Richard Feynman said that he challenged his colleagues to give him an integral that they could evaluate with only complex methods ...
7answers
2k views
### Lebesgue integral basics
I'm having trouble finding a good explanation of the Lebesgue integral. As per the definition, it is the expectation of a random variable. Then how does it model the area under the curve? Let's take ...
4answers
1k views
### How do I integrate the following? $\int{\frac{(1+x^{2})\mathrm dx}{(1-x^{2})\sqrt{1+x^{4}}}}$
$$\int{\frac{1+x^2}{(1-x^2)\sqrt{1+x^4}}}\mathrm dx$$ This was a Calc 2 problem for extra credit (we have done hyperbolic trig functions too, if that helps) and I didn't get it (don't think anyone ...
5answers
1k views
### Compute $\int \frac{\sin(x)}{\sin(x)+\cos(x)}\mathrm dx$
I've got troubles in computing the below integral: $$\int \frac{\sin(x)}{\sin(x)+\cos(x)}\mathrm dx$$ I hope it can be expressed in elementary functions. I've tried simple substitution as $u=\sin(x)$ ...
5answers
486 views
### Evaluate: $\int_0^{\pi} \ln \left( \sin \theta \right) d\theta$
Evaluate: $\displaystyle \int_0^{\pi} \ln \left( \sin \theta \right) d\theta$ using Gauss Mean Value theorem. Given hint: consider $f(z) = \ln ( 1 +z)$. EDIT:: I know how to evaluate it, but I am ...
1answer
471 views
### How to show that $\int_0^1 \left(\sqrt[3]{1-x^7} - \sqrt[7]{1-x^3}\right)\;dx = 0$
Evaluate the integral: $$\int_0^1 \left(\sqrt[3]{1-x^7} - \sqrt[7]{1-x^3}\right)\;dx$$ The answer is $0,$ but I am unable to get it. There is some symmetry I can not see.
1answer
509 views
### Interesting integral formula
I looked around and found that integrals of the form $$\int_{0}^{\infty} \frac{x^{m-1}}{a+x^n}, a,m,n \in \mathbb{R}, 0<m<n, 0<a$$ seem to occur very often: Just to give a few examples ...
4answers
631 views
### Notation question: Integrating against a measure
Suppose $\mu$ is a measure. Is there any difference in meaning between the notation $\int f(x)d\mu(x)$ and the notation $\int f(x) \mu(dx)$? Many thanks.
4answers
318 views
### Evaluating $\int_0^{\infty}\frac{\ln(x^2+1)}{x^2+1}dx$
How would I go about evaluating this integral? $$\int_0^{\infty}\frac{\ln(x^2+1)}{x^2+1}dx.$$ What I've tried so far: I tried a semicircular integral in the positive imaginary part of the complex ...
http://physics.stackexchange.com/questions/37756/measure-absolute-speed | # Measure absolute speed
Currently I'm 17 years old, going to secondary school. So, my ideas might be totally wrong...
I know that everything is relative. In the example of speed, the earth moves, and the galaxy moves, etc.
My physics teacher told me that the speed of light is absolute, which means that the speed of the light source doesn't influence the speed of light in space. So, I was thinking that that fact could help us to measure the absolute speed of our planet in space. Not relative to the sun, or the galaxy.
The way of measuring it was following:
`A` is the light emitter.
`B` is the light sensor, in combination with a very very precise timer.
`D` is the signal broadcasting point.
`~~>` is light, going from `A` to `B`.
````
 A ~~~~~>~~~~~~~>~~~~>~~~~~>~~~~~~~~>~~~~~~>~~~~~~~~~> B
  \__________________________D_________________________/
````
So, how it works — in my head — is that you send a signal from `D` to both `A` and `B`. The distances from `A` to `D` and from `B` to `D` are equal, so the signal should reach both A and B at the same time. The distance between A and B is constant, say K.
As soon as the signal reaches B and A at the same moment, B starts the very precise timer and waits for the light coming from A, at the same time, A starts emitting light.
According to my knowledge, you should be able to calculate the speed of the whole setup along the axis A–B. Why? Because if our absolute motion is along the direction of the light, it should take longer for the light to reach B, because the distance is bigger. Otherwise, we are moving in the opposite direction of the emitted light, so we are going towards the light, and we make the distance the light has to travel shorter, because we are closing in on it.
Compare it with a car (`C`) driving on the highway, next to a high speed train (`T`). The train goes faster than the car.
Compare both situations:
1) Train and car moving in the same direction, train starts behind the car.
````T ---------------->
C ------>
````
2) Train and car moving in opposite direction towards each other.
````T ---------------->
<------- C
````
In Situation 1, it will take longer for the car and train to meet.
In Situation 2, they will meet fairly quickly, because they are going towards each other.
It is that difference in time that can be used to calculate our absolute speed, I think.
To define our absolute velocity vector, we can do this measurement three times, each test perpendicular to the other two, so we can apply Pythagoras to get our absolute speed as a scalar.
My teacher could hardly believe that it would work, so he thought that something must be wrong with my theory. What do you think, assuming that we have very precise measuring tools?
-
## 5 Answers
Suppose two light pulses are released from A and B in opposite directions at the same time. Clock A’s timer will read 0 and clock B’s timer will read 0 at this instant. Now they both measure the time it takes until they receive a light pulse. Let’s suppose B measures 5 seconds for the pulse to get from A to B. Now I will assume the clocks of both A and B will read 5 when they receive the light pulse, and show that this doesn’t violate relativity, even if the A-D-B system is moving.
We now introduce an observer C. Let us assume the A-D-B system is moving to the right (or “B – direction”) relative to observer C. Let us assume that observer C’s clock reads 0 at the same time that observer C reads B’s clock as 0. Now when B’s clock reads 5, observer C’s clock will read a number greater than 5, say 8 because of time dilation. This can also be thought of as being due to the light pulse from A to B travelling a longer distance. (I think you understand this so far.)
Now, when observer C sees the light pulse from B to A reaching observer A, observer C sees the light pulse travel a shorter distance, and therefore C’s clock reads less than 5 seconds. So when A’s clock reads 5 seconds, C’s clock reads less than 5 seconds. But you are thinking, shouldn’t time dilation mean A’s clock reads more? No. The solution to this apparent paradox is that when C’s clock reads 0, A’s clock is actually negative. As I said earlier, C’s clock reads 0 when B’s clock reads 0. So therefore the solution to the paradox is that from C’s perspective, A and B’s clocks aren’t synchronised. This dissynchronisation allows A and B’s clocks to both read 5 seconds when they receive a light pulse, irrespective of the velocity of the A-D-B system.
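To make the bookkeeping concrete, here is a small numerical check of this kind of scenario (a sketch in Python; the 5-light-second separation mirrors the "5 seconds" above, while the speed $\beta = 0.6$ is an arbitrary choice, not a value taken from the argument):

```python
# Lorentz bookkeeping for the two light pulses (units with c = 1).
# Apparatus frame S': A at x' = 0, B at x' = 5 light-seconds, clocks synchronized in S'.
beta = 0.6                              # assumed speed of the apparatus in the lab frame
gamma = 1 / (1 - beta**2) ** 0.5

def to_lab(t_prime, x_prime):
    """Lorentz-transform an event from the apparatus frame S' to the lab frame S."""
    return gamma * (t_prime + beta * x_prime), gamma * (x_prime + beta * t_prime)

events = {
    "pulse leaves A":  (0.0, 0.0),
    "pulse reaches B": (5.0, 5.0),      # the A -> B pulse: 5 s on B's clock
    "pulse leaves B":  (0.0, 5.0),
    "pulse reaches A": (5.0, 0.0),      # the B -> A pulse: 5 s on A's clock
}

for name, event in events.items():
    t, x = to_lab(*event)
    print(f"{name:15s}  lab time = {t:5.2f}  lab position = {x:5.2f}")

# The A->B pulse takes 10 s of lab time, the B->A pulse only 2.5 s, yet each
# pulse covers |dx| = |dt|, i.e. it still moves at c in the lab frame.
# The two emissions happen at lab times 0 and 3.75 -- not simultaneously --
# which is exactly the clock desynchronization that lets both A and B read 5 s.
```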
-
Your experiment will measure the speed of light to be the same no matter what direction you do it in. When we say the speed of light is constant we mean that every local experiment to measure the speed of light will find the same value. Even if we put our apparatus into a rocket and fire it off at 0.999c the experiment will still measure the same speed of light as it did when it was sitting on the Earth.
From a common sense perspective this seems silly. How can the measured speed of light be unaffected by the motion of the experiment? I don't know of any intuitive way to explain this, but it's a premise of special relativity, and all the weird effects like time dilation can be explained from the constancy of the speed of light.
I think the best way to explain the constancy of the speed of light is to explain it as a geometrical property of the universe. I went into some detail about this in the answer to Special Relativity Second Postulate. You might be interested to have a look at this.
-
Sadly I think your problem, as it is formed in your mind, and your test system, is based on the principles of classical mechanics - special relativity explains that simultaneity is just not absolute.
Look up the Michelson-Morley experiment. They couldn't believe their results either.
You have to accept that there is no actual or absolute speed of ourselves through space. Therefore there is no way to measure that speed.
-
However, if I understand your experiment correctly, it is premised on two contradictory statements:
(1) "As soon as the signal reaches B and A at the same moment"
(2) "because if our absolute speed is along with the light direction, it should take longer for the light to reach B, because the distance is bigger"
These cannot both be true at the same time.
In fact, if there is motion along the direction of A to B in the lab frame of reference, the light pulses from D will not arrive at B and A simultaneously according to the lab clocks.
However, according to the clocks at A and B, the light pulses do arrive at the same time. This is because, in the lab frame, the clocks at A and B are not synchronized. This is due to the relativity of simultaneity.
There is also time dilation and length contraction that must be taken into account. Together, these "conspire" to make it impossible for your apparatus to detect an absolute motion.
EDIT (to further clarify my original answer in light of the comments): it must be understood that measurements of the one-way speed of light depend on spatially separated clocks and thus the issue of clock synchronization arises. To quote from the Wiki article One-way speed of light:
When using the term 'the speed of light' it is sometimes necessary to make the distinction between its one-way speed and its two-way speed. The "one-way" speed of light from a source to a detector, cannot be measured independently of a convention as to how to synchronize the clocks at the source and the detector. What can however be experimentally measured is the round-trip speed (or "two-way" speed of light) from the source to the detector and back again. Albert Einstein chose a synchronization convention (see Einstein synchronization) that made the one-way speed equal to the two-way speed. The constancy of the one-way speed in any given inertial frame, is the basis of his special theory of relativity although all experimentally verifiable predictions of this theory do not depend on that convention.
The OP's notion of signaling from D to A & B, regardless of whether it's light or electrons in a wire, is an attempt to synchronize the clocks according to the Einstein convention. If, in fact, his apparatus manages to do this, as explained in the linked article, this guarantees that the measured one-way speed of light will be $c$.
-
Is it still impossible for the signal to reach both sides simultaneously even if it is cooled down so much that we reach zero resistivity? – Martijn Courteaux Sep 19 '12 at 13:46
I'm not sure you've grasped my point. When you say "reach both sides simultaneously", you have to realize that simultaneity is not absolute. It's not that it's impossible to reach A & B simultaneously, it's that observers moving with respect to the apparatus observe the pulses arriving at A at a different time than at B. – Alfred Centauri Sep 19 '12 at 14:00
I think you're still not grasping my point. Regardless of what kind of signals you use, you're treating their simultaneous arrival at A & B as a given. But the simultaneous arrival of the signals can only be judged by spatially separated clocks which must have been synchronized. But, as I've already pointed out, due to the relativity of simultaneity, clocks synchronized in one frame are not synchronized according to a relatively moving frame. – Alfred Centauri Sep 19 '12 at 14:20
In OPs experiment, I don't think A & B are moving relative to each other. The device essentially measures the time light takes to get from A to B, and is looking for a variation in the speed of light depending on if the earth is moving in an A->B direction or the oppisite direction, relative to some 'absolute' frame of reference. – Nathan Reed Sep 19 '12 at 14:33
@NathanReed, yes, I understand completely that this experiment is to measure the one-way speed of light. And, as I have attempted to explain, this can only be done with spatially separated clocks the synchronization of which relatively moving observers don't agree on. As Wiki puts it: "The "one-way" speed of light from a source to a detector, cannot be measured independently of a convention as to how to synchronize the clocks at the source and the detector." en.wikipedia.org/wiki/One-way_speed_of_light. – Alfred Centauri Sep 19 '12 at 15:22
There is a way to measure the 'absolute velocity' WRT the space, but nothing like your experiment.
We already perform that feat of measuring the absolute velocity of the Earth (in the WMAP experiment and several others), or of any other spaceship.
The Cosmic Microwave Background is so uniform in spatial distribution and so sharp in frequency that it is a perfect reference for time and space.
The only measuring apparatus needed is directional microwave antennas, with associated electronics, to measure the intensity of the radiation received from all directions.
Due to the Doppler effect we sense a higher frequency from ahead and a lower one from behind the moving body (google for Dipole CMB). Because the Earth rotates around its axis we have a large daily variation of the dipole (by 360°), because the Earth orbits the Sun in 365 days, and because the Earth, and the Milky Way, is in motion towards the Leo constellation ... yes, we can draw a vector in physical space with the absolute direction and speed of the motion. There is no need of any other celestial body as reference; you can even switch off the stars and still we know our path.
The speed of light is indeed constant, absolute, but only in relation to the medium, and it seems that you have sorted it correctly. It propagates like the waves generated by a boat moving on the surface of a calm lake.
• problem : a very, very precise timer -- take two identical atomic clocks that agree when side by side; the motion needed to bring one from here to there affects its rate, and the gravitational field does the same
• problem : equal distances -- there are no rigid bodies, and we measure with fields (light, wavelengths), so it is the same problem as the 'precise timer'
• problem : at the same time -- you must define a criterion: the usual Poincaré/Einstein one matters in the reference frame of the moving system, or else one that does not depend on light (a super-observer that instantly knows where the photon is).
All the measurements of light speed we have were made over a two-way closed path.
EDIT ADD:
There are plenty of references to support this viewpoint, for instance this one from CERN: Does the motion of the solar system affect the microwave sky?
Angélica de Oliveira-Costa and colleagues studied the cosmic quadrupole and octopole and realized that both are very planar and aligned, i.e. all minima and maxima happen to fall on a great circle on the sky - another unexpected feature (de Oliveira- Costa et al. 2004)
and from this one from arxiv: Is the low-l microwave background cosmic?
The large-angle (low-l) correlations of the Cosmic Microwave Background exhibit several statistically significant anomalies compared to the standard inflationary big-bang model, ... Three of these planes are orthogonal to the ecliptic at a level inconsistent with gaussian random statistically isotropic skies at 99.8% C.L., and the normals to these planes are aligned at 99.9% C.L.
or this one Large-angle anomalies in the CMB
These apparent correlations with the solar system geometry are puzzling and currently unexplained.
-
The frame of the CMB is identifiable, but that does not make it in any way privileged. – dmckee♦ Sep 20 '12 at 4:16
@dmckee It is so special that it is the only reference such that an observer at rest in relation to it, the CMB, senses the universe as isotropic. And this is a fact that makes it privileged. – Helder Velez Sep 20 '12 at 11:01
To the downvoters: go to the positive way and show that I'm wrong. I am saying the truth, based in facts, beyond any theory. If you think that I am wrong you must explain why, as I did. Much obliged. – Helder Velez Sep 20 '12 at 11:20
The laws of physics are in no way special in the rest frame of the CMB, thus it is not a special frame. This is what is meant by the equivalence of frames in relativity. – dmckee♦ Sep 20 '12 at 13:14
"Absolute motion" implies a preferred frame of reference, not a frame that people can agree upon, but one that is special. Certainly you can agree to measure relative the CMB---it is even a fairly natural choice---but that does not make it "absolute". – dmckee♦ Sep 20 '12 at 15:12
http://electronics.stackexchange.com/questions/33487/low-power-design-switch-out-voltage-divider-using-transistor | Low power design - switch out voltage divider using transistor?
I have a very simple voltage divider circuit for measuring the resistance of a platinum 100 Ohm resistor.
I want to be able to switch out the voltage divider circuit from the power supply in order to save power.
Is this possible?
````---------------------------+3.3v
|
|
Transistor----low/high
|
|
R1
|
|-------to A/D pin
|
R2
|
|
----------------------------GND
````
-
Silly question: doesn't your microcontroller (if any) have a temperature sensor? Or why don't you use an integrated one, since you have low power requirements? – clabacchio♦ Jun 8 '12 at 11:52
Yes, it's measuring soil temperature. So I have to stick it in the ground... – Eamorr Jun 8 '12 at 11:58
Well then, although you could find 3-pin digital sensors that you can wire externally... – clabacchio♦ Jun 8 '12 at 12:00
4 Answers
What you suggest is possible, but you have to be aware of some gotchas. The biggest issue is for the transistor to not distort the measurement. You didn't give any accuracy requirements, but let's say it's a 10 bit A/D and you don't want the transistor to add more than 1 count of error. On the 3.3 V scale, one count of a 10 bit A/D is 3.2 mV. With the two resistors equal, the transistor therefore can't drop more than 6.5 mV. That completely rules out a bipolar transistor.
A P channel FET can do this. Again, if you want the transistor to not add more than .1% error it needs to be under 200 mΩ when the two resistors are equal, and half that in the worst case.
100 mΩ P channel FETs can be found, but N channel FETs are more plentiful and have better characteristics, especially at these low voltages. I would use an N channel low side switch instead:
The IRLML2502 is guaranteed to 80 mΩ max at only 2.5 V gate drive, so will add very little error. If much lower error is required, then you can measure the bottom of R2 in addition to the voltage divider and then the drop across the switch can be accounted for in firmware.
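To put a rough number on "very little error", here is a quick check (a sketch in Python; the 3.3 V supply, 10-bit A/D and equal resistors follow the numbers above, but the specific 100 Ω value is an arbitrary choice, and the switch is modeled as a plain series resistance below R2):

```python
# How much a low-side switch resistance shifts the divider reading.
VDD = 3.3                  # supply voltage
R1 = R2 = 100.0            # divider resistors in ohms (assumed value, taken equal)
RDS_ON = 0.080             # 80 milliohms, the worst-case figure quoted above

v_ideal = VDD * R2 / (R1 + R2)                        # perfect switch
v_real  = VDD * (R2 + RDS_ON) / (R1 + R2 + RDS_ON)    # switch in series under R2

lsb = VDD / 1024                                      # one count of a 10-bit A/D
error = v_real - v_ideal
print(f"{error * 1000:.2f} mV error, {error / lsb:.2f} LSB")   # ~0.66 mV, ~0.2 LSB
```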
Added:
You have now changed the question by saying you are really using a bridge circuit. This made sense when the measurement was to be displayed with an analog meter movement, but is unnecessary when using a modern microcontroller. With a normal microcontroller A/D you already have a bridge since the A/D result is ratiometric to the power supply range. In effect, the other side of the bridge is built into the micro. Using another external bridge and a second A/D input will only add error. If you're fine with .1% voltage accuracy coming out of the divider, then just use the circuit above.
Some microcontrollers have a separate negative A/D voltage reference line. This is called Vref- on the Microchip PIC line, for example. You could drive Vref- from the bottom of R2 to ignore the voltage across Q1. However, check the valid range of the Vref- pin. This may not be allowed to go as high as Vdd. This is actually one case where you may be able to use the absolute maximum rating instead of the operating values. When the sensor circuit is off, you only care that the A/D not be damaged, not that it work correctly. Of course if you are using the A/D for other things this scheme won't work.
More on bridges:
It has been suggested that a "bridge" circuit is better in this case and would cancel out any voltage dropped by Q1 in the circuit above. This is not the case, at least not with my interpretation of "bridge" circuit. Here is how I think the bridge is intended to be connected:
R1 is the variable resistance sensor being measured. R2, R3, and R4 are fixed resistors with known values. SW1 is the switch used to turn this circuit off when not in use to conserve power. When a measurement is being taken, SW1 is closed. In this schematic, SW1 is assumed to be a perfect switch with R5 shown separately to represent its on resistance.
The point of a bridge circuit is to provide a differential voltage between V1 and V2. This was useful in old analog meters when the meter required significant current and could be directly connected between V1 and V2. Note that the voltage V1-V2 is still proportional to Vdd. This circuit is not independent of Vdd, and therefore not independent of apparent error in the supply voltage caused by the current thru R5. Bridge circuits are independent of Vdd in only one case, and that is when V1-V2 is zero. This is why old analog meters that used bridge circuits combined them with a precision calibrated variable R3. You wouldn't use the measurement of V1-V2 displayed on the meter as a direct measurement, but rather as feedback of setting R3 such that V1-V2 was zero. In that singular case, Vdd then does not matter, and neither does the impedance of the meter between V1 and V2.
What we have here today with microcontroller A/D inputs is a totally different case. These A/Ds are not set up for differential measurement, and we don't have a calibrated reliable way of varying R3 anyway. However, we can make fairly accurate voltage measurements realtive to the GND to Vdd range.
If R5 were 0, then the voltage at V1 would a ratio of Vdd dependent only on R1. Since both the sensor circuit and the A/D in the microcontroller produce and measure the voltage relative to the GND to Vdd range, the exact value of that range cancels out.
The only problem is when R5 is non-zero and unknown over some range. This adds a unknown error to V1 even when it is considered relative to the Vdd range. In effect the sensor is producing a voltage a fixed fraction of the Vlow to Vdd range, while the micro is measuring it as a fixed fraction of GND to Vdd. The simplest way to deal with this is to guarantee that Vlow is a sufficiently small fraction of Vdd so that this error can be ignored.
The suggestion to use a bridge circuit is apparently so that measuring both V1 and V2 allows this error to be cancelled out. If R3 and R4 are well known, then the V2 is a direct function of Vlow, but attenuated by the R4,R3 divider. With high precision, V2 could be measured, Vlow inferred, and the result used to correct the V1 reading. However, there is no advantage to the R4,R3 divider. If you need to correct for Vlow, it is best to measure it directly. In no case is measuring V2 better than measuring Vlow directly. Since we are better off measuring Vlow and therefore have no need for V2, there is no point in producing V2. R3 and R4 can therefore be eliminated, leaving nothing that could be called a "bridge" circuit.
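As a sketch of what "measure Vlow directly" can look like in firmware (a hypothetical helper — the two-channel readout, the 10-bit scale, and the assumption that the Pt100 is R2 with R1 a known fixed resistor are illustrative choices, not details from the circuit above):

```python
# Correct the divider reading for the drop across the low-side switch.
#   adc_v1:   counts read at the R1/R2 junction
#   adc_vlow: counts read at the bottom of R2 (top of the switch)
def sensor_resistance(adc_v1, adc_vlow, r1=100.0, full_scale=1023):
    v1   = adc_v1 / full_scale        # both readings are ratiometric to Vdd,
    vlow = adc_vlow / full_scale      # so Vdd itself never needs to be measured
    ratio = (v1 - vlow) / (1.0 - vlow)    # equals R2 / (R1 + R2) exactly
    return r1 * ratio / (1.0 - ratio)     # solve for the sensor resistance R2

print(sensor_resistance(540, 6))      # e.g. ~110.6 ohms for these made-up counts
```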
-
Yes, accuracy requirements are not too stringent - 0.5 degrees Celcius. Thank you very much for your useful post. I think it's exactly what I need. – Eamorr Jun 8 '12 at 11:33
Quick question: those IRLML2502's are rated to 4.2A. Do I really need this when my supply voltage battery powered to 3.3V? Could you recommend a lower current transistor? Or will I be fine with the IRLML2502? – Eamorr Jun 8 '12 at 11:37
@Eamorr - The 80m$\Omega$ goes with the current. High current FETs are designed to have a low $R_{DS(ON)}$ to minimize power losses. Low current FETs usually have higher $R_{DS(ON)}$. Don't worry about it. – stevenvh Jun 8 '12 at 11:39
@Eamorr: No, you don't need the full current capability of the IRLML2502, but it does you no harm. You will notice that most FETs with low on resistance have decent current capability for their size. This is because so little power is dissipated due to the low resistance. – Olin Lathrop Jun 8 '12 at 11:40
@Eamorr: then why bother about the resistance of the MOS? – clabacchio♦ Jun 8 '12 at 12:05
The question shows a simple resistor voltage divider, but in comments you say you're using a Wheatstone bridge.
R5 is the resistance of the switching component. Measurements for both setups will be influenced by R5. For the resistor divider:
$\mathrm{ V_1 = \dfrac{R2 + R5}{R1 + R2 + R5}V_{DD} }$
and it's clear that a higher R5 will increase V1. For the Wheatstone bridge we have:
$\mathrm{ V_{OUT} = \left( \dfrac{R3}{R3 + R4} - \dfrac{R2}{R1 + R2} \right)(V_{DD} - V_{LOW}) }$
where
$\mathrm{ V_{LOW} = \dfrac{R5}{R5 + \dfrac{(R1 + R2)(R3 + R4)}{R1 + R2 +R3 + R4)}} V_{DD} }$
So also the Wheatstone bridge output changes when VLOW > 0. Taking the difference doesn't cancel out VLOW!, except in the trivial situation where V1 = V2.
Suppose R1 is a Pt100 RTD (Resistance Temperature Detector), which has a resistance of 100.0 $\Omega$ at 0 °C and 138.5 $\Omega$ at 100 °C. We assume that's the required measuring range. If the other resistors in the bridge are all 100 $\Omega$ the output voltage will be 0 V at 0 °C, and highest at 100 °C. We can expect the error due to R5 to be the highest at 100 °C.
The graph shows the reading error in % due to an R5 resistance varying from 0 $\Omega$ to 1 $\Omega$. The purple graph is for the resistor divider, the blue graph for the Wheatstone bridge. Wheatstone has the higher error! This may be surprising at first look, but can easily be explained: the bridge places two branches in parallel where the divider has only one, which halves the roughly 200 $\Omega$ of a single branch. That means that VLOW for the bridge will be twice as high.
The graph shows the error in output voltage reading, we have to calculate that back to a temperature value. This FET has an $\mathrm{R_{DS(ON)}}$ of 90 m$\Omega$ maximum. If we calculate our 100 °C reading back as if the resistance were zero, we'd get 99.90 °C. With this FET, with a 22 m$\Omega$ $\mathrm{R_{DS(ON)}}$ our reading would be 99.97 °C.
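For anyone who wants to experiment with the two formulas above, here is a direct numerical evaluation (a sketch in Python; it uses the same component values and R5 = 90 mΩ, and only reports the relative error of the voltage reading, not the temperature conversion):

```python
# Relative reading error caused by R5 for the divider and for the bridge.
VDD = 3.3
R1 = 138.5                 # Pt100 at 100 degrees C
R2 = R3 = R4 = 100.0
R5 = 0.090                 # 90 milliohm switch resistance

def divider(r5):
    return VDD * (R2 + r5) / (R1 + R2 + r5)

def bridge(r5):
    branches = (R1 + R2) * (R3 + R4) / (R1 + R2 + R3 + R4)   # the two arms in parallel
    v_low = VDD * r5 / (r5 + branches)
    return (R3 / (R3 + R4) - R2 / (R1 + R2)) * (VDD - v_low)

for name, out in (("divider", divider), ("bridge", bridge)):
    rel_err = abs(out(R5) - out(0.0)) / out(0.0)
    print(f"{name}: {rel_err * 100:.3f} % reading error")
# Both land below 0.1 %, with the bridge error the larger of the two.
```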
Conclusion
The resistance of the switch does influence the reading, but it will be less than 0.1 % when you use a FET with $\mathrm{R_{DS(ON)}}$ < 100 m$\Omega$.
(schematic images borrowed again from Olin. Thanks, Olin)
-
If you already use a Wheatstone bridge (as you say in the comment), then it's ok to use a MOSFET switch, since it will only affect the common mode voltage, and not the signal. Just make sure that it doesn't affect your eventual offset zeroing.
The circuit should be something like this:
Of course it's possible.
But surely it won't be appropriate for a measurement circuit. Depending on the $r_{DS}$ of your MOSFET, you will have a significant accuracy loss. Consider that the $r_{DS}$ is not a stable nor accurate value, and it's most often specified as a maximum value.
Now comes the question: why do you use a voltage divider to measure a resistor? You could achieve better accuracy (and also be able to use a MOSFET switch without accuracy loss) with a Wheatstone bridge.
Another side note: it's better to use an amplifier before sending the output signal to the ADC, otherwise you will greatly limit the dynamic range of the signal, and lose accuracy. Just a non-inverting amplifier with a precision Opamp (not 741 :)), rail-to-rail if you want to avoid the dual supply.
-
Hi, many thanks for your useful reply. Yes, I'm actually going to be using a wheatstone bridge connected to a unity gain op-amp. I just put in the voltage divider for simplicity's sake. You mentioned that if I use a wheatstone bridge, I could use a transistor to switch out the circuit. How to do this? – Eamorr Jun 8 '12 at 11:28
@Eamorr: But that makes things completely different, because the MOSFET unbalances the divider but not the bridge. I'd suggest you to refine the question with the real circuit. You can use CircuitLab, until we get a proper schematic editor – clabacchio♦ Jun 8 '12 at 11:29
Mmmm. Perhaps I should just use very large resistances to minimise the leakage current. I only want to take a temperature measurement every 60 seconds. Many thanks for your response, – Eamorr Jun 8 '12 at 11:30
Thanks for the circuit diagram. I have something almost exactly the same... – Eamorr Jun 8 '12 at 12:55
@clabacchio doesn't using an active amplifier kind of defeat the "low power" aspect the OP's question? – vicatcu Jun 8 '12 at 17:12
Yes, it's possible - you can use a P-channel MOSFET with source to Vdd, drain to divider and gate to uC or whatever you want to control it with. Also a pullup resistor from gate to source (say 10K)
Then to turn on just pull gate to ground, to turn off let it float (set uC pin to Hi-Z)
As noted, depending on what kind of accuracy you are aiming for this may not be the way to go. It's certainly not the most accurate, but if you are not too bothered about this then it is the simplest.
If you select a MOSFET with low Rds and check the min/max, then you can easily work out how it may affect your readings and decide.
EDIT - reading the comments, if you are measuring soil temperature and only need 0.5 deg C accuracy then I think something like the DS18B20 would probably be more suitable and easier to use than a PT100. Everything all in one little package with 2 or 3 wires to connect. You can also get them in convenient waterproof casing on eBay - here's an example.
-
http://mathoverflow.net/questions/93791/alpha-derivations | ## alpha derivations
I am curious. Is there a "slick" way of showing that given an arbitrary algebra $A$ with generating set $X$, an algebra endomorphism $\alpha : A\to A$ and a function (satisfying some conditions) $d : X\to A$, that $d$ extends uniquely to an $\alpha$-derivation $D : A\to A$?
-
I don't understand the tag "quantum-group". Isn't this just commutative algebra, specifically Kähler differentials? – Martin Brandenburg Apr 11 2012 at 18:37
@Martin, the question is not about derivations but $\alpha$-derivations, a notion which is usually found in "quantum contexts". – Mariano Suárez-Alvarez Apr 11 2012 at 19:20
When $M$ is an $A$-module, then I can talk about derivations $A \to M$. This is standard commutative algebra. When $M=A$ is considered as an $A$-module via $\alpha$, then we get $\alpha$-derivations. So this is really just a special case. My answer is valid for arbitrary modules. – Martin Brandenburg Apr 11 2012 at 19:29
Martin, given an algebra endomorphism $\alpha:A\to A$ of a not-necessaryli-commutative algebra and an $A$-bimodule $M$, a (left) $\alpha$-derivation is a map $d:A\to M$ such that $d(ab)=\alpha(a)d(b)+d(a)b$. This notion is not the same as the one you describe in your comment above. – Mariano Suárez-Alvarez Apr 11 2012 at 20:03
Oh, thanks. I've misunderstood this terminology. – Martin Brandenburg Apr 11 2012 at 20:12
## 2 Answers
Suppose $V$ is a vector space, let $TV=\bigoplus_{n\geq0}V^{\otimes n}$ be the tensor algebra on $V$ and let $I$ be an ideal of $TV$. Let $A=TV/I$ be the quotient algebra, let $p:TV\to A$ be the canonical projection and let $\alpha:A\to A$ be an endomorphism of algebras. Let, moreover, $\delta:V\to A$ be any linear map.
There is a (in fact unique) linear map $\delta_1:TV\to A$ such that
• the restriction of $\delta_1$ to $V\subseteq TV$ is $\delta$, and
• $\delta_1(xy)=\alpha(p(x))\delta(y)+\delta(x)p(y)$ for all $x$, $y\in TV$.
Indeed, these two conditions show that its restriction to $V^{\otimes n}$ must be given by $$\delta_1(x_1\otimes\cdots\otimes x_n)=\sum_{i=1}^n\alpha(p(x_1\cdots x_{i-1}))\delta(x_i)p(x_{i+1}\cdots x_n),$$ and if we use this formula to define $\delta_1$, a boring verification will show that we get a map that actually works.
Let $I$ be generated by elements $\{r_j\}_{j\in J}\subseteq TV$ and let us suppose that
$$\text{$\delta_1(r_j)=0$ for all $j\in J$.} \tag{$\star$}$$
It is then easy to see that $\delta_1(I)=0$, using the fact that the ideal $I$ is the linear span of all elements of the form $xr_jy$ with $x$, $y\in TV$ and $j\in J$. As a consequence, $\delta_1$ passes down to the quotient to give a linear map $\delta_2:A=TV/I\to A$ which by design is an $\alpha$-derivation.
We conclude that $(\star)$ is a sufficient condition for the existence of an $\alpha$-derivation extending $d:V\to A$, and a little reflection will show that it is also necessary. As Martin has shown earlier that we have uniqueness, we are happy.
The condition is one that one can check in concrete examples with little trouble.
An example. Let $q$ be a scalar and let $A$ be the free algebra generated by $K$, $L$, and $F$ subject to the relations \begin{gather} KL=1=LK, \\ FK=q^2KF. \end{gather} The first two relations tell us that $L=K^{-1}$, and using the second one, one can see without much pain that $\{F^aK^b:a\in\mathbb N,b\in\mathbb Z\}$ is a basis of $A$. A little extra work will show that there is an automorphism $\alpha:A\to A$ such that $$\alpha(F^aK^b)=q^{-2b}F^aK^b.$$ We want to construct an $\alpha$-derivation $d$ of $A$ such that $$d(F)=\frac{K-K^{-1}}{q-q^{-1}}$$ and $$d(K)=d(L)=0.$$
We let $V$ be the vector space with basis $\{K,L,F\}$, define $\delta:V\to A$ putting $\delta(K)=\delta(L)=0$ and $\delta(F)=(q-q^{-1})^{-1}(K-K^{-1})$ and use the technology developed above. We have the map $\delta_1:TV\to A$ and we have to check that it vanishes on the elements $KL-1$, $LK-1$ and $FK-q^2KF$. We have, for example, $$\delta_1(KL-1)=\alpha(K)\delta(L)+\delta(K)L=0$$ simply because $\delta$ kills $K$ and $L$, and similarly for the second relator. The third one is more interesting: \begin{align} \delta_1(FK-q^2KF)&=\alpha(F)\delta(K)+\delta(F)K-q^2\alpha(K)\delta(F)-q^2\delta(K)F\\ &=\delta(F)K-q^2\alpha(K)\delta(F)\\ &=\frac{K-K^{-1}}{q-q^{-1}}K-q^2(q^{-2}K)\frac{K-K^{-1}}{q-q^{-1}}\\ &=0. \end{align} We have checked our condition $(\star)$, so there is an $\alpha$-derivation $d:A\to A$ which does what we wanted.
This example is one of the steps required in showing that $U_q(\mathfrak{sl}_2)$ is an iterated Ore extension —indeed, $U_q(\mathfrak{sl}_2)=A[E;\alpha, d]$.
-
Note that this is exactly the same thing as for usual derivations! – Mariano Suárez-Alvarez Apr 12 2012 at 6:31
This also works if we have a base ring (instead of a base field); then $V$ is a module etc. – Martin Brandenburg Apr 12 2012 at 6:38
The key is to observe that the definition of a derivation $d$ immediately implies that $\ker(d)$ is a subalgebra of $A$. Now if $d,d'$ are derivations, then $d-d'$ is a derivation. Thus, if $d$ and $d'$ coincide on algebra generators, they coincide everywhere.
-
This seems to give me uniqueness if I have a derivation, however, am I correct that one can always extend an appropriate function $d : X\to A$ to a derivation? By "appropriate", the function should respect the defining relations of the algebra $A$ in some way correct? – Ryan Apr 12 2012 at 0:45
Yes, Mariano has given a perfect answer. I didn't know that we can give these sufficient conditions in general, because even for algebraic extensions of fields it depends on the minimal polynomials and the characteristic. – Martin Brandenburg Apr 12 2012 at 6:33
http://physics.stackexchange.com/questions/46573/what-are-the-strings-in-string-theory-made-of/47539 | # What are the strings in string theory made of?
This is a follow-up to an intriguing question last year about tension in string theory.
What are the strings in string theory composed of?
I am serious. Strings made of matter are complex objects that require a highly specific form of long-chain inter-atomic bonding (mostly carbon based) that would be difficult to implement if the physics parameters of our universe were tweaked even a tiny bit. That bonding gets even more complicated when you add in elasticity. The vibration modes of a real string are the non-obvious emergent outcome of a complex interplay of mass, angular momentum, various conservation laws, and convenient linearities inherent in our form of spacetime.
In short, a matter-based vibrating real string is the outcome of the interplay of most of the more important physics rules of our universe. Its composition -- what it is made of -- is particularly complex. Real strings are composed out of a statistically unlikely form of long-chain bonding, which in turn depends on the rather unlikely properties that emerge from highly complex multiparticle entities called atoms.
So how does string theory handle all of this? What are the strings in string theory made of, and what is it about this substance that makes string-theories simple in comparison to the emergent and non-obvious complexities required to produce string-like vibrations in real, matter-based strings?
Addendum 2012-12-28 (all new as of 2012-12-29):
OK, I'm trying to go back to my original question after some apt complaints that my addendum yesterday had morphed it into an entirely new question. But I don't want to trash the great responses that addendum produced, so I'm trying to walk the razor's edge by creating an entirely new addendum that I hope expands on the intent of my question without changing it in any fundamental way. Here goes:
The simplest answer to my question is that strings are pure mathematical abstractions, and so need no further explanation. All of the initial answers were variants of that answer. I truly did not expect that to happen!
While such answers are sincere and certainly well-intended, I suspect that most people reading my original question will find them a bit disappointing and almost certainly not terribly insightful. They will be hoping for more, and here's why.
While most of modern mathematical physics arguably is derived from materials analogies, early wave analogies tended towards placing waves within homogeneous and isotropic "water like" or "air like" media, e.g. the aether of the late 1800s.
Over time and with no small amount of insight, these early analogies were transformed into sets of equations that increasingly removed the need for physical media analogies. The history of Maxwell's equations and then SR is a gorgeous example. That one nicely demonstrates the remarkable progress of the associated physics theories away from using physical media, and towards more universal mathematical constructs. In those cases I understand immediately why the outcomes are considered "fundamental." After all, they started out with clunky material-science analogies, and then managed over time to strip away the encumbering analogies, leaving for us shiny little nuggets of pure math that to this day are gorgeous to behold.
Now in the more recent case of string theory, here's where I think the rub is for most of us who are not immersed in it on a daily basis: The very word "string" invokes the image of a vibrating entity that is a good deal more complicated and specific than some isotropic wave medium. For one thing the word string invokes (perhaps incorrectly) an image of an object localized in space. That is, the vibrations are taking place not within some isotropic field located throughout space, but within some entity located in some very specific region of space. Strings in string theory also seem to possess a rather complicated and certainly non-trivial suite of materials-like properties such as length, rigidity, tension, and I'm sure others (e.g. some analog of angular momentum?).
So, again trying to keep to my original question:
Can someone explain what a string in string theory is made of in a way that provides some insight into why such an unusually object-like "medium of vibration" was selected as the basis for building all of the surrounding mathematics of string theory?
From one excellent comment (you know who you are!), I can even give an example of the kind of answer I was hoping for. Paraphrasing, the comment was this:
"Strings vibrate in ways that are immediately reminiscent of the harmonic oscillators that have proven so useful analytically in wave and quantum theory."
Now I like that style of answer a lot! For one thing, anyone who has read Feynman's section on such oscillators in his lectures will immediately get the idea. Based on that, my own understanding of the origins of strings has now shifted to something far more specific and "connectable" to historical physics, which is this:
Making tuning forks smaller and smaller has been shown repeatedly in the history of physics to provide an exceptionally powerful analytical method for analyzing how various types of vibrations propagate and interact. So, why not take this idea to the logical limit and make space itself into what amounts to a huge field of very small, tuning-fork-like harmonic oscillators?
Now that I can at least understand as an argument for why strings "resonated" well with a lot of physicists as an interesting approach to unifying physics.
-
This is a google-searchable question, and I vote to close... The fundamental objects in String theory are strings (more generally, p-branes), so you cannot talk about their composition. – Chris Gerig Dec 11 '12 at 18:09
WHAT? YOU CHANGED THE QUESTION ENTIRELY. "Why strings" instead of "what are strings made of"... this is inappropriate etiquette, you should make a new question! (To answer this new question, the point is that string theory works, it recovers a ton of stuff we want/need, and that's why strings -- a wave is fundamental.) – Chris Gerig Dec 28 '12 at 8:40
And this new question is also google-searchable!! And questioning the theory as a philosophy is more appropriate as a meta question, or in the Philosophy StackExchange (if one exists). – Chris Gerig Dec 28 '12 at 8:57
No need to get crazy about it @Chris ;-) Terry, your addendum does diverge from the original point of the question. Could you reword it to stay more on point, or remove it post a separate question to ask why strings are assumed instead of some other system? – David Zaslavsky♦ Dec 28 '12 at 20:01
Dear Terry Bollinger. @Chris Gerig shouldn't be shouting but he nevertheless got a point. Why string theory? is a huge question in itself, and it would not be constructive to ask it as a subquestion to another question. Please roll back your question to e.g.: In string theory, what are strings made of? – Qmechanic♦ Dec 29 '12 at 17:00
## 7 Answers
OP wrote(v4):
[...] Strings in string theory also seem to possess a rather complicated and certainly non-trivial suite of materials-like properties such as length, rigidity, tension, and I'm sure others (e.g. some analog of angular momentum?). [...]
Well, the relativistic string should not be confused with the non-relativistic material string, compare e.g. chapters 6 and 4 in Ref. 1, respectively. In contrast, the relativistic string is e.g. required to be world-sheet reparametrization-invariant, i.e. the world-sheet coordinates are no longer physical/material labels of the string, but merely unphysical gauge degrees of freedom.
Moreover, in principle, all dimensionless continuous constants in string theory may be calculated from any stabilized string vacuum, see e.g. this Phys.SE answer by Lubos Motl.
OP wrote(v1):
What are strings made of?
One answer is that it is only meaningful to answer this question if the answer has physical consequences. Popularly speaking, string theory is supposed to be the innermost Russian doll of modern physics, and there are no more dolls inside that we can explain it in terms. However, we may be able to find equivalent formulations.
For instance, Thorn has proposed in Ref. 2 that strings are made of point-like objects that he calls string bits. More precisely, he has shown that this string bit formulation is mathematically equivalent to the light-cone formulation of string theory; first in the bosonic string and later in the superstring. The corresponding formulas are indeed quadratic a la harmonic oscillators (cf. a comment by anna v) with the twist that the "Newtonian mass" of the string bit oscillators is given by light-cone $P^+$ momentum. Thorn was inspired by fishnet Feynman diagrams (think triangularized world-sheets), which were discussed in Refs. 3 and 4. However, the string bit formulation does not really answer the question What are strings made of?; it merely adds a dual description.
References:
1. B. Zwiebach, A first course in String Theory.
2. C.B. Thorn, Reformulating String Theory with the 1/N Expansion, in Sakharov Memorial Lectures in Physics, Ed. L. V. Keldysh and V. Ya. Fainberg, Nova Science Publishers Inc., Commack, New York, 1992; arXiv:hep-th/9405069.
3. H.B. Nielsen and P. Olesen, Phys. Lett. 32B (1970) 203.
4. B. Sakita and M.A. Virasoro, Phys. Rev. Lett. 24 (1970) 1146.
-
This is nice, +1 – Dilaton Dec 30 '12 at 0:21
Qmechanic, wow, thank you!! That was superb and very much the kind of answer I had been hoping for! – Terry Bollinger Dec 30 '12 at 7:12
@TerryBollinger: seriously? This says nothing different than what has been said about strings. I do not understand your problem. – Chris Gerig Dec 30 '12 at 18:05
The question "what is xxx made of" is really asking "what can xxx be decomposed into"?
For example we know matter is made of atoms because it can be decomposed into atoms. We know atoms are made of electrons, protons and neutrons because atoms can be broken down into electrons, protons and neutrons. But electrons can't be decomposed into anything, so it's meaningless to ask what an electron is made of. We can ask what an electron's mass is, or its energy, or its spin, etc etc, but to ask what it's made from is a question that has no answer.
Exactly the same applies to a string. It is an object that has properties, but it's meaningless to ask what it's made from because it can't be broken down into anything.
-
It is not that meaningless to ask if strings are made of something else, it depends on the parameters of the theory and the context, see my (complementary) answer. – Dilaton Dec 29 '12 at 12:11
Lenny Susskind explains that the answer to this question depends on the parameters of the theory at 1:10:50 to the end of this video.
He makes use of the fact that the question if strings are fundamental or if they are composed of something else is analogous to the question if in electrodynamics, electrons or magnetic monopols have to be considered fundamental to by able to develope a perturbation theory with Feynman diagrams. It can be shown that magnetic charges q and electric charges e are related by
$$e\, q = 2 \pi$$
This means that if the charge of the electron (and therefore its mass) is small, the charge (and mass) of the magnetic monopoles is huge, and vice versa. If the charge and the mass of the electron are small, the electron is considered fundamental and a converging theory (QED) can be developed because the coupling constant $e$ is small. At the same time the magnetic monopoles are heavy complicated things composed of whole bunches of photons and magnetic charges because the coupling constant $q$ is large. This regime corresponds to what we observe, with QED being a weakly coupled theory and the magnetic monopoles (if they exist) being too heavy to be observed. Increasing the electric charge of the electron would lead to a transition to a situation where the electrons become heavy and complicated, and in this case it would be more useful to consider a quantized electromagnetic theory with the light magnetic monopoles described as fundamental particles.
A relationship similar to the one just described for electric and magnetic charges exists in string theory between fundamental (f-) strings and D-branes. Depending on the parameters of the theory, either it is more appropriate to consider the D-branes as complicated heavy things composed of fundamental strings, or the D-branes are light and fundamental whereas strings are heavy and complicated things composed of D-branes. The technical term describing this ambiguity is S-duality.
In summary, a uniques and universally valid answer to the question what strings are made of can not be given; it depends on the parameters of the theory and the context if it is more useful to consider strings or D-branes as fundamental.
-
In string theory the strings are fundamental objects and cannot be made of anything.
However, the strings of string theory, like more general extended objects --e.g., the membranes in brane theory--, can be considered to be made of more fundamental point-like objects.
An interesting picture is given in Point-Like Structure in Strings
I would finally emphasize that the D0-branes used in Banks' matrix theory or Thorn's bits are not the most fundamental pointlike objects. For instance, believing that the universe is really made of D0-branes would be as misguided as when string theorists believed that the universe was really made of strings.
-
This is to complete @Dilaton 's answer.
The very basic reason theoretical physicists are entranced by string models and their extensions is that they promise to be the framework of the Theory Of Everything, the holy grail of theoretical physics.
String theories and their extensions provide for the quantization of gravity, the long standing difficulty in formulating a TOE. If there existed a competing theory with composite preons or what not, which included the quantization of gravity in a unified manner, one would estimate the merits of each as a TOE. At the moment string theory has no competition, and strings are the fundamental entities of reality in these theories.
-
Yeah, thanks for the nice introduction Anna ;-) – Dilaton Dec 29 '12 at 12:50
In short, a string is a dislocation defect of the vacuum crystalline structure.
This, by the way, explains why the apparent full angle around a string seems to be smaller than $2 \pi$.
-
(Please note that I am attempting to answer my own question.)
In string theory it has been implicitly hypothesized, but not rigorously proven, that the mathematical constructs used to describe the vibrations of certain isotropic materials in simple geometric shapes (e.g. strings or rings) are examples of mathematical constructs so fundamental that they show up at many different levels and circumstances in physics. If this string hypothesis is true, then the vibrations of ordinary material strings are a distant echo of these far more fundamental mathematical rules.
There is a precedent for this in modern physics. When James Clerk Maxwell first integrated the knowledge of electricity and magnetism of his time into a single unified theory, he did not at first use differential equations. Instead, he intentionally built a purely mechanical rotating-cell model based on generalizations of specific properties encountered in everyday material objects.
It was not until after Maxwell had used his mechanical model to prove that light was electromagnetic radiation that he realized that all the behaviors of his model could instead be expressed using 20 quaternion-based differential equations with 20 variables, which Oliver Heaviside later compressed down to a mere four vector-based equations that are now called (not entirely accurately) "Maxwell's equations".
The situation hypothesized for string theory is roughly comparable. It also starts with a materials-based dynamic analogy, and similarly proposes that the mathematics that describe that dynamic model are expressions of some deeper and more profound mathematical model. However, it is also true that what constitutes a "fundamental" set of mathematical constructs is much less clear in string theory than it was for the far simpler domain of electromagnetism. That is in part because for string theory the analogies from materials science are extended to much higher numbers of dimensions, and because string theory postulates modes of vibration that are not accessible to direct experimentation and refinement.
-
-1, you gave literally no explanation, but went off on random tangents in history. My comment above on your question takes care of this. – Chris Gerig Dec 25 '12 at 0:49
Heh! To be accurate, Chris, reading your comment is why I thought I should at least try to put a more positive spin on the answers so far. Declaring something "fundamental" because, well, it just is has to be one of the weakest approaches imaginable for analyzing the structure of a truly complex problem. Are you aware of no data whatsoever to defend why the p-brane approach arguably fits the structure of universe better and with fewer assumptions? If so, what is it? Have analyses or comparisons been done? Does a p-brane approach produce persuasive predictions, or massive simplifications? – Terry Bollinger Dec 26 '12 at 7:49
"Declaring something "fundamental" because, well, it just is has to be one of the weakest approaches imaginable"... This is irrelevant. Your question was "what are strings made of", and the answer is that they are not made of anything. QED. If you want to change the question, make a new post (do not abruptly alter the current one!). – Chris Gerig Dec 28 '12 at 8:39
Terry, when I first heard of string theory as the TOE, my first reaction was "of course, the ubiquitous harmonic oscillator". If you have solved enough quantum mechanical problems you would know that the first approximation to the solution of symmetrical potentials is the harmonic oscillator, because it is the first term in the Taylor expansion of any symmetrical potential. It could be that the real functional form of "string theories" has higher order terms, but at the moment it seems that this is a good candidate for a TOE. – anna v Dec 29 '12 at 12:35
anna v, I love your "If you have solved enough quantum mechanical problems" whack (mea culpa, mea culpa)! I also sort of wish you had made that into an answer. Since most folks reading this haven't had that very cool experience base, the transition from intensive use of harmonic oscillators (a much earlier materials-to-fundamental transition) just doesn't leap out as a reason for going to the specific structure (1D with tension) of strings. Even with your great point, I still can't help thinking "but isn't that taking the analogy in the wrong direction -- more towards material objects?" – Terry Bollinger Dec 29 '12 at 16:34
http://math.stackexchange.com/questions/17533/laws-of-distribution | Laws of distribution
Is it legal to distribute the $\land$ or the $\lor$ operators over the $\implies$ operator? For example, is it legal for me to do the following?
$(p\land(p\implies q))\implies q$
$(p\land p) \implies (p\land q) \implies q$
Put more simply: Is the "AND" operator distributive over the "IMPLIES" operator.
-
It is legal in the U.S. in the sense that there is no law that requires you to do only logically correct things, so writing this will not break any law (and I'm fairly sure that this is also true elsewhere in the world). However, this is not a rule of inference that is a tautology. – Arturo Magidin Jan 15 '11 at 0:19
3 Answers
No, in general you can't replace $a \wedge (b \implies c)$ with $(a \wedge b) \implies (a \wedge c)$. If $a$ is true, then both expressions have the same truth value. However, if $a$ is false, then the first expression is false and the second expression is true.
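A quick way to check this mechanically — and the two original formulas as well — is to enumerate all four truth assignments (a small Python sketch; `implies` is just a helper name):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q in product([True, False], repeat=2):
    left  = p and implies(p, q)              # p ∧ (p → q)
    right = implies(p and p, p and q)        # (p ∧ p) → (p ∧ q), the "distributed" form
    print(p, q,
          implies(left, q),                          # (p ∧ (p → q)) → q : always True
          implies(p and p, implies(p and q, q)),     # right-associated 2nd formula: always True
          implies(right, q),                         # left-associated 2nd formula: False when p = q = False
          left == right)                             # distribution itself: fails whenever p is False
```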
-
The "truth table" http://en.wikipedia.org/wiki/Truth_table is an easy method to verify and even proove such assumptions.
-
Thanks for the link, but I'm not trying to prove this proposition. I'm trying to find out if the "AND" operator is distributive over the "IMPLIES" operator. – lampShade Jan 14 '11 at 20:48
2
Then build a truth table for both equations and see if they're identical? – Listing Jan 14 '11 at 20:54
Assuming that $\implies$ associates to the right (so by $A \implies B \implies C$ you mean $A \implies (B \implies C)$), then for your examples it works out, since they're both tautologies.
Otherwise, your example doesn't work out, since the second statement is no longer a tautology.
-
http://mathhelpforum.com/statistics/105884-p-h-heads-row-given-n-coin-flips.html | # Thread:
1. ## P(H heads in a row) given N coin flips
Hi,
I recently thought of the following problem and my basic grasp of probability and combinatorics is not enough to handle it:
You flip a coin N times. What is the probability P(N,H) that you get at least one sequence of at least H heads in a row?
Since this question deals with the number of times a result will happen, I tried to use a Poisson Distribution, but I'm having trouble separating the problem into mutually exclusive events.
I wrote a program to do a simulation and found that P(20,7) ≈ 0.058, so please make sure your answer fits with this result.
2. Originally Posted by lambda
Hi,
I recently thought of the following problem and my basic grasp of probability and combinatorics is not enough to handle it:
You flip a coin N times. What is the probability P(N,H) that you get at least one sequence of at least H heads in a row?
Since this question deals with the number of times a result will happen, I tried to use a Poisson Distribution, but I'm having trouble separating the problem into mutually exclusive events.
I wrote a program to do a simulation and found that P(20,7) ≈ 0.058, so please make sure your answer fits with this result.
I would do it this way: treat the "H heads in a row" as a single object and call it "R" for "row". With N flips and "H heads in a row", there are N- H other flips. How many different ways are there to order 1 object and N- H other objects?
3. ## Problem - counts certain cases twice
Originally Posted by HallsofIvy
I would do it this way: treat the "H heads in a row" as a single object and call it "R" for "row". With N flips and "H heads in a row", there are N- H other flips. How many different ways are there to order 1 object and N- H other objects?
I tried this approach and got $P(n,h) = \frac{\text{acceptable series}}{\text{total possible series}} = \frac{2^{n-h}(n-h + 1)}{2^n} = \frac{n-h + 1}{2^h}$ but kept getting answers that were roughly 2x too large.
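As a follow-up sketch (added for illustration; it assumes a fair coin and is not the program mentioned above), the exact probability can be computed with a short dynamic program over the length of the current run of heads:

```python
def prob_run(n, h):
    """Probability that n fair coin flips contain at least one run of >= h heads."""
    # state[j] = probability of currently sitting on a head-run of length j,
    # restricted to sequences that have not yet produced a run of length h
    state = [0.0] * h
    state[0] = 1.0
    hit = 0.0  # probability mass that has already achieved a run of h heads
    for _ in range(n):
        new = [0.0] * h
        for j, p in enumerate(state):
            new[0] += 0.5 * p          # tails resets the run
            if j + 1 == h:
                hit += 0.5 * p         # heads completes a run of length h
            else:
                new[j + 1] += 0.5 * p  # heads extends the run
        state = new
    return hit

print(prob_run(20, 7))  # ≈ 0.0582, consistent with the simulated 0.058 above
```

The block-counting formula above overcounts because strings that admit more than one placement of the block of H heads are counted once per placement, which is where the discrepancy comes from.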
http://mathhelpforum.com/number-theory/122192-infinitely-many-composite-numbers-print.html | # Infinitely many composite numbers
• January 2nd 2010, 06:58 AM
usagi_killer
Infinitely many composite numbers
Let $a$ and $b$ be positive integers and let the sequence $(x_n)_{n\ge 0}$ of positive integers be defined by a positive integer $x_0$ together with $x_{n+1} = a x_n + b$ for all non-negative integers $n$. Prove that for any choice of $a$ and $b$, the sequence $(x_n)$ contains infinitely many composite numbers.
How do I do this question? I just started self-learning number theory as recreation so yeah if anyone can show me how to solve it, I would really appreciate it.
• January 2nd 2010, 08:11 AM
Drexel28
Quote:
Originally Posted by usagi_killer
Let $a$ and $b$ be positive integers and let the sequence $(x_n)_{n\ge 0}$ of positive integers be defined by a positive integer $x_0$ together with $x_{n+1} = a x_n + b$ for all non-negative integers $n$. Prove that for any choice of $a$ and $b$, the sequence $(x_n)$ contains infinitely many composite numbers.
How do I do this question? I just started self-learning number theory as recreation so yeah if anyone can show me how to solve it, I would really appreciate it.
I understand that you are doing mathematics recreationally and that you just want to see the solution so that you better understand the topic, but that being said I would like to quote the mathematician George Simmons: "Despite our vast advances in mathematics, nothing has replaced the necessity of problem solving."
So, what have you tried?
• January 2nd 2010, 09:30 AM
NonCommAlg
note that the sequence is strictly increasing and also we may assume that $\gcd(a,b) = 1.$ suppose that the claim is false. so there exists some $\ell \in \mathbb{N}$ such that $x_k$ is prime for all $k \geq \ell.$
choose $k \geq \ell$ large enough so that $x_k=p > 2.$ see that $p \nmid a$ and $x_{k+m}=a^mp + (a^{m-1} + \cdots + a + 1)b,$ for all integers $m \geq 1.$ let $a \equiv r \mod p,$ where $1 \leq r \leq p-1.$ now, if $r=1,$
choose $m=p$ and if $r > 1,$ choose $m=p-1.$ see that $p \mid x_{k+m},$ which is a contradiction.
Note: I left a couple of things for the OP to discover, so Drexel28 wouldn't get too mad at me!
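As a purely numerical illustration of the statement (a sketch added here, not part of NonCommAlg's argument; it takes $x_0 = 1$ just for concreteness), one can list the composite terms that show up for any small choice of $a$ and $b$:

```python
def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def composite_terms(a, b, x0=1, terms=25):
    """Indices n <= terms with x_n composite, for the recurrence x_{n+1} = a*x_n + b."""
    found, x = [], x0
    for n in range(terms + 1):
        if x > 1 and not is_prime(x):
            found.append((n, x))
        x = a * x + b
    return found

print(composite_terms(2, 1)[:5])  # with a=2, b=1 the terms are x_n = 2^(n+1) - 1; e.g. x_3 = 15 is composite
```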
http://mathoverflow.net/questions/96510?sort=votes | ## Have we ever lost any mathematics?
The history of mathematics over the last 200 years has many occasions when the fundamental assumptions of an area have been shown to be flawed, or even wrong. Yet I cannot think of any examples where, as a result, the mathematics itself had to be thrown out. Old results might need a new assumption or two. Certainly the rewritten assumptions often allow wonderful new results, but have we actually lost anything?
Note I would like to rule out the case where an area has been rendered unimportant by the development of different techniques. In that case the results still hold, but are no longer as interesting.
I wrote up a longer version of this question with a look at a little of the history: http://maxwelldemon.com/2012/05/09/have-we-ever-lost-mathematics/
Edit in response to comments
My thinking was about results that have been undermined from below. @J.J Green's example in the comments of Italian algebraic geometry seems like the best example I have seen. The trisection and individually wrong results do not seem to grow into areas, but certainly I would find interesting any example where a flawed result had built a small industry before it was found to be wrong. I am fascinated by mathematics that has been overlooked and rediscovered (ancient and modern) but that is perhaps a different question.
-
12
If we had truly lost it, can you expect us to know enough about it to tell you? Gerhard "Still Looking For A Proof" Paseman, 2012.05.09 – Gerhard Paseman May 9 2012 at 22:03
6
I had some mathematics in my pocket the other day, but I seemed to have lost it. Perhaps it is just buried in the mess of my desk... – Asaf Karagila May 9 2012 at 22:34
14
Didn't something along these lines happen to Italian algebraic geometry in the 1930s? see en.wikipedia.org/wiki/… for example – J.J. Green May 9 2012 at 23:13
9
This sounds like a different phenomenon from the one that you are refering to (is it?), but Indian Mathematics, Chinese Mathematics, Babylonian Mathematics, etc. were effectively "lost" (at least in large part), and have only recently been partially "rediscovered" as "archeology". More recently, 19th century invariant theory. It wasn't that they were false; it was that numerical methods became less valuable for these problems, because general methods were discovered, or else the calculations didn't draw enough attention because of lack of practical applications. – Daniel Moskovich May 10 2012 at 0:06
7
According to cecm.sfu.ca/organics/covering/html/node4.html, "We have reached the point of decay in some areas. Richard Askey has observed that Gregory Chudnovsky knows things about hypergeometric functions that no one has understood since Riemann and that, with Chudnovsky's eventual passing, no one is likely to understand again." I've wondered what this refers to, but I've never asked Askey whether this quote is accurate or what he meant. – Henry Cohn May 10 2012 at 4:25
## 9 Answers
There are "Lectures on Lost Mathematics" by B. Grünbaum. They were given at the University of Washington in 1975. The notes are available here
-
9
Not to forget the book "A la recherche de la topologie perdue" edited by Guillot and Marin. – anonymous May 10 2012 at 6:27
@anonymous: why do you mention that book? Is it about lost math? – HJ May 13 2012 at 7:29
3
Given that the title translates roughly as "Remembering Lost Topology," I'd assume so. – Daniel McLaury May 14 2012 at 7:19
I was once told of a paper in homological algebra where a new class of functors was introduced, generalizing Ext and Tor. For some years they were studied, and various properties were proved. Finally someone managed to give a complete description of the entire class. It consisted of two elements, Ext and Tor. (Sorry, I don't have more details.)
-
2
This is interesting, anyone have more details? – Edmund Harriss May 10 2012 at 9:44
6
This reminds me of a colloquium talk I heard at Harvard some 40 years ago, in which a famous speaker generalized some results of Bott to an abstract setting. At the end, Bott, who was sitting in front, asked the speaker if he knew any examples of his theoretical objects other than, as I recall, sections of vector bundles over (possibly compact?) manifolds. The answer was "No, I don't." – roy smith May 10 2012 at 16:04
1
Edmund, this may be related to your question. mathoverflow.net/questions/93716/… – roy smith May 10 2012 at 16:14
I heard the same story about a PhD thesis, where someone in the audience announced during the defense that the class of objects having the amazing properties described by the student was actually empty. I assume all such stories are apocryphal. – JeffE May 13 2012 at 8:47
I feel the answer is obviously "yes", and indeed that much of 19th century mathematics was lost, in a serious sense, for much of the 20th century. I was struck recently by discovering that Henry Fox Talbot, the photographic pioneer, had written on what is clearly the area around Abel's theorem for curves, and that probably it is a long time since anyone reconstructed what he was doing. Also that George Boole's main work, as far as his contemporaries were concerned, dropped out of sight within a couple of decades.
The fact is that mathematics now is (a) axiomatic and (b) dominated by a canon. I'm reminded of Bertrand Russell's nightmare - where, a century after his death, the last copy of the Russell-Whitehead Principia Mathematica is in danger of being thrown out by an ignorant librarian. It actually isn't obvious that even such a pioneering work makes it into the mathematical logic "canon" around later developments. (I hear protests!) Maybe it is worth pointing out Hilbert's interest in Anschauliche Geometrie, in other words non-axiomatic, intuitive geometry. And the canon should be "porous", as has been argued by some of the Moscow school. It seems quite an illuminating take on mathematics as a living tradition that simple accretion of "known results" is misleading.
-
I always understood Russell's nightmare as reflecting insecurity in the lasting importance of his work, rather than the intelligence of its judges. – Pablo Zadunaisky Dec 16 at 22:34
Hilbert's $16^{\rm th}$ problem.
In 1923 Dulac "proved" that every polynomial vector field in the plane has finitely many cycles [D]. In 1955-57 Petrovskii and Landis "gave" bounds for the number of such cycles depending only on the degree of the polynomial [PL1], [PL2].
Coming from Hilbert, and being so central to Dynamical Systems developments, this work certainly "built a small industry". However, Novikov and Ilyashenko disproved [PL1] in the 60's, and later, in 1982, Ilyashenko found a serious gap in [D]. Thus, after 60 years the state-of-the-art in that area was back almost to zero (except of course, people now had new tools and conjectures, and a better understanding of the problem!).
See Centennial History of Hilbert's 16th Problem (citations above are from there) which gives an excellent overview of the problem, its history, and what is currently known. In particular, the diagram in page 303 summarizes very well the ups and downs described above, and is a good candidate for a great mathematical figure.
-
Grothendieck burned some of his works. I would think that whatever mathematics was in them is lost!
-
3
I think it's clear from the body of the question that OP is asking about mathematics that was first accepted and then shown to be wrong, so this is not the kind of thing OP is asking for. – Gerry Myerson May 13 2012 at 5:40
Good point! Grothendieck's math almost always seem to be universally pursued though!! – unknown (google) May 13 2012 at 8:04
I don't know if this is an example of what you're asking. In mathematical logic, the Hilbert Program of the 1920's intended to come up with a finitary consistency proof and a decision procedure for analysis and set theory. Many luminaries including Hilbert himself, Bernays, Ackermann, von Neumann, etc. gathered in Göttingen for this purpose. Ackermann in 1925 published a consistency proof for analysis (that turned out to be incorrect) and many other promising results emerged. Then in 1931, Gödel's incompleteness theorem shut the whole thing down. Some valid theorems came out of it, but the program as a whole had to be (in some interpretations) completely abandoned.
http://en.wikipedia.org/wiki/Hilbert_program
-
This is a great example, but it shows more that mathematics itself is not lost. In fact this, as part of the quest for the foundations is perhaps the canonical example. Doubt was cast on the foundations of the whole subject. Yet we only lost research directions rather than worlds of results. – Edmund Harriss May 12 2012 at 22:47
An exposition in this vein about Arabic mathematics:
http://www-history.mcs.st-andrews.ac.uk/HistTopics/Arabic_mathematics.html
-
Volume II of Frege's Grundgesetze der Arithmetik (Basic Laws of Arithmetic) had already been sent to the press when Bertrand Russell informed him that what we now call "Russell's paradox" could be derived from one of his basic laws. I do not know to what extent Frege's work was known and publicly accepted (volume I was published 10 years before volume II), but this seems a clear case where a major body of work was undermined "from below", to use the words of the OP.
Upon learning of Russell's observation, Frege quickly wrote up an appendix to volume II, where he writes, "Hardly anything more unfortunate can befall a scientific writer than to have one of the foundations of his edifice shaken after the work is finished. This was the position I was placed in by a letter of Mr. Bertrand Russell, just when the printing of this volume was nearing its completion." (This translation appears in the Wikipedia article.)
-
I guess Conways "lost proofs" qualify:
http://www.aimsciences.org/journals/pdfsnews.jsp?paperID=2447&mode=full
-
The link doesn't work for me. – JeffE May 13 2012 at 8:44
I think it's clear from the body of the question that OP is asking about mathematics that was first accepted and then shown to be wrong, so this is not the kind of thing OP is asking for. – Gerry Myerson May 13 2012 at 11:59
http://www.physicsforums.com/showthread.php?p=3444317 | Physics Forums
## Need help raising the voltage on a power supply that I am making.
I am currently building a DC power supply for a HHO Fuel cell. I am looking for HIGH current at a relatively low voltage (12-15 volts). I already have modified a MOT (Microwave oven Transformer) for high current. I replaced the high voltage secondary with 3 secondaries each consisting of 12 windings of 16 gauge wire. Each of the secondaries are in parallel and then rectified. The 16 gauge wire is rated at 24 amps according to MCMASTER-CARR where it was purchased(Here is the product number of the exact wire that I am using if you don't believe me - 7587K079). So do the math and my 3 coils can withstand 72 amps within the rating of the wire.
Anyways my problem is that I get about 9.2 volts from the secondary and I am sure my bridge rectifier will drop that below 8 volts which is where it needs to be at to power my HHO cell. I have thought of a solution but was wondering if anyone could point out flaws or offer better solution. So here is my solution:
I want to modify a 2nd MOT for high current and then put it in series with the other MOT I have. I think that this would double the voltage, however I have not had much experience with AC and especially power sources connected in a series-parallel fashion(Remember that each MOT has 3 secondaries that are in parallel).
PS: FYI After much research I have found that a Router Speed Controller works wonders for controlling transformers.
What about the primary? You cannot just increase the current capability of the secondary and expect to draw more current while maintaining the same voltage. As you draw more power from the secondary, it reflects back to the primary. If the wire in the primary is too small, you get too much voltage drop as resistive loss in the primary, and the secondary voltage sags. Even if you parallel up the primary, you still have to worry about the core size. If the core is too small you can run into saturation, the $\mu$ drops, and you don't get the power coupling. All you do is generate a lot of heat and burn up the transformer. This is like taking a transformer with a 110 V to 12 V, 1 A rated secondary, paralleling two more secondaries, and expecting it to magically become capable of 12 V at 3 A. It doesn't work that way!!! If you have two of these transformers, put them in parallel and you get the increased capability without having to monkey with windings. Just make sure you get the polarity correct, or else you can short both of them out by putting them in parallel.
I am not trying to squeeze more power out of the primary than it was originally designed for. At first the original primary consisted of hundreds of turns of very thin wire. This was designed to carry very high voltage at low current. I replaced the very thin wire with thick wire that will carry high current at low voltage. Right now with only one MOT I can 9x72=648 watts out of the single MOT, which is much lower than the MOT is rated to handle. I have tested my MOT and it can handle the current that I want to run at without the primary even getting warm. My question is can I connect 2 transformers in series. Thanks for the reply
Quote by HHOboy I am not trying to squeeze more power out of the primary than it was originally designed for. At first the original primary consisted of hundreds of turns of very thin wire. This was designed to carry very high voltage at low current. I replaced the very thin wire with thick wire that will carry high current at low voltage. Right now with only one MOT I can 9x72=648 watts out of the single MOT, which is much lower than the MOT is rated to handle. I have tested my MOT and it can handle the current that I want to run at without the primary even getting warm. My question is can I connect 2 transformers in series. Thanks for the reply
OK, you did not mention that you replaced the primary also. I re-read your post. Do you mean your secondary winding is only 12 turns of 16 gauge? What frequency are you running? You are expecting about 1 V per turn, so make sure your frequency is high enough; it will never work at 60 Hz. I am no transformer expert, but I had a switching HV power supply engineer working for me and we talked a lot. You need a certain frequency to get a certain voltage per turn. So look into this.
So the frequency changes the voltage in the coils I will look into that. Thanks
You need a high enough primary reactance. That's the reason for needing more turns per volt.
Quote by HHOboy So the frequency changes the voltage in the coils I will look into that. Thanks
It's not that the frequency changes the voltage; you need more turns if you run at a lower frequency. For 60 Hz, you need more turns per volt. This is due to the core. I don't know the detailed theory, but my whole idea at the time was doing high-voltage-isolation DC-to-DC converters that could stand off 15 kV. We needed as few turns as possible, so I decided to run the transformer at about 1 MHz, and we got up to 6 or 7 V per turn. My power supply engineer did the design.
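For anyone following along, the turns-per-volt numbers can be estimated from the usual transformer EMF equation $E_\text{rms} = 4.44\, f N A B_\text{peak}$. The core figures in the sketch below are assumed ballpark values for a microwave-oven transformer, not measurements from this particular unit:

```python
def volts_per_turn(f_hz, core_area_m2, b_peak_t):
    """RMS volts per turn for a sine-driven core: E/N = 4.44 * f * A * B_peak."""
    return 4.44 * f_hz * core_area_m2 * b_peak_t

f = 60.0         # mains frequency, Hz
area = 18e-4     # assumed core cross-section, ~18 cm^2 (MOT-sized core)
b_peak = 1.6     # assumed peak flux density, tesla (MOT cores run close to saturation)

vpt = volts_per_turn(f, area, b_peak)
print(f"{vpt:.2f} V per turn")            # ~0.77 V per turn with these assumed figures
print(f"{12 * vpt:.1f} V from 12 turns")  # ~9.2 V, in line with the measurement reported above
```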
http://physics.stackexchange.com/questions/tagged/porous-media+fluid-dynamics | # Tagged Questions
1answer
132 views
### Darcy law yields extreme speed for gas flow through packed spheres?
Darcy's law is as follows: $u=-\frac{k}{\mu}\nabla p$. Assume we have a gas; then $\mu$ is about $10^{-5}$. $k$ for packed spheres a few mm in diameter is of order $10^{-8}$ $m^2$. Say the ...
0answers
78 views
### How do I simulate a constant velocity flow in porous media
I am modelling gas combustion in porous media. Most contemporary models assume that the pressure drop from the porous media is small enough to disregard, but I want to include that in my ...
1answer
74 views
### How can one build a multi-scale physics model of fluid flow phenomena?
I am working on a problem in Computational Fluid Dynamics, modeling multi-phase fluid flow through porous media. Though there are continuum equations to describe macroscopic flow (darcy's law, ...
2answers
280 views
### Laws of fluid flow in porous medium
What are the equations that describe the flow of a fluid in a porous medium? Is there a variation of the Navier-Stokes equations? I would like to model the flow of air through a sponge-like ...
3answers
1k views
### When water climbs up a piece of paper, where is the energy coming from?
Take a glass of water and a piece of toilet paper. If you keep the paper vertical, and touch the surface of the water with the tip of the paper, you can see the water being absorbed and climbing up the ...
http://crypto.stackexchange.com/questions/3110/impacts-of-not-using-rsa-exponent-of-65537 | # Impacts of not using RSA exponent of 65537
This RFC says the RSA Exponent should be 65537. Why is that number recommended and what are the theoretical and practical impacts & risks of making that number higher or lower?
What are the impacts of making that value a non-Fermat number, or simply non prime?
-
## 2 Answers
Using $e\ne65537$ would reduce compatibility with existing hardware or software, and break conformance to some standards or prescriptions of security authorities. Any higher $e$ would make the public RSA operation (used for encryption, or signature verification) slower. Some lower $e$, in particular $e=3$, would make that operation appreciably faster (up to 8.5 times). If using a proper padding scheme, the choice of $e$ is not known to make a security difference; but for many less than perfect padding schemes that have been (or are still) used, high values of $e$ are generally safer.
$e=65537$ is a common compromise between being high and increasing the cost of raising to the $e$-th power: any higher odd $e$ costs at least one more multiplication (or squaring), which is true even for odd exponents of the form $2^k+1$. Also, $e=65537$ is prime, which slightly simplifies generating a prime $p$ suitable for an RSA modulus, implying $\gcd(p-1,e)=1$, which reduces to $p\not\equiv 1\pmod e$ for prime $e$. Only the Fermat primes $3,5,17,257,65537$ have both properties, and all are common choices of $e$.
It is conjectured that there are no other Fermat primes; and if there were any, it would be unusably huge.
Using $e=65537$ (or higher) in RSA is an extra precaution against a variety of attacks that are possible when bad message padding is used; these attacks tend to be more likely or devastating with much smaller $e$. Using $e=3$ would otherwise be attractive, since raising to the power $e=3$ costs 1 squaring and 1 multiplication, compared to 16 squarings and 1 multiplication when raising to the power $e=65537=2^{16}+1$.
For example, RSA with $e=65537$ has a security advantage over $e=3$ when:
1. Sending a message naively encrypted as $\mathtt{ciphertext}=\mathtt{plaintext}^e\bmod n$; the greater $e$ makes it more likely that $\log_2(\mathtt{plaintext})\gg \log_2(n)/e$ (which is necessary for security).
2. Sending the same message encrypted to $k$ recipients using the same padding (including any deterministic padding independent of $n$); the greater $e$ makes it less likely that $k\ge e$ (which allows a break).
3. Signing messages chosen by the adversary with a bad signature scheme. For example, with the scheme of the (withdrawn) ISO/IEC 9796 standard (described in HAC section 11.3.5), the adversary could obtain a forged signature from only 1 legitimate signature if $e=3$, but needs 3 legitimate signatures for $e=65537$; trust me on that one. The security advantage of $e=65537$ is wider for attacks against scheme 1 of the (current) ISO/IEC 9796-2.
For more explanations and examples of the risk of the combination of questionable message padding and low $e$, see section 4 in Twenty Years of Attacks on the RSA Cryptosystem.
There is no known technical imperative not to use $e=3$ when using a sound message padding scheme, such as RSAES-OAEP or RSASSA-PSS from PKCS#1, or scheme 2 or 3 from ISO/IEC 9796-2. However, it still makes sense to use $e=65537$:
• The only known drawbacks are the performance loss (by a factor like 8), and the risk of leaving a bug in the key generator when a prime $p\equiv 1\pmod{65537}$ is hit; and when performance matters, there is an even better choice than $e=3$, with provable security.
• Some attacks on less than perfect RSA schemes that are or have been in wide use are significantly harder than with $e=3$ (as discussed above).
• $e=65537$ has become an industry standard (I have yet to find any RSA hardware or software that does not allow it), and is prescribed by some certification authorities.
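To put numbers on the cost comparison in this answer, here is a small added sketch (not part of the original answer) counting the operations used by a plain left-to-right square-and-multiply; real implementations use better exponentiation and CRT, but the ratio between exponents is the point:

```python
def square_multiply_cost(e):
    """(squarings, multiplications) for naive left-to-right square-and-multiply with exponent e."""
    bits = bin(e)[2:]
    squarings = len(bits) - 1              # one squaring per bit after the leading 1
    multiplications = bits.count('1') - 1  # one multiplication per additional set bit
    return squarings, multiplications

for e in (3, 5, 17, 257, 65537):
    s, m = square_multiply_cost(e)
    print(f"e = {e:6d}: {s:2d} squarings + {m} multiplication(s)")
# e = 3     -> 1 squaring   + 1 multiplication
# e = 65537 -> 16 squarings + 1 multiplication, matching the counts quoted above
```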
-
– makerofthings7 Jul 1 '12 at 13:07
@makerofthings7: Yes, thank you. I updated the answer. – fgrieu Jul 2 '12 at 8:01
65537 is commonly used as a public exponent in the RSA cryptosystem. This value is seen as a wise compromise, since it is famously known to be prime, large enough to avoid the attacks to which small exponents make RSA vulnerable, and can be computed extremely quickly on binary computers, which often support shift and increment instructions. Exponents in any base can be represented as shifts to the left in a base positional notation system, and so in binary the result is doubling - 65537 is the result of incrementing shifting 1 left by 16 places, and 16 is itself obtainable without loading a value into the register (which can be expensive when register contents approaches 64 bit), but zero and one can be derived more 'cheaply'. -wikipedia ('twas lazy)
-thus, lower is vulnerable to quick factoring, higher is not insecure, but computationally more expensive.
-
1
Lower isn't vulnerable with proper padding. But it is harder to screw up with higher exponents, indeed. – Thomas Jul 1 '12 at 6:40
No, lower is NOT vulnerable to quick factoring; whatever (odd) e won't increase or decrease the risk of having the public modulus factored. – fgrieu Jul 1 '12 at 10:26
http://mathhelpforum.com/algebra/25128-logarithm-problem.html | # Thread:
1. ## logarithm problem
logarithm problem
Attached Thumbnails
2. Originally Posted by afeasfaerw23231233
logarithm problem
The first question can be transformed into a quadratic in $2^x+2^{-x}$, by observing that:
$(2^x+2^{-x})^2=2^{2x}+2+2^{-2x}=[4^x+4^{-x}]+2$
So the equation becomes:
$2[2^x+2^{-x}]^2 -7[2^x+2^{-x}]+6=0$
Now solve this quadratic in $2^x+2^{-x}$ and proceed from there.
RonL
3. Originally Posted by afeasfaerw23231233
logarithm problem
Hello,
$(2^x+2^{-x})^2=2^{2x}+2+2^{-2x} = (4^x+4^{-x})+2$
Therefore use $(2^x+2^{-x}) = y$. Then $(4^x+4^{-x}) = y^2-2$ . Your equation becomes:
$2(y^2-2)-7y+10=0$ . Now continue. Don't forget to re-substitute to calculate x.
= = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = = =
With the second problem you can use the property of logarithms:
$\log_b(a)=c~\implies~\log_a(b)=\frac1c~\implies~\frac1{\log_a(b)} = c$
4. Hello, afeasfaerw23231233!
$\log_x4 - \log_2x \:=\:1$
From the Base-Change Formula: . $\log_x4 \:=\:\frac{\log_24}{\log_2x}\:=\:\frac{2}{\log_2x}$
The equation becomes: . $\frac{2}{\log_2x} - \log_2x \:=\:1$
. . which simplifies to: . $\left(\log_2x\right)^2 + \log_2x - 2 \:=\:0$
. . which factors: . $\left(\log_2x + 2\right)\left(\log_2x - 1\right) \:=\:0$
Then: . $\log_2x + 2 \:=\:0\quad\Rightarrow\quad \log_2x \:=\:-2\quad\Rightarrow\quad \boxed{x \:=\:\tfrac{1}{4}}$ . . . which also satisfies the original equation
.And: . $\log_2x - 1\:=\:0\quad\Rightarrow\quad \log_2x \:=\:1\quad\Rightarrow\quad \boxed{x \:=\:2}$
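A quick numerical sanity check (added here, not from any poster) confirms both candidate values against the original equation:

```python
import math

def f(x):
    return math.log(4, x) - math.log2(x)   # log_x(4) - log_2(x)

for x in (0.25, 2):
    print(x, f(x))   # both print 1.0 (up to floating point), so x = 1/4 and x = 2 both satisfy the equation
```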
http://mathhelpforum.com/advanced-algebra/194126-orthonormal-vector-property.html | # Thread:
1. ## Orthonormal Vector Property
This problem comes from a second-semester course in linear algebra. We are currently covering Gram-Schmidt - but outside of the application problems, I'm a bit lost.
Problem: Suppose V is an inner product space and {v_1, ... , v_k} is an orthonormal set of vectors in V. Show that for all v in V, we have:
$\sum_{j=1}^k |\langle v,v_j\rangle|^2 \leq \|v\|^2$.
Furthermore, show that the equality holds if and only if v is in the span of {v_1, ... , v_k}.
Intuition: I can't picture the problem in anything larger than 2-space. If in two space, it seems plausible that a right triangle could be formed using the orthonormal set and v_j (which would be the hypotenuse). I'm not sure if this stems from a theorem, or if I should just be playing with the definitions in the summation/inequality.
2. ## Re: Orthonormal Vector Property
Originally Posted by jsndacruz
This problem comes from a second-semester course in linear algebra. We are currently covering Gram-Schmidt - but outside of the application problems, I'm a bit lost.
Problem: Suppose V is an inner product space and {v_1, ... , v_k} is an orthonormal set of vectors in V. Show that for all v in V, we have:
$\sum_{j=1}^k |\langle v,v_j\rangle|^2 \leq \|v\|^2$.
Furthermore, show that the equality holds if and only if v is in the span of {v_1, ... , v_k}.
Intuition: I can't picture the problem in anything larger than 2-space. If in two space, it seems plausible that a right triangle could be formed using the orthonormal set and v_j (which would be the hypotenuse). I'm not sure if this stems from a theorem, or if I should just be playing with the definitions in the summation/inequality.
Write $v=v'+v''$ where $v'\in {\rm{Span}}(v_1, \ldots, v_k)$ and $v''$ is in the orthogonal complement of ${\rm{Span}}(v_1, \ldots, v_k)$
CB
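To flesh out the hint a little (this elaboration is added here, not part of CB's post): take $v' = \sum_{j=1}^k \langle v, v_j\rangle v_j$ and $v'' = v - v'$. Since the $v_j$ are orthonormal, $\langle v'', v_j \rangle = 0$ for every $j$, so $v''$ is orthogonal to $v'$ and $\|v\|^2 = \|v'\|^2 + \|v''\|^2 \geq \|v'\|^2 = \sum_{j=1}^k |\langle v, v_j\rangle|^2$. Equality forces $\|v''\| = 0$, i.e. $v = v' \in {\rm{Span}}(v_1, \ldots, v_k)$; conversely, if $v$ lies in the span then $v'' = 0$ and equality holds.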
http://math.stackexchange.com/questions/137130/induction-as-peano-axiom/137305 | # Induction as Peano Axiom
Let $P$ be some proposition. If we have that $P(0)$ is true and that, whenever $P(n)$ is true, $P(S(n))$ is true (where $S(n)$ is the successor of the natural number $n$), then $P(n)$ is true for all natural numbers.
To my understanding, we need this axiom to eliminate formulations like $\{0, 0.5, 1, 1.5, 2, \ldots\}$ which otherwise fulfill the Peano axioms. That is, the induction axiom forces the natural numbers to all 'stem' from zero. So why don't we just edit the axiom that says 'no element has $0$ as its successor' to be '$0$ is the only element that isn't a successor to another element'?
Are these two formulations equivalent or am I confused? I'm not sure whether this works for $\mathbb{R} \setminus \mathbb{N}^+$, which also otherwise fulfills the Peano axioms, but I haven't gotten to real numbers yet.
-
3
You need induction to exclude systems like naturals + disjoint copy of integers, where only $0$ has no predecessor. – sdcvvc Apr 26 '12 at 6:01
Your first paragraph makes no sense (nor it "compiles") you are stating that If ... and (if ... then ...), you are missing then ... is true for all the numbers. – Asaf Karagila Apr 26 '12 at 6:54
1
The formulation you give is a perfectly fine model of the Peano Axioms, where the successor of $n$ is $n+\frac{1}{2}$. – Arturo Magidin Apr 26 '12 at 15:09
@Arturo: You are correct, of course, however one should remark that while addition in this model would coincide with the usual addition of those as real numbers, multiplication will not be the same since it should be that $S(0)\times S(0)=S(0)$. – Asaf Karagila Apr 26 '12 at 16:49
@Asaf: Good point. – Arturo Magidin Apr 26 '12 at 16:53
## 2 Answers
Imagine for example the following structure. It consists of the natural numbers, coloured blue, together with the integers, coloured red. The successor operation is the natural one.
If you want a more formal description, our structure $S$ is the union of the set of all ordered pairs $(a,0)$, where $a$ ranges over the natural numbers, and the set of all ordered pairs $(b,1)$, where $b$ ranges over the integers. If $x=(a,0)$, define $S(x)$ by $S(x)=(a+1,0)$, and if $x=(b,1)$, define $S(x)$ by $S(x)=(b+1,1)$.
This structure $S$ satisfies your axiom, but is quite different from the natural numbers.
There are much worse possibilities. In the above description of $S$, instead of using all pairs $(b,1)$ where $b$ ranges over the integers, we can use all pairs $(b,1)$, where $b$ ranges over the reals.
Remark: Because of the wording of the question, we addressed only the issue of order type, which is settled by the second-order version of the induction axiom, and is indeed equivalent to it. Order types of models provide only weak information about the structure of models of first-order Peano arithmetic.
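If it helps to experiment with this structure, here is a throwaway sketch (added here, not part of the answer) that encodes the blue copy of the naturals and the red copy of the integers as pairs and checks which elements are reachable from zero by finitely many applications of the successor map:

```python
def successor(x):
    value, colour = x           # colour 0: the natural-number (blue) copy, colour 1: the integer (red) copy
    return (value + 1, colour)

zero = (0, 0)
reachable = set()
x = zero
for _ in range(10_000):        # any finite number of successor steps
    reachable.add(x)
    x = successor(x)

print(all(colour == 0 for _, colour in reachable))  # True: only blue elements are ever reached
print((5, 1) in reachable)                          # False: the red element "5" is never reached from zero
```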
-
@Praslow D.: I read in Goldstern and Judah, The Incompleteness Phenomenon, that you still can't force everything to come from $0$. You wind up needing $\mathbb{Q}\times\mathbb{Z}$. You would like a postulate $\forall n\,(n\in \mathbb N)$ but that is second-order logic. – Ross Millikan Apr 26 '12 at 17:12
One way to think about it is that the induction axiom guarantees that you can 'reach' every natural number, starting from $0$, in finitely many successor steps. This is a bit stronger than just "everything stems from $0$", which, in principle, might allow for transfinite processes (such as we have with ordinals).
In the presence of the induction axiom, the single axiom
The successor function is a bijection from $\mathbb{N}$ to $\mathbb{N}-\{0\}$.
is equivalent to the first four Peano postulates, (that is, this axiom plus Induction is equivalent to the five Peano postulates); but this latter axiom is, by itself, stronger than the first four Peano axioms (which do not suffice to show that everything except $0$ is a successor, whereas it follows easily from the statement above).
Your set, $\{0,0.5,1,1.5,\ldots\}$ is a model for the Peano Axioms, if you define $S(n) = n+\frac{1}{2}$. (As Asaf points out in comments, with this definition, the "addition" in the set coincides with that of these numbers as reals, $n+0 = n$, and $n+S(m) = S(n+m)$; but multiplication does not coincide with multiplication of the numbers in $\mathbb{R}$). What you "really" want to eliminate are models such as the ones that André Nicolas mentions (which satisfies the single axiom I list above, hence the first four Peano Axioms, but not induction). Note that in that model, the set of elements you can 'reach' from $0$ in finitely many successor steps does not exhaust the set.
-
Very good comment, thank you. – Praslow D. Apr 27 '12 at 6:10
But to be clear, in the example of André Nicolas: While the set of elements you can reach from 0 in finitely many steps does not exhaust the set, neither does the set of elements you can reach from 0 in infinitely many steps. You'd go (0,0), (1,0), (2,0), ...., (n,0), ... Right? I suppose the finite steps is needed for $\mathbb{Q} \setminus \mathbb{N}^+$ where not every element is reachable in finite steps but can be reached in infinitely many steps. – Praslow D. Apr 27 '12 at 6:20
Sorry, make the set $\mathbb{Q} \setminus \mathbb{Q}^-$ – Praslow D. Apr 27 '12 at 6:26
@PraslowD.: "The set you can reach in infinitely many steps" is not necessarily a well-defined concept; the common "transfinite induction" is to show that if you can do it for "everything strictly smaller", then you can do it for the element in question. But in any case, the point is to exclude sets in which not every element can be reached from zero via successor in finitely many steps, whether they are well-ordered or not. – Arturo Magidin Apr 27 '12 at 14:13
http://mathoverflow.net/revisions/48590/list | ## Return to Answer
In descriptive set theory, we study properties of Polish spaces, typically not considered as topological spaces but rather we equip them with their "Borel structure", i.e., the collection of their Borel sets. Any two uncountable standard Borel Polish spaces are isomorphic, and the isomorphism map can be taken to be Borel. In practice, this means that for most properties we study it is irrelevant what specific Polish space we use as underlying "ambient space", it may be ${\mathbb R}$, or ${\mathbb N}^{\mathbb N}$, or $\ell^2$, etc, and we tend to think of all of them as "the reals".
In Lebesgue "Sur les fonctions representables analytiquement", J. de math. pures et appl. (1905), Lebesgue makes the mistake of thinking that projections of Borel subsets of the plane ${\mathbb R}^2$ are Borel. In a sense, this mistake created descriptive set theory.
Now we know, for example, that in ${\mathbb N}^{\mathbb N}$, projections of closed sets need not be Borel. Since we usually call reals the members of ${\mathbb N}^{\mathbb N}$,
it is not uncommon to think that projections of closed subsets of ${\mathbb R}^2$ are not necessarily Borel.
(This is false. Note that closed sets are countable unions of compact sets. The actual result in ${\mathbb R}$ is that projections of complements of projections of closed subsets of ${\mathbb R}^3$ are not Borel.)
http://unapologetic.wordpress.com/2010/12/11/young-tabloids/?like=1&source=post_flair&_wpnonce=5d5a17dad5 | The Unapologetic Mathematician
Young Tabloids
Close cousins to Young tableaux, Young tabloids give us another set on which our symmetric group will act.
We say that two Young tableaux are “row-equivalent” if they contain the same entries in the same rows. That is, if we start with a Young tableau and shuffle the entries in each row — but never send an entry from one row to another row — then the resulting tableau is row-equivalent to the one we started with. Any two row-equivalent tableaux are related in this way.
We define a Young tabloid to be a row-equivalence class of Young tableaux, and we write it by writing down any tableau in the class, but with horizontal bars through it. As an example, there are three Young tabloids of shape $(2,1)$:
$\displaystyle\begin{aligned}\begin{array}{cc}\cline{1-2}1&2\\\cline{1-2}3&\\\cline{1-1}\end{array}&=\left\{\begin{array}{cc}1&2\\3&\end{array},\begin{array}{cc}2&1\\3&\end{array}\right\}\\\begin{array}{cc}\cline{1-2}1&3\\\cline{1-2}2&\\\cline{1-1}\end{array}&=\left\{\begin{array}{cc}1&3\\2&\end{array},\begin{array}{cc}3&1\\2&\end{array}\right\}\\\begin{array}{cc}\cline{1-2}2&3\\\cline{1-2}1&\\\cline{1-1}\end{array}&=\left\{\begin{array}{cc}2&3\\1&\end{array},\begin{array}{cc}3&2\\1&\end{array}\right\}\end{aligned}$
If we have written the tableau abstractly as $t$, then the corresponding tabloid is $\{t\}$ — the equivalence class of $t$.
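For readers who like to see such things computed, here is a quick throwaway script (added here, not part of the post) that lists the tabloids of a given shape by forgetting the order within each row; for the shape $(2,1)$ it reproduces the three classes above.

```python
from itertools import permutations

def tabloids(shape):
    """Row-equivalence classes of fillings of a Young diagram of the given shape with 1..n."""
    n = sum(shape)
    classes = set()
    for filling in permutations(range(1, n + 1)):
        rows, start = [], 0
        for length in shape:
            rows.append(frozenset(filling[start:start + length]))  # forget the order within the row
            start += length
        classes.add(tuple(rows))
    return classes

for t in sorted(tabloids((2, 1)), key=lambda rows: [sorted(r) for r in rows]):
    print([sorted(row) for row in t])
# [[1, 2], [3]]
# [[1, 3], [2]]
# [[2, 3], [1]]
```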
10 Comments »
1. [...] introduced Young tableaux and Young tabloids. We’ve also said that they carry symmetric group actions, but we never really said what they [...]
Pingback by | December 13, 2010 | Reply
2. [...] that we have an action of on the Young tabloids of shape , we can consider the permutation representation that corresponds to it. Let’s [...]
Pingback by | December 14, 2010 | Reply
3. [...] corresponding to the partition . For a permutation , the character value is the number of Young tabloids such that . This might be a little difficult to count on its face, but let’s analyze it a [...]
Pingback by | December 15, 2010 | Reply
4. [...] were stymied. But at least we can calculate their dimensions. The dimension of is the number of Young tabloids of shape [...]
Pingback by | December 16, 2010 | Reply
5. [...] leave every entry in on the row where it started. Clearly, this is the stabilizer subgroup of the Young tabloid [...]
Pingback by | December 22, 2010 | Reply
6. [...] if its rows and columns are all increasing sequences. In this case, we also say that the Young tabloid and the polytabloid are [...]
Pingback by | January 5, 2011 | Reply
7. [...] notions of Ferrers diagrams and Young tableaux, and Young tabloids carry over right away to compositions. For instance, the Ferrers diagram of the composition [...]
Pingback by | January 6, 2011 | Reply
8. [...] is a Young tabloid with shape , we can define tabloids for each from to by letting be formed by the entries in [...]
Pingback by | January 10, 2011 | Reply
9. [...] Dominance Lemma for Tabloids If , and appears in a lower row than in the Young tabloid , then dominates . That is, swapping two entries of so as to move the lower number to a higher [...]
Pingback by | January 11, 2011 | Reply
10. [...] , and use it to build the “column tabloid” . This is defined just like our other tabloids, except by shuffling columns instead of [...]
Pingback by | January 20, 2011 | Reply
http://mathhelpforum.com/differential-equations/175782-laplaces-equation-question-involving-showing-value-centre-square.html | # Thread:
1. ## Laplace's Equation, a question involving showing the value at the centre of a square
Can anyone help me think of the three similar cases I need to examine? I was thinking $0<x<\pi/2$, $0<y<\pi/2$; then $0<x<\pi$, $0<y<\pi/2$; and $0<x<\pi/2$, $0<y<\pi$, with the same boundaries as those parts of the original square, but it doesn't really work for me. Any help would be greatly appreciated!
2. I think the idea is to have $T = 1$ on one side and $T = 0$ on the remaining three sides. The problem you state gives $T = 1$ on the top boundary. The remaining cases would be to have $T = 1$ on the bottom, right and left boundaries, respectively.
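In case it helps to see why those four cases combine nicely (a remark added here, not from the thread): by linearity, adding the four solutions gives the solution with $T = 1$ on the whole boundary, which is identically $1$. Since the four sub-problems are rotations of one another and the centre is fixed by those rotations, each sub-problem takes the same value at the centre, so each equals $\tfrac{1}{4}$ there. In particular, the value at the centre of the square in the original problem is $\tfrac{1}{4}$.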