## The Annals of Applied Probability
### Regular Variation in the Tail Behaviour of Solutions of Random Difference Equations
D. R. Grey
#### Abstract
Let $Q$ and $M$ be random variables with given joint distribution. Under some conditions on this joint distribution, there will be exactly one distribution for another random variable $R$, independent of $(Q,M)$, with the property that $Q + MR$ has the same distribution as $R$. When $M$ is nonnegative and satisfies some moment conditions, we give an improved proof that if the upper tail of the distribution of $Q$ is regularly varying, then the upper tail of the distribution of $R$ behaves similarly; this proof also yields a converse. We also give an application to random environment branching processes, and consider extensions to cases where $Q + MR$ is replaced by $\Psi(R)$ for random but nonlinear $\Psi$ and where $M$ may be negative.
#### Article information
Source
Ann. Appl. Probab., Volume 4, Number 1 (1994), 169-183.
Dates
First available in Project Euclid: 19 April 2007
Permanent link to this document
https://projecteuclid.org/euclid.aoap/1177005205
Digital Object Identifier
doi:10.1214/aoap/1177005205
Mathematical Reviews number (MathSciNet)
MR1258178
Zentralblatt MATH identifier
0802.60057
# Chapter 2 - Section 2.4 - Linear Functions - 2.4 Exercises - Page 211: 5
A
#### Work Step by Step
If the function passes through the origin, then both its x-intercept and y-intercept are 0. Therefore, we let $x=0$ and $f(x)=0$; substituting these values into the function must yield a true statement. Also, this is a linear function, so both x and y must be raised to the first power. The only such function is A. $f(x)=5x$: if $x=0$, then $f(x)=5\times0=0$.
# Creating Custom WordPress Administration Pages, Part 3

In this series, we've been looking at how to create custom administration pages in WordPress without the use of the Settings API. This isn't to say the Settings API isn't useful (because it is!), but there may be times when we need to implement some custom functionality or more specialized implementations of features that the available APIs don't afford.

Additionally, we're looking at some of the most important software development principles, such as the single responsibility principle, and applying them to our work.

If you're just now joining the series, I recommend reading the previous posts so that you're familiar with what we've done up to this point and so you can understand why we're making some of the decisions that we're making when writing our code.

## A Quick Review

Though I can't summarize everything we've covered thus far in the series, I can make sure that I highlight the important points.

• We've introduced the core plugin and added a submenu item and options page for the plugin in the WordPress dashboard.
• We've discussed the single responsibility principle and the role it plays in our development.
• A single input element has been added that will accept users' input.
• We've added a nonce value to the page, but we haven't actually done anything with it.

With all of that said, I'm assuming that you have the latest version of the source code (which is available as an attachment in the previous article) and you're ready to move forward.

## Before We Start

As with the other articles, I assume that you have a local WordPress development environment set up on your machine. Furthermore, I assume that you have the latest version of the source code and you're ready to continue building on top of it, or you're comfortable reading through the code that we have here and implementing it when you have more time.

Finally, we'll be stepping through each bit of code incrementally. First, I'll talk about what we're going to do, then I'll show the code, and then I'll explain whatever it is that the code is doing so there's nothing left that could be confusing.

If, however, you find yourself confused about anything in the code and the tutorial doesn't do a good job of explaining what's going on, then please leave a comment and I'll be sure to follow up with you. Let's get started.

## Implementing New Features

In the last article, we left off with a plugin that looks as if it does something but doesn't actually save anything to the database, let alone retrieve anything from the database. In short, we have a plugin that looks functional but isn't.

And that's where we're going to pick up with this tutorial. Specifically, we're going to be tackling the following topics:

• We're going to verify the nonce value that we created and defined in the previous tutorial to gain an understanding as to how one component of WordPress security works.
• We'll verify that the existing user has permission to actually submit the information (and prevent them from doing so, if they don't).
• If the submission is secure and the user has permission, we'll then sanitize the information to make sure no malicious content gets into the database.

With that as our roadmap, we're ready to jump back into the code and continue working on the plugin.
### Security

Recall that in the previous post we took advantage of the WordPress API function wp_nonce_field. This particular function does the following:

The nonce field is used to validate that the contents of the form request came from the current site and not somewhere else. A nonce does not offer absolute protection, but should protect against most cases. It is very important to use nonce fields in forms.

If you attempt to save the options page, you will likely be presented with a white screen. That's never good, but it's what we expect given the current state of our plugin. We need to introduce a function that will hook into one of the available WordPress hooks and check if the nonce value is valid. If it is valid, then it will let us proceed with saving the information; otherwise, it should not let us proceed.

Since we're in the business of creating a custom administration page, we're going to need a different hook than what we may be used to using in situations like this. In this example, we're going to use the admin_post hook:

Fires on an authenticated admin post request where no action was supplied.

Recall from our previous discussions, though, that we don't want to overload our classes with more responsibility than necessary. Remember, the question that we must constantly ask ourselves is: "What reason would this class have to change?" Right now, we don't have a class that can manage saving options. So let's introduce one.

In the admin directory of the plugin, let's create a Serializer class. This will be responsible for saving the value of our options. As you can see, I've named my file class-serializer.php. We know from experience and from the code above that it's going to need to hook into the admin_post hook mentioned above, and we know that we're going to need a function responsible for saving the information. Let's define those now (a sketch appears at the end of this section).

Obviously, there's still work to do (in fact, we haven't even instantiated this class!), but the above could be enough to see where we're heading.

#### A Quick Conversation About Dependencies

Before we add any functionality, let's go ahead and set this up when our plugin first loads. First, return to custom-admin-settings.php. Now, at this point, we need to ask ourselves if any of our existing classes should have the Serializer as a dependency. I think a case can be made that the Submenu_Page should have a reference to the serializer, since the page has the options to save.

Alternatively, it's also possible to leave this file completely separate and have it available for another pattern. If we were to do that, we'd be diverging from the topic at hand. Although I think it's important, it's outside the scope of what we're aiming to do.

So let's instantiate the Serializer class, initialize it, and then pass it into the constructor of the submenu page (the bootstrap wiring is included in the sketch below).

With that, we're ready to continue saving our options.

#### Back to Development

Let's return to the Serializer. Now that we've got it wired up to the rest of the plugin, it's time to actually write some code. So, as the comment suggests, let's verify the nonce value that we've created on the front end. Luckily, WordPress makes this easy through a built-in API function: wp_verify_nonce. This function accepts two arguments: the nonce value itself (read from $_POST using the name we gave the field) and the action.

If you recall from the previous article, we used acme-settings-save as our action and acme-custom-message as our nonce name.
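Since the original code listings were not preserved in this extract, here is a minimal sketch of the two pieces described above: the Serializer class skeleton and the bootstrap wiring. The hook, action, and class names come from the series; the exact structure is an assumption.

```php
<?php
// class-serializer.php -- illustrative sketch, not the original listing.

class Serializer {

    // Register the save handler with WordPress.
    public function init() {
        // admin_post fires on an authenticated admin POST request.
        add_action( 'admin_post', array( $this, 'save' ) );
    }

    // Responsible for validating and saving the submitted option.
    public function save() {
        // Verification, sanitization, and redirection are added
        // step by step over the rest of this tutorial.
    }
}
```

And in the plugin's bootstrap file, custom-admin-settings.php (the Submenu class name is assumed from the series' structure):

```php
<?php
// custom-admin-settings.php -- sketch of the wiring described above.

$serializer = new Serializer();
$serializer->init();

$plugin = new Submenu( new Submenu_Page( $serializer ) );
$plugin->init();
```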
To validate it, we need to check that it exists in the $_POST collection and that it passes WordPress's native checks.
To do this, I like to create a private method that allows me to encapsulate this logic into a function that I can use in the save function we've defined above.
Once done, I can incorporate a call to this function that will allow us to check the validity of the submission and either exit from the routine or proceed to the next check (which we'll get to momentarily).
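A minimal sketch of that private method and the call to it, assuming the field name and action from above (wp_verify_nonce and the $_POST check are the WordPress-native parts; the method name is illustrative):

```php
// Sketch: returns a truthy value when the nonce checks out.
private function has_valid_nonce() {

    // If the nonce field never made it into the request, fail fast.
    if ( ! isset( $_POST['acme-custom-message'] ) ) {
        return false;
    }

    $field  = wp_unslash( $_POST['acme-custom-message'] );
    $action = 'acme-settings-save';

    // wp_verify_nonce returns 1 or 2 on success, false on failure.
    return wp_verify_nonce( $field, $action );
}
```

Then, at the top of save():

```php
if ( ! $this->has_valid_nonce() ) {
    // Exit from the routine if the nonce doesn't check out.
    return;
}
```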
Note that simply returning false in this conditional is not a suitable way to handle this. Instead, it would be cleaner to introduce an error message that displays on the WordPress dashboard. This is something that we'll be revisiting in a future tutorial.
For now, though, we're primarily concerned with making sure that we're able to submit data successfully. This brings us to the next portion of our code.
### Permission
Even though the number used once (or the nonce) validation checked out, there's still one more thing we need to check: we need to make sure the current user has permission to save the data.
For our purposes, we want to make sure the current user is an administrator. To do this, we can look at the capabilities of the current user (the WordPress Roles and Capabilities reference lists each role and its associated capabilities).
Notice that one of the capabilities of the administrator is to manage options. We can now use the WordPress API function current_user_can to check whether the current user can save the options on this page.
But first, this raises a question: If the user can't save options, why should they actually be allowed to see the page in the first place?
If you recall from earlier in the series, we wrote the following bit of code:
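The original snippet isn't preserved in this extract; this sketch shows the shape of such a registration call, with placeholder labels and slug. The key point is the 'manage_options' capability argument:

```php
// Sketch only: labels and slug are placeholders, not the series' originals.
add_submenu_page(
    'options-general.php',        // parent slug: the Settings menu
    'Custom Admin Page',          // page title (placeholder)
    'Custom Admin Page',          // menu title (placeholder)
    'manage_options',             // capability: administrators only
    'custom-admin-page',          // menu slug (placeholder)
    array( $this, 'render' )      // rendering callback (placeholder)
);
```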
This ensures that the options page is only available to administrators; however, we want to be extra careful and place a check for this during our serialization process, as well.
Now we can update the conditional where we're checking the nonce value to also check the current user's permission:
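A sketch of the combined check (method name as assumed earlier):

```php
// Sketch: proceed only when both the nonce and the capability check out.
if ( ! ( $this->has_valid_nonce() && current_user_can( 'manage_options' ) ) ) {
    // TODO: display an error message; we'll revisit this in a later tutorial.
    return;
}
```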
Now that we have code in place to make sure the nonce value is set and that the current user can save the value, we can move forward with sanitization.
Remember, we will return to the place where it says we need to display an error message, but not in this tutorial.
### Sanitization
"But wait," you say. "I thought we were getting ready to save the option!" We are, but before we can do that we have to go through a process of sanitization. In short, sanitization is the idea of making sure the data is clean, safe, and, ahem, sanitary for the database.
Simply put, it prevents malicious users from inserting information into the database that could ultimately negatively affect our site.
Thankfully, WordPress provides a nice helper function that allows us to make sure this is as easy as possible. For those who are interested, you can read all about validation and sanitizing data (though we'll be looking at validation in the next tutorial).
In our code, we're going to be using sanitize_text_field (as linked above). This function will do the following:
• Checks for invalid UTF-8
• Converts single < characters to entities
• Strips all tags
• Removes line breaks, tabs, and extra whitespace
• Strips octets
Pretty nice to have this available, isn't it? Let's put this to work. To do so, locate the save function that we've been working with and update it so that it looks like this:
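Since the updated listing didn't survive extraction, here is a sketch of what the save function might look like at this point. The input's name attribute ('acme-message') is an assumption; the option key tutsplus-custom-data is the one used in this article (see the explanation that follows):

```php
public function save() {

    if ( ! ( $this->has_valid_nonce() && current_user_can( 'manage_options' ) ) ) {
        // TODO: display an error message; revisited in a later tutorial.
        return;
    }

    // Sanitize the incoming value before it ever touches the database.
    // The input's name attribute ('acme-message') is assumed.
    $message = sanitize_text_field( $_POST['acme-message'] );

    // Persist it under a uniquely prefixed key.
    update_option( 'tutsplus-custom-data', $message );
}
```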
Notice that we're reading the input from the $_POST collection, sanitizing it, and then saving the result in a separate variable. Next, that variable is written to the database using the update_option function. For this article, I'm opting to use the key tutsplus-custom-data. Whatever you use, it's important that it's prefixed with something unique so that another plugin or theme doesn't overwrite the option and you don't overwrite an existing option.

Finally, we need to redirect back to the options page. Since we're not using a built-in API, we'll need to write a function to do this for us. Luckily, it's very easy. First, create a function called redirect, and make sure it looks like the sketch that follows the numbered list below.

The code should be self-explanatory, but to make sure it's clear, it's doing the following:

1. It checks to make sure a private WordPress value is present in the $_POST collection. If it's not set, then it will set it equal to the WordPress login URL. This will force people to the login page if the referral URL is not set; however, there's no reason why it shouldn't be.
2. Next, we take the referrer and sanitize the data. This is something that the coding standards call for, and it makes sure that the data is clean.
3. Finally, we initialize a wp_safe_redirect to the URL so that we are returned to the options page.
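A sketch of the redirect function described above (_wp_http_referer is the private WordPress value referenced in step 1; the structure is assumed):

```php
// Sketch: send the user back to the options page after saving.
private function redirect() {

    // 1. Fall back to the login URL if the referrer is somehow missing.
    if ( ! isset( $_POST['_wp_http_referer'] ) ) {
        $_POST['_wp_http_referer'] = wp_login_url();
    }

    // 2. Sanitize the referrer value, per the coding standards.
    $url = sanitize_text_field( wp_unslash( $_POST['_wp_http_referer'] ) );

    // 3. Initiate a safe redirect back to that URL.
    wp_safe_redirect( urldecode( $url ) );
    exit;
}
```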
Once all this is done, add a call to redirect as the last line in the save function above. The final version of the code should look like this:
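Putting it all together, the final save function might look like this (again a sketch; the acme-message input name is assumed):

```php
public function save() {

    if ( ! ( $this->has_valid_nonce() && current_user_can( 'manage_options' ) ) ) {
        // TODO: display an error message; revisited in a later tutorial.
        return;
    }

    $message = sanitize_text_field( $_POST['acme-message'] );
    update_option( 'tutsplus-custom-data', $message );

    $this->redirect();
}
```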
Here's the thing: We've got security, sanitization, serialization, and redirection in place. But we're not showing error messages, and we aren't retrieving the data.
That's where we will pick up with the next tutorial.
## Conclusion
At this point, we've got a semi-functional plugin, but there's still more work to do. Obviously, the information that we're submitting to the database isn't displayed anywhere, and that's not a good thing.
But just as with saving information, there are important things to consider when retrieving information. In the next tutorial, we'll look at retrieving the information, displaying it on the front-end, displaying it on the options page, and also updating the information as a user changes the value of the input element.
In the meantime, if you're looking for other utilities to help you build out your growing set of tools for WordPress or for code to study and become more well-versed in WordPress, don't forget to see what we have available in Envato Market.
Remember, you can catch all of my courses and tutorials on my profile page, and you can follow me on my blog and/or Twitter at @tommcfarlin where I talk about various software development practices and how we can employ them in WordPress.
Finally, don't hesitate to leave any questions or comments in the feed below. I do my best to participate and answer every question or critique you offer as it relates to this project.
# bin() in Python
The bin() function converts an integer to its binary string representation. You can pass a positive or negative integer as the argument to be converted.
## Syntax
Below is the syntax of the function.
bin(n)
Parameters: an integer to convert.
Return value: the binary string representation of the integer, prefixed with 0b.
Exceptions: raises TypeError when a float value is passed as the argument.
In the below example we convert a positive and a negative integer to binary. The results come out with a prefix of 0b to indicate that the number is a binary representation.
## Example
n = input("Enter an integer :")  # read user input as a string
dec_number = int(n)              # convert the string to an integer
bin_number = bin(dec_number)     # e.g. 23 -> '0b10111'
print(bin_number)
## Output
Running the above code gives us the following result −
Enter an integer :23
0b10111
Enter an integer :-31
-0b11111
If we do not want the 0b prefix in front of the converted number, we can slice the string to remove the first two characters (for example bin_number[2:]); alternatively, format(dec_number, 'b') produces the binary string without the prefix.
## Example
n = input("Enter an integer :")
dec_number = int(n)
bin_number = bin(dec_number)     # '0b1101' for input 13
print(type(bin_number))          # bin() returns a str
x = bin_number[2:]               # strip the '0b' prefix
print(x)
## Output
Running the above code gives us the following result −
Enter an integer :13
<class 'str'>
1101
Published on 07-Aug-2019 11:27:15
### Sukarna_Paul's blog
By Sukarna_Paul, history, 6 days ago
This problem is from ACM ICPC Dhaka Regional 2018 (Contest Link)
How can I approach this problem?
Problem Statement:
The function d(n) denotes the number of positive divisors of an integer n. For example d(24) = 8, because there are 8 divisors of 24 and they are 1, 2, 3, 4, 6, 8, 12 and 24. The function sndd(n) is a new function defined for this problem. This denotes “The summation of number of divisors of the divisors” of an integer n. For example,
sndd(24) = d(1) + d(2) + d(3) + d(4) + d(6) + d(8) + d(12) + d(24) = 1 + 2 + 2 + 3 + 4 + 4 + 6 + 8 = 30.
Given the value of n, you will have to find sndd(n!), where n! means the factorial of n. So n! = 1 × 2 × 3 × … × n.
Input
The input contains at most 1000 lines of test cases. Each line contains a single integer that denotes a value of n (1 ≤ n ≤ 10^6). Input is terminated by a line containing a single zero.
Output
For each line of input produce one line of output. This line contains an integer that denotes the value of sndd(n!) modulo 10000007 (i.e., 10^7 + 7).
Sample Input
4
5
0
Sample Output
30
90
By Sukarna_Paul, history, 2 months ago
How can I approach this geometry problem? Here is the link: URI 1291
By Sukarna_Paul, history, 2 months ago
How can I approach this problem? Problem Link
By Sukarna_Paul, history, 3 months ago
How can I approach this number theory problem? [Problem Link](https://codeforces.com/contest/1070/problem/A)
A. Find a Number
time limit per test: 3 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output
You are given two positive integers d and s. Find minimal positive integer n which is divisible by d and has sum of digits equal to s.
Input
The first line contains two positive integers d and s (1 ≤ d ≤ 500, 1 ≤ s ≤ 5000) separated by a space.
Output
Print the required number or -1 if it doesn't exist.
Examples
input
13 50
output
699998
input
61 2
output
1000000000000000000000000000001
input
15 50
output
-1
By Sukarna_Paul, history, 3 months ago
How can I approach this problem?
The number of divisors function, d(n), is a very interesting function in number theory. It denotes the number of positive divisors of a particular number. For example d(24) = 8, as 24 has eight divisors: 1, 2, 3, 4, 6, 8, 12 and 24. In mathematics the factorial of a positive integer n is written as n! and is defined as: n! = 1 × 2 × 3 × · · · × n
Another interesting function AF(n) (Again factorial in short) is defined as: AF(n) = 1! × 2! × 3! × . . . × n!
Given n , your job is to find the value of d(AF(n)).
Input
The input file contains at most 101 lines of input. Each line contains an integer n (0 < n < 5000001). Input is terminated by a line containing a single zero. This value should not be processed.
Output
For each line of input produce one line of output. This line contains the value of d(AF(n)) modulo 100000007 (10^8 + 7).
Sample Input
1
2
3
4
100
0
Sample Output
1
2
6
18
59417661
By Sukarna_Paul, history, 6 months ago
Some Big Integer problems for beginners

Solving BigInteger problems is fun. There are many problem solvers around the world who come to use the Java BigInteger class even though they do not usually code in Java. Here are some simple problems with their links and solutions. These problems are chosen from different online judges. Try them yourself before you look at the solutions.
Codeforces Gym : 112
# Solution
import java.util.Scanner ;
import java.math.BigInteger;
public class Main{
public static void main(String[] args){
int a,b;
Scanner input = new Scanner(System.in);
a = input.nextInt();
b = input.nextInt();
BigInteger A = BigInteger.valueOf(a);
BigInteger B = BigInteger.valueOf(b);
System.out.printf("%d", A.pow(b).subtract(B.pow(a))); // prints a^b - b^a
}
}
Uva 10183 — How Many Fibs?
# Solution
import java.util.*;
import java.math.*;
public class Main{
public static void main(String[] args){
Scanner input = new Scanner(System.in);
BigInteger arr[] = new BigInteger[50000];
arr[0]= BigInteger.valueOf(1);
arr[1]= BigInteger.valueOf(2);
for(int i=2;i<50000;i++){
arr[i] = arr[i-1].add(arr[i-2]); // each term is the sum of the previous two
}
while(true){
BigInteger a,b;
a = input.nextBigInteger();
b = input.nextBigInteger();
if(b.compareTo(BigInteger.valueOf(0))==0){
break;
}
int count=0;
for(int i=0;i<50000;i++){
if(arr[i].compareTo(a)>=0 && arr[i].compareTo(b)<=0){
count++;
}
if(arr[i].compareTo(b)>0){
break;
}
}
System.out.println(count);
}
}
}
URI 1279
# Solution
import java.util.*;
import java.math.*;
import java.io.*;
public class Main{
public static void main(String[] args){
Scanner input = new Scanner(System.in);
boolean start = true;
while(input.hasNext()){
boolean leap_year = false;
boolean ordinary = true;
if(start == false){
System.out.print("\n");
}
start = false;
BigInteger year;
year = input.nextBigInteger();
BigInteger four;
four = BigInteger.valueOf(4);
BigInteger fourh;
fourh = BigInteger.valueOf(400);
BigInteger oneh;
oneh = BigInteger.valueOf(100);
BigInteger temp = year.remainder(four);
if(temp.compareTo(BigInteger.valueOf(0))==0){
temp = year.remainder(oneh);
if(temp.compareTo(BigInteger.valueOf(0))==0){
temp = year.remainder(fourh);
if(temp.compareTo(BigInteger.valueOf(0))==0){
System.out.println("This is leap year.");
leap_year = true;
ordinary = false;
}
}
else{
System.out.println("This is leap year.");
leap_year = true;
ordinary = false;
}
}
temp = year.remainder(BigInteger.valueOf(15));
if(temp.compareTo(BigInteger.valueOf(0))==0){
System.out.println("This is huluculu festival year.");
ordinary = false;
}
if(leap_year==true){
temp = year.remainder(BigInteger.valueOf(55));
if(temp.compareTo(BigInteger.valueOf(0))==0){
System.out.println("This is bulukulu festival year.");
ordinary = false;
}
}
if(ordinary == true){
System.out.println("This is an ordinary year.");
}
}
}
}
Project Euler Problem 13
# Solution
import java.util.*;
import java.math.*;
public class problem13{
public static void main(String[] args){
Scanner input = new Scanner(System.in);
BigInteger sum, a;
sum = BigInteger.valueOf(0);
for(int i=0;i<100;i++){
a = input.nextBigInteger();
sum = sum.add(a); // accumulate the running total
}
System.out.println(sum); //take the first 10 digits manually
}
}
Project Euler Problem 15
# Solution
import java.math.*;
public class Problem15{
public static void main(String[] args){
BigInteger n=BigInteger.valueOf(1);
BigInteger ans=BigInteger.valueOf(1);
for(int i=0;i<40;i++){
ans=ans.multiply(n); // after the loop, ans = 40!
n=n.add(BigInteger.ONE);
}
n=BigInteger.valueOf(1);
BigInteger ans2=BigInteger.valueOf(1);
for(int i=0;i<20;i++){
ans2=ans2.multiply(n); // after the loop, ans2 = 20!
n=n.add(BigInteger.ONE);
}
System.out.println(ans.divide(ans2.multiply(ans2)));
}
}
Project Euler Problem 16
# Solution
import java.math.*;
public class Problem16{
public static void main(String[] args){
//BigInteger n=BigInteger.valueOf(1000);
BigInteger two = BigInteger.valueOf(2);
BigInteger ans = two.pow(1000);
String s = ""+ans;
System.out.println(s);
int sum=0;
for(int i=0;i<s.length();i++){
sum+=s.charAt(i)-'0';
}
System.out.println(sum);
}
}
Thank you. Happy coding.
# MATH 595: Longest Common Subsequences
This is a graduate research seminar, so there will not be any official homework or assessments. This page will simply be used for posting relevant materials.
# Tag Info
20
In the comments to the question, I notice something which might be an error, or at least is an incomplete response. It is pointed out in the comments that there exist nonisomorphic groups with isomorphic subgroup lattices. While true, that fact doesn't answer this question, since it is possible to have isomorphic subgroup lattices and nonisomorphic subgroup ...
11
For large primes $p$, there are uncountably many non-isomorphic Tarski monsters of exponent $p$. For these groups $G$, the subgroup lattice consists of basically a partition of $G\smallsetminus\{1\}$ into countably many subsets of cardinality $p-1$ (so the subgroups are the whole group, $\{1\}$, and the union of $\{1\}$ with any component of the partition). ...
11
The answer is no. Let $\kappa$ be any infinite cardinal, regular or singular, and assume for a contradiction that there is a set $E\subseteq\mathcal P(\kappa)$ satisfying your conditions. I will call the elements of $\kappa$ points and the elements of $E$ lines. The lines do not all go through one point: Given a point $\alpha$, choose a point $\beta\ne\alpha$...
10
The answer is no. A space is called resolvable if it contains two disjoint dense subspaces. Clearly $X$ is resolvable if and only if $\chi(X)=2$. Let's prove by induction on $n \geq 2$ that if $\chi(X) \leq n$ then $X$ is resolvable (and hence $\chi(X)=2$). The base case $n=2$ is clear, so suppose there is a coloring $f:X \to n+1$. Let $V$ be the union of ...
9
Update. Here is a new simpler answer that works for all regular $\kappa$, including $\kappa=\omega$. And I have omitted the use of Fodor's lemma, using instead merely the pigeon-hole principle. Suppose that $\kappa$ is infinite and we have a projective plane on $\kappa$ many points, with all lines of size less than $\kappa$. Since there are $\kappa$ many ...
9
Partition $\omega$ into three infinite subsets $A_0,A_1,A_2$. Let $S$ consist of the subsets which intersect precisely two of the $A_i$ in infinitely many elements. It can obviously be $3$-colored. Suppose there was a $2$-coloring, with color classes $c_0,c_1$. Then either $c_0$ or $c_1$ contains infinitely many elements of some two of $A_0,A_1,A_2$, say $c_0\...
9
It can be $O(n^{\frac32})$ for $a\ge 1$ if the sets $A_i$ correspond to the $p^2$ points of a smooth surface in an appropriate surface in a 3-dimensional space over $\mathbb F_p$ and your points are the $p^3$ general position planes, with $p^2$ planes through each point. There are no 3 collinear points if the surface is chosen appropriately, so for any 3 ...
9
It is continuum. The coloring with continuum many colors is clear (all points may have different color). Assume that we have $\kappa<c$ colors. Consider the Cantor set $K$. All its subsets are Lebesgue measurable. If some color contains uncountably many points from $K$, it constitutes a monochromatic edge. So, each color contains at most $\omega$ points ...
8
Yes, your conjecture is true. Suppose otherwise. Then there exists a counterexample $f : \mathcal{P}(8) \rightarrow \{0, 1\}$. For each set $X \in \mathcal{P}(8)$, let the proposition $P_X$ denote $f(X) = 1$. There are $5440$ different choices of the tuple $(A, B, C) \in \mathcal{P}(8)^3$ satisfying your constraints. For each such tuple, we obtain two ...
8
As explained in a previous MO question, there is no unique generalization of the eigenvalue of an $n\times n$ matrix to an $n\times n\times n$ tensor. One approach is to construct a higher-order singular value decomposition. This has been worked out for the specific case of $2\times 2\times 2$ tensors by Ana Rovi in a M.Sc. thesis. Even for this simple case, ...
7
An elementary counting argument shows that a $2$-$(v,3,3)$ exists only if $v$ is odd (or, more precisely, for $\lambda \equiv 3 \pmod{6}$ a $2$-$(v,3,\lambda)$ exists only if $v \equiv 1 \pmod{2}$). This necessary condition is sufficient. Arguably the simplest direct construction for the case $\lambda = 3$ that covers all odd $v$ is to use commutative ...
7
The following paper of Alon shows that the quantity you're after, $m(k)$, the minimum number of edges of a $3$-uniform hypergraph which is not $k$-colourable, is indeed $\asymp k^3$. More precisely, he shows that
$$2\left\lceil \frac{k}{3}\right\rceil \left\lfloor \frac{2k}{3}\right\rfloor^2 < m(k) \leq \binom{2k+1}{3}$$
where the implied constants ...
7
Others have already answered, but I think the following counting argument is worth pointing out: there are $2^{2^{\aleph_0}}$ hypergraphs on $\omega$ (since a hypergraph on $\omega$ is just a collection of nonempty subsets of $\omega$), each isomorphism class contains at most $2^{\aleph_0}$ elements (since there are that many permutations of $\omega$), so ...
7
This is essentially done by the Bernstein set construction: if one has $\kappa$ many sets each of size $\kappa$, then order them into ordinal $\kappa$ and recursively choose 2 points from each, so that all these points are distinct. That is, we have $x_\alpha,y_\alpha\in A_\alpha$ with all $x_\alpha,y_\alpha$ distinct. At the end, color each $x_\alpha$ red, ...
7
No, there isn't. This is essentially the dual version of the De Bruijn–Erdős theorem if the elements of $\mathcal C$ are the points, and the elements from $\{1,\ldots,n\}$ are the lines. The original proof is here.
6
The case $s=1$ is the Erdős hypergraph matching conjecture from Paul Erdős (1965). A problem on independent $r$-tuples. Ann. Univ. Sci. Budapest. Eötvös Sect. Math. 8 (1965), 93–95. users.renyi.hu/~p_erdos/1965-01.pdf A recent paper about it is Peter Frankl (2017) Proof of the Erdős matching conjecture in a new range, Israel Journal of Mathematics 222(1), pp ...
6
The intersecting family in your example has $\binom{n-1}{\lfloor\frac{n-1}{2}\rfloor}$ members by Sperner's theorem. An example that achieves a larger value would be to take all the subsets of $[n]$ that have $1+\lfloor \frac{n}{2}\rfloor$ elements. This is best possible. Milner proved that the largest size of a $k$-intersecting antichain in $\mathcal P([n])\...
6
As @Keith Kearnes says, the negative answer ought to be somewhere in Roland Schmidt's book. Unless I'm mistaken, it suffices to find two non isomorphic groups with isomorphic coset lattices. Indeed, the elements of $G$ correspond directly with the cosets of the trivial subgroup. By applying the regular group action, you can take any such element (or 1-...
6
Observe that $|\mathcal L|\leq\lambda$, since mapping $k$ to the pair of its two smallest elements gives an injection $\mathcal L\to\lambda^2$. Enumerate elements of $\mathcal L$ as $k_\alpha,\alpha<\lambda$. Then we can define by transfinite recursion $f(k_\alpha)$ to be the least element of $k_\alpha$ distinct from $f(k_\beta),\beta<\alpha$.
6
In general no. Partition the vertices into $n/k$ subsets (I call them classes) of size $k$, where $k$ grows as $n^{2/3}$. Take into your hypergraph all 4-edges with the vertices in the same class. It has about $(n/k)k^4=nk^3\sim n^3$ edges, but each independent set contains at most $3$ vertices from each class, thus $O(n^{1/3})$ vertices. Well, your graph ...
6
This is Baranyai's theorem. Other than in Baranyai's original paper you can also find a cool proof in the article "Uniform hypergraphs" by Brouwer and Schrijver which uses max-flow min-cut.
6
I will turn my comment above into a self-contained answer. Given a hypergraph $H=(V,E)$ and $X \subseteq V$, we say that $X$ is shattered if for all $X' \subseteq X$, there exists $e \in E$ such that $e \cap X=X'$. The VC dimension of $H$ is the size of a largest shattered set. Given a finite dimensional vector space $\mathbb V$, let $H$ be the hypergraph ...
6
The cardinals $\bf k_n$ ($2\le n\lt\omega$) are all equal. Lemma. Let $\kappa$ be an infinite cardinal. Given a set $A\subseteq[\omega]^\omega$ with $|A|=\kappa$ and $\chi(\omega,A)\gt n$, we can construct a set $B\subseteq[\omega]^\omega$ with $|B|=\kappa$ and $\chi(\omega,B)\gt n^2$. Proof. For each $a\in A$ choose a collection $B_a\subseteq[a]^\omega$ so ...
6
I will show, with some non-rigorous steps, that a bound of this form that is valid for arbitrary tensors and useful for sparse tensors (fewer than $n^{k/2}$ nonvanishing entries) does not exist. First note that there is a problem with using the $\ell^2$ norm to define the spectral norm for very sparse tensors $A$, regardless of how you try to bound it. The ...
6
Let $m$ be chosen later, and let $A_1, A_2, \dots, A_n$ be independently chosen random subsets of $\{1,2,\dots m\}$, each having size $n$. For a fixed $a+1$-tuple $(x_1, x_2, \dots, x_{a+1})$ of distinct elements from $\{1,\dots,m\}$, and a fixed triple $(i,j,k)$, the probability that $\{x_1, \dots, x_a\} \subseteq A_i \cap A_j \cap A_k$ is at most $\left(\... 5 Here is my intuition that it may not be possible. I am guessing that as in the case of the original LLL, such an inequality would in turn imply a simpler inequality of the following form: "If the dependency graph is$d$-degenerate and every event has probability at most$p$and$4pd<1$, then we can avoid all events". But this latter statement appears ... 5 Here is another way of thinking about the problem. Suppose for simplicity that your hypergraph$\mathcal{H}$has exactly$|V(\mathcal{H})|$hyperedges (as was mentioned by Dominic, we can immediately exclude any hypergraph with more hyperedges). Let$G'$be a bipartite graph where both parts are of size$|V(\mathcal{H})|$. We associate the left side of$G'$... 5 Yes. Let$S$be a family of finite subsets of some linearly ordered set$L.$Suppose that each member of$S$has at least two elements, and that no two members of$S$form a "globally ordered pair". Then we can color every element of$L$red or blue so that, for each$X\in S,$the top element of$X$is red and the bottom element of$X$is blue. 5 This is equivalent reformulation of Erdös-Faber-Lovász conjecture, see Wikipedia page about it. https://en.m.wikipedia.org/wiki/Erd%C5%91s%E2%80%93Faber%E2%80%93Lov%C3%A1sz_conjecture 5 Yes, and we do not even use the restrictions on mutual intersections of edges. For weak colorings, we may replace each edge with at least 2 vertices to an edge with exactly 2 vertices (possibly we get the same edge several times). It remains to properly color a graph with$n\geqslant 2$edges with$n\$ colors. This is done by induction: color any vertex of ...
tutorials
Building your own Linux OS from scratch is no dark magic, believe me! As long as you feel comfortable using a command line, it isn't such a daunting task, only requiring a fair amount of patience.
We'll be setting up an environment, compiling the kernel, userspace tools, a root filesystem and then test booting it. I'll assume you already run Linux on a machine, or in a Virtual Machine. Let's dive right in!
Environment
We'll first install the programs necessary for building. All these tools should be available in the major Linux distributions, but I'll only give the commands to install them on Debian/Ubuntu and Alpine Linux.
To compile the kernel, we need:
debian$ sudo apt install build-essential xz-utils libncurses5-dev bison flex bc perl libelf-dev libssl-dev linux-headers-generic
alpine$ apk add alpine-sdk xz ncurses-dev bison flex bc perl libelf-dev openssl-dev linux-headers findutils
Kernel
First we have to fetch the source.
$ KERNEL_VERSION=linux-4.18.6
$ wget https://cdn.kernel
Quick start to HTTPS with Caddy
Caddy is an easy-to-use web server and reverse proxy. You can use it to enable HTTPS on your self-hosted app with little effort.
Now in the same folder, you will need to write a Caddyfile, which is just a text file. Open your text editor and paste this:
:2015 {
proxy / localhost:8080
}
Save it as Caddyfile without any file extension.
This will start a normal HTTP server at port 2015 and proxy all requests to your app at port 8080.
Now, in the terminal or command prompt, cd into the working directory and run Caddy. On Windows you would type caddy.exe and on macOS or Linux ./caddy
You can now visit localhost:2015 to check that everything works.
Next, you have to get a domain to point to your server. You can get free domains from freenom.tk, or use your dynamic DNS provider. You can test that your domain works by visiting your-domain.com:2015 over mobile
DIY solder reflow oven - last words
Part 1: Toaster
Part 2: Electronics
Part 3: Software
Part 4: Conclusion
So it has been a while and I've been using the solder reflow oven for a while now. For the past weeks, I've been experimenting with finer and finer pitch components and I've been having really good progress so far.
The reflow oven performs way better than what I expected as it has fairly even heating through the oven and can solder super fine pitch components. One such example is the edison or hirose 70 pin connector. Despite the 0.5mm pin pitch my reflow oven managed to perfectly reflow the connector countless times.
So having said that, I think there are still a few software bugs that plagu
DIY solder reflow oven - software
Part 1: Toaster
Part 2: Electronics
Part 3: Software
Part 4: Conclusion
In this penultimate installment to the reflow oven project, we will wrap up by quickly going through some basics of the final hardware assembly and the software.
Insulation
Insulation is an important concern when you are building a reflow oven. If poor attention is given to insulation, you might end up reflowing your control electronics along with the board being soldered. I implemented insulation by putting all the sensitive electronics into a cardboard box.
Power
Another important concern is the power for the Arduino and the processing board. I handled that by hacking a 5V 2A power supply, converting 240VAC to 5VDC inside the toaster. This 5VDC can then be used to power the electronics.
Software
The software is critical to this project. Every reflow oven goes through 3 main phases. These phase
DIY solder reflow oven - electronics
Part 1: Toaster
Part 2: Electronics
Part 3: Software
Part 4: Conclusion
In this second installment of the solder reflow oven series, I'll be going over the electronics that makes it work. The solder reflow oven can be split into 3 main sections.
Temperature Detection
The first section is temperature detection. Temperature detection at these high temperatures (around 250°C) requires the use of a thermocouple, because a thermistor doesn't work at high temperatures. The thermocouple I obtained was from SparkFun and was a type K thermocouple. As a typical microcontroller cannot directly read the output of this thermocouple, it is necessary to have an external circuit to process the thermocouple voltages. Fortunately, SparkFun also sells a MAX31855K breakout board, which I also grabbed.
Heating Element Control
The second section is the control method. To manipulate the temperature inside the oven, we need to toggle the power to the heating elements. For this,
DIY solder reflow oven - toaster
So this will be the first post in a series of me making my own solder reflow oven. As you all know, the number of PCB-related posts has been increasing steadily with EdiCopter and our custom quadcopter boards on the way. I thought this would be the perfect time to get ourselves a solder reflow oven and maybe start small-scale manufacture, so I went over to a nearby shop and grabbed the cheapest toaster money could buy.
For its price, this toaster's not bad. It comes equipped with a bimetallic strip based thermostat design which is pretty commonplace in toasters these days. When the strip heats up to the right temperature, it bends away from the contact and disconnects the power to the heating element. Pretty neat low-cost mechanism.
It has a nice big cavity inside which will work wonderfully with our extra electronics inside. It looks like with the amount of space we have to work with, we could easily enclose everything within the frame itself making it look as stock as possible
PCB making in Singapore
PCB making in Singapore is a topic I've not seen many people talk about. Being one of the bustling cities of South East Asia, one would expect quality PCB making services to be available locally in Singapore. On the contrary, PCB making in my experience has not exactly been the honeymoon experience one would hope for. In this post, I will share my experience with making PCBs in Singapore and talk about the pros and cons of making PCBs locally.
Experience
I was trying to make a PCB for a school research project last year. More specifically, I was trying to make an Intel Edison based flight controller for a custom drone. Another post will detail the PCB itself. Anyway, because the Intel Edison connector (along with the normal SMD connectors) cannot be soldered by hand, I required assembly services along with the conventional fabrication requirements.
Because of this limitation, I needed to build it locally to avoid incurring shipping costs and the complications of sending the components overseas.
Internetworks
# Introduction#
The Internetworks library in NetSim supports various protocols across all the layers of the TCP/IP network stack. These include Ethernet, Address Resolution Protocol (ARP), Wireless LAN – 802.11 a / b / g / n / ac / p and e (EDCA), Internet Protocol (IP), Transmission Control Protocol (TCP), Virtual LAN (VLAN), User Datagram Protocol (UDP), and routing protocols such as Routing Information Protocol (RIP), Open Shortest Path First (OSPF) and Internet Group Management Protocol (IGMP).
An internetwork is generally a collection of two or more networks (typically LANs and WLANs) which are interconnected to form a larger network. All networks in an Internetwork have a unique network address. Routers interconnect different networks.
Users can use the following devices to design Internetworks: wireless node, wired node, switch, router, and access point (AP). Wired nodes (the term for computers, servers, etc.) connect via wired links to switches or routers, and wireless nodes connect via wireless links to Access Points (APs). Multiple links terminate at a switch/router, which enables connectivity between them. Many switches/routers are present in an internetwork to connect all the end-nodes. The end-nodes provide and consume useful information via applications like data, voice, video, etc.
Figure 1‑1: A typical Internetworks scenario in NetSim
Figure 1‑2: The Result dashboard and the Plots window shown in NetSim after completion of a simulation
# Simulation GUI#
Open NetSim and click New Simulation → Internetworks as shown in Figure 2‑1.
Figure 2‑1: NetSim Home Screen
## Create Scenario#
Internetworks come with a palette of various devices like L2 Switch, L3 Switch, Router, Wired Node, Wireless Node, and AP (Access Point).
## Devices specific to NetSim Internetworks Library#
• Wired node: A Wired node can be an end-node or a server. It is a 5-layer device that can be connected to a switch or router. It supports only 1 Ethernet interface and has its own IP and MAC addresses.
• Wireless node: A Wireless node can be an end-node or a server. It is a 5-layer wireless device that can be connected to an Access Point. It supports only 1 wireless interface and has its own IP and MAC addresses.
• L2 Switch: A switch is a layer-2 device that uses the devices' MAC addresses to make forwarding decisions. It does not have an IP address.
• Router: A router is a layer-3 device and supports a maximum of 24 interfaces, each of which has its own IP address.
• Access point: An access point (AP) is a layer-2 wireless device working per the 802.11 Wi-Fi protocol. It can be connected to wireless nodes via wireless links and to a router or a switch via a wired link.
Figure 2‑2: Internetworks Device Palette in GUI
### Click and drop into environment #
• Add a Wired Node or Wireless Node: In the toolbar, click the Node > Wired_Node icon (or) the Node > Wireless_Node icon, and place the device in the grid.
• Add a Router: In the toolbar, click on the Router icon and place the Router in the grid.
• Add an L2 Switch or L3 Switch: In the toolbar, click on the Switch > L2_Switch icon (or) the Switch > L3_Switch icon and place the device in the grid.
• Add an Access Point: In the toolbar, click on the Access Point icon and place the Access Point in the grid.
• Connect the devices by using Wired/Wireless Links present in the top ribbon/toolbar. Click on the first device and then click on the second device. A link will form between the two devices.
• Configure an application as follows:
• Click the application icon on the top ribbon/toolbar.
• Specify the source and destination devices.
• Specify other application parameters per your model.
Figure 2‑3: Top Ribbon/Toolbar
• Multiple applications can be generated by using the add button in Application properties. Set the values and click on the OK button. Detailed information on Application properties is available in section 6 of the NetSim User Manual.
• Right-click on any device (Router, Access_Point, L2_Switch, Wireless_Node, Wired_Node, etc.) and set the parameters.
Figure 2‑4: Device Properties
• Interface_Wireless' Physical Layer and Data Link Layer parameters are local, but the Physical Layer Standard parameter is global. To set the same parameter value in all devices, ensure that you update the parameter values in all other devices (Access_Point or Wireless_Node) manually, as a local parameter change does not propagate to the other devices.
Figure 2‑5: MAC properties of Access Point
Figure 2‑6: PHY Layer properties of Access Point
Right click on the link and click on properties to set link properties. Note that when simulating Internetworks if the link propagation delay is set too high then the applications may not see any throughput since it would take too long for OSPF to converge, and furthermore, TCP may also timeout (since max RTO is 3s).
## Enable Packet Trace, Event Trace & Plots (Optional)#
Click the Packet Trace / Event Trace icon in the toolbar and click on the OK button. For detailed help, please refer to sections 8.1, 8.4 and 8.5 of the User Manual. Select the Plots icon to enable Plots and click on the OK button (see Figure 2‑8).
Figure 2‑8: Packet Trace, Event Trace & Plots options on top ribbon
## Run Simulation#
Click on the Run Simulation icon on the top ribbon/toolbar. For detailed help, please refer to section 3.2.7 of the User Manual.
Figure 2‑9: Run Simulation on top ribbon
Set the Simulation Time and click on OK button.
Figure 2‑10: Run Simulation window
# Model Features#
## WLAN 802.11#
NetSim implements the 802.11 MAC and the 802.11 PHY abstracted at a packet-level. We start with the 3 types of nodes supported in 802.11 Wi-Fi.
• Wireless Nodes (Internetworks) or STAs. In Internetworks, APs and wireless nodes (STAs) are associated based on the connecting wireless link.
• Wi-Fi Access Points (Internetworks) or APs. Every STA in the WLAN associates with exactly one AP. Each AP, along with its associated STAs, defines a cell. Each cell operates on a specific channel.
• Standalone wireless nodes (Mobile Adhoc networks).
The MAC Layer features:
• RTS/CTS/DATA/ACK transmissions.
• Packet queuing, aggregation, transmission, and retransmission.
• 802.11 EDCA.
The PHY layer implements:
• RF propagation (documented separately).
• Received power based on propagation model.
• Interference and signal to interference noise (SINR) calculation.
• MCS (and in turn PHY Rate) setting based on RSS and rate adaptation algorithms.
• BER calculation and packet error modelling.
Figure 3‑1: NetSim’s Wi-Fi design window, the results dashboard and the plots window
### WLAN standards supported in NetSim#
802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, 802.11e (EDCA) and 802.11p are the WLAN standards available in NetSim. The operating frequencies and bandwidths are given in the table below.
| WLAN standard | Frequency (GHz) | Bandwidth (MHz) |
| --- | --- | --- |
| 802.11a | 5 | 20 |
| 802.11b | 2.4 | 20 |
| 802.11g | 2.4 | 20 |
| 802.11n | 2.4, 5 | 20, 40 |
| 802.11ac | 5 | 20, 40, 80, 160 |
Table 3‑1: WLAN standards supported in NetSim
802.11 p and WAVE are described in the VANET Technology library documentation.
### The 2.4 GHz Channels#
The following channel numbers are well-defined for 2.4GHz standards:
| Channel Number | Center Frequency (MHz) |
| --- | --- |
| 1 | 2412 |
| 2 | 2417 |
| 3 | 2422 |
| 4 | 2427 |
| 5 | 2432 |
| 6 | 2437 |
| 7 | 2442 |
| 8 | 2447 |
| 9 | 2452 |
| 10 | 2457 |
| 11 | 2462 |
| 12 | 2467 |
| 13 | 2472 |
| 14 | 2484 |
Table 3‑2: 2.4 GHz Wi-Fi Channels per IEEE Std 802.11g-2003, 802.11-2012
Channels 1 through 14 are used in 802.11b, while channels 1 through 13 are used in 802.11g/n.
### The 5 GHz Channels#
The following channel numbers are defined for 802.11a/n/ac.
| Channel Number | Center Frequency (MHz) |
| --- | --- |
| 36 | 5180 |
| 40 | 5200 |
| 44 | 5220 |
| 48 | 5240 |
| 52 | 5260 |
| 56 | 5280 |
| 60 | 5300 |
| 64 | 5320 |
| 100 | 5500 |
| 104 | 5520 |
| 108 | 5540 |
| 112 | 5560 |
| 116 | 5580 |
| 120 | 5600 |
| 124 | 5620 |
| 128 | 5640 |
| 132 | 5660 |
| 136 | 5680 |
| 140 | 5700 |
| 144 | 5720 |
| 149 | 5745 |
| 153 | 5765 |
| 157 | 5785 |
| 161 | 5805 |
| 165 | 5825 |
| 169 | 5845 |
| 173 | 5865 |
| 177 | 5885 |
Table 3‑3: 5GHz Wi-Fi Channels per IEEE Std 802.11a -1999, 802.11n -2009, 802.11ac -2013
### The 5.9 GHz Channels#
| Channel Number | Center Frequency (MHz) |
| --- | --- |
| 100 | 5500 |
| 104 | 5520 |
| 108 | 5540 |
| 112 | 5560 |
| 116 | 5580 |
| 120 | 5600 |
| 124 | 5620 |
| 128 | 5640 |
| 132 | 5660 |
| 136 | 5680 |
| 140 | 5700 |
| 171 | 5855 |
| 172 | 5860 |
| 173 | 5865 |
| 174 | 5870 |
| 175 | 5875 |
| 176 | 5880 |
| 177 | 5885 |
| 178 | 5890 |
| 179 | 5895 |
| 180 | 5900 |
| 181 | 5905 |
| 182 | 5910 |
| 183 | 5915 |
| 184 | 5920 |
Table 3‑4: 5.9 GHz Wi-Fi Channels per IEEE Std 802.11p-2010
### Channel Numbering#
The standard method to denote 5 GHz channels has been to always use the 20 MHz center channel frequencies for both 20 MHz and 40 MHz wide channels.
The following are the channel numbers of the non-overlapping channels for 802.11ac in NetSim:
• 20 MHz: 36, 40, 44, 48, 52, 56, 60, 64, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144, 149, 153, 157, 161, 165, 169, 173, 177
• 40 MHz: 36, 44, 52, 60, 100, 108, 116, 124, 132, 140, 149, 157, 165, 173
• 80 MHz: 36, 52, 100, 116, 132, 149, 165
• 160 MHz: 36, 100, 149
### WLAN PHY Rate in NetSim#
| WLAN Standard | Frequency (GHz) | Bandwidth (MHz) | MIMO streams | PHY rate (Mbps) |
| --- | --- | --- | --- | --- |
| a | 5 | 20 | N/A | 6, 9, 12, 18, 24, 36, 48, 54 |
| b | 2.4 | 22 | N/A | 1, 2, 5.5, 11 |
| g | 2.4 | 20 | N/A | 6, 9, 12, 18, 24, 36, 48, 54 |
| n | 2.4, 5 | 20 | 4 | Up to 288.8 |
| n | 2.4, 5 | 40 | 4 | Up to 600 |
| ac | 5 | 20 | 8 | Up to 346.8 |
| ac | 5 | 40 | 8 | Up to 800 |
| ac | 5 | 80 | 8 | Up to 1733.2 |
| ac | 5 | 160 | 8 | Up to 3466.8 |
Table 3‑5: WLAN PHY Rates in NetSim
### SIFS, Slot Time, CW Min, and CW Max settings#
| Sub Std. | b (20 MHz) |
| --- | --- |
| SIFS (µs) | 10 |
| Slot Time (µs) | 20 |
| CW Min | 31 |
| CW Max | 1023 |
Table 3‑6: DSSS PHY characteristics (IEEE-Std-802.11-2020 -Page no -2762)
| Sub Std. | a (20 MHz) | g (20 MHz) | p (5 MHz) | p (10 MHz) | p (20 MHz) |
| --- | --- | --- | --- | --- | --- |
| SIFS (µs) | 16 | 16 | 64 | 32 | 16 |
| Slot Time (µs) | 9 | 9 | 21 | 13 | 9 |
| CW Min | 15 | 15 | 15 | 15 | 15 |
| CW Max | 1023 | 1023 | 1023 | 1023 | 1023 |
Table 3‑7: OFDM PHY characteristics (IEEE-Std-802.11-2020 -Page no -2846)
| Sub Std. | n (2.4 GHz) | n (5 GHz) |
| --- | --- | --- |
| SIFS (µs) | 10 | 16 |
| Slot Time (µs) | 20 | 9 |
| CW Min | 15 | 15 |
| CW Max | 1023 | 1023 |
Table 3‑8: HT PHY characteristics (IEEE-Std-802.11-2020 -Page no -2951) and MIMO PHY characteristics (IEEE-Std-802.11n-2009 -Page no -335)
| Sub Std. | ac (5 GHz) |
| --- | --- |
| SIFS (µs) | 16 |
| Slot Time (µs) | 9 |
| CW Min | 15 |
| CW Max | 1023 |
Table 3‑9: Slot time in IEEE-Std-802.11-2020 -Page no -3094 and IEEE-Std-802.11ac-2013-Page no -297
### PHY Implementation#
NetSim is a packet level simulator for simulating the performance of end-to-end applications over various packet transport technologies. NetSim can scale to simulating networks with 100s of end-systems, routers, switches, etc. NetSim provides estimates of the statistics of application-level performance metrics such as throughput, delay, packet-loss, and statistics of network-level processes such as buffer occupancy, collision probabilities, etc.
In order to achieve scalable network simulation that can execute in reasonable time on desktop-level computers, the details of the physical layer techniques in all networking technologies have been abstracted to the point that bit-error probabilities can be obtained, from which packet error probabilities are derived.
NetSim does not implement any of the digital communication functionalities of the PHY layer. For the purpose of PHY layer simulation, the particular modulation and coding scheme, along with the transmit power, path loss, noise, and interference, yields the bit rate and the bit error rate by using well-known formulas or tables for the particular PHY layer being used. Users would need to use a PHY Layer/RF/Link Level simulator for simulating various digital communication and link level functionalities. Typically, these simulators simulate just one transmitter-receiver pair, rather than a network.
Generally, in NetSim, the PHY layer parameters available for the user to modify are Channel Bandwidth, Channel Centre Frequency, Transmit-power, Receiver-sensitivity, Antenna-gains, and the Modulation-and-Coding-Scheme. When simulating standard protocols, these parameters can only be chosen from a standard-defined set. NetSim also has standard models for radio pathloss; the parameters of these pathloss models can also be set.
### PHY States#
The PHY radio states implemented in NetSim 802.11 are RX_ON_IDLE, RX_ON_BUSY, TRX_ON_BUSY.
• RX_ON_IDLE: This is the default radio state.
• RX_ON_BUSY: This state is set at the receiver radio when the reception of data begins. Upon completion of reception it changes to RX_ON_IDLE.
• TRX_ON_BUSY: This state is set at the transmitter radio at the start of frame transmission. Upon completion of transmission, it changes to RX_ON_IDLE.
• A node in back-off slots can be considered equivalent to CCA busy. In NetSim, the radio state continues to be RX_ON_IDLE.
• SLEEP state is not implemented since NetSim 802.11 does not currently implement power save mode.
### 802.11 implementation details#
Packets arriving from the Network layer get queued up in an access buffer, where they are sorted according to their priority per 802.11 EDCA. An event MAC_OUT with SubEvent CS (Carrier Sense – CSMA) is added to check if the medium is free.
Figure 3‑2: Packets transmission form Network layer to Mac Layer and how queued up in an access buffer
During CS, if the medium is free, then the NAV is checked. This occurs if the RTS/CTS mechanism is enabled, which can be done by adjusting the RTS Threshold. If Present_Time > NAV, then an Event MAC_OUT with SubEvent DIFS End is added at time Present_Time + DIFS time.
Figure 3‑3: Event and SubEvent in Mac layer
The medium is checked at the end of the DIFS period and a random BackOff time is calculated based on the Contention Window (CW). An Event MAC_OUT with SubEvent BackOff is added at time Present_Time + BackOff Time.
Once BackOff is successful, NetSim starts the transmission process, wherein it gets the aggregated frames from the QOS buffer and stores them in the Retransmit buffer. If the A-MPDU size is greater than the RTS Threshold, then the RTS/CTS mechanism (an optional feature) is enabled.
Figure 3‑4: Event and SubEvent in Mac layer and Phy layer
NetSim sends the packet by calling the PHY_OUT Event with SubEvent AMPDU_Frame. Note that the implementation of A-MPDU is in the form of a linked list.
Whenever a packet is transmitted, the medium is made busy and a Timer Event with SubEvent Update Device Status is added at the transmission end time to set the medium again as idle.
Figure 3‑5: Event and SubEvent in Phy layer
Events PHY_OUT SubEvent AMPDU_SubFrame, Timer Event SubEvent Update Device Status and Event PHY_IN SubEvent AMPDU_SubFrame are added in succession for each MPDU (Subframe of the aggregated frame). This is done for collision calculations. If two stations start transmission simultaneously, then some of the SubFrames may collide. Only those collided SubFrames will be retransmitted again. The same logic is followed for an Errored packet. However, if the PHY header (the first packet) is errored or collided, the entire A-MPDU is resent.
At the receiver, the device de-aggregates the frame in the MAC Layer and generates a block ACK which is sent to the transmitter. If the receiver is an intermediate node, the de-aggregated frames are added to the access buffer of the receiver in addition to the packets which arrive from the Network layer. If the receiver is the destination, then the received packets are sent to the Network layer. At the transmitter side, when the device receives the block acknowledgement, it retransmits only those packets which are errored. The rest of the packets are deleted from the retransmit buffer. This is done till all packets are transmitted successfully or a retransmit limit is reached, after which the next set of frames is aggregated and sent.
### 802.11ac MAC and PHY Layer Implementation#
Improvements in 802.11ac compared to 802.11n
Feature 802.11n 802.11ac
Spatial Streams Up to 4 streams Up to 8 streams
MIMO Single User MIMO Multi-User MIMO
Channel Bandwidth 20 and 40 MHz 20, 40, 80 and 160 MHz (optional)
Modulation BPSK, QPSK, 16QAM and 64QAM BPSK, QPSK, 16QAM, 64QAM and 256QAM (optional)
Max Aggregated Packet Size 65536 octets 1048576 octets
Table 3‑10: Feature Comparison between 802.11ac to 802.11n
MAC layer improvements comprise only an increase in the number of aggregated frames from 1 to 64. The MCS indices for the different modulation schemes and coding rates are as follows:
| MCS index | Modulation | Code Rate |
|---|---|---|
| 0 | BPSK | 1/2 |
| 1 | QPSK | 1/2 |
| 2 | QPSK | 3/4 |
| 3 | 16QAM | 1/2 |
| 4 | 16QAM | 3/4 |
| 5 | 64QAM | 2/3 |
| 6 | 64QAM | 3/4 |
| 7 | 64QAM | 5/6 |
| 8 | 256QAM | 3/4 |
| 9 | 256QAM | 5/6 |

Table 3‑11: Different Modulation schemes and Code Rates
Receiver sensitivity for the different modulation schemes in 802.11ac (for a 20MHz channel bandwidth) is as follows:

| MCS Index | Receiver Sensitivity (in dBm) |
|---|---|
| 0 | -82 |
| 1 | -79 |
| 2 | -77 |
| 3 | -74 |
| 4 | -70 |
| 5 | -66 |
| 6 | -65 |
| 7 | -64 |
| 8 | -59 |
| 9 | -57 |

Table 3‑12: MCS index vs. Receiver Sensitivity (Rx-sensitivity)
The Rx-sensitivity is then set per the above table in conjunction with Max Packet Error Rate (PER) as defined in the standard.
If users wish to apply just the Rx-sensitivity (also termed the rate dependent input level), then the call to calculate_rxpower_by_per() in the function fn_NetSim_IEEE802_11_HTPhy_UpdateParameter() in the file IEEE802_11_HT_PHY.c can be commented out.
Number of subcarriers for different channel bandwidths:

| PHY Standard | Subcarriers | Capacity relative to 20MHz in 802.11ac |
|---|---|---|
| 802.11n/802.11ac 20MHz | Total 56, 52 Usable (4 pilot) | x1.0 |
| 802.11n/802.11ac 40MHz | Total 114, 108 Usable (6 pilot) | x2.1 |
| 802.11ac 80MHz | Total 242, 234 Usable (8 pilot) | x4.5 |
| 802.11ac 160MHz | Total 484, 468 Usable (16 pilot) | x9.0 |

Table 3‑13: Number of subcarriers for different channel bandwidths
With knowledge of the MCS index and the channel bandwidth, the data rate is set in the following manner:

• Get the number of usable subcarriers for the given bandwidth of the medium.

• Get the Number of Bits per Sub Carrier (NBPSC) from the selected MCS.

• Number of Coded Bits Per Symbol (NCBPS) = NBPSC × Number of Subcarriers

• Number of Data Bits Per Symbol (NDBPS) = NCBPS × Coding Rate

• Physical level Data Rate = NDBPS / Symbol Time (4 µs for long GI and 3.6 µs for short GI)
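As a worked illustration of these steps, the sketch below computes the PHY rate for an assumed 802.11ac configuration. The function and parameter names are illustrative, not NetSim source code:

```c
#include <stdio.h>

double phy_data_rate_mbps(int usable_subcarriers, int nbpsc,
                          double coding_rate, double symbol_time_us)
{
    double ncbps = (double)nbpsc * usable_subcarriers; /* coded bits per symbol */
    double ndbps = ncbps * coding_rate;                /* data bits per symbol  */
    return ndbps / symbol_time_us;                     /* bits per us == Mbps   */
}

int main(void)
{
    /* Assumed example: 802.11ac, 80 MHz => 234 usable subcarriers; MCS 9 =>
     * 256QAM (8 bits per subcarrier) at coding rate 5/6; long GI => 4 us. */
    printf("%.1f Mbps\n", phy_data_rate_mbps(234, 8, 5.0 / 6.0, 4.0));
    return 0;
}
```

With these inputs the sketch prints 390.0 Mbps, the single-stream 802.11ac 80 MHz MCS 9 long-GI rate.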
### MAC Aggregation in NetSim#
NetSim supports A-MPDU aggregation and does not support A-MSDU aggregation. MAC Aggregation is independent of MCS (PHY Rate) or BER. It is the PHY Rate that adapts to BER via Rate Adaptation algorithms.
In the aggregation scheme shown in Figure 3‑6, several MPDUs (MAC Protocol Data Units) are aggregated into a single A-MPDU (Aggregated MPDU). The A-MPDUs are created before transfer to the PHY. The MAC does not wait for MPDUs to aggregate; it aggregates the frames already queued to form an A-MPDU. The maximum size of an A-MPDU is 65,535 bytes.
Figure 3‑6: Aggregation scheme
In 802.11n, a single block acknowledgement is sent for the entire A-MPDU. The block ACK acknowledges each packet that is received. It consists of a (compressed) bitmap of 64 bits, or 8 bytes. This bitmap can acknowledge up to 64 packets, 1 bit for each packet.

The value of a bitmap field is 1 if the respective packet is received without error, else it is 0. Only the errored packets are resent, until a retry limit is reached. The number of packets in an A-MPDU is restricted to 64, since the size of the block ACK bitmap is 64 bits.
Figure 3‑7: Block Ack Control Packet
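A minimal sketch of the 64-bit block-ACK bitmap handling described above: the receiver sets bit i when MPDU i of the A-MPDU arrives error-free, and the transmitter retransmits only the unset positions. Names are illustrative, not NetSim source code:

```c
#include <stdint.h>

/* Receiver side: mark subframe i (0 <= i < 64) as received without error. */
void blockack_set(uint64_t *bitmap, int i)
{
    *bitmap |= (uint64_t)1 << i;
}

/* Transmitter side: returns 1 if MPDU i must be retransmitted. */
int needs_retransmit(uint64_t bitmap, int i)
{
    return ((bitmap >> i) & 1u) == 0;   /* bit 0 => errored / not received */
}
```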
• NetSim uses the parameter Number of frames to aggregate, while the standard uses the parameter A-MPDU Length Exponent. Per the standard, the A-MPDU length is defined by two parameters: Max A-MPDU length exponent and Block ACK bitmap. The A-MPDU length in bytes is $2^{13 + \text{Maximum A-MPDU Length Exponent}} - 1$.

• Since NetSim doesn't model A-MSDU, a design decision was made to model A-MPDU based on the Block ACK bitmap size (which indicates the received status of up to 64 frames); hence the parameter Number of frames to aggregate in the GUI.

• When EDCA is enabled, packet aggregation is done separately for each QoS class.

• The MAC aggregates packets destined to the same receiver, irrespective of the end destination. Receiver is to be understood as the next hop in a wireless transmission.

• RTS threshold is compared against the total A-MPDU size.

• Aggregation functionality may be incorrectly executed if NumberOfFramesToAggregate × PacketSize (B) > 65,535 (B)
### Signal to interference and noise ratio (SINR)#
At each receiver, at the beginning when the first packet is transmitted and every time the transmitter or receiver moves, NetSim calculates the received signal level from the transmitter. The received signal level equals the transmit power less the propagation losses. Next, NetSim calculates the interference received (at the same receiver) from all the interfering transmissions. Only co-channel interference is accounted for; adjacent channel interference is not calculated. Finally, NetSim takes the ratio of the signal level to the sum of the total interference from other transmissions plus the thermal noise. This ratio is the SINR.
Once the SINR is calculated, the BER is obtained from the SINR-BER tables for the applicable modulation scheme. This BER is then converted to a Packet Error Rate (PER). Packet error (Yes/No) is determined by drawing a random number in (0, 1) and comparing it against the PER¹.
The same is explained diagrammatically below.
Figure 3‑8: Radio Tx-Rx for one transmission
* Propagation model covers path loss, fading and shadowing. The models are documented in a separate document named Propagation-Models.pdf
** Interference noise due to other transmissions within the network
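To make the computation concrete, here is a small sketch (illustrative names, not NetSim source code) that combines the signal, co-channel interference and thermal noise into the SINR:

```c
#include <math.h>

/* Convert dBm to mW. */
static double dbm_to_mw(double dbm) { return pow(10.0, dbm / 10.0); }

/* SINR (dB) = signal (dBm) - 10*log10(total interference + noise, in mW).
 * The sum in the denominator must be done in linear units (mW), not dB. */
double sinr_db(double rx_signal_dbm, const double *interferers_dbm,
               int n_interferers, double thermal_noise_dbm)
{
    double denom_mw = dbm_to_mw(thermal_noise_dbm);
    for (int i = 0; i < n_interferers; i++)
        denom_mw += dbm_to_mw(interferers_dbm[i]);
    return rx_signal_dbm - 10.0 * log10(denom_mw);
}
```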
### Transmit Power#
The user can set a fixed transmit power via the GUI. Transmit power is a local variable; each STA and AP can be set to have different transmit powers. The transmit power can be dynamically varied by modifying the underlying 802.11 source C code.
### Carrier Sense#
Transmit power less propagation losses is the received power. The propagation loss is the sum (in dB scale) of pathloss, shadowing loss and fading loss. Various propagation models are available and are detailed in the Propagation model manual. Pathloss, Fading and Shadowing can be turned on/off in the GUI.

If ReceiverSensitivity(Lowest MCS) ≥ Received-Power ≥ ED-Threshold, the medium is set to busy. Note that the CSMA/CA algorithm operates according to the medium state (busy/idle).

If Received-Power > ReceiverSensitivity(Lowest MCS), then the MCS is set depending on the received power and the signal is decoded. Packet error is decided by looking up the SINR-BER table for the given MCS.

These variables can also be dynamically varied by modifying the underlying 802.11 source C code.
### Transmission Range, Carrier Sense Range, and Interference Range#
• Transmission Range: The transmission range is the range within which the receiver of a signal can decode the source's transmission correctly (when no other transmitting node's signal interferes). This is typically smaller than the carrier-sensing range of the transmitter.

• Carrier Sense Range: The carrier-sense range is the range within which the transmitter's signal exceeds the Carrier Sense Threshold of the receiver (or another transmitter). The receiver (or another transmitter) detects the medium to be busy and does not transmit at this time.

• Interference Range: The interference range (defined by the receiver) is the range within which any signal transmitted by other sources interferes with the transmission of the intended source, thereby causing a loss (marked as a collision in NetSim) at the receiver.
These three ranges are affected by the power of the transmitter. The greater the transmission power, the further a node's transmission can be received, and the more nodes whose communication with other nodes will be affected by this transmission. The transmission range is also affected by the MCS used by the transmitter: the higher the MCS, the shorter the range, and vice versa.
### Carrier Sense (CS) Threshold#
In NetSim (from v13.2 onwards) the Carrier sense (CS) threshold is set equal to Control rate receive sensitivity.
CSThreshold = ReceiverSensitivity(ControlRate)
Users can modify the CS threshold using the variable CSRANGEDIFF which is set to 0 dB in code by default. This implies a 0 dB differential between the lowest MCS (Control rate) Receive sensitivity (which determines DecodeRange) and CS Threshold (which determines CarrierSenseRange). The value of CSRANGEDIFF can be modified by the user in NetSim Standard or Pro versions, which ship with source code. We believe the term EDThreshold used in literature is the same as CSThreshold.
If the interference signal power (the sum of the Received-Power from all other transmitters), measured at the transmitter, is greater than the ED-Threshold, then the transmitter assumes the medium is busy. Carrier is sensed by the transmitter; all CS activity occurs at the transmitter, not at the receiver.
### Transmitter’s choice of MCS#
If the rate adaptation algorithm is turned off, then the transmitter chooses the MCS by comparing the RSS (transmit power less pathloss and shadowing loss) against the Receiver-Sensitivity for the different MCSs (per the tables in the standards). The highest possible MCS is then chosen. This means the MCS is not fixed but adapts to the received signal strength, even with rate adaptation turned off in the MAC layer.
NetSim exploits the AP-STA and the STA-AP channel reciprocity. Therefore, Pathloss plus Shadow loss is identical in both directions.
Note that when computing the BER (from the SINR), fading loss is added to this RSSI value. Thus, fading loss is not accounted for when choosing the MCS, but is accounted for when computing the BER.
NetSim has rate adaptation algorithms which take care of selecting the right MCS for a given SINR. In the simplest algorithm, the rate (MCS) goes up one step for every 20 successful transmissions, and goes down one step for every 3 continuous failures.
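A minimal sketch of this simplest (ARF-like) rule, counting consecutive successes and failures; the state layout and names are illustrative, not NetSim source code:

```c
typedef struct {
    int mcs;         /* current MCS index                */
    int mcs_max;     /* highest supported MCS index      */
    int successes;   /* consecutive transmit successes   */
    int failures;    /* consecutive transmit failures    */
} rate_state_t;

void on_tx_result(rate_state_t *s, int success)
{
    if (success) {
        s->failures = 0;
        if (++s->successes >= 20) {          /* 20 successes: rate up   */
            s->successes = 0;
            if (s->mcs < s->mcs_max) s->mcs++;
        }
    } else {
        s->successes = 0;
        if (++s->failures >= 3) {            /* 3 failures: rate down   */
            s->failures = 0;
            if (s->mcs > 0) s->mcs--;
        }
    }
}
```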
### Hidden Node Behaviour#
Consider N1 and N3 transmitting to N2, where N1 and N3 are beyond each other's Carrier Sense (CS) range. N1 is said to be hidden from N3, and vice versa.
When N1 and N3 transmit, there are “likely” to be collisions at N2. However, collisions do not occur all the time. The CSMA/CA algorithm exponentially increases the backoff and hence after a few collisions it is possible that one of the nodes gets a low back-off number while the other draws a very high back-off number. Thus, the node with low back-off can complete transmissions (of one and even more than one packet) while the other node (with the large backoff) is still in backoff.
When N1 transmits to N2, N3 can't hear the transmission since N3 is beyond N1's CS range. Therefore, N3 can attempt transmission if its backoff counts down to 0. However, when N2 sends back the WLAN ACK, N3 will hear it, since N3 is within range of N2. Therefore, in NetSim, N3 will sense the medium as busy and freeze its backoff while N2 is sending the WLAN ACK to N1.

In the case of N2 to N1/N3 transmissions, the reverse is true for the MAC ACK from the nodes. When N2 sends a packet to N1 (or N3), it is within range of N3 (or N1); however, when N1 (or N3) sends back the MAC ACK, there is a chance of collision with a data packet of N3 (or N1).
### IEEE 802.11 e QoS and EDCA#
Quality of Service (QoS) provides you with the ability to specify parameters on multiple queues for increased throughput and better performance of differentiated wireless traffic like Voice-over-IP (VoIP), other types of audio, video, and streaming media, as well as traditional IP data over the Access Point.
QoS was introduced in 802.11e and is achieved using enhanced distributed channel access functions (EDCAFs). EDCA provides differentiated priorities to transmitted traffic, using four different access categories (ACs). With EDCA, high-priority traffic has a higher chance of being sent than low-priority traffic: a station with high priority traffic waits a little less before it sends its packet, on average, than a station with low priority traffic. This differentiation is achieved through varying the channel contention parameters i.e., the amount of time a station would sense the channel to be idle, and the length of the contention window for a backoff.
In addition, EDCA provides contention-free access to the channel for a period called a Transmit Opportunity (TXOP). A TXOP is a bounded time interval during which a station can send as many frames as possible (as long as the duration of the transmissions does not extend beyond the maximum duration of the TXOP). If a frame is too large to be transmitted in a single TXOP, it should be fragmented into smaller frames. The use of TXOPs reduces the problem of low rate stations gaining an inordinate amount of channel time in the legacy 802.11 DCF MAC. A TXOP time interval of 0 means it is limited to a single MPDU.
Figure 3‑9: Enhanced Distributed Channel Access (EDCA) in 802.11
NetSim categorizes application packets based on the QoS class set in the application properties, as follows:
• VO: UGS and RTPS
• VI: NRTPS and ERTPS
• BE: BE and all control packets such as TCP ACKs
• BK: Everything else
#### Default EDCA Parameters#
The following tables show the default EDCA parameters. This default parameter set is per page 899 of IEEE Std 802.11-2016.
| Access Category | CWmin | CWmax | AIFSN | Max TXOP (μs) |
|---|---|---|---|---|
| Background (AC_BK) | 31 | 1023 | 7 | 3264 |
| Best Effort (AC_BE) | 31 | 1023 | 3 | 3264 |
| Video (AC_VI) | 15 | 31 | 2 | 6016 |
| Voice (AC_VO) | 7 | 15 | 2 | 3264 |

Table 3‑14: Default EDCA access parameters for 802.11b for both AP and STA
| Access Category | CWmin | CWmax | AIFSN | Max TXOP (μs) |
|---|---|---|---|---|
| Background (AC_BK) | 15 | 1023 | 7 | 2528 |
| Best Effort (AC_BE) | 15 | 1023 | 3 | 2528 |
| Video (AC_VI) | 7 | 15 | 2 | 4096 |
| Voice (AC_VO) | 3 | 7 | 2 | 2080 |

Table 3‑15: Default EDCA access parameters for 802.11 a/g/n/ac for both AP and STA
| Access Category | CWmin | CWmax | AIFSN | Max TXOP (μs) |
|---|---|---|---|---|
| Background (AC_BK) | 15 | 1023 | 9 | 0 |
| Best Effort (AC_BE) | 15 | 1023 | 6 | 0 |
| Video (AC_VI) | 7 | 15 | 3 | 0 |
| Voice (AC_VO) | 3 | 7 | 2 | 0 |

Table 3‑16: Default EDCA access parameters for 802.11p (dot11OCBActivated is true)
Note: The EDCA parameters can be configured by changing the Physical type parameter according to the standard: IEEE802.11b (Medium Access Protocol → DSSS), IEEE802.11n (Medium Access Protocol → HT), IEEE802.11ac (Medium Access Protocol → VHT), IEEE802.11a and g (Medium Access Protocol → OFDM and OCBA → FALSE), IEEE802.11p (Medium Access Protocol → OFDM and OCBA → TRUE).
In NetSim (with default code), rate adaptation works as follows:

1. FALSE: This is similar to the Receiver Based Auto Rate (RBAR) algorithm. The PHY rate gets set based on the target PEP (packet error probability) for a given packet size, as given in the standard. The adaptation is termed "FALSE" since the rate is pre-determined per the standard and there is no subsequent "adaptation".

   • 802.11 n/ac: Target PEP = 0.1, Packet Size: 4096 B

   • 802.11 b: Target PEP = 0.08, Packet Size: 1024 B

   • 802.11 a/g/p: Target PEP = 0.1, Packet Size: 1000 B

2. GENERIC: This is similar to the Auto Rate Fall Back (ARF) algorithm. In this algorithm:

   • Rate goes up one step after 20 consecutive packet successes

   • Rate goes down one step after 3 consecutive packet failures

3. MINSTREL: Per the Minstrel rate adaptation algorithm implemented in Linux
If users wish to set the PHY rate (MCS) by comparing the received signal strength against the receiver minimum input sensitivity tables provided in the standards, they should comment out the following line (line #38) in IEEE802_11.h and rebuild the code:
//#define _RECALCULATE_RX_SENSITIVITY_BASED_ON_PEP_
NetSim then chooses the rate at the beginning of the simulation and the rate doesn’t subsequently adapt. The receiver minimum input sensitivity levels are provided in the files
• 802.11b: IEEE802_11_DSSSPhy.c
• 802.11a, 802.11g and 802.11p: IEEE802_11_OFDMPhy.c
• 802.11n and 802.11ac: IEEE802_11_HTPhy.c
Selecting the different rate adaptation options would have no impact when running this modified code.
### Model Limitations#
1. Mobility of Wireless nodes is not available in infrastructure mode (when connected via an Access Point) and is only available in Adhoc mode. Hence mobility for wireless nodes can only be set when running MANET simulations.
2. Authentication and encryption are not supported
3. While different APs can operate in different channels, all the Wireless nodes connected to one AP operate in the same channel.
4. No beacon generation, probing or association
5. RTS, CTS and ACK are always transmitted at the base rate (lowest MCS)
6. Roaming, whereby a STA leaves the serving AP to associate with a target AP (usually based on RSSI/SNR), is not supported
### Wi-Fi GUI Parameters#
The WLAN parameters can be accessed by right-clicking an Access Point or Wireless Node and selecting Interface Wireless Properties -> Datalink and Physical Layers.
Access Point and Wireless Node Properties
| Parameter | Scope | Range | Description |
|---|---|---|---|
| Rate Adaptation | Cell | False | The algorithm is similar to the Receiver Based Auto Rate (RBAR) algorithm. The PHY rate gets set based on the target PEP (packet error probability) for a given packet size. The adaptation is termed "FALSE" since the rate is pre-determined per the standard and there is no subsequent "adaptation". |
| | | Minstrel | Rate adaptation algorithm implemented in Linux. |
| | | Generic | The algorithm is similar to the Auto Rate Fall Back (ARF) algorithm: (i) rate goes up one step after 20 consecutive packet successes, and (ii) rate goes down one step after 3 consecutive packet failures. |
| Short Retry Limit | Local | 1 to 255 | The maximum number of transmission attempts, made before a failure condition is indicated, of a frame whose MPDU length is less than or equal to the Dot11 RTS Threshold. |
| Long Retry Limit | Local | 1 to 255 | The maximum number of transmission attempts, made before a failure condition is indicated, of a frame whose MPDU length is greater than the Dot11 RTS Threshold. |
| Dot11 RTS Threshold | Local | 0 to 65535 | The size of packets (or A-MPDU if applicable) above which the RTS/CTS (Request to Send / Clear to Send) mechanism gets triggered. |
| MAC Address | Fixed | Auto Generated | The MAC address is a unique value associated with a network adapter, also known as the hardware or physical address. It is a 12-digit hexadecimal number (48 bits in length). |
| Buffer Size | Local | 1 to 100 | Buffer is the memory in a device which holds data packets temporarily. If the incoming rate is higher than the outgoing rate, incoming packets are stored in the buffer. NetSim models the buffer as an egress buffer. Unit is MB. |
| Medium Access Protocol | Local | DCF | DCF is the process by which CSMA/CA is applied to Wi-Fi networks. DCF defines four components to ensure devices share the medium equally: Physical Carrier Sense, Virtual Carrier Sense, Random Back-off timers, and Interframe Spaces (IFS). DCF is used in non-QoS WLANs. |
| | | EDCAF | QoS was introduced in 802.11e and is achieved using enhanced distributed channel access functions (EDCAFs). EDCA provides differentiated priorities to transmitted traffic, using four different access categories (ACs). With EDCA, high-priority traffic has a higher chance of being sent than low-priority traffic: a station with high priority traffic waits a little less, on average, before it sends its packet than a station with low priority traffic. |
| Physical Type | Local | DSSS | Direct Sequence Spread Spectrum. The physical type parameter is set to DSSS if the standard selected is IEEE802.11b. |
| | | OFDM | Orthogonal Frequency Division Multiplexing, utilized as a digital multi-carrier modulation method. The physical type parameter is set to OFDM if the standard selected is IEEE802.11 a, g or p. |
| | | HT | Operates in the 2.4GHz or 5GHz frequency bands. The physical type parameter is set to HT if the standard selected is IEEE802.11n. |
| | | VHT | The physical type parameter is set to VHT if the standard selected is IEEE802.11ac. |
| OCBA Activated | Local | True or False | Determines the standard chosen for the OFDM physical type. The standard is set to IEEE802.11p if OCBA is True, and to IEEE802.11a and g if OCBA is False. |
| BSS Type | Fixed | Auto Generated | The BSS type is fixed to Infrastructure mode. The wireless devices can communicate - with each other or with a wired network - through an Access Point. |
| CW min (Slots) | Local | 0 to 255 | Specifies the initial Contention Window (CW) used by an Access Point (or STA) for a particular AC for generating a random number for the back-off. |
| CW max (Slots) | Local | 0 to 65535 | At each collision the CW is doubled. CW max specifies the final maximum CW value used by an Access Point (or STA) for a particular AC for generating a random number for the back-off. |
| AIFSN (Slot) | Local | 2 to 15 | Specifies the number of slots after a SIFS duration. |
| Max TXOP | Local | 0 to 65535 | Specifies the maximum duration of an EDCA TXOP for a given AC. Unit is microseconds. |
| MSDU Lifetime (TU) | Local | 0 to 500 | Specifies the maximum duration an MSDU would be retained by the MAC before it is discarded, for a given AC. Specified in TU. |
Interface Wireless - Physical Layer

| Parameter | Scope | Range | Description |
|---|---|---|---|
| Protocol | Fixed | IEEE802.11 | Defines the MAC and PHY specifications (IEEE802.11 a/b/g/n/ac/p) for wireless connectivity for fixed, portable and moving stations within a local area. |
| Connection Medium | Fixed | Auto Generated | Defines how the devices are connected or linked to each other. |
| Standard | Cell | IEEE802.11 a/b/g/n/ac/p | Refers to a family of specifications developed by IEEE for WLAN technology. The IEEE standards supported in NetSim are IEEE 802.11 a, b, g, n, ac and p. 802.11a provides up to 54 Mbps in the 5GHz band. 802.11b provides 11 Mbps in the 2.4GHz band. 802.11g provides 54 Mbps transmission over short distances in the 2.4GHz band. 802.11ac provides support for wider channels and beamforming capabilities. 802.11p provides support for Intelligent Transportation Systems. |
| Transmission Type | Fixed | DSSS | The transmission type parameter is DSSS if the standard selected is IEEE802.11b. |
| | | OFDM | The transmission type parameter is OFDM if the standard selected is IEEE802.11a, g or p. |
| | | HT | The transmission type parameter is HT if the standard selected is IEEE802.11n. |
| | | VHT | The transmission type parameter is VHT if the standard selected is IEEE802.11ac. |
| Number of Frames to Aggregate | Cell | 1 to 1024 (11ac), 1 to 64 (11n) | Number of frames aggregated to form an A-MPDU. This is fixed and cannot be dynamically varied (except by modifying the code). See 3.1.12 for more information. |
| Transmit Power | Local | 0 to 1000 | Transmitted signal power. Note that the transmit power is not split among the antennas; this value is applied to each antenna in a multi-antenna transmitter. Unit is mW. |
| Antenna Gain | Local | 0 to 1000 | Unit is dBi. |
| Antenna Height | Local | 0 to 1000 | The height of the antenna above the ground. Unit is m. |
| SIFS | Fixed | Auto Generated | The time interval required by a wireless device between receiving a frame and responding to it. Unit is microseconds. |
| Frequency Band | Cell | 2.4, 5 (depends on the standard chosen) | Range of frequencies at which the device operates. The frequency band depends on the standard selected. Unit is GHz. |
| Bandwidth | Cell | 20, 40, 60, 80, 160 (depends on the standard chosen) | The bandwidth depends on the standard and the frequency band selected. Unit is MHz. |
| CCA Mode | Fixed | Auto Generated | A mechanism to determine whether the medium is idle or not. It includes carrier sensing and energy detection. |
| Slot Time | Fixed | Auto Generated | Time is quantized as slots in Wi-Fi. Unit is microseconds. |
| Standard Channel | Local | Depends on the standard chosen | The channel options defined in the standards. The options also depend on the frequency band if the standard supports multiple bands. |
| CW Min | Fixed | Auto Generated | The minimum size of the Contention Window in units of slot time. CW Min is used by the MAC to calculate the back-off time for channel access during a carrier sense. |
| CW Max | Fixed | Auto Generated | The maximum size of the Contention Window in units of slot time. The CW is doubled progressively when collisions occur. |
| Transmitting Antennas | Local | 1 to 8 | The number of transmit antennas. Note that power is not split among the transmit antennas but is assigned to each antenna. (The Tx and Rx antenna pair is present only for 802.11ac and 802.11n.) |
| Receiving Antennas | Local | 1 to 8 | The number of receive antennas. |
| Guard Interval | Local | 400 and 800 | The Guard Interval is intended to avoid signal loss from multipath effects. Unit is nanoseconds. |
| Reference Distance d0 | Local | 1 to 10 | Unit is m. |
### IEEE802.11 Results#
IEEE802.11 performance metrics will be displayed in the results dashboard if the simulated network scenario contains at least one device with the WLAN protocol enabled.
| Parameter | Description |
|---|---|
| Device_Id | The Ids of the wireless devices which support 802.11 (WLAN) |
| Interface_Id | The interface Ids of the wireless nodes |
| Frame Sent | The number of frames sent by an Access Point |
| Frame Received | The number of frames received by a wireless node |
| RTS Sent | The number of Request to Send (RTS) packets sent by a Wireless Node. RTS/CTS frames are sent prior to transmission when the packet size exceeds the RTS threshold. The access point receives the RTS and responds with a CTS frame. The station must receive a CTS frame before sending the data frame. The CTS also contains a time value that alerts other stations to hold off from accessing the medium while the station initiating the RTS transmits its data. |
| RTS Received | The number of RTS packets received by Access Points |
| CTS Sent | The number of Clear to Send (CTS) packets sent by Access Points |
| CTS Received | The number of CTS packets received by Wireless Nodes |
| Successful BackOff | The number of successful backoffs at a wireless node. In IEEE 802.11 Wireless Local Area Networks (WLANs), network nodes experiencing collisions on the shared channel need to back off for a random period of time, uniformly selected from the Contention Window (CW). BackOff is a timer which is decreased as long as the medium is sensed to be idle for a DIFS, frozen when a transmission is detected on the medium, and resumed when the channel is detected as idle again for a DIFS interval. |
| Failed BackOff | The number of failed backoffs at a wireless node |
Table 3‑17: Description of IEEE 802.11 Metrics
## Layer 2 (L2) Ethernet Switching#
Layer 2 switches have a MAC address table that contains a MAC address and port number. Switches follow this simple algorithm for forwarding packets:
1. When a frame is received, the switch compares the SOURCE MAC address to the MAC address table. If the SOURCE is unknown, the switch adds it to the table along with the port number the packet was received on. In this way, the switch learns the MAC address and port of every transmitting device.
2. The switch then compares the DESTINATION MAC address with the table. If there is an entry, the switch forwards the frame out the associated port. If there is no entry, the switch sends the packet out all its ports except the port that the frame was received on. This is termed Flooding.
3. The switch does not learn the destination MAC until it receives a frame from that device.
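A minimal sketch of this learn-and-forward algorithm is shown below. A real switch uses hash tables with entry aging; the linear-scan table and names here are illustrative only:

```c
#include <string.h>

#define TABLE_SIZE 1024
#define FLOOD (-1)

typedef struct { unsigned char mac[6]; int port; int valid; } mac_entry_t;
static mac_entry_t table[TABLE_SIZE];

static int lookup(const unsigned char *mac)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        if (table[i].valid && memcmp(table[i].mac, mac, 6) == 0)
            return table[i].port;
    return FLOOD;                              /* unknown destination */
}

static void learn(const unsigned char *mac, int port)
{
    for (int i = 0; i < TABLE_SIZE; i++)
        if (table[i].valid && memcmp(table[i].mac, mac, 6) == 0) {
            table[i].port = port;              /* refresh existing entry */
            return;
        }
    for (int i = 0; i < TABLE_SIZE; i++)
        if (!table[i].valid) {                 /* add new entry */
            memcpy(table[i].mac, mac, 6);
            table[i].port = port;
            table[i].valid = 1;
            return;
        }
}

/* Returns the egress port, or FLOOD to send out all ports except in_port. */
int forward(const unsigned char *src, const unsigned char *dst, int in_port)
{
    learn(src, in_port);                       /* step 1: learn the source  */
    return lookup(dst);                        /* step 2: known? else flood */
}
```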
### Spanning Tree Protocol#
NetSim ethernet switches implement Spanning tree protocol to build a loop-free logical topology. This is always enabled and cannot be disabled.
### Switch Port States#
All ports on a switch can be in one of the following states:

• Blocking: A port that would cause a switching loop if it were active. No user data is sent or received over a blocking port.

• Listening: The switch processes BPDUs and awaits possible new information that would cause it to return to the blocking state. It does not populate the MAC address table and it does not forward frames.

• Learning: While the port does not yet forward frames, it does learn source addresses from frames received and adds them to the filtering database (switching database). It populates the MAC address table but does not forward frames.

• Forwarding: A port receiving and sending data in Ethernet frames; normal operation.
It is recommended that the application start time be set to a value greater than the time it takes for the spanning tree protocol to complete (of the order of hundreds of milliseconds).
### Model Limitations#
1. The spanning tree protocol is only run at the beginning of the simulation. If a link fails, the spanning tree protocol is not re-run.
2. If applications are started prior to completion of the spanning tree protocol, then the MAC table created is not updated per the spanning tree protocol.
3. Jumbo Frames are not supported in NetSim Ethernet Protocol
### Switch: GUI Parameters#
Switch properties can be set by right-clicking on a switch --> Properties --> Interface_1 (ETHERNET).
Figure 3‑10: Data Link Layer Properties of a Switch
The properties that can be set are:
| Parameter | Type * | Range | Description |
|---|---|---|---|
| MAC Address | Fixed | Auto generated | The MAC address is a unique value associated with a network adapter, also known as the hardware or physical address. It is a 12-digit hexadecimal number (48 bits in length). |
| Buffer Size (MB) | Local | 1-5 | Buffer is the memory in a device which holds data packets temporarily. If the transmitting port is busy, incoming packets are stored in the buffer. NetSim models the buffer as an egress buffer, and the range is 1 MB to 5 MB per port of the switch. |
| STP Status | Fixed | TRUE | Spanning Tree Protocol is set to "True" in switches by default. |
| Switch Priority | Local | 1-61440 | The priority that can be assigned to the switch. Priority is involved in deciding the root bridge for STP. |
| Switch ID | Fixed | Auto generated | Each switch has a unique ID for the spanning tree calculation. The ID is derived by combining the priority and the MAC address. Since a switch has a MAC address for each port, the least of the MAC addresses of the connected ports is taken while forming the unique ID. |
| Spanning Tree | Fixed | IEEE802.1D | The Spanning Tree Protocol (STP) ensures a loop-free topology for any bridged Ethernet local area network. The basic function of STP is to prevent bridge loops and the broadcast radiation that results from them. STP is standardized as IEEE 802.1D. As the name suggests, it creates a spanning tree within a network of connected layer-2 bridges (typically Ethernet switches) and disables those links that are not part of the spanning tree, leaving a single active path between any two network nodes. |
| STP Cost | Local | 0-1000 | Cost used by the switch to calculate the spanning tree. The cost assigned to each port is based on its data rate. |
| Switching Mode | Local | Store Forward, Cut Through | Store and Forward: forwarding takes place only after receipt of the complete frame. This technique buffers the incoming frame and checks it for errors; if no error is found, it forwards the frame to the outgoing port, otherwise it discards the frame. Cut Through: the switch forwards the incoming frame to the appropriate outgoing port immediately after receipt of the destination address of the frame. |
| VLAN Status* | Local | TRUE, FALSE | To enable/disable VLAN |

Table 3‑18: Description of the Datalink layer properties of a switch
## Open Shortest Path First (OSPF v2) Routing Protocol#
### OSPF Overview#
OSPF is a link-state routing protocol. It is designed to be run internal to a single Autonomous System. Each OSPF router maintains an identical database describing the Autonomous System's topology. From this database, a routing table is calculated by constructing a shortest-path tree.
OSPF routes IP packets based solely on the destination IP address found in the IP packet header. IP packets are routed "as is" -- they are not encapsulated in any further protocol headers as they transit the Autonomous System. OSPF is a dynamic routing protocol. In NetSim, OSPF can detect topological changes in the AS (such as router interface failures) and calculate new loop-free routes after a period of convergence.
Each router maintains a database describing the Autonomous System's topology. This database is referred to as the link-state database. Each participating router has an identical database. Each individual piece of this database is a particular router's local state (e.g., the router's usable interfaces and reachable neighbors). The router distributes its local state throughout the Autonomous System by flooding.
All routers run the exact same algorithm, in parallel. From the link-state database, each router constructs a tree of shortest paths with itself as root. This shortest-path tree gives the route to each destination in the Autonomous System. The cost of a route is described by a single dimensionless metric.
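A minimal sketch of this shortest-path-tree computation is shown below: plain Dijkstra over an adjacency matrix of link costs. The router count and all names are illustrative, not NetSim source code:

```c
#include <limits.h>

#define N 8                 /* number of routers in the area (assumed) */
#define INF INT_MAX

/* cost[u][v] is the link metric, or INF if no link. On return, dist[] and
 * prev[] together encode the shortest-path tree rooted at src. Costs are
 * assumed small enough that dist[u] + cost[u][v] cannot overflow. */
void spf(int cost[N][N], int src, int dist[N], int prev[N])
{
    int done[N] = {0};
    for (int i = 0; i < N; i++) { dist[i] = INF; prev[i] = -1; }
    dist[src] = 0;
    for (int it = 0; it < N; it++) {
        int u = -1;
        for (int i = 0; i < N; i++)       /* pick the closest unfinished node */
            if (!done[i] && dist[i] != INF && (u < 0 || dist[i] < dist[u]))
                u = i;
        if (u < 0) break;                 /* remaining nodes unreachable */
        done[u] = 1;
        for (int v = 0; v < N; v++)       /* relax all edges out of u */
            if (cost[u][v] != INF && dist[u] + cost[u][v] < dist[v]) {
                dist[v] = dist[u] + cost[u][v];
                prev[v] = u;
            }
    }
}
```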
### OSPF Features#
1. OSPF Messages – Hello, DD, LS Request, LS Update, LS Ack

2. Router LSA

3. The Neighbor Data structure features the following:

   • DB summary list

   • Inactivity timer

   • Routing table

   • Shortest path tree

4. The Interface Data structure features:

   • Neighbor router list

   • Flood timer

   • Update LS list

   • Network LS timer

   • Delayed ack list

5. The Protocol Data structure features:

   • Interface list

   • Area list

   • Max age removal timer

   • SPF timer

   • Routing table

6. The Area Data structure features:

   • Associated interface list

   • Router LSA list

   • Network LSA list

   • Router summary LSA list

   • Network summary LSA list

   • Max age list

   • Router LS timer

   • Shortest path list

7. The following can be logged during simulation:

   • Hello log

   • SPF log

   • Common log

   • Debug logs – LSDB, RXList, RLSA, RCVLSU, LSULIST, Route
### Excluded Features#
The following features in OSPF have not been implemented - Multiple Areas, Network LSA, Router summary LSA, Network summary LSA, Authentication, Equal cost multipath, External AS, External routing information, Interface type – Broadcast, NBMA, Virtual, Point to multi-point
### OSPF: GUI Parameters#
OSPF properties can be set by right-clicking on a Router --> Properties --> Application layer; see Figure 3‑11.
Figure 3‑11: Routing protocol properties of router
The properties that can be set are:
| Parameter | Type * | Range | Description |
|---|---|---|---|
| Version | Global | Fixed | OSPF Version 2 as per RFC 2328 for IPv4. |
| LSRefresh_Time (s) | Global | Fixed | The maximum time between distinct originations of any particular Link State Advertisement (LSA). If the link state age field of one of the router's self-originated LSAs reaches the value LSRefreshTime, a new instance of the LSA is originated, even though the contents of the LSA (apart from the LSA header) will be the same. The value of LSRefreshTime is set to 30 minutes. |
| LSA_Maxage (s) | Global | Fixed | The maximum age that an LSA can attain. When an LSA's LS age field reaches MaxAge, it is reflooded in an attempt to flush the LSA from the routing domain. LSAs of age MaxAge are not used in the routing table calculation. The default value of MaxAge is set to 1 hour (3600 s). |
| Increment_Age (s) | Global | 0 - 100 | An internal NetSim variable used for simulation purposes. It decides how often to increase the age of the LSAs in the OSPF LSA lists. A small value causes frequent updates and provides higher accuracy but may slow down the simulation, and vice versa for a large value. |
| Maxage_removal_Time (s) | Global | 0 - 9999 | Decides the time when the LSA is removed from the MaxAge LSA list. |
| MinLS_Interval (s) | Global | Fixed | The minimum time between distinct originations of any particular LSA. The value of MinLSInterval is set to 5 seconds. |
| SPFCalc_Delay (ms) | Global | 0 - 9999 | If an SPF calculation is triggered, the router waits for this duration before starting the calculation. This can be used for the router to take multiple updates into account. |
| Flood_Timer (ms) | Global | 0 - 9999 | The amount of time to wait before initializing the flood procedure. A random number between 0 and the set value is chosen. The flood timer on/off is per the ISSENDDELAYUPDATE variable setting. |
| Advertise_Self_Interface | Global | True/False | Reserved for future use. As of NetSim v12, this should always be true. It will be used when a point-to-multipoint link is connected to the interface; when such links are connected, this should be set to false. |
| Send_Delayed_Update | Global | True/False | Can be set to true to delay sending the LSU. If set to true, the delay is per the flooding timer; else the update is sent immediately. |
Table 3‑19: Description of Application layer Routing protocol properties
*Global – Changes in all devices of similar type. Local – Only changes in current device
## Transmission Control Protocol (TCP)#
### TCP overview#
TCP is a connection-oriented, end-to-end reliable protocol designed to fit into a layered hierarchy of protocols which support multi-network applications. TCP provides reliable communication between host computers connected to computer communication networks. Very few assumptions are made about the reliability of the communication protocols below the TCP layer. TCP assumes it can obtain a simple, potentially unreliable datagram service from the lower-level protocols. In principle, TCP should be able to operate above a wide spectrum of communication systems, ranging from wired to wireless to mobile communication.
The TCP fits into a layered protocol architecture just above a basic Internet Protocol which provides a way for the TCP to send and receive variable-length segments of information enclosed in IP packets. The IP packet provides a means for addressing source and destination TCPs in different networks. The IP protocol also deals with any fragmentation or reassembly of the TCP segments required to achieve transport and delivery through multiple networks and interconnecting gateways.
Figure 3‑12: Protocol layering (Application / TCP / IP / MAC / PHY)
### TCP Features#
The following features are implemented in TCP.
1. Three-way handshake (open/close)
2. Sequence Numbers
3. Slow start and congestion avoidance
4. Fast Retransmit/Fast Recovery
5. Selective Acknowledgement
### Congestion Control Algorithms in TCP#
The following congestion control algorithms are supported in NetSim.
1. Old Tahoe
2. Tahoe
3. Reno
4. New Reno
5. BIC
6. CUBIC
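The algorithms listed above all build on the basic slow-start and congestion-avoidance machinery. As a rough sketch (RFC 5681-style, heavily simplified, and not NetSim source code), the window update for the Tahoe/Reno family can be written as:

```c
/* Congestion window state, in units of MSS. Names are illustrative. */
typedef struct {
    double cwnd;        /* congestion window (MSS)    */
    double ssthresh;    /* slow-start threshold (MSS) */
} tcp_cc_t;

void on_ack(tcp_cc_t *c)
{
    if (c->cwnd < c->ssthresh)
        c->cwnd += 1.0;              /* slow start: +1 MSS per ACK       */
    else
        c->cwnd += 1.0 / c->cwnd;    /* congestion avoidance: +1 MSS/RTT */
}

void on_loss(tcp_cc_t *c, int triple_dupack)
{
    c->ssthresh = c->cwnd / 2.0;     /* multiplicative decrease          */
    if (c->ssthresh < 2.0) c->ssthresh = 2.0;
    if (triple_dupack)
        c->cwnd = c->ssthresh;       /* Reno: fast recovery              */
    else
        c->cwnd = 1.0;               /* timeout: back to slow start      */
}
```

BIC and CUBIC replace the linear congestion-avoidance growth above with binary-search and cubic window-growth functions respectively.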
### Limitations of TCP#
1. Send and Receive buffers are infinite
### TCP: GUI parameters#
The TCP parameters can be accessed by right clicking on a node and selecting Properties -> Transport Layer
Figure 3‑13: Transport layer protocol properties of wired node
The properties that can be set are:
| Parameter | Type * | Range | Description |
|---|---|---|---|
| Congestion Control Algorithm | Local | OLD TAHOE, TAHOE, RENO, NEW RENO, BIC, CUBIC | The congestion control algorithm used to control network congestion. Old Tahoe is the combination of the slow start and congestion avoidance algorithms. Tahoe is Old Tahoe operating with fast retransmit: on receiving three duplicate ACKs, which indicate segment loss, the segment is retransmitted immediately without waiting for a timeout. Reno implements fast recovery in case of three duplicate acknowledgements. New Reno improves retransmission during the fast-recovery phase of TCP Reno. BIC tries to find the maximum window size at which to keep the window for a long period of time, using a binary search algorithm. CUBIC is an implementation of TCP with an optimized congestion control algorithm for high bandwidth networks with high latency. |
| Congestion plot enabled | Local | FALSE, TRUE | The congestion window plot can be enabled or disabled by selecting TRUE or FALSE. |
| Max SYN Retries | Local | 1-10 | Maximum number of TCP SYN ACK packets that can be retransmitted. The value should be in the range of 1 to 10. |
| Acknowledgement Type | Local | Delayed, Undelayed | If set to Delayed, the ACK response is delayed, improving network performance. If set to Undelayed, the ACK is sent immediately without delay. |
| MSS (bytes) | Local | 64-1460 | The maximum amount of data that a single message may contain. The MSS is the maximum data size and does not include the size of the headers: MSS = MTU – (Network and Transport layer protocol headers). |
| Initial SSThreshold (bytes) | Local | 5840-65535 | The initial slow-start threshold, in the range 5840 to 65535 bytes. |
| Time Wait Timer (s) | Local | 30-240 | The Time Wait timer default value is 120 seconds. The purpose of TIME-WAIT is to prevent delayed packets from one connection being accepted by a later connection. |
| Selective ACK | Local | TRUE, FALSE | In the Selective Acknowledgment (SACK) mechanism, the receiving TCP sends back SACK packets informing the sender of data that has been received. The sender can then retransmit only the missing data segments. |
| Window Scaling | Local | TRUE, FALSE | The TCP window scaling option increases the receive window size allowed in TCP above its former maximum value of 65,535 bytes. |
| Sack Permitted | Local | TRUE, FALSE | The SACK-permitted option is offered to the remote end during TCP setup as an option in an opening SYN packet. The SACK option permits selective acknowledgment of permitted data. |
| Timestamp Option | Local | TRUE, FALSE | TCP is a symmetric protocol, allowing data to be sent at any time in either direction; therefore, timestamp echoing may occur in either direction. For simplicity and symmetry, timestamps are always sent and echoed in both directions. For efficiency, the timestamp and timestamp reply fields are combined into a single TCP Timestamps Option. |
Table 3‑20: Description of Transport layer protocol properties
### TCP Performance Metrics#
The TCP Metrics table will be available in the Simulation Results dashboard if TCP is enabled in at least one device in the network. It provides the following information specific to TCP.
| Parameter | Description |
|---|---|
| Source | The name and ID of the source device which generates TCP packets |
| Destination | The name and ID of the destination device which receives TCP packets |
| Local Address | The local IP address and port number of the device in the Source column |
| Remote Address | The remote IP address and port number for the source and destination |
| Syn Sent | The number of SYN packets sent by the source |
| Syn-Ack Sent | The number of SYN-ACK packets sent by the destination |
| Segment Sent | The number of segments sent by a source |
| Segment Received | The number of segments received by a destination |
| Segment Retransmitted | The number of segments retransmitted by the source |
| Ack Sent | The number of acknowledgements sent by a source to the destination in response to TCP SYN-ACK, and the number of ACKs sent by the destination to the source in response to the successful reception of data packets |
| Ack Received | The number of acknowledgements received by the source in response to data packets, and the number of ACKs received by the destination in response to SYN-ACK packets |
| Duplicate segment received | The number of duplicate segments received by the destination |
| Out of order segment received | The number of out-of-order packets received by the destination |
| Duplicate ack received | The number of duplicate acknowledgements received by the source |
| Times RTO expired | The number of times the RTO timer expired at the source |

Table 3‑21: Parameter description of the TCP Metrics table
### TCP Reference Documents#
1. RFC 793: TRANSMISSION CONTROL PROTOCOL
2. RFC 1122: Requirements for Internet Hosts -- Communication Layers
3. RFC 5681: TCP Congestion Control
4. RFC 3390: Increasing TCP's Initial Window
5. RFC 6298: Computing TCP's Retransmission Timer
6. RFC 2018: TCP Selective Acknowledgment Options
7. RFC 6582: The NewReno Modification to TCP's Fast Recovery Algorithm
8. RFC 6675: A Conservative Loss Recovery Algorithm Based on Selective Acknowledgment (SACK) for TCP
9. RFC 7323: TCP Extensions for High Performance
10. https://research.csc.ncsu.edu/netsrv/sites/default/files/cubic_a_new_tcp_2008.pdf
11. https://research.csc.ncsu.edu/netsrv/sites/default/files/bitcp.pdf
12. https://research.csc.ncsu.edu/netsrv/sites/default/files/hystart_techreport_2008.pdf
## User Datagram Protocol (UDP)#
### UDP Overview#
UDP (User Datagram Protocol) is a communication protocol that offers a limited amount of service when messages are exchanged between computers in a network that uses the Internet Protocol (IP). UDP uses the Internet Protocol to get a data unit (called a datagram) from one computer to another.
This protocol is transaction oriented, and delivery and duplicate protection are not guaranteed. Applications requiring ordered reliable delivery of streams of data should use the Transmission Control Protocol (TCP).
### UDP: GUI parameters#
The UDP protocol can be set for an application by clicking on the application's Transport Protocol option, as shown in Figure 3‑14.
Figure 3‑14: Application configuration window
### UDP Performance Metrics#
The UDP Metrics table will be available in the Simulation Results dashboard if UDP is enabled in at least one device in the network. It provides the following information specific to UDP; see Table 3‑22.
| Parameter | Description |
|---|---|
| Device Id | The Id of a device in which UDP is enabled |
| Local Address | The IP address and port number of the local device (either source or destination) |
| Foreign Address | The IP address and port number of the remote device (either source or destination) |
| Datagram sent | The total number of datagrams sent from the source |
| Datagram received | The total number of datagrams received at the destination |

Table 3‑22: Parameter description of the UDP Metrics table
### UDP Reference Documents#
1. RFC 768: User Datagram Protocol
## IP Protocol#
### IP Performance Metrics#
The IP Metrics table will be available in the Simulation Results dashboard if IP is enabled in at least one device in the network. It provides the following information specific to the IP protocol:
| Parameter | Description |
|---|---|
| Device_Id | The Ids of the Layer 3 devices |
| Packet sent | The number of packets sent by the source and intermediate devices (Router or L3 switch) |
| Packet forwarded | The number of packets forwarded by intermediate devices (Router or L3 switch) |
| Packet received | The number of data packets received by the destination and intermediate devices (routing packets (OSPF, RIP, etc.) received by Routers) |
| Packet discarded | The number of data packets discarded after their TTL value expires |
| TTL expired | Time-to-live (TTL) is a value in an Internet Protocol (IP) packet that tells a network router whether the packet has been in the network too long and should be discarded |
| Firewall blocked | The number of packets blocked by the firewall at routers |

Table 3‑23: Parameter description of the IP Metrics table
## Buffering, Queueing and Scheduling#
### Buffers#
Devices and their Interfaces with buffers that support queuing and scheduling algorithms are:
1. Router (WAN – Network Layer)
2. EPC (WAN – Network Layer)
3. 6LOWPAN (WAN – Network Layer)
4. Satellite Gateway (WAN – Network Layer)
Queuing and scheduling in NetSim, works as follows:
1. The scheduler schedules packet transmission from the head of the queue per the scheduling algorithm. The FIFO algorithm uses a single queue, while Priority, RR and WFQ use 4 queues (1 queue for each priority).
2. The buffer size is a user input. This buffer is not split among the various queues. At any point in time the cumulative size of all queues is the buffer fill.
3. The way in which the individual queues are filled up is per the queuing algorithm selected (implemented in version 12.1).

The buffer is an egress buffer. The buffer size in Mega Bytes (MB) for each interface mentioned above is a user input. The options are 8, 16, 32, 64, 128, 256, 512, 1024, 2048 and 4096 MB.
### Queuing#
Drop Tail: The queue is filled up to the buffer capacity. When the queue is full, any arriving packet is dropped. The buffer size is a user input.
Random Early Detection (RED):
1. The queue is filled until the average queue size equals the minimum threshold, without dropping any packet.

2. Packets are dropped randomly when the average queue size is between the minimum and maximum thresholds. The number of packets dropped depends on the Max Probability value.

3. All packets are dropped when the average queue size is above the maximum threshold.

User inputs: maximum threshold, minimum threshold and maximum probability.
$$Avg = \frac{t_{n}}{t_{n + 1}}\left( Avg - x_{n} \right) + x_{n}$$

where

• $Avg$ – average queue size ($Avg$ is initially 0)

• $t_{n}$ – time when the nth packet was added to the queue

• $t_{n + 1}$ – current time, i.e., the time when the (n+1)th packet is added

• $x_{n}$ – size of the nth packet (B)
Packets are dropped if

$$\text{No of Dropped Packets} > \frac{\text{Rand}(0,1)}{p}, \quad \text{where } p = C_{1} \times Avg - C_{2}$$

$$C_{1} = \frac{\text{Max Probability}}{\text{Max Threshold} - \text{Min Threshold}}$$

$$C_{2} = \frac{\text{Max Probability}}{\text{Max Threshold} - \text{Min Threshold}} \times \text{Min Threshold}$$

so that $p$ rises linearly from 0 at the minimum threshold to Max Probability at the maximum threshold.
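A compact sketch of this drop decision using the same linearisation (illustrative only; NetSim's actual implementation may differ):

```c
#include <stdlib.h>

typedef struct {
    double min_th, max_th, max_p;   /* RED user inputs */
} red_params_t;

/* Returns 1 if an arriving packet should be dropped, given the current
 * average queue size. */
int red_drop(const red_params_t *r, double avg_queue)
{
    if (avg_queue < r->min_th)
        return 0;                               /* below min threshold: enqueue */
    if (avg_queue >= r->max_th)
        return 1;                               /* above max threshold: drop    */
    double c1 = r->max_p / (r->max_th - r->min_th);
    double p  = c1 * (avg_queue - r->min_th);   /* p = C1*Avg - C2              */
    return ((double)rand() / RAND_MAX) < p;     /* probabilistic early drop     */
}
```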
Weighted Random Early Detection (WRED):
Please refer to RED, explained earlier. WRED modifies it as follows:

1. There are different Max and Min threshold values for each priority, i.e., High, Medium, Normal and Low (the RED algorithm had only one set of Max and Min thresholds).

2. For the given threshold values, the Random Early Detection (RED) algorithm is applied.
Reference Documents
1. Sally Floyd, Van Jacobson (1993). Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking.
Queue Size: The queue depth can be obtained from the Event Trace or by modifying the protocol source code. To obtain it from the event trace, an MS Excel script would need to be written to filter by node, and at different points of time, add the number of APP-OUT events and subtract the number of TRANSPORT-OUT events. Note that deeper issues such as segmentation etc. will need to be handled appropriately based on the way the application and transport layer interact.
### Scheduling#
First In First Out (FIFO): Packets are scheduled according to their arrival time in the queue; the first packet into the queue is scheduled first.
Priority: NetSim supports 4 priority queues namely High, Medium, Normal and Low. With this scheduling, first all packets in the High priority queue are served, and then those in Medium, then in normal and finally those packets in the low priority queue. Note that this could lead to situations where only higher priority packets are served and lower priority packets are never served.
Round Robin: Packets from all 4 priorities are served in circular order. When packets arrive, they are stored in the corresponding priority list.

Weighted Fair Queuing (WFQ): When packets arrive, they are stored in the corresponding list according to priority. The queue whose priority list currently has the maximum weight is served. In NetSim, WFQ is approximated as:
Weight = (Number of packets in Queue) × Priority where
Priority = 1, 2, 3 or 4
1 - Low priority, 2 - Normal, 3 – Medium, 4 - High
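A sketch of this weight-based selection (queue indices and names are illustrative, not NetSim source code):

```c
#define N_QUEUES 4  /* index 0=Low, 1=Normal, 2=Medium, 3=High */

/* Returns the index of the queue to serve next, or -1 if all are empty.
 * Weight = (number of packets in queue) x priority, with priority 1..4. */
int wfq_pick(const int qlen[N_QUEUES])
{
    int best = -1, best_w = 0;
    for (int i = 0; i < N_QUEUES; i++) {
        int w = qlen[i] * (i + 1);
        if (w > best_w) { best_w = w; best = i; }
    }
    return best;
}
```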
Earliest Deadline First (EDF): Packets are added to the queue as they arrive. While dequeuing, the packets with the earliest deadline are served first. Packets which have exceeded their deadline are dropped.

Deadline = Packet Creation Time + Max Latency

The Max Latency, with respect to the quality of service (QoS) class of the packet, is a user input.
### Error Model in Wired Links#

The error rates in NetSim wired links are based on a standard error measurement unit called the BER, or Bit Error Rate. The BER represents the ratio of errored bits to total bits.

The BER value can be set by the user. A typical BER value, say 1 × 10⁻⁶ (which equals 0.000001), means that 1 bit is in error for every one million bits transmitted. It is important to note that the Bit Error Rate is NOT equal to the Packet Error Rate (PER).
$$PER = 1 - (1 - BER)^{L}$$

where L is the packet length in bits. For BER values less than 0.001, this is mathematically approximated in NetSim as

$$PER \approx BER \times L$$
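A small sketch combining the exact formula and the small-BER approximation (illustrative, not NetSim source code):

```c
#include <math.h>

/* Packet Error Rate from Bit Error Rate and packet length L (in bits). */
double packet_error_rate(double ber, int packet_len_bits)
{
    if (ber < 0.001)
        return ber * packet_len_bits;                  /* PER ~= BER * L */
    return 1.0 - pow(1.0 - ber, packet_len_bits);      /* exact formula  */
}
```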
# Internetworks Experiments in NetSim#
Apart from examples, in-built experiments are also available in NetSim. Examples help the user understand the working of features in NetSim. Experiments are designed to help the user (usually students) learn networking concepts through simulation. The experiments contain objective, theory, set-up, results, and inference. The following experiments are available in the Experiments manual (pdf file).
1. Understand Measures of Network Performance: Throughput and Delay
2. Throughput and Bottleneck Server Analysis
3. Delay and Little’s Law
4. Understand working of ARP, and IP Forwarding within a LAN and across a router
5. Simulate and study the spanning tree protocol.
6. Introduction to TCP connection management
7. Reliable data transfer with TCP
8. Mathematical Modelling of TCP Throughput Performance
9. Study how throughput and error of a Wireless LAN network changes as the distance between the Access Point and the wireless nodes is varied.
12. TCP Congestion Control Algorithms
13. Multi-AP Wi-Fi Networks: Channel Allocation
14. Study the working and routing table formation of Interior routing protocols, i.e. Routing Information Protocol (RIP) and Open Shortest Path First (OSPF)
15. M/D/1 and M/G/1 Queues
16. Wi-Fi Multimedia Extension (IEEE 802.11 EDCA)
17. Understand the working of OSPF.
18. Understand the events involved in NetSim DES (Discrete Event Simulator) in simulating the flow of one packet from a Wired node to a Wireless node.
19. Understand the working of TCP BIC Congestion control algorithm, simulate and plot the TCP congestion window.
# Reference Documents#
1. IEEE 802.3 standard for Ethernet
2. IEEE 802.11 standards for Wireless LAN
3. RFCs 777, 760, 792 for Internet Control Message Protocol
4. IENs 108, 128 for Internet Control Message Protocol
5. RFC 2328 for Open Shortest Path First (OSPF)
# Latest FAQs#
Up to date FAQs on NetSim’s Internetworks library is available at
https://tetcos.freshdesk.com/support/solutions/folders/14000108665
https://tetcos.freshdesk.com/support/solutions/folders/14000113123
https://tetcos.freshdesk.com/support/solutions/folders/14000119396
¹ In other words, the instantaneous PER is used in a Bernoulli trial to decide whether the current packet is successfully received or not.
# TU Wien:Diskrete Mathematik für Informatik VU (Drmota)/Prüfung 2020-12-11
### Let S be a compact orientable surface with genus >= k. State the Euler characteristic of S. (2 Points)
${\displaystyle \chi (S)=2-2k}$
### Let G be a connected graph that is embedded on S. State the formula relating the number of vertices, faces, and edges of G. (2 Points)
${\displaystyle \alpha _{0}(G)-\alpha _{1}(G)+\alpha _{2}(G)=\chi (S)}$ where ${\displaystyle \alpha _{0}=|V|,\ \alpha _{1}=|E|,\ \alpha _{2}=|F|}$
### Suppose that G has n faces, and each face is bounded by exactly 4 edges. Calculate the number of vertices and edges of G. (5 Points)
1. Each of the n faces is bounded by 4 edges and each edge borders exactly 2 faces, so #edges = 4n/2 = 2 × #faces = 2n
2. #vertices = #edges - #faces + ${\displaystyle \chi (S)}$ = 2n - n + (2 - 2k) = n + 2 - 2k
Jonathan Baxter ([email protected])
4616 Henry Street, Pittsburgh, PA 15213
Peter L. Bartlett ([email protected])
###### Abstract
Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura et al. (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter $\beta \in [0, 1)$ (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter $\beta$ is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper [6] we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward.
## 1 Introduction
Dynamic Programming is the method of choice for solving problems of decision making under uncertainty [9]. However, the application of Dynamic Programming becomes problematic in large or infinite state-spaces, in situations where the system dynamics are unknown, or when the state is only partially observed. In such cases one looks for approximate techniques that rely on simulation, rather than an explicit model, and parametric representations of either the value-function or the policy, rather than exact representations.
Simulation-based methods that rely on a parametric form of the value function tend to go by the name “Reinforcement Learning,” and have been extensively studied in the Machine Learning literature [8, 25]. This approach has yielded some remarkable empirical successes in a number of different domains, including learning to play checkers [20], backgammon [27, 28], and chess [7], job-shop scheduling [30] and dynamic channel allocation [22].
Despite this success, most algorithms for training approximate value functions suffer from the same theoretical flaw: the performance of the greedy policy derived from the approximate value-function is not guaranteed to improve on each iteration, and in fact can be worse than the old policy by an amount equal to the maximum approximation error over all states. This can happen even when the parametric class contains a value function whose corresponding greedy policy is optimal. We illustrate this with a concrete and very simple example in Appendix A.
An alternative approach that circumvents this problem—the approach we pursue here—is to consider a class of stochastic policies parameterized by $\theta\in\mathbb{R}^K$, compute the gradient with respect to $\theta$ of the average reward, and then improve the policy by adjusting the parameters in the gradient direction. Note that the policy could be directly parameterized, or it could be generated indirectly from a value function. In the latter case the value-function parameters are the parameters of the policy, but instead of being adjusted to minimize error between the approximate and true value function, the parameters are adjusted to directly improve the performance of the policy generated by the value function.
These “policy-gradient” algorithms have a long history in Operations Research, Statistics, Control Theory, Discrete Event Systems and Machine Learning. Before describing the contribution of the present paper, it seems appropriate to introduce some background material explaining this approach. Readers already familiar with this material may want to skip directly to section 1.2, where the contributions of the present paper are described.
### 1.1 A Brief History of Policy-Gradient Algorithms
For large-scale problems or problems where the system dynamics are unknown, the performance gradient will not be computable in closed form (see equation (17) for a closed-form expression for the performance gradient). Thus the challenging aspect of the policy-gradient approach is to find an algorithm for estimating the gradient via simulation. Naively, the gradient can be calculated numerically by adjusting each parameter in turn and estimating the effect on performance via simulation (the so-called crude Monte-Carlo technique), but that will be prohibitively inefficient for most problems. Somewhat surprisingly, under mild regularity conditions, it turns out that the full gradient can be estimated from a single simulation of the system. The technique is called the score function or likelihood ratio method and appears to have been first proposed in the sixties [2, 17] for computing performance gradients in i.i.d. (independently and identically distributed) processes.
Specifically, suppose $r(X)$ is a performance function that depends on some random variable $X$, and $q(\theta,x)$ is the probability that $X=x$, parameterized by $\theta\in\mathbb{R}^K$. Under mild regularity conditions, the gradient with respect to $\theta$ of the expected performance,
$$\eta(\theta)=\mathbf{E}\,r(X), \qquad (1)$$
may be written
$$\nabla\eta(\theta)=\mathbf{E}\left[r(X)\,\frac{\nabla q(\theta,X)}{q(\theta,X)}\right]. \qquad (2)$$
To see this, rewrite (1) as a sum
$$\eta(\theta)=\sum_x r(x)\,q(\theta,x),$$
differentiate (one source of the requirement of “mild regularity conditions”) to obtain
$$\nabla\eta(\theta)=\sum_x r(x)\,\nabla q(\theta,x),$$
rewrite as
$$\nabla\eta(\theta)=\sum_x r(x)\,\frac{\nabla q(\theta,x)}{q(\theta,x)}\,q(\theta,x),$$
and observe that this formula is equivalent to (2).
If a simulator is available to generate samples distributed according to $q(\theta,\cdot)$, then any i.i.d. sequence $X_1,X_2,\dots,X_N$ drawn from $q(\theta,\cdot)$ gives an unbiased estimate,
$$\hat{\nabla}\eta(\theta)=\frac{1}{N}\sum_{i=1}^{N}r(X_i)\,\frac{\nabla q(\theta,X_i)}{q(\theta,X_i)}, \qquad (3)$$
of $\nabla\eta(\theta)$. By the law of large numbers, $\hat{\nabla}\eta(\theta)\to\nabla\eta(\theta)$ with probability one. The quantity $\nabla q(\theta,X)/q(\theta,X)$ is known as the likelihood ratio or score function in classical statistics. If the performance function also depends on $\theta$, then $r(X)\nabla q(\theta,X)/q(\theta,X)$ is replaced by $r(\theta,X)\nabla q(\theta,X)/q(\theta,X)+\nabla r(\theta,X)$ in (2).
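As a concrete illustration (our own sketch, not from the paper), the estimate (3) takes only a few lines of code when $q(\theta,\cdot)$ is, say, a softmax distribution over finitely many outcomes, for which the score function has a simple closed form:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = np.array([0.5, -0.2, 0.1])   # parameters of q(theta, .)
r = np.array([1.0, 3.0, 0.0])        # performance r(x) for x = 0, 1, 2

def q(theta):
    """Softmax probabilities q(theta, x)."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

def score(theta, x):
    """Likelihood ratio grad q(theta, x) / q(theta, x), i.e. grad log q(theta, x)."""
    g = -q(theta)
    g[x] += 1.0
    return g

# Monte-Carlo estimate (3) from N i.i.d. samples of X ~ q(theta, .)
N = 100_000
xs = rng.choice(len(theta), size=N, p=q(theta))
grad_est = np.mean([r[x] * score(theta, x) for x in xs], axis=0)

# Closed-form gradient of eta(theta) = sum_x r(x) q(theta, x) for comparison:
# for a softmax, d eta / d theta_k = q_k (r_k - eta).
p = q(theta)
grad_exact = p * (r - p @ r)
print(grad_est, grad_exact)   # the two agree up to Monte-Carlo noise
```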
#### 1.1.1 Unbiased Estimates of the Performance Gradient for Regenerative Processes
Extensions of the likelihood-ratio method to regenerative processes (including Markov Decision Processes or MDPs) were given by \citeAglynn86,glynn90,glynn95 and \citeAreiman86,reiman89, and independently for episodic Partially Observable Markov Decision Processes (POMDPs) by \citeAwilliams92, who introduced the REINFORCE algorithm (a thresholded version of these algorithms for neuron-like elements was described earlier in \citeABarSutAnd83). Here the i.i.d. samples of the previous section are sequences of states $X_0,\dots,X_T$ (of random length) encountered between visits to some designated recurrent state $i^*$, or sequences of states from some start state to a goal state. In this case $\nabla q(\theta,X)/q(\theta,X)$ can be written as a sum
$$\frac{\nabla q(\theta,X)}{q(\theta,X)}=\sum_{t=0}^{T-1}\frac{\nabla p_{X_t X_{t+1}}(\theta)}{p_{X_t X_{t+1}}(\theta)}, \qquad (4)$$
where $p_{ij}(\theta)$ is the transition probability from state $i$ to state $j$ given parameters $\theta$. Equation (4) admits a recursive computation over the course of a regenerative cycle of the form $z_0=0$, and after each state transition $X_t\to X_{t+1}$,
$$z_{t+1}=z_t+\frac{\nabla p_{X_t X_{t+1}}(\theta)}{p_{X_t X_{t+1}}(\theta)}, \qquad (5)$$
so that each term in the estimate (3) is of the form $r(X)\,z_T$ (the vector $z_t$ is known in reinforcement learning as an eligibility trace; this terminology is used in \citeABarSutAnd83). If, in addition, $r(X)$ can be recursively computed by
$$r(X_0,\dots,X_{t+1})=\phi\left(r(X_0,\dots,X_t),X_{t+1}\right)$$
for some function $\phi$, then the estimate for each cycle can be computed using storage of only $K+1$ parameters ($K$ for $z_t$ and $1$ parameter to update the performance function $r$). Hence, the entire estimate (3) can be computed with storage of only $2K+1$ real parameters, as follows.
##### Algorithm 1.1: Policy-Gradient Algorithm for Regenerative Processes.
1. Set $j=0$, $r_0=0$, $z_0=0$, and $\Delta_0=0$ ($z_0,\Delta_0\in\mathbb{R}^K$).
2. For each state transition $X_t\to X_{t+1}$:
• If the episode is finished (that is, $X_{t+1}=i^*$), set
$\Delta_{j+1}=\Delta_j+r_t\,z_t$,
$j=j+1$,
$z_{t+1}=0$,
$r_{t+1}=0$.
• Otherwise, set
$z_{t+1}=z_t+\nabla p_{X_t X_{t+1}}(\theta)/p_{X_t X_{t+1}}(\theta)$ and $r_{t+1}=\phi(r_t,X_{t+1})$.
3. If $j=N$, return $\Delta_N/N$; otherwise goto 2.
Examples of recursive performance functions include: the sum of a scalar reward over a cycle, $r(X_0,\dots,X_T)=\sum_{t=0}^{T}r(X_t)$, where $r(i)$ is a scalar reward associated with state $i$ (this corresponds to $\eta(\theta)$ being the average reward multiplied by the expected recurrence time $\mathbf{E}T$); the negative length of the cycle (which can be implemented by assigning a reward of $-1$ to each state, and is used when the task is to minimize time taken to get to a goal state, since in this case $r(X)$ is just $-T$); the discounted reward from the start state, $r(X_0,\dots,X_T)=\sum_{t=0}^{T}\beta^t r(X_t)$, where $\beta\in[0,1)$ is the discount factor, and so on.
As \citeAwilliams92 pointed out, a further simplification is possible in the case that $r(X)$ is a sum of scalar rewards $r(X_t,t)$ depending on the state and possibly the time $t$ since the starting state (such as $r(X_t)$, or $\beta^t r(X_t)$ as above). In that case, the update from a single regenerative cycle may be written as
$$\Delta=\sum_{t=0}^{T-1}\frac{\nabla p_{X_t X_{t+1}}(\theta)}{p_{X_t X_{t+1}}(\theta)}\left[\sum_{s=0}^{t}r(X_s,s)+\sum_{s=t+1}^{T}r(X_s,s)\right].$$
Because changes in $p_{X_t X_{t+1}}(\theta)$ have no influence on the rewards $r(X_s,s)$ associated with earlier states ($s\le t$), we should be able to drop the first term in the parentheses on the right-hand-side and write
$$\Delta=\sum_{t=0}^{T-1}\frac{\nabla p_{X_t X_{t+1}}(\theta)}{p_{X_t X_{t+1}}(\theta)}\sum_{s=t+1}^{T}r(X_s,s). \qquad (6)$$
Although the proof is not entirely trivial, this intuition can indeed be shown to be correct.
Equation (6) allows an even simpler recursive formula for estimating the performance gradient. Set $z_0=0$, and introduce a new variable $\Delta_t\in\mathbb{R}^K$ with $\Delta_0=0$. As before, set $z_{t+1}=z_t+\nabla p_{X_t X_{t+1}}(\theta)/p_{X_t X_{t+1}}(\theta)$ if $X_{t+1}\ne i^*$, or $z_{t+1}=0$ otherwise. But now, on each iteration, set $\Delta_{t+1}=\Delta_t+r(X_{t+1})\,z_{t+1}$. Then $\Delta_T/T$ is our estimate of the gradient direction. Since $\Delta_t$ is updated on every iteration, this suggests that we can do away with $\Delta_t$ altogether and simply update the parameters directly: $\theta_{t+1}=\theta_t+\gamma_t\,r(X_{t+1})\,z_{t+1}$, where the $\gamma_t$ are suitable step-sizes (the usual requirements on $\gamma_t$ for convergence of a stochastic gradient algorithm are $\gamma_t>0$, $\sum_t\gamma_t=\infty$, and $\sum_t\gamma_t^2<\infty$). Proving convergence of such an algorithm is not as straightforward as normal stochastic gradient algorithms because the updates are not in the gradient direction (in expectation), although the sum of these updates over a regenerative cycle are. \citeAmarbach98 provide the only convergence proof that we know of, albeit for a slightly different update of the form $\theta_{t+1}=\theta_t+\gamma_t\left[r(X_{t+1})-\hat{\eta}_t\right]z_{t+1}$, where $\hat{\eta}_t$ is a moving estimate of the expected performance, and is also updated on-line (this update was first suggested in the context of POMDPs by \shortciteAjaakola95).
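In code, the fully on-line variant just described might look as follows (our own sketch; the environment interface and names are placeholders, not the paper's pseudocode):

```python
import numpy as np

def online_policy_gradient(env, theta, grad_log_p, gammas, i_star, T):
    """On-line eligibility-trace update sketched above.

    env.step(theta) -> (next_state, reward_at_next_state)  [hypothetical interface]
    grad_log_p(theta, x, y) -> grad p_xy(theta) / p_xy(theta)
    gammas -> iterable of step sizes gamma_t
    i_star -> designated recurrent state at which the trace is zeroed
    """
    z = np.zeros_like(theta)
    x = i_star
    for _, gamma in zip(range(T), gammas):
        y, reward = env.step(theta)
        if y == i_star:
            z = np.zeros_like(theta)              # zero the trace on recurrence
        else:
            z = z + grad_log_p(theta, x, y)       # accumulate likelihood ratios
        theta = theta + gamma * reward * z        # noisy step in the gradient direction
        x = y
    return theta
```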
\citeAmarbach98 also considered the case of $\theta$-dependent rewards (recall the discussion after (3)), as did \citeAbaird98 with their “VAPS” algorithm (Value And Policy Search). This last paper contains an interesting insight: through suitable choices of the performance function, one can combine policy-gradient search with approximate value function methods. The resulting algorithms can be viewed as actor-critic techniques in the spirit of \citeABarSutAnd83; the policy is the actor and the value function is the critic. The primary motivation is to reduce variance in the policy-gradient estimates. Experimental evidence for this phenomenon has been presented by a number of authors, including \citeABarSutAnd83, \citeAkimura98a, and \citeAbaird98. More recent work on this subject includes that of \shortciteAsutton99 and \shortciteAkonda99. We discuss the use of VAPS-style updates further in Section 6.2.
So far we have not addressed the question of how the parameterized state-transition probabilities $p_{ij}(\theta)$ arise. Of course, they could simply be generated by parameterizing the matrix of transition probabilities directly. Alternatively, in the case of MDPs or POMDPs, state transitions are typically generated by feeding an observation $Y_t$ that depends stochastically on the state $X_t$ into a parameterized stochastic policy, which selects a control $U_t$ at random from a set of available controls (approximate value-function based approaches that generate controls stochastically via some form of lookahead also fall into this category). The distribution over successor states is then a fixed function of the control. If we denote the probability of control $u$ given parameters $\theta$ and observation $y$ by $\mu_u(\theta,y)$, then all of the above discussion carries through with $\nabla p_{X_t X_{t+1}}(\theta)/p_{X_t X_{t+1}}(\theta)$ replaced by $\nabla\mu_{U_t}(\theta,Y_t)/\mu_{U_t}(\theta,Y_t)$. In that case, Algorithm 1.1 is precisely Williams’ REINFORCE algorithm.
Algorithm 1.1 and the variants above have been extended to cover multiple agents \shortcitepeshkin00, policies with internal state \shortcitemeuleau99, and importance sampling methods \shortcitemeuleau00. We also refer the reader to the work of \citeArubinstein93 and \citeArubinstein98 for in-depth analysis of the application of the likelihood-ratio method to Discrete-Event Systems (DESs), in particular networks of queues. Also worth mentioning is the large literature on Infinitesimal Perturbation Analysis (IPA), which seeks a similar goal of estimating performance gradients, but operates under more restrictive assumptions than the likelihood-ratio approach; see, for example, \citeAho91.
#### 1.1.2 Biased Estimates of the Performance Gradient
All the algorithms described in the previous section rely on an identifiable recurrent state $i^*$, either to update the gradient estimate, or in the case of the on-line algorithm, to zero the eligibility trace $z_t$. This reliance on a recurrent state can be problematic for two main reasons:
1. The variance of the algorithms is related to the recurrence time between visits to $i^*$, which will typically grow as the state space grows. Furthermore, the time between visits depends on the parameters of the policy, and states that are frequently visited for the initial value of the parameters may become very rare as performance improves.
2. In situations of partial observability it may be difficult to estimate the underlying states, and therefore to determine when the gradient estimate should be updated, or the eligibility trace zeroed.
If the system is available only through simulation, it seems difficult (if not impossible) to obtain unbiased estimates of the gradient direction without access to a recurrent state. Thus, to solve 1 and 2, we must look to biased estimates. Two principal techniques for introducing bias have been proposed, both of which may be viewed as artificial truncations of the eligibility trace $z_t$. (For ease of exposition, we keep the expression for $z_t$ in terms of the likelihood ratios $\nabla p_{X_s X_{s+1}}(\theta)/p_{X_s X_{s+1}}(\theta)$, which rely on the availability of the underlying state $X_s$. If $X_s$ is not available, the ratio should be replaced with $\nabla\mu_{U_s}(\theta,Y_s)/\mu_{U_s}(\theta,Y_s)$.) The first method takes as a starting point the formula for the eligibility trace at time $t$:
$$z_t=\sum_{s=0}^{t-1}\frac{\nabla p_{X_s X_{s+1}}(\theta)}{p_{X_s X_{s+1}}(\theta)}$$
and simply truncates it at some (fixed, not random) number $n$ of terms looking backwards [13, 18, 19, 11]:
$$z_t(n):=\sum_{s=t-n}^{t-1}\frac{\nabla p_{X_s X_{s+1}}(\theta)}{p_{X_s X_{s+1}}(\theta)}. \qquad (7)$$
The eligibility trace is then updated after each transition by
$$z_{t+1}(n)=z_t(n)+\frac{\nabla p_{X_t X_{t+1}}(\theta)}{p_{X_t X_{t+1}}(\theta)}-\frac{\nabla p_{X_{t-n} X_{t-n+1}}(\theta)}{p_{X_{t-n} X_{t-n+1}}(\theta)}, \qquad (8)$$
and in the case of state-based rewards $r(X_t)$, the estimated gradient direction after $T$ steps is
$$\hat{\nabla}_n\eta(\theta):=\frac{1}{T-n+1}\sum_{t=n}^{T}z_t(n)\,r(X_t). \qquad (9)$$
Unless $n$ exceeds the maximum recurrence time (which is infinite in an ergodic Markov chain), $\hat{\nabla}_n\eta(\theta)$ is a biased estimate of the gradient direction, although as $n\to\infty$ the bias approaches zero. However, the variance of $\hat{\nabla}_n\eta(\theta)$ diverges in the limit of large $n$. This illustrates a natural trade-off in the selection of the parameter $n$: it should be large enough to ensure the bias is acceptable (the expectation of $\hat{\nabla}_n\eta(\theta)$ should at least be within $90°$ of the true gradient direction), but not so large that the variance is prohibitive. Experimental results by \citeAcao98 nicely illustrate this bias/variance trade-off.
One potential difficulty with this method is that the likelihood ratios must be remembered for the previous $n$ time steps, requiring storage of $Kn$ parameters. Thus, to obtain small bias, the memory may have to grow without bound. An alternative approach that requires a fixed amount of memory is to discount the eligibility trace, rather than truncating it:
$$z_{t+1}(\beta):=\beta z_t(\beta)+\frac{\nabla p_{X_t X_{t+1}}(\theta)}{p_{X_t X_{t+1}}(\theta)}, \qquad (10)$$
where $z_0(\beta)=0$ and $\beta\in[0,1)$ is a discount factor. In this case the estimated gradient direction after $T$ steps is simply
$$\hat{\nabla}_\beta\eta(\theta):=\frac{1}{T}\sum_{t=0}^{T-1}r(X_t)\,z_t(\beta). \qquad (11)$$
This is precisely the estimate we analyze in the present paper. A similar estimate with $r(X_t)$ replaced by $r(X_t)-b$, where $b$ is a reward baseline, was proposed by \shortciteAkimura95,kimura97 and for continuous control by \citeAkimura98. In fact the use of $r(X_t)-b$ in place of $r(X_t)$ does not affect the expectation of the estimates of the algorithm (although judicious choice of the reward baseline $b$ can reduce the variance of the estimates). While the algorithm presented by \citeAkimura95 provides estimates of the expectation under the stationary distribution of the gradient of the discounted reward, we will show that these are in fact biased estimates of the gradient of the expected discounted reward. This arises because the stationary distribution itself depends on the parameters. A similar estimate to (11) was also proposed by \citeAmarbach98, but this time with $r(X_t)$ replaced by $r(X_t)-\hat{\eta}_t$, where $\hat{\eta}_t$ is an estimate of the average reward, and with $z_t$ zeroed on visits to an identifiable recurrent state.
As a final note, observe that the eligibility traces $z_t(\beta)$ and $z_t(n)$ defined by (10) and (8) are simply filtered versions of the sequence of likelihood ratios $\nabla p_{X_t X_{t+1}}(\theta)/p_{X_t X_{t+1}}(\theta)$: a first-order, infinite impulse response filter in the case of $z_t(\beta)$ and an $n$-th order, finite impulse response filter in the case of $z_t(n)$. This raises the question, not addressed in this paper, of whether there is an interesting theory of optimal filtering for policy-gradient estimators.
### 1.2 Our Contribution
We describe GPOMDP, a general algorithm based upon (11) for generating a biased estimate of the gradient $\nabla\eta(\theta)$ of the average reward $\eta(\theta)$ in general POMDPs controlled by parameterized stochastic policies. GPOMDP does not rely on access to an underlying recurrent state. Writing $\nabla_\beta\eta(\theta)$ for the expectation of the estimate produced by GPOMDP, we show that $\lim_{\beta\to1}\nabla_\beta\eta(\theta)=\nabla\eta(\theta)$, and more quantitatively that $\nabla_\beta\eta(\theta)$ is close to the true gradient provided $1/(1-\beta)$ exceeds the mixing time of the Markov chain induced by the POMDP. (The mixing-time result in this paper applies only to Markov chains with distinct eigenvalues. Better estimates of the bias and variance of GPOMDP may be found in \citeAjcss_01, for more general Markov chains than those treated here, and for more refined notions of the mixing time. Roughly speaking, the variance of GPOMDP grows with $1/(1-\beta)$, while the bias decreases as a function of $1/(1-\beta)$.) As with the truncated estimate above, the trade-off preventing the setting of $\beta$ arbitrarily close to $1$ is that the variance of the algorithm’s estimates increases as $\beta$ approaches $1$. We prove convergence with probability 1 of GPOMDP for both discrete and continuous observation and control spaces. We present algorithms for both general parameterized Markov chains and POMDPs controlled by parameterized stochastic policies.
There are several extensions to GPOMDP that we have investigated since the first version of this paper was written. We outline these developments briefly in Section 7.
In a companion paper we show how the gradient estimates produced by GPOMDP can be used to perform gradient ascent on the average reward [6]. We describe both traditional stochastic gradient algorithms, and a conjugate-gradient algorithm that utilizes gradient estimates in a novel way to perform line searches. Experimental results are presented illustrating both the theoretical results of the present paper on a toy problem, and practical aspects of the algorithms on a number of more realistic problems.
## 2 The Reinforcement Learning Problem
We model reinforcement learning as a Markov decision process (MDP) with a finite state space $\mathcal{S}=\{1,\dots,n\}$, and a stochastic matrix $P=[p_{ij}]$ giving the probability of transition from state $i$ to state $j$ (a stochastic matrix has $p_{ij}\ge0$ for all $i,j$ and $\sum_{j=1}^{n}p_{ij}=1$ for all $i$). Each state $i$ has an associated reward $r(i)$ (all the results in the present paper apply to bounded stochastic rewards, in which case $r(i)$ is the expectation of the reward in state $i$). The matrix $P$ belongs to a parameterized class of stochastic matrices, $\mathcal{P}:=\left\{P(\theta):\theta\in\mathbb{R}^K\right\}$. Denote the Markov chain corresponding to $P(\theta)$ by $M(\theta)$. We assume that these Markov chains and rewards satisfy the following assumptions:
###### Assumption 1.
Each $P(\theta)\in\mathcal{P}$ has a unique stationary distribution $\pi(\theta):=\left[\pi(\theta,1),\dots,\pi(\theta,n)\right]'$ satisfying the balance equations
$$\pi'(\theta)P(\theta)=\pi'(\theta) \qquad (12)$$
(throughout, $\pi'$ denotes the transpose of $\pi$).
###### Assumption 2.
The magnitudes of the rewards, $|r(i)|$, are uniformly bounded by $R<\infty$ for all states $i$.
Assumption 1 ensures that the Markov chain $M(\theta)$ forms a single recurrent class for all parameters $\theta$. Since any finite-state Markov chain always ends up in a recurrent class, and it is the properties of this class that determine the long-term average reward, this assumption is mainly for convenience so that we do not have to include the recurrence class as a quantifier in our theorems. However, when we consider gradient-ascent algorithms \citeAjair_01b, this assumption becomes more restrictive since it guarantees that the recurrence class cannot change as the parameters are adjusted.
Ordinarily, a discussion of MDPs would not be complete without some mention of the actions available in each state and the space of policies available to the learner. In particular, the parameters $\theta$ would usually determine a policy (either directly or indirectly via a value function), which would then determine the transition probabilities $p_{ij}(\theta)$. However, for our purposes we do not care how the dependence of $P(\theta)$ on $\theta$ arises, just that it satisfies Assumption 1 (and some differentiability assumptions that we shall meet in the next section). Note also that it is easy to extend this setup to the case where the rewards also depend on the parameters $\theta$ or on the transitions $i\to j$. It is equally straightforward to extend our algorithms and results to these cases. See Section 6.1 for an illustration.
The goal is to find a $\theta\in\mathbb{R}^K$ maximizing the average reward:
$$\eta(\theta):=\lim_{T\to\infty}\mathbf{E}_\theta\left[\frac{1}{T}\sum_{t=0}^{T-1}r(X_t)\;\Big|\;X_0=i\right],$$
where $\mathbf{E}_\theta$ denotes the expectation over all sequences $X_0,X_1,\dots$ with transitions generated according to $P(\theta)$. Under Assumption 1, $\eta(\theta)$ is independent of the starting state $i$ and is equal to
$$\eta(\theta)=\sum_{i=1}^{n}\pi(\theta,i)\,r(i)=\pi'(\theta)\,r, \qquad (13)$$
where $r=\left[r(1),\dots,r(n)\right]'$ [9].
## 3 Computing the Gradient of the Average Reward
For general MDPs little will be known about the average reward $\eta(\theta)$, hence finding its optimum will be problematic. However, in this section we will see that under general assumptions the gradient $\nabla\eta(\theta)$ exists, and so local optimization of $\eta(\theta)$ is possible.
To ensure the existence of suitable gradients (and the boundedness of certain random variables), we require that the parameterized class of stochastic matrices satisfies the following additional assumption.
###### Assumption 3.
The derivatives,
$$\nabla P(\theta):=\left[\frac{\partial p_{ij}(\theta)}{\partial\theta_k}\right]_{i,j=1\dots n;\;k=1\dots K}$$
exist for all $\theta\in\mathbb{R}^K$. The ratios
$$\left[\frac{\left|\,\partial p_{ij}(\theta)/\partial\theta_k\,\right|}{p_{ij}(\theta)}\right]_{i,j=1\dots n;\;k=1\dots K}$$
are uniformly bounded by $B<\infty$ for all $\theta\in\mathbb{R}^K$.
The second part of this assumption allows zero-probability transitions ($p_{ij}(\theta)=0$) only if $\nabla p_{ij}(\theta)$ is also zero, in which case we set $0/0=0$. One example is if $i\to j$ is a forbidden transition, so that $p_{ij}(\theta)=0$ for all $\theta$. Another example satisfying the assumption is
$$p_{ij}(\theta)=\frac{e^{\theta_{ij}}}{\sum_{j'=1}^{n}e^{\theta_{ij'}}},$$
where the $\theta_{ij}$ are the parameters of $P(\theta)$, for then
$$\frac{\partial p_{ij}(\theta)/\partial\theta_{ij}}{p_{ij}(\theta)}=1-p_{ij}(\theta),\qquad\text{and}\qquad\frac{\partial p_{ij}(\theta)/\partial\theta_{il}}{p_{ij}(\theta)}=-p_{il}(\theta)\quad(l\ne j).$$
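These ratios are easy to confirm numerically; here is a small finite-difference check (our own, not from the paper):

```python
import numpy as np

def P(theta):
    """Row-wise softmax transition matrix p_ij = exp(theta_ij) / sum_j' exp(theta_ij')."""
    e = np.exp(theta)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
theta = rng.normal(size=(3, 3))
p = P(theta)
i, j, l = 0, 1, 2
eps = 1e-6

# d p_ij / d theta_ij divided by p_ij should equal 1 - p_ij
d = theta.copy(); d[i, j] += eps
print((P(d)[i, j] - p[i, j]) / eps / p[i, j], 1 - p[i, j])

# d p_ij / d theta_il divided by p_ij should equal -p_il (l != j)
d = theta.copy(); d[i, l] += eps
print((P(d)[i, j] - p[i, j]) / eps / p[i, j], -p[i, l])
```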
Assuming for the moment that $\nabla\eta(\theta)$ exists (this will be justified shortly), then, suppressing $\theta$ dependencies,
$$\nabla\eta=\nabla\pi'r, \qquad (14)$$
since the reward $r$ does not depend on $\theta$. Note that our convention for $\nabla$ in this paper is that it takes precedence over all other operations, so $\nabla\pi'r=\left[\nabla\pi'\right]r$. Equations like (14) should be regarded as shorthand notation for $K$ equations of the form
$$\frac{\partial\eta(\theta)}{\partial\theta_k}=\left[\frac{\partial\pi(\theta,1)}{\partial\theta_k},\dots,\frac{\partial\pi(\theta,n)}{\partial\theta_k}\right]\left[r(1),\dots,r(n)\right]'$$
where $k=1,\dots,K$. To compute $\nabla\pi'$, first differentiate the balance equations (12) to obtain
$$\nabla\pi'P+\pi'\nabla P=\nabla\pi',$$
and hence
$$\nabla\pi'(I-P)=\pi'\nabla P. \qquad (15)$$
The system of equations defined by (15) is under-constrained because $I-P$ is not invertible (the balance equations show that $I-P$ has a left eigenvector, $\pi'$, with zero eigenvalue). However, let $e$ denote the $n$-dimensional column vector consisting of all $1$s, so that $e\pi'$ is the matrix with the stationary distribution $\pi'$ in each row. Since $\nabla\pi'e=\nabla\left[\pi'e\right]=\nabla1=0$, we can rewrite (15) as
$$\nabla\pi'\left[I-(P-e\pi')\right]=\pi'\nabla P.$$
To see that the inverse $\left[I-(P-e\pi')\right]^{-1}$ exists, let $A$ be any matrix satisfying $\lim_{t\to\infty}A^t=0$. Then we can write
$$\lim_{T\to\infty}\left[(I-A)\sum_{t=0}^{T}A^t\right]=\lim_{T\to\infty}\left[\sum_{t=0}^{T}A^t-\sum_{t=1}^{T+1}A^t\right]=I-\lim_{T\to\infty}A^{T+1}=I.$$
Thus,
$$(I-A)^{-1}=\sum_{t=0}^{\infty}A^t.$$
It is easy to prove by induction that $\left[P-e\pi'\right]^t=P^t-e\pi'$ for $t\ge1$, which converges to $0$ as $t\to\infty$ by Assumption 1. So $\left[I-(P-e\pi')\right]^{-1}$ exists and is equal to $\sum_{t=0}^{\infty}\left[P-e\pi'\right]^t$. Hence, we can write
$$\nabla\pi'=\pi'\nabla P\left[I-P+e\pi'\right]^{-1}, \qquad (16)$$
and so (the argument leading to (16), coupled with the fact that $\pi(\theta)$ is the unique solution to (12), can be used to justify the existence of $\nabla\pi$: one can run through the same steps for the value of $\pi$ at nearby parameters and show that the expression (16) is the resulting derivative)
$$\nabla\eta=\pi'\nabla P\left[I-P+e\pi'\right]^{-1}r. \qquad (17)$$
For MDPs with a sufficiently small number of states, (17) could be solved exactly to yield the precise gradient direction. However, in general, if the state space is small enough that an exact solution of (17) is possible, then it will be small enough to derive the optimal policy using policy iteration and table-lookup, and there would be no point in pursuing a gradient based approach in the first place (equation (17) may still be useful for POMDPs, since in that case there is no tractable dynamic programming algorithm).
Thus, for problems of practical interest, (17) will be intractable and we will need to find some other way of computing the gradient. One approximate technique for doing this is presented in the next section.
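For concreteness, the following NumPy sketch (ours, not from the paper) evaluates (17) exactly for a small softmax-parameterized chain, approximating $\nabla P$ by finite differences, and checks the result against a direct finite difference of $\eta(\theta)$:

```python
import numpy as np

def P(theta):
    e = np.exp(theta)
    return e / e.sum(axis=1, keepdims=True)

def stationary(Pm):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    w, v = np.linalg.eig(Pm.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

def eta(theta, r):
    return stationary(P(theta)) @ r

def grad_eta(theta, r, eps=1e-7):
    """Exact gradient via (17): pi' dP [I - P + e pi']^{-1} r, parameter by parameter."""
    n = theta.shape[0]
    Pm, pi = P(theta), stationary(P(theta))
    A = np.linalg.inv(np.eye(n) - Pm + np.outer(np.ones(n), pi))
    g = np.zeros_like(theta)
    for idx in np.ndindex(theta.shape):
        d = theta.copy(); d[idx] += eps
        dP = (P(d) - Pm) / eps            # finite-difference stand-in for dP/dtheta_idx
        g[idx] = pi @ dP @ A @ r
    return g

rng = np.random.default_rng(2)
theta = rng.normal(size=(3, 3))
r = np.array([1.0, 0.0, 2.0])

g = grad_eta(theta, r)
idx, eps = (0, 1), 1e-6
d = theta.copy(); d[idx] += eps
print(g[idx], (eta(d, r) - eta(theta, r)) / eps)   # the two should agree closely
```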
## 4 Approximating the Gradient in Parameterized Markov Chains
In this section, we show that the gradient $\nabla\eta$ can be split into two components, one of which becomes negligible as a discount factor $\beta$ approaches $1$.
For all $\beta\in[0,1)$, let $J_\beta(\theta)=\left[J_\beta(\theta,1),\dots,J_\beta(\theta,n)\right]'$ denote the vector of expected discounted rewards from each state $i$:
$$J_\beta(\theta,i):=\mathbf{E}_\theta\left[\sum_{t=0}^{\infty}\beta^t r(X_t)\;\Big|\;X_0=i\right]. \qquad (18)$$
Where the $\theta$ dependence is obvious, we just write $J_\beta$.
###### Proposition 1.
For all $\theta\in\mathbb{R}^K$ and $\beta\in[0,1)$,
$$\nabla\eta=(1-\beta)\nabla\pi'J_\beta+\beta\pi'\nabla PJ_\beta. \qquad (19)$$
###### Proof.
Observe that $J_\beta$ satisfies the Bellman equations:
$$J_\beta=r+\beta PJ_\beta \qquad (20)$$
[9]. Hence,
$$\nabla\eta=\nabla\pi'r=\nabla\pi'\left[J_\beta-\beta PJ_\beta\right]=\nabla\pi'J_\beta-\beta\left[\nabla\pi'-\pi'\nabla P\right]J_\beta=(1-\beta)\nabla\pi'J_\beta+\beta\pi'\nabla PJ_\beta,$$
where the third equality uses (15).
We shall see in the next section that the second term in (19) can be estimated from a single sample path of the Markov chain. In fact, Theorem 1 in [14] shows that the gradient estimates of the algorithm presented in that paper converge to $\pi'\nabla J_\beta$. By the Bellman equations (20), this is equal to $\frac{\beta}{1-\beta}\pi'\nabla PJ_\beta$, which implies that the algorithm of \citeAkimura97 also estimates the second term in the expression for $\nabla\eta$ given by (19). It is important to note that $\pi'\nabla J_\beta\ne\nabla\left[\pi'J_\beta\right]$: the two quantities disagree by the first term in (19). This arises because the stationary distribution itself depends on the parameters. Hence, the algorithm of \citeAkimura97 does not estimate the gradient of the expected discounted reward. In fact, the expected discounted reward is simply $1/(1-\beta)$ times the average reward \shortcite[Fact 7]singh94a, so the gradient of the expected discounted reward is proportional to the gradient of the average reward.
The following theorem shows that the first term in (19) becomes negligible as $\beta$ approaches $1$. Notice that this is not immediate from Proposition 1, since $J_\beta$ can become arbitrarily large in the limit $\beta\to1$.
###### Theorem 2.
For all $\theta\in\mathbb{R}^K$,
$$\nabla\eta=\lim_{\beta\to1}\nabla_\beta\eta, \qquad (21)$$
where
$$\nabla_\beta\eta:=\pi'\nabla PJ_\beta. \qquad (22)$$
###### Proof.
Recalling equation (17) and the discussion preceding it, we have
$$\nabla\eta=\pi'\nabla P\sum_{t=0}^{\infty}\left[P^t-e\pi'\right]r \qquad (23)$$
(since $\sum_{t=0}^{\infty}\left[P^t-e\pi'\right]r$ is a vector of differential rewards, (23) motivates a different kind of algorithm for estimating $\nabla\eta$ based on differential rewards [16]). But $\nabla Pe=\nabla\left[Pe\right]=\nabla e=0$ since $P$ is a stochastic matrix, so (23) can be rewritten as
$$\nabla\eta=\pi'\left[\sum_{t=0}^{\infty}\nabla P\,P^t\right]r. \qquad (24)$$
Now let $\beta\in[0,1)$ be a discount factor and consider the expression
$$f(\beta):=\pi'\left[\sum_{t=0}^{\infty}\nabla P(\beta P)^t\right]r. \qquad (25)$$
Clearly $f(\beta)=\nabla_\beta\eta$. To complete the proof we just need to show that $\lim_{\beta\to1}f(\beta)=\nabla\eta$.
Since $\lim_{t\to\infty}(\beta P)^t=0$, we can invoke the observation before (16) to write
$$\sum_{t=0}^{\infty}(\beta P)^t=\left[I-\beta P\right]^{-1}.$$
In particular, $\sum_{t=0}^{\infty}(\beta P)^t$ converges, so we can take $\nabla P$ back out of the sum in the right-hand-side of (25) and write (we cannot take $\nabla P$ back out of the sum in the right-hand-side of (24) because $\sum_t P^t$ diverges, since $P^t\to e\pi'$; the reason $\sum_t\nabla P\,P^t$ converges is that $\nabla P$ becomes orthogonal to $P^t$ in the limit of large $t$; thus, we can view $\sum_t P^t$ as a sum of two orthogonal components, an infinite one in the direction $e\pi'$ and a finite one in the direction $P^t-e\pi'$; it is the finite component that we need to estimate; approximating $\sum_t P^t$ with $\sum_t(\beta P)^t$ is a way of rendering the $e\pi'$-component finite while hopefully not altering the $P^t-e\pi'$-component too much; there should be other substitutions that lead to better approximations, and in this context, see the final paragraph in Section 1.1)
$$f(\beta)=\pi'\nabla P\left[\sum_{t=0}^{\infty}\beta^tP^t\right]r. \qquad (26)$$
But $\pi'\nabla P\,e\pi'r=0$, so the divergent $e\pi'$-component of $\sum_{t=0}^{\infty}\beta^tP^t$ contributes nothing to (26). Thus $\lim_{\beta\to1}f(\beta)=\pi'\nabla P\sum_{t=0}^{\infty}\left[P^t-e\pi'\right]r=\nabla\eta$. ∎
Theorem 2 shows that $\nabla_\beta\eta$ is a good approximation to the gradient as $\beta$ approaches $1$, but it turns out that values of $\beta$ very close to $1$ lead to large variance in the estimates of $\nabla_\beta\eta$ that we describe in the next section. However, the following theorem shows that $1-\beta$ need not be too small, provided the transition probability matrix has distinct eigenvalues, and the Markov chain has a short mixing time. From any initial state, the distribution over states of a Markov chain converges to the stationary distribution, provided the assumption (Assumption 1) about the existence and uniqueness of the stationary distribution is satisfied (see, for example, \citeAlancaster85, Theorem 15.8.1, p. 552). The spectral resolution theorem [15, Theorem 9.5.1, p. 314] implies that the distribution converges to stationarity at an exponential rate, and the time constant in this convergence rate (the mixing time) depends on the eigenvalues of the transition probability matrix. The existence of a unique stationary distribution implies that the largest magnitude eigenvalue is $1$ and has multiplicity $1$, and the corresponding left eigenvector is the stationary distribution. We sort the eigenvalues $\lambda_i$ in decreasing order of magnitude, so that $1=\lambda_1>|\lambda_2|\ge\cdots\ge|\lambda_n|$. It turns out that $|\lambda_2|$ determines the mixing time of the chain.
The following theorem shows that if $1-\beta$ is small compared to $1-|\lambda_2|$, the gradient approximation described above is accurate. Since we will be using the estimate as a direction in which to update the parameters, the theorem compares the directions of the gradient and its estimate. In this theorem, $\kappa_2(A)$ denotes the spectral condition number of a nonsingular matrix $A$, which is defined as the product of the spectral norms of the matrices $A$ and $A^{-1}$,
$$\kappa_2(A)=\|A\|_2\,\|A^{-1}\|_2, \quad\text{where}\quad \|A\|_2=\max_{x:\|x\|=1}\|Ax\|,$$
and $\|x\|$ denotes the Euclidean norm of the vector $x$.
###### Theorem 3.
Suppose that the transition probability matrix $P(\theta)$ satisfies Assumption 1 with stationary distribution $\pi'=(\pi_1,\dots,\pi_n)$, and has $n$ distinct eigenvalues. Let $S=\left[x_1\;x_2\cdots x_n\right]$ be the matrix of right eigenvectors of $P(\theta)$ corresponding, in order, to the eigenvalues $1=\lambda_1>|\lambda_2|\ge\cdots\ge|\lambda_n|$. Then the normalized inner product between $\nabla\eta$ and $\beta\nabla_\beta\eta$ satisfies
$$1-\frac{\nabla\eta\cdot\beta\nabla_\beta\eta}{\|\nabla\eta\|^2}\le\kappa_2\!\left(\Pi^{1/2}S\right)\frac{\left\|\nabla\!\left(\sqrt{\pi_1},\dots,\sqrt{\pi_n}\right)\right\|}{\|\nabla\eta\|}\,\sqrt{r'\Pi r}\;\frac{1-\beta}{1-\beta|\lambda_2|}, \qquad (27)$$
where $\Pi=\operatorname{diag}(\pi_1,\dots,\pi_n)$.
Notice that $r'\Pi r$ is the expectation of $r^2(X)$ under the stationary distribution.
As well as the mixing time (via $|\lambda_2|$), the bound in the theorem depends on another parameter of the Markov chain: the spectral condition number of $\Pi^{1/2}S$. If the Markov chain is reversible (which implies that the eigenvectors are orthogonal), this is equal to the ratio of the maximum to the minimum probability of states under the stationary distribution. However, the eigenvectors do not need to be nearly orthogonal. In fact, the condition that the transition probability matrix have distinct eigenvalues is not necessary; without it, the condition number is replaced by a more complicated expression involving spectral norms of the matrices appearing in the corresponding decomposition of $P$.
###### Proof.
The existence of $n$ distinct eigenvalues implies that $P$ can be expressed as $P=S\Lambda S^{-1}$, where $\Lambda=\operatorname{diag}(\lambda_1,\dots,\lambda_n)$ [15, Theorem 4.10.2, p 153]. It follows that for any polynomial $f$, we can write $f(P)=Sf(\Lambda)S^{-1}$.
Now, Proposition 1 shows that $\beta\nabla_\beta\eta=\nabla\eta-(1-\beta)\nabla\pi'J_\beta$. But
$$(1-\beta)J_\beta=(1-\beta)\left(r+\beta Pr+\beta^2P^2r+\cdots\right)=(1-\beta)S\left(\sum_{t=0}^{\infty}\beta^t\Lambda^t\right)S^{-1}r=(1-\beta)\sum_{j=1}^{n}x_jy_j'\left(\sum_{t=0}^{\infty}(\beta\lambda_j)^t\right)r,$$
where $y_j'$ denotes the $j$-th row of $S^{-1}$.
It is easy to verify that $\pi'$ is the left eigenvector corresponding to $\lambda_1=1$, and that we can choose $x_1=e$ and $y_1=\pi$. Thus we can write
$$(1-\beta)J_\beta=(1-\beta)e\pi'r+\sum_{j=2}^{n}x_jy_j'\left(\sum_{t=0}^{\infty}(1-\beta)(\beta\lambda_j)^t\right)r=(1-\beta)e\eta+SMS^{-1}r,$$
where
$$M=\operatorname{diag}\!\left(0,\;\frac{1-\beta}{1-\beta\lambda_2},\;\dots,\;\frac{1-\beta}{1-\beta\lambda_n}\right).$$
It follows from this and Proposition 1 that
$$1-\frac{\nabla\eta\cdot\beta\nabla_\beta\eta}{\|\nabla\eta\|^2}=\frac{\nabla\eta\cdot\nabla\pi'(1-\beta)J_\beta}{\|\nabla\eta\|^2}=\frac{\nabla\eta\cdot\nabla\pi'\left[(1-\beta)e\eta+SMS^{-1}r\right]}{\|\nabla\eta\|^2}=\frac{\nabla\eta\cdot\nabla\pi'SMS^{-1}r}{\|\nabla\eta\|^2}\le\frac{\left\|\nabla\pi'SMS^{-1}r\right\|}{\|\nabla\eta\|},$$
by the Cauchy-Schwartz inequality (the third equality uses $\nabla\pi'e=0$). Expressing $\nabla\pi'$ in terms of $\nabla\!\left(\sqrt{\pi'}\right)$ and $\Pi^{1/2}$, we can apply the Cauchy-Schwartz inequality again to obtain
$$1-\frac{\nabla\eta\cdot\beta\nabla_\beta\eta}{\|\nabla\eta\|^2}\le\frac{\left\|\nabla\!\left(\sqrt{\pi'}\right)\right\|\left\|\Pi^{1/2}SMS^{-1}r\right\|}{\|\nabla\eta\|}. \qquad (28)$$
We use spectral norms to bound the second factor in the numerator. It is clear from the definition that the spectral norm of a product of nonsingular matrices satisfies $\|AB\|_2\le\|A\|_2\|B\|_2$, and that the spectral norm of a diagonal matrix is given by $\|\operatorname{diag}(d_1,\dots,d_n)\|_2=\max_i|d_i|$. It follows that
$$\left\|\Pi^{1/2}SMS^{-1}r\right\|=\left\|\Pi^{1/2}SMS^{-1}\Pi^{-1/2}\,\Pi^{1/2}r\right\|\le\left\|\Pi^{1/2}S\right\|_2\left\|S^{-1}\Pi^{-1/2}\right\|_2\left\|\Pi^{1/2}r\right\|\,\|M\|_2\le\kappa_2\!\left(\Pi^{1/2}S\right)\sqrt{r'\Pi r}\;\frac{1-\beta}{1-\beta|\lambda_2|}.$$
Combining with Equation (28) proves (27). ∎
## 5 Estimating the Gradient in Parameterized Markov Chains
Algorithm 1 introduces MCG (Markov Chain Gradient), an algorithm for estimating the approximate gradient $\nabla_\beta\eta$ from a single on-line sample path $X_0,X_1,\dots$ of the Markov chain $M(\theta)$. MCG requires only $2K$ reals to be stored, where $K$ is the dimension of the parameter space: $K$ parameters for the eligibility trace $z_t$, and $K$ parameters for the gradient estimate $\Delta_t$. Note that after $T$ time steps $\Delta_T$ is the average so far of the products $z_t\,r(X_t)$,
$$\Delta_T=\frac{1}{T}\sum_{t=0}^{T-1}z_t\,r(X_t).$$
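In code, one pass of this estimator might look as follows (our own sketch; the chain interface is hypothetical). Note that, as claimed, only the two $K$-vectors $z$ and $\Delta$ are stored:

```python
import numpy as np

def mcg_estimate(sample_transition, grad_log_p, reward, theta, beta, T, x0):
    """Estimate the approximate gradient from one sample path, per (10) and (11).

    sample_transition(theta, x) -> next state  [hypothetical interface]
    grad_log_p(theta, x, y)     -> grad p_xy(theta) / p_xy(theta)
    reward(x)                   -> r(x)
    """
    z = np.zeros_like(theta)       # eligibility trace z_t(beta), eq. (10)
    delta = np.zeros_like(theta)   # running average Delta_t
    x = x0
    for t in range(T):
        y = sample_transition(theta, x)             # one step of the chain M(theta)
        z = beta * z + grad_log_p(theta, x, y)      # discount and accumulate the trace
        delta += (reward(y) * z - delta) / (t + 1)  # incremental average of r * z
        x = y
    return delta
```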
https://web2.0calc.com/questions/help_71935
Sadly all the answers I got were wrong... Can someone help? THANKS!
May 22, 2020
#1
You said you've been working on it; after a few hours you should have made a bit of progress. Can you share what you have done so far?
May 22, 2020
#2
I tried to calculate the number of possibilities of Aces, 2s, and 3s in the same hand as $4^3\times3^3\times2^3\times1^3=13824$, and the total number of possibilities as $\binom{12}{3}\binom{8}{3}\binom{4}{3}=49280$, giving $13824/49280=\boxed{\frac{108}{385}}$.
Sorry, the latex doesn't work here...
Guest May 22, 2020
#3
good job! you're on the right track! the numerator is correct, but the denominator isn't.
you got the 12 choose 3 part right, but not the rest.
think about it, if you've already chosen 3 cards, then there would be 12-3=9 cards left to choose from, not 8.
you're really close! you got this!
:)))
#4
Wait. So I should have done 12 choose 3 times 9 choose 3 times 6 choose 3?
May 22, 2020
#5
yes!!! |
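For anyone who wants to double-check the counts from #2 and #3 with a quick script (this just recomputes the numbers discussed above, under the corrected denominator suggested in #3):

```python
from fractions import Fraction
from math import comb

numerator = 4**3 * 3**3 * 2**3 * 1**3                 # 13824, as in #2
denominator = comb(12, 3) * comb(9, 3) * comb(6, 3)   # corrected per #3
print(Fraction(numerator, denominator))               # -> 72/1925
```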
# Microsoft has released an early preview copy of its new operating system, Windows 7.
#### Talon1
My god!!!! What the hell are they thinking!!!! Vista is bad enough, and now they released another version of Vista?! If I wanted Windows Vista or Windows 7, I would rather wait until someone made a free Windows XP to Windows Vista or Windows 7 transformation pack (and that's exactly what I did, and now my Windows XP looks like Windows Vista). Although on the other hand, Windows 7 looks much, much better than Windows Vista and Windows XP because it's faster, and it doesn't cause as many problems as Windows XP and Vista......
#### James.Denholm
*face palm*
It's not Vista v2... it's Windows 7... difference... *collapses in tired heap*
#### Hielor
##### Defender of Truth
My god!!!! What the hell are they thinking!!!! Vista is bad enough, and now they released another version of Vista?!
It's not another version of Vista, it's a new OS. Moreover, it's not released yet.
If I wanted Windows Vista or Windows 7, I would rather wait until someone made a free Windows XP to Windows Vista or Windows 7 transformation pack (and that's exactly what I did, and now my Windows XP looks like Windows Vista).
I doubt that. Also, WindowsXP-that-looks-like-Vista is NOT Vista, so what's the point?
Although, Windows 7 looks much, much better than Windows Vista and Windows XP because it's faster, and it doesn't cause as many problems as Windows XP and Vista......
Wait, weren't you just complaining about it? Also, XP causing problems? XP is widely considered to be the best and most stable Windows platform to date.
#### James.Denholm
Wait, weren't you just complaining about it? Also, XP causing problems? XP is widely considered to be the best and most stable Windows platform to date.
Note that he said "to date", Talon; he's not commenting on the stability of "Win7" (as I'm calling it) because it's not released yet.
#### joeybigO
##### can't get in a word edgewise
*face palm*
It's not Vista v2... it's Windows 7... difference... *collapses in tired heap*
#### simonpro
##### Beta Tester
It is probably fine for games and word processing and such if you have a decent machine, but it is *horribly* slow for number crunching applications where you need to crunch through flops as fast as possible. We upgraded to quad cores at work with Vista and attempted to run CFD simulations on them - we rolled back to XP after we started blowing project deadlines.
Really? We switched over to Vista here about 6 months ago, no problems at all. Our numerical simulations are definitely not slower, and the ones that run in (shudder) IDL are actually a little faster. The latter probably says more about the slowness of IDL than anything else, though.
(edit) For the record, I like Vista. Once you disable all the stupid nagging features and fiddle with the graphics a bit it looks fine, runs fine and does everything I want it to do.
The one and only thing that I don't like about Vista is that it refuses to play 2 audio streams at once, so I can't have iTunes and a video running at the same time. That's not exactly a showstopper though, and considering how much nicer it is than XP I can live with it.
#### Face
##### Well-known member
Well, you can't expect someone who invented "managed code" to be able to think straight.
Hm. While you're certainly right about the term "managed code" itself, I'd like to note that M$ didn't invent the concept behind managed code. This was first done by Sun with Java IIRC, at least for the popular mechanism; there were many experiments with managed code before Java, e.g. UCSD Pascal. And I don't think all those guys weren't able to think straight. Not that this says anything about M$'s mind state, though.
regards,
Face
BTW: Can a company have a mind state, after all???
#### tblaxland
Really? We switched over to Vista here about 6 months ago, no problems at all.
I agree. Our office has been migrating over to Vista for a while now (we have about a 50/50 Vista/XP split ATM). There were a few teething problems with some of our engineering software, but on the whole they were easier to resolve than with the Win 98->XP transition (we skipped 2000).
The biggest unresolved problem we have is with "offline files" synchronisation not working reliably with our Samba server, but it is not really a big deal, since we never used this with XP.
#### Talon1
It's not another version of Vista, it's a new OS. Moreover, it's not released yet.
I doubt that. Also, WindowsXP-that-looks-like-Vista is NOT Vista, so what's the point?
Wait, weren't you just complaining about it? Also, XP causing problems? XP is widely considered to be the best and most stable Windows platform to date.
Uhh, OK, the only thing I like about Windows Vista is its look.
#### Orbinaut Pete
##### ISSU Project Manager
Hmmm...
By starting this thread, I seem to have caused one big international Windows-related fight
:rofl: |
# Ok
## The Obvious Meaning
Obviously, OK is a sideways person. At first it was said to be female, but that was before the species realized they have brains and can do things other than getting pregnant (like, for example giving birth, which did not always require sitting horizontally), so they started a riot. Quick enough, OK became a male. Then Michael Jackson came to be, moments in which everyone was confused and decided to blame its origins on Greek mythology.
Your mom still thinks it's cake on a weird looking table. Nigga Please.
## Greek Language Origins
"O.K." is the abbreviation (spelled correctly) of the Greek expression, Ola Kala (Ολα Καλά, ΟΚ) It is a standard expression in Greece that simply means: "My anus is red". An alternative etymology suggests that Okay is pig latin for K.O. (Knock-out).
Originally, teachers used this as a mark on kids' school papers in red ink as a taunt, saying that the paper was horrible. The meaning was lost when the children came up with a brilliant idea to stop this abuse: they began saying OK (as we know it) with and around their parents so that their parents became familiar with this expression as a positive meaning, and when they brought home their school papers, the parents thought that "OK" was good. Some teachers still use it to mark good school papers today. The brilliant children completely switched the meaning so that now it actually DOES mean the paper is good!
OK was originally written by a dyslexic as K.O. after beating an enemy over the ass so many times that his anus went red and he lost con... the ability to stand up.
The word was introduced to the Western World during the Christmas period of 1689, when the majority of Christmas trees were oak saplings. A travelling pine tree salesman approached a Canadian passerby, and offered to sell him a tree, in a pitch similar to the following excerpt.
Salesman: "Would you like to buy a tree, sir?" Canadian: "Oak, eh."
The Canadian was given a pine tree to use that Christmas, as were many other Canadian families. Thousands of oak loving children cried that Christmas.
## In the Medical Field
The abbreviation "OK" was informally used to communicate some type of anus rash (light to severe) was present, mainly used between and among doctors. Also, for a doctor, hearing the Ola Kala was a quick way to take stock of a situation. Doctors had to mentally prepare themselves before entering a room with a patient that had "OK". It was common for a patient with "OK" to sit for great lengths of time waiting to be treated, as every doctor tried as hard as possible to avoid having to be the one to treat those patients.
## Uses in the United States
"OK," sometimes spelled as "okay," is the most recognized word in the United States of America today, and rightly so. It was first used in the United States in the mid 1960's when the coach of the Birmingham Firehoses, a professional baseball team, told a player named Rosco Peterson, "Go get them Niggers!" to which Rosco replied, "OK". To this day the word has been widely used in the United States. It is often heard in gay bars (this usually is accompanied by a flick of the wrist) and even known to be the first words spoken when a prostitute asks a virgin if he wants to conduct business.
Today, the word is also used in the U.S. when a question is asked to a person that does not speak the language in which the question was asked;
For example:
Englishman - "What type of tea would you like?"...Spaniard - "OK".
Asian - "Small or Large Boba?"...American - "OK".
African American - "You strapped?"...Canadian - "OK".
But some say that O.K. would be the abbreviation of Donkey Kong's long lost brother, Onkey Kong.
## Common questions in which "OK" is used as a response
Often, when an individual is lost in the moment of a ridiculous and also lewd thought, s/he will respond paradoxically to any question with the same initial reply (OK). This is a physiological response orchestrated by a severely imbalanced brain chemistry as a result of massive head trauma sustained during the fifth grade, as well as a strong sense of shame from the act of masturbation. A similar condition with the exact same symptoms as described above is caused by a cell phone somehow being accidentally glued or hammered into the side of the face. The only known cure for this unfortunate condition is a swift kick, punch, slap, or similar impulsive event, to the opposite side of the head. Also, a pail of hot or cold water can be applied to the area. If you've done it right, the battery will short circuit, inducing a sizeable shock to the brain. This kills enough brain cells in precisely the correct region to render the patient brain dead, incapable of any linguistic vocalizations in the first place.
The following are common situations in which a sufferer will exhibit the response "OK":
• "Paper or plastic, sir?"
• "And what kind of drink with that?"
• "I'm gonna bash you, gay!"
• "Can I have the rest of your french fries?"
• "I'm an American, so I smell like a poof."
• "According to my calculations, if you $67/8x93+rfha+67$ Should make all of us want to make a lot of maths!"
• "What the hell are you doing on me?!?! GET THE HELL OFF!"
• "Stop raping me!"
• "No."
• "Yes."
• "This isn't making much sense."
• "Eat something,"
• "AAGGHHHHHH FUCK YOU!!!"
• "GOD DAMN STOP TEABAGGING ME!!!"
• "Hello, I am a gay. You fancy a bum?"
• "I'm considering whether you need a good punch in the weiner." |
## Flexible smoothing with $$B$$-splines and penalties. With comments and a rejoinder by the authors. (English) Zbl 0955.62562
Summary: $$B$$-splines are attractive for nonparametric modelling, but choosing the optimal number and positions of knots is a complex task. Equidistant knots can be used, but their small and discrete number allows only limited control over smoothness and fit. We propose to use a relatively large number of knots and a difference penalty on coefficients of adjacent $$B$$-splines. We show connections to the familiar spline penalty on the integral of the squared second derivative. A short overview of $$B$$-splines, of their construction and of penalized likelihood is presented. We discuss properties of penalized $$B$$-splines and propose various criteria for the choice of an optimal penalty parameter. Nonparametric logistic regression, density estimation and scatterplot smoothing are used as examples. Some details of the computations are presented.
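As a rough illustration of the idea (our sketch, not the authors' code): take a generous equally spaced B-spline basis $$B$$, build a $$d$$-th order difference matrix $$D$$ acting on adjacent coefficients, and solve the penalized normal equations $$(B'B+\lambda D'D)a=B'y$$. With $$d=2$$ this mimics the classical penalty on the integrated squared second derivative.

```python
import numpy as np
from scipy.interpolate import BSpline

def pspline_fit(x, y, n_knots=20, degree=3, diff_order=2, lam=1.0):
    """P-spline smoother: rich equidistant B-spline basis plus difference penalty."""
    inner = np.linspace(x.min(), x.max(), n_knots)
    t = np.r_[[inner[0]] * degree, inner, [inner[-1]] * degree]  # clamped knot vector
    n_basis = len(t) - degree - 1
    # Design matrix: one column per B-spline basis function
    B = np.column_stack([BSpline(t, np.eye(n_basis)[i], degree)(x)
                         for i in range(n_basis)])
    D = np.diff(np.eye(n_basis), n=diff_order, axis=0)  # difference operator on coefficients
    a = np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
    return B @ a

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
yhat = pspline_fit(x, y, lam=5.0)   # smoothed values; vary lam to tune roughness
```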
### MSC:
62G05 Nonparametric estimation
62G07 Density estimation
62G08 Nonparametric regression and quantile regression
### Software:
KernSmooth; FITPACK
# RegressionNeuralNetwork
Neural network model for regression
## Description
A `RegressionNeuralNetwork` object is a trained, feedforward, and fully connected neural network for regression. The first fully connected layer of the neural network has a connection from the network input (predictor data `X`), and each subsequent layer has a connection from the previous layer. Each fully connected layer multiplies the input by a weight matrix (`LayerWeights`) and then adds a bias vector (`LayerBiases`). An activation function follows each fully connected layer, excluding the last (`Activations` and `OutputLayerActivation`). The final fully connected layer produces the network's output, namely predicted response values. For more information, see Neural Network Structure.
## Creation
Create a `RegressionNeuralNetwork` object by using `fitrnet`.
## Properties
### Neural Network Properties
Sizes of the fully connected layers in the neural network model, returned as a positive integer vector. The ith element of `LayerSizes` is the number of outputs in the ith fully connected layer of the neural network model.
`LayerSizes` does not include the size of the final fully connected layer. This layer always has one output.
Data Types: `single` | `double`
Learned layer weights for fully connected layers, returned as a cell array. The ith entry in the cell array corresponds to the layer weights for the ith fully connected layer. For example, `Mdl.LayerWeights{1}` returns the weights for the first fully connected layer of the model `Mdl`.
`LayerWeights` includes the weights for the final fully connected layer.
Data Types: `cell`
Learned layer biases for fully connected layers, returned as a cell array. The ith entry in the cell array corresponds to the layer biases for the ith fully connected layer. For example, `Mdl.LayerBiases{1}` returns the biases for the first fully connected layer of the model `Mdl`.
`LayerBiases` includes the biases for the final fully connected layer.
Data Types: `cell`
Activation functions for the fully connected layers of the neural network model, returned as a character vector or cell array of character vectors with values from this table.
Value | Description

`'relu'` | Rectified linear unit (ReLU) function — Performs a threshold operation on each element of the input, where any value less than zero is set to zero, that is,

`$f\left(x\right)=\left\{\begin{array}{cc}x,& x\ge 0\\ 0,& x<0\end{array}\right.$`

`'tanh'` | Hyperbolic tangent (tanh) function — Applies the `tanh` function to each input element

`'sigmoid'` | Sigmoid function — Performs the following operation on each input element:

`$f\left(x\right)=\frac{1}{1+{e}^{-x}}$`

`'none'` | Identity function — Returns each input element without performing any transformation, that is, f(x) = x
• If `Activations` contains only one activation function, then it is the activation function for every fully connected layer of the neural network model, excluding the final fully connected layer, which does not have an activation function (`OutputLayerActivation`).
• If `Activations` is an array of activation functions, then the ith element is the activation function for the ith layer of the neural network model.
Data Types: `char` | `cell`
Activation function for final fully connected layer, returned as `'none'`.
Parameter values used to train the `RegressionNeuralNetwork` model, returned as a `NeuralNetworkParams` object. `ModelParameters` contains parameter values such as the name-value arguments used to train the regression neural network model.
Access the properties of `ModelParameters` by using dot notation. For example, access the function used to initialize the fully connected layer weights of a model `Mdl` by using `Mdl.ModelParameters.LayerWeightsInitializer`.
### Convergence Control Properties
Convergence information, returned as a structure array.
| Field | Description |
| --- | --- |
| `Iterations` | Number of training iterations used to train the neural network model |
| `TrainingLoss` | Training mean squared error (MSE) for the returned model, or `resubLoss(Mdl)` for model `Mdl` |
| `Gradient` | Gradient of the loss function with respect to the weights and biases at the iteration corresponding to the returned model |
| `Step` | Step size at the iteration corresponding to the returned model |
| `Time` | Total time spent across all iterations (in seconds) |
| `ValidationLoss` | Validation MSE for the returned model |
| `ValidationChecks` | Maximum number of times in a row that the validation loss was greater than or equal to the minimum validation loss |
| `ConvergenceCriterion` | Criterion for convergence |
| `History` | See `TrainingHistory` |
Data Types: `struct`
Training history, returned as a table.
| Column | Description |
| --- | --- |
| `Iteration` | Training iteration |
| `TrainingLoss` | Training mean squared error (MSE) for the model at this iteration |
| `Gradient` | Gradient of the loss function with respect to the weights and biases at this iteration |
| `Step` | Step size at this iteration |
| `Time` | Time spent during this iteration (in seconds) |
| `ValidationLoss` | Validation MSE for the model at this iteration |
| `ValidationChecks` | Running total of times that the validation loss is greater than or equal to the minimum validation loss |
Data Types: `table`
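For example, one quick way (our own snippet, not from this reference page) to inspect convergence for a trained model `Mdl` is to plot the recorded losses:

```
history = Mdl.TrainingHistory;
plot(history.Iteration,history.TrainingLoss)
xlabel("Iteration")
ylabel("Training MSE")
```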
Solver used to train the neural network model, returned as `'LBFGS'`. To create a `RegressionNeuralNetwork` model, `fitrnet` uses a limited-memory Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm (LBFGS) as its loss function minimization technique, where the software minimizes the mean squared error (MSE).
### Predictor Properties
Predictor variable names, returned as a cell array of character vectors. The order of the elements of `PredictorNames` corresponds to the order in which the predictor names appear in the training data.
Data Types: `cell`
Categorical predictor indices, returned as a vector of positive integers. Assuming that the predictor data contains observations in rows, `CategoricalPredictors` contains index values corresponding to the columns of the predictor data that contain categorical predictors. If none of the predictors are categorical, then this property is empty (`[]`).
Data Types: `double`
Expanded predictor names, returned as a cell array of character vectors. If the model uses encoding for categorical variables, then `ExpandedPredictorNames` includes the names that describe the expanded variables. Otherwise, `ExpandedPredictorNames` is the same as `PredictorNames`.
Data Types: `cell`
Unstandardized predictors used to train the neural network model, returned as a numeric matrix or table. `X` retains its original orientation, with observations in rows or columns depending on the value of the `ObservationsIn` name-value argument in the call to `fitrnet`.
Data Types: `single` | `double` | `table`
### Response Properties
Response variable name, returned as a character vector.
Data Types: `char`
Response values used to train the model, returned as a numeric vector. Each row of `Y` represents the response value of the corresponding observation in `X`.
Data Types: `single` | `double`
Response transformation function, returned as `'none'`. The software does not transform the raw response values.
### Other Data Properties
Cross-validation optimization of hyperparameters, specified as a `BayesianOptimization` object or a table of hyperparameters and associated values. This property is nonempty if the `'OptimizeHyperparameters'` name-value pair argument is nonempty when you create the model. The value of `HyperparameterOptimizationResults` depends on the setting of the `Optimizer` field in the `HyperparameterOptimizationOptions` structure when you create the model.
| Value of `Optimizer` Field | Value of `HyperparameterOptimizationResults` |
| --- | --- |
| `'bayesopt'` (default) | Object of class `BayesianOptimization` |
| `'gridsearch'` or `'randomsearch'` | Table of hyperparameters used, observed objective function values (cross-validation loss), and rank of observations from lowest (best) to highest (worst) |
Number of observations in the training data stored in `X` and `Y`, returned as a positive numeric scalar.
Data Types: `double`
Rows of the original training data used in fitting the model, returned as a logical vector. This property is empty if all rows are used.
Data Types: `logical`
Observation weights used to train the model, returned as an n-by-1 numeric vector. n is the number of observations (`NumObservations`).
The software normalizes the observation weights specified in the `Weights` name-value argument so that the elements of `W` sum up to 1.
Data Types: `single` | `double`
## Object Functions
| Function | Description |
| --- | --- |
| `compact` | Reduce size of machine learning model |
| `crossval` | Cross-validate machine learning model |
| `lime` | Local interpretable model-agnostic explanations (LIME) |
| `partialDependence` | Compute partial dependence |
| `plotPartialDependence` | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots |
| `shapley` | Shapley values |
| `loss` | Loss for regression neural network |
| `predict` | Predict responses using regression neural network |
| `resubLoss` | Resubstitution regression loss |
| `resubPredict` | Predict responses for training data using trained regression model |
## Examples
Train a neural network regression model, and assess the performance of the model on a test set.
Load the `carbig` data set, which contains measurements of cars made in the 1970s and early 1980s. Create a table containing the predictor variables `Acceleration`, `Displacement`, and so on, as well as the response variable `MPG`.
```
load carbig
cars = table(Acceleration,Displacement,Horsepower, ...
    Model_Year,Origin,Weight,MPG);
```
Remove rows of `cars` where the table has missing values.
`cars = rmmissing(cars);`
Categorize the cars based on whether they were made in the USA.
```
cars.Origin = categorical(cellstr(cars.Origin));
cars.Origin = mergecats(cars.Origin,["France","Japan", ...
    "Germany","Sweden","Italy","England"],"NotUSA");
```
Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use `cvpartition` to partition the data.
```rng("default") % For reproducibility of the data partition c = cvpartition(height(cars),"Holdout",0.20); trainingIdx = training(c); % Training set indices carsTrain = cars(trainingIdx,:); testIdx = test(c); % Test set indices carsTest = cars(testIdx,:);```
Train a neural network regression model by passing the `carsTrain` training data to the `fitrnet` function. For better results, specify to standardize the predictor data.
`Mdl = fitrnet(carsTrain,"MPG","Standardize",true)`
```
Mdl = 
  RegressionNeuralNetwork
           PredictorNames: {1x6 cell}
             ResponseName: 'MPG'
    CategoricalPredictors: 5
        ResponseTransform: 'none'
          NumObservations: 314
               LayerSizes: 10
              Activations: 'relu'
    OutputLayerActivation: 'none'
                   Solver: 'LBFGS'
          ConvergenceInfo: [1x1 struct]
          TrainingHistory: [1000x7 table]

  Properties, Methods
```
`Mdl` is a trained `RegressionNeuralNetwork` model. You can use dot notation to access the properties of `Mdl`. For example, you can specify `Mdl.TrainingHistory` to get more information about the training history of the neural network model.
Evaluate the performance of the regression model on the test set by computing the test mean squared error (MSE). Smaller MSE values indicate better performance.
`testMSE = loss(Mdl,carsTest,"MPG")`
```testMSE = 6.8780 ```
Specify the structure of the neural network regression model, including the size of the fully connected layers.
Load the `carbig` data set, which contains measurements of cars made in the 1970s and early 1980s. Create a matrix `X` containing the predictor variables `Acceleration`, `Cylinders`, and so on. Store the response variable `MPG` in the variable `Y`.
```
load carbig
X = [Acceleration Cylinders Displacement Weight];
Y = MPG;
```
Delete rows of `X` and `Y` where either array has missing values.
```
R = rmmissing([X Y]);
X = R(:,1:end-1);
Y = R(:,end);
```
Partition the data into training data (`XTrain` and `YTrain`) and test data (`XTest` and `YTest`). Reserve approximately 20% of the observations for testing, and use the rest of the observations for training.
```rng("default") % For reproducibility of the partition c = cvpartition(length(Y),"Holdout",0.20); trainingIdx = training(c); % Indices for the training set XTrain = X(trainingIdx,:); YTrain = Y(trainingIdx); testIdx = test(c); % Indices for the test set XTest = X(testIdx,:); YTest = Y(testIdx);```
Train a neural network regression model. Specify to standardize the predictor data, and to have 30 outputs in the first fully connected layer and 10 outputs in the second fully connected layer. By default, both layers use a rectified linear unit (ReLU) activation function. You can change the activation functions for the fully connected layers by using the `Activations` name-value argument.
```
Mdl = fitrnet(XTrain,YTrain,"Standardize",true, ...
    "LayerSizes",[30 10])
```
```
Mdl = 
  RegressionNeuralNetwork
             ResponseName: 'Y'
    CategoricalPredictors: []
        ResponseTransform: 'none'
          NumObservations: 319
               LayerSizes: [30 10]
              Activations: 'relu'
    OutputLayerActivation: 'none'
                   Solver: 'LBFGS'
          ConvergenceInfo: [1x1 struct]
          TrainingHistory: [1000x7 table]

  Properties, Methods
```
Access the weights and biases for the fully connected layers of the trained model by using the `LayerWeights` and `LayerBiases` properties of `Mdl`. The first two elements of each property correspond to the values for the first two fully connected layers, and the third element corresponds to the values for the final fully connected layer for regression. For example, display the weights and biases for the first fully connected layer.
`Mdl.LayerWeights{1}`
```
ans = 30×4

    0.0123    0.0117   -0.0094    0.1175
   -0.4081   -0.7849   -0.7201   -2.1720
    0.6041    0.1680   -2.3952    0.0934
   -3.2332   -2.8360   -1.8264   -1.5723
    0.5851    1.5370    1.4623    0.6742
   -0.2106    1.2830   -1.7489   -1.5556
    0.4800    0.1012   -1.0044   -0.7959
    1.8015   -0.5272   -0.7670    0.7496
   -1.1428   -0.9902    0.2436    1.2288
   -0.0833   -2.4265    0.8388    1.8597
      ⋮
```
`Mdl.LayerBiases{1}`
```
ans = 30×1

   -0.4450
   -0.8751
   -0.3872
   -1.1345
    0.4499
   -2.1555
    2.2111
    1.2040
   -1.4595
    0.4639
      ⋮
```
The final fully connected layer has one output. The number of layer outputs corresponds to the first dimension of the layer weights and layer biases.
`size(Mdl.LayerWeights{end})`
```
ans = 1×2

     1    10
```
`size(Mdl.LayerBiases{end})`
```
ans = 1×2

     1     1
```
To estimate the performance of the trained model, compute the test set mean squared error (MSE) for `Mdl`. Smaller MSE values indicate better performance.
`testMSE = loss(Mdl,XTest,YTest)`
```testMSE = 18.3681 ```
Compare the predicted test set response values to the true response values. Plot the predicted miles per gallon (MPG) along the vertical axis and the true MPG along the horizontal axis. Points on the reference line indicate correct predictions. A good model produces predictions that are scattered near the line.
```
testPredictions = predict(Mdl,XTest);
plot(YTest,testPredictions,".")
hold on
plot(YTest,YTest)
hold off
xlabel("True MPG")
ylabel("Predicted MPG")
```
## Version History
Introduced in R2021a |
# A little hack to get a pdf from your IPython slides
| Source | Minimap
A lot of users of the slides option in IPython.nbconvert asked me about how to get a pdf from the generated Reveal.js-based slideshow.
To make the story short, here you have the detailed steps:
• cd in the directory where your slideshow lives
• add this custom.css file: https://gist.github.com/damianavila/6211198
• run this little snippet: https://gist.github.com/damianavila/6211211
• run python -m SimpleHTTPServer 8001
• open Mozilla Firefox browser and point to localhost:8001
• add ?print-pdf to the end of the url (ie, http://127.0.0.1:8001/your-ipynb.slides.html?print-pdf)
• print to pdf (use Landscape orientation)
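Note: SimpleHTTPServer is the Python 2 module; if you are running Python 3, the equivalent for the server step is python -m http.server 8001.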
Do you want to get this little hack integrated to IPython? @fperez, the IPython BDFL does:
So, time to write some code and do a PR... in the meantime use this hack ;-)
Damián.
Don't forget this blog post is an ipynb file itself! So, you can download it from the "Source" link at the top of the post if you want to play with it ;-) |
# Equations of Motion of a Wheel Axle Set
1. Sep 15, 2014
### altanonat
Hello all,
I am currently studying the dynamics of a wheel-axle set for my research. My problem is that I cannot obtain the same equation for the rate of change of the momentum as given in the book; the book is a little bit old, and I could not find any errata for it or any other references that explain the derivation of the equations. Thank you in advance for your help.
I am trying to obtain the general wheel axle set equations of motion given in the 5th chapter of the book (all the equations and figures are taken from this book):
I am giving the axes systems used in the book:
https://imagizer.imageshack.us/v2/965x464q90/661/dLDE2P.png [Broken]
The first axes system is used as the fixed inertial reference frame. The second one is an intermediate frame rotated through an angle $\psi$ about the z axis of the third axes system (which is attached to the mass center of the wheelset). Transformation equations between the coordinate axes as given in the book:
$\begin{Bmatrix} i^{'}\\j^{'} \\ k^{'} \end{Bmatrix}=\begin{bmatrix} 1 &0 &0 \\ 0 &cos\phi &sin\phi \\ 0 &-sin\phi & cos\phi \end{bmatrix}\begin{Bmatrix} i^{''}\\j^{''} \\ k^{''} \end{Bmatrix}$
$\begin{Bmatrix} i^{''}\\j^{''} \\ k^{''} \end{Bmatrix}=\begin{bmatrix} cos\psi &sin\psi &0 \\ -sin\psi &cos\psi &0 \\ 0 &0 & 1 \end{bmatrix}\begin{Bmatrix} i^{'''}\\j^{'''} \\ k^{'''} \end{Bmatrix}$
$\begin{Bmatrix} i^{'}\\j^{'} \\ k^{'} \end{Bmatrix}=\begin{bmatrix} cos\psi &sin\psi &0 \\ -cos\phi sin\psi &cos\phi cos\psi &sin\phi \\ sin\phi sin\psi &-sin\phi cos\psi & cos\phi \end{bmatrix}\begin{Bmatrix} i^{'''}\\j^{'''} \\ k^{'''} \end{Bmatrix}$
for small $\psi$ and $\phi$
$\begin{Bmatrix} i^{'}\\j^{'} \\ k^{'} \end{Bmatrix}=\begin{bmatrix} 1 &\psi &0 \\ -\psi &1 &\phi \\ 0 &-\phi & 1 \end{bmatrix}\begin{Bmatrix} i^{'''}\\j^{'''} \\ k^{'''} \end{Bmatrix}$
https://imagizer.imageshack.us/v2/773x270q90/661/B4L8It.png [Broken]
The angular velocity $\mathbf{\omega}$ of the axle wheelset is given by:
$\mathbf{\omega}=\dot{\phi }i^{''}+\left ( \Omega +\dot{\beta } \right )j^{'}+\dot{\psi }k^{''}$
The angular velocity $\mathbf{\omega}$ expressed in body coordinate axis is given by:
$\mathbf{\omega}=\dot{\phi }i^{'}+\left ( \Omega +\dot{\beta }+\dot{\psi }sin\phi \right )j^{'}+\dot{\psi }cos\phi k^{'}$
$\mathbf{\omega}=\omega_{x}i^{'}+\omega_{y}j^{'}+\omega_{z}k^{'}$
where $\omega_{x}=\dot{\phi }, \omega_{y}=\left ( \Omega +\dot{\beta }+\dot{\psi }sin\phi \right ), \omega_{z}=\dot{\psi }cos\phi$ and the angular momentum of the wheel axle set in the body coordinate system
$\mathbf{H}=I_{wx}\omega_{x}i^{'}+I_{wy}\omega_{y}j^{'}+I_{wz}\omega_{z}k^{'}$
please note that because of symmetry(principal mass moments) $I_{wx}=I_{wz}$.
Angular velocity of coordinate axes
$\mathbf{\omega_{axis}}\times\mathbf{H}=\left ( \dot{\psi}sin\phi I_{wx}\dot{\psi}cos\phi-\dot{\psi}cos\phi I_{wy}\left ( \Omega +\dot{\beta }+\dot{\psi }sin\phi \right ) \right )i^{'}+\left ( \dot{\phi }I_{wy}\left ( \Omega +\dot{\beta }+\dot{\psi }sin\phi \right )-\dot{\psi }sin\phi I_{wx}\dot{\phi } \right )k^{'}$
$\mathbf{\omega_{axis}}=\dot{\phi }i^{'}+\dot{\psi }k^{''}=\dot{\phi }i^{'}+\dot{\psi }sin\phi j^{'}+\dot{\psi }cos\phi k^{'}$
The rate of change of momentum is given as
$\mathbf{dH/dt}=I_{wx}\dot{\omega_{x}}i^{'}+I_{wy}\dot{\omega_{y}}j^{'}+I_{wz}\dot{\omega_{z}}k^{'}+\mathbf{\omega_{axis}}\times\mathbf{H}$
This point is where I cannot get the same equation as in the book for the rate of change of momentum. The rate of change of momentum given in the fixed inertial frame is:
$\mathbf{dH/dt}=\left (I_{wx}\ddot \phi- I_{wy}\Omega \dot\psi \right )i^{'''}+I_{wy}\ddot \beta j^{'''}+\left (I_{wy}\Omega\dot \phi+ I_{wx}\ddot\psi \right ) k^{'''}$
Probably I am missing a simple point but I could not find what it is.
Last edited by a moderator: May 6, 2017
2. Sep 16, 2014
### altanonat
I am sorry, please do not consider this part:
$\mathbf{\omega_{axis}}\times\mathbf{H}=\left ( \dot{\psi}sin\phi I_{wx}\dot{\psi}cos\phi-\dot{\psi}cos\phi I_{wy}\left ( \Omega +\dot{\beta }+\dot{\psi }sin\phi \right ) \right )i^{'}+\left ( \dot{\phi }I_{wy}\left ( \Omega +\dot{\beta }+\dot{\psi }sin\phi \right )-\dot{\psi }sin\phi I_{wx}\dot{\phi } \right )k^{'}$
Probably I wrote (copy and paste from my notes) it by mistake. |
# A fifth primer: plane geometry tutorial for preRMO and RMO: core stuff
1. Show that three straight lines which join the middle points of the sides of a triangle, divide it into four triangles which are identically equal.
2. Any straight line drawn from the vertex of a triangle to the base is bisected by the straight line which joins the middle points of the other sides of the triangle.
3. ABCD is a parallelogram, and X, Y are the middle points of the opposite sides AD, BC: prove that BX and DY trisect the diagonal AC.
4. If the middle points of adjacent sides of any quadrilateral are joined, the figure thus formed is a parallelogram. Prove this.
5. Show that the straight lines which join the middle points of opposite sides of a quadrilateral bisect one another.
6. From two points A and B, and from O the mid-point between them, perpendiculars AP, BQ, OX are drawn to a straight line CD. If AP, BQ measure respectively 4.2 cm and 5.8 cm, deduce the length of OX. Prove that OX is one half the sum of AP and BQ, or $\frac{1}{2}(AP-BQ)$ or $\frac{1}{2}(BQ-AP)$, according as A and B are on the same side or on opposite sides of CD.
7. When three parallels cut off equal intercepts from two transversals, prove that of three parallel lengths between the two transversals the middle one is the Arithmetic Mean of the other two.
8. The parallel sides of a trapezium are a cm and b cm respectively. Prove that the line joining the middle points of the oblique sides is parallel to the parallel sides, and that its length is $\frac{1}{2}(a+b)$ cm.
9. OX and OY are two straight lines, and along OX five points 1, 2, 3, 4, 5 are marked at equal distances. Through these points parallels are drawn in any direction to meet OY. Measure the lengths of these parallels: take their average and compare it with the length of the third parallel. Prove geometrically that the third parallel is the mean of all five.
10. From the angular points of a parallelogram perpendiculars are drawn to any straight line which is outside the parallelogram: prove that the sum of the perpendiculars drawn from one pair of opposite angles is equal to the sum of those drawn from the other pair. (Draw the diagonals, and from their point of intersection suppose a perpendicular drawn to the given straight line.)
11. The sum of the perpendiculars drawn from any point in the base of an isosceles triangle to the equal sides is equal to the perpendicular drawn from either extremity of the base to the opposite side. (It follows that the sum of the distances of any point in the base of an isosceles triangle from the equal sides is constant, that is, the same whatever point in the base is taken.)
12. The sum of the perpendiculars drawn from any point within an equilateral triangle to the three sides is equal to the perpendicular drawn from any one of the angular points to the opposite side, and is therefore constant. Prove this.
13. Equal and parallel lines have equal projections on any other straight line. Prove this.
More later,
Cheers,
Nalin Pithwa.
# A primer for preRMO and RMO plane geometry with basic exercises
Plane geometry is axiomatic deductive logic. I present a quick review of "proofs" which can be "derived" in sequence, building up from the elementary theorems. So, for example, if there is a question like: prove that the three medians of a triangle are concurrent, please do not use black magic complicated machinery like Ceva's theorem, etc.; or even if, say, the question asks you to prove Ceva's theorem only, you have to prove it using elementary theorems like the ones presented below:
For the present purposes, I am skipping axioms and basic definitions and hypothetical constructions. I am using straight away the reference (v v v old text) : A School Geometry, Metric Edition by Hall and Stevens. (available almost everywhere in India):
Theorem 1:
The adjacent angles which one straight line makes with another straight line on one side of it are together equal to two right angles.
Corollary 1 of Theorem 1:
if two straight lines cut another, the four angles so formed are together equal to four right angles.
Corollary 2 of Theorem 1:
When any number of straight lines meet at a point, the sum of the consecutive angles so formed is equal to four right angles.
Corollary 3 of Theorem 1:
(a) Supplements of the same angle are equal. (ii) Complements of the same angle are equal.
Theorem 2 (converse of theorem 1):
If, at a point in a straight line, two other straight lines, on opposite sides of it, make the adjacent angles together equal to two right angles, then these two straight lines are in one and the same straight line.
Remark: this theorem can be used to prove stuff like three points are in a straight line.
Theorem 3:
If two straight lines cut one another, the vertically opposite angles are equal.
Theorem 4: SAS Test of Congruence of Two Triangles:
If two triangles have two sides of the one equal to two sides of the other, each to each, and the angles included by those sides equal, then the triangles are equal in all respects.
Theorem 5:
The angles at the base of an isosceles triangle are equal.
Corollary 1 of Theorem 5:
If the equal sides AB, AC of an isosceles triangle are produced, the exterior angles EBC, FCB are equal, for they are the supplements of the equal angles at the base.
Corollary 2 of Theorem 5:
If a triangle is equilateral, it is also equiangular.
Theorem 6:
If two angles of a triangle are equal to one another, then the sides which are opposite to the equal angles are equal to one another.
Corollary of Theorem 6:
Hence, if a triangle is equiangular, it is also equilateral.
Theorem 7 (SSS Test of Congruence of Two Triangles):
If two triangles have the three sides of the one equal to the three sides of the others, each to each, they are equal in all respects.
Theorem 8:
If one side of a triangle is produced then the exterior angle is greater than either of the interior opposite angles.
Corollary 1 to Theorem 8:
Any two angles of a triangle are together less than two right angles.
Corollary 2 to Theorem 8:
Every triangle must have at least two acute angles.
Corollary 3 to Theorem 8:
Only one perpendicular can be drawn to a straight line from a given point outside it.
Theorem 9 :
If one side of a triangle is greater than another, then the angle opposite of the greater side is greater than the angle opposite to the less.
Theorem 10:
If one angle of a triangle is greater than another, then the side opposite to the greater angle is greater than the side opposite to the less.
Theorem 11: Triangle Inequality:
Any two sides of a triangle are together greater than the third side.
Theorem 12: Another inequality sort of theorem:
Of all straight lines drawn from a given point to a given straight line the perpendicular is the least.
Corollary 1 to Theorem 12:
Hence, conversely, since there can be only one perpendicular and one shortest line from O to AB: if OC is the shortest straight line from O to AB, then OC is perpendicular to AB.
Corollary 2 to Theorem 12:
Two obliques OP, OQ which cut AB at equal distance from C, the foot of the perpendicular are equal.
Corollary 3 to Theorem 12:
Of two obliques OQ, OR, if OR cuts AB at the greater distance from C. the foot of the perpendicular, then OR is greater than OQ.
Theorem 13 :
If a straight line cuts two other straight lines so as to make: (i) the alternate angles equal or (ii) an exterior angle equal to the interior opposite angle on the same side of the cutting line or (iii) the interior angles on the same side equal to two right angles, then in each case, the two straight lines are parallel.
Theorem 14:
If a straight line cuts two parallel lines, it makes : (i) the alternate angles equal to one another; (ii) the exterior angle equal to the interior opposite angle on the same side of the cutting line (iii) the two interior angles on the same side together equal to two right angles.
Theorem 15:
Straight lines which are parallel to the same straight line are parallel to one another.
Theorem 16:
Sum of three interior angles of a triangle is 180 degrees.
Also, if a side of a triangle is produced the exterior angle is equal to the sum of the two interior opposite angles.
Corollary 1:
All the interior angles of one rectilinear figure, together with four right angles are equal to twice as many right angles as the figure has sides.
Corollary 2:
If the sides of a rectilinear figure, which has no reflex angle, are produced in order, then all the exterior angles so formed are together equal to four right angles.
Theorem 17: AAS test of congruence of two triangles:
If two triangles have two angles of one equal to two angles of the other, each to each, and any side of the first equal to the corresponding side of the other, the triangles are equal in all respects.
Theorem 18:
Two right angled triangles which have their hypotenuses equal, and one side of one equal to one side of the other are equal in all respects.
Theorem 19:
If two triangles have two sides of the one equal to two sides of the other, each to each, but the angle included by the two sides of one greater than the angle included by the two corresponding sides of the other, then the base of that which has the greater angle is greater than the base of the other.
Conversely,
if two triangles have two sides of the one equal to two sides of the other, each to each, but the base of one greater than the base of the other, then the angle contained by the sides of that which has the greater base is greater than the angle contained by the corresponding sides of the other.
Theorem 20:
The straight lines which join the extremities of two equal and parallel straight lines towards the same parts are themselves equal and parallel.
Theorem 21:
The opposite sides and angles of a parallelogram are equal to one another, and each diagonal bisects the parallelogram.
Corollary 1 to Theorem 21:
If one angle of a parallelogram to a right angle, all its angles are equal.
Corollary 2 to Theorem 21:
All the sides of a square are equal and all its angles are right angles.
Corollary 3 to Theorem 21:
The diagonals of a parallelogram bisect each other.
Theorem 22:
If there are three or more parallel straight lines, and the intercepts made by them on any transversal are equal, then the corresponding intercepts on any other transversal are also equal.
Tutorial exercises based on the above:
Problem 1: In the triangle ABC, the angles ABC, ACB are given equal. If the side BC is produced both ways, show that the exterior angles so formed are equal.
Problem 2: In the triangle ABC, the angles ABC, ACB are given equal. If AB and AC are produced beyond the base, show that the exterior angles so formed are equal.
Problem 3: Prove that the bisectors of the adjacent angles which one straight line makes with another contain a right angle. That is to say, the internal and external bisectors of an angle are at right angles to one another.
Problem 4: If from O a point in AB two straight lines OC, OD are drawn on opposite sides of AB so as to make the angle COB equal to the angle AOD, show that OC and OD are in the same straight line.
Problem 5: Two straight lines AB, CD cross at O. If OX is the bisector of the angle BOD, prove that XO produced bisects the angle AOC.
Problem 6: Two straight lines AB, CD cross at O. If the angle BOD is bisected by OX, and AOC by OY, prove that OX, OY are in the same straight line.
Problem 7: Show that the bisector of the vertical angle of an isosceles triangle (i) bisects the base (ii) is perpendicular to the base.
Problem 8: Let O be the middle point of a straight line AB, and let OC be perpendicular to it. Then, if P is any point in OC, prove that PA=PB.
Problem 9: Assuming that the four sides of a square are equal, and that its angles are all right angles, prove that in the square ABCD, the diagonals AC, BD are equal.
Problem 10: Let ABC be an isosceles triangle: from the equal sides AB, AC two equal parts AX, AY are cut off, and BY and CX are joined. Prove that BY=CX.
Problem 11: ABCD is a four-sided figure whose sides are all equal, and the diagonal BD is drawn : show that (i) the angle ABD = the angle ADB (ii) the angle CBD = the angle CDB (iii) the angle ABC = the angle ADC.
Problem 12: ABC, DBC are two isosceles triangles drawn on the same base BC, but on opposite sides of it: prove that the angle ABD = the angle ACD.
Problem 13: ABC, DBC are two isosceles triangles drawn on the same base BC, but on the same side of it: prove that the angle ABD = the angle ACD.
Problem 14: AB, AC are the equal sides of an isosceles triangle ABC, and L, M, N are the middle points of AB, BC and CA respectively; prove that (i) LM = NM (ii) BN = CL (iii) the angle ALM = the angle ANM.
Problem 15: Show that the straight line which joins the vertex of an isosceles triangle to the middle points of the base (i) bisects the vertical angle (ii) is perpendicular to the base.
Problem 16: If ABCD is a rhombus, that is, an equilateral four sided figure, show by drawing the diagonal AC that (i) the angle ABC = the angle ADC (ii) AC bisects each of the angles BAD and BCD.
Problem 17: If in a quadrilateral ABCD the opposite sides are equal, namely, AB = CD and AD=CB, prove that the angle ADC = the angle ABC.
Problem 18: If ABC and DBC are two isosceles triangles drawn on the same base BC, prove that the angle ABD = the angle ACD, taking (i) the case where the triangles are on the same side of BC (ii) the case where they are on the opposite sides of BC.
Problem 19: If ABC, DBC are two isosceles triangles drawn on opposite sides of the same base BC, and if AD be joined, prove that each of the angles BAC, BDC will be divided into two equal parts.
Problem 20: Show that the straight lines which join the extremities of the base of an isosceles triangle to the middle points of the opposite sides are equal to one another.
Problem 21: Two given points in the base of an isosceles triangle are equidistant from the extremities of the base: show that they are also equidistant from the vertex.
Problem 22: Show that the triangle formed by joining the middle points of the sides of an equilateral triangle is also equilateral.
Problem 23: ABC is an isosceles triangle having AB equal to AC, and the angles at B and C are bisected by BO and CO: prove that (i) BO = CO (ii) AO bisects the angle BAC.
Problem 24: Show that the diagonals of a rhombus bisect one another at right angles.
Problem 25: The equal sides BA, CA of an isosceles triangle BAC are produced beyond the vertex A to the points E and F, so that AE is equal to AF and FB, EC are joined: prove that FB is equal to EC.
Problem 26: ABC is a triangle and D any point within it. If BD and CD are joined, the angle BDC is greater than the angle BAC. Prove this (i) by producing BD to meet AC (ii) by joining AD, and producing it towards the base.
Problem 27: If any side of a triangle is produced both ways, the exterior angles so formed are together greater than two right angles.
Problem 28: To a given straight line, there cannot be drawn from a point outside it more than two straight lines of the same given length.
Problem 29: If the equal sides of an isosceles triangle are produced, the exterior angles must be obtuse.
Note: The problems 30 to 43 are based on triangle inequalities:
Problem 30: The hypotenuse is the greatest side of a right angled triangle.
Problem 31: The greatest side of any triangle makes acute angles with each of the other sides.
Problem 32: If from the ends of a side of a triangle, two straight lines are drawn to a point within the triangle, then those straight lines are together less than the other two sides of the triangle.
Problem 33: BC, the base of an isosceles triangle ABC is produced to any point D; prove that AD is greater than either of the equal sides.
Problem 34: If in a quadrilateral the greatest and least sides are opposite to one another, then each of the angles adjacent to the least side is greater than its opposite angle.
Problem 35: In a triangle ABC, in which OB, OC bisect the angles ABC, ACB respectively: prove that if AB is greater than AC, then OB is greater than OC.
Problem 36: The difference of any two sides of a triangle is less than the third side.
Problem 37: The sum of the distances of any point from the three angular points of a triangle is greater than half its perimeter.
Problem 38: The perimeter of a quadrilateral is greater than the sum of its diagonals.
Problem 39: ABC is a triangle, and the vertical angle BAC is bisected by a line which meets BC in X, show that BA is greater than BX, and CA greater than CX. Obtain a proof of the following theorem : Any two sides of a triangle are together greater than the third side.
Problem 40: The sum of the distance of any point within a triangle from its angular points is less than the perimeter of the triangle.
Problem 41: The sum of the diagonals of a quadrilateral is less than the sum of the four straight lines drawn from the angular points to any given point. Prove this, and point out the exceptional case.
Problem 42: In a triangle any two sides are together greater than twice the median which bisects the remaining side.
Problem 43: In any triangle, the sum of the medians is less than the perimeter.
Problem 44: Straight lines which are perpendicular to the same straight line are parallel to one another.
Problem 45: If a straight line meets two or more parallel straight lines, and is perpendicular to one of them, it is also perpendicular to all the others.
Problem 46: Angles of which the arms are parallel each to each are either equal or supplementary.
Problem 47: Two straight lines AB, CD bisect one another at O. Show that the straight line joining AC and BD are parallel.
Problem 48: Any straight line drawn parallel to the base of an isosceles triangle makes equal angles with the sides.
More later. Get cracking. This is perhaps the simplest introduction, step by step, to axiomatic deductive logic…discovered by Euclid about 2500 years ago! Hail Euclid!
Cheers,
Nalin Pithwa
# Ratio and proportion: practice problems: set II: pRMO, preRMO or IITJEE foundation maths
Problem 1:
If $\frac{y+z}{pb+qc} = \frac{z+x}{pc+qa} = \frac{x+y}{pa+qb}$, then show that $\frac{2(x+y+z)}{a+b+c} = \frac{(b+c)x+(c+a)y+(a+b)z}{bc+ca+ab}$
Problem 2:
If $\frac{x}{a} = \frac{y}{b} = \frac{z}{c}$, show that $\frac{x^{3}+a^{3}}{x^{2}+a^{2}} +\frac{y^{3}+b^{3}}{y^{2}+b^{2}} + \frac{z^{3}+c^{3}}{z^{2}+c^{2}} = \frac{(x+y+z)^{3}+(a+b+c)^{3}}{(x+y+z)^{2}+(a+b+c)^{2}}$
Problem 3:
If $\frac{2y+2z-x}{a} = \frac{2z+2x-y}{b} = \frac{2x+2y-z}{c}$, show that $\frac{x}{2b+2c-a} = \frac{y}{2c+2a-b} = \frac{z}{2a+2b-c}$
Problem 4:
If $(a^{2}+b^{2}+c^{2})(x^{2}+y^{2}+z^{2}) = (ax+by+cz)^{2}$, prove that $x:a = y:b = z:c$
Problem 5:
If $l(my+nz-lx) = m(nz+lx-my) = n(lx+my-nz)$, prove that $\frac{y+z-x}{l} = \frac{z+x-y}{m} = \frac{x+y-z}{n}$
Problem 6:
Show that the eliminant of
$ax+cy+bz=0$
$cx+by+az=0$
$bx+ay+cz=0$
is $a^{3}+b^{3}+c^{3}-3abc=0$
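(Hint: a non-trivial solution in $x, y, z$ forces the determinant of the coefficients to vanish, and expanding $\begin{vmatrix} a & c & b\\ c & b & a\\ b & a & c \end{vmatrix} = a(bc-a^{2})-c(c^{2}-ab)+b(ca-b^{2}) = 3abc-a^{3}-b^{3}-c^{3}$ gives exactly the stated condition.)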
Problem 7:
Eliminate x, y, z from the equations:
$ax+hy+gz=0$
$hx+by+fz=0$
$gx+fy+cz=0$.
This has significance in co-ordinate geometry (related to conics).
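(Hint: as in the previous problem, the eliminant is the vanishing of the determinant $\begin{vmatrix} a & h & g\\ h & b & f\\ g & f & c \end{vmatrix}$, i.e. $abc+2fgh-af^{2}-bg^{2}-ch^{2}=0$, which is the familiar condition for the general second-degree equation to represent a pair of straight lines.)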
Problem 8:
If $x=cy+bz$, $y=az+cx$, $z=bx+cy$, show that $\frac{x^{2}}{1-a^{2}} = \frac{y^{2}}{1-b^{2}} = \frac{z^{2}}{1-c^{2}}$.
Problem 9:
Given that $a(y+z)=x$, $b(z+x)=y$, $c(x+y)=z$, prove that $bc+ab+ca+2abc=1$
Problem 10:
Solve the following system of equations:
$3x-4y+7z=0$
$2x-y-2z=0$
$3x^{3}-y^{3}+z^{3}=18$
Problem 11:
Solve the following system of equations:
$x+y=z$
$3x-2y+17z=0$
$x^{3}+3y^{3}+2z^{3}=167$
Problem 12:
Solve the following system of equations:
$7yz+3zx=4xy$
$21yz-3zx=4xy$
$x+2y+3z=19$
Problem 13:
Solve the following system of equations:
$3x^{2}-2y^{2}+5z^{2}=0$
$7x^{2}-3y^{2}-15z^{2}=0$
$5x-4y+7z=0$
Problem 14:
If $\frac{l}{\sqrt{a}-\sqrt{b}} + \frac{m}{\sqrt{b}-\sqrt{c}} + \frac{n}{\sqrt{c}-\sqrt{a}} =0$,
and $\frac{l}{\sqrt{a}+\sqrt{b}} + \frac{m}{\sqrt{b}+\sqrt{c}} + \frac{n}{\sqrt{c}+\sqrt{a}} = 0$,
prove that $\frac{l}{(a-b)(c-\sqrt{ab})} = \frac{m}{(b-c)(a-\sqrt{bc})} = \frac{n}{(c-a)(b-\sqrt{ca})}$
Problem 15:
Solve the following system of equations:
$ax+by+cz=0$
$bcx+cay+abz=0$
$xyz+abc(a^{3}x+b^{3}y+c^{3}z)=0$
Cheers,
Nalin Pithwa
# Method of undetermined coefficients for PreRMO, PRMO and IITJEE Foundation maths
1. Find out when the expression $x^{3}+px^{2}+qx+r$ is exactly divisible by $x^{2}+ax+b$
Solution 1:
Let $x^{3}+px^{2}+qx+r=(x^{2}+ax+b)(Ax+B)$ where A and B are to be determined in terms of p, q, r, a and b. We can assume so because we know from the fundamental theorem of algebra that if the LHS has to be of degree three in x, the remaining factor on the RHS has to be linear in x.
So, expanding out the RHS of above, we get:
$x^{3}+px^{2}+qx+r=Ax^{3}+aAx^{2}+bAx+Bx^{2}+Bax+bB$
$x^{3}+px^{2}+qx+r=Ax^{3}+(aA+B)x^{2}+x(bA+aB)+bB$
We are saying that the above is true for all values of x: hence, coefficients of like powers of x on LHS and RHS are same; we equate them and get a system of equations:
$A=1$
$p=aA+B$
$bA+aB=q$
$bB=r$
Hence, we get $p=a+\frac{r}{b}$ and $bp-ba=r$ or that $b(p-a)=r$
Also, $b+aB=q$ so that $q=b+\frac{ar}{b}$ which means $q-b=\frac{a}{b}r$
but $\frac{r}{b}=B=p-a$ and hence, $q-b=a(p-a)$
So, the required conditions are $b(p-a)=r$ and $q-b=a(p-a)$.
2) Find the condition that $x^{2}+px+q$ may be a perfect square.
Solution 2:
Let $x^{2}+px+q=(Ax+B)^{2}$ where A and B are to be determined in terms of p and q; finally, we obtain the relationship required between p and q for the above requirement.
$x^{2}+px+q=A^{2}x^{2}+B^{2}+2ABx$ which is true for all real values of x;
Hence, $A^{2}=1$ so $A=1$ or $A=-1$
Also, $B^{2}=q$ and hence, $B=\sqrt{q}$ or $B=-\sqrt{q}$
Also, $2AB=p$ so that $2\sqrt{q}=p$ so $q=\frac{p^{2}}{4}$, which is the required condition.
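As a quick check, take $p=6$: then $q=\frac{p^{2}}{4}=9$, and indeed $x^{2}+6x+9=(x+3)^{2}$.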
3) To prove that $x^{4}+px^{3}+qx^{2}+rx+s$ is a perfect square if $(q-\frac{p^{2}}{4})^{2}=4s$ and $r^{2}=p^{2}s$.
Proof 3:
Let $x^{4}+px^{3}+qx^{2}+rx+s=(Ax^{2}+Bx+C)^{2}$
$x^{4}+px^{3}+qx^{2}+rx+s=A^{2}x^{4}+B^{2}x^{2}+C^{2}+2ABx^{3}+2BCx+2ACx^{2}$
$A^{2}=1$
$2AB=p$
$q=B^{2}+2AC$
$2BC=r$
$C^{2}=s$
$A=1$ or $A=-1$
$2AB=p \longrightarrow 2B=p \longrightarrow B=\frac{p}{2}$
$q=B^{2}+2AC=\frac{p^{2}}{4}+2\times \sqrt{s} \longrightarrow (q-\frac{p^{2}}{4})^{2}=4s$
$2 \times \frac{p}{2} \times \sqrt{s}=r \longrightarrow r^{2}=p^{2}s$
More later,
Nalin Pithwa.
PS: Note in the method of undetermined coefficients, we create an identity expression which is true for all real values of x.
# Check your mathematical induction concepts
Discuss the following “proof” of the (false) theorem:
If n is any positive integer and S is a set containing exactly n real numbers, then all the numbers in S are equal:
PROOF BY INDUCTION:
Step 1:
If $n=1$, the result is evident.
Step 2: By the induction hypothesis the result is true when $n=k$; we must prove that it is correct when $n=k+1$. Let S be any set containing exactly $k+1$ real numbers and denote these real numbers by $a_{1}, a_{2}, a_{3}, \ldots, a_{k}, a_{k+1}$. If we omit $a_{k+1}$ from this list, we obtain exactly k numbers $a_{1}, a_{2}, \ldots, a_{k}$; by induction hypothesis these numbers are all equal:
$a_{1}=a_{2}= \ldots = a_{k}$.
If we omit $a_{1}$ from the list of numbers in S, we again obtain exactly k numbers $a_{2}, \ldots, a_{k}, a_{k+1}$; by the induction hypothesis these numbers are all equal:
$a_{2}=a_{3}=\ldots = a_{k}=a_{k+1}$.
It follows easily that all $k+1$ numbers in S are equal.
*************************************************************************************
Regards,
Nalin Pithwa
# Miscellaneous Algebra: pRMO, IITJEE foundation maths 2019
For the following tutorial problems, it helps to know/remember/understand/apply the following identities (in addition to all other standard/famous identities you learn in high school maths):
$a^{3}+b^{3}+c^{3}-3abc=(a+b+c)(a^{2}+b^{2}+c^{2}-ab-bc-ca)$
By the way, I hope you also know how to derive the above.Let me mention two methods to derive the above :
Method I: Using polynomial division in three variables, divide the dividend $a^{3}+b^{3}+c^{3}-3abc$ by the divisor $a+b+c$.
Method II: Assume that $P(X)$ is a polynomial with roots a, b and c. So, we know by the fundamental theorem of algebra that $P(X)=(X-a)(X-b)(X-c)$. Now, we also know that a, b and c satisfy P(X). Now, proceed further and complete the proof.
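For completeness, here is a sketch of how Method II finishes: by Vieta's formulas, $P(X)=X^{3}-(a+b+c)X^{2}+(ab+bc+ca)X-abc$. Adding the three equations $P(a)=0$, $P(b)=0$, $P(c)=0$ gives $a^{3}+b^{3}+c^{3}-3abc=(a+b+c)(a^{2}+b^{2}+c^{2})-(ab+bc+ca)(a+b+c)=(a+b+c)(a^{2}+b^{2}+c^{2}-ab-bc-ca)$.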
Let us now work on the tutorial problems below:
1) If $2s=a+b+c$, prove that $\frac{1}{s-a} + \frac{1}{s-b} + \frac{1}{s-c} = \frac{abc}{s(s-a)(s-b)(s-c)}$
2) If $x^{2}+u^{2}=2(xy+yz+zu-y^{2}-z^{2})$, prove that $x=y=z=u$.
Prove the following identities:
3) $b(x^{3}+a^{3})+ax(x^{2}-a^{2})+a^{3}(x+a)=(a+b)(x+a)(x^{2}-ax+a^{2})$
4) $(ax+by)^{2}+(ay-bx)^{2}+c^{2}x^{2}+c^{2}y^{2}=(x^{2}+y^{2})(a^{2}+b^{2}+c^{2})$
5) $(x+y)^{3}+ 3(x+y)^{2}z+3(x+y)z^{2}+z^{3}=(x+z)^{3}+3(x+z)^{2}y+3(x+z)y^{2}+y^{3}$
6) $(a+b+c)(ab+bc+ca)-abc=(a+b)(b+c)(c+a)$
7) $(a+b+c)^{2}-a(b+c-a)-b(a+c-b)-c(a+b-c)=2(a^{2}+b^{2}+c^{2})$
8) $(x-y)^{3}+(x+y)^{3}+3(x-y)^{2}(x+y)+3(x+y)^{2}(x-y)=8x^{3}$
9) $x^{2}(y-z)+y^{2}(z-x)+z^{2}(x-y)+(y-z)(z-x)(x-y)=0$
10) $a^{3}(b-c)+b^{3}(c-a)+c^{3}(a-b)=-(b-c)(c-a)(a-b)(a+b+c)$
11) Prove that $(b-c)^{3}+(c-a)^{3}+(a-b)^{3}=3(b-c)(c-a)(a-b)$
12) If $2s=a+b+c$, prove that $(s-a)^{2}+(s-b)^{2}+(s-c)^{2}+s^{2}=a^{2}+b^{2}+c^{2}$
13) If $2s=a+b+c$, prove that $(s-a)^{3}+(s-b)^{3}+(s-c)^{3}+3abc=s^{3}$
14) If $2s=a+b+c$, prove that $16s(s-a)(s-b)(s-c)=2b^{2}c^{2}+2c^{2}a^{2}+2a^{2}b^{2}-a^{4}-b^{4}-c^{4}$
15) If $2s=a+b+c$, then prove that $2(s-a)(s-b)(s-c)+a(s-b)(s-c)+b(s-c)(s-a)+c(s-a)(s-b)=abc$
16) If $a+b+c=0$, then prove that $(2a-b)^{3}+(2b-c)^{3}+(2c-a)^{3}=3(2a-b)(2b-c)(2c-a)$
17) If $a+b+c=0$, then prove that $\frac{a^{2}}{2a^{2}+bc} + \frac{b^{2}}{2b^{2}+ca} + \frac{c^{2}}{2c^{2}+ab} =1$
18) Prove that $(x+y+z)^{3}+(x+y-z)^{3}+(x-y+z)^{3}+(x-y-z)^{3}=4x(x^{2}+3y^{2}+3z^{2})$
19) If $a+b+c=s$, prove that $(s-3a)^{3}+(s-3b)^{3}+(s-3c)^{3}-3(s-3a)(s-3b)(s-3c)=0$
20) If $X=b+c-2a$, $Y=c+a-2b$, $Z=a+b-2c$, find the value of $X^{3}+Y^{3}+Z^{3}-3XYZ$
21) Prove that $(a-b)^{2}+(b-c)^{2}+(c-a)^{2}=2(c-b)(c-a)+2(b-a)(b-c)+2(a-b)(a-c)$
22) Prove that $a^{2}(b^{3}-c^{3})+b^{2}(c^{3}-a^{3})+c^{2}(a^{3}-b^{3})=(a-b)(b-c)(c-a)(ab+bc+ca)=a^{2}(b-c)^{3}+b^{2}(c-a)^{3}+c^{2}(a-b)^{3} = -[a^{2}b^{2}(a-b)+b^{2}c^{2}(b-c)+c^{2}a^{2}(c-a)]$
23) If $(a+b)^{2}+(b+c)^{2}+(c+d)^{2}=4(ab+bc+cd)$, prove that $a=b=c=d$.
24) If $x=a+d$, $y=b+d$, $z=c+d$, prove that $x^{2}+y^{2}+z^{2}-yz-zx-xy=a^{2}+b^{2}+c^{2}-bc-ca-ab$
25) If $a+b+c=0$, prove that $\frac{1}{b^{2}+c^{2}-a^{2}}+ \frac{1}{c^{2}+a^{2}-b^{2}} + \frac{1}{a^{2}+b^{2}-c^{2}}=0$
26) If $a+b+c=0$, simplify: $\frac{b+c}{bc}(b^{2}+c^{2}-a^{2}) + \frac{c+a}{ca} (c^{2}+a^{2}-b^{2})+ \frac{a+b}{ab}(a^{2}+b^{2}-c^{2})$
27) Prove that the equation $(x-a)^{2}+(y-b)^{2}+(a^{2}+b^{2}-1)(x^{2}+y^{2}-1)=0$ is equivalent to the equation $(ax+by-1)^{2}+(bx-ay)^{2}=0$, hence show that the only possible values of x and y are: $\frac{a}{a^{2}+b^{2}}$, $\frac{b}{a^{2}+b^{2}}$
28) If $2(x^{2}+a^{2}-ax)(y^{2}+b^{2}-by)=x^{2}y^{2}+a^{2}b^{2}$, prove that $(x-a)^{2}(y-b)^{2}+(bx-ay)^{2}=0$ and therefore that $a=x$ and $y=b$ are the only possible solutions.
Good luck for the PreRMO August 2019!!
Regards,
Nalin Pithwa |
What is 48.3% of 1500?
May 19, 2017
Answer:
$724.5$
Explanation:
First, rewrite 48.3% as a fraction:
$\frac{48.3}{100}$
Second, multiply $1500$ by the fraction:
$1500 \left(\frac{48.3}{100}\right) = 724.5$ |
11-131.
Find the equation of the line that is perpendicular to$y = \frac { 1 } { 2 } x - 3$ and passes through the point $\left(10, 14\right)$.
A perpendicular line has a slope that is the opposite reciprocal of the slope of the given line.
Using $y = mx + b$, substitute the $m$, $y$, and $x$. Now solve for $b$.
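Substituting $m = -2$ with the point $\left(10, 14\right)$: $14 = -2(10) + b$, so $b = 34$.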
$y = −2x + 34$
The lines are shown to the right. |
# 3-coloring planar graphs in $O\left(3^{n^{0.5}}\right)$?
I was wondering if the task of searching for planar 3-colorings is known to be of complexity $$O\left(c^{\sqrt{n}}\right)$$ or lower? This feels like it would be an intuitive consequence of planar separator results, yet the Wikipedia article only mentions independent sets, Steiner trees, Hamiltonian cycles, and TSP. Below I include some reasoning which I think almost achieves this bound.
With a zero-suppressed decision diagram (ZDD), I believe you can get $$O\left(c^{O(\log_2(n)\sqrt{n})}\right)$$, and I was curious how I could do better. What I came up with is rather rudimentary. Note: throughout, the ZDD I describe is ternary, but I don't think that greatly matters. For the ZDD, given an ordering, $$L = \{v_1 \dots v_n\}$$, of vertices to color, the number of nodes at step $$i$$ will be exponential with respect to the size of the frontier, $$F_i = \{v_k \mid k < i \land v_k \sim v_j, j \geq i \}$$.
To create your ordering $$L$$, you may create an optimal branch-decomposition tree, $$b$$, in polynomial time, which has width $$O(\sqrt{n})$$. Then, select a random leaf $$v'$$ of $$b$$ to be your root. With a BFS, weight each edge $$e$$ by the number of leaves disconnected from $$v'$$ if you were to remove $$e$$ from $$b$$. Then, do a DFS to finally create $$L$$, always going down the edge furthest from $$v'$$, choosing the one with least weight if there is a tie, and choosing arbitrarily if there is still a tie. When we reach a leaf $$(u,v)$$, add $$u$$/$$v$$ to $$L$$ if either is not in $$L$$. Let $$c_i$$ be the component induced in $$b$$ by the vertices visited when we added $$v_i$$ to $$L$$. Then, $$F_i$$ is bounded by the branchwidth times the number of edges, $$x_i$$, that need to be removed from $$b$$ to get the component $$c_i$$. $$x_i$$ is bounded roughly by $$\log_2$$ of the number of vertices in $$b$$, which is linear in $$n$$ since we're dealing with planar graphs.
With that, you check all three colors for each node for each of the $$n$$ frontiers and you’re done.
• Why was this question downvoted? – Sasho Nikolov Aug 20 at 1:40
• It is not hard to find a DP algorithm that runs in $3^k poly(n)$ to check whether a graph with treewidth $k$ can be colored with 3 colors. Since planar graphs have treewidth $O(\sqrt{n})$ your desired time bound follows. – Chandra Chekuri Aug 20 at 2:06
• Planar separator theorem suffices to obtain a tree decomposition of width $O(\sqrt{n})$ in polynomial-time. You don't need an exact algorithm for the claimed running time. Also there is a constant factor approximation for treewidth in planar graphs. These are well-known results. – Chandra Chekuri Aug 20 at 2:47
• A minor comment: Since the $\sqrt n$ in the exponent has a constant factor in front of it (stemming from the size of the separator respectively the treewidth), the base $3$ should be a base $const$ everywehere: $O(c^{\sqrt{n}})$. – Gamow Aug 20 at 4:34
• So we know it is doable in $O(c^{\sqrt n})$ for some c which does not fully answer the question. – Hermann Gruber Aug 20 at 7:17
In short, Gu and Tamaki give a quadratic time algorithm which finds a branch-decomposition of a planar graph of width at most $$3\sqrt{n}$$. Then Robertson and Seymour in (5.1) give a tree-decomposition of width less than $$\frac{9\sqrt{n}}{2}$$. Then the classical dynamic programming algorithm (see, e.g., Marx) solves $$3$$-Coloring in time $$3^{\frac{9\sqrt{n}}{2}}\textrm{poly}(n)<141^{\sqrt{n}}$$.
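As a quick arithmetic check on the stated constant: $$3^{\frac{9}{2}} = \sqrt{3^{9}} = \sqrt{19683} \approx 140.3 < 141$$, which is where the base $$141$$ comes from.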
On the other hand, it is known (Lichtenstein) that under the Exponential Time Hypothesis (ETH), the Planar $$3$$-SAT problem is $$2^{\Omega(\sqrt{n})}$$-hard. And a reduction from Planar $$3$$-SAT to Planar $$3$$-Coloring implies that under ETH there is no algorithm solving Planar $$3$$-Coloring in time $$2^{o(\sqrt{n})}$$.
• @Saeed, do we also know that the branchwidth of a planar graph is upper-bounded by $\sqrt{n}$? – Alex Golovnev Aug 26 at 21:24
• Good point, I remember Fomin et al had a paper showing the upper bound of almost $2\sqrt n$. Don't know what is the best upper bound now. On the other hand I think if we really want to shave off the exponent, it should be possible to directly employ dynamic programming based on branch decomposition, without transformation to tree decomposition (it might already exist in the literature or if not I think it is doable to do it in a good time). – Saeed Aug 27 at 7:06 |
HAL : in2p3-00740359, version 1
arXiv : 1209.2328
Physical Review D 86 (2012) 105025
Radiative Corrections to the Neutralino Dark Matter Relic Density - an Effective Coupling Approach
(2012)
In the framework of the minimal cosmological standard model, the $\Lambda$CDM model, the Dark Matter density is now known with an error of a few percent; this error is expected to shrink even further once PLANCK data are analyzed. Matching this precision by theoretical calculations implies that at least leading radiative corrections to the annihilation cross section of the dark matter particles have to be included. Here we compute one kind of large corrections in the context of the minimal supersymmetric extension of the Standard Model: corrections associated with two-point function corrections on chargino and neutralino lines. These can be described by effective chargino/neutralino-fermion-sfermion and chargino/neutralino-chargino/neutralino-Higgs couplings. We also employ one-loop corrected chargino and neutralino masses, using a recently developed version of the on-shell renormalization scheme. The resulting correction to the predicted Dark Matter density depends strongly on parameter space, but can easily reach 3%.
Subject(s): Physics / High Energy Physics - Phenomenology
Link to full text: http://fr.arXiv.org/abs/1209.2328
# DIFFICULTY:
Medium-Hard
# PREREQUISITES:
Trees, Segment tree, DFS order
# PROBLEM:
You are given a rooted tree with N nodes. If you select a node u, then all nodes in u's subtree at distance \le K from u will be covered. For each K from 1 to N, find the minimum number of nodes that must be selected to cover all nodes, and output the sum of these N answers.
# QUICK EXPLANATION
A greedy solution of always selecting the K-th ancestor of the deepest uncovered node is correct. We can prove that the sum of answers is O(N\log N), so we will perform each greedy step in O(\log N) time with segment tree operations, making the total time complexity O(N \log ^2 N).
# EXPLANATION:
Let’s consider the deepest uncovered node u. Somehow, we need to cover it, so one of the K+1 ancestors of u should be selected (including u). Which of those K+1 ancestors do we select?
Observation 1. If v is an ancestor of u with distance \le K, then after selecting v, all nodes in v's subtree will be covered.
Proof
u is the deepest uncovered node, so every uncovered node w in the subtree of v is at depth no greater than that of u. Hence the distance from v to w is at most the distance from v to u, which is \le K, so v also has distance \le K from every uncovered node in its subtree.
Observation 2. It is always optimal to select the K-th ancestor (or the root if there is no K-th ancestor) of the deepest uncovered node.
Proof
It is always at least as good to select the highest ancestor that can still cover u, because the uncovered nodes of a higher subtree form a superset of the uncovered nodes of a lower subtree.
Using observations 1 and 2, we can formulate a simple greedy solution, shown below:
• While there exists an uncovered node:
• Find the deepest uncovered node and let it be u.
• Let v be the K-th ancestor of u or the root if u has no K-th ancestor.
• Select v and cover all nodes in v's subtree.
Observation 3. The sum of answers is O(N \log N).
Proof
Every time we select v, we cover at least the nodes on the path from u to v, which covers at least K+1 uncovered nodes (except for the case when v is the root).
So the answer for each K is at most \frac{N}{K+1}+1. It is well-known that the sum of this expression over all K is O(N \log N) (it can be proven easily with calculus).
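To make the calculus step explicit: \sum_{K=1}^{N} \left( \frac{N}{K+1}+1 \right) \le N \sum_{K=1}^{N} \frac{1}{K+1} + N \le N(\ln N + 1) + N = O(N \log N), where the harmonic sum is bounded by comparing it with \int_{1}^{N} \frac{dx}{x} = \ln N.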
This means that if we can somehow perform each greedy step in O(\log N), then we will have a total time complexity of O(N \log^2 N).
Finding the K-th ancestor of u in O(\log N) is standard and the most common way to solve it is using Binary Lifting. This leaves us with 1. finding the deepest uncovered node and 2. covering the entire subtree of v efficiently.
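For reference, here is a minimal sketch of that lookup (the names up, LOG, and buildTable are illustrative only and do not come from the solutions below; the 17 that appears in the solutions comes from 2^{17} > 10^5):

```
#include <bits/stdc++.h>
using namespace std;

const int LOG = 17;      // 2^17 > 1e5, so 17 levels suffice for N <= 1e5
int up[100005][LOG];     // up[u][j] = 2^j-th ancestor of u; the root's parent is the root itself

// Assumes up[u][0] was set to u's parent during a DFS (and up[root][0] = root).
void buildTable(int n) {
    for (int j = 1; j < LOG; ++j)
        for (int u = 0; u < n; ++u)
            up[u][j] = up[up[u][j-1]][j-1];
}

// Jumps k steps toward the root in O(log N) by decomposing k into powers of two.
int kthAncestor(int u, int k) {
    for (int j = 0; j < LOG; ++j)
        if (k >> j & 1)
            u = up[u][j];
    return u;
}
```

With the root's parent set to the root itself, overshooting past the root safely returns the root, which matches the greedy step of taking "the root if u has no K-th ancestor".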
Subtree queries are a sign that we should use DFS preorder to reduce the subtree queries into range queries on an array. After applying DFS preorder, our queries are 1. finding the node with maximum depth on the entire array and 2. setting a range of nodes to -1 (to simulate covering the nodes). To support these two queries in O(\log N), we just need a segment tree which supports range maximum queries and range set updates.
There’s one last thing: At the start of calculating the answer for each K, we need to undo the changes from covering the nodes while calculating the answer for the previous K. Note that rebuilding the segment tree naively for each K is not an option because it will cause the entire solution to be O(N^2). One simple (if you already know persistent segment trees well enough) solution is to just use a persistent segment tree. Another way you could uncover the nodes is to store the changes you make to the segment tree array and recover them before calculating the next K. Check my implementation for more details.
# SOLUTIONS:
Setter's Solution
//+-- -- --++-- +-In the name of ALLAH-+ --++-- -- --+ \\
#include <iostream>
#include <algorithm>
#include <fstream>
#include <vector>
#include <deque>
#include <assert.h>
#include <queue>
#include <stack>
#include <set>
#include <map>
#include <stdio.h>
#include <string.h>
#include <utility>
#include <math.h>
#include <bitset>
#include <iomanip>
#include <complex>
#define F first
#define S second
#define _sz(x) (int)x.size()
#define pb push_back
using namespace std ;
using ll = long long ;
using ld = long double ;
using pii = pair <int , int> ;
const int N = 1e5 + 20 ;
int n , ans = 0 ;
int st[N] , ft[N] , h[N] , tme , per[N] ;
vector <int> g[N] , vec[N] ;
struct node {
int mxh , v = -1 ;
int lazy ;
int hish , hisv ;
} seg[N << 2] ;
void clear() {
ans = tme = 0;
for (int i=0; i<n; i++) st[i] = ft[i] = h[i] = per[i] = 0;
for (int i=0; i<n; i++) g[i].clear(), vec[i].clear();
for (int i=0; i<4*n; i++) seg[i].mxh = seg[i].lazy = seg[i].hish = seg[i].hisv = 0, seg[i].v = -1;
}
void pre_dfs (int v , int par = -1) {
st[v] = tme ++ ;
per[st[v]] = v ;
vec[h[v]].pb(v) ;
for (int u : g[v]) {
if (u == par) continue ;
h[u] = h[v] + 1 ;
pre_dfs(u , v) ;
}
ft[v] = tme ;
}
#define lc (v << 1)
#define rc (lc ^ 1)
#define mid (s + e) >> 1
void change (int v , int val) {
seg[v].mxh = val ;
seg[v].lazy = val ;
}
void shift (int v) {
if (seg[v].hish == -1) seg[v].hish = seg[v].mxh , seg[v].hisv = seg[v].v ;
if (seg[v].lazy == -1) return ;
change(lc , seg[v].lazy) ;
change(rc , seg[v].lazy) ;
seg[v].lazy = -1 ;
}
void modify (int l , int r , int val , int v = 1 , int s = 0 , int e = n) {
if (e - s == 1) seg[v].v = per[s] ;
if (seg[v].hish == -1) seg[v].hish = seg[v].mxh , seg[v].hisv = seg[v].v ;
if (r <= s || e <= l) return ;
if (l <= s && e <= r) {
change(v , val) ;
return ;
}
shift(v) ;
modify(l , r , val , lc , s , mid) ;
modify(l , r , val , rc , mid , e) ;
seg[v].mxh = max(seg[lc].mxh , seg[rc].mxh) ;
if (seg[lc].mxh == seg[v].mxh) {
seg[v].v = seg[lc].v ;
}
else {
seg[v].v = seg[rc].v ;
}
}
void rst (int v = 1 , int s = 0 , int e = n) {
if (seg[v].hish == -1) return ;
seg[v].mxh = seg[v].hish ;
seg[v].v = seg[v].hisv ;
seg[v].lazy = -1 ;
seg[v].hish = -1 ;
seg[v].hisv = -1 ;
if (e - s == 1) return ;
rst(lc , s , mid) ;
rst(rc , mid , e) ;
}
inline int get (int h , int s) {
int low = -1 , high = _sz(vec[h]) ;
while (high - low > 1) {
int md = (low + high) >> 1 ;
if (st[vec[h][md]] <= s) low = md ;
else high = md ;
}
return vec[h][low] ;
}
void solve() {
cin >> n ;
for (int i = 0 , u , v ; i < n - 1 ; i ++) {
cin >> u >> v ;
u -- , v -- ;
g[u].pb(v) ;
g[v].pb(u) ;
}
h[0] = 1 ;
pre_dfs(0) ;
for (int i = 0 ; i < n ; i ++) modify(st[i] , st[i] + 1 , h[i]) ;
for (int i = 0 ; i < (n << 2) ; i ++) seg[i].hish = seg[i].hisv = seg[i].lazy = -1 ;
for (int k = 1 ; k <= n ; k ++) {
int cnt = 0 ;
while (seg[1].mxh != 0) {
cnt ++ ;
ans ++ ;
int v = seg[1].v ;
int u = get(max(1 , h[v] - k) , st[v]) ;
assert(st[u] <= st[v] && ft[v] <= ft[u]) ;
modify(st[u] , ft[u] , 0) ;
}
//if (k % 1000 == 0) cerr << ' ' << k << ' ' << cnt << endl ;
rst() ;
}
cout << ans << '\n' ;
}
int main(){
ios::sync_with_stdio(false) , cin.tie(0) , cout.tie(0) ;
int t; cin >> t;
while(t--) {
solve();
clear();
}
}
Tester's Solution
#include <bits/stdc++.h>
#include <vector>
#include <set>
#include <map>
#include <string>
#include <cstdio>
#include <cstdlib>
#include <climits>
#include <utility>
#include <algorithm>
#include <cmath>
#include <queue>
#include <stack>
#include <iomanip>
#include <ext/pb_ds/assoc_container.hpp>
#include <ext/pb_ds/tree_policy.hpp>
//setbase - cout << setbase (16); cout << 100 << endl; Prints 64
//setfill - cout << setfill ('x') << setw (5); cout << 77 << endl; prints xxx77
//setprecision - cout << setprecision (14) << f << endl; Prints x.xxxx
//cout.precision(x) cout<<fixed<<val; // prints x digits after decimal in val
using namespace std;
using namespace __gnu_pbds;
#define f(i,a,b) for(i=a;i<b;i++)
#define rep(i,n) f(i,0,n)
#define fd(i,a,b) for(i=a;i>=b;i--)
#define pb push_back
#define mp make_pair
#define vi vector< int >
#define vl vector< ll >
#define ss second
#define ff first
#define ll long long
#define pii pair< int,int >
#define pll pair< ll,ll >
#define sz(a) a.size()
#define inf (1000*1000*1000+5)
#define all(a) a.begin(),a.end()
#define tri pair<int,pii>
#define vii vector<pii>
#define vll vector<pll>
#define viii vector<tri>
#define mod (1000*1000*1000+7)
#define pqueue priority_queue< int >
#define pdqueue priority_queue< int,vi ,greater< int > >
#define flush fflush(stdout)
#define primeDEN 727999983
// find_by_order() // order_of_key
typedef tree<
int,
null_type,
less<int>,
rb_tree_tag,
tree_order_statistics_node_update>
ordered_set;
int timer=0;
int intim[123456],outim[123456];
int dep[123456];
int paren[123456][20];
int rev[123456];
vector<int> adj[123456]; // adjacency list of the tree
int dfs(int cur,int par){
int i;
intim[cur]=timer++;
rev[timer-1]=cur;
if(par==-1){
dep[cur]=0;
}
else{
dep[cur]=dep[par]+1;
}
paren[cur][0]=par;
for(auto it:adj[cur]){
if(it==par)
continue;
dfs(it,cur);
}
outim[cur]=timer-1;
return 0;
}
int kthpar(int u,int k){
int i;
if(dep[u]<=k)
return 0;
fd(i,19,0){
if((1<<i)<=k){
u=paren[u][i];
k-=(1<<i);
}
}
return u;
}
pii wow[412345],seg[412345];
int lazy[412345];
int build(int node,int s,int e){
lazy[node]=0;
if(s==e){
wow[node]=mp(dep[rev[s]],rev[s]);
seg[node]=wow[node];
return 0;
}
int mid=(s+e)/2;
build(2*node,s,mid);
build(2*node+1,mid+1,e);
wow[node]=max(wow[2*node],wow[2*node+1]);
seg[node]=wow[node];
return 0;
}
int update(int node,int s,int e,int l,int r,int val){
if(lazy[node]!=0){
if(s!=e){
lazy[2*node]=lazy[node];
lazy[2*node+1]=lazy[node];
}
if(lazy[node]==1){
seg[node]=wow[node];
}
else{
seg[node].ff=-1;
}
lazy[node]=0;
}
if(r<s || e<l)
return 0;
if(l<=s && e<=r){
if(val==1){
seg[node]=wow[node];
}
else{
seg[node].ff=-1;
}
if(s!=e){
lazy[2*node]=val;
lazy[2*node+1]=val;
}
return 0;
}
int mid=(s+e)/2;
update(2*node,s,mid,l,r,val);
update(2*node+1,mid+1,e,l,r,val);
seg[node]= max(seg[2*node],seg[2*node+1]);
return 0;
}
int main(){
std::ios::sync_with_stdio(false); cin.tie(NULL);
int t;
cin>>t;
while(t--){
timer=0;
int n;
cin>>n;
int i;
int u,v;
rep(i,n){
adj[i].clear();
}
rep(i,n-1){
cin>>u>>v;
u--;
v--;
adj[u].pb(v);
adj[v].pb(u);
}
int j;
dfs(0,-1);
f(j,1,20){
rep(i,n){
if(paren[i][j-1]==-1)
paren[i][j]=-1;
else
paren[i][j]=paren[paren[i][j-1]][j-1];
}
}
int ver;
build(1,0,n-1);
int ans=0;
f(i,1,n+1){
while(seg[1].ff!=-1){
ans++;
ver=kthpar(seg[1].ss,i);
update(1,0,n-1,intim[ver],outim[ver],-1);
}
update(1,0,n-1,intim[0],outim[0],1);
}
cout<<ans<<endl;
}
return 0;
}
Editorialist's Solution
#include <bits/stdc++.h>
using namespace std;
#define ar array
const int mxN=1e5;
int n, dt, ds[mxN], de[mxN], anc[mxN][17];
vector<int> adj[mxN]; // adjacency list of the tree
ar<int, 2> st[1<<18];
vector<pair<int, ar<int, 2>>> ch;
//set a[l1] = x
void upd1(int l1, ar<int, 2> x, int i=1, int l2=0, int r2=n-1) {
if(l2==r2) {
st[i]=x;
return;
}
int m2=(l2+r2)/2;
if(l1<=m2)
upd1(l1, x, 2*i, l2, m2);
else
upd1(l1, x, 2*i+1, m2+1, r2);
st[i]=max(st[2*i], st[2*i+1]);
}
//set a[l1..r1] = {0, -1}
void upd2(int l1, int r1, int i=1, int l2=0, int r2=n-1) {
//store original
ch.push_back(make_pair(i, st[i]));
if(l1<=l2&&r2<=r1) {
st[i]={0, -1};
return;
}
int m2=(l2+r2)/2;
if(l1<=m2)
upd2(l1, r1, 2*i, l2, m2);
if(m2<r1)
upd2(l1, r1, 2*i+1, m2+1, r2);
st[i]=max(st[2*i], st[2*i+1]);
}
void dfs(int u=0, int p=0, int d=0) {
anc[u][0]=p;
for(int i=1; i<17; ++i)
anc[u][i]=anc[anc[u][i-1]][i-1];
upd1(dt, {d, u});
ds[u]=dt++;
for(int v : adj[u])
if(v^p)
dfs(v, u, d+1);
de[u]=dt;
}
void solve() {
//input
cin >> n;
for(int i=0; i<n; ++i)
adj[i].clear();
for(int i=1, u, v; i<n; ++i) {
cin >> u >> v, --u, --v;
adj[u].push_back(v), adj[v].push_back(u);
}
//calculate necessary info with dfs
dt=0;
dfs();
int ans=0;
//simulate each k
for(int k=1; k<=n; ++k) {
for(; ; ++ans) {
//find deepest
ar<int, 2> u=st[1];
if(u[1]<0) {
//no nodes left
break;
}
//find ancestor
for(int i=16; ~i; --i)
if(k>>i&1)
u[1]=anc[u[1]][i];
//cover ancestor
upd2(ds[u[1]], de[u[1]]-1);
}
//restore changes
for(; ch.size(); ch.pop_back())
st[ch.back().first]=ch.back().second;
}
cout << ans << "\n";
}
int main() {
ios::sync_with_stdio(0);
cin.tie(0);
int t;
cin >> t;
while(t--)
solve();
}
Please give me suggestions if anything is unclear so that I can improve. Thanks
@tmwilliamlin can you please explain why we need to write a code for covering the subtree of ‘v’ ? Because v already covers node u which is the deepest uncovered node and so all the nodes in the subtree of v will automatically be covered by v because they are all within a distance of K from v …
That happens when we select node v, that’s why we cover all nodes in subtree of v.
Sir, will you please elaborate a bit what you said? @tmwilliamlin
When we select node v, all nodes in v’s subtree will be covered. So that’s why we cover all nodes in v’s subtree.
Ok Sir,Thank you…@tmwilliamlin
@tmwilliamlin
for(int i=1; i<17; ++i)
anc[u][i]=anc[anc[u][i-1]][i-1];
What does 17 indicate as the iteration limit in the function dfs()?
Do you know binary lifting?
@tmwilliam No sir…
@tmwilliamlin Sir,see the portion I have underlined.This is from yours youtube editorial of this problem .I want to know why u have spoken of subtree of ‘u’ here bcz if u is the deepest node it cannot have a subtree…it is a leaf node…Sir kindly clear my doubt
A leaf can have a subtree (which is just the leaf).
However, the u I mention in step 3 is just a general u, it has nothing to do with the deepest node.
@tmwilliamlin Ok sir…
@tmwilliamlin Sir, while calculating the dfs for this problem.you have first done binary lifting for a particular node.after that what is the role of ds[u] and de[u]?? What is dt? |
# If the order of a non abelian group $G$ is $39$, then find all normal subgroup in $G$.
If the order of a non abelian group $$G$$ is $$39$$, then find all normal subgroup in $$G$$.
All I know is a subgroup $$N$$ is normal in group $$G$$:
(1) if $$N$$ is abelian or cyclic in $$G$$.
(2) if $$N$$'s index is $$2$$ in the group $$G$$.
(3) if $$N$$ is either trivial or the whole $$G$$.
Also, if the order of a group $$G$$ is $$pq$$ ($$p, q$$ prime and $$p < q$$), then there exists a unique subgroup of order $$q$$; hence it is normal (and it is isomorphic to $$\Bbb Z_q$$). Also, the total number of subgroups of order $$p$$ equals $$q$$ when $$G$$ is non-abelian.
• You need to clean up your question a bit. Also your criterion (1) as you have stated it is false. Take any simple group which is not of prime order. The alternating group $A_n$ will do, for $n\ge 5$. This has no proper normal subgroups, but every element generates a proper cyclic subgroup (which is, of course, abelian). – Mark Bennet Sep 27 '19 at 20:35
• Your (1) is wrong. Perhaps you meant "If $G$ is abelian or cyclic"? – Arturo Magidin Sep 27 '19 at 20:42
• (2) generalizes to "if $N$'s index is the smallest prime $p$ dividing $|G|$" from which it follows immediately that there's a normal subgroup of index $3$ (order $13$). – Bungo Sep 27 '19 at 20:50
• How do you know what you state in the final paragraph - you are indicating that you probably know more than you say and you are just quoting what you think is relevant. – Mark Bennet Sep 27 '19 at 20:56
The order of the group $$G$$ is 39, i.e. $$\circ (G)=3\cdot 13$$. Now, since $$3\mid (13-1)$$, there exist two groups of order 39 up to isomorphism (see the link for details: Question on groups of order $pq$). One is cyclic and the other is non-abelian. If $$G$$ is cyclic, then for each divisor of 39 it has a unique normal subgroup. Further, if $$G$$ is a non-abelian group of order 39, then by using the Sylow theorems we can see that the subgroup of order 13 is unique, and therefore it is normal. Again, a subgroup of order 3 cannot be normal; otherwise it would be unique and $$G$$ would be cyclic. Therefore there is only one non-trivial normal subgroup in this case.
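To make the Sylow counting explicit (a quick sketch): the number $$n_{13}$$ of Sylow $$13$$-subgroups satisfies $$n_{13} \mid 3$$ and $$n_{13} \equiv 1 \pmod{13}$$, forcing $$n_{13}=1$$; the number $$n_{3}$$ of Sylow $$3$$-subgroups satisfies $$n_{3} \mid 13$$ and $$n_{3} \equiv 1 \pmod 3$$, so $$n_{3} \in \{1, 13\}$$. If $$n_{3}=1$$, both Sylow subgroups would be normal and $$G$$ would be cyclic, so in the non-abelian case $$n_{3}=13$$.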
From your final comment let $$p=3$$ and $$q=13$$. This resolves the fact that there is a single subgroup of order $$13$$ which is normal, and $$13$$ subgroups of order $$3$$.
Since the possible orders of subgroups are $$1, 3, 13, 39$$ this leaves only the question of whether there are any normal subgroups of order $$3$$.
Think about one such - it is cyclic and consists of three elements $$1, a, a^2$$. Let $$b$$ be another element of order $$3$$, not in this subgroup (so that $$1, a, a^2, b$$ are distinct). $$a$$ and $$b$$ together generate a group of order (i) greater than $$3$$, because it contains the distinct elements $$1, a, a^2, b$$; and (ii) divisible by $$3$$, because it contains the element $$a$$ of order $$3$$. The only possibility is that $$a$$ and $$b$$ generate the whole group.
Now if the subgroup generated by $$a$$ were normal we'd have either:
$$b^2ab=a$$ (remembering that $$b^3=1$$) which would make $$a$$ and $$b$$ commute and generate a subgroup of order $$9$$ and this is impossible in a group of order $$39$$; or
$$b^2ab=a^2$$, which we can write $$aba=b$$, whence $$abab=b^2\neq 1$$. Then $$(ab)^{3}=b^{2}ab=a^{2}\neq 1$$ and $$(ab)^{9}=(a^{2})^{3}=1$$. So $$ab$$ would have order $$9$$ (we eliminated $$1$$ and $$3$$ on the way), and this is impossible in a group of order $$39$$.
This is not the slickest proof, but shows it can be done with elementary calculations and without Sylow. The general proofs are better because they show more insight into the structure - and the non-abelian groups of order $$pq$$ have a structure it is worth understanding.
You could also work with $$c$$, an element of order $$13$$. We have $$a^2ca=c^r: r\neq 1$$ because the subgroup of order $$13$$ is normal and the whole group is not cyclic. And that also leads (with care) to a proof that the group generated by $$a$$ does not have $$c$$ in its normaliser, and hence can't be normal.
[Note that $$r$$ is more restricted than shown above]
Trivially, $$1$$ and $$G$$ are normal subgroups.
From the Sylow theorems, it follows that there is exactly one subgroup of order $$13$$; this is therefore characteristic and hence normal. So far we have three normal subgroups.
The remaining $$39-13$$ elements of $$G$$ must be of order $$3$$ (because any element of order $$39$$ would make $$G$$ cyclic, hence abelian). In particular, there are $$13$$ Sylow $$3$$-groups. None of these is normal (as $$G$$ acts transitively on the set of Sylow $$3$$-groups by conjugation). |
# Utility Commands
This module contains generally useful, atomic commands that aren’t otherwise categorized into a dedicated module.
These commands might be safe for use by anyone, or locked behind in-Discord permissions.
## NSFW Images Detection Tools
GiselleBot implements an (experimental) NSFW images detection system using TensorFlow.js as its base.
The detection system is based on Infinite Red’s NSFW JS library and GantMan’s Inception v3 Keras Model for NSFW detection to classify any image as a composition of 5 categories:
• Drawings: Safe for work drawings (including anime).
• Hentai: Hentai and pornographic drawings.
• Neutral: Safe for work neutral images.
• Porn: Pornographic images, sexual acts.
• Sexy: Sexually explicit images, not pornography.
The module was further converted into a back-end module and customized with a caching system to enhance its performance.
This interesting article by Infinite Red explains the reasons behind the creation of the original NSFW JS client-side module.
Warning
This module is by no means guaranteed to reliably recognize all NSFW images. Its main purpose is to quickly classify provided images and support humans in better moderating a server.
The module itself will not store or expose any sexually explicit images. The output will not contain a direct link to the original image, and a censored (low resolution, blurred) version of the image will be locally cached and used to refer to the original image.
Here’s an example of an output of this command, and the corresponding censored image:
For those of you with a background in image processing - yes, Lenna is actually flagged as NSFW with a confidence score of 81.9%!
### !nsfwcheck
#### Command Syntax
!nsfwcheck (image URL, or image as a message attachment)
#### Command Description
Submits an image against GantMan's Inception v3 Keras Model for NSFW detection (as explained above) and returns a detailed output about the classification.
#### Examples
!nsfwcheck http://www.lenna.org/lena_std.tif
### !nsfwcache
#### Command Syntax
!nsfwcache (cache ID)
#### Command Description
Recalls an image classification output by its cache ID (as given in the footer of the !nsfwcheck command output).
#### Examples
!nsfwcache 5d6c4cd78e422b00137d14ce
### !nsfwthreshold
#### Command Syntax
!nsfwthreshold [new threshold, or "-"]
#### Command Description
While the classification scores given to an image cannot be tuned, each server can choose its own NSFW threshold (the sum of NSFW-related scores over which an image is considered NSFW).
The new threshold is an integer within the range [0, 100], inclusive of 0 (treat all images as NSFW) and 100 (only treat an image as NSFW if the model recognizes it as having no SFW components at all - which is highly unlikely, hence basically meaning “treat no images as NSFW”).
Running the command with - as the argument will reset the server threshold to the global, default threshold of 60%.
Running the command with no arguments will show the current value for the server.
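The decision rule might look like the sketch below (an illustration, not the bot's actual code; in particular, which categories count as “NSFW-related” is an assumption here, since the docs only say the threshold applies to their sum):

```python
def is_nsfw(scores: dict, threshold: int = 60) -> bool:
    """Return True if the combined NSFW-related score meets the server threshold.

    `scores` maps the five categories (drawings, hentai, neutral, porn, sexy)
    to percentages summing to roughly 100. The choice of "NSFW-related"
    categories below is an assumption, not documented behaviour.
    """
    nsfw_related = ("hentai", "porn", "sexy")   # assumed subset
    return sum(scores.get(c, 0.0) for c in nsfw_related) >= threshold

# Lenna's reported 81.9% would be flagged at the default 60% threshold,
# but not at a server threshold of 90.
print(is_nsfw({"porn": 50.2, "sexy": 31.7, "neutral": 18.1}))      # True
print(is_nsfw({"porn": 50.2, "sexy": 31.7, "neutral": 18.1}, 90))  # False
```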
#### Examples
!nsfwthreshold 80
!nsfwthreshold -
!nsfwthreshold
#### Permissions Needed
User: Manage Server
## Other Tools
### !shorturl
#### Command Syntax
!shorturl (long URL)
#### Command Description
Converts a long URL into a short URL using the proprietary gisl.eu shortening service.
Note
URLs shortened using the gisl.eu service never expire, unless deleted by the person that created the short URL (feature not available yet). The original URLs are saved as encrypted strings within the redirection database. Any sensitive data contained in the URL (authentication keys, login info, etc.) will not be exposed in case of a breach.
#### Examples
!shorturl http://www.amazon.com/Kindle-Wireless-Reading-Display-Globally/dp/B003FSUDM4/ref=amb_link_353259562_2?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-10&pf_rd_r=11EYKTN682A79T370AM3&pf_rd_t=201&pf_rd_p=1270985982&pf_rd_i=B002Y27P3M
### !unitconverter
#### Command Syntax
!unitconvert (value) (unit) [destination unit]
#### Command Description
Converts between quantities in different units. It also supports converting currency with the most recent exchange rates.
The value and originating unit are mandatory. If the destination unit is omitted or invalid (e.g. non-existent, or a unit of a different measure, like trying to convert a length to a mass), then the “best” destination unit will be picked. For currencies, if the destination currency is omitted or invalid, USD will be used automatically.
#### Examples
!unitconvert 10 EUR USD
!uconv 1000 mm
!uconv 30 C F
!uconv 1 MB b
### !clockchannel
#### Command Syntax
!clockchannel (time zone name) [--template {custom channel name template}] [--12ht]
#### Command Description
Creates a channel as a “clock channel”, updating its name every 10 minutes. You must specify the time zone name: if you need to search for a valid time zone name, use the !searchtz command.
Note
The initial implementation of this command had clocks update every minute. Discord then changed the rate limit on channel updates to 2 updates every 10 minutes, and the rate limiter is not precise. A 10-minute update interval is the safest choice that is still useful for tracking time.
You can set a custom template for the channel name. You can use one (or more) of these placeholders in your custom channel name template:
• %time_zone% or %tz%: This will be replaced with the name of the chosen time zone.
• %clock%: This will be replaced with the auto-updating clock.
• %date%: This will be replaced with the current date.
Additionally, you can add the --12ht parameter if you want the clock to be shown in 12-hour time.
By default, the channel name template is %time_zone%: %clock%.
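A minimal sketch of how such a template might be rendered (an illustration only; the function name and the exact date/time formats are assumptions, not the bot's actual code):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

def render_clock_name(template: str, tz_name: str, twelve_hour: bool = False) -> str:
    """Fill the documented placeholders in a channel-name template."""
    now = datetime.now(ZoneInfo(tz_name))
    clock = now.strftime("%I:%M %p").lstrip("0") if twelve_hour else now.strftime("%H:%M")
    return (template
            .replace("%time_zone%", tz_name)
            .replace("%tz%", tz_name)
            .replace("%clock%", clock)
            .replace("%date%", now.strftime("%Y-%m-%d")))

print(render_clock_name("%time_zone%: %clock%", "Europe/London", twelve_hour=True))
# e.g. "Europe/London: 9:10 AM"
```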
Out of the box, each server is limited to having 5 clock channels. You can unlock up to 10 different clock channels as a Premium feature (see: Premium Perks). These limits were increased from 1 (free) and 5 (Premium) after the Discord rate limit update.
#### Permissions Needed
User: Manage Channels, Manage Server
Bot: Manage Channels, Connect
#### Examples
!clockchannel UTC
!clockchannel Europe/London --12ht
!clockchannel America/New_York --template Current Time: %clock%
### !clockchanneldelete
#### Command Syntax
!clockchanneldelete [channel id/mention/name]
#### Command Description
Deletes a clock channel, bypassing Discord’s implicit denial: normally, “Manage Channels” does not apply if the user doesn’t have “Connect” permissions on the channel. As long as GiselleBot has “Connect” permissions, this command will allow users to delete the clock channel.
You can omit the channel identifier if you are connected to the clock channel when you run the command.
#### Permissions Needed
User: Manage Channels, Manage Server
Bot: Manage Channels, Connect
#### Examples
!clockchanneldelete 123456789098765432
!clockchanneldelete UTC Time: 9:10 AM
### !searchtz
#### Command Syntax
!searchtz (country code or lookup string)
#### Command Description
Searches for a valid time zone name.
Using a 2-letter country identifier will show the available time zones for the specified country.
Using any string of 3 or more characters will search for matching time zones.
#### Examples
!searchtz US
!searchtz New
### !membercountchannel
#### Command Syntax
!membercountchannel [--template {custom channel name template}]
#### Command Description
Creates a channel as a “member count channel”, showing the total number of users currently in the server, with the value updated every hour.
You can set a custom template for the channel name. If you do so, you must use the %count% placeholder in your custom channel name template.
By default, the channel name template is Members: %count%.
You can only have 1 member count channel up at any time.
#### Permissions Needed
User: Manage Channels, Manage Server
Bot: Manage Channels, Connect
#### Examples
!membercountchannel
!membercountchannel --template Current Users: %count%
### !urban
#### Command Syntax
!urban (search string) [--more]
#### Command Description
Urban Dictionary text lookup. The output will be the highest ranked result. The embed title will hyperlink to the corresponding online page.
Using --more will show up to 5 results, if available.
Warning
Given the nature of the website, Urban Dictionary lookups will only be executed in channels that are marked as NSFW.
#### Examples
!urban guinea tee
### !lyrics
#### Command Syntax
!lyrics (song name or search keyword)
#### Command Description
Looks for lyrics of a song by song name or keyword. |
## Boundless: "The GDP Deflator"
Read this overview of the GDP deflator.
The GDP deflator is a price index that measures inflation or deflation in an economy by calculating a ratio of nominal GDP to real GDP.
#### LEARNING OBJECTIVES
• Calculate the GDP deflator and explain how it is used to measure inflation
• Summarize the importance of inflation in measuring GDP changes over time
#### KEY POINTS
• The GDP deflator is a measure of price inflation. It is calculated by dividing nominal GDP by real GDP and then multiplying by 100 (see the formula below).
• Nominal GDP is the market value of goods and services produced in an economy, unadjusted for inflation. Real GDP is nominal GDP, adjusted for inflation to reflect changes in real output.
• Trends in the GDP deflator are similar to changes in the Consumer Price Index, which is a different way of measuring inflation.
#### TERMS
• real GDP
A macroeconomic measure of the value of the economy's output adjusted for price changes (inflation or deflation).
• GDP deflator
A measure of the level of prices of all new, domestically produced, final goods and services in an economy. It is calculated by computing the ratio of nominal GDP to the real measure of GDP.
• nominal GDP
A macroeconomic measure of the value of the economy's output that is not adjusted for inflation.
#### FULL TEXT
The GDP deflator (implicit price deflator for GDP) is a measure of the level of prices of all new, domestically produced, final goods and services in an economy. It is a price index that measures price inflation or deflation, and is calculated using nominal GDP and real GDP.
## Nominal GDP versus Real GDP
Nominal GDP, or unadjusted GDP, is the market value of all final goods produced in a geographical region, usually a country. That market value depends on the quantities of goods and services produced and their respective prices. Therefore, if prices change from one period to the next but actual output does not, nominal GDP would also change even though output remained constant.
In contrast, real gross domestic product accounts for price changes that may have occurred due to inflation. In other words, real GDP is nominal GDP adjusted for inflation. If prices change from one period to the next but actual output does not, real GDP would remain the same. Real GDP reflects changes in real production. If there is no inflation or deflation, nominal GDP will be the same as real GDP.
## Calculating the GDP Deflator
The GDP deflator is calculated by dividing nominal GDP by real GDP and multiplying by 100.
## GDP Deflator Equation
The GDP deflator measures price inflation in an economy. It is calculated by dividing nominal GDP by real GDP and multiplying by 100.
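In symbols:

$$\text{GDP deflator} = \frac{\text{Nominal GDP}}{\text{Real GDP}} \times 100$$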
Consider a numeric example: if nominal GDP is $100,000 and real GDP is $45,000, then the GDP deflator will be about 222 (GDP deflator = $100,000 / $45,000 × 100 ≈ 222.22).
In the U.S., GDP and GDP deflator are calculated by the U.S. Bureau of Economic Analysis.
## Relationship between GDP Deflator and CPI
Like the Consumer Price Index (CPI), the GDP deflator is a measure of price inflation/deflation with respect to a specific base year. Similar to the CPI, the GDP deflator of the base year itself is equal to 100. Unlike the CPI, the GDP deflator is not based on a fixed basket of goods and services; the "basket" for the GDP deflator is allowed to change from year to year with people's consumption and investment patterns. However, trends in the GDP deflator will be similar to trends in the CPI.
Source: Boundless. “The GDP Deflator.” Boundless Economics. Boundless, 21 Jul. 2015. Retrieved 26 Oct. 2015 from https://www.boundless.com/economics/textbooks/boundless-economics-textbook/measuring-output-and-income-19/comparing-real-and-nominal-gdp-94/the-gdp-deflator-358-12455/ |
# Definition:Genus of Surface
## Definition
Let $S$ be a surface.
Let $G = \left({V, E}\right)$ be a graph which is embedded in $S$.
Let $G$ be such that each of its faces is a simple closed curve.
Let $\chi \left({G}\right) = v - e + f = 2 - 2 p$ be the Euler characteristic of $G$ where:
$v = \left|{V}\right|$ is the number of vertices
$e = \left|{E}\right|$ is the number of edges
$f$ is the number of faces.
Then $p$ is known as the genus of $S$. |
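For example, any such graph embedded in the torus has $\chi = v - e + f = 0$, so the torus has genus $p = 1$; on the sphere, $\chi = 2$, giving genus $p = 0$.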
Abstract: In this thesis, after giving an introduction to topology in electronic systems and topological superconductors, I first discuss the additional transport properties Majoranas induce — that is, spin transport. I show that the Andreev reflection processes induce non-trivial spin properties when a Majorana is present, which can be tested as an additional experimental signature of Majoranas or can be used for spintronics. We further generalize this property to systems with Majorana flat bands. Then, I discuss a case where two Majoranas coexist, protected by a chiral symmetry. Interestingly, the local Andreev reflection is suppressed at zero bias due to the interference between the two Majoranas. Consequently, the crossed Andreev reflection probability can be enhanced to a value as large as unity (resonant). This can happen in realistic systems of quantum anomalous Hall insulators in proximity to superconductors, where Majorana chiral modes appear. Very recently, an experiment has nicely shown the existence of such Majorana chiral modes, which manifest themselves in conductance plateaus quantized at $\frac{1}{2} \frac{e^2}{h}$. I show that our numerical simulations with domain walls further explain the data, and the results support the claim that the plateaus are due to Majorana chiral modes.
# Inconsistent ban policy?
Discussion in 'SF Open Government' started by funkstar, Dec 29, 2011.
1. ### Reiku (Banned)
Ok. Try me with some university-level questions, James, on Newtonian theory. I would like to put my own theory to the test. Deal?
We'll see how I go from that.
3. ### James R (Just this guy, you know? / Staff Member)
Reiku:
I haven't commanded you to do anything.
With respect, you are not qualified to judge whether things like the chapters from your book are "of pretty good standard". That's for others to judge - readers and reviewers. And you've had mostly bad reviews so far.
The chapters of your book I have seen contain massive jumps from one topic to another. If they are written for 16 year olds, like you say they are, then no 16 year old will be able to follow from one point to the next. You skip over a whole heap of requisite knowledge and post disconnected factoids and baseless speculations.
I have seen lengthy posts from AlphaNumeric picking apart many mistakes of yours. In response, you have addressed perhaps the odd one or two and ignored the rest.
I haven't seen that from AlphaNumeric. But I have seen you repeatedly equate matrices and scalars, and this has been pointed out to you several times.
There are plenty of cranks with PhDs out there, I assure you. There are Nobel prize winners who are cranks. Having letters after your name doesn't mean you're immune.
I'd never heard of Fred Alan Wolf, your hero, before you mentioned him. It seems he was part of a "maverick" group of physicists at one point in time. I'm not sure what he is doing now. Suffice it to say, he is not a big name in physics as far as I am aware. Also, it appears that some of his ideas and those of the people in the same group are fairly cranky.
If you really want to. We'd better start a new thread for that.
5. ### Hercules Rockefeller (Beatings will continue until morale improves. / Moderator)
Perhaps a modified form of 'debate' in the Formal Debate subforum?
7. ### James R (Just this guy, you know? / Staff Member)
Reiku:
I have started a new thread in Free Thoughts for now, with a few first-year level physics and maths questions.
All questions are taken from actual first-year university exams.
9. ### AlphaNumeric (Fully ionized / Registered Senior Member)
As I previously explained (why do you keep bringing up things I've already addressed at length?), there was a lot more wrong with your Dirac discussion than multiplying matrices. And being able to do something I learnt in 6th form (multiply 2x2 matrices together) doesn't provide evidence you can do postgrad level physics. The stuff which required beyond high school level mathematics you failed to do properly. I went through it at length, with explanations.
It's one thing to lie to other people about what was said between you and I, it's an entirely different thing to lie to me about it.
You made plenty of mistakes, which you then further compounded. As for the scalar vs vector thing, it was STILL a mistake on your part. Rather than adding two vectors and a scalar you were adding 1 vector and a scalar. I've clarified this with you several times now, because you've made the "I wasn't mistaken, you were!" comment on a number of occasions.
Really, who do you think you're going to convince? Do you think if you lie and lie and lie enough that reality will bend to your desires? Wolf might think that, given the nonsense on 'What The Bleep Do We Know' but that isn't how reality works.
And that thread wasn't an isolated incident. You got the string vacua things wrong and you completely arsed up the discussion about Lagrangians which James was involved in. And that's just threads from this last week or so. Countless threads of yours which ended up in pseudo are other examples.
You constantly try to play your mistakes off as little rare errors, little more than typos, but they aren't. EVERYONE knows they aren't. The evidence is so vast even the non-physicists are aware of it.
Wolf is a crank. PhD after ones name doesn't make one automatically right and I've never claimed that.
I'm somewhat hesitant to allow Reiku an open ended time scale because, to be frank, I believe he'll either spend loads of time Googling the question or even post the questions on another forum to try to get others to help him. Hence why I said we could sort out a 2 hour window where he can sit down and work through questions we give him out of the blue, then he has to provide evidence of his own workings, like photos of his calculations on paper. He has demonstrated in previous "Let's test Reiku" instances he isn't above a direct copy and paste of other people's work.
Reiku, you say you spend 4 hours a day doing this stuff. If you could easily deal with undergraduate then you should be able to answer all of James's questions by this time tomorrow, including workings. You admit you have the time, so you have no excuse.
10. ### Reiku (Banned)
You were mistaken and I am not lying. I know that equation was sound; I even demonstrated where it originally came from.
And meh, to your time scale; I am actually busy for the rest of the day. I plan on going to the pictures to see the new Underworld movie, then I am having a few drinks at a friend's house.
As for ''answering questions'' in a reasonable time, I woke up this morning, saw James' post here and began writing it down. Check the post times; they should be within reasonable distance of each other. Thirdly, I am only answering these questions by James and then that's it. I have nothing to prove. I hesitantly agreed to this in the first place; I think if I answer eight random questions that will suffice to show whether I have any kind of understanding or not...
Oh fuck it, the answer to the associated vectors is
V_1 = (-0.707, 0.707), V_2 = (0.6, -0.8)
Just in case you think I have asked anyone in a forum.
11. ### Reiku (Banned)
May I ask a question? I don't want big arguments or anything.
My ideas are usually in good scientific context. For instance, I remember my curvilinear distortions idea, where matter comes from curvature, to answer where matter came from in the universe. I remember my idea met a good reception with many people here. But as it went when I created that, AN, you continued to say, ''your idea means nothing, you have no working model.''
Yet, it seems captain bork has said something similar just very recently here: http://www.sciforums.com/showthread.php?p=2892530#post2892530 - I don't believe he has stolen my idea, because essentially, all I did was predict something that made sense from relativity. So why was my idea met with such harsh criticism back then? It seems my idea was very plausible?
Because I didn't have a working model? In all honesty, that is the way of science a lot of the time. Take spin: spin was hypothesized first, and it wasn't until later that a full working mathematical model came about.
12. ### AlphaNumeric (Fully ionized / Registered Senior Member)
*sigh* Remember how you once asserted that $(a+ib)(a-ib) = a^{2} - b^{2} + 2abi$ and asserted Wolf agreed with you? Remember how you asserted your equations about the $\psi_{L,R}$ were right but they weren't? Remember how you asserted a pair of 2x2 matrices multiplied to give a number, saying Susskind agreed with you?
As for 'demonstrating where it originally came from' I don't doubt you have it within your capacity to get expressions from other sources. However, it is not unheard of for you to incorrectly copy them or fail to understand the notation or to alter them deliberately in ways which render them meaningless or not know what to do with them.
Being able to list a bunch of equations doesn't mean you understand them. I can list plenty of German words, thanks to Google translate. Doesn't mean I can speak German.
Since you keep bringing it up and not listening each time I explain you were still wrong I'll go through it again. The post in question is here. The part in question is the following,
M is mass, $\omega$ is frequency and $k$ is the Fourier dual of x so equivalent to a momentum component. All of them are scalars. $\alpha$ is a matrix, you give a possible representation for it further on in the post and we discuss things about it at length. Thus the left hand side of the above expression is a scalar, the right hand side is a matrix plus a scalar. It's therefore nonsense. Originally I said the left hand side was a vector, I made an error. As I explained to you in the original thread, your equation was still wrong despite the fact I made a mistake about part of it.
And I explained at length why you were mistaken and how such a mistake could arise. You ignore that, claiming I just waffled unnecessarily. No, I was actually providing you with an accurate explanation of a common pitfall people make with matrix operations.
It's possible that given a spinor $\psi$ that $\omega \psi= (\alpha k + M)\psi$. This follows from the fact the matrix maps a spinor to a spinor, just like multiplying a spinor by a number gives a spinor. A simpler case is actions on vectors, ie $A \cdot \mathbf{v} = \lambda \mathbf{v}$ can be true, it would mean $\mathbf{v}$ is an eigenvector of A with corresponding eigenvalue $\lambda$. It would be a mistake to then conclude that $A = \lambda$, it's the error you just made. Even saying $\lambda$ is actually multiplied by the identity matrix isn't going to fix it. Such expressions mean that the action of a matrix reduces to the action of scalar multiplication in particular directions in the vector space (ie those spanned by a single eigenvector, ignoring degeneracy in eigenvalues).
Now this is a standard bit of linear algebra you should be aware of if you're knowledgeable enough to handle undergrad stuff. The fact you're making such mistakes and despite a lot of attention being given to that equation you still don't get it further undermines your claims of competency.
You've also been demanding to see where more than 2 of your mistakes are. In each of my larger posts where I post equations I give at least 2 mistakes by you each post. Some of them were about fundamental failures of understanding you have pertaining to the Dirac equation, which you believe yourself sufficiently versed on to be 'teaching' other people.
I would expect a half decent student to do the first 5 of James's questions in under an hour. Most of them are doable by a decent A Level student. If you think you can do well at general undergrad stuff and even perhaps the Dirac equation then you should steamroller his questions without much work. Some of them you shouldn't even need to put pen to paper, just type out the answer.
You say this but obviously you do want to prove something to people. You'd not be engaged in this ridiculous "Here's a lengthy post about consciousness and the Dirac equation. Look, I can multiply matrices!" nonsense for so long. You want people to think you can do this stuff and your behaviour says you're desperate to convince people. I say desperate because when you're reduced to lying to James and myself about exchanges we've had with you it's really quite sad.
If you do manage to answer most of them (which I doubt) it wouldn't then be a validation of your claims. You think being able to multiply 2x2 matrices together is a reason I should think you can do stuff related to the Dirac equation so you struggle to judge your competency levels.
Personally I think you'll be Googling for all you're worth. That's why I suggested the idea of a specific time frame so you can't go away and then come back later, you're unable to ask other people.
13. ### Reiku (Banned)
Before I head out the door, I had a quick glance at one of my favourite subjects here
No captain, it's nothing to do with a modification of relativity. A black hole has inner boundaries as well. Time and space become distorted so badly that they switch roles for most of the journey, but inside a black hole, you might be lucky enough to come across another boundary before reaching the singular region. In this boundary, space and time switch again, and it is in this boundary that things like entire galaxies can exist.
14. ### Reiku (Banned)
I'm not lying.
You go figure. $\omega$ is energy, $M$ is mass and $k$ is the wave number (momentum). So what we have really is
$E=p+M$
This is true for relativity.
15. ### AlphaNumeric (Fully ionized / Registered Senior Member)
No, it isn't. Firstly you've changed the equation, the equation which I originally corrected you on was, according to you, $\omega = \alpha k + M$, where you've mixed scalars and matrices. Secondly $E=p+M$ isn't true in relativity. $-E^{2} + p^{2} = -m^{2}$ can be rearranged to $E^{2} = m^{2} + p^{2}$ but that doesn't imply $E=m+p$. That is precisely the mistake you made in the thread in question, one I spent considerable time explaining to you. You complain I waffled too much but you've just shown you didn't even learn anything from it!
In fact, I explained at length here why you can't say $E = \sqrt{p^{2}+M^{2}} = p+M$ but it's something you could mistakenly conclude given the structure of the Dirac equation. It's a structure the whole $\{ \gamma^{\mu} , \gamma^{\nu} \} = 2g^{\mu\nu}\mathbb{I}$ thing we talked about is for!
I'll explain it again since obviously you didn't understand it before, you didn't understand the explanation and you still don't understand it.
Oscillating things obey the massive wave equation, which for v=1 is $(-\partial_{t}^{2} + \nabla^{2})A = M^{2}A$. This is second order as there's second order derivatives. Got that or do I need to go over it again? If you Fourier transform this or hit it on $e^{i(\omega t - \mathbf{k} \cdot \mathbf{x})}$ you'll get that $(-\omega^{2} + \mathbf{k}\cdot \mathbf{k})\tilde{A} = -M^{2}\tilde{A}$, where $\tilde{A}$ is the Fourier transform of A. From this we get that $-\omega^{2} + \mathbf{k}\cdot \mathbf{k} = -M^{2}$ because the coefficients are all scalars and none are differential operators. This is a reformulation of the mass-energy-momentum relation.
So how can we get something to do with just E and not $E^{2}$? Through particular arguments (see Chapter 1 of 'Quantum Theory of Fields' by Weinberg) Dirac realised he needed a first order operator to apply to a spinor. But the operator has to square to something which is the wave operator on all components of the spinor. Clearly something with a second order time derivative and likewise spatial derivative needs to be of the form $D = a\partial_{t} + \mathbf{b}\cdot \nabla$. Except such a thing doesn't square to the wave operator no matter the scalar values of a and the components of b. But if you make them all matrices you can do it, then the operator you want at the end is the wave operator times the identity matrix. Using the whole anticommutation relations thing I've explained multiple times you can deduce the conditions the matrices must satisfy, the Dirac algebra relations. In what follows I probably drop factors of i or -1 but that isn't relevant to what I'm getting at here. Then what happens if you have $D = \gamma^{0}\partial_{t} + \gamma^{i}\partial_{i}$ is a matrix operator, which when you apply to the spinor you get $D \psi = m\psi = \left( \gamma^{0}\partial_{t} + \gamma^{i}\partial_{i} \right) \psi$. If you Fourier transform this you get the usual alteration of the coefficients, $m\tilde{\psi} \propto \left( \gamma^{0}\omega + \gamma^{i}k_{i} \right) \psi$. However, unlike the scalar wave equation case you cannot say that the coefficients are equal, ie $-\omega + \sum_{i}k_{i} + m = 0$ because there's matrices involves and these matrices are not all the same. If you expanded out $m\tilde{\psi} \propto \left( \gamma^{0}\omega + \gamma^{i}k_{i} \right)\tilde{\psi}$ in terms of spinor components you'd find you don't get $-\omega + \sum_{i}k_{i} + m$ in front of each term.
Since you no doubt haven't got a clue what I'm talking about since you don't know spinor matrix behaviour I'll use a previous example. In 1+1 dimensions the Dirac operator could be written as $D = \gamma^{0}\partial_{t} + \gamma^{1}\partial_{x} = \begin{pmatrix} i\partial_{t} & \partial_{x} \\ \partial_{x} & -i\partial_{t} \end{pmatrix}$. This hits the 2 component spinor $\begin{pmatrix} \psi_{1} \\ \psi_{2} \end{pmatrix}$ as $D\psi = \begin{pmatrix} i\partial_{t} & \partial_{x} \\ \partial_{x} & -i\partial_{t} \end{pmatrix}\begin{pmatrix} \psi_{1} \\ \psi_{2} \end{pmatrix} = m \psi = m\begin{pmatrix} \psi_{1} \\ \psi_{2} \end{pmatrix}$. So we Fourier transform each side and the equation reduces to $\begin{pmatrix} -\omega & ik \\ ik & \omega \end{pmatrix}\begin{pmatrix} \tilde{\psi}_{1} \\ \tilde{\psi}_{2} \end{pmatrix} = m\begin{pmatrix} \tilde{\psi}_{1} \\ \tilde{\psi}_{2} \end{pmatrix}$, which gives a pair of equations, $(-\omega-m) \tilde{\psi}_{1} + ik\tilde{\psi}_{2} = 0$ and $ik\tilde{\psi}_{1} + (\omega-m)\tilde{\psi}_{2} = 0$. Clearly this hasn't reduced to $\omega = m+k$, there's mixing between the different components.
To see this in the proper light it's useful to consider $D^{2}\psi$. As I previously showed $D^{2} = \begin{pmatrix} -\partial_{t}^{2}+\partial_{x}^{2} & 0 \\ 0 & -\partial_{t}^{2}+\partial_{x}^{2} \end{pmatrix} = \mathbb{I}(-\partial_{t}^{2}+\partial_{x}^{2})$. This is diagonal, there is no mixing between the different components of whatever spinor I apply it to. This is the power of the Dirac algebra, in order to get a first order operator which squares to something that acts like the wave operator on each component separately you have to use a non-diagonal matrix operator. If it could be diagonal you'd not need matrices at all! But this non-diagonalness (which isn't really a word) is why you can't just say "Square root both sides of $E^{2} = M^{2} + p^{2}$ to get $E = M+p$!", because the squaring was done using matrices, so you can't unsquare the expression ignoring that fact.
As I said to you in the original thread, this is a fundamental thing in the Dirac equation. The book of Weinberg I mentioned devotes the whole chapter to discussing how Dirac came to realise this requirement, the methods he considered, the algebra he worked through and the implications of the results and that's just the introduction, never mind the detail. It's something every course on QED and the Dirac equation will highlight in the extreme. You claim you are knowledgeable in the Dirac equation, well enough to perhaps handle university material in it. This is repeated evidence you're not. This isn't a typo, this isn't a little slip, this isn't a transcription error, this isn't an "I was heading out to get drunk and I just wrote it down" mistake, it's a fundamental gaping hole in your supposed knowledge. It undermines any claim to be knowledgeable in this stuff because it means you've never done any real working calculations with the Dirac equation else you'd know all about this. Spinor components and matrix actions on them are the bane of many a student in this stuff. I speak from personal experience of getting lost in indices many times.
Seriously, give it a rest. How many times do you need to have your ignorance exposed before you stop lying?
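(Editor's note: the 1+1-dimensional algebra above can be verified mechanically. The following SymPy sketch, which is not part of the original thread, uses the momentum-space matrix quoted in the previous post.)

```python
import sympy as sp

omega, k, m = sp.symbols('omega k m', real=True)

# Momentum-space form of the 1+1D Dirac operator quoted above.
D = sp.Matrix([[-omega, sp.I * k],
               [sp.I * k, omega]])

# D*D is proportional to the identity: no mixing between spinor components.
print(sp.simplify(D * D))   # Matrix([[omega**2 - k**2, 0], [0, omega**2 - k**2]])

# det(D - m*I) = 0 recovers omega = ±sqrt(k**2 + m**2), not omega = k + m.
print(sp.solve((D - m * sp.eye(2)).det(), omega))
```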
16. ### Reiku (Banned)
I haven't changed anything. Go back to our original debates; your argument was that I mixed up my vector and scalar notations.
YOU WERE PROVEN WRONG, MR PHD! lol
If you had mentioned the matrices, I would have honestly remarked, ''yes, the equation is incomplete in this sense, this is why I said it is 'starting to look more like the Dirac Equation,' implying it is not quite there.''
17. ### Reiku (Banned)
As for the squares, big deal. At least I never squared one side and not the other. Here, is this better:
$E^2 = \sqrt{M^2 + p^2}$
Fin!
18. ### Reiku (Banned)
I just look at your post and shake my head... why do you insist on writing long-winded posts, I mean seriously dude? Who are you trying to impress? I know what you are talking about, are you trying to go over my head or something?
19. ### Trippy (ALEA IACTA EST / Staff Member)
And you complain about Alphanumeric being arrogant and talking down to people?
Whatever.
20. ### AlphaNumeric (Fully ionized / Registered Senior Member)
You're very unwise to bend the truth when I link to the posts directly above you.
I made a slip up, but the explanations I then provided clearly demonstrated I understood the nature of $\omega$. Furthermore your equation was still nonsense even though $\omega$ is a scalar. Furthermore you just got basic relativity wrong, by saying $E=m+p$ in relativity. Furthermore it's a mistake you've repeated, despite having a lengthy in-depth explanation from me, an explanation you've complained was unnecessary but which you obviously need given you've made the mistake twice now.
Incompleteness has nothing to do with it, it's bullshit!
Well that's STILL wrong! $E^{2} = m^{2} + p^{2}$ is the equation, you've got an unnecessary square root over the right hand side! And it was a 'big deal' because you were showing you don't understand the nature of the Dirac operator, which is what the Dirac equation is all about. Remember, you are the one claiming to be knowledgeable in it. If you didn't make that claim I'd not be holding you to this standard. You claim you're sufficiently familiar with it to understand working details, to be able to do some of the algebra, to even perhaps handle university material on it. The fact you're failing to understand basic notation is undermining all of that and I'm going to point it out.
If you just put your hands up and said "Ok, the truth is I don't understand this stuff, I just copy expressions from Wikipedia and other websites and just reword other people's explanations. In truth I don't have any working understanding of this stuff" then I'd stop saying you're dishonest.
You made mistake after mistake after mistake, in some cases repeatedly. Why are you surprised people think you're full of crap?
You don't know what I'm talking about because I explained it before and you still made the same mistake!
As for trying to go over your head, anything university level is over your head. I could post old homework problems I used to have along with a solution and it would go over your head, as we're seeing in your laughable attempts to do James's questions. If I wanted to go crazy with the algebra I could but what I'm posting here is bookwork. It's assumed knowledge for anyone who does quantum field theory. It might seem like trying to go over people's head to you but you have very low standards as you understand so little. If you really understood this stuff you'd see I'm posting little more than a simple explanation of the form of the Dirac operator. If I wanted to go nuts with over the top stuff I'd have gone into detail about compact spaces in string vacua in the Hawking thread.
Unlike you I'm not required to misrepresent myself to talk about this stuff. This stuff is, literally, lunch time discussion material for me, discussing it isn't something I expect to terribly impress people.
Reiku is happy to play the "Oh I'll explain that to you, I have a good understanding of it. I'll do it by posting lots of equations!" card when he's the one posting the equations but not when he's the one who's being schooled. Just look at his 'book' he says is aimed at 14~16 year olds. It includes Lie algebra commutation relations and the Dirac equation! He wants to be seen to be informed and knowledgeable so he posts the most complicated thing he thinks he can get away with. When someone else posts an explanation on similar material he shows his own attitude when he accuses them of trying to unnecessarily impress people. It's a complete double standard, just like how he demands civility from people but out of all the people in this thread he does many times more swearing and name calling than the rest of us put together.
He doesn't like being held to standards he demands of others. It's why he doesn't like moderators dealing with him; he wants to be immune from criticism while throwing plenty of it.
21. ### Reiku (Banned)
It's called a reflection on the obvious behaviour.
Please continue with this dark-ages attitude.
22. ### Reiku (Banned)
I did swear at the man after all
I sent him a message of apology.
And I meant it.
If the man lets me back here, tell me, how can anyone hold him in judgement of my behaviour?
The answer is ''no one''. It's my fault always anyway.
23. ### Reiku (Banned)
I will give you one word of advice, alphanumericunt, your behaviour is out of order. You are not showing modest or mod behaviour. You are below the purity of real scientific adventure. You've lest embraced the finer points of your knightood and belowed your power to support the likes of Guest who James has already questioned whether there is a clique in itself and if he himself does not know what I mean, then I say this: It's clear that from James' post in university first year thread for me in the free thoughts, that Guest clearly has a strange, (Yet similar to your) attitude towards me.
Too obvious.
Every time Guest being a sock came up in private messages, you quickly changed the subject.
# How is the mutual inductance of a pair of coils affected when (i) the separation between the coils is increased, (ii) the number of turns in each coil is increased, (iii) a thin iron sheet is placed between the two coils, other factors remaining the same? Explain your answer in each case.
(i) When the separation between the coils is increased, the leakage of flux increases, which reduces the magnetic coupling of the coils. The magnetic flux linked with all the turns therefore decreases, and so the mutual inductance decreases.

(ii) The mutual inductance $M$ for a pair of coils is given by $M=\frac{\mu_0 N_1 N_2 A}{l}$, so $M\propto N_1 N_2$. Therefore, when the number of turns in each coil increases, the mutual inductance also increases.

(iii) Mutual inductance is directly proportional to the relative permeability $\mu_r$ of the medium between the coils, i.e. $M=\frac{\mu_0 \mu_r N_1 N_2 A}{l}$. Hence the mutual inductance is increased when a thin iron sheet is placed between the two coils.
## Calculus (3rd Edition)
$$\frac{1}{2(e^{-x}+2)^2}+c$$
Let $u=e^{-x}+2$, and then $du=-e^{-x}dx$, hence we have $$\int \frac{e^{-x}dx}{(e^{-x}+2)^3}=-\int \frac{du}{u^3}=-\frac{1}{-2}u^{-2}+c\\ =\frac{1}{2(e^{-x}+2)^2}+c$$ |
# zbMATH — the first resource for mathematics
The solutions of one type $$q$$-difference functional system. (English) Zbl 1343.30027
Summary: In this paper, we study functional systems of $$q$$-difference equations; our results give estimates on the proximity functions and the counting functions of the solutions of systems of $$q$$-difference equations. This implies that the solutions have a relatively large number of poles. The main results in this paper extend known results for $$q$$-difference equations to systems of $$q$$-difference equations.
##### MSC:
30D35 Value distribution of meromorphic functions of one complex variable, Nevanlinna theory
39B32 Functional equations for complex functions
39A13 Difference equations, scaling ($$q$$-differences)
39B12 Iteration theory, iterative and composite equations
# Thread: Changing order of integration
1. ## Changing order of integration
Hello, can anyone help me with changing the order of integration? I'm not sure how to go about it.
I have to evaluate the double integral of $2x^2 y \sin(x y^3)\, dx\, dy$. The outer limits (in $y$) are $0$ and $1$, and the inner limits (in $x$) are $\sqrt{y}$ and $1$.
But I'm not sure how to change the order of integration! I've tried drawing a diagram but it went wrong. Can anybody help?
2. Originally Posted by studentsteve1202
Hello, can anyone help me with changing the order of integration? I'm not sure how to go about it.
I have to evaluate the double integral of $2x^2 y \sin(x y^3)\, dx\, dy$. The outer limits (in $y$) are $0$ and $1$, and the inner limits (in $x$) are $\sqrt{y}$ and $1$.
But I'm not sure how to change the order of integration! I've tried drawing a diagram but it went wrong. Can anybody help?
I can't see how your diagram can go wrong ....!?
$\int_{y=0}^{y=1} \int_{x = \sqrt{y}}^{x = 1} ..... dx \, dy = \int_{x=0}^{x=1} \int_{y = 0}^{y = x^2} ..... dy \, dx$ |
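(A quick numerical cross-check of the swap, an editorial addition rather than part of the thread; note that SciPy's dblquad takes the inner variable as the first argument:)

```python
import numpy as np
from scipy import integrate

f = lambda x, y: 2 * x**2 * y * np.sin(x * y**3)

# Original order: y in [0, 1] outside, x in [sqrt(y), 1] inside.
I1, _ = integrate.dblquad(lambda x, y: f(x, y), 0, 1,
                          lambda y: np.sqrt(y), lambda y: 1.0)

# Swapped order: x in [0, 1] outside, y in [0, x**2] inside.
I2, _ = integrate.dblquad(lambda y, x: f(x, y), 0, 1,
                          lambda x: 0.0, lambda x: x**2)

print(I1, I2)   # the two values agree, confirming the region swap
```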
# Laboratoire de Mécanique des Fluides et d’Acoustique - UMR 5509
Article in Phys. Fluids (2019)
## Instabilities in viscosity-stratified two-fluid channel flow over an anisotropic-inhomogeneous porous bottom
Geetanjali Chattopadhyay, Usha Ranganathan & Séverine Millet
A linear stability analysis of a pressure driven, incompressible, fully developed laminar Poiseuille flow of immiscible two-fluids of stratified viscosity and density in a horizontal channel bounded by a porous bottom supported by a rigid wall, with anisotropic and inhomogeneous permeability, and a rigid top is examined. The generalized Darcy model is used to describe the flow in the porous medium with the Beavers-Joseph condition at the liquid-porous interface. The formulation is within the framework of modified Orr-Sommerfeld analysis, and the resulting coupled eigenvalue problem is numerically solved using a spectral collocation method. A detailed parametric study has revealed the different active and coexisting unstable modes : porous mode (manifests as a minimum in the neutral boundary in the long wave regime), interface mode (triggered by viscosity-stratification across the liquid-liquid interface), fluid layer mode [existing in moderate or $O(1)$ wave numbers], and shear mode at high Reynolds numbers. As a result, there is not only competition for dominance among the modes but also coalescence of the modes in some parameter regimes. In this study, the features of instability due to two-dimensional disturbances of porous and interface modes in isodense fluids are explored. The stability features are highly influenced by the directional and spatial variations in permeability for different depth ratios of the porous medium, permeability and ratio of thickness of the fluid layers, and viscosity-stratification. The two layer flow in a rigid channel which is stable to long waves when a highly viscous fluid occupies a thicker lower layer can become unstable at higher permeability (porous mode) to long waves in a channel with a homogeneous and isotropic/anisotropic porous bottom and a rigid top. The critical Reynolds number for the dominant unstable mode exhibits a nonmonotonic behaviour with respect to depth ratio. However, it increases with an increase in anisotropy parameter ξ indicating its stabilizing role. Switching of dominance of modes which arises due to variations in inhomogeneity of the porous medium is dependent on the permeability and the depth ratio. Inhomogeneity arising due to an increase in vertical variations in permeability renders short wave modes to become more unstable by enlarging the unstable region. This is in contrast to the anisotropic modulations causing stabilization by both increasing the critical Reynolds number and shrinking the unstable region. A decrease in viscosity-stratification of isodense fluids makes the configuration hosting a less viscous fluid in a thinner lower layer adjacent to a homogeneous, isotropic porous bottom to be more unstable than the one hosting a highly viscous fluid in a thicker lower layer. An increase in relative volumetric flow rate results in switching the dominant mode from the interface to fluid layer mode. It is evident from the results that it is possible to exercise more control on the stability characteristics of a two-fluid system overlying a porous medium in a confined channel by manipulating the various parameters governing the flow configurations. This feature can be effectively exploited in relevant applications by enhancing/suppressing instability where it is desirable/undesirable. |
### CHANGE OF VARIABLES
#### Essentials
##### Main ideas
• There are many ways to solve this problem!
• Using Jacobians (and inverse Jacobians)
##### Prerequisites
• Surface integrals
• Jacobians
• Green's/Stokes' Theorem
##### Warmup
Perhaps a discussion of single and double integral techniques for solving this problem.
##### Props
• whiteboards and pens
##### Wrapup
This is a good conclusion to the course, as it reviews many integration techniques. We emphasize that (2-dimensional) change-of-variable problems are a special case of surface integrals.
Here are some of the methods one could use to do these integrals:
• change of variables (at least 2 ways)
• Area Corollary to Green's Theorem (at least 2 ways)
• ordinary single integral (at least 2 ways)
• ordinary double integral (at least 2 ways)
• surface integral
#### Details
##### In the Classroom
• Some students will want to simply use Jacobian formulas; encourage such students to try to solve this problem both by computing $\Partial{(x,y)}{(u,v)}$ and by computing $\Partial{(u,v)}{(x,y)}$.
• Other students will want to work directly with $d\rr_1$ and $d\rr_2$. This works fine if one first solves for $x$ and $y$ in terms of $u$ and $v$.
• Students who compute $d\rr_1$ and $d\rr_2$ directly can easily get confused, since they may try to eliminate $x$ or $y$, rather than $u$ or $v$. 1) Emphasize that one must choose parameters, both on the region, and on each curve, and that $u$ and $v$ are chosen to make the limits easy.
##### Subsidiary ideas
• Review of Green's Theorem
• Review of single integral techniques
• Review of double integral techniques
##### Enrichment
• Discuss the 3-dimensional case, perhaps relating it to volume integrals.
1) Along the curve $v=\hbox{constant}$, one has $dy=v\,dx$, so that $d\rr_1 = dx\,\ii + dy\,\jj = (\ii + v\,\jj)\,dx$, which some students will want to write in terms of $x$ alone. But one needs to express this in terms of $du$! This can be done using $du = x\,dy + y\,dx = x (v\,dx) + y\,dx = 2y\,dx$, so that $d\rr_1 = (\ii + v\,\jj) \,\frac{du}{2y}$. A similar argument leads to $d\rr_2 = (-\frac{1}{v}\,\ii+\jj)\,\frac{x\,dv}{2}$ for $u=\hbox{constant}$, so that $d\SS = d\rr_1\times d\rr_2 = \kk \,\frac{x}{2y}\,du\,dv = \kk \,{du\,dv\over2v}$. This calculation can be done without solving for $x$ and $y$, provided one recognizes $v$ in the penultimate expression.
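The footnote's differentials correspond to the change of variables $u=xy$, $v=y/x$ (an inference from $du = x\,dy + y\,dx$ and $dy = v\,dx$ on $v=\hbox{constant}$, not stated explicitly above); a quick symbolic check of the Jacobian:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
u, v = x * y, y / x   # assumed from the differentials in the footnote

J = sp.Matrix([u, v]).jacobian(sp.Matrix([x, y]))
print(sp.simplify(J.det()))   # 2*y/x, i.e. 2*v
# Hence dx dy = du dv / (2*v), matching dS = k du dv / (2*v) above.
```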
## Stream: Is there code for X?
### Topic: General lift
#### Adam Topaz (Feb 19 2021 at 16:03):
Do we have something like this?
def general_lift {α ι β : Type*} [I : setoid α] (f : (ι → α) → β)
(c : ∀ i j : ι → α, (∀ x : ι, i x ≈ j x) → f i = f j) :
(ι → quotient I) → β := sorry
#### Eric Wieser (Feb 19 2021 at 16:35):
docs#pi_setoid seems relevant
#### Eric Wieser (Feb 19 2021 at 16:38):
It's λ i, quotient.lift_on (quotient.choice i) f c
#### Eric Wieser (Feb 19 2021 at 16:38):
Or quotient.lift f c ∘ quotient.choice for short
#### Adam Topaz (Feb 19 2021 at 16:51):
Here's the fully dependent version:
noncomputable def quotient.pi_lift {ι : Type*} {α : ι → Type*} {β : Type*}
[I : Π i, setoid (α i)] (f : (Π i, α i) → β)
(c : ∀ g h : (Π i, α i), (∀ i, g i ≈ h i) → f g = f h) :
(Π i, quotient (I i)) → β := quotient.lift f c ∘ quotient.choice
# cylinder....... help
A circular cylinder is inscribed in a given cone of radius R cm and height H cm, as shown in the figure. Find the curved surface area S of the circular cylinder as a function of x. Find the relation connecting x and R when S is maximum.
Note by Sonali Sukesh
3 years, 1 month ago
By Similarity of Triangles,
$$\displaystyle \frac{h}{R-x} = \frac{H}{R}$$
$$\displaystyle\Rightarrow h = \frac{H}{R} (R-x)$$
Now, it is clear that,
$$\displaystyle S = 2\pi x\times h$$
Substitute the value of $$\displaystyle h$$ and get $$\displaystyle S$$ as a function of $$\displaystyle x$$.
Differentiate the function that you just derived wrt $$\displaystyle x$$, and put it equal to $$\displaystyle 0$$.
You will find a relation between $$\displaystyle x$$ and $$\displaystyle R$$. · 3 years, 1 month ago
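Carrying the hint through symbolically (an editorial addition, not one of the original comments) makes the maximising relation explicit:

```python
import sympy as sp

x, R, H = sp.symbols('x R H', positive=True)
h = H * (R - x) / R                 # cylinder height, by similar triangles
S = 2 * sp.pi * x * h               # curved surface area S(x)

print(sp.expand(S))                           # 2*pi*H*x - 2*pi*H*x**2/R
print(sp.solve(sp.Eq(sp.diff(S, x), 0), x))   # [R/2]: S is maximal at x = R/2
```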
Right. · 3 years, 1 month ago
$$S = 2\pi x h$$, $$x = R(1-h/H)$$ (when S is maximum) · 3 years, 1 month ago
$$S = 2\pi x h$$; $$x = R[1-h/H]$$ · 3 years, 1 month ago
Right: $$S = 2\pi x h$$, $$x = R(1-h/H)$$ (when S is maximum) · 3 years, 1 month ago
thanks 2 all · 3 years, 1 month ago
need help fast · 3 years, 1 month ago |
# Evolutionary Rate at the Molecular Level
Motoo Kimura | 12 minutes to read.
This paper, published in 1968, is a prelude to the highly influential The neutral theory and molecular evolution published later in 1983. This paper does some computations about the rate of mutations at the DNA level. These numbers turn out to be so high that we have no choice but to accept that most of the mutations are selectively neutral. This is in contrast to the widely held notion that evolution, and therefore mutations, happen by natural selection. Selection must just be one of the many evolutionary forces that shape an organism.
## Abstract
Calculating the rate of evolution in terms of nucleotide substitutions seems to give a value so high that many of the mutations involved must be neutral ones.
## Paper
Comparative studies of haemoglobin molecules among different groups of animals suggest that, during the evolutionary history of mammals, amino-acid substitution has taken place roughly at the rate of one amino-acid change in $10^7$ yr for a chain consisting of some 140 amino-acids. For example, by comparing the $\alpha$ and $\beta$ chains of man with those of horse, pig, cattle and rabbit, the figure of one amino-acid change in $7 \times 10^6$ yr was obtained [1]. This is roughly equivalent to the rate of one amino-acid substitution in $10^7$ yr for a chain consisting of 100 amino-acids.
A comparable value has been derived from the study of the haemoglobin of primates [2]. The rate of amino-acid substitution calculated by comparing mammalian and avian cytochrome c (consisting of about 100 amino-acids) turned out to be one replacement in $45 \times 10^6$ yr (ref. 3). Also by comparing the amino-acid composition of human triosephosphate dehydrogenase with that of rabbit and cattle [4], a figure of at least one amino-acid substitution for every $2.7 \times 10^8$ yr can be obtained for the chain consisting of about 1,110 amino-acids. This figure is roughly equivalent to the rate of one amino-acid substitution in $30 \times 10^6$ yr for a chain consisting of 100 amino-acids. Averaging those figures for haemoglobin, cytochrome c and triosephosphate dehydrogenase gives an evolutionary rate of approximately one substitution in $28 \times 10^6$ yr for a polypeptide chain consisting of 100 amino-acids.
I intend to show that this evolutionary rate, although appearing to be very low for each polypeptide chain of a size of cytochrome c, actually amounts to a very high rate for the entire genome.
First, the DNA content in each nucleus is roughly the same among different species of mammals such as man, cattle and rat (see, for example, ref. 5). Furthermore, we note that the G-C content of DNA is fairly uniform among mammals, lying roughly within the range of 40-44 percent [6]. These two facts suggest that nucleotide substitution played a principal part in mammalian evolution.
In the following calculation, I shall assume that the haploid chromosome complement comprises about $4 \times 10^9$ nucleotide pairs, which is the number estimated by Muller [7] from the DNA content of human sperm. Each amino-acid is coded by a nucleotide triplet (codon), and so a polypeptide chain of 100 amino-acids corresponds to 300 nucleotide pairs in a genome. Also, amino-acid replacement is the result of nucleotide replacement within a codon. Because roughly 20 per cent of nucleotide replacements caused by mutation are estimated to be synonymous [8], that is, they code for the same amino-acid, one amino-acid replacement may correspond to about 1.2 base pair replacements in the genome. The average time taken for one base pair replacement within a genome is therefore
$28 \times 10^6 \text{ yr} \div \left(\frac{4 \times 10^9}{300}\right) \div 1.2 = 1.8 \text{ yr}$
This means that in the evolutionary history of mammals, nucleotide substitution has been so fast that, on average, one nucleotide pair has been substituted in the population roughly every 2 yr.
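(Editorial aside: the arithmetic in the displayed expression can be replayed directly.)

```python
# Sanity check of the figure above; all numbers are taken from the text.
years_per_aa_sub = 28e6    # yr per amino-acid substitution, per 100-aa chain
genome_pairs = 4e9         # nucleotide pairs per haploid genome (Muller)
chain_pairs = 300          # nucleotide pairs coding a 100 amino-acid chain
synonymous_factor = 1.2    # ~20% of nucleotide replacements are synonymous

chains_per_genome = genome_pairs / chain_pairs                    # ~1.33e7
print(years_per_aa_sub / chains_per_genome / synonymous_factor)   # ~1.75 yr
```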
This figure is in sharp contrast to Haldane's well known estimate [9] that, in horotelic evolution (standard rate evolution), a new allele may be substituted in a population roughly every 300 generations. He arrived at this figure by assuming that the cost of natural selection per generation (the substitutional load in my terminology [10]) is roughly 0.1, while the total cost for one allelic substitution is about 30. Actually, the calculation of the cost based on Haldane's formula shows that if new alleles produced by nucleotide replacement are substituted in a population at the rate of one substitution every 2 yr, then the substitutional load becomes so large that no mammalian species could tolerate it.
Thus the very high rate of nucleotide substitution which I have calculated can only be reconciled with the limit set by the substitutional load by assuming that most mutations produced by nucleotide replacement are almost neutral in natural selection. It can be shown that in a population of effective size $N_e$, if the selective advantage of the new allele over the pre-existing alleles is $s$, then, assuming no dominance, the total load for one gene substitution is
$L(p) = 2 \left\{ \frac{1}{u(p)} - 1 \right\} \int_0^{4Sp} \frac{e^y - 1}{y} dy - 2e^{-4S} \int_{4Sp}^{4S} \frac{e^y}{y} dy + 2 \log_e \left(\frac{1}{p}\right) \tag{1}$
where $S = N_{e}s$ and $p$ is the frequency of the new allele at the start. The derivation of the foregoing formula will be published elsewhere. In the expression given here $u(p)$ is the probability of fixation given by [11]
$u(p) = (1 - e^{-4Sp})/(1 - e^{-4S}) \tag{2}$
Now, in the special case of $|2N_{e}s| \ll 1$, formulae (1) and (2) reduce to
$L(p) = 4 N_{e}s \log_e(1/p) \tag{1'}$

$u(p) = p + 2 N_{e}s\, p(1-p) \tag{2'}$
Formula (1’) shows that for a nearly neutral mutation the substitutional load can be very low and there will be no limit to the rate of gene substitution in evolution. Furthermore, for such a mutant gene, the probability of fixation (that is, the probability by which it will be established in the population) is roughly equal to its initial frequency as shown by equation (2’). This means that new alleles may be produced at the same rate per individual as they are substituted in the population in evolution.
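A small numerical sketch of equations (2) and (2') follows; the parameter values are illustrative only, not taken from the paper:

```python
import numpy as np

def u(p, Ne, s):
    """Fixation probability of a new allele, eq. (2), with S = Ne * s."""
    S = Ne * s
    if abs(S) < 1e-12:
        return p                        # strictly neutral limit
    return (1 - np.exp(-4 * S * p)) / (1 - np.exp(-4 * S))

p0 = 1 / (2 * 10_000)                   # a single new mutant in a population of 10,000
print(u(p0, Ne=10_000, s=1e-6))         # |2*Ne*s| << 1: close to p0, as eq. (2') predicts
print(u(p0, Ne=10_000, s=1e-3))         # advantageous allele: well above p0
```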
This brings the rather surprising conclusion that in mammals neutral (or nearly neutral) mutations are occurring at the rate of roughly 0.5 per yr per gamete. Thus, if we take the average length of one generation in the history of mammalian evolution as 4 yr, the mutation rate per generation for neutral mutations amounts to roughly two per gamete and four per zygote ($5 \times 10^{-10}$ per nucleotide site per generation).
Such a high rate of neutral mutations is perhaps not surprising, for Mukai [12] has demonstrated that in Drosophila the total mutation rate for “viability polygenes” which on the average depress the fitness by about 2 per cent reaches at least some 35 per cent per gamete. This is a much higher rate than previously considered. The fact that neutral or nearly neutral mutations are occurring at a rather high rate is compatible with the high frequency of heterozygous loci that has been observed recently by studying protein polymorphism in human and Drosophila populations [13-15].
Lewontin and Hubby [15] estimated that in natural populations of Drosophila pseudoobscura an average of about 12 per cent of loci in each individual is heterozygous. The corresponding heterozygosity with respect to nucleotide sequence should be much higher. The chemical structure of enzymes used in this study does not seem to be known at present, but in the typical case of esterase-5 the molecular weight was estimated to be about $10^5$ by Narise and Hubby [16]. In higher organisms, enzymes with molecular weight of this magnitude seem to be common and usually they are “multimers” [17]. So, if we assume that each of those enzymes comprises on the average some 1,000 amino-acids (corresponding to a molecular weight of some 120,000), the mutation rate for the corresponding genetic site (consisting of about 3,000 nucleotide pairs) is
$u = 3 \times 10^3 \times 5 \times 10^{-10} = 1.5 \times 10^{-6}$
per generation. The entire genome could produce more than a million such enzymes.
In applying this value of $u$ to Drosophila it must be noted that the mutation rate per nucleotide pair per generation can differ in man and Drosophila. There is some evidence that with respect to the definitely deleterious effects of gene mutation, the rate of mutation per nucleotide pair per generation is roughly ten times as high in Drosophila as in man [18, 19]. This means that the corresponding mutation rate for Drosophila should be $u=1.5 \times 10^{-5}$ rather than $u=1.5 \times 10^{-6}$. Another consideration allows us to suppose that $u=1.5 \times 10^{-5}$ is probably appropriate for the neutral mutation rate of a cistron in Drosophila. If we assume that the frequency of occurrence of neutral mutations is about one per genome per generation (that is, they are roughly two to three times more frequent than the mutation of the viability polygenes), the mutation rate per nucleotide pair per generation is $1/(2 \times 10^8)$, because the DNA content per genome in Drosophila is about one-twentieth of that of man [20]. For a cistron consisting of 3,000 nucleotide pairs, this amounts to $u=1.5 \times 10^{-5}$.
Kimura and Crow [21] have shown that for neutral mutations the probability that an individual is homozygous is $1/(4 N_{e}u +1)$, where $N_e$ is the effective population number, so that the probability that an individual is heterozygous is $H_e=4 N_{e}u/(4 N_{e}u +1)$. In order to attain at least $H_e=0.12$, it is necessary that at least $N_e=2,300$. For a higher heterozygosity such as $H_e=0.35$, $N_e$ has to be about 9,000. This might be a little too large for the effective number in Drosophila, but with migration between subgroups, heterozygosity of 35 per cent may be attained even if $N_e$ is much smaller for each subgroup.
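Re-deriving the effective-size figures from the Kimura–Crow formula is a one-liner; a sketch of the arithmetic:

```python
u = 1.5e-5                       # neutral mutation rate per cistron in Drosophila

def required_Ne(He):
    # Invert He = 4*Ne*u / (4*Ne*u + 1) for the effective population number Ne
    return He / ((1 - He) * 4 * u)

print(round(required_Ne(0.12)))  # ~2,300
print(round(required_Ne(0.35)))  # ~9,000
```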
We return to the problem of total mutation rate. From a consideration of the average energy of hydrogen bonds and also from the information on mutation of rIIA gene in phage $T_4$, Watson [22] obtained $10^{-8} \sim 10^{-9}$ as the average probability of error in the insertion of a new nucleotide during DNA replication. Because in man the number of cell divisions along the germ line from the fertilized egg to a gamete is roughly 50, the rate of mutation resulting from base replacement according to these figures may be $50 \times 10^{-8} \sim 50 \times 10^{-9}$ per nucleotide pair per generation. Thus, with $4 \times 10^{9}$ nucleotide pairs, the total number of mutations resulting from base replacement may amount to $200 \sim 2,000$. This is 100-1,000 times larger than the estimate of 2 per generation and suggests that the mutation rate per nucleotide pair is reduced during evolution by natural selection [18,19].
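The corresponding arithmetic, for completeness:

```python
genome_pairs = 4e9
germ_line_divisions = 50                  # cell divisions from egg to gamete in man
for error_rate in (1e-9, 1e-8):           # per-nucleotide error per replication (Watson)
    print(genome_pairs * germ_line_divisions * error_rate)   # 200.0, then 2000.0 per gamete
```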
Finally, if my chief conclusion is correct, and if the neutral or nearly neutral mutation is being produced in each generation at a much higher rate than has been considered before, then we must recognize the great importance of random genetic drift due to finite population number [23] in forming the genetic structure of biological populations. The significance of random genetic drift has been deprecated during the past decade. This attitude has been influenced by the opinion that almost no mutations are neutral, and also that the number of individuals forming a species is usually so large that random sampling of gametes should be negligible in determining the course of evolution, except possibly through the “founder principle” [24]. To emphasize the founder principle but deny the importance of random genetic drift due to finite population number is, in my opinion, rather similar to assuming a great flood to explain the formation of deep valleys but rejecting a gradual but long-lasting process of erosion by water as insufficient to produce such a result.
### References
1. Zuckerkandl, E., and Pauling, L., in Evolving Genes and Proteins (edit. by Bryson, V., and Vogel, H. J.), 97 (Academic Press, New York, 1965).
2. Buettner-Janusch, J., and Hill, R. L., in Evolving Genes and Proteins (edit. by Bryson, V., and Vogel, H. J.), 167 (Academic Press, New York, 1965).
3. Margoliash, E., and Smith, E. L., in Evolving Genes and Proteins (edit. by Bryson, V., and Vogel, H. J.), 221 (Academic Press, New York, 1965).
4. Kaplan, N. O., in Evolving Genes and Proteins (edit. by Bryson, V., and Vogel, H. J.), 221 (Academic Press, New York, 1965).
5. Sager, R., and Ryan, F. J., Cell Heredity (John Wiley and Sons, New York, 1961).
6. Sueoka, N., J. Mol. Biol., 3, 31 (1961).
7. Muller, H. J., Bull. Amer. Math. Soc., 64, 137 (1958).
8. Kimura, M., Genet. Res. (in the press).
9. Haldane, J. B. S., J. Genet., 55, 511 (1957).
10. Kimura, M., J. Genet., 57, 21 (1960).
11. Kimura, M., Ann. Math. Stat., 28, 882 (1957).
12. Mukai, T., Genetics, 50, 1 (1964).
13. Harris, H., Proc. Roy. Soc., B, 164, 298 (1966).
14. Hubby, J. L., and Lewontin, R. C., Genetics, 54, 577 (1966).
15. Lewontin, R. C., and Hubby, J. L., Genetics, 54, 595 (1966).
16. Narise, S., and Hubby, J. L., Biochim. Biophys. Acta, 122, 281 (1966).
17. Fincham, J. R. S., Genetic Complementation (Benjamin, New York, 1966).
18. Muller, H. J., in Heritage from Mendel (edit. by Brink, R. A.), 419 (University of Wisconsin Press, Madison, 1967).
19. Kimura, M., Genet. Res., 9, 23 (1967).
20. Report of the United Nations Scientific Committee on the Effects of Atomic Radiation (New York, 1958).
21. Kimura, M., and Crow, J. F., Genetics, 49, 725 (1964).
22. Watson, J. D., Molecular Biology of the Gene (Benjamin, New York, 1965).
23. Wright, S., Genetics, 16, 97 (1931).
24. Mayr, E., Animal Species and Evolution (Harvard University Press, Cambridge, 1963).
# Use rational exponents to simplify. Write the answer in radical notation if appropriate: $\left(\sqrt[7]{cd}\right)^{14}$
avortarF
Given:
$\left(\sqrt[7]{cd}\right)^{14}$
We use the property of rational exponents:

$\left(\sqrt[b]{a}\right)^{c} = \left(a^{1/b}\right)^{c} = a^{c/b}$

Applying it here:

$\left(\sqrt[7]{cd}\right)^{14} = \left((cd)^{1/7}\right)^{14} = (cd)^{14/7} = (cd)^{2} = c^{2}d^{2}$
The final answer is: $(cd)^{2} = c^{2}d^{2}$
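As a quick sanity check, the same simplification can be done with SymPy (a sketch; we assume $c$ and $d$ are positive symbols so the exponent rules apply):

```python
import sympy as sp

c, d = sp.symbols('c d', positive=True)
print(sp.expand(sp.root(c * d, 7) ** 14))   # c**2*d**2
```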
# Material covered in class
• Notes
• The material for this part of the class is not covered in the textbook. |
# Compute Mean from Histogram: Week 6 Quiz
Here, if I use the centre-of-gravity (CG) concept, then it seems clear that the fulcrum will be at 6. The bin size is also 1. So why is the answer 5.98?
The mean from a histogram is calculated as a weighted average.
If I understand correctly, a bin size of 1 does not mean that the mean will be exactly one of the values on the x-axis. However, when the bin size is 1, the calculated mean and the actual mean will be the same.
The positive and negative deviations from 6 are the same, so they cancel each other out. So how is this different from the weighted average?
No, it’s not the same: the x-axis here acts as the weights, and the y-axis as the number of such weights.
So we cannot calculate the mean this way.
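For reference, the weighted-average computation looks like this; the bin centres and counts below are made up for illustration, not the quiz's data:

```python
import numpy as np

centres = np.array([3, 4, 5, 6, 7, 8, 9])   # x-axis: bin centres (the values)
counts = np.array([2, 5, 9, 12, 8, 6, 3])   # y-axis: frequency of each bin

mean = np.sum(centres * counts) / np.sum(counts)
print(mean)   # generally not a whole bin centre, e.g. ~6.09 here
```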
Ann. Geophys., 38, 385–394, 2020
https://doi.org/10.5194/angeo-38-385-2020

Special issue: 7th Brazilian meeting on space geophysics and aeronomy

Regular paper | 24 Mar 2020
# Characterization of gravity waves in the lower ionosphere using very low frequency observations at Comandante Ferraz Brazilian Antarctic Station
Emilia Correia1,2, Luis Tiago Medeiros Raunheitte2, José Valentin Bageston3, and Dino Enrico D'Amico2
• 1Instituto Nacional de Pesquisas Espaciais, INPE, São José dos Campos, São Paulo, Brazil
• 2Centro de Rádio Astronomia e Astrofísica Mackenzie, Universidade Presbiteriana Mackenzie, São Paulo, São Paulo, Brazil
• 3Centro Regional Sul de Pesquisas Espaciais, CRS/INPE, Santa Maria, Rio Grande do Sul, Brazil
Correspondence: Emilia Correia ([email protected])
Abstract
The goal of this work is to investigate the gravity wave (GW) characteristics in the low ionosphere using very low frequency (VLF) radio signals. The spatial modulations produced by the GWs affect the electron density at the reflection height of the VLF signals, which produces fluctuations of the electrical conductivity in the D region that can be detected as variations in the amplitude and phase of VLF narrowband signals. The analysis considered the VLF signal transmitted from the US Cutler, Maine (NAA) station that was received at Comandante Ferraz Brazilian Antarctic Station (EACF, 62.1° S, 58.4° W), with its great circle path crossing the Drake Passage longitudinally. The wave periods of the GWs detected in the low ionosphere are obtained by applying wavelet analysis to the VLF amplitude. Here the VLF technique was used as a new approach for monitoring GW activity. It was validated by comparing the wave period and duration of one GW event observed simultaneously with a co-located airglow all-sky imager, both operating at EACF. The statistical analysis of the seasonal variation of the wave periods detected using the VLF technique for 2007 showed that GW events occurred on all observed days, with waves with a period between 5 and 10 min dominating during night hours from May to September, while during daytime hours waves with a period between 0 and 5 min were predominant throughout the year and dominated on all days from November to April. These results show that the VLF technique is a powerful tool to obtain the wave period and duration of GW events in the low ionosphere, with the advantage of being independent of sky conditions, and it can be used during the whole day and year-round.
1 Introduction
The upper part of the middle atmosphere, the upper mesosphere and lower thermosphere (MLT), is dominated by the effects of the atmospheric waves (acoustic–gravity waves, gravity waves, tides and planetary waves) with periods from a few seconds to hours, which originate at tropospheric and stratospheric layers or even from in situ generation. The waves with a period below the acoustic cutoff, which is typically less than a few minutes, are classified as acoustic waves, and the waves with a period above the Brunt–Väisälä period, which is typically about 5 min, are classified as gravity waves (Beer, 1974).
During the last decades, due to the recognized importance of the gravity waves (GWs) in the general circulation, structure, and variability in the MLT, and as an essential component in the Earth climate system (Fritts and Alexander, 2003; Alexander et al., 2010), these waves have been intensively investigated. For example, Ern et al. (2011), using data from the SABER instrument on board the TIMED satellite, estimated the horizontal gravity wave momentum flux and showed that the fluxes at stratospheric heights (40 km) are stronger at latitudes above 50° in local winter and near the subtropics in the summer hemisphere. This is in agreement with Wang et al. (2005) and Zhang et al. (2012), who used temperature soundings of the same instrument and showed high gravity wave activity over regions of strong convection located at lower latitudes in summer and over the southern Andes and Antarctic Peninsula in winter. The sources of mesospheric GWs obtained through a high-resolution general circulation model also show that the dominant sources are steep mountains and strong upper-tropospheric westerly jets in winter and intense subtropical monsoon convection in summer (Sato et al., 2009). Thus, any major disturbances that occur in the stratosphere can significantly modify the GW fluxes, which in turn change the thermal and wind structures of the MLT region. One of these disturbances is the sudden stratospheric warming (e.g., Schoeberl, 1978), a large-scale perturbation of the polar winter stratosphere where the gradients of winds and temperatures are reversed for periods of days to weeks.
Acoustic–gravity waves (AGWs) and GWs are generated simultaneously by the same tropospheric sources and produce strong temperature perturbations in the thermosphere (e.g., Snively, 2013). The atmospheric gravity waves originate in the lower atmosphere and propagate upwards, traveling through regions with decreasing density, which results in an exponential growth of their amplitudes (e.g., Andrews et al., 1987). The large wave amplitudes lead to wave breaking, which deposits the momentum flux at the MLT region, which comes mostly from waves with periods lower than 30 min (Fritts and Vincent, 1987; Vincent, 2015). Theoretical, numerical, and observational studies have improved the understanding of the GW sources, observed parameters (wavelength, period, and velocity), propagation directions (isotropic/anisotropic), spectrum of intrinsic wavelengths and periods, and moment fluxes, as well as their impact in the MLT region. A variety of techniques have been used to obtain wave parameters, such as the horizontal and vertical wavelengths, phase speeds, and periods, involving satellite observations as well as ground-based instrumentation. Each technique has its own strengths and limitations as presented, for example, by Vincent (2015).
The GW activity has been extensively observed mainly by using airglow all-sky imagers that permit one to obtain the horizontal wave parameters and the propagation directions of the small-scale waves (e.g., Taylor et al., 1995). In airglow imagers the GWs are seen as intensity variations of the optical emission from airglow layers located at the MLT region (80–100 km altitude), but this technique requires dark and cloud-free conditions during the night. Particularly at high latitudes it is impossible to observe the nightglow during the summer since there are no totally dark conditions during this season.
In order to avoid the limitations of the optical airglow observations, other techniques using radio soundings started to be used to characterize the mesospheric GWs in the ionospheric D and E regions. The propagation of GWs through the mesosphere induces spatial modulations in the neutral density, which modulates the electron production rate and the effective collision frequency between the neutral components and electrons in the lower ionosphere. The ionospheric absorption of the cosmic radio noise is a function of the product of these two parameters, and so the fluctuations produced by the effect of GWs can be detected by imaging riometers. The ionospheric absorption modulations observed with different riometer beams permit one to infer the gravity wave parameters such as the phase velocity, period, and direction of propagation, as demonstrated by Jarvis et al. (2003) and Moffat-Griffin et al. (2008). They validated this technique comparing mesospheric GW signatures observed by using both a co-located imaging riometer and airglow imager. AGWs in the ionosphere have been mapped using Global Positioning System total electron content data. As reported by Nishioka et al. (2013), both AGWs and GWs are often observed to persist over hours.
The atmospheric gravity waves can also be detected in the lower ionosphere using very low frequency (VLF: 3–30 kHz) radio signals. The amplitude and phase of VLF signals propagating in the Earth–ionosphere waveguide are affected by the conditions of the local electron density at reflection height, which is in the ionospheric D region. The spatial modulations produced by the GWs in the neutral density produce fluctuations of the electrical conductivity in the D region, which are detected as variations in the amplitude and phase of VLF narrowband (NB) signals. AGWs have been detected as amplitude variations of VLF signals associated with solar terminator motions (Nina and Cadez, 2013), with the passage of tropical cyclones crossing the transmitter–receiver VLF propagation path (Rozhnoi et al., 2014), and particularly during nighttime, in association with local convective and lightning activity (Marshall and Snively, 2014). Planetary wave signatures have also been detected in the VLF NB amplitude data, whose effects are pronounced during wintertime and present a predominant quasi 16 d oscillation (Correia et al., 2011, 2013; Schmitter, 2012; Pal et al., 2015).
The advantage of using radio techniques to observe AGWs instead of the optical ones is that they are able to provide observations independently of the sky conditions, even during the daytime, and year-round. The purpose of this paper is to present the characterization of the GW events detected in the lower ionosphere from the analysis of the VLF NB amplitude of signals detected at Comandante Ferraz Brazilian Antarctic Station (EACF). The wave parameters such as the period and the time duration of the GW activity will be obtained from the spectral analysis of the VLF amplitude fluctuations. The methodology using the VLF technique is validated comparing the derived parameters of one GW event detected simultaneously with a co-located airglow all-sky imager.
2 Instrumentation and data analysis
The VLF signals propagate over long distances via multiple reflections, with considerably low attenuation, and are detected by VLF receivers after being reflected in the lower ionosphere at ∼70–90 km of height (e.g., Wait and Spies, 1964). The changes detected in the amplitude and phase of the VLF NB signals give information on the D-region physical and dynamic conditions along the transmitter–receiver great circle path (GCP), which are associated with the ionosphere electrical conductivity. This analysis uses VLF signals transmitted from the US Navy stations at Cutler, Maine (24.0 kHz, NAA), and at Lualualei, Hawaii (21.4 kHz, NPM), which after propagating along the GCPs NAA–EACF and NPM–EACF were detected with 1 s time resolution using an AWESOME receiver (Scherrer et al., 2008) operating at EACF (62.1° S, 58.4° W) station located on King George Island in the Antarctic Peninsula (Fig. 1).
Figure 1VLF propagation paths from NAA and NPM transmitters to the receiver stations located at Comandante Ferraz Brazilian Antarctic Station (EACF) (blue paths) and Atibaia, São Paulo (red path).
The GW parameters were obtained from the VLF NB amplitude signals using a wavelet spectral analysis, which gives the wave period and time duration of GW activity, as will be described in the following section. To demonstrate the potential of the VLF technique to observe GWs, the spectral analysis is applied during the night of 9–10 July 2007, when a prominent GW event (mesospheric front) occurred. It was well observed and characterized by using a co-located airglow imager along with temperature profiles from TIMED/SABER and horizontal winds from a medium-frequency (MF) radar operated at Rothera Station (Bageston et al., 2011). Afterwards, a year-round climatology of the GW wave periods was obtained from the amplitude data of VLF signals propagating in the NAA–EACF GCP for the full year of 2007.
## 2.1 Wavelet spectral analysis
The wavelet analysis was used to obtain the parameters of VLF amplitude signal fluctuations, which might be associated with the time and duration of the GW event and the period range it covers. The tool used was developed by Torrence and Compo (1998) and includes the rectification of the bias in favor of large scales in the wavelet power spectrum, which was introduced by Liu et al. (2007). The analysis uses the Morlet mother wavelet with a frequency parameter equal to 6, significance level of 95 %, and time lag of 0.72 (Torrence and Compo, 1998). The wavelet analysis returns the following general results: the power spectrum; the global wavelet spectra, which measures the time-averaged wavelet power spectra over a certain period and its significance level; and scale-averaged wavelet power, which is the weighted sum of the wavelet power spectrum over 2 to 64 bands.
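To make the processing chain concrete, a minimal Python sketch is given below. The synthetic signal, sampling step, and period band are illustrative stand-ins rather than the paper's data; it assumes a SciPy version that still ships `signal.cwt` (removed in SciPy 1.15) and omits the bias rectification of Liu et al. (2007):

```python
import numpy as np
from scipy import signal

dt = 15.0                                    # sampling interval [s]
t = np.arange(0, 3.5 * 3600, dt)
# Synthetic amplitude residual: 6 min and 14 min oscillations plus noise
x = (np.sin(2 * np.pi * t / (6 * 60))
     + 0.6 * np.sin(2 * np.pi * t / (14 * 60))
     + 0.2 * np.random.randn(t.size))

w = 6.0                                      # Morlet frequency parameter
periods = np.linspace(2 * 60, 30 * 60, 120)  # 2-30 min band [s]
widths = w / (2 * np.pi * (1.0 / periods) * dt)  # CWT scales for each period

power = np.abs(signal.cwt(x, signal.morlet2, widths, w=w)) ** 2
global_ws = power.mean(axis=1)               # time-averaged (global) spectrum
print(periods[np.argmax(global_ws)] / 60)    # predominant period, ~6 min
```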
The wavelet analysis was applied to the VLF data obtained at EACF during the night of 10 July 2007, when a GW event was observed with a co-located airglow imager. This was done to compare the wave period and event duration parameters obtained from VLF data with the ones obtained from all-sky images.
Figure 2: Processed all-sky images of the GW event observed at EACF (red symbol) at 23:30 and 23:41 LT (UT−3) on the night of 9–10 July 2007, showing the mesospheric front (white box) propagating from west/southwest to east/northeast (arrow direction in the first image). The images were projected at the mesospheric layer in order to have a spatial area as good as 312 km × 312 km without significant distortion in the unwrapped images.
Figure 3: VLF amplitude from the NAA transmitter station detected with 15 s time resolution at EACF on 10 July 2007. The vertical lines mark the sunrise (SR) and sunset (SS) at the NAA transmitter station (T, full lines) and at the receiver station (R, dashed lines). The completely night and completely day periods in the NAA–EACF VLF path are identified. The box marks the time interval of data used to perform the spectral analysis.
Figure 2 shows two processed airglow images plotted in geographical coordinates, centered at Comandante Ferraz Station (denoted by the red symbol) and observed on the night of 9–10 July 2007, when it was possible to identify a gravity wave event (inside the white box) in the upper mesosphere by using a wideband near-infrared hydroxyl (OH-NIR) filter. The wave propagation direction is denoted by the arrow just ahead of the box in the first image. The date and time of observation are indicated at the top of the map. The latitude and longitude (each 2° apart) are also shown, as well as the horizontal distances (in km), respectively in latitude and longitude, just above and on the left of the airglow images, for distances of 2° (in latitude) and 4° (in longitude). The images were processed as follows: star field subtraction, correction for the fish-eye lens format, and application of the time difference (TD) image processing to a short set of images. The small projected area (312 km × 312 km, resolution of 1 km per pixel) was caused by the limitations of the CCD size relative to the optical system, since this is a low-cost CCD that was adapted in an old optical system (nowadays the optical system is reassembled to allow a useful area in the CCD of 512 pixels × 512 pixels). This mesospheric GW was classified by Bageston et al. (2011) as a mesospheric front observed at EACF from about 23:20 LT (LT = UT−3) up to 23:53 LT. The analysis was performed from 23:20 to 23:42 LT, when an increase in the number of wave crests was visible in the wave packet as it propagated across the field of view of the sky, and this growth rate was inferred as four wave crests per hour (Bageston et al., 2011). The fast Fourier transform 2D spectral analysis was applied to six images from 23:32 to 23:38 LT on 9 July (02:32 to 02:38 UT on 10 July), and the following wave parameters were obtained: horizontal wavelength of 33 km, observed period of 6 min, and observed phase speed of 92 m s−1. During the same night, this event was observed with a co-located near-zenithal (field of view about 22° off-zenith) temperature airglow imaging spectrometer, which observes the OH (6–2) band emission (FotAntar-3, Bageston et al., 2007). The spectral analysis of the temperature showed evidence of small-scale gravity waves with a predominant period of ∼14 min (Bageston et al., 2011). Since the spectrometer has a smaller field of view (∼70 km in diameter) compared to the all-sky imager (∼300 km of diameter in the un-warped images), the larger predominant periodicity obtained from the temperature could be one component of the main wave observed with the airglow all-sky imager (Bageston et al., 2011). These parameters are similar to the ones obtained for mesospheric fronts or bore-type events, which were understood as a rare type of gravity wave at polar latitudes and were first observed at Halley Station in May 2001 (Nielsen et al., 2006). Nowadays, with more observations, it is clear that the mesospheric fronts or bores are more likely to be observed at middle to high latitudes (even in unexpected places such as the South Pole), as can be noted in the recent studies on this subject (e.g., Pautet et al., 2018; Giongo et al., 2018; Hozumi et al., 2018).
Figure 4: Example of wavelet spectral analysis applied to the VLF amplitude signal in the NAA–EACF GCP on 10 July 2007. (a) The residual VLF amplitude after subtracting a 10 min running mean from the raw data. (b) Wavelet power spectra in logarithm (base 2), with regions of confidence levels greater than 95 % (shown with black contours), and the cross-hatched areas indicating the regions where edge effects become important. (c) Time-averaged wavelet power spectra (Global WS). (d) Scale-averaged wavelet power.
Figure 5: Same as Fig. 4, but for the VLF signal propagating in the NAA–Atibaia VLF GCP.
The VLF amplitude from the NAA transmitter detected at EACF on 10 July 2007 is shown in Fig. 3, where the vertical lines identify the sunrise and sunset hours at the transmitter (SR-T and SS-T, full lines) and receiver (SR-R and SS-R, dashed lines) stations. The wavelet spectral analysis (Fig. 4) was applied to the VLF data from 01:00 to 04:30 UT (22:00 LT on 9 July to 01:30 LT on 10 July, box in Fig. 3), which covers the nighttime interval of the images obtained with the co-located all-sky imager.
Figure 4 shows the spectral analysis applied to the VLF amplitude data. The analysis is applied to the residual value obtained after subtracting a 12 min running mean from the raw data (Fig. 4a), which implies an upper cutoff period of ∼30 min, in order to characterize the small-scale and short-period waves. Figure 4a clearly shows four strong fluctuations in the VLF amplitude between 01:50 and 02:40 UT (22:50 and 23:40 LT), which occurred in close temporal association with the crests identified in the airglow images. The last VLF fluctuation was the strongest one and ended at 02:40 UT (23:40 LT), near the time when the wave packet started to dissipate as observed in the airglow images (Bageston et al., 2011). The power spectrum of the residual VLF amplitude (Fig. 4b) shows strong significant components with periods between 4 and 16 min, with stronger peaks at ∼6 and 14 min. The global wavelet spectrum (Fig. 4c) shows a stronger component with a period between 4 and 8 min that is due to six significant events of ∼20 min duration (Fig. 4d), with one of them occurring from 02:32 to 02:38 UT (23:32 to 23:38 LT), which is the same time interval in which a wave period of 6 min was identified in the airglow images. The other significant component, with a peak at ∼14 min, is present from 01:50 to 02:40 UT (Fig. 4d), the same time interval when the four crests of the mesospheric front were identified in the airglow images. They occurred in close temporal association with the identification of gravity waves with the same period in the spectral analysis of the OH temperature obtained with the co-located imaging spectrometer (Bageston et al., 2011).
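The detrending step used above is a simple running-mean high-pass filter; a minimal sketch (function and argument names are ours, not from the paper):

```python
import numpy as np

def vlf_residual(amplitude, dt_s=15.0, window_min=10.0):
    """Residual VLF amplitude: raw series minus a centred running mean,
    which high-pass filters the data before the wavelet transform."""
    n = max(1, int(round(window_min * 60.0 / dt_s)))
    running_mean = np.convolve(amplitude, np.ones(n) / n, mode="same")
    return amplitude - running_mean
```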
Since the VLF path is quite long, we have performed a test to make sure that the wave event was detected near EACF and not at any other location in the path between the transmitter and receiver. This test considers the wavelet analysis applied to the VLF path NAA–Atibaia (NAA–ATI), which follows almost the same trajectory as NAA–EACF, but whose length is ∼50 % shorter. Figure 5 shows no wave events, at the time the event was detected in the NAA–EACF path, that had an association with the GW seen in the airglow imager, evidencing that the event occurred in the part of the VLF trajectory closer to the EACF station. This test confirms that the GW events detected by the VLF technique in the NAA–EACF path occurred near the Antarctic Peninsula and could be associated with the events observed by the airglow imager operating at EACF.
Figure 6: Monthly small-scale wave activity at EACF as detected in the low ionosphere using the VLF technique during 2007, for night (a) and daytime (b) hours. The black bars show the number of observed days per month, and the colored bars show the number of nights and days per month with GW events observed according to the predominant wave periods. The bar colors give the number of waves with a predominant period observed in each month, separated into the following period intervals: 0–5 (blue), 5–10 (red), 10–15 (green), and 15–20 min (purple).
The characterization of the GWs from VLF amplitude data using wavelet analysis demonstrated the viability of using VLF signals to obtain the period and time duration of GW events. The use of VLF observations to characterize GW events permits one to obtain their climatology year-round, since the observations are not affected by atmospheric conditions and can also be made during daytime.
## 2.2 Climatology of GW period from VLF signal
The GW climatology was based on the wavelet analysis applied to the VLF amplitude signal detected during both the nighttime and daytime hours in the NAA–EACF GCP for the full year of 2007. The wave period from the VLF technique is that of the predominant component, i.e., the one with the highest relative power amplitude in the global wavelet power spectrum. For example, in the analysis done in the previous subsection, the predominant wave period was ∼6 min. The wave period year-round climatology obtained via the VLF technique during nighttime is compared with the one obtained with the co-located airglow imager.
3 Observational results
Here the statistical analysis is presented of the predominant wave period of the GW events detected in the low ionosphere as amplitude fluctuations of the VLF signals, which is a new aspect of using the VLF technique. The analysis uses the VLF signal received at EACF during the whole year of 2007, and it is performed independently during nighttime (21:00–05:00 LT) and daytime (11:00–16:00 LT) hours, in order to avoid the influence of the sunrise and sunset terminators in the spectral analysis. The nighttime wave period properties obtained via VLF were used to compare with the wave period characteristics obtained with the co-located airglow all-sky imager.
Figure 7: Histogram plots of the predominant observed wave periods of the small-scale GWs detected in the lower portion of the ionosphere as amplitude variations of the VLF signal propagating in the NAA–EACF path during night (a) and daytime (b) hours.
The solar activity during 2007 was at low levels, since this year was near the minimum phase of the 23rd solar cycle. So it was a period of low occurrence of solar flares, most of them of GOES C class. In order to avoid the effect of D-region electron density changes associated with flares, the periods disturbed by the impact of flares were not used in the daytime wavelet analysis of the VLF signal. The geomagnetic conditions were also at low levels during 2007, with 85 % of the geomagnetic storms having a Dst index peak higher than −50 nT (weak storms) and only two moderate storms with a Dst peak of ∼ −70 nT. The monthly Dst values were higher than −15 nT and Kp lower than 2, which means low-level geomagnetic activity.
Figure 6 shows the seasonal variation of the GW occurrence rate per month, evaluated from the number of VLF observed days per month (black bars) and the respective number of nights (Fig. 6a) and days (Fig. 6b) with events detected in the low ionosphere. Small-scale GW events were detected during all nights and days of observations, with the predominant wave periods between 0 and 25 min, distributed in five period ranges (0–5, 5–10, 10–15, 15–20, and 20–25 min). The occurrence rate of the events detected during nighttime (Fig. 6a) shows that the waves with a period between 5 and 10 min occurred most often from May to September (> 60 %, winter season). They are followed by the waves with a period between 10 and 15 min, which occurred more often from October to November, suggesting an equinoctial distribution. Waves with periods < 5 min (AGWs) also suggest an equinoctial distribution, but with a higher occurrence in March. The distribution of the waves with a period between 15 and 25 min suggests a higher occurrence from October to March (Antarctic summer season). The distribution of the GWs with periods from 5 to 10 min is in excellent agreement with the statistical results of the GW events observed by the co-located airglow all-sky imager, which showed that the majority of the waves (∼85 %) were observed between June and September (Bageston et al., 2009). The daytime analysis (Fig. 6b) shows that the AGWs (period < 5 min) predominate on all days (100 %) from November to April, while the waves with a period between 5 and 10 min were predominant on some days from May to October, with a higher occurrence between June and July (Antarctic winter season), followed by the waves with a period between 10 and 20 min, dominating for only a few days from May to August with a lower occurrence in July.
Figure 7 shows the histogram plots containing the distribution of the predominant wave period of the wave events detected in the lower ionosphere using the VLF technique for the 337 nights and 268 d of observations in 2007. During nighttime (Fig. 7a), the predominant wave periods were mostly distributed between 5 and 15 min (∼80 %), with a higher number of occurrences between 5 and 10 min (∼ 50 %) and a smaller number of occurrences (∼10 %) of waves with periods below 5 min (AGWs). This wave period distribution for small-scale and short-period GWs is in good agreement with the statistics reported by Bageston et al. (2009) from the analysis of 234 GWs observed with a co-located airglow all-sky imager from April to October 2007. On the other hand, during the daytime the predominant wave periods were concentrated between 0 and 5 min (∼85 %), in the AGW range, followed by the GWs with a period between 5 and 10 min (∼10 %) and few waves (∼5 %) with periods between 10 and 20 min.
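The distributions in Figs. 6 and 7 amount to binning each night's (or day's) predominant period into 5 min wide classes; a sketch with made-up values:

```python
import numpy as np

# Hypothetical predominant periods (minutes), one per observed night
periods_min = np.array([6.2, 13.8, 4.1, 7.5, 9.9, 16.0, 3.2, 8.8, 11.4])

edges = np.arange(0, 30, 5)                 # 0-5, 5-10, ..., 20-25 min classes
counts, _ = np.histogram(periods_min, bins=edges)
for lo, n in zip(edges[:-1], counts):
    print(f"{lo}-{lo + 5} min: {n} events")
```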
4 Summary
In this work we presented an investigation of the GW characteristics in the low ionosphere, where they produce density fluctuations that were detected as amplitude variations of VLF signals. The analysis used the VLF signal transmitted from the US Cutler, Maine (NAA) station that was received at Comandante Ferraz Brazilian Antarctic Station (EACF), with a great circle path crossing the Drake Passage longitudinally. The wavelet analysis of the VLF amplitude considered the predominant small-scale wave periods observed during the daytime and night hours separately, in order to compare the wave periods observed during nighttime with the ones obtained from a co-located airglow all-sky imager. The use of the VLF technique was validated by comparing the wave period and duration properties of one GW event observed simultaneously with a co-located airglow all-sky imager.
The statistical analysis of the wave period of the GW events detected at EACF using the VLF technique for 2007 showed that GW events were observed on almost all days with VLF observations. During nighttime the waves with periods between 5 and 10 min are dominant (55 %), presenting a higher occurrence rate (larger activity) per month from May to September, with the maximum in June–July. The next most frequent waves have periods ranging from 10 to 15 min (30 %), followed by a few events (10 %) with periods lower than 5 min (AGWs). Both suggested an equinoctial distribution, with the waves with periods between 10 and 15 min occurring more often in November and the shorter-period waves in March. The wave period distribution of the 5 to 10 min component is in good agreement with the wave period distribution of the GW events observed during 2007 with the co-located airglow all-sky imager. On the other hand, during daytime the waves with a period below 5 min are dominant (85 %), and particularly from November to April they dominated on all days of the months, followed by the waves with a period between 5 and 10 min (10 %), which dominate for a few days from May to October and present a higher occurrence from June to July, and finally by the waves with periods between 10 and 20 min (5 %), which dominate just for a few days from May to August with a lower occurrence rate in July.
These results show that the VLF technique is a powerful tool to obtain the wave period and duration of GW events in the low ionosphere, with the advantage of being independent of sky conditions. It can also be used during the whole day and year-round. The VLF technique also shows its potential to simultaneously obtain the properties of the AGWs and GWs, which is important to better define the generation mechanisms of these atmospheric waves and their relevance in the Earth's thermosphere. The analysis of wave events using VLF signals from two distinct transmitter stations ∼100° apart in longitude (e.g., NAA–EACF and NPM–EACF) could also be used to obtain information about the velocity and direction of propagation of the GW events, but these tasks will be the subject of future work.
Data availability
Data availability.
The VLF data from EACF station are available upon request from the corresponding author at [email protected]. Airglow images from EACF can be solicited directly by email to José Valentin Bageston ([email protected]).
Author contributions
Author contributions.
EC conceived the study, led the implementation of data processing and analysis, and actively contributed to the discussion of results and paper writing. JVB assisted in conceiving the study, contributed to the data processing and analysis, and discussion of results and paper writing. LTMR and DED'A as Ms students also did a significant part of the data analysis work and helped with the interpretation of the results.
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Special issue statement
Special issue statement.
This article is part of the special issue “7th Brazilian meeting on space geophysics and aeronomy”. It is a result of the Brazilian meeting on Space Geophysics and Aeronomy, Santa Maria/RS, Brazil, 5–9 November 2018.
Acknowledgements
Acknowledgements.
Emilia Correia thanks the National Council for Scientific and Technological Development – CNPq, São Paulo Research Foundation – FAPESP for individual research support, and the National Institute for Space Research (INPE/MCTI). The authors also acknowledge the support of the Brazilian Ministry of Science, Technology, Innovation and Communications (MCTIc); the Ministry of the Environment (MMA); and Inter-Ministry Commission for Sea Resources (CIRM). Luis Tiago Medeiros Raunheitte thanks the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001.
Financial support
Financial support.
This research has been supported by the Conselho Nacional de Desenvolvimento Científico e Tecnológico – CNPq (grant nos. 406690/2013-8 and 303299/2016-9), the São Paulo Research Foundation – FAPESP (grant no. 2019/05455-2), and the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) – Finance Code 001.
Review statement
Review statement.
This paper was edited by Inez Batista and reviewed by two anonymous referees.
References
Alexander, M. J., Geller, M., McLandress, C., Polavarapu, S., Preusse, P., Sassi, F., Sato, K., Ern, M., Hertzog, A., Kawatani, Y., Pulido, M., Shaw, T., Sigmond, M., Vincent, R., and Watanabe, S.: Recent developments in gravity wave effects in climate models and the global distribution of gravity wave momentum flux from observations and models, Q. J. R. Meteorol. Soc., 136, 1103–1124, https://doi.org/10.1002/qj.637, 2010.
Andrews, D. G., Holton, J. R., and Leovy, C. B.: Middle atmosphere dynamics, Academic Press, London, 489 pp., 1987.
Bageston, J. V., Gobbi, D., Tahakashi, H., and Wrasse, C. M.: Development of Airglow OH Temperature Imager for Mesospheric Study, Revista Brasileira de Geofísica, 25, 27–34, https://doi.org/10.1590/S0102-261X2007000600004, 2007.
Bageston, J. V., Wrasse, C. M., Gobbi, D., Takahashi, H., and Souza, P. B.: Observation of mesospheric gravity waves at Comandante Ferraz Antarctica Station (62 S), Ann. Geophys., 27, 2593–2598, https://doi.org/10.5194/angeo-27-2593-2009, 2009.
Bageston, J. V., Wrasse, C. M., Batista, P. P., Hibbins, R. E., Fritts, D. C., Gobbi, D., and Andrioli, V. F.: Observation of a mesospheric front in a thermal-doppler duct over King George Island, Antarctica, Atmos. Chem. Phys., 11, 12137–12147, https://doi.org/10.5194/acp-11-12137-2011, 2011.
Beer, T.: Atmospheric Waves, John Wiley, New York, 300 pp., 1974.
Correia, E., Kaufmann, P., Raulin, J.-P., Bertoni, F., and Gavilan, H. R.: Analysis of daytime ionosphere behavior between 2004 and 2008 in Antarctica, J. Atmos. Sol.-Terr. Phys., 73, 2272–2278, https://doi.org/10.1016/j.jastp.2011.06.008, 2011.
Correia, E., Raulin, J. P., Kaufmann, P., Bertoni, F. C., and Quevedo, M.T.: Inter-hemispheric analysis of daytime low ionosphere behavior from 2007 to 2011, J. Atmos. Sol.-Terr. Phys., 92, 51–58, https://doi.org/10.1016/j.jastp.2012.09.006, 2013.
Ern, M., Preusse, P., Gille, J. C., Hepplewhite, C. L., Mlynczak, M. G., Russell III, J. M., and Riese, M.: Implications for atmospheric dynamics derived from global observations of gravity wave momentum flux in stratosphere and mesosphere, J. Geophys. Res., 116, D19107, https://doi.org/10.1029/2011JD015821, 2011.
Fritts, D. C. and Vincent, R. A.: Mesospheric momentum flux studies at Adelaide, Australia: Observations and a gravity wave-tidal interaction model, J. Atmos. Sci., 44, 605–619, 1987.
Fritts, D. C. and Alexander, M. J.: Gravity wave dynamics and effects in the middle atmosphere, Rev. Geophys., 41, 1003, https://doi.org/10.1029/2001RG000106, 2003.
Giongo, G. A., Bageston, J. V., Batista, P. P., Wrasse, C. M., Bittencourt, G. D., Paulino, I., Paes Leme, N. M., Fritts, D. C., Janches, D., Hocking, W., and Schuch, N. J.: Mesospheric front observations by the OH airglow imager carried out at Ferraz Station on King George Island, Antarctic Peninsula, in 2011, Ann. Geophys., 36, 253–264, https://doi.org/10.5194/angeo-36-253-2018, 2018.
Hozumi, Y., Saito, A., Sakanoi, T., Yamazaki, A., and Hosokawa, K.: Mesospheric bores at southern midlatitudes observed by ISS-IMAP/VISI: a first report of an undulating wave front, Atmos. Chem. Phys., 18, 16399–16407, https://doi.org/10.5194/acp-18-16399-2018, 2018.
Jarvis, M. J., Hibbins, R. E., Taylor, M. J., and Rosenberg, T. J.: Utilizing riometry to observe gravity waves in the sunlit mesosphere, Geophys. Res. Lett., 30, 1979, https://doi.org/10.1029/2003GL017885, 2003.
Liu, Y., Liang, X. S., and Weisberg, R. H.: Rectification of the bias in the wavelet power spectrum, J. Atmos. Ocean. Techno., 24, 2093–2102, https://doi.org/10.1175/2007JTECHO511.1, 2007
Marshall, R. A. and Snively, J. B.: Very low frequency subionospheric remote sensing of thunderstorm-driven acoustic waves in the lower ionosphere, J. Geophys. Res.-Atmos., 119, 5037–5045, https://doi.org/10.1002/2014JD021594, 2014.
Moffat-Griffin, T., Hibbins, R. E., Nielsen, K., Jarvis, M. J., and Taylor, M. J.: Observing mesospheric gravity waves with an imaging riometer, J. Atmos. Sol.-Terr. Phys., 70, 1327–1335, https://doi.org/10.1016/j.jastp.2008.04.009, 2008.
Nielsen, K., Taylor, M. J., Stockwell, R., and Jarvis, M.: An unusual mesospheric bore event observed at high latitudes over Antarctica, Geophys. Res. Lett., 33, L07803, https://doi.org/10.1029/2005GL025649, 2006.
Nina, A. and Čadež, V. M.: Detection of acoustic-gravity waves in lower ionosphere by VLF radio waves, Geophys. Res. Lett., 40, 4803–4807, https://doi.org/10.1002/grl.50931, 2013.
Nishioka, M., Tsugawa, T., Kubota, M., and Ishii, M.: Concentric waves and short-period oscillations observed in the ionosphere after the 2013 Moore EF5 tornado, Geophys. Res. Lett., 40, 5581–5586, https://doi.org/10.1002/2013GL057963, 2013.
Pal, S., Chakraborty, S., and Chakrabarti, S. K.: On the use of Very Low Frequency transmitter data for remote sensing of atmospheric gravity and planetary waves, Adv. Sp. Res., 55, 1190–1198, https://doi.org/10.1016/j.asr.2014.11.023, 2015.
Pautet, P.-D., Taylor, M. J., Snively, J. B., and Solorio, C.: Unexpected occurrence of mesospheric frontal gravity wave events over South Pole (90 S), J. Geophys. Res., 123, 160–173, https://doi.org/10.1002/2017JD027046, 2018.
Rozhnoi, A., Solovieva, M., Levin, B., Hayakawa, M., and Fedun, V.: Meteorological effects in the lower ionosphere as based on VLF/LF signal observations, Nat. Hazards Earth Syst. Sci., 14, 2671–2679, https://doi.org/10.5194/nhess-14-2671-2014, 2014.
Sato, K., Watanabe, S., Kawatani, Y., Tomikawa, Y., Miyazaki, K., and Takahashi, M.: On the origins of mesospheric gravity waves, Geophys. Res. Lett., 36, L19801, https://doi.org/10.1029/2009GL039908, 2009.
Scherrer, D., Cohen, M., Hoeksema, T., Inan, U., Mitchell, R., and Scherrer, P.: Distributing space weather monitoring instruments and educational materials worldwide for IHY2007: the AWESOME and SID project, Adv. Space Res., 42, 1777–1785, https://doi.org/10.1016/j.asr.2007.12.013, 2008.
Schmitter, E. D.: Data analysis of low frequency transmitter signals received at a midlatitude site with regard to planetary wave activity, Adv. Radio Sci., 10, 279–284, https://doi.org/10.5194/ars-10-279-2012, 2012.
Schoeberl, M. R.: Stratospheric warmings: Observations and theory, Rev. Geophys. Space Phys., 16, 521–538, https://doi.org/10.1029/RG016i004p00521, 1978.
Snively, J. B.: Mesospheric hydroxyl airglow signatures of acoustic and gravity waves generated by transient tropospheric forcing, Geophys. Res. Lett., 40, 4533–4537, https://doi.org/10.1002/grl.50886, 2013.
Taylor, M. J. and Garcia, F. J.: A two-dimensional spectral analysis of short period gravity waves imaged in the OI(557.7 nm) and near infrared OH nightglow emissions over Arecibo, Puerto Rico, Geophys. Res. Lett., 22, 2473–2476, https://doi.org/10.1029/95GL02491, 1995.
Torrence, C. and Compo, G. P.: A practical guide to wavelet analysis, Bull. Amer. Meteor. Soc., 79, 61–78, 1998.
Vincent, R. A.: The dynamics of the mesosphere and lower thermosphere: a brief review, Prog. Earth Planet. Sc., 2, 1–13, https://doi.org/10.1186/s40645-015-0035-8, 2015.
Wait, J. R. and Spies, K. P.: Characteristics of the Earth-ionosphere waveguide for VLF radio waves, US Dept. of Commerce, National Bureau of Standards, 1964.
Wang, L., Geller, M. A., and Alexander, M. J.: Spatial and temporal variations of gravity wave parameters. Part I: intrinsic frequency, wavelength, and vertical propagation direction, J. Atmos. Sci., 62, 125–142, https://doi.org/10.1175/JAS-3364.1, 2005.
Zhang, Y., Xiong, J., Liu, L., and Wan, W.: A global morphology of gravity wave activity in the stratosphere revealed by the 8-year SABER/TIMED data, J. Geophys. Res., 117, 21101, https://doi.org/10.1029/2012jd017676, 2012. |
# On the intersection graph of ideals of a commutative ring

Keywords: intersection graph, perfect graph, clique number, chromatic number, diameter, girth.

2010 Mathematics Subject Classification: 05C15, 05C17, 05C69, 13A99, 13C99.
## Abstract
Let $R$ be a commutative ring and $M$ be an $R$-module, and let $\mathbb{I}(R)^{*}$ be the set of all non-trivial ideals of $R$. The $M$-intersection graph of ideals of $R$, denoted by $G_{M}(R)$, is a graph with the vertex set $\mathbb{I}(R)^{*}$, and two distinct vertices $I$ and $J$ are adjacent if and only if $IM \cap JM \neq 0$. For every multiplication $R$-module $M$, the diameter and the girth of $G_{M}(R)$ are determined. Among other results, we prove that if $M$ is a faithful $R$-module and the clique number of $G_{M}(R)$ is finite, then $R$ is a semilocal ring. We denote the $\mathbb{Z}_{m}$-intersection graph of ideals of the ring $\mathbb{Z}_{n}$ by $G_{\mathbb{Z}_{m}}(\mathbb{Z}_{n})$, where $n, m \geq 2$ are integers and $\mathbb{Z}_{m}$ is a $\mathbb{Z}_{n}$-module. We determine the values of $n$ and $m$ for which $G_{\mathbb{Z}_{m}}(\mathbb{Z}_{n})$ is perfect. Furthermore, we derive a sufficient condition for $G_{\mathbb{Z}_{m}}(\mathbb{Z}_{n})$ to be weakly perfect.
## 1 Introduction
Let $R$ be a commutative ring, and $\mathbb{I}(R)^{*}$ be the set of all non-trivial ideals of $R$. There are many papers on assigning a graph to a ring $R$, for instance see [1–4]. Also the intersection graphs of some algebraic structures such as groups, rings and modules have been studied by several authors, see [3, 6, 8]. In [6], the intersection graph of ideals of $R$, denoted by $G(R)$, was introduced as the graph with vertices $\mathbb{I}(R)^{*}$, and for distinct $I, J \in \mathbb{I}(R)^{*}$, the vertices $I$ and $J$ are adjacent if and only if $I \cap J \neq 0$. Also in [3], the intersection graph of submodules of an $R$-module $M$, denoted by $G(M)$, is defined to be the graph whose vertices are the non-trivial submodules of $M$ and two distinct vertices are adjacent if and only if they have non-zero intersection. In this paper, we generalize $G(R)$ to $G_{M}(R)$, the $M$-intersection graph of ideals of $R$, where $M$ is an $R$-module.
Throughout the paper, all rings are commutative with non-zero identity and all modules are unitary. A module is called a uniform module if the intersection of any two non-zero submodules is non-zero. An $R$-module $M$ is said to be a multiplication module if every submodule of $M$ is of the form $IM$, for some ideal $I$ of $R$. The annihilator of $M$ is denoted by $\mathrm{Ann}(M)$. The module $M$ is called a faithful $R$-module if $\mathrm{Ann}(M) = 0$. By a non-trivial submodule of $M$, we mean a non-zero proper submodule of $M$. Also, $J(R)$ denotes the Jacobson radical of $R$ and $\mathrm{Nil}(R)$ denotes the ideal of all nilpotent elements of $R$. By $\mathrm{Max}(R)$, we denote the set of all maximal ideals of $R$. A ring having only finitely many maximal ideals is said to be a semilocal ring. As usual, $\mathbb{Z}$ and $\mathbb{Z}_{n}$ will denote the integers and the integers modulo $n$, respectively.
A graph in which any two distinct vertices are adjacent is called a complete graph. We denote the complete graph on $n$ vertices by $K_{n}$. A null graph is a graph containing no edges. Let $G$ be a graph. The complement of $G$ is denoted by $\overline{G}$. The set of vertices and the set of edges of $G$ are denoted by $V(G)$ and $E(G)$, respectively. A subgraph $H$ of $G$ is said to be an induced subgraph of $G$ if it has exactly the edges that appear in $G$ over $V(H)$. Also, a subgraph $H$ of $G$ is called a spanning subgraph if $V(H) = V(G)$. Suppose that $v \in V(G)$. We denote by $\deg(v)$ the degree of the vertex $v$ in $G$. A regular graph is a graph where each vertex has the same degree. We recall that a walk between $u$ and $v$ is a sequence $u = v_{0}$ — $v_{1}$ — $\cdots$ — $v_{k} = v$ of vertices of $G$ such that for every $i$ with $1 \leq i \leq k$, the vertices $v_{i-1}$ and $v_{i}$ are adjacent. A path between $u$ and $v$ is a walk between $u$ and $v$ without repeated vertices. We say that $G$ is connected if there is a path between any two distinct vertices of $G$. For vertices $u$ and $v$ of $G$, let $d(u, v)$ be the length of a shortest path from $u$ to $v$ ($d(u, u) = 0$ and $d(u, v) = \infty$ if there is no path between $u$ and $v$). The diameter of $G$, $\mathrm{diam}(G)$, is the supremum of the set $\{d(u, v) : u, v \in V(G)\}$. The girth of $G$, denoted by $\mathrm{gr}(G)$, is the length of a shortest cycle in $G$ ($\mathrm{gr}(G) = \infty$ if $G$ contains no cycles). A clique in $G$ is a set of pairwise adjacent vertices, and the number of vertices in the largest clique of $G$, denoted by $\omega(G)$, is called the clique number of $G$. The chromatic number of $G$, $\chi(G)$, is the minimal number of colors which can be assigned to the vertices of $G$ in such a way that every two adjacent vertices have different colors. A graph $G$ is perfect if for every induced subgraph $H$ of $G$, $\chi(H) = \omega(H)$. Also, $G$ is called weakly perfect if $\chi(G) = \omega(G)$.
In the next section, we introduce the $M$-intersection graph of ideals of $R$, denoted by $G_{M}(R)$, where $R$ is a commutative ring and $M$ is a non-zero $R$-module. It is shown that for every multiplication $R$-module $M$, $\mathrm{diam}(G_{M}(R)) \leq 2$ and $\mathrm{gr}(G_{M}(R)) \in \{3, \infty\}$. Among other results, we prove that if $M$ is a faithful $R$-module and $\omega(G_{M}(R))$ is finite, then $R$ is a semilocal ring. In the last section, we consider the $\mathbb{Z}_{m}$-intersection graph of ideals of $\mathbb{Z}_{n}$, denoted by $G_{\mathbb{Z}_{m}}(\mathbb{Z}_{n})$, where $n, m \geq 2$ are integers and $\mathbb{Z}_{m}$ is a $\mathbb{Z}_{n}$-module. We show that $G_{\mathbb{Z}_{m}}(\mathbb{Z}_{n})$ is a perfect graph if and only if $n$ has at most four distinct prime divisors. Furthermore, we derive a sufficient condition for $G_{\mathbb{Z}_{m}}(\mathbb{Z}_{n})$ to be weakly perfect. As a corollary, it is shown that the intersection graph of ideals of $\mathbb{Z}_{n}$ is weakly perfect, for every integer $n \geq 2$.
## 2 The M-intersection graph of ideals of R
In this section, we introduce the $M$-intersection graph of ideals of $R$ and study its basic properties.
Definition. Let $R$ be a commutative ring and $M$ be a non-zero $R$-module. The $M$-intersection graph of ideals of $R$, denoted by $G_{M}(R)$, is the graph with vertices $\mathbb{I}(R)^{*}$, and two distinct vertices $I$ and $J$ are adjacent if and only if $IM \cap JM \neq 0$.
Clearly, if $M$ is regarded as a module over itself, that is, $M = R$, then the $M$-intersection graph of ideals of $R$ is exactly the same as the intersection graph of ideals of $R$, $G(R)$. Also, if $M$ and $N$ are two isomorphic $R$-modules, then $G_{M}(R)$ is the same as $G_{N}(R)$.
###### Example 1
. Let $R = \mathbb{Z}_{12}$. Then we have the following graphs.

[Figure: four graphs on the vertex set $\{(2), (3), (4), (6)\}$ of non-trivial ideals of $\mathbb{Z}_{12}$. The first has edges $(2)$–$(3)$, $(2)$–$(4)$, $(2)$–$(6)$ and $(3)$–$(6)$; the second is a null graph; the third has the single edge $(2)$–$(4)$; the fourth is a triangle on $(2)$, $(3)$, $(6)$ with $(4)$ isolated.]
###### Example 2
. Let $n$ be an integer. If $l$ is the least common multiple of two distinct integers $d$ and $e$, then $d\mathbb{Z}_n\cap e\mathbb{Z}_n=l\mathbb{Z}_n$. Thus $d\mathbb{Z}_n$ and $e\mathbb{Z}_n$ are adjacent in $G(\mathbb{Z}_n)$ if and only if $n$ does not divide $l$.
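For instance (a worked check of the criterion above): take $n=12$ with divisors $d\in\{2,3,4,6\}$, as in Example 1. Then $\mathrm{lcm}(2,3)=6$ and $12\nmid 6$, so $2\mathbb{Z}_{12}$ and $3\mathbb{Z}_{12}$ are adjacent, while $\mathrm{lcm}(3,4)=12$ and $12\mid 12$, so $3\mathbb{Z}_{12}$ and $4\mathbb{Z}_{12}$ are not adjacent; this agrees with the first graph of Example 1.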
###### Example 3
. Let $p$ be a prime number and $n,s$ be two positive integers. If $p^s$ divides $n$, then $n\mathbb{Z}$ is an isolated vertex of $G_{\mathbb{Z}_{p^s}}(\mathbb{Z})$. Therefore, since $\mathbb{Z}_{p^s}$ is a uniform $\mathbb{Z}$-module, $G_{\mathbb{Z}_{p^s}}(\mathbb{Z})$ is a disjoint union of an infinite complete graph and its complement. Also, $\mathbb{Z}_{p^\infty}$ (the quasi-cyclic $p$-group) is a uniform $\mathbb{Z}$-module and $n\mathbb{Z}_{p^\infty}=\mathbb{Z}_{p^\infty}$, for every positive integer $n$. Hence $G_{\mathbb{Z}_{p^\infty}}(\mathbb{Z})$ is an infinite complete graph.
###### Remark 1
. Obviously, if $M$ is a faithful multiplication $R$-module, then $G_M(R)$ is a complete graph if and only if $M$ is a uniform $R$-module.
###### Remark 2
. Let $R$ be a commutative ring and let $M$ be a non-zero $R$-module.
1. If $M$ is a faithful $R$-module, then $G(R)$ is a spanning subgraph of $G_M(R)$. To see this, suppose that $I$ and $J$ are adjacent vertices of $G(R)$. Then $I\cap J\neq 0$ implies that $(I\cap J)M\neq 0$ and so $IM\cap JM\neq 0$. Therefore $I$ is adjacent to $J$ in $G_M(R)$.
2. If $M$ is a multiplication $R$-module, then $G(M)$, the intersection graph of submodules of $M$, is an induced subgraph of $G_M(R)$. Note that for each non-trivial submodule $N$ of $M$, there is a non-trivial ideal $I$ of $R$ such that $N=IM$ and so we can assign $I$ to $N$. Also, $IM$ is adjacent to $JM$ in $G(M)$ if and only if $IM\cap JM\neq 0$, that is, if and only if $I$ is adjacent to $J$ in $G_M(R)$.
###### Theorem 1
. Let $R$ be a commutative ring and let $M$ be a faithful $R$-module. If $G_M(R)$ is not connected, then $M$ is a direct sum of two $R$-modules.
###### Proof.
Suppose that $G_1$ and $G_2$ are two distinct components of $G_M(R)$. Let $I\in V(G_1)$ and $J\in V(G_2)$. Since $M$ is a faithful $R$-module, $I,J\neq 0$ implies that $IM\neq 0$ and $JM\neq 0$. Now if $I+J\neq R$, then $I$ — $I+J$ — $J$ is a path between $I$ and $J$, a contradiction. Thus $I+J=R$ and so $M=IM\oplus JM$.
The next theorem shows that for every multiplication $R$-module $M$, the diameter of $G_M(R)$ has only four possibilities.
###### Theorem 2
. Let $R$ be a commutative ring and $M$ be a multiplication $R$-module. Then $\mathrm{diam}(G_M(R))\in\{0,1,2,\infty\}$.
###### Proof.
Assume that $G_M(R)$ is a connected graph with at least two vertices. So $M$ is a faithful module. If there is a non-trivial ideal $I$ of $R$ such that $IM=M$, then $I$ is adjacent to all other vertices. Hence $\mathrm{diam}(G_M(R))\le 2$. Otherwise, we claim that $G(M)$ is connected. Let $N$ and $N'$ be two distinct vertices of $G(M)$. Since $M$ is a multiplication module, $N=IM$ and $N'=JM$, for some non-trivial ideals $I$ and $J$ of $R$. Suppose that $I$ — $K_1$ — $\cdots$ — $J$ is a path between $I$ and $J$ in $G_M(R)$. Therefore, $N=IM$ — $K_1M$ — $\cdots$ — $JM=N'$ is a walk between $N$ and $N'$. Thus, we conclude that there is also a path between $N$ and $N'$ in $G(M)$. The claim is proved. So by [3, Theorem 2.4], $\mathrm{diam}(G(M))\le 2$. Now, suppose that $I$ and $J$ are two distinct vertices of $G_M(R)$. If $IM\neq JM$, then $IM$ and $JM$ are two distinct vertices of $G(M)$. Hence there exists a non-trivial submodule $N$ of $M$ which is adjacent to both $IM$ and $JM$ in $G(M)$. Since $M$ is a multiplication module, $N=KM$, for some non-trivial ideal $K$ of $R$. Thus $K$ is adjacent to both $I$ and $J$ in $G_M(R)$. Therefore $\mathrm{diam}(G_M(R))\le 2$.
###### Theorem 3
. Let $R$ be a commutative ring and $M$ be a multiplication $R$-module. If $G_M(R)$ is a connected regular graph of finite degree, then $G_M(R)$ is a complete graph.
###### Proof.
Suppose that $G_M(R)$ is a connected regular graph of finite degree. If $G_M(R)$ has exactly one vertex, then $G_M(R)=K_1$ is complete. So assume that $G_M(R)$ has at least two vertices. We claim that $M$ is an Artinian module. Suppose to the contrary that $M$ is not an Artinian module. Then there is a strictly descending chain $I_1M\supsetneq I_2M\supsetneq\cdots$ of submodules of $M$, where the $I_i$'s are non-trivial ideals of $R$. This implies that $\deg(I_1)$ is infinite, a contradiction. The claim is proved. Therefore $M$ has at least one minimal submodule. To complete the proof, it suffices to show that $M$ contains a unique minimal submodule. By contrary, suppose that $N_1$ and $N_2$ are two distinct minimal submodules of $M$. Hence $N_1=I_1M$ and $N_2=I_2M$, where $I_1$ and $I_2$ are two non-trivial ideals of $R$. Since $N_1\cap N_2=0$, $I_1$ and $I_2$ are not adjacent. By Theorem 2, there is a vertex $K$ which is adjacent to both $I_1$ and $I_2$. So both $N_1$ and $N_2$ are contained in $KM$. Thus each vertex adjacent to $I_1$ is adjacent to $K$ too. This implies that $\deg(K)>\deg(I_1)$, a contradiction.
Also, the following theorem shows that for every multiplication $R$-module $M$, the girth of $G_M(R)$ has only two possibilities.
###### Theorem 4
. Let $R$ be a commutative ring and $M$ be a multiplication $R$-module. Then $\mathrm{gr}(G_M(R))\in\{3,\infty\}$.
###### Proof.
Suppose that $I_1$ — $I_2$ — $I_3$ — $\cdots$ — $I_n$ — $I_1$ is a cycle of length $n$ in $G_M(R)$. If $n=3$, we are done. Thus assume that $n\ge 4$. Since $I_1M\cap I_2M\neq 0$ and $M$ is a multiplication module, we have $I_1M\cap I_2M=KM$, where $K$ is a non-zero ideal of $R$. If $K$ is a proper ideal of $R$ and $K\neq I_1,I_2$, then $I_1$ — $K$ — $I_2$ — $I_1$ is a triangle in $G_M(R)$. Otherwise, we conclude that $I_1M\subseteq I_2M$ or $I_2M\subseteq I_1M$. Similarly, we can assume that $I_iM\subseteq I_{i+1}M$ or $I_{i+1}M\subseteq I_iM$, for every $i$, $1\le i\le 3$. Without loss of generality suppose that $I_1M\subseteq I_2M$. Now, if $I_2M\subseteq I_3M$, then $I_1$ — $I_2$ — $I_3$ — $I_1$ is a cycle of length 3 in $G_M(R)$. Therefore assume that $I_3M\subseteq I_2M$. Since $I_3M\subseteq I_4M$ or $I_4M\subseteq I_3M$, $I_2$ — $I_3$ — $I_4$ — $I_2$ is a triangle in $G_M(R)$. Hence if $G_M(R)$ contains a cycle, then $\mathrm{gr}(G_M(R))=3$.
###### Lemma 1
. Let $R$ be a commutative ring and $M$ be a non-zero $R$-module. If $I$ is an isolated vertex of $G_M(R)$, then the following hold:
1. $I$ is a maximal ideal of $R$ or $IM=0$.
2. If $IM\neq 0$, then $JM=0$, for every non-trivial ideal $J\subsetneq I$.
###### Proof.
There is a maximal ideal $m$ of $R$ such that $I\subseteq m$. Assume that $I\neq m$. Then we have $IM=IM\cap mM=0$, since $I$ is an isolated vertex. So $IM=0$.
Suppose that $IM\neq 0$ and $JM\neq 0$, for some non-trivial ideal $J\subsetneq I$. Since $I$ is an isolated vertex, we have $IM\cap JM=0$ and so $JM=IM\cap JM=0$, a contradiction. Thus $JM=0$.
###### Theorem 5
. Let $R$ be a commutative ring and $M$ be a faithful $R$-module. If $G_M(R)$ is a null graph, then it has at most two vertices and $R$ is isomorphic to one of the following rings:
1. $F_1\times F_2$, where $F_1$ and $F_2$ are fields;
2. $F[x]/(x^2)$, where $F$ is a field;
3. $S$, where $S$ is a coefficient ring of characteristic $p^2$, for some prime number $p$.
###### Proof.
By Lemma 1, every non-trivial ideal of $R$ is maximal and so by [10, Theorem 1.1], $R$ cannot have more than two different non-trivial ideals. Thus $G_M(R)$ has at most two vertices. Also, by [11, Theorem 4], $R$ is isomorphic to one of the mentioned rings.
In the next theorem we show that if $M$ is a faithful $R$-module and $\omega(G_M(R))<\infty$, then $R$ is a semilocal ring.
###### Theorem 6
. Let $R$ be a commutative ring and $M$ be a faithful $R$-module. If $\omega(G_M(R))$ is finite then $|\mathrm{Max}(R)|\le\omega(G_M(R))$ and $J(R)$ is a nil ideal.
###### Proof.
First we prove that $|\mathrm{Max}(R)|\le\omega(G_M(R))$. Let $\omega(G_M(R))=n$. By contradiction, assume that $m_1,\dots,m_{n+1}$ are distinct maximal ideals of $R$. We know that $m_iM\cap m_jM\neq 0$, for every $i\neq j$, $1\le i,j\le n+1$. Otherwise, $m_iM\cap m_jM=0$, for some $i\neq j$. So $(m_i\cap m_j)M=0$ and, since $M$ is faithful, $m_i\cap m_j=0$; hence by the Prime Avoidance Theorem [5, Proposition 1.11], we have $m_l=m_i$ or $m_l=m_j$, for some $l\neq i,j$, which is impossible. This implies that $\{m_1,\dots,m_{n+1}\}$ is a clique in $G_M(R)$, a contradiction. Thus $|\mathrm{Max}(R)|\le n$.
Now, we prove that $J(R)$ is a nil ideal. By contrary, suppose that $x\in J(R)$ is not nilpotent. Since $x^iM\neq 0$, for every $i\ge 1$, and $\omega(G_M(R))$ is finite, we conclude that $x^iR=x^jR$, for some integers $i<j$. Hence $x^i=x^jy$, for some $y\in R$. Since $x\in J(R)$, $1-x^{j-i}y$ is a unit. This yields that $x^i=0$, a contradiction. The proof is complete.
## 3 The Zn-intersection graph of ideals of Zm
Let $m$ and $n$ be two integers and let $\mathbb{Z}_n$ be a $\mathbb{Z}_m$-module. In this section we study the $\mathbb{Z}_n$-intersection graph of ideals of the ring $\mathbb{Z}_m$. Also, we generalize some results given in [9]. For abbreviation, we denote $G_{\mathbb{Z}_n}(\mathbb{Z}_m)$ by $G_n(\mathbb{Z}_m)$. Clearly, $\mathbb{Z}_n$ is a $\mathbb{Z}_m$-module if and only if $n$ divides $m$.
Throughout this section, without loss of generality, we assume that $m=p_1^{\alpha_1}\cdots p_k^{\alpha_k}$ and $n=p_1^{\beta_1}\cdots p_k^{\beta_k}$, where the $p_i$'s are distinct primes, the $\alpha_i$'s are positive integers, the $\beta_i$'s are non-negative integers, and $\beta_i\le\alpha_i$ for $i=1,\dots,k$. Let $[k]=\{1,2,\dots,k\}$ and let $\mathcal{P}([k])$ denote the set of all subsets of $[k]$. The cardinality of a set $A$ is denoted by $|A|$. For two integers $a$ and $b$, we write $a\mid b$ ($a\nmid b$) if $a$ divides $b$ ($a$ does not divide $b$).
First we have the following remarks.
###### Remark 3
. It is easy to see that $\gcd(d,n)$ divides $n$ and $d\mathbb{Z}_n=\gcd(d,n)\mathbb{Z}_n$, for every divisor $d$ of $m$. Let $\mathbb{Z}_n$ be a $\mathbb{Z}_m$-module. If $n\mid d$, then $d\mathbb{Z}_m$ is an isolated vertex of $G_n(\mathbb{Z}_m)$. Obviously, $d\mathbb{Z}_m$ and $e\mathbb{Z}_m$ are adjacent if and only if $n\nmid\mathrm{lcm}(d,e)$. This implies that $G_n(\mathbb{Z}_m)$ is a subgraph of $G(\mathbb{Z}_m)$.
###### Remark 4
. Let $\mathbb{Z}_n$ be a $\mathbb{Z}_m$-module and $d$ be a divisor of $m$ with $1<d<m$. We set $A_d=\{i\in[k]:p_i^{\beta_i}\nmid d\}$. Clearly, two vertices $d\mathbb{Z}_m$ and $e\mathbb{Z}_m$ are adjacent if and only if $A_d\cap A_e\neq\emptyset$. Suppose that $C$ is a clique of $G_n(\mathbb{Z}_m)$. Then $\{A_d : d\mathbb{Z}_m\in C\}$ is an intersecting family of subsets of $[k]$. (A family of sets is intersecting if any two of its sets have a non-empty intersection.) Also, if $\mathcal{F}$ is an intersecting family of subsets of $[k]$, then the set $C_\mathcal{F}$ of all vertices $d\mathbb{Z}_m$ with $A_d\in\mathcal{F}$, if non-empty, is a clique of $G_n(\mathbb{Z}_m)$. Thus the cliques of $G_n(\mathbb{Z}_m)$ correspond to intersecting families of subsets of $[k]$.
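As a sanity check of this encoding (using the set $A_d$ as reconstructed above), take $m=n=30=2\cdot 3\cdot 5$, so that $p_1=2$, $p_2=3$, $p_3=5$ and $\beta_i=1$ for each $i$. Then $A_2=\{2,3\}$, $A_6=\{3\}$ and $A_{10}=\{2\}$. Since $A_2\cap A_6=\{3\}\neq\emptyset$, the vertices $2\mathbb{Z}_{30}$ and $6\mathbb{Z}_{30}$ are adjacent (indeed $30\nmid\mathrm{lcm}(2,6)=6$), while $A_6\cap A_{10}=\emptyset$, so $6\mathbb{Z}_{30}$ and $10\mathbb{Z}_{30}$ are not adjacent (indeed $30\mid\mathrm{lcm}(6,10)=30$).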
Now, we provide a lower bound for the clique number of $G_n(\mathbb{Z}_m)$.
###### Theorem 7
. Let be a -module. Then
###### Proof.
Suppose that . With the notations of the previous remark, let . Then is an intersecting family of subsets of and so is a clique of . Clearly, . Therefore and hence the result holds.
Clearly, if $k=1$, then equality holds in the previous theorem. Also, if $m$ has only two distinct prime divisors, that is, $k=2$, then again equality holds. So the lower bound is sharp.
###### Example 4
. Let , where are distinct primes. Thus and . It is easy to see that and . Also, . Let , for . Hence , for . If , then . Therefore .
By the strong perfect graph theorem, we determine the values of $m$ and $n$ for which $G_n(\mathbb{Z}_m)$ is a perfect graph.
###### Theorem A
. (The Strong Perfect Graph Theorem [7]) A finite graph $G$ is perfect if and only if neither $G$ nor its complement $\overline{G}$ contains an induced odd cycle of length at least $5$.
###### Theorem 8
. Let $\mathbb{Z}_n$ be a $\mathbb{Z}_m$-module. Then $G_n(\mathbb{Z}_m)$ is perfect if and only if $n$ has at most four distinct prime divisors.
###### Proof.
First suppose that and , where ’s are distinct primes and ’s are positive integers. Let , , , , and . Now, assume that , for . Hence — — — — — is an induced cycle of length 5 in . So by Theorem A, is not a perfect graph.
Conversely, suppose that is not a perfect graph. Then by Theorem A, we have the following cases:
Case 1. — — — — — is an induced cycle of length 5 in . Let , for . So and , for . Let and , for . Clearly, are distinct and thus .
Case 2. — — — — — is an induced path of length 5 in . Let , for . So , for . Let , for . Clearly, are distinct and hence .
Case 3. There is an induced cycle of length 5 in . So contains an induced cycle of length 5 and by Case 1, we are done.
Case 4. — — — — — is an induced path of length 5 in . Since , and , we may assume that , where and , for some distinct . Similarly, we find that , for some distinct and also . Now, since and , we deduce that .
###### Corollary 1
. The graph $G(\mathbb{Z}_n)$ is perfect if and only if $n$ has at most four distinct prime divisors.
In the next theorem, we derive a sufficient condition for $G_n(\mathbb{Z}_m)$ to be weakly perfect.
###### Theorem 9
. Let be a -module. If for each , then is weakly perfect.
###### Proof.
Let be a non-empty subset of and . As we mentioned in Remark 4, if is non-empty, then is a clique of . Also, the vertices of (if ) are adjacent to all non-isolated vertices. Suppose that and are two non-empty subsets of and . Since for each , so . This implies that and hence .
Let be an intersecting family of subsets of and . Let . We show that or . Assume that . So there is such that . Thus and hence . We claim that , for each . Suppose to the contrary, and . If and , then . So we have . Let . Then is an intersecting family of subsets of and , a contradiction. The claim is proved.
Now, we show that has a proper -vertex coloring. First we color all vertices of with different colors. Next we color each family of vertices out of with colors of vertices of . Note that if , then and . Suppose that and are two adjacent vertices of . Thus . Without loss of generality, one can assume . So we deduce that and . Therefore, and have different colors. Thus and hence .
As an immediate consequence of the previous theorem, we have the next result.
###### Corollary 2
. The graph $G(\mathbb{Z}_n)$ is weakly perfect, for every integer $n$.
In the case that the hypothesis of Theorem 9 holds, we determine the exact value of $\omega(G_n(\mathbb{Z}_m))$. It is exactly the lower bound obtained in Theorem 7.
###### Theorem 10
. Let be a -module. If for each , then .
###### Proof.
Let be a proper subset of . Then and hence . Also, the vertices of (if ) are adjacent to all non-isolated vertices and . Clearly if is an intersecting family of subsets of , then . Moreover, if and , then . Thus by Theorem 9, .
###### Corollary 3
. Let , where ’s are distinct primes. Then .
Problem. Let $\mathbb{Z}_n$ be a $\mathbb{Z}_m$-module. Is it true that $G_n(\mathbb{Z}_m)$ is always a weakly perfect graph?
### Footnotes
1. Keywords: Intersection graph, perfect graph, clique number, chromatic number, diameter, girth.
2010 Mathematics Subject Classification: 05C15, 05C17, 05C69, 13A99, 13C99.
### References
1. S. Akbari, F. Heydari, The regular graph of a noncommutative ring, Bull. Aust. Math. Soc., 89 (2014), 132–140.
2. S. Akbari, S. Khojasteh, Commutative rings whose cozero-divisor graphs are unicyclic or of bounded degree, Comm. Algebra, 42 (2014), 1594–1605.
3. S. Akbari, H. A. Tavallaee, S. Khalashi Ghezelahmad, Intersection graph of submodules of a module, J. Algebra Appl., 11 (2012), Article No. 1250019.
4. D. F. Anderson, A. Badawi, The total graph of a commutative ring, J. Algebra, 320 (2008), 2706–2719.
5. M. F. Atiyah, I. G. Macdonald, Introduction to Commutative Algebra, Addison-Wesley Publishing Company, 1969.
6. I. Chakrabarty, S. Ghosh, T. K. Mukherjee, M. K. Sen, Intersection graphs of ideals of rings, Discrete Math., 309 (2009), 5381–5392.
7. M. Chudnovsky, N. Robertson, P. Seymour, R. Thomas, The strong perfect graph theorem, Ann. Math., 164 (2006), 51–229.
8. B. Csákány, G. Pollák, The graph of subgroups of a finite group, Czechoslovak Math. J., 19 (1969), 241–247.
9. R. Nikandish, M. J. Nikmehr, The intersection graph of ideals of $\mathbb{Z}_n$ is weakly perfect, Utilitas Mathematica, to appear.
10. F. I. Perticani, Commutative rings in which every proper ideal is maximal, Fund. Math., 71 (1971), 193–198.
11. J. Reineke, Commutative rings in which every proper ideal is maximal, Fund. Math., 97 (1977), 229–231. |
This post could be subtitled: because I only have half a brain. But fortunately:
1. first, it was not my computer but my son’s computer (and I don’t give a s*** about his computer, which mainly contains video games);
2. second, after half a sleepless night, I finally succeeded in fixing this problem.
## Episode 1: the accident
My son’s computer is equipped with xUbuntu 14.04 LTS and the LVM volume manager, and I am naive enough to let him manage it by himself. Unfortunately, he does not read his mom’s blog, and when he encountered the very simple problem described in this post, he just stopped upgrading his OS. I am the best mom ever, so I started to fix this issue and cleaned up a bit of the mess he had generated. I just used one unnecessary star; never, ever run the following command line:
sudo apt-get remove --purge linux-image-3.13.*-generic linux-headers-3.13.*-generic
because it will remove all linux kernels and images. I just wanted to remove the older ones but was too quick writing the command line and forgot to put the critical 0-4 before the star… Ok, that’s brainless (I am a woman, it can maybe explain things) and worst of all: I rebooted just after this brilliant action…!!! In this case, what happens is pretty simple: your /boot directory only contains the following files:
memtest86+.bin memtest86+.elf memtest86+_multiboot.bin
and the grub menu then tells your computer to boot directly into the memory test utility.
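With hindsight, a safer pattern (a sketch; the package names are the standard Ubuntu ones) is to list the installed kernels first and then purge only one explicit old version, never a wildcard:

dpkg --list 'linux-image*'

uname -r

sudo apt-get remove --purge linux-image-3.13.0-24-generic linux-headers-3.13.0-24-generic

Here uname -r shows the kernel you are currently running (keep that one!), and the version 3.13.0-24 is only an example to adapt to your own list.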
## Episode 2: a problem never comes alone
I already had a very similar problem during the upgrade of an older linux distribution (see this post for further references). However, in the previous case, I was using standard volume management whereas my son’s computer uses LVM. Hence,
1. booting on an external USB device
2. trying the standard
sudo mount /dev/sda5 /mnt
to mount the main linux partition / gives the following error message:
mount: you must specify the filesystem type
## Episode 3: Best Mom Ever solves the damned problem that she has herself created
So after a few tests, I found the solution which consisted in:
1. booting on an external USB device
2. mounting the LVM volume partition on the USB device’s system with the method described on this page: first, the list of partitions is obtained with:
sudo pvs
which (in my case) gives:
PV VG Fmt Attr PSize PFree
/dev/sda5 xubuntu-vg lvm2 a-- 465.52g 52,00m
and indicates that the volume group to which our physical volume /dev/sda5 belongs is called xubuntu-vg. Then the command
sudo lvdisplay /dev/xubuntu-vg
starts with
LV PATH /dev/xubuntu-vg/root
LV Name root
VG Name xubuntu-vg
and it can thus be mounted with:
sudo mount /dev/xubuntu-vg/root /mnt
3. The purpose is now to chroot into /mnt which contains the computer’s system. Before doing so, a few additional directories have to be mounted:
sudo mount --bind /dev /mnt/dev
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo mount --bind /var/run/dbus /mnt/var/run/dbus
and most importantly,
sudo mount /dev/sda1 /mnt/boot
to mount the /boot directory of your computer before chrooting into it.
4. chrooting into /mnt with
sudo chroot /mnt
To check that everything was OK, I did:
cd /boot
ls
which confirmed that the /boot directory only contained:
memtest86+.bin memtest86+.elf memtest86+_multiboot.bin
At this stage, if /boot is empty, you have failed to mount the /boot directory of your system.
5. Now, simply fix the problem by installing the current linux header and image:
sudo apt-get install linux-image-generic linux-headers-generic
and everything should be just fine at the next reboot.
# Homework Help: Feynman rules & diagram for phi^3 theory
1. Mar 10, 2009
### SkeZa
I'm taking a course in Introduction to QFT and I'm stuck on a problem.
I'm hoping someone here could point me in the right direction or say if my assumptions are incorrect.
1. The problem statement, all variables and given/known data
Derive the Feynman rules and all diagrams at tree-level for $$\lambda \phi^3$$ theory using Wick's theorem.
My own questions:
To what n do you take the n-point correlation function $$\tau(x_1,x_2,...,x_n)$$?
How does one draw a simple tree-level Feynman diagram, depending on whether you know the interaction term $$L_{int}$$ or $$H'_I$$?
Is there something I haven't understood yet, or is there something I'm forgetting about Feynman's rules and diagrams?
2. Relevant equations
The correlation function:
$$\tau(x_1,x_2,...,x_n) = \langle 0 | T\{\phi_{in}(x_1)\phi_{in}(x_2)...\phi_{in}(x_n)\exp(-i\int H'_I(t')dt')\} | 0 \rangle$$
to all orders in pertubation theory and where
$$H'_I = \int d^3 x \lambda \phi^3 / 3!$$
3. The attempt at a solution
I've read (in Peskin & Schroeder) that higher than $$n = 2$$ correlation functions have to be solved using brute force (which I don't understand) and thought that it has to be $$n = 2$$, otherwise it's beyond the scope of the course.
For that case, however, at first order of perturbation in $$\lambda$$, there is an odd number of $$\phi$$'s and therefore they can't give any contributions due to normal-ordering. Right?
There is, however, an even number of $$\phi$$'s at second order in the perturbation.
Do I have to go to second order in $$\lambda$$ to get the diagrams?
As for the rules: I've understood that the zeroth order in perturbation just gives the propagator (for $$n = 2$$) and that there has to be a 3-way vertex (instead of the 4-way vertex of $$\lambda \phi^4$$ theory). Should they also have the same "value" when constructing the $$M$$ matrix?
The diagrams drawn for $$\lambda \phi^4$$ theory in Peskin & Schroeder's "An Introduction to Quantum Field Theory" have straight 2-point lines (with some loops); how does one go from these to a tree-level diagram?
I'd also appreciate any tips on books that deal with $$\phi^3$$ theory.
Last edited: Mar 10, 2009
2. Mar 11, 2009
### weejee
Srednicki treats phi^3 theory.
3. Mar 11, 2009
### LongLiveYorke
I'm not an expert, but since no one else is answering, I'll throw in my two cents:
For this theory, it's clear what Feynman diagrams will look like. You can draw lines and you can draw three lines meeting at a point. So, we can pretty easily see what diagrams contribute at a particular level of perturbation theory. Of course, the real hard part is coming up with rules for these diagrams, which is your question.
As to your question about the correlation function, I believe that you'll only need to calculate the 2-point function. This is just the propagator for a free theory which comes from the non-perturbative part of the Lagrangian. The fact that the theory has 3-point vertices doesn't mean that we need to calculate a three particle correlation function. It means that what we eventually want to do is to use Wick's theorem to turn long chains of fields into sums of correlation functions (which is just the idea of Feynman diagrams).
As to your concern about phi^3 giving you an odd number of fields which normal order to 0: Remember, some of the fields will be used to destroy or create initial or final state particles. So, if we have an initial state containing one particle and a final state containing two particles, we can connect these states using the phi^3 interaction term. What we would want to calculate would be
$<i| \lambda \phi \phi \phi |f>$
with
$|i> = \phi^+ |0>$
$|f> = \phi^+ \phi^+ |0>$
(this is just schematic and I haven't included any indices or anything)
But this doesn't vanish because one of the phi's is used to destroy the particle in the initial state and the other two phi's are used to destroy the particles in the final state (or however you want to think about it). So, the odd number of phi's are used to deal with the initial and final state particle wavefunctions. This term ends up not even having a propagator. If we were to calculate a higher order diagram, we would first use up any available phi's to cancel out initial and final states, and then we would group remaining phi's together to form correlation functions (two point propagators), and those would give us loops, etc.
Does this make sense?
4. Mar 11, 2009
### LongLiveYorke
Yeah, he covers it in chapters 9,10 and lists Feynman rules. If you're just starting out with QFT, I think Srednicki may be a bit awkward in his notation since he includes a lot of factors early on that anticipate renormalization (there are a lot of Z's floating around that may confuse you at first, and he talks about counter terms which you may not need in your problem)
5. Mar 11, 2009
### SkeZa
Those were my thoughts at the beginning. I later found a reference in a book by F. Gross, but he's including the charged $$\Phi$$ K-G fields as well.
In the book, he explains that the first-order perturbation (for 2 points) describes the decay of the fields. Second order would then describe scattering (2 $$\phi$$ fields scatter into one $$\phi$$ field which then decays into two fields). This could then be a tree-level diagram. What I haven't understood is how you determine whether it's an "s", "t" and/or "u" (Mandelstam variables) channel tree-level diagram.
I'm assuming that they will be similar to the rules for $$\lambda \phi^4$$ theory except that you will have a 3-way vertex instead of a 4-way vertex. But I could be wrong.
Would rules like "Divide by a potential symmetry factor" or "Impose 4-momentum conservation at every vertex" for $$\lambda \phi^4$$ theory be different for $$\lambda \phi^3$$ theory?
I hadn't considered that. I was under the assumption that they had to act on the ground state of the free theory ( $$\left|0\right\rangle$$ ).
Thanks.
I've looked into Srednicki's book. His methods are different from what we've talked about in class. This is only an introductory course, and Srednicki's methods may be a little too advanced. For example: we haven't discussed path integrals for field theory.
6. Mar 11, 2009
### LongLiveYorke
I think it you have the rules for phi^4 theory, the rules for phi^3 are nearly identical, and the only difference is the type of diagrams that you are allowed to draw. The initial and final state wave functions will be exactly the same and the propagators will be exactly the same. If you're doing everything at tree level (no loops), it should be easy to go from phi-3 to phi-4. Yes, you'll still have to deal with symmetry factors and such if they apply to a particular diagram.
7. Mar 11, 2009
### SkeZa
That's a relief.
At least I don't have to "reinvent" Feynman's rules.
But I'm still confused as to which channel(s) the diagrams correspond to.
For actual particles one gets clues as to which it might be, but how does one figure it out for a field $$\phi$$?
Last edited: Mar 11, 2009
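For reference, a minimal sketch of how the channels show up at tree level (assuming a single real scalar of mass $$m$$, with one factor $$-i\lambda$$ per vertex and one propagator per internal line; this sketch is not from the thread itself): the 2 → 2 amplitude is the sum of the three diagrams,

$$i\mathcal{M} = (-i\lambda)^2\left[\frac{i}{s-m^2}+\frac{i}{t-m^2}+\frac{i}{u-m^2}\right],$$

where $$s=(p_1+p_2)^2$$, $$t=(p_1-p_3)^2$$ and $$u=(p_1-p_4)^2$$ are the Mandelstam variables. The channel label just records which pair of external momenta flows through the internal line, so for a single field $$\phi$$ all three channels appear symmetrically.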
8. Mar 13, 2009
### Ol49
Hey guys!
I'm working on the same problem right now (are you a classmate, SkeZa?)
Anyway, I read in Gross that only rule 1 for Feynman diagrams is theory-dependent. This can be found both in the appendix of Gross's book and in his derivation of the $$\phi^3$$ theory (chapter 9.4, introduction to Feynman rules, where the $$\phi^3$$ theory is used as a "simple" example for deriving the rules).
So the only rule that has to be changed is that the vertex term is ($$-i\lambda$$). The problem is that I don't see how to derive this from Wick's theorem...
Last edited: Mar 13, 2009
# Green's functions and Hadamard parametrices for vector and tensor fields in general linear covariant gauges
Markus B. Fröb, Mojtaba Taslimi Tehrani
August 01, 2017
We determine the Green's functions and Hadamard parametrices in curved spacetimes for linearized massive and massless gauge bosons and linearized Einstein gravity with a cosmological constant in general linear covariant gauges. These vector and tensor parametrices are more singular than their Feynman/de Donder-gauge counterpart, with the most singular part proportional to $\sigma^{-2}$ and $\sigma^{-3}$, respectively. We also give explicit recursion relations for the Hadamard coefficients, and indicate their generalization to $n$ dimensions. Furthermore, we express the divergence and trace of the vector and tensor Green's functions in terms of derivatives of scalar and vector Green's functions, and show how these relations appear as Ward identities in the free quantum theory.
@article{Frob:2017gez,
    author = "Fröb, Markus B. and Tehrani, Mojtaba Taslimi",
    title = "{Green's functions and Hadamard parametrices for vector and tensor fields in general linear covariant gauges}",
    year = "2017",
    eprint = "1708.00444",
    archivePrefix = "arXiv",
    primaryClass = "gr-qc",
    SLACcitation = "%%CITATION = ARXIV:1708.00444;%%"
}
# RL transient circuit
1. Feb 13, 2013
### freshbox
1. The problem statement, all variables and given/known data
A series R-L transient circuit is connected to a voltage source of E= 40V through a switch as shown. Suppose the switch is closed at time t=0μs and it was observed that the current iL flowing through the inductor rises to 4mA at time t=20μs and remains at 4mA after t=20μs.
(i) Find the R-L time constant τ in μs for the storage phase. Resistor given is 10kΩ
3. The attempt at a solution
iL = (E/R)(1 − e^(−t/τ))
0.004 = (40/10000)(1 − e^(−20×10^−6/τ))
0.004 = 0.004(1 − e^(−20×10^−6/τ)) → 0.004 − 0.004 = 0
0 = −e^(−20×10^−6/τ) → wanted to take ln of both sides, but ln 0 = error
(ii) Calculate the amount of energy Wl in nJ, stored in the inductor after 1 minute.
How do we know 1 min = 60 s ≫ 5 time constants?
#### Attached Files:
• ###### a.jpg
File size:
19 KB
Views:
88
Last edited: Feb 13, 2013
2. Feb 13, 2013
### Staff: Mentor
There's a useful rule of thumb for these exponential curves that essentially states that all the interesting action is over after 5 time constants
3. Feb 13, 2013
### freshbox
I know that after 5 time constants, the inductor will become a short circuit.
But how would i know after 1minute, it's 5 time constant?
And if you don't mind can you help me take a look at my working please.
Thanks gneill.
4. Feb 13, 2013
### cepheid
Staff Emeritus
Yeah, the exponential term has to decay to 0 in order for the current to approach the asymptotic value of E/R. This happens when the exponent "blah" is large enough that e^-blah ≈ 0. (To put it loosely: (-20 μs)/τ → ln 0 ≈ -infinity.) So the time of 20 μs is much, much larger than a time constant, so that the exponent will be much larger than 1 in magnitude, and you have many factors of "e" of decay. Typically in these engineering classes they give you a "rule of thumb" that says that you can consider the exponential to have effectively decayed to zero after n time constants. I'm inferring from your comment in part ii that your class is taking n = 5 as your rule of thumb. If that's true, you can solve for tau by assuming that the 20 μs is 5 time constants. Without such a rule of thumb, you can't really solve for a specific value of tau; all you can say is that tau is much smaller than 20 μs.
EDIT: scooped by gneill!
5. Feb 13, 2013
### freshbox
Then how do I determine whether the time given by the question (for example 10 s, 20 s or 30 s, etc.) is after 5 tau? It's not clear at all. In this instance he just says "after 1 min", but actually 1 min >> 5 time constants
Last edited: Feb 13, 2013
6. Feb 13, 2013
### Staff: Mentor
Part (ii) of the question follows part (i) In part (i) you determined the time constant, and it must be less than 20 μs. One minute is 60 million microseconds, which is much more than 5 times 20 μs. In fact, anything over a tenth of a millisecond is guaranteed to be more than $5\tau$.
7. Feb 13, 2013
### freshbox
OIC. BTW, where have I gone wrong in my working?
8. Feb 13, 2013
### cepheid
Staff Emeritus
Agreed, except for this part. It should say, "which is much more than 20 μs", since it is 5τ that equals 20 μs, not τ. Right?
9. Feb 13, 2013
### Staff: Mentor
I used 5 times 20 μs since 20 μs is a given upper bound for τ (although not a least upper bound) and 5τ is the settling time. But sure, your phrasing is arguably more accurate.
10. Feb 13, 2013
### Staff: Mentor
You're trying to find $\tau$ such that e^(−20×10^−6/τ) = 0. But the function $e^{-x}$ only approaches zero asymptotically as x goes to infinity. You can't find τ exactly by this method. Use the "excitement's over after 5τ" rule of thumb instead.
11. Feb 13, 2013
### freshbox
You mean part (i) is actually after 5 time constants already?
12. Feb 13, 2013
### Staff: Mentor
"(i) Find the R-L time constant τ in μs for the storage phase. Resistor given is 10kΩ"
You're told that the current reaches essentially steady state after 20 μs. So τ must be less than 20 μs. In fact, it's about 5 times smaller than 20 μs.
13. Feb 13, 2013
### freshbox
Which part of the question says that the circuit is at steady state?
14. Feb 13, 2013
### Staff: Mentor
"Suppose the switch is closed at time t=0μs and it was observed that the current iL flowing through the inductor rises to 4mA at time t=20μs and remains at 4mA after t=20μs."
15. Feb 13, 2013
### freshbox
OK, here's another question. A series RC charging circuit is connected to a voltage source of E = 80 V through a switch as shown. Assume there was no charge initially stored in the capacitor. Suppose the switch is closed at time t = 0 s and it was observed that the voltage vc across the capacitor rises to 80 V after time t = 25 ms.
For this question, it did not say that remains at xxx after t=xxxx. So how would i know it is after 5 time constant (steady state) ?
***Ah, OK, I understand now: is it because initially there was no charge, and since the battery is 80 V, after 25 ms it's fully charged, hence steady state?
16. Feb 13, 2013
### Staff: Mentor
Because E = 80V and it reached 80V after 25 ms. 80V is as high as it can go; the steady state.
17. Feb 13, 2013
thanks :) |
SLAC Publication SLAC-PUB-15001
SLAC Release Date: June 6, 2012
Time Evolution of Electric Fields in CDMS Detectors
Leman, S. W.
The Cryogenic Dark Matter Search (CDMS) utilizes large mass, 3" diameter x 1" thick target masses as particle detectors. The target is instrumented with both phonon and ionization sensors, the latter providing a $\sim$1 V cm$^{-1}$ electric field in the detector bulk. Cumulative radiation exposure which creates $\sim 200\times 10^6$ electron-hole pairs is sufficient to produce a comparable reverse field in the detector, thereby degrading the ionization channel performance. To study this, the existing CDMS detector Monte Carlo has been modified to allow for an event by event evolution of the bulk electric field, in three spatial dimensions. Our most recent results and interpretation are discussed.
# Probability
Probability is a numerical description of how likely an event is to occur or how likely it is that a proposition is true. Probability is a number between 0 and 1, where, roughly speaking, 0 indicates impossibility and 1 indicates certainty. [note 1] [1] [2] The higher the probability of an event, the more likely it is that the event will occur. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).
These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in such areas of study as mathematics, statistics, finance, gambling, science (in particular physics), artificial intelligence/machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems. [3]
## Interpretations
When dealing with experiments that are random and well-defined in a purely theoretical setting (like tossing a fair coin), probabilities can be numerically described by the number of desired outcomes divided by the total number of all outcomes. For example, tossing a fair coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents possess different views about the fundamental nature of probability:
1. Objectivists assign numbers to describe some objective or physical state of affairs. The most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome, when repeating the experiment. This interpretation considers probability to be the relative frequency "in the long run" of outcomes. [4] A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once.
2. Subjectivists assign numbers per subjective probability, i.e., as a degree of belief. [5] The degree of belief has been interpreted as, "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E." [6] The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some (subjective) prior probability distribution. These data are incorporated in a likelihood function. The product of the prior and the likelihood, normalized, results in a posterior probability distribution that incorporates all the information known to date. [7] By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions regardless of how much information the agents share. [8]
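The head-head computation above is easy to check numerically; a minimal Monte Carlo sketch in Python (the seed and trial count are arbitrary choices):

```python
import random

random.seed(42)  # arbitrary seed, for reproducibility

def toss() -> str:
    """One fair coin toss."""
    return random.choice("HT")

trials = 100_000
head_head = sum(1 for _ in range(trials) if toss() == "H" and toss() == "H")
print(head_head / trials)  # close to the exact value 1/4 = 0.25
```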
## Etymology
The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which, in contrast, is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference. [9]
## History
The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability for millennia, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by the superstitions of gamblers. [10]
According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." [11] However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence. [12]
The earliest known forms of probability and statistics were developed by Middle Eastern mathematicians studying cryptography between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels. Al-Kindi (801–873) made the earliest known use of statistical inference in his work on cryptanalysis and frequency analysis. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis. [13]
The sixteenth century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes [14] ). Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. [15] Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. [16] See Ian Hacking's The Emergence of Probability [9] and James Franklin's The Science of Conjecture [17] for histories of the early development of the very concept of mathematical probability.
The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. [18] The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.
The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774 and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error, disregarding sign. The second law of error was proposed in 1778 by Laplace and stated that the frequency of the error is an exponential function of the square of the error. [19] The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old." [19]
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). [20] In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,
${\displaystyle \phi (x)=ce^{-h^{2}x^{2}},}$
where ${\displaystyle h}$ is a constant depending on precision of observation, and ${\displaystyle c}$ is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.
In the nineteenth century authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion, and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.
Andrey Markov introduced [21] the notion of Markov chains (1906), which played an important role in stochastic processes theory and its applications. The modern theory of probability based on the measure theory was developed by Andrey Kolmogorov (1931). [22]
On the geometric side (see integral geometry) contributors to The Educational Times were influential (Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin). [23]
## Theory
Like other theories, the theory of probability is a representation of its concepts in formal terms—that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain.
There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see probability space), sets are interpreted as events and probability itself as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (that is, not further analyzed) and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the laws of probability as usually understood.
## Applications
Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis (Reliability theory of aging and longevity), and financial regulation.
A good example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily very rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict. [24]
In addition to financial assessment, probability can be used to analyze trends in biology (e.g. disease spread) as well as ecology (e.g. biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play. [25]
The discovery of rigorous methods to assess and combine probability assessments has changed society. [26]
Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty. [27]
The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.
## Mathematical treatment
Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.
A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that if you look at a collection of mutually exclusive events (events with no common results, e.g., the events {1,6}, {3}, and {2,4} are all mutually exclusive), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events. [28]
The probability of an event A is written as ${\displaystyle P(A)}$, ${\displaystyle p(A)}$, or ${\displaystyle {\text{Pr}}(A)}$. [29] This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.
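A small sketch of this setup in Python (the helper names `events` and `prob` are illustrative, not a standard API): the sample space of a die, its power set as the collection of events, and the classical probability of the odd-number event {1,3,5}:

```python
from fractions import Fraction
from itertools import chain, combinations

sample_space = {1, 2, 3, 4, 5, 6}

def events(space):
    """Every subset of the sample space, i.e. its power set."""
    items = sorted(space)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def prob(event):
    """Classical probability: favourable outcomes over total outcomes."""
    return Fraction(len(set(event) & sample_space), len(sample_space))

print(prob({1, 3, 5}))                       # 1/2
print(sum(1 for _ in events(sample_space)))  # 64 = 2**6 possible events
```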
The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as ${\displaystyle {\overline {A}},A^{\complement },\neg A}$, or ${\displaystyle {\sim }A}$; its probability is given by P(not A) = 1 − P(A). [30] As an example, the chance of not rolling a six on a six-sided die is 1 – (chance of rolling a six) ${\displaystyle =1-{\tfrac {1}{6}}={\tfrac {5}{6}}}$. See Complementary event for a more complete treatment.
If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as ${\displaystyle P(A\cap B)}$.
### Independent events
If two events, A and B are independent then the joint probability is
${\displaystyle P(A{\mbox{ and }}B)=P(A\cap B)=P(A)P(B),\,}$
for example, if two coins are flipped the chance of both being heads is ${\displaystyle {\tfrac {1}{2}}\times {\tfrac {1}{2}}={\tfrac {1}{4}}}$. [31]
### Mutually exclusive events
If either event A or event B but never both occurs on a single performance of an experiment, then they are called mutually exclusive events.
If two events are mutually exclusive then the probability of both occurring is denoted as ${\displaystyle P(A\cap B)}$.
${\displaystyle P(A{\mbox{ and }}B)=P(A\cap B)=0}$
If two events are mutually exclusive then the probability of either occurring is denoted as ${\displaystyle P(A\cup B)}$.
${\displaystyle P(A{\mbox{ or }}B)=P(A\cup B)=P(A)+P(B)-P(A\cap B)=P(A)+P(B)-0=P(A)+P(B)}$
For example, the chance of rolling a 1 or 2 on a six-sided die is ${\displaystyle P(1{\mbox{ or }}2)=P(1)+P(2)={\tfrac {1}{6}}+{\tfrac {1}{6}}={\tfrac {1}{3}}.}$
### Not mutually exclusive events
If the events are not mutually exclusive then
${\displaystyle P\left(A{\hbox{ or }}B\right)=P(A\cup B)=P\left(A\right)+P\left(B\right)-P\left(A{\mbox{ and }}B\right).}$
For example, when drawing a single card at random from a regular deck of cards, the chance of getting a heart or a face card (J,Q,K) (or one that is both) is ${\displaystyle {\tfrac {13}{52}}+{\tfrac {12}{52}}-{\tfrac {3}{52}}={\tfrac {11}{26}}}$, because of the 52 cards of a deck 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards" but should only be counted once.
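The same arithmetic can be reproduced with exact fractions; a quick sketch:

```python
from fractions import Fraction

p_heart = Fraction(13, 52)
p_face = Fraction(12, 52)
p_both = Fraction(3, 52)   # J, Q, K of hearts, counted once

print(p_heart + p_face - p_both)  # 11/26
```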
### Conditional probability
Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written ${\displaystyle P(A\mid B)}$, and is read "the probability of A, given B". It is defined by [32]
${\displaystyle P(A\mid B)={\frac {P(A\cap B)}{P(B)}}.\,}$
If ${\displaystyle P(B)=0}$ then ${\displaystyle P(A\mid B)}$ is formally undefined by this expression. However, it is possible to define a conditional probability for some zero-probability events using a σ-algebra of such events (such as those arising from a continuous random variable).
For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is ${\displaystyle 1/2}$; however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken, such as, if a red ball was taken, the probability of picking a red ball again would be ${\displaystyle 1/3}$ since only 1 red and 2 blue balls would have been remaining.
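The bag example can be verified by enumerating all ordered draws (the ball names are illustrative):

```python
from fractions import Fraction
from itertools import permutations

balls = ["r1", "r2", "b1", "b2"]            # 2 red and 2 blue balls
draws = list(permutations(balls, 2))        # ordered draws without replacement

first_red = [d for d in draws if d[0].startswith("r")]
both_red = [d for d in first_red if d[1].startswith("r")]

print(Fraction(len(first_red), len(draws)))     # P(first red) = 1/2
print(Fraction(len(both_red), len(first_red)))  # P(second red | first red) = 1/3
```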
### Inverse probability
In probability theory and applications, Bayes' rule relates the odds of event ${\displaystyle A_{1}}$ to event ${\displaystyle A_{2}}$, before (prior to) and after (posterior to) conditioning on another event ${\displaystyle B}$. The odds on ${\displaystyle A_{1}}$ to event ${\displaystyle A_{2}}$ is simply the ratio of the probabilities of the two events. When arbitrarily many events ${\displaystyle A}$ are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood, ${\displaystyle P(A|B)\propto P(A)P(B|A)}$ where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as ${\displaystyle A}$ varies, for fixed or given ${\displaystyle B}$ (Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005). See Inverse probability and Bayes' rule.
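"Posterior is proportional to prior times likelihood" is a one-liner to implement; the two-hypothesis coin below is a made-up illustration, not an example from the article:

```python
from fractions import Fraction

# Hypotheses: the coin is fair, or it is two-headed; equal prior belief.
prior = {"fair": Fraction(1, 2), "two-headed": Fraction(1, 2)}
# Likelihood of observing heads under each hypothesis.
likelihood = {"fair": Fraction(1, 2), "two-headed": Fraction(1, 1)}

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # fair: 1/3, two-headed: 2/3
```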
### Summary of probabilities
| Event | Probability |
| --- | --- |
| A | ${\displaystyle P(A)\in [0,1]}$ |
| not A | ${\displaystyle P(A^{\complement })=1-P(A)}$ |
| A or B | ${\displaystyle P(A\cup B)=P(A)+P(B)-P(A\cap B)}$; equals ${\displaystyle P(A)+P(B)}$ if A and B are mutually exclusive |
| A and B | ${\displaystyle P(A\cap B)=P(A\mid B)P(B)=P(B\mid A)P(A)}$; equals ${\displaystyle P(A)P(B)}$ if A and B are independent |
| A given B | ${\displaystyle P(A\mid B)={\frac {P(A\cap B)}{P(B)}}={\frac {P(B\mid A)P(A)}{P(B)}}}$ |
## Relation to randomness and probability in quantum mechanics
In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon), (but there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. know them). In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness and roundness of the ball, variations in hand speed during the turning and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically the order of magnitude of the Avogadro constant 6.02×10^23) that only a statistical description of its properties is feasible.
Probability theory is required to describe quantum phenomena. [33] A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with probabilities of observing, the outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". [34] Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality. [35] In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.
## Notes
1. Strictly speaking, a probability of 0 indicates that an event almost never takes place, whereas a probability of 1 indicates than an event almost certainly takes place. This is an important distinction when the sample space is infinite. For example, for the continuous uniform distribution on the real interval [5, 10], there are an infinite number of possible outcomes, and the probability of any given outcome being observed — for instance, exactly 7 — is 0. This means that when we make an observation, it will almost surely not be exactly 7. However, it does not mean that exactly 7 is impossible. Ultimately some specific outcome (with probability 0) will be observed, and one possibility for that specific outcome is exactly 7.
## Related Research Articles
Frequentist probability or frequentism is an interpretation of probability; it defines an event's probability as the limit of its relative frequency in many trials. Probabilities can be found by a repeatable objective process. This interpretation supports the statistical needs of many experimental scientists and pollsters. It does not support all needs, however; gamblers typically require estimates of the odds without experiments.
The word probability has been used in a variety of ways since it was first applied to the mathematical study of games of chance. Does probability measure the real, physical tendency of something to occur or is it a measure of how strongly one believes it will occur, or does it draw on both these elements? In answering such questions, mathematicians interpret the probability values of probability theory.
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event.
In probability theory and statistics, a probability distribution is a mathematical function that provides the probabilities of occurrence of different possible outcomes in an experiment. In more technical terms, the probability distribution is a description of a random phenomenon in terms of the probabilities of events. For instance, if the random variable X is used to denote the outcome of a coin toss, then the probability distribution of X would take the value 0.5 for X = heads, and 0.5 for X = tails. Examples of random phenomena can include the results of an experiment or survey.
A statistical model is a mathematical model that embodies a set of statistical assumptions concerning the generation of sample data. A statistical model represents, often in considerably idealized form, the data-generating process.
A statistical hypothesis, sometimes called confirmatory data analysis, is a hypothesis that is testable on the basis of observing a process that is modeled via a set of random variables. A statistical hypothesis test is a method of statistical inference. Commonly, two statistical data sets are compared, or a data set obtained by sampling is compared against a synthetic data set from an idealized model. An alternative hypothesis is proposed for the statistical-relationship between the two data-sets, and is compared to an idealized null hypothesis that proposes no relationship between these two data-sets. This comparison is deemed statistically significant if the relationship between the data-sets would be an unlikely realization of the null hypothesis according to a threshold probability—the significance level. Hypothesis tests are used when determining what outcomes of a study would lead to a rejection of the null hypothesis for a pre-specified level of significance.
In statistics, the likelihood function measures the goodness of fit of a statistical model to a sample of data for given values of the unknown parameters. It is formed from the joint probability distribution of the sample, but viewed and used as a function of the parameters only, thus treating the random variables as fixed at the observed values.
In probability theory and statistics, Bayes' theorem describes the probability of an event, based on prior knowledge of conditions that might be related to the event. For example, if the probability that someone has cancer is related to their age, then using Bayes' theorem the age can be used to more accurately assess the probability of cancer than can be done without knowledge of the age.
In the theory of probability and statistics, a Bernoulli trial is a random experiment with exactly two possible outcomes, "success" and "failure", in which the probability of success is the same every time the experiment is conducted. It is named after Jacob Bernoulli, a 17th-century Swiss mathematician, who analyzed them in his Ars Conjectandi (1713).
Odds are a numerical expression, usually expressed as a pair of numbers, used in both gambling and statistics. In statistics, the odds for or odds of some event reflect the likelihood that the event will take place, while odds against reflect the likelihood that it will not. In gambling, the odds are the ratio of payoff to stake, and do not necessarily reflect exactly the probabilities. Odds are expressed in several ways, and sometimes the term is used incorrectly to mean simply the probability of an event. Conventionally, gambling odds are expressed in the form "X to Y", where X and Y are numbers, and it is implied that the odds are odds against the event on which the gambler is considering wagering.
Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability where probability expresses a degree of belief in an event. The degree of belief may be based on prior knowledge about the event, such as the results of previous experiments, or on personal beliefs about the event. This differs from a number of other interpretations of probability, such as the frequentist interpretation that views probability as the limit of the relative frequency of an event after many trials.
In information theory, the information content, self-information, surprisal, or Shannon information is a basic quantity derived from the probability of a particular event occurring from a random variable. It can be thought of as an alternative way of expressing probability, much like odds or log-odds, but which has particular mathematical advantages in the setting of information theory.
Mathematical statistics is the application of probability theory, a branch of mathematics, to statistics, as opposed to techniques for collecting statistical data. Specific mathematical techniques which are used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure theory.
In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. It is also sometimes called the double exponential distribution, because it can be thought of as two exponential distributions spliced together back-to-back, although the term is also sometimes used to refer to the Gumbel distribution. The difference between two independent identically distributed exponential random variables is governed by a Laplace distribution, as is a Brownian motion evaluated at an exponentially distributed random time. Increments of Laplace motion or a variance gamma process evaluated over the time scale also have a Laplace distribution.
The history of statistics in the modern sense begins with the term statistics, coined in 1749 in Germany, although the interpretation of the word has changed over time. The development of statistics is intimately connected on the one hand with the development of sovereign states, particularly European states following the Peace of Westphalia (1648), and on the other hand with the development of probability theory, which put statistics on a firm theoretical basis.
Statistical proof is the rational demonstration of degree of certainty for a proposition, hypothesis or theory that is used to convince others subsequent to a statistical test of the supporting evidence and the types of inferences that can be drawn from the test scores. Statistical methods are used to increase the understanding of the facts and the proof demonstrates the validity and logic of inference with explicit reference to a hypothesis, the experimental data, the facts, the test, and the odds. Proof has two essential aims: the first is to convince and the second is to explain the proposition through peer and public review.
In probability theory, conditional probability is a measure of the probability of an event occurring given that another event has occurred. If the event of interest is A and the event B is known or assumed to have occurred, "the conditional probability of A given B", or "the probability of A under the condition B", is usually written as P(A | B), or sometimes PB(A) or P(A / B). For example, the probability that any given person has a cough on any given day may be only 5%. But if we know or assume that the person has a cold, then they are much more likely to be coughing. The conditional probability that someone who is coughing is unwell might then be 75%: P(Cough) = 5%; P(Sick | Cough) = 75%.
Probability has a dual aspect: on the one hand the likelihood of hypotheses given the evidence for them, and on the other hand the behavior of stochastic processes such as the throwing of dice or coins. The study of the former is historically older, appearing for example in the law of evidence, while the mathematical treatment of dice began with the work of Cardano, Pascal and Fermat between the 16th and 17th centuries.
An Essay towards solving a Problem in the Doctrine of Chances is a work on the mathematical theory of probability by the Reverend Thomas Bayes, published in 1763, two years after its author's death, and containing multiple amendments and additions due to his friend Richard Price. The title comes from the contemporary use of the phrase "doctrine of chances" to mean the theory of probability, which had been introduced via the title of a book by Abraham de Moivre. Contemporary reprints of the Essay carry a more specific and significant title: A Method of Calculating the Exact Probability of All Conclusions founded on Induction.
## References
1. "Kendall's Advanced Theory of Statistics, Volume 1: Distribution Theory", Alan Stuart and Keith Ord, 6th Ed, (2009), ISBN 978-0-534-24312-8.
2. William Feller, An Introduction to Probability Theory and Its Applications, (Vol 1), 3rd Ed, (1968), Wiley, ISBN 0-471-25708-7.
3. "Probability Theory". The Britannica website.
4. Hacking, Ian (1965). The Logic of Statistical Inference. Cambridge University Press. ISBN 978-0-521-05165-1.
5. Finetti, Bruno de (1970). "Logical foundations and measurement of subjective probability". Acta Psychologica. 34: 129–145. doi:10.1016/0001-6918(70)90012-0.
6. Hájek, Alan (21 October 2002). Edward N. Zalta (ed.). "Interpretations of Probability". The Stanford Encyclopedia of Philosophy (Winter 2012 ed.). Retrieved 22 April 2013.
7. Hogg, Robert V.; Craig, Allen; McKean, Joseph W. (2004). Introduction to Mathematical Statistics (6th ed.). Upper Saddle River: Pearson. ISBN 978-0-13-008507-8.
8. Jaynes, E.T. (2003). "Section 5.3 Converging and diverging views". In Bretthorst, G. Larry (ed.). Probability Theory: The Logic of Science (1 ed.). Cambridge University Press. ISBN 978-0-521-59271-0.
9. Hacking, I. (2006) The Emergence of Probability: A Philosophical Study of Early Ideas about Probability, Induction and Statistical Inference, Cambridge University Press, ISBN 978-0-521-68557-3
10. Freund, John. (1973) Introduction to Probability. Dickenson ISBN 978-0-8221-0078-2 (p. 1)
11. Jeffrey, R.C., Probability and the Art of Judgment, Cambridge University Press. (1992). pp. 54–55 . ISBN 0-521-39459-7
12. Franklin, J. (2001) The Science of Conjecture: Evidence and Probability Before Pascal, Johns Hopkins University Press. (pp. 22, 113, 127)
13. Broemeling, Lyle D. (1 November 2011). "An Account of Early Statistical Inference in Arab Cryptology". The American Statistician. 65 (4): 255–257. doi:10.1198/tas.2011.10191.
14. Abrams, William, A Brief History of Probability, Second Moment, retrieved 23 May 2008
15. Ivancevic, Vladimir G.; Ivancevic, Tijana T. (2008). Quantum leap : from Dirac and Feynman, across the universe, to human body and mind. Singapore ; Hackensack, NJ: World Scientific. p. 16. ISBN 978-981-281-927-7.
16. Franklin, James (2001). The Science of Conjecture: Evidence and Probability Before Pascal. Johns Hopkins University Press. ISBN 978-0-8018-6569-5.
17. Shoesmith, Eddie (November 1985). "Thomas Simpson and the arithmetic mean". Historia Mathematica. 12 (4): 352–355. doi:10.1016/0315-0860(85)90044-8.
18. Wilson EB (1923) "First and second laws of error". Journal of the American Statistical Association, 18, 143
19. Seneta, Eugene William. ""Adrien-Marie Legendre" (version 9)". StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies. Archived from the original on 3 February 2016. Retrieved 27 January 2016.
20. Weber, Richard. "Markov Chains" (PDF). Statistical Laboratory. University of Cambridge.
21. Vitanyi, Paul M.B. (1988). "Andrei Nikolaevich Kolmogorov". CWI Quarterly (1): 3–18. Retrieved 27 January 2016.
22. Wilcox, Rand R. Understanding and applying basic statistical methods using R. Hoboken, New Jersey. ISBN 978-1-119-06140-3. OCLC 949759319.
23. Singh, Laurie (2010) "Whither Efficient Markets? Efficient Market Theory and Behavioral Finance". The Finance Professionals' Post, 2010.
24. Gao, J.Z.; Fong, D.; Liu, X. (April 2011). "Mathematical analyses of casino rebate systems for VIP gambling". International Gambling Studies. 11 (1): 93–106. doi:10.1080/14459795.2011.552575.
25. "Data: Data Analysis, Probability and Statistics, and Graphing". archon.educ.kent.edu. Retrieved 28 May 2017.
26. Gorman, Michael F. (2010). "Management Insights". Management Science. 56: iv–vii. doi:10.1287/mnsc.1090.1132.
27. Ross, Sheldon M. (2010). A First course in Probability (8th ed.). Pearson Prentice Hall. pp. 26–27. ISBN 9780136033134.
28. Olofsson (2005) p. 8.
29. Olofsson (2005), p. 9
30. Olofsson (2005) p. 35.
31. Olofsson (2005) p. 29.
32. Burgin, Mark (2010). "Interpretations of Negative Probabilities". arXiv preprint.
33. "Jedenfalls bin ich überzeugt, daß der Alte nicht würfelt." ("At any rate, I am convinced that He does not play dice.") Letter to Max Born, 4 December 1926, in: Einstein/Born Briefwechsel 1916–1955.
34. Moore, W.J. (1992). Schrödinger: Life and Thought. Cambridge University Press. p. 479. ISBN 978-0-521-43767-7.
## Bibliography
• Kallenberg, O. (2005) Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New York. 510 pp. ISBN 0-387-25115-4
• Kallenberg, O. (2002) Foundations of Modern Probability, 2nd ed. Springer Series in Statistics. 650 pp. ISBN 0-387-95313-2
• Olofsson, Peter (2005) Probability, Statistics, and Stochastic Processes, Wiley-Interscience. 504 pp ISBN 0-471-67969-0. |
I was recently trying to explain PID controllers to someone and realized that I didn’t have a very good intuitive understanding of what they’re useful for and how they work. When looking around the web, I had trouble finding a straightforward explainer. So in this post, I’ll give (hopefully) simple answers to some basic questions that I had about PID controllers. Since this post got a bit long, here’s a table of contents.
What is a PID controller?
A PID controller is a way to solve problems with the following formulation:
• You can change some input to the system, called the control variable (the thing you directly manipulate)
• You have a sensor which monitors something about the system; its measurement is called the process variable
• You want the sensor measurement to be close to some target value, called the set point
The PID controller is a good way to decide what the input to the system should be without knowing anything about the internal workings of the system, except that the change in output is roughly proportional to the input.
Example Use Cases
3D Printing
When running a 3D printer, you want the nozzle end to be at a specific temperature. You control the temperature by regulating the voltage through the hot end - higher voltage makes the temperature go up, whereas if you turn it off entirely the temperature will go down (sometimes helped by a fan next to the extruder). You likely want to change the temperature during the course of printing, and you want it to reach the target temperature as quickly as possible. Changing the voltage affects the rate of change of the temperature rather than the temperature itself.
Vehicle Control
When driving a car, you regulate the speed by controlling how much to open the throttle. Opening the throttle will cause the car to accelerate, while closing it will cause the car to decelerate. You want the car to reach some set speed as quickly as possible. Changing the throttle changes the acceleration, not the velocity, but the variable you care about controlling is the velocity.
Medicine
When giving vasopressors to a patient in a hospital, you want them to reach some target blood pressure. After injecting a particular amount, the patient’s blood pressure will go up or down. Changing that amount will change the rate of change of their blood pressure. You want to reach the target blood pressure as quickly as possible without overshooting.
How do PID controllers work?
To answer this question, I think the best way is to start off with an easy-to-understand controller, then add on top of it until we get to the final PID formulation.
A Simple Controller
You could write a control rule like this:
• If the sensor measurement is too low, set the system input to “positive” (try to make the sensor measurement higher)
• If the sensor measurement is too high, set the system input to “negative” (try to make the sensor measurement lower)
However, if the system has inertia (in other words, a delay between the change in input and the change in output), then this control algorithm will start oscillating as you repeatedly undershoot and overshoot. Inertia can happen in lots of different ways and is common in most systems that you would actually want to control. An improvement to this could be to scale the input relative to the error, so that as your error gets smaller, you decrease your input.
\begin{aligned} \text{error} & = \text{target} - \text{sensor} \\ \text{input} & = K_p * \text{error} \end{aligned}
In this case, we introduce an additional scaling constant $K_p$, which relates the size of the error to the size of the input. For example, in the case of our 3D printer, our error is the difference between the target temperature and the observed temperature, while our input is the voltage, so we need to convert from degrees Celsius to volts somehow.
We can try these out with some sample numbers to demonstrate the idea. Suppose our target temperature is $200 \deg$ and our current temperature is $175 \deg$. We can use some scaling constant $K_p = 0.1 V / \deg$.
\begin{aligned} \text{error} & = 200 \deg - 175 \deg = 25 \deg \\ \text{input} &= (0.1 V / \deg) * 25 \deg = 2.5 V \end{aligned}
After some time, the temperature increases to $190 \deg$.
\begin{aligned} \text{error} & = 200 \deg - 190 \deg = 10 \deg \\ \text{input} &= (0.1 V / \deg) * 10 \deg = 1 V \end{aligned}
As expected, our input is smaller than when the error was larger, since we want to make the temperature delta smaller as we get closer to our target temperature.
Let’s say sometime later the system has gotten hotter, and now our temperature reading is $240 \deg$.
\begin{aligned} \text{error} & = 200 \deg - 240 \deg = -40 \deg \\ \text{input} &= (0.1 V / \deg) * -40 \deg = -4 V \end{aligned}
As expected, now that we’ve overshot our target temperature, we need to supply an input in the opposite direction. Note that the input voltage is relative to some zero point, since we can’t actually have a negative input voltage.
Using this Controller
Here’s a simple program to simulate a 3D printer nozzle. Note that the heater simulator is only a very loose approximation to the behavior of the actual heater, and can stand in for any black box system that you might want to use a PID controller for.
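Here is a minimal sketch of the idea: a crude heater model driven by the proportional rule above. This is not the original script; the script name, the physical constants, and every flag except --inertia and --v-offset (which are mentioned below) are illustrative assumptions.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--kp", type=float, default=0.1)        # K_p, in V/deg (assumed flag)
parser.add_argument("--inertia", type=int, default=20)      # input delay, in time steps
parser.add_argument("--v-offset", type=float, default=0.0)  # bias added to the input voltage
args = parser.parse_args()

target, temp, ambient = 200.0, 25.0, 25.0  # set point and starting temperature, in deg C
heat_per_volt, cooling = 0.5, 0.01         # made-up heater physics
pipeline = [0.0] * args.inertia            # the input takes `inertia` steps to reach the element

for step in range(2001):
    error = target - temp                      # error = target - sensor
    voltage = args.kp * error + args.v_offset  # input = K_p * error
    pipeline.append(voltage)
    applied = pipeline.pop(0)                  # the delayed input models the system's inertia
    temp += heat_per_volt * applied - cooling * (temp - ambient)
    if step % 200 == 0:
        print(f"t={step:4d}  temp={temp:7.2f}")
```

Saved as, say, heater_sim.py, it can be run with python heater_sim.py --kp 0.1 --inertia 20.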
You can try running this script yourself to see how playing with different parts of the system affects the temperature curves. In particular, --v-offset and --inertia are interesting parameters to play with.
The resulting temperature curves for the built-in configuration show how the response changes with $K_p$.
Proportional, Integral, Derivative
PID stands for “proportional, integral, derivative” and is a way to address some issues with the above model. Namely, there are two issues that we want to address:
1. Undershoot: Our input is too weak, and the output isn’t changing quickly enough in response to a change in input
2. Overshoot: Our input is too strong, and the output is changing too quickly
Among the temperature curves above, the $K_p = 0.8$ overshoots the most, while the $K_p = 0.05$ undershoots the most.
Integral Control
Consider the case in which we are undershooting our target value. We can detect that we’re undershooting if the error is accumulating too fast - in other words, our error isn’t going down fast enough. We can add another term to the controller which takes this into account, using the integral of the error.
$\text{input} = K_i \int \text{error}_t \, d \text{ time}$
We can approximate this by keeping track of our running error:
$\text{input} \approx K_i \sum_{t = 0}^{T} \text{error}_t \, \Delta \text{time}$
This controller can work on its own, and will correct for undershooting. However, by itself it will naturally oscillate, because it has to accumulate error on the opposite side of the target value in order to start heading in the opposite direction.
Here are a few temperature curves for an undershooting proportional controller, with different integral controller coefficients.
Derivative Control
Consider the case in which we are overshooting our target value. We can detect that we’re about to overshoot if the error is getting smaller too fast. We can add another term to the controller which takes this into account, using the derivative of the error. The desired behavior is to decrease the input if the error is getting smaller too quickly, and increase the input if the error is getting smaller too slowly. This can be expressed as a function of the derivative of the error:
$\text{input} = K_d \frac{d \, \text{error}}{d \, \text{time}}$
We can approximate this by keeping track of our past error:
$\text{input} \approx K_d \frac{\text{error} - \text{prev error}}{\Delta \, \text{time}}$
This controller won’t work on its own, because it responds only to changes in the error; if the error holds steady, it produces no input at all. The power of this controller is to help correct for the overshooting behavior of our original controller.
Here are a few temperature curves for an overshooting proportional controller, with different derivative controller coefficients.
Updating our Controller
We can put together each of our controllers into the final PID controller formulation shown below:
\begin{aligned} \text{input} = \; & K_p * \text{error} \\ + \; & K_i * \int \text{error}_t \, d \, \text{time} \\ + \; & K_d * \frac{d \, \text{error}}{d \, \text{time}} \end{aligned}
A Python implementation for this controller can be found below.
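Below is a minimal sketch of the controller itself, matching the formula above (an illustrative implementation, not the post's original code); dt is the simulation time step.

```python
class PID:
    """Proportional-integral-derivative controller (illustrative sketch)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0     # running sum approximating the integral term
        self.prev_error = None  # last error, for the finite-difference derivative

    def update(self, error, dt):
        self.integral += error * dt
        if self.prev_error is None:
            derivative = 0.0    # no derivative estimate on the first sample
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drop-in replacement for the proportional rule in the earlier sketch:
#   controller = PID(kp=0.3, ki=0.01, kd=0.5)
#   voltage = controller.update(target - temp, dt=1.0)
```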
How Can You Make One Yourself?
Now that we’ve figured out the basic formulation for PID controllers, how can we figure out the values for $K_p$, $K_i$ and $K_d$ which give us the best behavior?
There are a few methods for doing this, and it depends a lot on the particular scenario. Some relevant questions:
• Can you run the control algorithm many times, or is it important to run it only a few times?
• Is there a lot of noise in the system, or if you run the same control algorithm with the same parameters will the result be relatively similar each time?
There are many packages and techniques which will figure out these parameters for you, but if you’re in a pinch you can follow the approach below.
1. Choose an objective to minimize
2. Do a grid search over $K_p$, $K_i$ and $K_d$
3. Choose the values of $K_p$, $K_i$ and $K_d$ which minimize that objective
Define the Objective to Minimize
For most PID controllers, you care about making the output reach the set point as quickly as possible, without overshoot. Different applications can tolerate different amounts of overshoot and undershoot, so the metric might vary. However, in this application I’m going to choose a simple error function which optimizes for both quickly reaching the target and not overshooting, by taking the absolute error when the output is less than the set point and the squared error when the output is greater than the set point.
\begin{aligned} \text{error}_t & = \text{set point} - \text{output}_t \\ L & = \int_0^T \begin{cases} \text{error}_t^2 & \text{if } \text{error}_t < 0 \\ \text{error}_t & \text{otherwise} \end{cases} \, dt \end{aligned}
Do a Grid Search over $K_p$, $K_i$ and $K_d$
I’ve included a script which can be used for sweeping different PID configurations for our original simulation.
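As a sketch of what such a sweep can look like, here are the error measurement and the grid search. It assumes a simulate(kp, ki, kd) helper (not shown) that runs the heater simulation above and returns the list of observed temperatures.

```python
import itertools

def loss(trace, set_point=200.0):
    """Absolute error below the set point, squared error above it."""
    total = 0.0
    for temp in trace:
        error = set_point - temp
        total += error**2 if error < 0 else error  # penalize overshoot quadratically
    return total

grid = {"kp": [0.05, 0.1, 0.2, 0.4, 0.8],
        "ki": [0.0, 0.005, 0.01],
        "kd": [0.0, 0.5, 1.0, 2.0]}
best = min(itertools.product(grid["kp"], grid["ki"], grid["kd"]),
           key=lambda params: loss(simulate(*params)))
print("best (kp, ki, kd):", best)
```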
Running the sweep while varying $K_p$ for different values of $K_d$ gives the following plot of the error curves:
As this graph shows, the ideal value of $K_p$ (the lowest point in the error curve) increases as we increase our value for $K_d$. This makes sense intuitively; by increasing the derivative term, the temperature will not shoot up as quickly for the same value of $K_p$, but we can safely use a higher $K_p$ without worrying about overshooting.
Choose the values of $K_p$, $K_i$ and $K_d$ which minimize that objective
Now that we’ve looked at a few different configurations, we can just choose the configuration which minimizes our objective.
We can plot the temperature curve associated with this best configuration by re-running the simulation with those values.
It looks reasonable, and definitely better than our original curve.
We can do a much more careful job and get a better curve, but this is pretty reasonable for our toy problem. In fact, for the default parameters, we can just make $K_p$ and $K_d$ really large and get very close to an ideal curve. It’s kind of fun to play around with different values for --heat-coeff, --voltage-coeff and --inertia to see how that changes the ideal PID parameters.
# Probability that $S$ of the sets are of a given type?
I have an urn with $N$ red balls and $M$ white balls, total $T=N+M$.
I draw balls without replacement $K$ at a time, where $K|T$, ending up with $T/K$ sets of $K$ balls.
I'd like to calculate the probability distribution for the number of sets that have exactly $K$ red balls.
For example, say there are 10 Red and 20 White, I draw 10 sets of 3 balls each, I'd like to get the PMF for the number of sets of 3 Red balls present among the 10 sets drawn.
I was able to get proper results for very simple cases (like all the red balls ending up in full sets), but I'm at a loss how to generalize this.
• You draw one $K$ set without replacement... OK. Before you draw the second $K$ set, do you put the first $K$ set back, or do you select without replacement all over the process? – zoli Feb 6 '17 at 10:34
• @zoli: No, balls are never replaced. If they were, it becomes super simple - it's just a binomial distribution on the number of sets drawn for probability equal to the hypergeometric probability of getting a complete set of red on a draw. I think inclusion-exclusion might be needed here, added that tag. – HammyTheGreek Feb 6 '17 at 20:52
You can answer this question using logic similar to inclusion/exclusion, but the standard formula does not seem to apply directly. Let's treat all balls as being distinguishable, and let's cast our experiment as drawing all of the balls in sequence, and then grouping the first $K$ balls drawn into a set $S_1$, then the second set of $K$ balls into a set $S_2$, etc., until we have $K' = \frac{T}{K}$ sets $S_i$ for $i \in \{1 \ldots K'\} = {\cal I}$. Modeling this way, our sample space is the set of all permutations of $T$ balls, of which there are $T!$ elements.
Define $Z = \lfloor \frac{N}{K}\rfloor$, which is the largest number of groups that can be entirely red, and for any $Y \subset {\cal I}$, let $R_Y$ denote the set of all permutations of the balls such that the set $S_i$ is entirely red if and only if $i \in Y$.
Suppose first that $Y \subset {\cal I}$ is of size $Z$. Then we can count $R_Y$ by assigning red balls to all sets $S_i$ with $i \in Y$, which can be done in $\frac{N!}{(N-ZK)!}$ ways, and then assigning balls to the remaining sets arbitrarily, which can be done in $(T-ZK)!$ ways. Note that no additional sets $S_j$ with $j \notin Y$ can be created, since there are not enough red balls remaining to fully populate a set.
Let's denote this number of permutations in $R_Y$ for a particular set $Y \subset {\cal I}$ of size $Z$ as $${\cal R}(Z)=\frac{N!}{(N-ZK)!}(T-ZK)!$$ Hence the probability of having exactly $Z$ sets of entirely red balls is $\frac{{K' \choose Z}{\cal R}(Z)}{T!}$, since ${\cal R}(Z)$ depends only on the cardinality of $Y$, not the elements of $Y$ themselves.
Now suppose $Y \subset {\cal I}$ has size $Z-1$. We can count the number of permutations that have $S_i$ all red for each $i \in Y$ just like before as $\frac{N!}{(N-(Z-1)K)!}(T-(Z-1)K)!$, but this count includes permutations which have one additional set of balls entirely red. To quantify the overcount, we can pick an additional set $S_j$ to be entirely red in $K'-Z+1$ ways, and then we know that the number of sets which have $S_i$ red for all $i \in Y \cup \{j\}$ is exactly ${\cal R}(Z)$. Letting ${\cal R}(Z-1)$ denote the number of permutations that have $S_i$ entirely red if and only if $i \in Y \subset {\cal I}$, with $|Y| = Z-1$, we have $${\cal R}(Z-1)= \frac{N!}{(N-(Z-1)K)!}(T-(Z-1)K)! - (K'-Z+1){\cal R}(Z)$$
Using the same logic, for any fixed $x$ we can calculate ${\cal R}(x)$ recursively with the formula $${\cal R}(x) = \frac{N!}{(N-xK)!}(T-xK)! - \displaystyle \sum_{i=x+1}^Z {{K'-x} \choose {i-x}}{\cal R}(i)$$ Let $X \subset {\cal I}$ have size $x$, so that ${\cal R}(x)$ is the size of $R_X$. We can populate the sets $S_i$ with red balls for $i \in X$, and distribute the remaining balls randomly in $\frac{N!}{(N-xK)!}(T-xK)!$ ways. But for every superset of $X$ with size $Z$ or less, this count includes permutations with all sets in the superset being entirely red. For each such superset $X'$, there are exactly ${\cal R}(X')$ permutations which have $S_i$ entirely red if and only if $i \in X'$; moreover, none of these permutations are double-counted because of the "if and only if". Thus we just subtract off ${\cal R}(|X'|)$ for each superset $X'$ of $X$ that has size less than or equal to $Z$, proving the formula above.
With this formula in place, we have that the probability of getting exactly $x$ sets of entirely red balls is $${K' \choose x}\frac{{\cal R}(x)}{T!}$$ I used Magma to enumerate some small cases to test out the formula against enumeration, and the results check out. Looking at the test case in the original post, with 30 balls, of which 10 are red and we draw balls three at a time, I find the distribution of $X$, the number of sets consisting of entirely red balls, as $$Pr[X=0] = 0.72026\\Pr[X=1] = 0.26399\\Pr[X=2] = 0.01566\\Pr[X=3] = 0.00008$$
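If you would like to check the recursion numerically, here is a minimal Python sketch (the function name is mine; everything is exact integer arithmetic until the final division):

```python
from math import comb, factorial

def red_set_distribution(N, M, K):
    """PMF of the number of all-red K-sets when N red and M white balls
    are dealt into T/K groups of K, using the recursion for R(x) above."""
    T = N + M
    assert T % K == 0
    Kp = T // K   # K', the number of groups
    Z = N // K    # largest possible number of all-red groups
    R = {}        # R[x]: permutations with S_i all red iff i is in a fixed x-subset
    for x in range(Z, -1, -1):
        total = factorial(N) // factorial(N - x * K) * factorial(T - x * K)
        over = sum(comb(Kp - x, i - x) * R[i] for i in range(x + 1, Z + 1))
        R[x] = total - over
    return {x: comb(Kp, x) * R[x] / factorial(T) for x in range(Z + 1)}

# The test case above: 30 balls, 10 of them red, drawn 3 at a time.
print(red_set_distribution(10, 20, 3))
# {0: 0.72026..., 1: 0.26399..., 2: 0.01566..., 3: 8.38...e-05}
```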
• Nice! I'm working through understanding the derivation, but your work is clear enough I was able to go ahead and implement in Python, works perfectly. Thanks! – HammyTheGreek Feb 9 '17 at 20:38 |
## Organic Chemistry (8th Edition)
Published by Pearson
# Chapter 3 - Structure and Stereochemistry of Alkanes - Problems: Problem 3-17 c
#### Answer
The IUPAC name of the compound is *trans*-1,2-dimethylcyclopropane.
#### Work Step by Step
The compound has 3 carbon atoms in the carbon ring (prop-), so the parent ring is cyclopropane. There are 2 methyl groups attached to the ring, on opposite sides of the ring plane ($trans$). So, the IUPAC name of the compound is *trans*-1,2-dimethylcyclopropane.
In a Lewis structure (also called a Lewis dot diagram), the valence electrons of an atom are shown as dots. The diagram shows the bonding between the atoms of a molecule and any lone electrons that may exist, and it is a simple way to show the configuration of atoms within a molecule. For example, the Lewis structure for Li is Li with one dot to the right of the element. (This is also why nitrogen's Lewis structure has five dots around it while nitrogen's Bohr diagram contains 7: the Lewis structure shows only the five valence electrons.) This section will discuss the rules for writing out Lewis structures correctly.

The octet rule is a chemical rule of thumb that reflects the observation that atoms of main-group elements tend to combine in such a way that each atom has eight electrons in its valence shell, giving it the same electronic configuration as a noble gas. The octet can be made up of an atom's own electrons plus electrons which are donated (ionic bonding) or shared (covalent bonding). An atom continues to form bonds until an octet of electrons is reached. Hydrogen is an exception and needs only 2 electrons, while Na and Cl each need 8. Although it is important to remember that the "magic number" is 8, there are many exceptions to the octet rule. (The second and third rows of transition metals have f electrons, too, but we usually simplify and don't worry about those electrons for electron-counting purposes; we treat them like core electrons, not valence electrons.)

For the first rows in the periodic table, the magic number 8 can be explained from the quantum mechanics of multi-electron atoms. Each electron is described by four quantum numbers, where $m_s$ is the spin quantum number (direction of spin), and by the Pauli exclusion principle all of these numbers cannot be the same for any two electrons in an atom. In the $n = 2$ shell you can have two possible values for $l$, one possible value for $m_l$ when $l = 0$, three possible values for $m_l$ when $l = 1$, and two possible values for $m_s$. This sums to a total of eight possible value vectors. In the $n = 3$ shell, the octet rule also holds for $l < 2$, which gives the same combinatorics all over again. Since the binding energy of an electron decreases with increasing $n$, starting a new shell is energetically unfavorable compared to donating the electron to an atom that has more space in its outermost shell.

Example: Ionic Bonding in NaCl. Chlorine has a high affinity for electrons, and sodium has a low ionization potential. Sodium has one valence electron and chlorine has seven; the two elements react such that the chlorine atom takes the valence electron from the sodium atom, leaving the chlorine atom with one extra electron (and thus negatively charged) and the sodium atom without an electron (and thus positively charged, with oxidation number +1). Due to their opposite charges, the ions attract each other to form an ionic lattice. Sodium chloride, commonly known as salt, is an ionic compound with the chemical formula NaCl, representing a 1:1 ratio of sodium and chloride ions; with molar masses of 22.99 and 35.45 g/mol respectively, 100 g of NaCl contains 39.34 g Na and 60.66 g Cl.

For covalent bonds, atoms share their electrons with each other to satisfy the octet rule, and Lewis theory successfully describes the covalent interactions between nonmetal elements. Nonmetals will readily form covalent bonds with other nonmetals in order to obtain stability, and can form multiple covalent bonds depending on how many valence electrons they possess. When two chlorine atoms covalently bond to form Cl$_2$, each chlorine atom shares the bonding pair of electrons and achieves the electron configuration of the noble gas argon. In many molecules, atoms attain complete octets by sharing more than one pair of electrons between them; for instance, the N-N bond distance in N$_2$ is 1.10 Å, which is appreciably shorter than the average N-N single bond (about 1.47 Å).

It is sometimes possible to write more than one Lewis structure for a substance without violating the octet rule, and then both the octet rule and formal charges need to be considered. In these situations, we can choose the most stable Lewis structure by considering the formal charge on the atoms, which is the difference between the number of valence electrons in the free atom and the number assigned to it in the Lewis structure. Nonbonding electrons are assigned to the atom on which they are located, and bonding electrons are split evenly:

$$\text{formal charge} = \underset{\text{(free atom)}}{\text{valence } e^{-}} - \underset{\text{(atom in Lewis structure)}}{\left( \text{nonbonding } e^{-} + \dfrac{\text{bonding } e^{-}}{2} \right)}$$

The sum of the formal charges on the atoms within a molecule or an ion must equal the overall charge on the molecule or ion. The most favorable Lewis structure has the smallest formal charges (ideally 0 for as many of the atoms in the structure as possible), and negative formal charges tend to come from the more electronegative atoms. In nitrate, for example, the formal charges sum as $0 + (-1) + (-1) + 1 = -1$, which is correct since the overall charge of nitrate is $-1$.

Example: The Chlorate Ion. First, let's find how many valence electrons chlorate has: ClO$_3^-$ has $7\,e^-$ (from Cl) $+\ 3(6)\,e^-$ (from the 3 O atoms) $+\ 1$ (from the total charge of $-1$) $= 26$ valence electrons. Since 6 electrons are used for the bonds, the 20 others become nonbonding electrons that complete the octets: the oxygen atoms' shells fill up with 18 of these electrons, and the other 2 complete chlorine's octet. (Similarly, for SF$_4$: $6 + 4(7) = 34$ valence electrons; the four covalent bonds in the skeleton structure use 8 of them, leaving 26 nonbonding valence electrons.)

Example: Constructing the Lewis structure of the formaldehyde (H$_2$CO) molecule. First, draw out the framework of the molecule. To satisfy the octet of carbon, one of the pairs of electrons on oxygen must be moved to create a double bond with carbon. The hydrogen atoms are each filled up with their two electrons, and both the carbon and the oxygen atoms' octets are filled.

Exercises: (a) Write the Lewis structures of the ions that form when glycine is dissolved in 1 M HCl and in 1 M KOH. (b) Write the Lewis structure of glycine when this amino acid is dissolved in water. (Hint: Consider the relative base strengths of the –NH$_2$ and $-\text{CO}_2^{\;\;-}$ groups.)
# Dynamic Programming vs Memoization
I am having trouble understanding dynamic programming, mainly because of its name. As far as I understand, it's just another name for memoization or for any tricks utilizing memoization.
Am I understanding correctly? Or is DP something else?
• No, memoization is not the major part of Dynamic Programming (DP). Memoization could be considered as an auxiliary tool that often appears in DP. – Apass.Jack Nov 2 '18 at 19:13
• Another name: dynamic tables – kelalaka Nov 2 '18 at 19:13
• stackoverflow.com/questions/6184869/… – Sanghyun Lee Aug 5 at 14:42
Summary: the memoization technique is a routine trick applied in dynamic programming (DP). In contrast, DP is mostly about finding the optimal substructure in overlapping subproblems and establishing recurrence relations.
Warning: a little dose of personal experience is included in this answer.
## Background and Definitions
Memoization means the optimization technique where you store previously computed results, to be reused whenever the same result is needed.

Memoization comes from "memo" (as in memorandum); it is often confused with "memorize".
Dynamic programming (DP) means solving problems recursively by combining the solutions to similar smaller overlapping subproblems, usually using some kind of recurrence relations. (Some people may object to the usage of "overlapping" here. My definition is roughly taken from Wikipedia and Introduction to Algorithms by CLRS.) I will only talk about its usage in writing computer algorithms. Note that an actual implementation of DP might use an iterative procedure.
Why is DP called DP? The word "dynamic" was chosen by its creator, Richard Bellman, to capture the time-varying aspect of the problems, and because it sounded impressive. For the full story, check "How Bellman named dynamic programming?".
Nowadays I would interpret "dynamic" as meaning "moving from smaller subproblems to bigger subproblems". (The word "programming" refers to the use of the method to find an optimal program, as in "linear programming". People like me treat it as in software programming sometimes.)
## Explanations
Why do some people consider they are the same?
It is understandable that Dynamic Programming (DP) is seen as "just another name of memoization or any tricks utilizing memoization". When the examples and the problems presented initially have rather obvious subproblems and recurrence relations, the most impressive and important part of DP seems to be the dramatic speedup achieved by the memoization technique.
In fact, for some time, I had been inclined to equate DP with the memoization technique applied to recursive algorithms, such as the computation of the Fibonacci sequence or the computation of how many ways one can go from the bottom-left corner to the top-right corner of a rectangular grid.
How are DP and memoization different?
The memoization technique is an auxiliary routine trick that improves the performance of DP (when it appears in DP). It appears so often and works so effectively that some people even claim that DP is memoization.
Let me use a classic simple example of DP, the maximum subarray problem solved by Kadane's algorithm, to make the distinction between DP and memoization clear.
Since I was a kid, I had been wondering how I could find the maximum sum of a contiguous subarray of a given array. My first thought was grouping adjacent positive numbers together and adjacent negative numbers together, which could simplify the input. Then I tried combining neighbouring numbers together if their sum was positive or, hm, negative. Then uncertainty seemed to attack my approach from everywhere. Many years later, when I stumbled upon Kadane's algorithm, I was awe-struck. It is such a beautiful, simple algorithm, thanks to the simple but critical observation made by Kadane: any solution (i.e., any member of the set of solutions) will always have a last element. Trust me, only if you can appreciate the power of such a simple observation in the construction of DP can you fully appreciate the crux of DP.
Please note there is not any (significant) usage of memoization in Kadane's algorithm.
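For concreteness, here is a sketch of Kadane's algorithm in Python. The only state carried from step to step is the best sum of a subarray ending at the current element, which is exactly the "every solution has a last element" observation; there is no memo table.

```python
def max_subarray(xs):
    """Maximum sum of a non-empty contiguous subarray (Kadane's algorithm)."""
    best_ending_here = best = xs[0]
    for x in xs[1:]:
        # Either extend the best subarray ending at the previous element,
        # or start a new subarray at x.
        best_ending_here = max(x, best_ending_here + x)
        best = max(best, best_ending_here)
    return best

print(max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]))  # 6, from [4, -1, 2, 1]
```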
Just in case you might brush off Kadane's algorithm as being trivial, let me present two similar problems.
• Can you find efficiently the maximum sum of two disjoint contiguous subarray of a given array of numbers?
• Can you find efficiently two disjoint increasing subsequence of a given sequence of numbers the sum of whose lengths is the maximum? (This problem is created by me.)
If you can find the solution to these two problems, you will, I believe, be able to appreciate the importance of recognizing the subproblems and recurrence relations more. That might just be the start of a long journey, if you are like me.
By the Wikipedia entry on dynamic programming, the two key attributes that a problem must have in order for DP to be applicable are optimal substructure and overlapping sub-problems. In other words, the crux of dynamic programming is to find the optimal substructure in overlapping subproblems, where it is relatively easier to solve a larger subproblem given the solutions of smaller subproblems.
In summary, here are the difference between DP and memoization.
• DP is a solution strategy which asks you to find similar smaller subproblems so as to solve big subproblems. It usually includes recurrence relations and memoization.
• Memoization is a technique to avoid repeated computation on the same problems. It is special form of caching that caches the return value of a function based on its parameters.
Here I would like to single out "more advanced" dynamic programming. "More advanced" is a purely subjective term. What I would like to emphasize is that the harder the problems become, the more difference you will appreciate between dynamic programming and memoization.
Even as the problems become harder and more varied, there is not much variation to the memoization. The memoization technique is present and helpful most of the time. However, it becomes routine. After all, all you need to do is record the results of the subproblems that will be used to reach the result of the final problem.
However, as I have been solving more and harder problems using DP, the task of identifying the subproblems and constructing the recurrence relations becomes more and more challenging and interesting. There are many variations and techniques in how you can recognize or define the subproblems and how to deduce or apply the recurrence relations. Many of the harder problems look to me like they have distinct personalities. Here are some classical ones that I have used.
The following is a nice article.
In memoization, you store the results of expensive function calls in a cache, and retrieve them from there when they are needed again. This is a top-down approach, and it involves extensive recursive calls.
In Dynamic Programming (Dynamic Tables), you break the complex problem into smaller problems and solve each of the problems once. In Dynamic Programming, you maintain a table of subproblem solutions from the bottom up.
Both are applicable to problems with overlapping sub-problems, as in the Fibonacci sequence. If there are no overlapping sub-problems, you will not get a benefit, as in the calculation of $$n!$$
The result can be computed in the same $$\mathcal{O}$$-time with each. DP, however, can outperform memoization by avoiding the overhead of recursive function calls. If the sub-problem space need not be solved completely, memoization can be a better choice.
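A minimal sketch of the two styles side by side, using the Fibonacci example:

```python
from functools import lru_cache

# Top-down memoization: recursive calls, with results cached by argument.
@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up dynamic programming: build from the smallest subproblems up,
# keeping only the last two table entries.
def fib_dp(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

assert fib_memo(30) == fib_dp(30) == 832040
```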
From Wikipedia:
There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping sub-problems. If a problem can be solved by combining optimal solutions to non-overlapping sub-problems, the strategy is called "divide and conquer" instead[1]. This is why merge sort and quick sort are not classified as dynamic programming problems.
Therefore, it seems the point is the overlapping of subproblems. If some subproblems overlap, you can reduce the amount of processing by eliminating duplicated work. There can be many techniques, but usually it's good enough to reuse operation results, and this reusing technique is memoization.
# elliptic curve over extension field
How do I define an elliptic curve over the extension field GF(2^113) in SageMath?
The equation is y^2 + x*y = x^3 + a*x^2 + b over F_(2^m), where

a = 984342157317881800509153672175863
b = 4720643197658441292834747278018339
Please tell us how to see the two "cryptic" numbers a, b as elements of the above field. Everything else that remains is simple: just use EllipticCurve( F, [ 1, 0, 0, a, b] ) after defining the field F.
For example, over GF(11):
sage: F = GF( 11 )
sage: E = EllipticCurve( F, [ 1,2,3,4,9 ] )
sage: E
Elliptic Curve defined by y^2 + x*y + 3*y = x^3 + 2*x^2 + 4*x + 9 over Finite Field of size 11
( 2017-11-17 11:12:57 +0200 )
To interpret a and b as elements of K = GF(2^113), you probably want to use K.fetch_int(a). See the documentation.
( 2017-11-17 13:13:43 +0200 )
Sorry, taking one more look here I saw, incidentally, that $a$ comes with $x^2$ in $$y^2 +xy = x^3 + a x^2 + b\ ,$$ so the needed command is:
EllipticCurve( F, [ 1, a, 0, 0, b ] )
for instance, over some other field:
sage: a, b, F = 10, 11, QQ; E = EllipticCurve( F, [ 1, a, 0, 0, b ] ); E
Elliptic Curve defined by y^2 + x*y = x^3 + 10*x^2 + 11 over Rational Field
( 2017-11-18 21:01:54 +0200 )
There is still some input needed to get the "right" curve. So let me "guess" (or rather search the net for) it...
Let us consider first the section 3.2.1 in
http://www.secg.org/SEC2-Ver-1.0.pdf
because the book Song Y. Yan, Quantum Computing for Elliptic Curve Discrete Logarithms, (page 187) is not freely accessible.
This is related to the question, so let us write code, which reconstructs the situation.
The field with $2^{113}$ elements is modelled as $K = \mathbb F_2[X]\ /\ (X^{113}+X^9+1)$. In Sage one has to declare X^113 + X^9 + 1 as the modulus. Code:
R.<X> = PolynomialRing( GF(2) )
K.<x> = GF( 2**113, modulus = X^113 + X^9 + 1 )
This defines inside sage first the field $K$:
sage: K
Finite Field in x of size 2^113
sage: x.minpoly()
x^113 + x^9 + 1
The numbers a, b posted above have the hex representations from the article.
a = 984342157317881800509153672175863
b = 4720643197658441292834747278018339
print "hex(a) =", hex(a)
print "hex(b) =", hex(b)
This gives:
hex(a) = 3088250ca6e7c7fe649ce85820f7
hex(b) = e8bee4d3e2260744188be0e9c723
so the numbers $a,b$ from the two sources, and from the posted question coincide. (But in the post, the essential information, the modulus, was missing.)
So let us put all together, with the data from the book Quantum Computing...:
R.<X> = PolynomialRing( GF(2) )
K.<x> = GF( 2**113, modulus = X^113 + X^9 + 1 )
a = 984342157317881800509153672175863
b = 4720643197658441292834747278018339
A = K.fetch_int( a )
B = K.fetch_int( b )
E = EllipticCurve( K, [ 1, A, 0, 0, B ] )
P = E.point( ( K.fetch_int( 8611161909599329818310188302308875 ) ,
K.fetch_int( 7062592440118670058899979569784381 ) , ) )
Q = E.point( ( K.fetch_int( 6484392715773238573436200651832265 ) ,
K.fetch_int( 7466851312800339937981984969376306 ) , ) )
k = 2760361941865110448921065488991383
print "Is Q = k*P in E(K)? %s" % ( Q == k*P )
And we get:
Is Q = k*P in E(K)? True
# Reaction-Diffusion Equations 2
## Why do we need Diffusion AND Reactions?
What benefit do diffusion-reaction models have over just reaction or just diffusion models?
As discussed in the introduction, one of the big questions in developmental biology is how complex organisms emerge from a single fertilized egg. The answer has long been thought to be chemical gradients. If you have different chemicals at different concentrations throughout the embryo, and you apply a threshold function so that the cell differentiates depending on the chemicals immediately surrounding it, then you get a spatially organized differentiation of cells that will later develop into different organs and body parts.
The question of how these gradients are set up was the subject of great debate, and at the beginning diffusive models were discarded because diffusion was assumed to be too slow to establish a stable gradient. In 1970, Francis Crick (who had earlier shared the Nobel Prize for the discovery of the structure of DNA) proposed that diffusion was a plausible mechanism since it was fast enough. [(crick_1970)]
He modeled this as a one-dimensional diffusion problem much like the example above. In other words, he imagined an embryo as a line of cells. His equation was,
$$\label{eqn:Crick1} \frac{\partial C(x,t)}{\partial t} = D \frac{ \partial^2C(x,t)}{\partial x^2},$$ where $C(x,t)$ is the concentration of the chemical at position x and time t. Furthermore, he placed boundary conditions $C(0,t) = C_0$ and $C(L,t) = 0$, where L is the length of the embryo.
Crick wanted a stable gradient (in time), so he solved \eqref{eqn:Crick1} by setting the LHS to zero. The system becomes $$\label{eqn:Crick2} \frac{\partial^2 C(x,t)}{\partial x^2} = 0,$$ which implies that the concentration gradient will be a straight line when it is stable.
He then verified that the stable linear gradient would be set up fast enough. Diffusion is a random walk process and our derivation above showed that the diffusion constant has dimensions $L^2t^{-1}$. The time it takes to set up the gradient is, $$\label{eqn:Crick3} t = \frac{A(nl)^2}{D},$$ where $t$ = time in seconds, $n$ = number of cells in the embryo, $l$ = length of each cell in cm, and $D$ = diffusion constant in $cm^2s^{-1}$. $A$ is a numerical constant that is fit from data. Assuming that the time it takes a real embryo to set up the chemical gradient is around three hours, Crick found that diffusion would be fast enough if $L = nl$ was on the order of millimeters, which is the case with fruit flies.
Crick did not have the tools to check whether the concentration gradient of bicoid in fruit fly embryos was linear, so he was satisfied with using just diffusion. He also used boundary conditions without stating what could biologically explain the $C(L,t)=0$ condition.
New imaging technologies have been developed that allow scientists to measure the concentration of chemicals in embryos. One very important chemical, called bicoid, establishes an asymmetry along the anterior-posterior axis (i.e. it determines which cells become part of the head and which ones become part of the body). The bicoid gradient can be approximated by an exponential curve.
Instead of relying solely on boundary conditions like Crick, models of bicoid have introduced a reaction term. Bicoid takes time to be produced, and once it is created it degrades naturally. This new model looks like: $$\label{eqn:revMod1} \frac{\partial C(x,t)}{\partial t} = D(t) \frac{ \partial^2C(x,t)}{\partial x^2} - \frac{1}{\tau}C(x,t) + \rho(x,t),$$ where $D(t)$ is the (possibly time-dependent) diffusion constant, $\tau$ is the degradation rate, and $\rho$ is a synthesis rate [(little_2011)].
Consider a simpler version of \eqref{eqn:revMod1}, $$\label{eqn:revMod2} \frac{\partial C(x,t)}{\partial t} = D \frac{ \partial^2C(x,t)}{\partial x^2} - \frac{1}{\tau}C(x,t),$$ which we can solve for a stable gradient analytically: setting the left-hand side to zero gives $D\,C'' = C/\tau$, whose decaying solution is $$\label{eqn:revMod3} C(x) = C_0 e^{-x/\lambda},\quad \lambda = \sqrt{D\tau}$$
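As a quick numerical check of \eqref{eqn:revMod3}, here is a minimal explicit finite-difference sketch of \eqref{eqn:revMod2}; the parameter values are illustrative and are not fit to bicoid data.

```python
import numpy as np

D, tau, C0 = 1.0, 4.0, 1.0   # so lambda = sqrt(D * tau) = 2
L, nx = 20.0, 201            # domain much longer than lambda
dx = L / (nx - 1)
dt = 0.2 * dx**2 / D         # respects the explicit stability limit
C = np.zeros(nx)
for _ in range(100000):      # integrate to (approximate) steady state
    lap = (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
    C += dt * (D * lap - C / tau)
    C[0], C[-1] = C0, 0.0    # boundary conditions
x = np.linspace(0.0, L, nx)
lam = np.sqrt(D * tau)
print(np.abs(C - C0 * np.exp(-x / lam)).max())  # small: the gradient is exponential
```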
This example shows why sometimes reaction and diffusion are needed to make realistic biological models. Diffusion can model spatial phenomena, but often, like in the case of bicoid, we have local reactions that can only be included in the model with equations like the Reaction-Diffusion PDE in \eqref{eqn:revMod1}.
## Reaction-Diffusion equations and spatial domains
So far we have seen how reaction and diffusion can work together to create biological models, and how diffusion is really fast at short length scales. However, we have not studied the spatial properties of Reaction-Diffusion equations.
Let us consider one last example of a one-dimensional reaction diffusion equation to study the effect of the spatial domain on the structure that emerges [(kierstead_1953)]. Imagine we have a phytoplankton population and it can only survive in some water mass that has the adequate temperature and dissolved nutrients. If the water mass is isolated (i.e. surrounded by water where the phytoplankton will die), is there a minimum water mass size so that the phytoplankton population can increase?
In the ocean, this water mass would be three-dimensional, but let us take a simpler one-dimensional approach in which we consider a mass of water that has been stretched out into a very thin tube. We impose boundary conditions on the concentration of phytoplankton $c$ such that any phytoplankton at the edges are automatically destroyed, and also that the concentration is constant at $t = 0$. That is, we have \begin{gather} c(0,t) = 0 = c(L,t) \label{bcplank} \\ c(x, 0) = c_0. \label{icplank} \end{gather}
Phytoplankton cannot swim against the current of the ocean, so its motion can be described by diffusion. Therefore, if the phytoplankton population does not grow or decrease, its concentration will be the solution to the diffusion equation, $$\label{eqn:Plank1} \frac{\partial c}{\partial t} = D \frac{\partial^2c}{\partial x^2}.$$
However, phytoplankton are living organisms, so they will reproduce at a rate that is proportional to their concentration. Therefore we add a growth term proportional to $c$ to Equation \eqref{eqn:Plank1}, $$\label{eqn:Plank2} \frac{\partial c}{\partial t} = D \frac{\partial^2c}{\partial x^2} +Kc,$$ where $K$ is a growth constant.
Before we solve this via separation of variables, we can simplify our problem by scaling out the diffusion-less exponential growth, $$\label{eqn:Plank3} c(x, t) = f(x, t)e^{Kt},$$
and substituting Equation \eqref{eqn:Plank3} into Equation \eqref{eqn:Plank2}, we find that f must satisfy the standard diffusion (or heat) equation of the previous part, $$\label{eqn:Plank4} \frac{\partial f}{\partial t} = D \frac{\partial^2f}{\partial x^2}.$$
By the standard techniques of Fourier series, and using the boundary conditions $c = 0$ at $x = 0, L$, we find that $$\label{eqn:Plank5} f = \sum_{n=1}^\infty B_n \sin \left(\frac{n\pi x}{L}\right) e^{-n^2\pi^2Dt/L^2},$$
where $B_n$ are the Fourier sine coefficients given by, $$\label{eqn:Plank6} B_n = \frac{2}{L} \int_0^L c_0 \sin \left(\frac{n\pi x}{L}\right) \, \de{x},$$
for $n = 1, 2, \ldots$, which are then computed for given initial concentration, $c_0$.
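For instance, with the constant initial condition \eqref{icplank}, the integral can be evaluated in closed form: $$B_n = \frac{2c_0}{n\pi}\left(1-\cos n\pi\right) = \begin{cases} \dfrac{4c_0}{n\pi}, & n \text{ odd},\\ 0, & n \text{ even}, \end{cases}$$ so only the odd modes are present initially.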
Substituting \eqref{eqn:Plank5} into \eqref{eqn:Plank3} we get the concentration, $$\label{eqn:Plank7} c(x,t) = \sum_{n=1}^\infty B_n \sin \left(\frac{n\pi x}{L}\right) e^{(K-n^2\pi^2D/L^2)t}.$$
### What will be the steady state of the phytoplankton population?
The key is to note from \eqref{eqn:Plank7} that, because the Fourier coefficients are bounded and decreasing as $n \to \infty$ and the sinusoids are well behaved, the long-term behavior of the system will be controlled by the time factor in Equation \eqref{eqn:Plank7}, $$\label{eqn:Planktime} e^{(K-n^2\pi^2D/L^2)t}.$$
The sign of the argument of the exponential will determine whether the population of the plankton will grow, stay the same or decay as time goes to infinity. Thus, we obtain three cases related to the argument of \eqref{eqn:Planktime}:
• If the argument is exactly zero, then the population will stay the same.
• If the argument is negative, then the population will decay over time.
• If the argument is positive, then the population will grow over time.
Moreover, the $n=1$ mode has the largest exponent, so it dominates the long-term behavior: the higher modes $n = 2, 3, \ldots$ decay relative to it and do not change the steady-state behavior. For $n = 1$, the bifurcation (the point at which the behavior changes from decay to growth) is found at the length $L$ such that $$\label{eqn:Plank8} K - \pi^2\frac{D}{L^2} = 0,$$
and thus, we can obtain a critical length given by $$\label{eqn:Plank10} L_c = \pi\sqrt{\frac{D}{K}}.$$
In summary, we have found the critical length of the domain such that
• If $L= L_c$ the population will stay the same.
• If $L>L_c$ the population will increase.
• If $L<L_c$ the population will decrease.
As a final note, observe that the critical length grows like $\sqrt{D}$ and shrinks like $1/\sqrt{K}$. This suggests that the steady-state behavior of the plankton population is determined by the relative strength of the diffusive and reactive terms in Equation \eqref{eqn:Plank2}. When $L>L_c$ the reactive term $Kc$ dominates the long-term behavior, but when $L<L_c$ the diffusive term $D\frac{\partial^2c}{\partial x^2}$ dominates. When $L = L_c$ the diffusive and reactive effects balance exactly. (For a more complete discussion of minimum domains for spatial patterns, refer to [(murraySperb_1983)].)
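To get a sense of scale, here is a rough numerical sketch; the diffusivity and growth rate below are assumed order-of-magnitude values, not fitted data.

```python
import math

D = 1e5      # turbulent diffusivity in cm^2/s (assumed)
K = 1.2e-5   # growth rate in 1/s, roughly one doubling per day (assumed)

L_c = math.pi * math.sqrt(D / K)        # critical patch size, in cm
print(f"L_c ~ {L_c / 1e5:.1f} km")      # -> ~2.9 km
```

For these parameter values, patches of water much smaller than a few kilometres across could not sustain a bloom.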
You can now proceed to the next chapter: Turing Instabilities I
## References
[(:crick_1970» author: Crick, F. ref-author: Crick title: Diffusion in Embryogenesis journal: Nature volume: 225 year: 1970 )]
[(:einstein_1956» author: Einstein, A. ref-author: Einstein publisher: Dover Publications title : Investigations on the Theory of the Brownian Movement year : 1956 )]
[(:mehrer_2009» author: Mehrer, H. and Stolwijk, N.A. ref-author: Mehrer and Stolwijk title: Heroes and Highlights in the History of Diffusion journal: Diffusion Fundamentals volume: 11 number: 1 year: 2009 )]
[(:nelson_1967» author : Nelson, E. ref-author: Nelson publisher : Princeton University Press title : Dynamical theories of Brownian motion year : 1967 )]
[(:little_2011» author: Little, SC, Tkacik G, Kneeland TB, Wieschaus EF, Gregor T ref-author: Little et al title: The Formation of the Bicoid Morphogen Gradient Requires Movement from Anteriorly Localized mRNA journal: PLoS Biology volume: 9 number: 3 year: 2011 )]
[(:kierstead_1953» author: Kierstead, H, and Slobodkin LB. ref-author: Kierstead and Slobodkin title: The size of water masses containing plankton blooms journal: J. Mar. Res volume: 12 number: 1 year: 1953 )]
[(:murraySperb_1983» author: Murray J.D. and Sperb R.P. ref-author: Murray and Sperb title: Minimum domains for spatial patterns in a class of reaction diffusion equations journal: J. Math. Biology volume: 18 year: 1983 )] |
# Does this monotonicity-type concept have a name?
I am interested if the following concept has a name, or what do you think that a good name would be?
Let $\{X_{i}: i=1,2,\ldots,n,n+1\}$ be a family of posets and $F:X_{1}\times X_{2}\times\cdots\times X_{n}\rightarrow X_{n+1}$ such that $F$ is either increasing or decreasing with respect to each of the arguments (independently).
What would be an appropriate name for this property?
There is the concept of mixed monotone mapping, i.e., when $F:X_{1}\times X_{2}\rightarrow X_{3}$ is increasing in the first variable and decreasing in the second one, but I don't think that this name would be good to designate the more general concept above. I thought of generalized mixed monotone as a last resort, but I want to know your opinion on this.
Coordinatewise monotonic? – Arthur Fischer Jul 29 '12 at 16:42
@ArthurFischer: Is this a standard name or is it your personal suggestion? – digital-Ink Jul 29 '12 at 16:49
Just a personal suggestion; I've never come across the concept before, but it seems like a natural name. – Arthur Fischer Jul 29 '12 at 16:51
@ArthurFischer: A search on Google gave some good results on "coordinate-wise monotonic" (and related terms), which leads me to believe that this is already standard terminology; I will keep investigating. – digital-Ink Jul 29 '12 at 20:29
Compensating wage differentials with unemployment: Evidence from China
Citation:
Xiaoqi Guo and James K Hammitt. 2009. “Compensating wage differentials with unemployment: Evidence from China.” Environmental and Resource Economics, 42, 2, Pp. 187-209. Publisher's Version
Abstract:
We estimate the economic value of mortality risk in China using the compensating-wage-differential method. We find a positive and statistically significant correlation between wages and occupational fatality risk. The estimated effect is largest for unskilled workers. Unemployment reduces compensation for risk, which suggests that some of the assumptions under which compensating wage differentials can be interpreted as measures of workers’ preferences for risk and income are invalid when unemployment is high. Workers may be unwilling to quit high-risk jobs when alternative employment is difficult to obtain, violating the assumption of perfect mobility, or some workers (e.g., new migrants) may be poorly informed about between-job differences in risk, violating the assumption of perfect information. These factors suggest our estimates of the value per statistical life (VSL) in China, which range from approximately US$30,000 to US$100,000, may be biased downward. Alternative estimates adjust for heterogeneity of risk within industry by assuming that risk is concentrated among low-skill workers. These estimates, which are likely to be biased downward, range from US$7,000 to US$20,000.
Notes:
This study developed a new approach to the valuation of health risk in China, for monetizing health damages of environmental degradation.
Last updated on 07/23/2019 |
# A function such that $f(x) = \lim_{t\to0}\frac{1}{2t}\int_{x-t}^{x+t} sf'(s)\,ds$ for all $x$
Let $f:\mathbb R\to\mathbb R$ be a function with continuous derivative such that $f(\sqrt{2})=2$ and $$f(x) = \lim_{t\to0}\frac{1}{2t}\int_{x-t}^{x+t} sf'(s)\,ds$$ for all $x\in\mathbb R$. Find $f(3)$.
I guess the Fundamental Theorem of Calculus needs to be used to solve this.
Taking the derivative with respect to $x$ on both sides, I simplified the integral to
$(x+t)f'(x+t) - (x-t)f'(x-t).$
The equation becomes
$f'(x) = \lim_{t\to0}\frac{1}{2t}\left[(x+t)f'(x+t) - (x-t)f'(x-t)\right].$
This is leading me nowhere. Any ideas on how to tackle this problem?
• This being your 19th question, you should know better than post blurry screenshots of problems. See math notation guide. – user147263 Jan 21 '15 at 15:58
• @Fundamental Thanks for editing my post. Yes, this is my 19th question, all within a span of about a month! That is because I have an exam coming up and I could really use some help. But right now I really don't have the time to sit back and learn MathJax. I'll definitely learn it once I am done with my exam. – Deepabali Roy Jan 21 '15 at 16:07
Lemma. If $g$ is a continuous function, then $$\lim_{h\to0}\frac{1}{2\,h}\int_{x-h}^{x+h}g(s)\,ds=g(x).$$ Proof. We may assume $h>0$. $$\Bigl|g(x)-\frac{1}{2\,h}\int_{x-h}^{x+h}g(s)\,ds\Bigr|=\frac{1}{2\,h}\Bigl|\int_{x-h}^{x+h}(g(x)-g(s))\,ds\Bigr|\le\frac{1}{2\,h}\int_{x-h}^{x+h}|g(x)-g(s)|\,ds.$$ Use that $g$ is continuous at $x$ to show that the last expression converges to $0$ as $h\to0$.
Let's return to the original question. Since $s\,f'(s)$ is continuous, the lemma gives $$\lim_{t\to0}\frac{1}{2\,t}\int_{x-t}^{x+t} s\,f'(s)\,ds=x\,f'(x).$$ (You can get the same result integrating by parts.) All that is left is to solve the ODE $$x\,f'(x)=f(x),\quad f(\sqrt2)=2.$$ $$\frac{f'}{f}=\frac{1}{x}\implies \log f=\log x+c\implies f(x)=C\,x.$$ The condition $f(\sqrt2)=2$ gives $C=\sqrt2$, so $f(x)=\sqrt2\,x$ and $f(3)=3\sqrt2$.
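For the skeptical, a quick symbolic check of this answer (a sketch using Python's sympy; not part of the argument):

```python
import sympy as sp

x, s, t = sp.symbols("x s t", positive=True)
f = sp.sqrt(2) * x                        # candidate solution C*x with C = sqrt(2)

# Average of s*f'(s) over [x-t, x+t]; here f'(s) = sqrt(2).
avg = sp.integrate(s * sp.sqrt(2), (s, x - t, x + t)) / (2 * t)
print(sp.simplify(sp.limit(avg, t, 0) - f))   # -> 0, so f satisfies the equation
print(f.subs(x, 3))                           # -> 3*sqrt(2)
```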
# Stable Vector bundles
Let $C$ be a smooth curve, let $F$ be a stable rank 2 vector bundle of degree $2c+1$, $c\in\mathbb{N}$, and fix a point $p\in C$.
Can one choose an epimorphism $u:F\rightarrow \mathbb C_p$ such that $u$ does not vanish on any of the sub-line bundles of $F$ of degree $c$? (Here $\mathbb C_p$ is the skyscraper sheaf.)
Thank you.
Yes, because $F$ has at most 2 sub-line bundles of degree $c$, so you have plenty of choices. The reason is the following. Twisting by a line bundle of degree $-c$ you reduce to the case $c=0$. You can assume that $F$ contains at least one sub-line bundle of degree $0$, and twisting again that this is $\mathcal{O}_C$, so that you have an exact sequence $$0\rightarrow \mathcal{O}_C\rightarrow F\rightarrow L\rightarrow 0$$with $\deg(L)=1$. This extension is given by a nonzero extension class $e\in H^1(C,L^{-1})$.
Now suppose that $F$ contains another line bundle of degree $0$. It must map non-trivially to $L$, hence it is of the form $L(-q)$ for some point $q\in C$. This means that your extension splits when pulled back to $L(-q)$, or equivalently that the extension class $e$ goes to $0$ in $H^1(C,L^{-1}(q))$. Dually, $e$ defines a hyperplane $e^*$ in $H^0(C,K_C\otimes L)$, and this hyperplane must be equal to the image of $H^0(C,K_C\otimes L(-q))$; in other words, the image of $q$ by the map $\varphi :C\rightarrow |K\otimes L|^*$ defined by the linear system $|K\otimes L|$ must be equal to $e^*$. Since $\deg(K\otimes L)=2g-1$ it is an easy exercise to show that there are at most 2 such points $q$.
Edit: as explained in the comment below, there can actually be 3 such points if $C$ is hyperelliptic.
• If $C$ is elliptic there is a map from any line bundle of degree $0$ to $F$, just by Riemann-Roch. – Sasha Feb 5 '15 at 21:15
• Right, I assume $g\geq 2$ -- my argument fails for $g=1$ because the linear system $|K\otimes L|$ is just a point. – abx Feb 6 '15 at 5:08
• thanks @abx; if there are two such points, then $F$ has 3 sub-line bundles $\mathcal O_C$, $L(-p)$ and $L(-q)$ ?? Could you please explain why there are at most 2 such points? – Z.A.Z.Z Feb 6 '15 at 13:45
• Actually I forgot one case in which there are 3 points $q$ (hence 4 sub-line bundles). With the above notation, the possible line bundles are $\mathcal{O}_C$ and $L(-q)$ with $\varphi (q)=e^*$, with $\varphi$ defined by the line bundle $L':=K_C\otimes L$ of degree $2g-1$. If there are 3 points $q_1,q_2,q_3$ with this property, one has $\dim H^0(C,L'(-q_1-q_2-q_3))=\dim H^0(C,L')-1= g-1$. By Riemann-Roch this is possible only if $C$ is hyperelliptic and $L=H^{-1}(q_1+q_2+q_3)$. By Riemann-Roch again 4 points in $\varphi ^{-1}(e^*)$ is impossible. – abx Feb 6 '15 at 15:54 |
# Cubic Equation. (Factorisation)
I'm given this question: factorise $4x^3-7x-3$. Is this answer acceptable?
$(x+\frac{1}{2})(x-\frac{3}{2})(x+1)$.
• Can you add more details to your question, like why do you think your answer might not be acceptable and etc. Asking only , is it correct? Kinda doesn't seem right. You can get anybody to check it. – Mann Apr 16 '15 at 12:45
• Because I don't know whether it is a must to put $(2x+1)(2x-3)(x+1)$. – Mathxx Apr 16 '15 at 12:46
• Yes, add this in your question. – Mann Apr 16 '15 at 12:48
Not quite. What you've got is a quarter of what you want, the correct expression is $$4(x+\frac{1}{2})(x-\frac{3}{2})(x+1)$$ You've done the hard bit - finding the factors - and for most things (most importantly finding roots) your expression is perfectly fine as the four doesn't have much effect. However, if I'm being picky, or if you're being marked on this, then having the four there to have the exact same expression is important.
No. If you expand $(x+\frac{1}{2})\,(x-\frac{3}{2})\,(x+1)$ you get $\displaystyle\frac{4x^3 - 7x - 3}{4}$. You are missing a factor $4$. You can express it like this:
$4x^3 - 7x - 3 = 4\,(x+\frac{1}{2})\,(x-\frac{3}{2})\,(x+1)$.
If you want to avoid the fractions you can also express it as
$(2x+1)\,(2x-3)\,(x+1)$.
First, it is clear that $-1$ is a root. So, $(x+1)$ is a factor of that polynomial. Thus,
$4x^3-7x-3=(x+1)(ax^2+bx+c)=ax^3+bx^2+cx+ax^2+bx+c$
By equating the coefficients of powers, we get:
$a=4$
$b+a=0$ so $b=-4$
$c+b=-7$, so $c=-3$
So, we have $(x+1)(4x^2-4x-3)=4(x+1)(x^2-x-\frac{3}{4})$
Now, using the "usual" method to find roots of quadratic equations, we find that $-\frac{1}{2},\frac{3}{2}$ are roots and $x^2-x-\frac{3}{4}$ is a monic polynomial so it can be written as
$(x+\frac{1}{2})(x-\frac{3}{2})$
Thus, the correct factorization is:
$4(x+1)(x+\frac{1}{2})(x-\frac{3}{2})$
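If you want to double-check a factorisation like this mechanically, a one-liner with sympy (assuming it is available) confirms it:

```python
import sympy as sp

x = sp.symbols("x")
print(sp.factor(4*x**3 - 7*x - 3))   # -> (x + 1)*(2*x - 3)*(2*x + 1)
```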
Yes! That is the correct answer if multiplied by 4.
Yes, you are almost correct; the correct factorisation is
$(2x+1)(2x-3)(x+1)$
• Well it depends on, if you are equating to cubic to 0, or considering it of the form $f(x)=somecubic$ , in the latter case. Not putting a 4 in front can cause trouble. – Mann Apr 16 '15 at 12:57 |
# Space elevator feasibility; split from: NASA Announces New Launch Vehicle and CEV
• NASA
LunchBox
What I wish is that lots of money would be dumped into a ribbon-type space elevator or a space fountain That's the real way to get to space cheaply and safely.
Yeah, they'll have those right after they have satellites that can scratch your ass with a laser beam from space.
Cheers...
SkepticJ
LunchBox said:
Yeah, they'll have those right after they have satellites that can scratch your ass with a laser beam from space.
Cheers...
While I admire your skepticism, you seem to have never heard of carbon nanotubes, have you? They're already strong enough to create such a ribbon; give it a few decades. Remember, we'll never break the speed of sound, land on the moon or split the atom; don't be absurd.
Staff Emeritus
Gold Member
I'm afraid Lunchbox is right on this one...
It's possible... (possibly), but anyone who says that building it is anything short of the most complicated engineering feat ever attempted has their head in the clouds and their pie in the sky.
Between grounding out the Van Allen Belts and the slightest gravitational, atmospheric, or solar pressure perturbations exciting the 1047th natural mode of the ribbon and causing it to oscillate out of control... I just don't think it's possible in our lifetime, nor the lifetimes of our children or our children's children's children.
SkepticJ
enigma said:
I'm afraid Lunchbox is right on this one...
It's possible... (possibly), but anyone who says that building it is anything short of the most complicated engineering feat ever attempted has their head in the clouds and their pie in the sky.
Between grounding out the Van Allen Belts and the slightest gravitational, atmospheric, or solar pressure perturbations exciting 1047th natural mode of the ribbon and causing it to oscillate out of control... I just don't think it's possible in our lifetime, nor the lifetimes of our children or our children's children's children.
Would you please give links etc. to such data? I've seen "show stopping" problems(at least they appear to be before rebuttals are given) with space elevators before, and I'm not persuaded they aren't feasible yet. I'm not saying one will be built within thirty years, but not within my great great grandchildren's lives? Come on. For one thing I'm only twenty, and for another, in a smaller span of "grands" we've gone from http://quest.arc.nasa.gov/aero/wright/background/otto.jpg
Last edited by a moderator:
Staff Emeritus
Gold Member
SkepticJ said:
A few semesters of stuctures vibrations courses is my only source. Damping out a string under tension with one fixed end and one free end will be damn near impossible.
There was a shuttle experiment where they extended wires a hundred meters or so radially and the current produced destroyed the experiment, and the shuttle wasn't even in the VA belt. A space elevator is thousands of kilometers long. Even if the ribbon is built strong enough to withstand the current, due to the I×B force you'll get a time-varying tangential force on top of the floating string-in-tension model, which now also needs to be damped out.
I did a bunch of research a few years back and read reports by David Smitherman at NASA which said that it will be feasible in the next century, but I am highly skeptical that his analysis was thorough.
Again, I'm the first person to say guffaw to anyone who says it will never happen... but I don't think this one will be attainable anytime soon.
SkepticJ
enigma said:
A few semesters of stuctures vibrations courses is my only source. Damping out a string under tension with one fixed end and one free end will be damn near impossible.
There was a shuttle experiment where they extended wires a hundred meters or so radially and the current produced destroyed the experiment, and the shuttle wasn't even in the VA belt. A space elevator is thousands of kilometers long. Even if the ribbon is built strong enough to withstand the current, due to the I×B force you'll get a time-varying tangential force on top of the floating string-in-tension model, which now also needs to be damped out.
I did a bunch of research a few years back and read reports by David Smitherman at NASA which said that it will be feasible in the next century, but I am highly skeptical that his analysis was thorough.
Again, I'm the first person to say guffaw to anyone who says it will never happen... but I don't think this one will be attainable anytime soon.
Ah, but this wouldn't be a problem for a space fountain, which I linked. A space fountain isn't under tension, it's under compression. It's held up using mass beams (streams of mass pellets shot out of an auto-fire magnetic accelerator gun). Another advantage of space fountains is that they need not be built on the equator. One could build one at the North Pole if one wanted to; it would be stupid and fuel-wasting, but it could be done. The best place for a space fountain would be at the equator, to give the cars traveling up it that added boost from the spin of the Earth, just like rockets make use of.
Ki Man
20 mile high elevator.
what the heck is the top of it going to lead to? what's the point? is it going to be like the cn tower but blown up in size 20x? there's no point in going up if there's nothing to do when you're up there.
i feel sorry for the "astronaut" who has to be first to ride it.
not an impossible idea but not very plausible. you're going to make a building/structure partially leave the atmosphere while still tethered to the ground.
maybe it can happen. right after pigs take over england and i can go around the world with the press of a button while having my but scratched by a satalite guided laser while being served fruit that came from our new martian neighbors.
SkepticJ
Ki Man said:
20 mile high elevator.
what the heck is the top of it going to lead to? what's the point? is it going to be like the cn tower but blown up in size 20x? threes no point in going up if there's nothing to do when your up there.
i feel sorry for the "astronaut" who has to be first to ride it.
not impossible idea but not very plausible. your going to make a building/stucture partially leave the atmoshere while still tethered tot he ground.
maybe it can happen. right after pigs take over england and i can go around the world with the press of a button while having my but scratched by a satalite guided laser while being served fruit that came from our new martian neighbors.
One would think we could dispense with strawman arguments and engage critical thinking at a science forum. :yuck:
It'd be far taller than 20 miles. A space fountain could be 200 mi. high to provide service to low Earth orbit, or as high as you want to go. They have no limit in height. You should read up on them at the link I gave, or at this link, before you strawman and mock them.
A "beanstalk" ribbon type elevator, which has the problem noted in this thread, would go up about 60,000 miles into the sky.
Oh I don't know, how about a space station or something? What do we send those Russian craft or the Shuttle into space for?
The idea is that craft ride up into orbit and then detach from the elevator at the height they want to go to; or go on to the end.
Why? If mag lev rails went up the side of the space fountain tower the elevator car could reach the top in less than a few hours. The ribbon type's transit time might be up to a few weeks from bottom to top. As long as some good Led Zeppelin music was playing over the speakers, Internet link with the ground, TV beamed up and perhaps love mate(s) as well the ride up would be great.
This matters why? What, you're talking about making a metal tube soar through the air using metal "wings" and loud tube-like things with spinning "turbines" to propel them? Carry four-hundred people as well? At 30,000 ft.? At 600mph? Across an entire ocean on one tank of fuel? :rofl: Arguments from Incredulity aren't valid arguments.
*Sigh* I won't dignify this pile of BS hyperbole with a response longer than this.
Last edited:
Ki Man
thats different. planes have good physics backing them up. this is just... whoa.
think about it. even if we had a design that would stand the test of time, altitude, weather, and earthquakes, along with hundreds of other things, how the heck are we planning on building this thing? AND how much will this cost? AND what are we going to do when it's up there? we're not going to be able to leave whatever is at the top of the elevator in any way other than going back down the elevator. we're going to be limited to spacewalks unless we have a spacecraft taken up there, but if we take a spacecraft up there it's probably going to be using rockets to get there so then there's no point in an elevator in the first place. and there's always the danger of being hit by low-flying aircraft AND satellites. if a b-29 hit the empire state building, something's going to hit this baby sometime.
Intuitive
I remember reading an article about a dirigible made of rigid nanotubes: if the displacement volume was as large as a football field, you could lift 800 lbs of mass. The rigid dirigible was an evacuated blimp containing a football-field-sized vacuum of displacement. You could put more into space this way and it's reusable, if you have efficient vacuum pumps and a super material like nanotubes. If I had the money to make it I bet I could win some X Prize. The rigid evacuated dirigible will rise until it reaches an equilibrium altitude.
If you can create a super hard vacuum in a rigid dirigible it would be pushed up toward space.
I bet if we had a good engineering team work on this we could have a blimp in space and work out all the bugs. (Specific gravity is our friend. Buddy up!)
SkepticJ
Ki Man said:
thats different. planes have good physics backing them up. this is just... whoa.
think about it. even if we had a design that would stand the test of time, altitude, weather, and earthquakes, along with hunderds of other things, how the heck are we planning on building this thing? AND how much will this cost? AND what are we going to do when its up there? were not going to be able to leave what ever is at the top of the elevator in any way other than going back down the elevator. we're going to be limited to spacewalks unless we have a spacecraft taken up there, but if we take a spacecraft up there its probably going to be using rocketsto get there so then there's no point in an elevator in the first place. and there's always the danger of being hit by lowflying aircraft AND satalites. if a b-29 hit the empire state building, somethings going to hit this baby sometime.
Space fountains don't violate known physics. If they did I wouldn't suggest them as a possible way to space. If you want to keep talking about them it's best you actually read how they'd work at the links I gave.
Would you care to explain what negative effect altitude above sea level would have on metals? Ummm, I have; please don't be condescending with me. Read the links, they explain how they'd be made.
Can you read? I said the stuff that goes up detaches from the tower. Nothing is "stuck" any more than payloads are stuck to the Space Shuttle or other rockets that take them to orbit or beyond. If you're not going to stop being stupid with me I'm not going to spend time conversing with you any longer; your choice what happens.
No, the most energy use is getting from the ground up to orbit; which is what the elevator does. Once you're in orbit rockets can be much smaller to get where you want to go. Air space around such a tower would be as tight as the space around a government building. Idiots or terrorists would be shot down long before they could hit the tower. We have a thing called RADAR for tracking things in orbit. How do you think we can keep the ISS or Shuttle from impacting with objects? Because we know where they are. A tower won't be built in the orbital paths of satellites.
The B-29 that you speak of hit the Empire State Building in heavy fog, before the days of RADAR, and it still didn't make the tower fall. Name a modern event where a plane hit a building by accident and you'd have a better point. Such a tower would, as a last resort, have guns similar to the missile defence guns on Navy ships that shred enemy missiles into tiny, tiny bits several miles away. Such a tower could have missiles, and would have protective ocean craft and aircraft that could take out threats.
Ki Man
What I wish is that lots of money would be dumped into a ribbon-type space elevator or a space fountain That's the real way to get to space cheaply and safely.
since when is building an elevator many many miles high cheap
I never said it's impossible, but there are a whole lot of hurdles we need to jump before we can start taking this to the next level.
*sigh* everything i say always comes out wrong. i need to work on my charisma. you're not going to hear anything from me for a while until i get better at speaking.
Last edited:
SkepticJ
Ki Man said:
since when is building an elevator many many miles high cheap
I never said its impossible, but there's a whole lot of hurdles we need to jump before we can start taking this to the next level.
*sigh* everything i say always comes out wrong. i need to work on my charisma. your not going to hear anything from me for a while until i get better at speaking.
It's not cheap, at first. I don't know if you know this or not, but every time the US's Space Shuttle goes into orbit, the effort that went into that feat costs the tax-paying public about 1,000,000,000 US dollars. This breaks down to about $10,000 per pound to get a payload into space. Why did Dennis Tito, the first space tourist, have to pay $20,000,000? Because of his weight and the weight of the air, water, food and fuel that was needed to keep him alive. Plus a tiny bit of profit to the Russians for their trouble, I guess. I'm all for NASA and the Russians' space program. In fact I wish they got about $0.05 per tax dollar instead of the $0.01 or less they get. During the Apollo program NASA got about $0.04 per tax dollar, and we had missions to the effing moon! But I digress. A space elevator would cost a lot up front, but the cost after that would be very low; the level of low that would let you take a tourist trip to space, if millions of other people weren't on the waiting list cash in hand, that is.
Yes, there are things that need to be developed to make the idea even more economical, such as higher-temperature superconductor materials to cut down on the energy lost as heat from slowing down the mass beams and speeding them up again. And better magnetic guns.
Talk to you again on Tuesday, have to go until then.
Ki Man
the quote i was referring to said that the ribbon would be a cheap and safe alternative. safe, maybe. cheap? definitely not
money money money money
going to space costs billions to begin with.
what about 2 or 3 stage jet/rocket aircraft. definitely cheaper, re-usable, and reasonably safe.
Staff Emeritus
Gold Member
cheaper like the shuttle is cheaper?
Re-usable spacecraft are some of the most expensive things on the planet.
Ki Man
cheaper as in a small craft being carried up on something kind of like a boeing 747 and then shooting off when it's high enough, then entering another stage, which means it will take less than a giant red tank of fuel 300 feet large and 2 boosters to get up. just enough for an airliner and some small rockets.
Staff Emeritus
Gold Member
So instead of having one single system to get to 7.75 kilometers per SECOND and 250 kilometers high, you propose designing all of the interlinkages and safety systems so you can launch it from 0.25 kilometers per second and 10 kilometers up?
All that and you now need to size the rocket small enough to be able to be carried by a 747. I'm sorry... if it was more feasible to launch from a plane for larger rockets, they'd do it.
LunchBox
I gots some numbers
I just tried to post this and my browser crapped out on me... f'ing FireFox... anywho... here it is in condensed form because I don't feel like retyping it all...
SkepticJ... your 'space fountain' is crap. Your grandchildren's grandchildren's grandchildren won't see it. Here's why:
1.) Bending. This thing will sway like a drunken frat whore at Mardi Gras. For comparison, the Sears Tower, at 1450 ft, sways an average of 6 inches to either side. Comparing that ratio with the 656000 ft 'space fountain' yields a sway of 226 ft. Now, that's to BOTH SIDES. So the magnetic catch on the redirector needs to be 500 FEET IN DIAMETER! And that's for AVERAGE SWAY. This thing will also be cutting through the jet stream, so that 226 ft mean sway is so conservative, W is telling it to back off.
2.) Torsion. Everything from 1.) applies. Oops... just increased the necessary magnetic catch diameter.
3.) Projectiles. These things will require a TREMENDOUS amount of energy just to get to the top with NO residual energy. Oh, and you need residual energy to keep the structure from falling down like a lightweight frat pledge at initiation. (Wow... two drunken references in an analysis... new record.) The amount of energy required just to get the projectile to the top is 2 MJ/kg... yes 2 MILLION Joules per kilogram. Even assuming you have a rail launcher that is 1 km long, that is a required initial velocity of 2 km/s. Oh, and all of those numbers are excluding aerodynamic drag which will be substantial on a projectile leaving an accelerator at SIX TIMES THE SPEED OF SOUND!
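As a sanity check on those figures, here is a minimal numeric sketch, assuming a 200 km tower and a 1 km launcher, ignoring drag and the Earth's rotation:

```python
import math

g = 9.81     # m/s^2
h = 200e3    # tower height in m, roughly 656,000 ft (assumed)
L = 1000     # launcher length in m (assumed)

E = g * h                 # specific potential energy, J/kg
v = math.sqrt(2 * E)      # muzzle speed if all of E is kinetic
a = v**2 / (2 * L)        # constant acceleration over the launcher

print(f"E = {E/1e6:.1f} MJ/kg, v = {v:.0f} m/s, a = {a/g:.0f} g")
# -> E = 2.0 MJ/kg, v = 1981 m/s, a = 200 g
```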
I could go on... but... well... [broken image: beating-a-dead-horse.gif] pretty much sums up what I'm already doing.
Cheers...
Last edited by a moderator:
SkepticJ said:
Ah, but this wouldn't be a problem to a space fountain which I linked.
You provided no link that I can see. I am a huge skeptic of the elevator notion but I would like to see something on the fountain idea before I start with my views on the subject.
Staff Emeritus
Gold Member
FredGarvin said:
You provided no link that I can see. I am a huge skeptic of the elevator notion but I would like to see something on the fountain idea before I start with my views on the subject.
He did, but it got trimmed when I split the threads.
Wikipedia knows all
Ki Man
i'm thinking more like, large version of spaceship one, maybe not as huge as i said with the 747
SkepticJ
LunchBox said:
SkepticJ... your 'space fountain' is crap. Your grandchildren's grandchildren's grandchildren won't see it. Here's why:
1.) Bending. This thing will sway like a drunken frat whore at Mardi Gras. For comparison, the Sears Tower, at 1450 ft, sways an average of 6 into either side. Comparing that ratio with the 656000 ft 'space fountain' yields a sway of 226 ft. Now, that's to BOTH SIDES. So the magnetic catch on the redirector needs to be 500 FEET IN DIAMETER! And that's for AVERAGE SWAY. This thing will also be cutting through the jet stream so that 226 ft mean sway is so conservative, W is telling it to back off.
2.) Torsion. Everything from 1.) applies. Oops... just increased the necessary magnetic catch diameter.
3.) Projectiles. These things will require a TREMENDOUS amount of energy just to get to the top with NO residual energy. Oh, and you need residual energy to keep the structure from falling down like a lightweight frat pledge at initiation. (Wow... two drunken references in an analysis... new record.) The amount of energy required just to get the projectile to the top is 2 MJ/kg... yes 2 MILLION Joules per kilogram. Even assuming you have a rail launcher that is 1 km long, that is a required initial velocity of 2 km/s. Oh, and all of those numbers are excluding aerodynamic drag which will be substantial on a projectile leaving an accelerator at SIX TIMES THE SPEED OF SOUND!
I could go on... but... well... [broken image: beating-a-dead-horse.gif] pretty much sums up what I'm already doing.
Cheers...
Yep, total crap. Those people at Lawrence Livermore National Laboratory are time wasting idiots; and so was Robert Lull Forward.
1. Amazing this can happen, since much of the tower is above atmosphere of any appreciable thickness. The thinner the air, the less the force of the wind for a given wind speed. Even if magnets of this size would be needed, I see no problem. Particle accelerators, kilometers in length, have been around longer than I have.
We know where the jet streams are. We could build it where the streams never go, if they'd be a problem.
2. From what? You'll have to explain in detail.
You lack the ability to read what the links I give say I see. You're "Oh, mass pellets being shot up, energy lost, debunked!" without even looking at the math these people did. Sloppy. So? If you think 2km per sec. gun velocity is current science fiction you'd be wrong. There are rail guns that can shoot spike-like rounds at 7+km per second already. IIRC the finned spike projectiles have a mass of about a kilogram. The rail gun is less than 20 meters long IIRC. Particle accelerators get particles, with mass, up to just below the speed of light, currently.
I'd rather you did, because your link doesn't come up for me.
Last edited by a moderator:
LunchBox
Those people at Lawrence Livermore National Laboratory
...
(Psst... just because the people work for a place who's name ends in "national laboratory" doesn't mean they have a direct pipeline to the wisdom of the ages. In fact, they are just as likely to be wrong as anyone else... I know... I work with these people from time to time.)
1. Amazing this can happen
...
Isn't it? I've always found vibrations to be an interesting topic. The numbers were for comparison. Now, in the 'articles' you link, the authors stress how only a spindly, lightweight structure will be needed. Spindly, lightweight structures have extremely small bending (EI) stiffnesses. The Sears Tower is practically rigid compared with a 'space fountain' structure. Pick your favorite 'space fountain' design (as the articles you linked surprisingly had no structure sizing) and perform a sinusoidal gust loading along the height... I imagine the tip displacement value will astonish you.
2. From what? You'll have to explain in detail.
...
Ok... I'll explain. The gust loading will not be even over the side area of the structure facing the wind. This difference in loading along the windward face will create a torque on the structure. And spindly structures have even lower torsional (GJ) stiffnesses than bending stiffnesses.
You lack the ability to read what the links I give say I see.
No... they just fail to say anything useful...
You're "Oh, mass pellets being shot up, energy lost, debunked!" without even looking at the math these people did.
I'll assume you meant 'your' and not 'you are'. I saw no 'math' in your Wikipedia article and the only technical publication linked from there was on a 'launch loop', not a 'space fountain'. The wonderful catch-all in the Wikipedia article
but Roderick Hyde worked out all the engineering design details for a Space Fountain and showed that there were no show-stoppers.
really means that the problems were manageable within the scope of the universe we occupy, not that the problems were practically solvable, or that they were economically practical. I would ABSOLUTELY LOVE to see that analysis done by Roderick Hyde. If you have it, please send me a link or a pdf. Also, always remember Akin's Law of Spacecraft Design Number 17
So? If you think 2km per sec. gun velocity is current science fiction you'd be wrong. There are rail guns that can shoot spike-like rounds at 7+km per second already.
No... I didn't say that they were science fiction. In fact, I am a huge proponent of rail gun technology. As soon as it becomes economically feasible, I think the military should put those bad boys on EVERYTHING. However, what I was referring to was the atmospheric drag that will suck momentum from these projectiles. Not only that, but the projectiles will need to be ferromagnetic in order to be redirected at the top and bottom of the tower. The atmospheric drag on a projectile traveling at Mach 6+ will cause tremendous heating and could exceed the Curie temperature of the material making it no longer ferromagnetic. Also, the velocity of the returning projectiles will will be limited by the terminal velocity of the projectile profile. All these losses will add up to necessitate a tremendous energy expenditure to bring the projectiles back up to speed at the bottom of the 'space fountain'.
Now look, maybe I started a little harsh, but I'm sick and tired of people thinking you can get to space easily by climbing successively taller trees. My goal it not to stifle creativity... far from it. However, I think a little realism and practicality needs to be brought into every discussion. Oh and I was serious about wanting to see that analysis...
Cheers...
Mentor
Ki Man said:
i'm thinking more like, large version of spaceship one, maybe not as huge as i said with the 747
The important thing to know about SpaceShip One is that it is not a space ship. It's just the most expensive amusement park ride ever created. Ok, so I'm not sure if there is a formal definition of the term, but my point is that Spaceship One does not come anywhere close to achieving orbit, which is what it must be able to do have any real use. So scaling it up would accomplish little of value.
Also, while SpaceShip One had a number of unique design features, the overall concept is an old one, the same as the X-15. The performance is somewhat less than what the X-15 achieved 50 years ago.
Last edited:
Gold Member
Ki Man said:
thats different. planes have good physics backing them up. this is just... whoa.
I was unaware that there was "good" physics and "bad" physics.
As SkepticJ says: "Space fountains don't violate known physics."
Perhaps what you meant was that there are a number of technical hurdles to overcome. To which I think most of us agree. We only disagree on how much, how long and how costly.
Ki Man said:
since when is building an elevator many many miles high cheap
When? Why, by trip #2!
The whole point of these devices is that, utterly unlike any kind of rocket, you only put out the expense once, not every time.
Gold Member
enigma, re: your technical problems mentioned in post #5, would any or all of these also apply to a skyhook?
SkepticJ
LunchBox said:
...
(Psst... just because the people work for a place who's name ends in "national laboratory" doesn't mean they have a direct pipeline to the wisdom of the ages. In fact, they are just as likely to be wrong as anyone else... I know... I work with these people from time to time.)
...
Isn't it? I've always found vibrations to be an interesting topic. The numbers were for comparison. Now, in the 'articles' you link, the authors stress how only a spindly, lightweight structure will be needed. Spindly, lightweight structures have extremely small bending (EI) stiffnesses. The Sears Tower is practically rigid compared with a 'space fountain' structure. Pick your favorite 'space fountain' design (as the articles you linked surprisingly had no structure sizing) and perform a sinusoidal gust loading along the height... I imagine the tip displacement value will astonish you.
...
Ok... I'll explain. The gust loading will not be even over the side area of the structure facing the wind. This difference in loading along the windward face will create a torque on the structure. And spindly structures have even lower torsional (GJ) stiffnesses than bending stiffnesses.
No... they just fail to say anything useful...
I'll assume you meant 'your' and not 'you are'. I saw no 'math' in your Wikipedia article and the only technical publication linked from there was on a 'launch loop', not a 'space fountain'. The wonderful catch-all in the Wikipedia article really means that the problems were manageable within the scope of the universe we occupy, not that the problems were practically solvable, or that they were economically practical. I would ABSOLUTELY LOVE to see that analysis done by Roderick Hyde. If you have it, please send me a link or a pdf. Also, always remember Akin's Law of Spacecraft Design Number 17
No... I didn't say that they were science fiction. In fact, I am a huge proponent of rail gun technology. As soon as it becomes economically feasible, I think the military should put those bad boys on EVERYTHING. However, what I was referring to was the atmospheric drag that will suck momentum from these projectiles. Not only that, but the projectiles will need to be ferromagnetic in order to be redirected at the top and bottom of the tower. The atmospheric drag on a projectile traveling at Mach 6+ will cause tremendous heating and could exceed the Curie temperature of the material making it no longer ferromagnetic. Also, the velocity of the returning projectiles will will be limited by the terminal velocity of the projectile profile. All these losses will add up to necessitate a tremendous energy expenditure to bring the projectiles back up to speed at the bottom of the 'space fountain'.
Now look, maybe I started a little harsh, but I'm sick and tired of people thinking you can get to space easily by climbing successively taller trees. My goal it not to stifle creativity... far from it. However, I think a little realism and practicality needs to be brought into every discussion. Oh and I was serious about wanting to see that analysis...
Cheers...
Government labs don't usually write stuff that is BS, though. If it involves physics-breaking stuff, they have to show that physics, as it was understood up to that point, isn't correct. This, as far as I know, hasn't happened yet.
Right.
That's because it's a general concept. The towers could be whatever diameter, mass etc. is wanted/needed.
That's better.
Neither did your non-functioning "link". You might want to go back and fix that, because I can't be privy to the "debunking" of "crap" concept space fountains.
It's very ironic your critiquing of my typing flaw when you, while using a sock puppet, made this typing error: "maybe it can happen. right after pigs take over england and i can go around the world with the press of a button while having my but scratched by a satalite guided laser while being served fruit that came from our new martian neighbors."
I think you and Ki Man are one and the same because of these two quotes:
"having my but scratched by a satalite guided laser"-Ki Man
"Yeah, they'll have those right after they have satellites that can scratch your ass with a laser beam from space."-LunchBox
Even if you're not one and the same, this should do: "Also, the velocity of the returning projectiles will will be limited by..."
If you're going to debate with me, do it with one account, mmmmkay?
If you're not one and the same person, a little advice, don't correct people's typing in a debate and in a pissy manner. Especially in a debate with someone that is anal retentive about doing correct spelling in his own work.
Be nice to me and I'll be nice to you in return.
Vacuum pipes, which the streams travel through, make this a non-issue. No air, no drag. *sigh* I guess I'll have to say everything that the linked papers say before you get it. Why, oh, why did I waste my time? What's the difference between this "manageable within the scope of the universe we occupy" and this "not that the problems were practically solvable"? http://www.answers.com/practical&r=67 [Broken] seems to fit both. Economic is another matter though.
I'll probably have to read through Dr. Robert Forward's Indistinguishable from Magic to find the calculations/analysis. Said book has a lot of information on Space Fountains in it; plus other stuff.
Last edited by a moderator:
Ki Man
meh, ignore what i said before i was drunk or something (not literally)
if this is about 200 miles high, how quickly will we be traveling to get up this thing. is it going to be like a 5 hour take-your-time kind of thing as you go straight up into the atmosphere or is it going to be some kind of accelerated trip to make you get there quickly. i understand what's backing this up now, i can have a concept design done some time this week.
how fast are we talking? how high is it going to be (estimate) and how big is the elevator going to have to be (like how wide is it going to need to be at most and what kind of things are going to need to be carried up by the elevator carts.)?
and no this is not lunchbox! i doubt he is a 13 year old living in south california. i just thought using his comment in my own would be a good idea. outside of the forum i am not associated with him.
i am not him. he's not smart enough to be me lol :rofl:
okay there ARE similarities in our posts but if a mod checked out our IP addresses he'd see that they are different (unless lunchbox is really my sister which i really really doubt). can i get a mod's help here.
Last edited:
Space-elevator at Yahoo
Ki Man said:
if this is about 200 miles high
It goes to geostationary orbit, which is about 22,200 miles high.
KI Man said:
how quickly will we be traveling to get up this thing.
There exists a variety of space-elevator conceptions. There are speculation links at the bottom of this page...
en.wikipedia.org/wiki/Space_elevator#See_also
...and a good space-elevator discussion group here:
groups.yahoo.com/group/space-elevator
Staff Emeritus
Gold Member
SkepticJ said:
Government Labs don't usually write stuff that is BS though.
Usually, no. Occasionally? Yes. Just because you have a PhD after your name does not mean you can't make mistakes or overlook certain aspects of a problem. That's why designs are always done in groups. In this case, the idea has so many critical engineering details neglected as to be worthless. Where is the report? All I've seen is a non-technical Wikipedia article.
Neither did your non-functioning "link". You might want to go back and fix that, because I can't be privy to the "debunking" of "crap" concept space fountains.
Link works fine for me. Links to the website of University of Maryland's Space Systems lab director Dr. Dave Akin. Law #17 states:
Dave Akin said:
17. The fact that an analysis appears in print has no relationship to the likelihood of its being correct.
Something which is absolutely true. I have read many technical journals with blatant errors that got overlooked.
Even if you're not one and the same,
Lunchbox and Ki Man are not the same person.
Vacuum pipes, which the streams travel through, make this a non-issue. No air, no drag.
It proposes a pump which drains the air (and keeps the air drained) out of a pipe _200 miles long_?!? Amazing what is trivially attainable when you just wave the magic "engineers can do anything... and easily" wand.
Steel is unable to sustain its own weight in the Earth's gravity field if it's on the order of 10 miles long. Stresses inside the structure due to the gravity loading will cause it to fail, even if it's being pushed up at the end. The failures would occur in the middle anyways.
Until I see an in-depth technical analysis which covers structure vibrations and stress analyses, I'm going to say it's not feasible as well, just from back-of-the-envelope calculations.
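The steel claim is easy to sanity-check with the "breaking length" of a material, the longest uniform rod that can hang (or stand) under its own weight, L_max = sigma / (rho * g); a minimal sketch with assumed round-number properties for a strong steel alloy:

```python
sigma = 1.0e9   # tensile strength in Pa (strong alloy, assumed)
rho = 7850      # density of steel in kg/m^3
g = 9.81        # m/s^2

L_max = sigma / (rho * g)
print(f"L_max = {L_max / 1000:.0f} km")   # -> ~13 km, the same order as '10 miles'
```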
I guess I'll have to say everything that the linked papers say before you get it
I see a grand total of zero linked technical papers in this thread, or the one which I separated this discussion out of.
Last edited:
SkepticJ said:
Another advantage to space fountains is they need not be built on the equator. One could build one at the North Pole if they wanted to
Ribbon-type space elevators do not need to be anchored on the equator. They can be anchored anywhere, including the poles.
Last edited:
blimkie
Ki Man said:
meh, ignore what i said before i was drunk or something (not literally)
how fast are we talking? how high is it going to be (estimate) and how big is the elevator going to have to be (like how wide is it going to need to be at most and what kind of things are going to need to be carried up by the elevator carts.)?
Ki man, the space elevator is not an actual elevator like you have in the CN Tower, the Empire State Building, etc.
It's not gonna be built as if it were a massive CN Tower reaching into space. Keyword "ribbons" made from carbon nanotubes; it's flexible.
I'M NOT saying that it is 100% possible, because I don't know all the complicated physics involved beyond high school. The idea was for a spacecraft to latch onto the ribbon and drive itself into space at a slower velocity than rocket ships and supposedly be safer. A company called LiftPort has small-scale models suspended by balloons. The question is, can they create one much larger and extend it into space?
I watched a small piece about it on the Discovery Channel last night and was quite surprised; they believe they can make this work by 2018. http://liftport.com/research1.php [Broken]
Last edited by a moderator:
SkepticJ
Ribbon-type space elevators do not need to be anchored on the equator. They can be anchored anywhere, including the poles.
Until the asteroid falls out of the sky, blowing the Arctic ice sheet or Antarctica a new crater. It's the centrifugal pull of the asteroid on the end that holds the ribbon taut. Other places have a problem as well, because geosynchronous orbit can only happen above the equator.
SkepticJ
enigma said:
Usually, no. Occasionally? Yes. Just because you have PhD after your name does not mean you can't make mistakes or overlook certain aspects of a problem. That's why designs are always done in groups. In this case, the idea has so many critical engineering details neglected as to be worthless. Where is the report? All I've seen is a non-technical Wikipedia article.
Link works fine for me. Links to the website of University of Maryland's Space Systems lab director Dr. Dave Akin. Law #17 states:...
It is proposing a pump which drains the air (and keeps the air drained) out of a pipe _200 miles long_?!? Amazing what is trivially attainable when you just wave the magic "engineers can do anything... and easily" wand.
Good point. I'll see if I can find that technical paper. Google searches aren't finding anything, but I'll keep trying to get it.
No, not that link; the link in post #18's bottom. It doesn't work for me.
Shouldn't be hard at all. For one thing, the pipe could be very narrow in diameter, which means less air inside. For another thing, the vacuum pump isn't fighting against gravity. If you had a pipe as tall as this and connected even just a small vacuum pump, it could eventually suck all the air out. "Eventually" wouldn't do, so something like a vacuum pump powered by a 777's jet engine would do nicely. It'd probably be able to pump the pipe down to a vacuum in several hours to a day. Since the top of the pipe is above the atmosphere, no more air, or very, very little and very slowly, will get in again that way. As for going through the pipe material itself, hydrogen and helium are the only gases I know of that can squeeze through metal. There's not much of either gas in Earth's atmosphere, because they rise to the top of the atmosphere and are blown away by Sol's solar wind.
• $9.00
NewsTrader EA v6.4 (with Source Code)
Expert: NewsTrader_v6.4.mq4 (Unlimited – Source Code), pdf: How to install MT4 files.pdf
A NewsTrader EA which makes use of the DailyFX financial calendar. This EA may be backtested and optimized if you set the parameter LoadFromFile = true. In this case the data from the attached DailyFX History.zip should be unpacked and placed in the ../tester/files directory. Additionally you must set the proper value for the TimeZone parameter (UseAutoTimeZone = false), and the parameter ReadFromFile should be true.
## Completing The Square
This is completing the square on 'roids.
# Completing the Square Warmup
$x^2 + 8x + 16 = (x + B)^2$
What is the value of $B?$
Which of the following is equivalent to $x^2 + 6x + 10?$
Given that the green portion is a square, what is its area?
What are the solutions of
$x^2 + 8x + 7 = 0?$
$(x - 3)^2 = 25$
What are the solutions to this equation?
Hint:
$$\sqrt{(x - 3)^2} = \pm \sqrt{25}$$
# Have a script wait precisely?
Nickoakz 231
7 years ago
I have a countdown that counts down an hour, one second at a time. Over time, the countdown ends up 5 minutes or more ahead or behind. Is there a way to have wait() be perfectly exact?
I don't really understand what you're saying, can you rephrase? YaYaBinks3 110 — 7y
When you use Roblox's wait(), Roblox decides how many extra microseconds to add and tacks them onto the wait. If I request wait(1), it might actually wait 1.01512512241 or 0.9995123152 seconds. It's what Roblox does to give scripts time to run. But I want this script to be precise. Nickoakz 231 — 7y
Oh, I see. So, when you use a large integer with wait(), all the extra microseconds add up. infalliblelemon 145 — 7y
Yes.. Nickoakz 231 — 7y
Why do you need this? YaYaBinks3 110 — 7y
I need a countdown that counts down exactly an hour, such as 5:00pm to 6:00pm. Nickoakz 231 — 7y
BlueTaslem 18044
7 years ago
The solution is to check actual elapsed time rather than assume the previous times add up to something.
Consider the following functions which count to ten in ten seconds:
function pause(time)
wait(time - 0.1 + math.random() * 0.2);
end
-- We use this just to introduce extra
-- variance, for demonstration, although
-- the actual variance in wait is
-- really small.
function loopOne()
    for i = 1, 10 do
        pause(1); -- each pause is slightly off, and the errors accumulate
        print(i, 10);
    end
end
function loopSmart()
    local last = -1;
    local start = tick();
    while true do
        pause(0.1); -- poll often; accuracy comes from the clock, not the wait
        local elapsed = math.floor(tick() - start);
        if elapsed >= 10 then
            break; -- stop once ten real seconds have elapsed
        end
        if elapsed ~= last then
            print(elapsed + 1, 10);
        end
        last = elapsed;
    end
end
loopOne();
-- May take variable amounts of time
loopSmart();
--The prints might be off slightly,
-- (more than with other method),
-- but the overall time is
-- guaranteed to be bounded to within
-- however small you can wait for. |
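The same idea carries over outside Roblox. Here is a minimal sketch in Python (the function and handler names are invented for the example): rather than trusting each sleep to be exact, every iteration re-reads a monotonic clock, so the errors never accumulate:

import time

# Hypothetical names; drift-free countdown driven by a monotonic clock.
def precise_countdown(total_seconds, on_tick):
    start = time.monotonic()
    last = -1
    while True:
        time.sleep(0.05)  # coarse polling; accuracy comes from the clock, not the sleep
        elapsed = int(time.monotonic() - start)
        if elapsed >= total_seconds:
            break
        if elapsed != last:
            on_tick(total_seconds - elapsed)  # whole seconds remaining
            last = elapsed

precise_countdown(5, lambda remaining: print(remaining, "seconds left"))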
# MOS
## Seminar
### Moduli of Symplectic Maximal Representations
Guichard, O. (Paris-Sud 11)
Tuesday 22 February 2011, 10:00-11:00
Seminar Room 1, Newton Institute
#### Abstract
Maximal representations of surface groups into real symplectic groups have been extensively studied in recent years. Beautiful results have been obtained using either the algebraic approach offered by the theory of Higgs bundles or a geometric approach based on a formula coming from bounded cohomology. After recalling those results, we will construct, for a maximal representation $\rho: \pi_1(\Sigma_g) \to \mathrm{Sp}(2n, \mathbf{R})$, an open subset $\Omega \subset \mathbf{R} \mathbb{P}^{2n-1}$ on which $\pi_1( \Sigma_g)$ acts properly with compact quotient. The topology of the quotient will then be determined. Finally we shall consider the problem of interpreting the moduli of maximal symplectic representations as a moduli space of $\mathbf{R} \mathbb{P}^{2n-1}$-structures, and what questions remain in order to give a complete answer to that problem.
# Mastering Triangles
The Euler Line will blow your mind.
# A preview of "Mastering Triangles"
Above, a line cuts across two adjacent rectangles.
How many right triangles can you find in this diagram?
In the diagram below, two chords that are diameters of a circle intersect at right angles. What is the area of the circle?
Pythagorean Theorem Proof #1
In both diagrams below, the right triangles have a short leg length of $$a$$ and a long leg length of $$b.$$ If the area of the figure on the right is $$4\left(\frac{1}{2}ab\right) + c^2,$$ which expression could represent the area of the figure on the left?
# Chain rule explained

In differential calculus, the chain rule is the rule for finding the derivative of a composition of functions: if $f$ and $g$ are both differentiable, then so is the composite $f \circ g$, and

$$(f \circ g)'(x) = f'(g(x)) \cdot g'(x).$$

It is used whenever the argument of a function is anything other than a plain old $x$, that is, whenever you are differentiating a composite function. A standard illustration from Wikipedia: as you fall from the sky, the atmospheric pressure around you keeps changing during the fall; the rate at which the pressure you feel changes with time is the rate at which pressure changes with altitude, multiplied by the rate at which your altitude changes with time.

Two familiar special cases:

* **The logarithm rule.** The derivative of $\ln(g(x))$ is $g'(x)/g(x)$: one over the function, times the derivative of the function. It is useful when differentiating the natural logarithm of a function.
* **The general exponential rule.** The derivative of $e^{g(x)}$ is $g'(x)\,e^{g(x)}$.

The chain rule also extends to higher dimensions. The multivariable version is usually expressed in terms of the gradient and a vector-valued derivative, which makes it look closely analogous to the single-variable rule: when several variables depend on one another in a chain (say $a$ depends on $b$, which depends on $c$), you simply propagate the "wiggle" through each link as you go.
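As a quick symbolic sanity check of the logarithm rule above (the inner function $x^2+1$ is just an arbitrary example):

import sympy as sp

x = sp.symbols('x')
inner = x**2 + 1           # g(x), the inner function
composite = sp.log(inner)  # ln(g(x)), the outer function applied to it

# SymPy applies the chain rule automatically:
print(sp.diff(composite, x))      # 2*x/(x**2 + 1)
print(sp.diff(inner, x) / inner)  # g'(x)/g(x), the same thing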
# How to find correlation between 3 signals
I have 3 signals, S1, S2, and S3, with the same time domain and the same time steps. I'm wondering if there is any correlation between these 3 signals. More specifically, I want to know if it is possible to construct signal S3 from some combination of signals S1 and S2.
You need to process them jointly in pairs to get the correlation coefficients $$\rho_{13}=\frac{\mathsf{Cov}(S_1,S_3)}{\sqrt{\mathsf{Var}(S_1)\mathsf{Var}(S_3)}}$$ $$\rho_{23}=\frac{\mathsf{Cov}(S_2,S_3)}{\sqrt{\mathsf{Var}(S_2)\mathsf{Var}(S_3)}}$$ that represent the linear dependencies between $S_1$, $S_3$ and $S_2$, $S_3$, respectively. So when $|\rho_{xy}|$ is close to one, it indicates high linear dependency (and values close to zero mean small linear dependency).
That said, the problem you mentioned can be solved using a least squares approximation. In MATLAB, assuming your signals are in form of column vectors S1, S2, and S3, you can do this to find the weights:
w = [S1 S2]\S3;
So the signal S3 can be approximated as
S3_hat = [S1 S2]*w;
and the mean squared approximation error is
mse_err = mse(S3-S3_hat); |
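(A side note: `mse` ships with a MATLAB toolbox; in base MATLAB, `mean((S3 - S3_hat).^2)` does the same job.) For anyone working outside MATLAB, here is a rough NumPy equivalent; the synthetic signals are placeholders for your data:

import numpy as np

rng = np.random.default_rng(0)
S1, S2 = rng.standard_normal((2, 1000))              # stand-ins for your signals
S3 = 0.7 * S1 - 1.2 * S2 + 0.05 * rng.standard_normal(1000)

A = np.column_stack([S1, S2])
w, *_ = np.linalg.lstsq(A, S3, rcond=None)           # least-squares weights
S3_hat = A @ w                                       # approximation of S3
mse_err = np.mean((S3 - S3_hat) ** 2)

print("weights:", w)                                 # close to [0.7, -1.2]
print("correlations:", np.corrcoef(S1, S3)[0, 1], np.corrcoef(S2, S3)[0, 1])
print("MSE:", mse_err)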
# IEEE Transactions on Electromagnetic Compatibility
• ### IEEE Transactions on Electromagnetic Compatibility publication information
Publication Year: 2010, Page(s): C2
• ### From the Incoming Editor-in-Chief
Publication Year: 2010, Page(s): 2
• ### Electromagnetic Compatibility Uncertainty, Risk, and Margin Management
Publication Year: 2010, Page(s):3 - 10
Cited by: Papers (8)
A global methodology of electromagnetic compatibility (EMC) physics and risk analysis is presented in this paper. In order to manage the uncertainties of complex systems, the nine steps of the methodology based on statistics are described. A simple but representative EMC failure mode illustrates the methodology: the failure due to interference resulting from a lightning impact to the terminal equi...
• ### New Advances on Correlating TEM Cell and OATS Emission Measurements
Publication Year: 2010, Page(s):11 - 20
Cited by: Papers (3)
This paper proposes an improvement for the standard algorithms used to correlate the radiated emission measurements performed in a gigahertz transverse electromagnetic (GTEM) cell to measurements performed at an open area test site (OATS). The improvement compared with standard correlation algorithms lies in the ability to use literally all data measured. This is achieved by using nonlinear regres...
• ### Characterization of a Far-Field Microwave Magnetic Field Strength Sensor Based on Double Radiooptical Resonance
Publication Year: 2010, Page(s):21 - 31
Cited by: Papers (4)
We experimentally investigated the resonance interaction of laser and microwave fields with 133Cs atomic gas in far-field and free-space conditions. The observed double radiooptical resonance (DROR) on the D2 line of Cs atoms was used as a novel-type field sensor, based on the laser spectroscopy technique, for the detection and investigation of the time-varying magnetic field...
• ### Power Absorption and Temperature Elevation Produced by Magnetic Resonance Apparatus in the Thorax of Patients With Implanted Pacemakers
Publication Year: 2010, Page(s):32 - 40
Cited by: Papers (6)
In this paper, power absorption and temperature elevation in the thorax of a pacemaker (PM) holder exposed to the field generated by a 1.5-T (64 MHz) magnetic resonance imaging (MRI) apparatus have been studied. An anatomical body model has been placed in two different positions inside the birdcage MRI antenna. Various PM models, constituted by a metallic box equipped with different kinds of cathe...
• ### Estimation of Whole-Body Average SAR in Human Models Due to Plane-Wave Exposure at Resonance Frequency
Publication Year: 2010, Page(s):41 - 48
Cited by: Papers (15)
This study proposes an equation for estimating whole-body average specific absorption rate (WBSAR) in human body models for plane-wave exposure at whole-body resonance frequency. This study is important because the WBSAR takes maximal at this frequency and approaches the basic restrictions in the international guidelines/standards for human protection. Therefore, the variability of the WBSAR at th...
• ### Effect of Time-Averaging of Pulsed Radio-Frequency Signals on Specific Absorption Rate Measurements
Publication Year: 2010, Page(s):49 - 55
Cited by: Papers (7)
The effect of the time-averaging of pulsed radio-frequency (RF) signals on the specific absorption rate (SAR) measurements is studied. Probes employed for SAR measurements are usually calibrated using continuous wave (CW) signals. The response of such probes is different when measuring pulsed RF waveforms. The CW probe calibration data are applicable to the average power of the pulsed RF signal. T...
• ### Suppression of Spurious Emissions From a Spiral Inductor Through the Use of a Frequency-Selective Surface
Publication Year: 2010, Page(s):56 - 63
Cited by: Papers (13)
In this paper, a new concept of adding a band-stop frequency-selective surface (FSS) to suppress spurious emissions from a spiral inductor is proposed. These emissions are especially serious, when the inductor has a wide impedance-matching band. The added FSS is designed to narrow the impedance-matching bandwidth, and lower the radiation efficiency without sacrificing the series inductance and qua...
• ### Potential of Interresonator Tapped-In Coupling in the Design of Compact Miniaturized Electromagnetic Interference (EMI) Filters
Publication Year: 2010, Page(s):64 - 74
Cited by: Papers (9)
In this paper, we present for the first time the realization of interresonator coupling through a tapped-in structure. This new coupling structure is used to design miniaturized filters. This coupling is of inductive nature, and it is created through transmission line directly tapped into resonators. This transmission line operates in its fundamental transverse electromagnetic (TEM) or quasi-TEM m...
• ### A Wide-Frequency Model of Metal Foam for Shielding Applications
Publication Year: 2010, Page(s):75 - 81
Cited by: Papers (9)
The use of metal foam continues to grow in terms of research and application. Recently, new developments in the electromagnetic (EM) environment, such as shielding applications, have been proposed. A model of metal foam shielding is developed and discussed in this paper to characterize and simulate the EM shielding behavior. More specifically, the EM characterization has been considered, and exper...
• ### Far-Field Approximation of Electrically Moderate-Sized Structures by Infinitesimal Electric and Magnetic Dipoles
Publication Year: 2010, Page(s):82 - 88
Cited by: Papers (3)
The far field (FF) of electrically moderate-sized structures can be reconstructed by a small number of infinitesimal electric and magnetic dipoles. As an example, the field of a current loop with a perimeter up to 1.5 λ is approximated with two dipoles, the parameters of which are found analytically and using a genetic algorithm, respectively. The radiation of a rectangular power-ground plane pai...
• ### Electromagnetic Noise Source Approximation for Finite-Difference Time-Domain Modeling Using Near-Field Scanning and Particle Swarm Optimization
Publication Year: 2010, Page(s):89 - 97
Cited by: Papers (2)
This paper presents an electromagnetic noise source approximation method based on a 2-D array of electric dipoles for use in finite-difference time-domain simulations. The currents (both magnitude and phase) of these dipoles are optimized via a particle swarm algorithm so as to minimize the difference between the magnetic near-field produced by the dipole array and the magnetic near-field produced...
• ### Wideband Pulse Responses of Fractal Monopole Antennas Under the Impact of an EMP
Publication Year: 2010, Page(s):98 - 107
Cited by: Papers (11)
A generalized mathematical procedure is developed for investigating wideband transient pulse responses of some fractal antennas mounted on a perfectly electrically conducting platform illuminated by an electromagnetic pulse. The numerical algorithm employed is based on the technique of time-domain integral equation solved with the method of moments. To model the junction between a monopole and a g...
• ### Transient Response of Straight Thin Wires Located at Different Heights Above a Ground Plane Using Antenna Theory and Transmission Line Approach
Publication Year: 2010, Page(s):108 - 116
Cited by: Papers (6)
Transient electromagnetic field coupling to straight thin wires parallel to each other and located at different heights above a perfectly conducting or dielectric ground plane is analyzed using wire antenna theory and a transmission line method. The time-domain antenna theory formulation is based on a set of the space-time Hallen integral equations. The transmission line approximation is based on ...
• ### Lightning-Induced Current and Voltage on a Rocket in the Presence of Its Trailing Exhaust Plume
Publication Year: 2010, Page(s):117 - 127
Cited by: Papers (2)
This paper presents time-domain characteristics of induced current and voltage on a rocket in the presence of its exhaust plume when an electromagnetic (EM) wave generated by a nearby lightning discharge is incident on it. For the EM-field interaction with the rocket, the finite-difference time-domain technique has been used. The distributed electrical parameters, such as capacitance and inductanc...
• ### A Time-Domain Multiport Model of Thin-Wire System for Lightning Transient Simulation
Publication Year: 2010, Page(s):128 - 135
Cited by: Papers (13)
In order to analyze the lightning transient of a thin-wire system, a time-domain method of reducing the thin-wire conductor system into an equivalent active multiport network based on the partial-element electric-circuit model is presented in this paper. A modified-mesh-current (MMC) approach is developed, uses node charge and mesh current as its variables, and takes into account both capacitive c...
• ### Measurement and Modeling of the Indirect Coupling of Lightning Transients into the Sago Mine
Publication Year: 2010, Page(s):136 - 146
This paper describes measurements and analytical modeling of the indirect coupling of electromagnetic fields produced by horizontal and vertical lightning currents into the Sago mine located near Buckhannon, WV. Two coupling mechanisms were measured: direct and indirect drive. Only the results from the indirect drive and the associated analysis shall be covered in this paper. Indirect coupling res...
• ### Fast Calculation of the Electromagnetic Field by a Vertical Electric Dipole Over a Lossy Ground and Its Application in Evaluating the Lightning Radiation Field in the Frequency Domain
Publication Year: 2010, Page(s):147 - 154
Cited by: Papers (14)
The evaluation of the electromagnetic field radiated by a lightning stroke is of essential importance for the study of the interaction between lightning electromagnetic fields and power lines, or other sensitive installations. A fast algorithm for calculating the fields generated by a vertical electric dipole over a lossy ground, which can be expressed by the well-known Sommerfeld integrals, is pr...
• ### Uncertainty Analyses in the Finite-Difference Time-Domain Method
Publication Year: 2010, Page(s):155 - 163
Cited by: Papers (30)
Providing estimates of the uncertainty in results obtained by Computational Electromagnetic (CEM) simulations is essential when determining the acceptability of the results. The Monte Carlo method (MCM) has been previously used to quantify the uncertainty in CEM simulations. Other computationally efficient methods have been investigated more recently, such as the polynomial chaos method (PCM) and ...
• ### Validation of Hybrid MoM Scheme With Included Equivalent Glass Antenna Model for Handling Automotive EMC Problems
Publication Year: 2010, Page(s):164 - 172
Cited by: Papers (4)
In this paper, based on the refined modified image theory, an equivalent model of the layered glass antenna structure is suggested and validated by comparison with Sommerfeld's traditional solution to Green's function. Using this model, a hybrid method of moments (MoM) scheme is derived to handle the full MoM geometries including finite-sized and curved glass antenna structures. This scheme is val...
• ### Using Transfer Function Calculation and Extrapolation to Improve the Efficiency of the Finite-Difference Time-Domain Method at Low Frequencies
Publication Year: 2010, Page(s):173 - 178
Cited by: Papers (4)
The finite-difference time-domain method (FDTD) needs a long computation time to solve low-frequency (LF) problems. In this paper, we suggest a new way to improve the efficiency of the FDTD method, especially for LF problems such as lightning indirect effects studies. The procedure consists in first calculating the system's response to a quasi-impulse excitation. The quasi-impulse response is extr...
• ### Gigahertz-Range Analysis of Impedance Profile and Cavity Resonances in Multilayered PCBs
Publication Year: 2010, Page(s):179 - 188
Cited by: Papers (9)
As is known, undesired simultaneous switching noise produced by high-speed digital integrated circuits (ICs) and power vias may propagate along parallel-plane structures of multilayer printed circuit boards (PCBs) and IC packages, which act as parallel-plane cavity resonators. To minimize effects of the parallel-plane cavities, we need to minimize the impedance profile of a PCB power bus in the fr...
• ### Causal RLGC($f$) Models for Transmission Lines From Measured $S$-Parameters
Publication Year: 2010, Page(s):189 - 198
Cited by: Papers (25)
Frequency-dependent causal RLGC(f) models are proposed for single-ended and coupled transmission lines. Dielectric loss, dielectric dispersion, and skin-effect loss are taken into account. The dielectric substrate is described by the two-term Debye frequency dependence, and the transmission line conductors are of finite conductivity. In this paper, three frequency-dependent RLGC models are studied...
## Aims & Scope
IEEE Transactions on Electromagnetic Compatibility publishes original and significant contributions related to all disciplines of electromagnetic compatibility (EMC) and relevant methods to predict, assess and prevent electromagnetic interference (EMI) and increase device/product immunity.
## Meet Our Editors
Editor-in-Chief
Prof. Ing. Antonio Orlandi
University of L'Aquila
Email: [email protected] |
# 4.2: Higher Dimensions
Set
$$\Box u=u_{tt}-c^2\triangle u,\ \ \triangle\equiv\triangle_x=\partial^2/\partial x_1^2+\ldots+ \partial^2/\partial x_n^2,$$
and consider the initial value problem
\begin{eqnarray}
\label{wavehigher1}
\Box u&=&0\ \ \ \mbox{in}\ \mathbb{R}^n\times\mathbb{R}^1\\
\label{wavehigher2}
u(x,0)&=&f(x)\\
\label{wavehigher3}
u_t(x,0)&=&g(x),
\end{eqnarray}
where $$f$$ and $$g$$ are given $$C^2(\mathbb{R}^n)$$-functions.
By using spherical means and the above d'Alembert formula we will derive a formula for the solution of this initial value problem.
### Method of Spherical Means
Define the spherical mean for a $$C^2$$-solution $$u(x,t)$$ of the initial value problem by
\label{mean1}
M(r,t)=\frac{1}{\omega_n r^{n-1}}\int_{\partial B_r(x)}\ u(y,t)\ dS_y,
where
$$\omega_n=2\pi^{n/2}/\Gamma(n/2)$$
is the area of the unit sphere in $$\mathbb{R}^n$$, so that $$\omega_n r^{n-1}$$ is the area of a sphere with radius $$r$$.
From the mean value theorem of integral calculus we obtain the function $$u(x,t)$$ we are looking for by
\label{uM}
u(x,t)=\lim_{r\to0} M(r,t).
Using the initial data, we have
\begin{eqnarray}
\label{mean2}
M(r,0)&=&\frac{1}{\omega_n r^{n-1}}\int_{\partial B_r(x)}\ f(y)\ dS_y=:F(r)\\
\label{mean3}
M_t(r,0)&=&\frac{1}{\omega_n r^{n-1}}\int_{\partial B_r(x)}\ g(y)\ dS_y=:G(r),
\end{eqnarray}
which are the spherical means of $$f$$ and $$g$$.
The next step is to derive a partial differential equation for the spherical mean. From definition (\ref{mean1}) of the spherical mean we obtain, after the mapping $$\xi=(y-x)/r$$, where $$x$$ and $$r$$ are fixed,
$$M(r,t)=\frac{1}{\omega_n }\int_{\partial B_1(0)}\ u(x+r\xi,t)\ dS_\xi.$$
It follows
\begin{eqnarray*}
M_r(r,t)&=&\frac{1}{\omega_n }\int_{\partial B_1(0)}\ \sum_{i=1}^n u_{y_i}(x+r\xi,t)\xi_i\ dS_\xi\\
&=&\frac{1}{\omega_n r^{n-1}}\int_{\partial B_r(x)}\ \sum_{i=1}^n u_{y_i}(y,t)\xi_i\ dS_y.
\end{eqnarray*}
Integration by parts (the divergence theorem) yields
$$M_r(r,t)=\frac{1}{\omega_n r^{n-1}}\int_{B_r(x)}\ \sum_{i=1}^n u_{y_iy_i}(y,t)\ dy,$$
since $\xi\equiv (y-x)/r$ is the exterior normal at $$\partial B_r(x)$$. Assume $$u$$ is a solution of the wave equation, then
\begin{eqnarray*}
r^{n-1}M_r&=&\frac{1}{c^2\omega_n}\int_{B_r(x)}\ u_{tt}(y,t)\ dy\\
&=&\frac{1}{c^2\omega_n }\int_0^r\ \int_{\partial B_\rho(x)}\ u_{tt}(y,t)\ dS_y\,d\rho,
\end{eqnarray*}
where the integration variable $$\rho$$ is introduced via spherical coordinates.
\begin{eqnarray*}
(r^{n-1}M_r)_r&=&\frac{1}{c^2\omega_n}\int_{\partial B_r(x)}\ u_{tt}(y,t)\ dS_y\\
&=&\frac{r^{n-1}}{c^2}\frac{\partial^2}{\partial t^2}\left(\frac{1}{\omega_n r^{n-1}} \int_{\partial B_r(x)}\ u(y,t)\ dS_y\right)\\
&=&\frac{r^{n-1}}{c^2}M_{tt}.
\end{eqnarray*}
Thus we arrive at the differential equation
$$(r^{n-1}M_r)_r=c^{-2}r^{n-1}M_{tt},$$
which can be written as
\label{EPD}
M_{rr}+\frac{n-1}{r}M_r=c^{-2}M_{tt}.
Equation (\ref{EPD}) is called the Euler-Poisson-Darboux equation.
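As a consistency check (this is the standard next step, included only for orientation): for $$n=3$$ the substitution $$N(r,t)=rM(r,t)$$ turns (\ref{EPD}) into the one-dimensional wave equation,
$$N_{rr}=rM_{rr}+2M_r=r\left(M_{rr}+\frac{2}{r}M_r\right)=c^{-2}rM_{tt}=c^{-2}N_{tt},$$
so the d'Alembert formula becomes applicable again.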
### Contributors
• Integrated by Justin Marshall. |
# Friedrichs inequality
An inequality of the form
$$\tag{1 } \int\limits _ \Omega f ^ { 2 } \ d \Omega \leq C \left \{ \int\limits _ \Omega \sum _ {i = 1 } ^ { n } \left ( \frac{\partial f }{\partial x _ {i} } \right ) ^ {2} \ d \Omega + \int\limits _ \Gamma f ^ { 2 } d \Gamma \right \} ,$$
where $\Omega$ is a bounded domain of points $x = ( x _ {1} , \dots, x _ {n} )$ in an $n$-dimensional Euclidean space with an $( n - 1)$-dimensional boundary $\Gamma$ satisfying a local Lipschitz condition, and the function $f \equiv f ( x) \in W _ {2} ^ {1} ( \Omega )$ (a Sobolev space).
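To see what (1) says in the simplest setting, take $n = 1$, $\Omega = ( 0, 1)$, $\Gamma = \{ 0, 1 \}$. Writing $f ( x) = f ( 0) + \int _ {0} ^ {x} f ^ { \prime } ( t) d t$ and applying the Cauchy–Schwarz inequality gives $f ( x) ^ {2} \leq 2 f ( 0) ^ {2} + 2 \int _ {0} ^ {1} f ^ { \prime } ( t) ^ {2} d t$; integrating over $( 0, 1)$ yields (1) with $C = 2$:

$$\int\limits _ {0} ^ {1} f ^ { 2 } d x \leq 2 \left \{ \int\limits _ {0} ^ {1} ( f ^ { \prime } ) ^ {2} d x + f ( 0) ^ {2} + f ( 1) ^ {2} \right \} .$$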
The right-hand side of the Friedrichs inequality gives an equivalent norm in $W _ {2} ^ {1} ( \Omega )$. Using another equivalent norm in $W _ {2} ^ {1} ( \Omega )$, one obtains (see [2]) a modification of the Friedrichs inequality of the form
$$\tag{2 } \int\limits _ \Omega f ^ { 2 } \ d \Omega \leq C \left \{ \int\limits _ \Omega \sum _ {i = 1 } ^ { n } \left ( \frac{\partial f }{\partial x _ {i} } \right ) ^ {2} d \Omega + \left ( \int\limits _ \Gamma f d \Gamma \right ) ^ {2} \right \} .$$
There are generalizations (see [3]–[5]) of the Friedrichs inequality to weighted spaces (see Weighted space; Imbedding theorems). Suppose that $\Gamma \subset C ^ {( l) }$ and that the numbers $r$, $p$ and $\alpha$ are real, with $r$ being a natural number and $1 \leq p < \infty$. One says that $f \in W _ {p, \alpha } ^ {r} ( \Omega )$ if the norm
$$\| f \| _ {W _ {p, \alpha } ^ {r} ( \Omega ) } = \ \| f \| _ {L _ {p} ( \Omega ) } + \| f \| _ {\omega _ {p, \alpha } ^ {r} ( \Omega ) }$$
is finite, where
$$\| f \| _ {L _ {p} ( \Omega ) } = \ \left ( \int\limits _ \Omega | f | ^ {p} \ d \Omega \right ) ^ {1/p} ,$$
$$\| f \| _ {\omega _ {p, \alpha } ^ {r} ( \Omega ) } = \sum _ {| k | = r } \| \rho ^ \alpha f ^ {( k) } \| _ {L _ {p} ( \Omega ) } ,$$

$$f ^ { ( k) } = \frac{\partial ^ {| k | } f }{\partial x _ {1} ^ {k _ {1} } \dots \partial x _ {n} ^ {k _ {n} } } ,\ \ | k | = \sum _ {i = 1 } ^ { n } k _ {i} ,$$

and $\rho = \rho ( x)$ is the distance function from $x \in \Omega$ to $\Gamma$.
Suppose that $s _ {0}$ is a natural number such that
$$r - \alpha - { \frac{1}{p} } \leq \ s _ {0} < r - \alpha + 1 - { \frac{1}{p} } .$$
Then, if $\Gamma \subset C ^ {( s _ {0} + 1) }$, $- p ^ {-1} < \alpha < r - p ^ {-1}$, $r/2 \leq s _ {0}$, for $f \in W _ {p, \alpha } ^ {r} ( \Omega )$ the following inequality holds:
$$\| f \| _ {L _ {p} ( \Omega ) } \leq \ C \left \{ \sum _ {l + s < r/2 } \left \| \left ( \left . \frac{\partial ^ {s} f }{\partial n ^ {s} } \ \right | _ \Gamma \right ) ^ {( l) } \ \right \| _ {L _ {p} ( \Gamma ) } + \| f \| _ {\omega _ {p, \alpha } ^ {r} ( \Omega ) } \right \} ,$$
where $( \partial ^ {s} f/ \partial n ^ {s} ) \mid _ \Gamma$ is the derivative of order $s$ with respect to the interior normal to $\Gamma$ at the points of $\Gamma$.
One can also obtain an inequality of the type (2), which has in the simplest case the form
$$\| f \| _ {L _ {p} ( \Omega ) } ^ {p} \leq \ C \left ( \| f \| _ {\omega _ {p, \alpha } ^ {1} ( \Omega ) } ^ {p} + \left | \int\limits _ \Gamma f \tau d \Gamma \ \right | ^ {p} \right ) ,$$
where
$$p , \gamma > 1,\ \ - \frac{1}{p} < \alpha < 1 - \frac{1}{p} - \frac{1}{\gamma} ,$$
$$\tau \in L _ \gamma ( \Gamma ),\ \int\limits _ \Gamma \tau d \Gamma \neq 0.$$
The constant $C$ is independent of $f$ throughout.
The inequality is named after K.O. Friedrichs, who obtained it for $n = 2$, $f \in C ^ {( 2) } ( \overline \Omega \; )$ (see [1]).
#### References
[1] K.O. Friedrichs, "Eine invariante Formulierung des Newtonschen Gravitationsgesetzes und des Grenzüberganges vom Einsteinschen zum Newtonschen Gesetz" Math. Ann., 98 (1927) pp. 566–575
[2] S.L. Sobolev, "Applications of functional analysis in mathematical physics", Amer. Math. Soc. (1963) (Translated from Russian)
[3] S.M. Nikol'skii, P.I. Lizorkin, "On some inequalities for weight-class functions and boundary-value problems with a strong degeneracy at the boundary" Soviet Math. Dokl., 5 (1964) pp. 1535–1539; Dokl. Akad. Nauk SSSR, 159 : 3 (1964) pp. 512–515
[4] S.M. Nikol'skii, "Approximation of functions of several variables and imbedding theorems", Springer (1975) (Translated from Russian)
[5] D.F. Kalinichenko, "Some properties of functions in the spaces and " Mat. Sb., 64 : 3 (1964) pp. 436–457 (In Russian)
[6] R. Courant, D. Hilbert, "Methods of mathematical physics. Partial differential equations", 2, Interscience (1965) (Translated from German)
[7] L. Nirenberg, "On elliptic partial differential equations" Ann. Scuola Norm. Sup. Pisa Ser. 3, 13 : 2 (1959) pp. 115–162
[8] L. Sandgren, "A vibration problem" Medd. Lunds Univ. Mat. Sem., 13 (1955) pp. 1–84
How to Cite This Entry:
Friedrichs inequality. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Friedrichs_inequality&oldid=46991
This article was adapted from an original article by D.F. Kalinichenko, N.V. Miroshin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
# Router TG784n v3 with filtered SSH by firewall
## The Problem
SSH was suddenly blocked on the Router TG784n v3. Every attempt to connect from the external network is rejected. This can be pretty bad if it is the only way to access the local network from outside.
## The Solution
The SSH connections on port 22 were being dropped by the firewall.
To solve the problem, you have to change the rule that is causing it. By re-creating the rule, you can change the action from deny (drop) to accept.
The following lines first delete the existing rules and then add them back with the intended parameters:
firewall rule delete chain=forward_fire index=1
firewall rule delete chain=forward_fire index=3
firewall rule add chain=forward_fire index=2 name=SSH srcip=83.240.175.40 serv=ssh log=enabled state=enabled action=deny
firewall rule add chain=forward_fire index=3 name=SSH serv=ssh log=disabled state=enabled action=accept |
# Partition of unity on neighborhood of compact set
Let $U\subseteq\mathbb{R}^n$ be open and $D\subseteq U$ be compact. Prove that there is a $C^{\infty}$ function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ such that $f$ takes on the value $1$ on a neighborhood of $D$, and the support of $f$ is contained in $U$.
There is a result that if $D$ is compact and $U$ is open, then there exists an $\epsilon$-neighborhood of $D$ (let's call it $E$) that is contained in $U$. So $D\subseteq E\subseteq U$, and we can aim to have $f$ take on the value of $1$ in $E$.
We can take a $C^{\infty}$ partition of unity $\{\phi_i\}$ on $E$. So we have that $\phi_i\geq 0$ for all $x\in\mathbb{R}^n$, $\sum_{i=1}^{\infty}\phi_i(x)=1$ for all $x\in E$, and the support of $\phi_i$ is contained in $E$. I wonder if we can define the function $f(x)=\sum_{i=1}^{\infty}\phi_i(x)$. We will have $f(x)=1$ for all $x\in E$, but for $x\not\in E$, the sum $\sum_{i=1}^{\infty}\phi_i(x)$ might not converge. How can we fix this problem?
Edit: I think Etienne's solution that is put under "Edit" part almost works, except for a hole which I don't know how to fix. See my comment there.
• There is a more basic problem: to construct a partition of unity, you need the result that you're trying to prove. – Etienne Sep 26 '13 at 19:58
• @Etienne I don't think the way my book constructs a partition of unity needs this result. In any case, we can assume that a partition of unity exists. How do we prove this result? – Paul S. Sep 26 '13 at 20:22
Here is the standard way to prove the result.
First note that there is a non-zero $\mathcal C^\infty$ function $\theta\geq 0$ on $\mathbb R$ which is supported on $[0,1]$.
From this, it follows that for any $\varepsilon >0$, there is a non-negative $\mathcal C^\infty$ function $\phi_\varepsilon$ on $\mathbb R^n$ which is supported on the closed euclidean ball $\overline B(0,\varepsilon)$ and such that $\int_{\mathbb R^n} \phi_\varepsilon (u) du=1$: just put $\phi_\varepsilon(x)= c_\varepsilon\, \theta\left(\frac{\Vert x\Vert^2}{\varepsilon^2}\right)$ for some suitably chosen constant $c_\varepsilon$.
Now, choose $\varepsilon>0$ such that $D+\overline B(0,2\varepsilon)\subset U$, and define $f$ to be the convolution $\mathbf 1_{D_\varepsilon}*\phi_\varepsilon$, where $D_\varepsilon=D+\overline B(0,\varepsilon)$: $$f(x)=\int_{\mathbb R^n} \mathbf 1_{D_\varepsilon}(y)\phi_\varepsilon(x-y)\, dy\, .$$
Then, by the standard theorem on differentiation under the integral sign, $f$ is $\mathcal C^\infty$; and by a well known property of the support of a convolution, $${\rm supp}(f)\subset D_\varepsilon+{\rm supp}(\phi_\varepsilon)\subset D+\overline B(0, 2\varepsilon)\subset U\, .$$
Finally, $f$ is equal to $1$ on $D$. Indeed, write $$f(x)=\int_{\mathbb R^n} \mathbf 1_{D_\varepsilon}(x-y) \phi_\varepsilon (y)\, dy= \int_{\overline B(0,\varepsilon)} \mathbf 1_{D_\varepsilon}(x-y) \phi_\varepsilon (y)\, dy$$ and observe that if $x\in D$, then $x-y\in D_\varepsilon$ for every $y\in\overline B(0,\varepsilon)$, i.e. $\mathbf 1_{D_\varepsilon}(x-y)=1$. It follows that for $x\in D$ we have $$f(x)=\int_{\overline B(0,\varepsilon)} \phi_\varepsilon (y)\, dy=\int_{\mathbb R^n} \phi_\varepsilon(y)\, dy=1\, .$$
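For intuition, here is a minimal 1-D numerical sketch of this convolution construction (the grid, $\varepsilon$, and the particular bump $\theta$ are all arbitrary choices for illustration):

import numpy as np

def theta(t):
    """Smooth bump supported on [0, 1]."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    inside = (t > 0) & (t < 1)
    out[inside] = np.exp(-1.0 / (t[inside] * (1.0 - t[inside])))
    return out

eps = 0.2
x = np.linspace(-2, 2, 4001)
dx = x[1] - x[0]

# phi_eps(x) = c_eps * theta(|x|^2 / eps^2): supported on [-eps, eps], integral 1.
support = np.abs(x) <= eps
kernel = theta((x[support] / eps) ** 2)
kernel /= kernel.sum() * dx

# D = [-1/2, 1/2]; D_eps = D + [-eps, eps]; f = indicator(D_eps) convolved with phi_eps.
indicator = ((x >= -0.5 - eps) & (x <= 0.5 + eps)).astype(float)
f = np.convolve(indicator, kernel, mode="same") * dx

print(f[np.abs(x) <= 0.5].min())             # ~1 on D
print(f[np.abs(x) >= 0.5 + 2 * eps].max())   # ~0 outside D + B(0, 2*eps)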
$\bf Edit.$ If you want to find the function $f$ just by using the existence of a partition of unity as you stated it, you can do this assuming that your partition of unity $(\phi_i)_{i\in I}$ is relative to $U$ and is locally finite, i.e. each point $x\in\mathbb R^n$ has a neighbourhood $V_x$ on which all but finitely many functions $\phi_i$ are $0$.
Assume that the closure of $E$ is contained in $U$. By compactness, you can cover $\overline E$ by finitely many open sets $V_{x_1},\dots ,V_{x_N}$ as above; and moreover you may assume that $V_{x_j}\subset U$ for all $j$. For each $j\in\{ 1,\dots ,N\}$, choose a finite set $I_j\subset I$ such that $\phi_i\equiv 0$ on $V_{x_j}$ for all $i\not\in I_j$. Then let $I':=\bigcup_{j=1}^N I_j$ and $f:=\sum_{i\in I'} \phi_i$. The function $f$ is perfectly well-defined and $\mathcal C^\infty$ on $\mathbb R^n$ since this is a finite sum, and you do have $f\equiv 1$ on $E$ because $\phi_i\equiv 0$ on $\overline E$ for all $i\not\in I'$ and hence $f\equiv\sum_{i\in I}\phi_i=1$ on $E$.
• Thanks for your answer, Etienne. I still wonder if there's a way that doesn't need such complex tools, but rather makes use of a partition of unity? – Paul S. Sep 27 '13 at 3:17
• Take an open set $V$ such that $D\subset V\subset\overline V\subset E$, and then a partition of unity $(\phi_1,\phi_2)$ subordinate to the open cover $(U_1,U_2)=(E,\mathbb R^n\setminus \overline V)$ of $\mathbb R^n$. Then the function $f=\phi_1$ works. However, my objection is the same: how do you construct the partition of unity? – Etienne Sep 27 '13 at 6:54
• Reading your edit, I take it that $I=\{1,\ldots,N\}$? If so, it is already a finite set, and why do you need to define $I'$? I might be missing something here. – Paul S. Sep 27 '13 at 13:13
• No. $N$ is the number of points $x_j$. – Etienne Sep 27 '13 at 15:14
• Actually, now that I read it again, I think there is a hole in your reasoning. We do indeed have that $f\equiv 1$ on $D$, but not necessarily on $E$. Your finite cover only covers $D$, not $E$. – Paul S. Oct 13 '13 at 5:26
There is a $C^\infty$ function $\phi:\ {\mathbb R}\to[0,1]$ with $\phi(t)=1$ for $t\leq1$ and $\phi(t)=0$ for $t\geq2$. For $\epsilon>0$ define $\phi_\epsilon:\ {\mathbb R}^n\to[0,1]$ by $$\phi_\epsilon(x):=\phi\left({|x|\over\epsilon}\right)\qquad(x\in{\mathbb R}^n)\ .$$ The function $\phi_\epsilon$ is $\equiv1$ in the open $\epsilon$-neighborhood $U_\epsilon(0)$ and $\equiv0$ outside $U_{2\epsilon}(0)$.
Each point $x\in D$ has a neighborhood $U_{2\epsilon}(x)\subset U$, where $\epsilon >0$ depends on $x$. The family $\left(U_\epsilon(x)\right)_{x\in D}$ is an open covering of $D$; so there exists a finite set $\{x_1,x_2,\ldots, x_N\}\subset D$ such that writing $\epsilon(x_k)=:\epsilon_k$ one has $$D\subset \bigcup\nolimits_{1\leq k\leq N} U_{\epsilon_k}(x_k)\ .$$ The $C^\infty$-function $$\psi(x):=\prod_{k=1}^N\left(1-\phi_{\epsilon_k}(x-x_k)\right)$$ takes values in $[0,1]$, is $\equiv0$ on $D$, since for each $x\in D$ at least one $\phi_{\epsilon_k}(x-x_k)=1$, and is $\equiv1$ on ${\mathbb R}^n\setminus U$, since all $\phi_{\epsilon_k}(\cdot-x_k)$ vanish outside $U$.
It follows that $f(x):=1-\psi(x)$ has the required properties. |
# Arithmetic hierarchy via oracles
My professor gave an introduction to the arithmetic hierarchy via Turing reductions, stating that, for instance, $$\Sigma_2 = \text{r.e.}^\text{r.e.}$$ (namely, an r.e. procedure with access to an r.e. oracle) or $$\Pi_3 = \text{co-r.e.}^{\text{r.e.}^\text{r.e.}}$$. Later, the equivalent formulation via alternating quantifiers was discussed, but I found it difficult to link the two definitions. I have not been able to find any references describing the connection between the above definitions; everything I've seen deals solely with the quantifier-based description. Are there any texts/notes that might cover what I'm looking for?
• Isn't the definition of the hierarchy using oracles to $NP$ and $coNP$? Oct 28 at 6:15
• @nirshahar Not sure what you mean. I'm referring to the arithmetic hierarchy, not the polynomial hierarchy.
– gf.c
Oct 28 at 6:18
• Oh, ok. The arithmetic hierarchy indeed uses oracle calls to $RE$ and $coRE$. For some reason I was sure you meant to talk about the polynomial hierarchy... Oct 28 at 6:21
# Ask Uncle Colin: A Set Square Mark
Dear Uncle Colin
I just bought a new set square and noticed it had a couple of extra marks – one at seven degrees and one at 42 degrees. Have you any idea what those are for?
– Don’t Recognise Extra Information Engraved on Calculus Kit
Hi, DREIECK, and thank you for your message!
I didn’t know this, and had to look it up. It turns out, one convention for drawing things in three dimensions uses exactly these angles: lines going “across the way” are 7 degrees above the (negative) $x$-axis; lines going “back the way” are 42 degrees above the (positive) $x$-axis; and vertical lines go directly up. A cube might look like this:
Isn’t that pleasing?
### But why those particular angles?
It’s not just for simplicity and aesthetics, although those are factors. I’m told they make it unlikely for lines to intersect at important points (although I don’t quite see how that’s more true for these angles than any other pair).
My favourite reason, though, is that it’s possible to find these angles quite simply using squared paper: because $\tan(7º) \approx \frac{1}{8}$, you can approximate the slope by moving eight squares to the left and one square up from your chosen point. Similarly, $\tan(42º)\approx \frac{9}{10}$, so moving ten squares across and nine squares up would get you an excellent approximation to the gradient you need.
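A quick numerical check of those two approximations, for anyone who wants to verify:

import math

print(math.tan(math.radians(7)))    # 0.1227... which is close to 1/8  = 0.125
print(math.tan(math.radians(42)))   # 0.9004... which is close to 9/10 = 0.9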
Hope that helps!
– Uncle Colin
## Colin
Colin is a Weymouth maths tutor, author of several Maths For Dummies books and A-level maths guides. He started Flying Colours Maths in 2008. He lives with an espresso pot and nothing to prove.
# Convergence of series and summation methods for divergent series
I would like to know what the sum of this series is:
$$\sum_{k=1}^\infty \frac{1}{k\left(1-(-1)^{\frac{2n}{k}}\right)}$$ with $$n=1, 2, 3, ...$$
In case the previous series is not convergent, I would like to know what conditions would be required for it to converge. I can understand that there could be a set of values of $$n$$ for which the series is not convergent, but this does not directly prove that there are no values of $$n$$ for which, instead, it is.
In case the previous series is not convergent in the “classical” sense, I would like to know whether a sum can be associated with it, employing those summation methods used to assign a value to a divergent series; like, for example, the Ramanujan summation method, which associates with the following well-known divergent series
$$\sum_{k=1}^\infty k$$
the value $$-\frac{1}{12}$$.
https://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_%E2%8B%AF
https://en.wikipedia.org/wiki/Divergent_series#Examples
Note that, in general, the argument of the sum considered can assume complex values!
For the terms with $$k\nmid2n$$ you take a fractional power of $$-1$$; this is not uniquely defined, and so it is not clear what these terms of your sum should be. For the terms with $$k\mid n$$ you are dividing by $$0$$, so these terms of your sum are not defined at all. That leaves the terms with $$k\mid 2n$$ but $$k\nmid n$$, for which $$\frac{2n}{k}$$ is an odd integer, so $$(-1)^{\frac{2n}{k}}=-1$$ and the term equals $$\frac{1}{2k}$$; in particular $$k=2n$$ gives $$\frac{1}{4n}$$. All other terms are undefined.
Before worrying about convergence, make sure that each of the terms in the sum is well-defined.
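To make this concrete, here is a small sketch (my illustration, not part of the original answer) that evaluates the terms using the principal branch $(-1)^z=e^{i\pi z}$ and flags the undefined ones:

```python
import cmath

# Evaluate the terms 1 / (k * (1 - (-1)**(2n/k))) using the principal
# branch (-1)**z = exp(i*pi*z). Terms with k | n have (-1)**(2n/k) = 1,
# i.e. a division by zero, and are reported as undefined.
def term(n, k):
    if (2 * n) % k == 0 and ((2 * n) // k) % 2 == 0:
        return None                                  # k divides n: undefined
    w = cmath.exp(1j * cmath.pi * (2 * n / k))       # principal value of (-1)^(2n/k)
    return 1 / (k * (1 - w))

n = 5
for k in range(1, 13):
    print(k, term(n, k))   # k = 10 = 2n gives 1/(4n) = 0.05
```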
• For $k\nmid2n$ the fractional power of $-1$ is just a complex number, because $-1=e^{i\pi}$: $$(-1)^\frac{2n}{k}=(e^{i\pi})^\frac{2n}{k}=e^{i\frac{2n}{k}\pi}$$ The fact that for $k\mid n$ there is a division by $0$ and that for $k=2n$ the sum is $$\frac{1}{2}\sum_{k=2}^\infty \frac{1}{k}$$, which is clearly not convergent, would just mean that there is a set of values of $n$ for which the series is not convergent (in the classical sense). And what about the summation methods used to assign a value to a divergent series? Are there any useful in this case? Thank you for your remarks. – Joe Mar 3 at 1:53
• To your first remark; if $k\nmid2n$ then there are multiple solutions to $x^{\frac{k}{2n}}=-1$ in the complex numbers, and indeed $e^{i\frac{2n}{k}\pi}$ is one of them. Choosing this value makes these terms well-defined, so that's a good start. – Servaes Mar 3 at 10:12
• To your second remark; for every value of $n$ there are terms that are not defined; these are the terms for the divisors of $n$. – Servaes Mar 3 at 10:12
HINT:
$$S = \sum_{k=2}^\infty \frac{1}{k} \frac{1}{1-\color{blue}{(-1)^\frac{2n}{k}}}$$
You should note that the exponent $$2n$$ is an even number for every $$n \in \mathbb{N}$$, so that:
$$(-1)^\frac{2n}{k} = \sqrt[k]{(-1)^{2n}} = \sqrt[k]{1} = 1^{\frac{1}{k}}$$
And $$\lim_{k\to\infty}\left( \frac{1}{k\left(1-1^\frac{1}{k}\right)}\right) = \tilde{\infty}$$
The series diverges.
• Assume that in the expression $$(-1)^\frac{2n}{k}$$ we have $$n=5$$ and $$k=3$$. Following your reasoning the expression would be $$(-1)^\frac{2n}{k}=(-1)^\frac{10}{3}=\sqrt[3]{(-1)^{10}}=\sqrt[3]{1}=1^{\frac{1}{3}}=1$$ But that's not the case! In fact $$-1=e^{i\pi}$$, hence $$(-1)^\frac{2n}{k}=(e^{i\pi})^\frac{2n}{k}=e^{i\frac{10}{3}\pi}$$, which is just the complex number $$e^{i\frac{10}{3}\pi}=-\frac{1}{2}-i\frac{\sqrt{3}}{2}$$ en.wikipedia.org/wiki/… . I look forward to hearing from you about it. – Joe Mar 2 at 23:48
## Europa Barbarorum 2
### Medieval II: Total War: Kingdoms mod | Released 2014
Europa Barbarorum is a total conversion for Medieval II: Total War: Kingdoms and successor to Europa Barbarorum for Rome: Total War. The aim is to give the player an even better gaming experience compared to EB1 on the RTW engine and a deeper comprehension of the ancient world and its correlations.
Articles
#### Europa Barbarorum 2.2b is released!
How is this for an intro post:
--------------------------------------------
Greetings fans of Europa Barbarorum!
This has been a little longer in development than we'd originally planned; what was supposed to have been a bugfix and gameplay update (2.1c if you will) has instead morphed into another full release, complete with a load of new units and a host of other new features. There was a target date floated on Facebook of May; we're slightly over that because we've been allowing the pre-release testing to run its course. It was rushing this vital last phase that led to 2.1 having to ship with a Day One patch, and all the woes that attended that. There may well be a 2.2a/b/whatever patch, but hopefully we've ironed out enough of the issues that popped up that it won't be necessary.
THIS RELEASE IS NOT SAVEGAME COMPATIBLE
This is a complete installation, it's best if you remove any previous versions of EBII you have installed.
Pre-installation instructions
1) If you are a Steam user, and your M2TW is in Program Files or Program Files (x86) you need to move your game, or reinstall it outside of that folder. This isn't negotiable, you will not be able to patch or otherwise modify your EBII if the game is installed there. UAC may unpredictably and silently interfere with your files, breaking your installation, to say nothing of problems with VirtualStore. The official instructions on how to move a game are here. If those do not work, try these instead.
2) If you are not a Steam user, but you have M2TW installed in Program Files or Program Files (x86) you need to move your game, or reinstall it outside of that folder. For exactly the same reasons as above.
Optional: Re-install M2TW. Then start a campaign to make sure it's all still working. If you have other mods active, you won't want to do this, but it will likely result in a cleaner experience.
3) You must have Kingdoms installed - at least one of the mods, though not necessarily all of them. EBII needs the kingdoms.exe to run.
If you ignore step 1/2 and later come to the Bug Reports/Technical Help forum, and it emerges that you have your game installed in Program Files/Program Files (x86), don't expect any meaningful assistance until after you've moved your installation and tried that.
Installation instructions
1. Download the 2.2b release from the files section below.
2. Uninstall the previous version of EBII (there's a shortcut for it in your start menu).
3. The downloaded file is a zip archive, which means you need to unpack the installer files from the zip file. Windows Explorer should allow you to simply copy the contents out. If not, you can download 7-zip for the purpose.
4. Run EBII.exe.
5. Step through the install wizard. Make sure the installer is pointed at your M2TW directory - you may need to change it from the default if it is pointing at Program Files.
6. Wait for the installer to copy all the files.
7. If you have never run M2TW vanilla after installing it, run that now. Start a campaign, then quit.
8. Run the mod using the shortcut placed in your start menu or desktop. Start a campaign, then quit.
9. Make sure your error log is working properly so you can properly report any issues. Go into [your M2TW directory]\mods\EBII\EBII.cfg and make sure under [log] it says:
Code:
to = mods/ebii/logs/eb.system.log.txt
level = * trace
This means the log will be stored as eb.system.log.txt in [your M2TW directory]\mods\EBII\logs\ - we will need this log any time you report an error. No log, no resolution, for the most part.
Report your errors in the EB Bug Reports and Technical Help forum, not in this thread.
10. It's highly recommended that you play with windowed, borderless mode - this improves campaign performance and increases stability. In the same config file as above, ensure these values:
Code:
[video]
windowed = 1
borderless_window = 1
If "1" doesn't work, try "true" instead. Note you'll have to enter your native resolution in there to get a proper-sized window. You also need to ensure your medieval2.preferences.cfg doesn't have conflicting values.
Alternatively, we've bundled a windowed mode/fullscreen mode switcher program in with the installation, it looks thus:
All you should need to do is click the Windowed button.
Post-installation instructions
1. Note that the first time you run a campaign (or start a custom battle) it will take anywhere up to 5-10 minutes - it has not frozen. Be patient, let it finish.
2. If you interrupt the first campaign, you will break the map. Delete your map.rwm in [your M2TW directory]\mods\EBII\data\world\maps\base\, start a new game again, and WAIT. It only has to do this once.
3. We've bundled the Recruitment Viewer in with the installer; in order to use it, you need to ensure you have the latest version of Java installed.
We host 2.2b here at moddb. You can download from the files section. Other sources are:
Torrent - installer version
Torrent - ZIP version
EB FTP (installer version)
EB FTP (no installer version)
Totalwar .Org FTP (installer)
If you have the means to host the files, please do so and share the link - I'll update this post with it.
Change List
The change list since 2.1b is pretty huge, given it's been six months of near-constant development since then. There have been 1904 logged changes since 21st December last year, some small, some huge as you'll see. At the summary level, please see the preview thread, which captures the essence of most of them.
Credits
These are the main team-members, past and present, who've been involved in making Europa Barbarorum II come to life:
Special mentions also go to all the people who've been invaluable acting as playtesters in the long-running testing cycle. You've been tireless in finding bugs, checking things work, making suggestions and for a select group, seeding the mod so everyone can get it. Thank you.
And finally...
In closing - EBII is still not complete. We are not finished. We are somewhere past halfway now - what we still need, what we always need, are more willing volunteers. If you are a 3D artist able to work with 3DS Max, or a 2D artist who can work to instructions, we want you. If you have knowledge of ancient history and a desire to contribute, we want you. If you are a scripter or coder, and have some familiarity with the files used here (compiled with C++ I believe), we want you. If you can proofread and edit XML files accurately, we want you. If you can playtest the ever-living crap out of a game, then do it all over again after a minor change, we want you. If you like the mod, and are eager to learn how to do something that can help to contribute, we want you. Apply here.
As ever, thank you for your support and remember to read more history.
- The Europa Barbarorum II Team
#### Europa Barbarorum 2.1 released!
The day has come. Lots of people thought Europa Barbarorum II was dead, that it had become vapourware - they couldn't be more wrong. After the previous...
#### Europa Barbarorum 2.01 released!
Europa Barbarorum II is a total conversion modification (mod). It covers roughly the same time period as the Imperial Campaign included with Rome: Total...
#### Europa Barbarorum II 2.0 Released!
Europa Barbarorum II is a total conversion modification (mod). It covers roughly the same time period as the Imperial Campaign included with Rome: Total...
#### Settlement Mini Preview
The Europa Barbarorum II team would like to share with you our Settlement Mini Preview.
Files
#### Europa Barbarorum 2.2b (no installer)
Greetings fans of Europa Barbarorum! This has been a little longer in the development than we'd originally planned; what was supposed to have been a bugfix...
#### Europa Barbarorum 2.2b
Greetings fans of Europa Barbarorum! This has been a little longer in the development than we'd originally planned; what was supposed to have been a bugfix...
#### Europa Barbarorum 2.1b Hotfix (Old Release)
Hotfix release patch for Europa Barbarorum v2.1. EB is a total conversion for Medieval Total War 2.
#### Europa Barbarorum 2.1 (Old Release)
This is Europa Barbarorum release 2.1. Europa Barbarorum is a total conversion mod for Medieval Total War 2.
#### HOTFIX - for broken trait mechanics(old release)
With sincere apology for several broken trait mechanics I am presenting a quick hotfix that should hopefully resolve several of the most annoying ones...
#### Europa Barbarorum 2.01(old release)
Europa Barbarorum (EB) is a total conversion of Medieval Total War II Kingdoms.
Comments (0 - 10 of 548)
The current testing release is updated to 2.2g: Twcenter.net!
I'm hoping this will be the last testing release before the next full release, 2.3.
Will there be camel units in 2.3?
Yes, if you've seen it previewed on the Twitter/Facebook feeds, then it will be in 2.3.
It's as simple as this: we only preview units which are integrated into the development build of the mod. If it's in the development build, it will be in the next full release (which in this case will be 2.3).
Can the font be changed locally? I don't really like it much
That would require replacing the content of the data\font folder with suitable CUF files of the same names (renaming vanilla CUF files will do) and deleting all data\text string.bin files.
Are you still working on this mod? I hope it gets an update :) I have some suggestions if you like
Yes, we're hard at work. Progress updates aren't featured here on moddb, but on our forum on Total War Center and the Twitter and Facebook feeds.
I looked at Total War Center after your reply, and if I am not mistaken the latest release is 2.2f right now? I am looking forward to a full release, I hope it gets better with it. I wish you luck guys, keep it up!
Yes, 2.2f is the latest patch - you need a working copy of 2.2b in order to update. I'm working on updates for 2.2g in order to test things before the next major release.
2.3 (the next full installation) is around the corner, though I can't say exactly when.
Np, I am patient, I can wait :D I will send you a PM when I get home from work, maybe you will consider adding them in-game
Please post suggestions to the thread for that purpose: Twcenter.net I don't want to be engaged in private conversations where I'm essentially answering the same questions tens of times over.
Though to be honest, if you're not up to date, you may be asking about things we've already done. The best feedback is that based on playing the current version.
# Induced representation of symmetric group.
I'm stuck with this one and I don't even know how to start; I would appreciate any help: can you describe the induced representation of the standard representation of $S_{n}$ in $S_{n+1}$?
Hint: do you know how to describe the standard representation as an induced representation? – Qiaochu Yuan Jan 30 '12 at 2:15
Irreducible representations of $S_n$ are indexed by partitions of $n$. Assuming that by "standard representation" you mean the permutation representation, then this is the direct sum of the reps indexed by the partitions (n) and (n-1,1). Now there is a combinatorial rule for computing the induced representation: thinking of the partitions as corresponding to their Young diagrams, inducing the representation corresponding to a partition gives the sum over all partitions obtained by adding one box to the given partition. When you do this you get the direct sum
$$\mathrm{Ind}_{S_n}^{S_{n+1}} ((n) \oplus (n-1,1))=(n+1) \oplus (n,1) \oplus (n,1) \oplus (n-1,2) \oplus (n-1,1,1).$$
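For anyone who wants to check the box-adding rule mechanically, here is a small sketch (my illustration, not part of the original answer):

```python
# Inducing the irreducible S_n-representation indexed by a partition gives
# the sum of the irreducibles indexed by all partitions of n+1 obtained by
# adding one box to the Young diagram.
def add_one_box(partition):
    """Return all partitions obtained from `partition` by adding a single box."""
    p = list(partition)
    results = []
    for i in range(len(p)):
        if i == 0 or p[i - 1] > p[i]:   # row i can grow while staying weakly decreasing
            results.append(tuple(p[:i] + [p[i] + 1] + p[i + 1:]))
    results.append(tuple(p + [1]))      # or start a new row of length 1
    return results

n = 6
print(add_one_box((n,)))         # [(7,), (6, 1)]
print(add_one_box((n - 1, 1)))   # [(6, 1), (5, 2), (5, 1, 1)]
```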
Edit: It's not clear from the way the problem is phrased what kind of description is sought. As Qiaochu indicates rather cryptically, another way to "describe" this representation would be to realize the permutation representation as the induction $\mathrm{Ind}_{S_{n-1}}^{S_{n}} ((n))$ and then use transitivity of induction to get the permutation representation of $S_{n+1}$ on the cosets of $S_{n-1}$ in $S_{n+1}$ (or, if you want, by the permutation action of $S_{n+1}$ on the set of ordered pairs of two integers $(i,j)$ with $1 \leq i \neq j \leq n+1$). |
Resolving the controls of water vapour isotopes in the Atlantic sector
Abstract
Stable water isotopes are employed as hydrological tracers to quantify the diverse implications of atmospheric moisture for climate. They are widely used as proxies for studying past climate changes, e.g., in isotope records from ice cores and speleothems. Here, we present a new isotopic dataset of both near-surface vapour and ocean surface water from the North Pole to Antarctica, continuously measured from a research vessel throughout the Atlantic and Arctic Oceans during a period of two years. Our observations contribute to a better understanding and modelling of water isotopic composition. The observations reveal that the vapour deuterium excess within the atmospheric boundary layer is not modulated by wind speed, contrary to the commonly used theory, but controlled by relative humidity and sea surface temperature only. In sea ice covered regions, the sublimation of deposited snow on sea ice is a key process controlling the local water vapour isotopic composition.
Introduction
Stable water isotopologues $${\mathrm{H}}_2^{18}{\mathrm{O}}$$ and $^1\mathrm{H}^2\mathrm{H}^{16}\mathrm{O}$ undergo isotopic fractionation during phase transitions of water. Therefore, they can be used as integrated tracers of hydrological processes in the atmosphere. Their relative abundances compared with $${\mathrm{H}}_2^{16}{\mathrm{O}}$$, expressed as δ18O and δ2H (see the Methods section), have been measured and used for many applications in climate-related studies, e.g., as proxies for past temperature1 and precipitation2,3 changes, variations of atmospheric moisture source conditions and transport pathways4,5.
During phase changes, equilibrium and kinetic fractionation processes differently affect δ18O and δ2H. The deuterium excess6, hereafter d-excess, has been defined to quantify the kinetic effects (see the Methods section), such as those occurring during oceanic evaporation6 or snow formation from supersaturated vapour at low atmospheric temperatures7. Merlivat and Jouzel8, hereafter referred to as MJ79, developed a first theoretical model of isotope fractionation processes during evaporation from the ocean surface, which is still widely used. Applying their theoretical concept to the Earth’s global water cycle, MJ79 introduced the so-called “closure assumption”, assuming an equality of the isotopic composition of the net evaporated flux and the initial moist air above the ocean surface. According to this model, the strength of the d-excess signal in vapour is related to the relative humidity of the near-surface air with respect to the saturation vapour pressure at the ocean surface (RHsea), as well as to the sea surface temperature (SST). The theoretical considerations by MJ79 led to different interpretations of past d-excess variations recorded in polar ice cores. They have been used as proxies of changes of the moisture source relative humidity9 or SST10,11,12. The latter interpretation requires the assumption of negligible relative humidity variations during past climate changes, which has been recently challenged13. Regionally limited water vapour isotopic observations document a primary influence of the relative humidity on d-excess variability, while the influence of SST remains difficult to assess in this context14,15,16. Based on the evaporation theory and observations at the microphysical scale at the atmosphere–ocean interface, the model by MJ79 also considers an impact of wind speed on kinetic fractionation processes during evaporation and subsequently on the d-excess in the atmospheric vapour8,17,18,19. The importance of this wind-speed effect could not be validated so far by vapour d-excess observations performed only in coastal stations like Bermuda or Iceland14,15, and is therefore still under debate.
In polar regions, variations in sea ice extent are supposed to affect regional precipitation amounts20 and to be reflected in the water isotopic composition21,22,23. Current understanding of the impact of sea ice on the vapour isotopic composition is, however, still limited by the number of observations available in sea ice covered areas24,25.
Here, we present a unique new dataset of ship-based in situ isotopic measurements of vapour and ocean surface water, conducted with an identical instrumental setup over 2 years for a large range of oceanic surface conditions at the basin scale of the Atlantic and Arctic Oceans, in contrast to previous measurements that were more confined in area and time25,26,27. Our measurements, together with theoretical calculations and atmospheric simulations, allow for the first time the assessment of the variability of the water isotopic signal on the first order (the δ values of isotopic abundances for different species) and on the second order (the d-excess signal) under various climate conditions. For the process of oceanic evaporation, our dataset is consistent with the role of meteorological RHsea and SST, but rules out the theoretically assumed influence of wind speed on the d-excess of the initial vapour. Furthermore, the sublimation of snow deposited on top of sea ice is identified as a crucial process determining the near-surface vapour isotopic composition in sea ice-covered areas.
Results
Spatial and temporal variations
Our observations, recorded on-board of the research vessel Polarstern, cover the period 29 June 2015 to 30 June 2017 and extend over a large range of latitudes in the Atlantic sector (i.e., Atlantic Ocean and the Atlantic regions of the Arctic and Southern Oceans), from the North Pole in the Arctic to the Weddell Sea in coastal Antarctica (see the Methods section for details).
All atmospheric measurements and simulated values are presented at a 6-h temporal resolution (see Fig. 1). The highest air temperature (+28.6 °C), humidity (19.3 g kg−1) and δ18O (−8.4‰) values are reported in the Inter Tropical Convergence Zone (ITCZ) in November 2015, April 2016, December 2016 and April 2017. Over open-sea regions, d-excess values are generally contained between −10 and +10 ‰, apart from rare short events up to +15 ‰, while in the ITCZ region, only positive values of d-excess are observed. Temperature, humidity and δ18O progressively decrease from the ITCZ towards the mid and high latitudes of both hemispheres. In sea ice-covered polar regions, low δ18O and high d-excess values are observed: in areas of compact sea ice coverage, minima in air temperature, specific humidity and δ18O are reached (−18.7 °C, 0.7 g kg−1 and −40.3‰, respectively), together with a maximum in d-excess (+22.3‰). Similar extreme isotopic values are reported in August 2016 for a partial sea ice coverage, while the vessel was located in the vicinity of the Greenland ice sheet, close to the outlet of the Nioghalvfjerdsbrae glacier (latitude 79° N).
The atmospheric measurements have been accompanied by isotopic analyses of surface oceanic water samples (see Fig. 2), which depict strong latitudinal variations of the δ18O signal in surface seawater. The distribution of δ18O values is consistent with the GISS compilation28. The average δ18O value for all our oceanic samples is −0.7‰. In the mid-latitudes, values between −0.7 and 0.9‰ have been measured. The highest values are measured in the tropical bands, where evaporation dominates precipitation29, with a maximum value of 1.1‰ reached in the south tropical Atlantic, east of the Brazilian coast. In some parts of the Arctic region (western part of the Fram Strait, up to the North Pole), strongly negative values (down to −5.4‰) are measured, which could be due to the influence of the isotopically depleted waters originating from the large Siberian rivers, transported southwards along the eastern Greenland coast by the transpolar current. On the eastern part of the Fram Strait, as well as the Barents and Norwegian Seas, the δ18O values stay similar to mid-latitude values, even in sea ice-covered areas (see Supplementary Fig. 3b). In the Antarctic sector, only slightly negative δ18O values are observed (between −1.6 and −0.4‰), without any very isotopically depleted water masses, in contrast to the Arctic Ocean. The d-excess values of surface oceanic samples do not present clear latitudinal variations (see Fig. 2). The average d-excess value for all samples is 2.2‰. Only a few spots show slightly negative values (with a minimum of −1.6‰), while most of the samples present positive d-excess values (with a maximum of 8.1‰).
Outputs from an atmospheric general circulation model with an explicit diagnostic of stable water isotopes (a so-called isoGCM) nudged to meteorology (see the Methods section) are compared with the observations. The atmospheric isotopic measurements are very well reproduced by one of the simulations (ECHAMfinal) for the complete observational period (see Fig. 1). The data-model agreement is excellent for temperature (correlation coefficient R2 = 0.96, Pearson correlation p-value < 0.01, for N = 2389 data points), specific humidity (R2 = 0.97, p < 0.01, N = 2513), δ18O (R2 = 0.82, p < 0.01, N = 2466) and δ2H (R2 = 0.85, p < 0.01, N = 2466) and good for the second-order d-excess signal (R2 = 0.46, p < 0.01, N = 2466).
Deuterium excess controls during oceanic evaporation
First analyses focus on the data obtained over open-ocean regions without land or sea ice upwind, within the observation period 29 June 2015 to 1 July 2017 (see the Methods section for details on the selection criteria). The corresponding dataset (N = 1070 simultaneous vapour isotopic and meteorological observations) is distributed along the ice-free Atlantic region, from 81° N, near Svalbard, to 74° S in the Amundsen Sea (see Supplementary Fig. 3). Thus, the isotopic dataset and the related climate variables, e.g., SST and RH, are subject to both spatial and temporal (synoptic, seasonal and interannual) variations. Theoretical calculations derived from the MJ79 model and outputs from the isoGCM are used here to evaluate the evaporation processes influencing our observations.
A large range of meteorological conditions is covered by the observations from the open ocean, with very dry (+ 41%) to supersaturated (+ 125%) RHsea values and SST from −1.8 °C to + 29.1 °C. The correlation between both parameters is very low (R2 = 0.10, p < 0.01). In our observations, the d-excess values in near-surface vapour are anti-correlated with RHsea and correlated with SST (with R2 = 0.62 for RHsea and R2 = 0.50 for SST, p < 0.01). A multivariable linear regression of d-excess against both parameters indicates that constant d-excess values are distributed along oblique lines in an RHsea/SST diagram (see Fig. 3). We obtain the empirically estimated function (with R2 = 0.76 and a root-mean-square error of 3.4‰)
$${\mathrm{d}} {\hbox{-}} {\mathrm{excess}} = - 0.33 \cdot {\mathrm{RH}}_{{\mathrm{sea}}} + 0.27 \cdot {\mathrm{SST}} + 25.01$$
(1)
with d-excess in ‰, RHsea in %, and SST in °C. We note that this empirically estimated function includes both spatial and seasonal temporal changes in the evaporation conditions. While it covers a large range of meteorological conditions in the Atlantic sector, caution should be taken to apply this empirical function to isotopic data sampled in very different spatial or temporal domains.
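As an illustration (not part of the original analysis code), Eq. (1) can be evaluated directly; a minimal sketch, subject to the same caveat about the sampled domain:

```python
# Empirical open-ocean relationship of Eq. (1): d-excess in permil,
# RH_sea in %, SST in degrees C. Coefficients as reported in the text.
def d_excess_open_ocean(rh_sea, sst):
    return -0.33 * rh_sea + 0.27 * sst + 25.01

print(d_excess_open_ocean(80.0, 15.0))  # about 2.7 permil
```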
To investigate the influence of the wind speed on the d-excess signal, we first focus on the distribution of observed d-excess values against wind speed (Supplementary Fig. 4). To filter out the primary control of RHsea and SST on the d-excess signal, we sort our observational dataset into several categories, where both RHsea and SST vary within a small range only. In each of these categories, no relationship can be observed between the wind speed and the d-excess values. Under the assumption that the measured d-excess values are caused by kinetic fractionation occurring during local oceanic evaporation, our results indicate that these fractionation processes are independent of the concurrent wind speed.
Next, we compare our open-ocean water isotopic measurements with calculations of the atmospheric boundary layer water vapour isotopic composition, based on the MJ79 evaporation model. In this model, an influence of wind speed on the kinetic fractionation during evaporation is considered, as wind will affect the surface roughness by generating waves, which in turn might alter the evaporation flux. Based on laboratory experiments19, the MJ79 model assumes a smooth and a rough wind regime (below and above 7 m s−1 surface wind speed, respectively) with distinct kinetic fractionation coefficients for both regimes. Three different parameterisations of the kinetic fractionation are applied in our calculations. In the first parameterisation, a discontinuity is assumed in the kinetic fractionation coefficients at the wind-speed threshold of 7 m s−1, as suggested by MJ79. The two other parameterisations use constant kinetic fractionation coefficients, identical to those applied either below or above this wind-speed threshold (see details in the Methods section). Our calculations apply a local closure assumption by considering only the local variations of RHsea, temperature, wind speeds and oceanic surface water isotopic composition, neglecting any potential mixing of local vapour with advected air masses or convection.
The δ18O and δ2H values of all calculations are comparable, but do not match the observations (see Supplementary Fig. 5). The calculations always underestimate the short-term isotopic variations (related to synoptic variability) and overestimate the average isotopic levels compared with the observations. This overestimation can be explained by the applied local closure assumption, as the mixing of locally evaporated moisture with advected humidity is neglected in this model approach. For the MJ79 model, the closure assumption is in general not valid at the local scale, but at the global scale only29. The model may only yield the correct locally observed boundary layer δ18O and δ2H values if the atmospheric boundary layer was completely saturated with locally evaporated moisture. For our dataset, only the most enriched isotopic values observed in the low latitudes are matched by the MJ79 model estimates. At higher latitudes, the model values strongly overestimate the observed mean isotopic level. This latitudinal contrast might be due to the differences in the proportions of moisture of local or advected origin, contributing to the local boundary layer humidity between the low and high latitudinal regions.
In contrast to δ18O and δ2H, the atmospheric boundary layer variations in the d-excess signal are primarily controlled by kinetic fractionation processes occurring during evaporation. The wind-speed-related parameterisation of kinetic fractionation in the MJ79 model strongly affects the calculated d-excess values (see Fig. 4, Supplementary Figs. 6 and 7). Different kinetic fractionation coefficients used in the MJ79 calculations below or above the 7 m s−1 threshold lead to different slopes in the distributions of d-excess versus RHsea for the two wind regimes (see Fig. 4). However, in our measurements, the distributions of d-excess against RHsea are nearly identical for both wind regimes (with 63% of wind-speed conditions above 7 m s−1, see Fig. 5) and thus differ from the expected values from the MJ79 theory. For the three different parameterisations of the kinetic fractionation coefficients for the MJ79 calculation, using wind speed-dependent kinetic fractionation coefficients leads to the lowest agreement between observed and calculated d-excess values (slope of calculated versus measured values m = 0.75, R2 = 0.62, p < 0.01; see Supplementary Fig. 7). With a parameterisation of the kinetic fractionation coefficients using the constant values of the rough wind regime, most observed d-excess values at open sea are correctly reproduced (m = 0.65, R2 = 0.67, p < 0.01; see Supplementary Fig. 7). The calculation based on the kinetic fractionation coefficients of the smooth wind regime leads to an RHsea/d-excess distribution (Fig. 4) with a similar slope to the observations, but is biased towards higher d-excess values (slope of calculated versus measured values m = 0.94, R2 = 0.71, but with a + 4.9‰ offset, p < 0.01, see Supplementary Fig. 7). We note that none of the parameterisations are able to reproduce the lowest measured d-excess values, corresponding to the highest RHsea values (see Fig. 4). Despite the overestimation of the first-order isotopic signals δ18O and δ2H in the local closure assumption, our observed d-excess variability can thus be reproduced by the MJ79 model approach, even on a local scale. The observations are better reproduced if constant kinetic fractionation coefficients are applied, and the best match between our data and the MJ79 model is achieved for the constant kinetic fractionation coefficients of a rough wind regime.
In the observations, the d-excess/RHsea distribution is characterised by a slope of −0.39‰ %−1 (R2 = 0.64, p < 0.01; see Fig. 5). In the MJ79-based calculations, this distribution is slightly different from the observations for any parameterisations of the kinetic fractionation, but the deviations are smaller when using the coefficients of a rough wind regime as compared with using the ones of a smooth wind regime. For the coefficients of a rough regime, the slope is of −0.32‰ %−1 (R2 = 0.69, p < 0.01; see Fig. 4), whereas it reaches −0.5‰ %−1 (R2 = 0.84, p < 0.01; see Fig. 4) for the coefficients of a smooth regime.
The impact of wind speed on the d-excess values of near-surface vapour is further evaluated through a sensitivity study using an isoGCM (see the Methods section for details). The isoGCM does not require any closure assumption, as it takes the mixing of locally evaporated vapour with advected moisture explicitly into account. Thus, it should in principle fit better to the observations than the MJ79 model calculations. For vapour over an open ocean, assuming two distinct evaporative regimes, with kinetic fractionation coefficients depending on the wind speed and a critical wind threshold of 7 m s−1, gives rise to overestimated d-excess values compared with the observations (see Supplementary Figs. 6 and 8). This bias of ~5‰ disappears for the highest d-excess values when constant kinetic fractionation coefficients equivalent to a rough wind regime (see Supplementary Figs. 6 and 8) are applied. In both cases, the lowest d-excess values are however overestimated and almost unaffected by the change of parameterisation.
The analysis of measured in situ d-excess values, the different calculations based on the MJ79 theoretical model and the simulations with a complex isoGCM all indicate that the variations of the atmospheric boundary layer d-excess values over the ocean surface are not modulated by wind speed, contrary to the suggestions made by MJ79. The d-excess values can be best explained by assuming constant kinetic fractionation coefficients in fractionation calculations, with the values that we originally used for a rough wind regime only.
Our results are based on observations within the boundary layer, ~30 m above the skin layer, at which the evaporation takes place. In contrast, the wind speed-dependent kinetic fractionation parameterisation of MJ79 is based on wind tunnel experiments performed for a limited range of wind speeds and wave types19. From our analyses, we cannot make any conclusive statement about the validity of the model, as we did not perform comparable (laboratory) experiments directly above the water surface. However, our results clearly indicate that the MJ79 model should not be applied in its original form for the calculation of isotopic changes in atmospheric vapour well above the ocean, e.g., as done in current isoGCMs. The wind and wave-type range investigated for the MJ79 model approach might not necessarily represent the diversity of surface oceanic conditions observed at sea. For example, a rough ocean surface with high waves might also be caused by swell, and does not have to be directly linked to high wind speeds occurring at the same time. Based on our new dataset, we rather suggest modifying the MJ79 model and using constant kinetic fractionation coefficients instead of wind-speed-dependent values. This conclusion is supported by the recently reported lack of wind-speed influence on the water vapour d-excess signal measured at coastal stations in Bermuda and on the south coast of Iceland14,15.
Influence of sea ice on water vapour isotopic composition
Next, we focus on a subset of data gathered at high latitudes, where sea ice has been surrounding Polarstern (for local sea ice fractions higher than 0%, see the Methods section for details). The corresponding dataset contains measurements from both the Arctic and Antarctic regions (N = 854 simultaneous vapour isotopic and meteorological observations).
A recent study24 postulated an anti-correlation between vapour d-excess and local sea ice fraction, based on near-ocean-surface vapour isotopic measurements, conducted over the course of approximately 3 summer days in the western Arctic. This anti-correlation was linked to the meteorological conditions at the sea ice margin. Our measurements cover a substantially longer time period and a larger spatial scale within both the Arctic and Antarctic sectors. They do not confirm such an anti-correlation but rather indicate a positive correlation, with a d-excess increase of ~14‰ from open-ocean conditions to a complete sea ice coverage (see Fig. 6, Supplementary Fig. 9). However, the correlation between vapour d-excess and sea ice fraction is weak (R2 = 0.35, p < 0.01) and does not improve when separating Arctic from Antarctic data for the analysis. A decrease of RHsea is observed with increasing sea ice coverage, related to an air temperature decrease, while the SST values cannot be lower than about −1.8 °C (in complete sea ice-covered regions, the RHsea values are on average 20% lower compared with open-ocean conditions, but with a low correlation: R2 = 0.1, p < 0.01; see Supplementary Fig. 9). In agreement with the kinetic fractionation theory of MJ79, this decrease of RHsea may partly contribute to this d-excess increase during oceanic evaporation, but applying the relationship observed over an open ocean, such RHsea variations can only explain half of the observed d-excess signal. The d-excess increase over sea ice-covered areas is also accompanied by a depletion in δ18O and δ2H (on average −12‰ in δ18O in complete sea ice coverage compared with open-ocean conditions, R2 = 0.34, p < 0.01).
To identify the potential cause of this effect, we compare our measurements to two isoGCM simulations (N = 840 comparison points with observations; see Fig. 1 and Supplementary Fig. 10). In the first simulation, we assume that the isotopic composition of a bare sea ice surface is equal to the isotopic composition of the ocean water just beneath the sea ice, which is the usual procedure in such isoGCM simulations. For the second simulation, we assume that the isotopic composition of the sea ice surface is a function of the isotopic composition of a snow layer deposited on this surface (see the Methods section for details). Sublimation from the sea ice surface to the lowest atmospheric model layer is allowed in both cases, without considering any isotopic fractionation. In the first simulation, the modelled variations of δ18O and d-excess are small and do not agree with the measurements (R2 = 0.14, p < 0.01 for δ18O; R2 = 0.00, p > 10−1 for d-excess, respectively; see Supplementary Fig. 8). In the second simulation, the measured low δ18O and high d-excess values of vapour over sea ice-covered areas are better simulated (see Fig. 1 and Supplementary Figs. 8 and 10). Spatial and temporal variations of both parameters are reproduced (R2 = 0.6 for δ18O, R2 = 0.35 for d-excess, p < 0.01, see Supplementary Fig. 8) for measurements in both hemispheres. We conclude that the snow accumulated on top of sea ice, which has depleted δ18O and δ2H, and high d-excess values compared with seawater, is a potential additional key source determining the atmospheric boundary layer vapour isotopic composition in sea ice-covered regions. We note that the applied parameterisation of the fraction of sea ice covered by deposited snow (see the Methods section) is based on a subset of our observational data and thus does not represent a strict independent proof for the importance of snow sublimation as a source for the isotopic composition of the vapour. We rate it as a first-order approach to include snow on sea ice for future isotope modelling studies. Further observational data are certainly necessary to validate and improve this parameterisation, e.g., to take the flushing of the snow by seawater in fragmented sea ice areas into account, as well as potential isotopic fractionation effects during the sublimation of the snow.
During August 2016, measurements on the research vessel have been performed in an area with only a partial sea ice coverage in the vicinity of the Greenland ice sheet, close to the outlet of the Nioghalvfjerdsbrae glacier (latitude 79° N). Very depleted isotopic values of near-surface vapour measured during this period (δ18O reaching a local minimum of −37.7‰ close to the values observed at NEEM on top of the Greenland ice sheet30) are matched by both isoGCM simulations, independently of the parameterisation of sublimation above sea ice. Advection of isotopically depleted vapour from the Greenland ice sheet towards the research vessel could create a signal overprinting the local vapour isotopic composition. The model simulates sublimation over Greenland with the same surface source for both parameterisations, contrary to the sublimation taking place on the sea ice, and would provide the same isotopic composition in both simulations. However, such influence of katabatic winds on our dataset is generally limited, both around Greenland and Antarctica. Within the sea ice-covered area, air masses originating from coastal regions, as filtered for the open ocean, represent ~8% of the dataset. Our results concerning the sea-ice influence on d-excess do not change when filtering data points potentially influenced by such continental air masses (not shown here).
Discussion
Our results are based on direct isotopic measurements, on calculations applying the MJ79 model and on results from complex isoGCM simulations. Our measurements support the fundamental theory of kinetic fractionation by MJ798 concerning the influences of both relative humidity and temperature at the atmosphere–ocean interface on the atmospheric boundary layer d-excess of vapour over the oceanic surface. However, contrary to this theory, our data suggest that the kinetic fractionation is not modulated by wind speed. Considering constant fractionation coefficients with values for a rough wind regime yields best agreement between observed and modelled d-excess values. The general relationship we obtain for the distribution of d-excess as a function of relative humidity and SST is based on a compilation of observations from various climatic regions, ranging from the tropics to high latitudes. For the calculation of this relationship, we neglected the potential influence of advected moist air on our measured data. Thus, the relationship should be used with care for oceanic regions, where moisture advection might substantially contribute to the boundary layer water vapour content. For sea ice-covered regions, our results indicate that sublimation of snow on sea ice might be a key additional process, controlling the isotopic composition of the boundary layer water. This vapour can subsequently influence the isotopic signal of polar precipitations.
Hence, our results have, among others, the following implications for paleoclimate studies based on water isotope records, e.g., derived from ice cores and speleothems, as well as for present-day hydrological studies. Firstly, the variations of d-excess should be interpreted as a mixed proxy for both relative humidity and SST conditions at the moisture source, but not as a proxy for wind speed. In this regard, a 10% increase in RHsea would reduce the d-excess by ~3‰, while a 10 °C increase in SST would raise the d-excess by about 3‰. Secondly, at high latitudes, isotopic variations in near-surface vapour are strongly influenced by evaporated ocean water, but potentially also by a snow cover on the sea ice, which has an isotopically different source signal than ocean water. Combined with the decrease of relative humidity towards sea ice-covered areas, this leads to an ~1.2‰ decrease in δ18O and 1.4‰ increase in d-excess for every 10% increase in sea ice coverage. This sea ice effect in δ18O, δ2H and d-excess may have an imprint on the subsequent water isotopic composition of precipitations. It may then contribute to explain, for instance, some abrupt variations of the d-excess signal recorded in Greenland ice cores at the end of the last glacial period31 or to validate a hypothesis of past sea ice retreat at 128 ka around the West Antarctic Ice Sheet32. Water isotopic variations in ice cores may also be used as a proxy for regional sea ice extent in the Arctic and Antarctic sectors, in combination with other chemical proxies33. For this purpose, it is needed to carefully evaluate the moisture source locations of the sites where the ice cores are retrieved, as well as potential post-depositional processes affecting d-excess values in the firn layer. Another implication of our results concerns the parameterisation of future isoGCM simulations focusing on polar regions, which should explicitly consider the snow on top of sea ice, identified as a potential additional sublimation source affecting the isotopic signal. The first-order parameterisation deduced from our observational data and isoGCM experiments might be used for such modelling studies, but further independent observational data and simulation results are certainly required for improving this parameterisation.
Methods
Meteorological observations
Routinely measured meteorological data from Polarstern are used in this study. The related sensors are located at different heights: wind speeds and wind directions are measured at 39 m above sea surface, relative humidity (RHair) and temperature (Tair) at 29 m above sea surface and water temperature (SST) at 5 m below sea surface. Air pressure (P) is measured at an altitude of 19 m, but expressed at sea level. The calibrated and validated datasets are available at a 10-min averaged temporal resolution on PANGAEA Open Access library34 and have been averaged at a 6-h temporal resolution in this study.
The relative humidity of the near-surface air with respect to the saturation vapour pressure at the ocean surface (RHsea) is not directly measured but has been approximated27,35 from the observed relative humidity RHair at 29 m, corrected by the ratio of specific humidity at saturation between the temperatures at this elevation and at the sea surface (Tair and SST)
$${\mathrm{RH}}_{{\mathrm{sea}}} = {\mathrm{RH}}_{{\mathrm{air}}} \cdot \frac{{{\mathrm{q}}_{{\mathrm{sat}}}\left( {{\mathrm{T}}_{{\mathrm{air}}}} \right)}}{{{\mathrm{q}}_{{\mathrm{sat}}}\left( {{\mathrm{SST}}} \right)}}$$
(2)
where qsat(T) is the specific humidity at saturation for a given temperature T and qsat(SST) is calculated for seawater at salinity 35 PSU36. For intercomparison with other sea surface water vapour isotopic measurement campaigns27, this calculation is performed using the air temperature and relative humidity corrected to a height of 10 m.
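As an illustration of Eq. (2), the following sketch computes RHsea from the measured quantities; note that the Magnus saturation vapour pressure formula and the 2% salinity reduction used here are assumptions of this sketch, standing in for the exact 35 PSU formulation of ref. 36:

```python
import math

# Sketch of Eq. (2): RH_sea = RH_air * qsat(T_air) / qsat(SST).
def e_sat(t_celsius):
    # Magnus formula for saturation vapour pressure over water (hPa);
    # an approximation assumed for this sketch.
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def q_sat(t_celsius, pressure_hpa, salinity_factor=1.0):
    e = salinity_factor * e_sat(t_celsius)
    return 0.622 * e / (pressure_hpa - 0.378 * e)  # kg/kg

def rh_sea(rh_air, t_air, sst, pressure_hpa=1013.25):
    # 0.98 approximates the vapour-pressure reduction over 35 PSU seawater.
    return rh_air * q_sat(t_air, pressure_hpa) / q_sat(sst, pressure_hpa, 0.98)

print(rh_sea(rh_air=75.0, t_air=12.0, sst=15.0))  # roughly 63 %
```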
Sea surface temperatures
For the complete measurement period, the skin SSTs (sea surface temperatures adjusted to compensate for a skin temperature bias above a wind speed of 6 m s−1) are retrieved at the Polarstern locations from the Met Office Operational Sea Surface Temperature and Ice Analysis (OSTIA)37 products at a 0.25° × 0.25° horizontal resolution. The original dataset has a 1-h temporal resolution and has been averaged at a 6-h temporal resolution. A comparison with the Polarstern SST measurements at 5-m depth for the period 29 June 2015 to 31 January 2017 gives a very good agreement between both datasets ($\mathrm{SST}_{\mathrm{OSTIA}} = 0.99 \cdot \mathrm{SST}_{\mathrm{Polarstern}} - 0.01$ [°C]; R2 = 0.99; p < 0.01; N = 2099).
Sea ice coverage
Due to a lack of continuous and quantitative sea ice observations during the different Polarstern cruises, the sea ice coverage surrounding the research vessel has been derived from ERA-interim reanalyses38 at 0.75° × 0.75° spatial and 6-h temporal resolution. The sea ice coverage at a specific Polarstern location is assumed to be equal to the value of the surrounding grid cell. This dataset has been compared with, and is consistent with, values extracted in the same manner from daily sea ice coverage data from the AMSR2 instrument on-board the GCOM-W1 satellite at a 6.25-km resolution.
δ-notation for isotopic composition
Isotopic compositions of samples are reported using the δ-notation, where Rsample and RVSMOW are the isotopic ratios ($${\mathrm{H}}_2^{\ \,18} {\mathrm{O}}/{\mathrm{H}}_2^{\ \, 16}{\mathrm{O}}$$ or $${^1\mathrm{H}}{^2\mathrm{H}}{^{16}\mathrm{O}}/{\mathrm{H}}_2^{16}{\mathrm{O}}$$ for δ18O and δ2H, respectively) of the sample and of the Vienna Standard Mean Ocean Water (VSMOW2)39, respectively:
$${\mathrm{\delta }} = 1000 \cdot \left(\frac{{{\mathrm{R}}_{{\mathrm{sample}}}}}{{{\mathrm{R}}_{{\mathrm{VSMOW}}}}} - 1\right)$$
(3)
Definition of deuterium excess
The deuterium excess values are computed based on the commonly used definition6:
$${\mathrm{{d}}}{\hbox{-}}{\mathrm{{excess}}} = {\mathrm{\delta }}^2{\mathrm{H}} - 8 \cdot {\mathrm{\delta }}^{18}{\mathrm{O}}$$
(4)
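A minimal numerical sketch of Eqs. (3) and (4); the VSMOW ratios below are commonly cited literature values, and the sample ratios are purely illustrative:

```python
# Delta-notation (Eq. 3) and d-excess (Eq. 4), relative to VSMOW.
R18_VSMOW = 2005.20e-6   # 18O/16O ratio of VSMOW (commonly cited value)
R2_VSMOW = 155.76e-6     # 2H/1H ratio of VSMOW (commonly cited value)

def delta(r_sample, r_standard):
    return 1000.0 * (r_sample / r_standard - 1.0)   # permil

def d_excess(delta2h, delta18o):
    return delta2h - 8.0 * delta18o                 # permil

d18O = delta(1.985e-3, R18_VSMOW)    # about -10 permil
d2H = delta(144.0e-6, R2_VSMOW)      # about -75.5 permil
print(d18O, d2H, d_excess(d2H, d18O))
```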
Water vapour isotopic composition
A Cavity Ring Down Spectroscopy (CRDS) analyser (model L2140-i, Picarro, Inc.) has been running continuously on board the research vessel Polarstern since 29 June 2015, recording humidity mixing ratio, δ18O and δ2H values of water vapour at a temporal resolution of ~1 s. The ambient air inlet for this instrument is located at 29 m above sea level, connected to the analyser through an ~25-m-long tubing heated at 65 °C. The humidity mixing ratio is converted into specific humidity measured by the CRDS analyser ($q_{\mathrm{CRDS}}$) and corrected by a linear function derived from the direct comparison with specific humidity values derived from the meteorological observations ($q_{\mathrm{meteo}}$) on board the Polarstern, during the complete measurement period from 1-h resolution datasets: $q_{\mathrm{meteo}} = 0.75 \cdot q_{\mathrm{CRDS}} - 0.17$ (R2 = 1.0, p < 0.01, for N = 17592). $q_{\mathrm{meteo}}$ is calculated based on RHair, Tair and P. The precision of specific humidity measurements is estimated at 0.1 g kg−1 from the comparison of both datasets.
For instrument calibration of the isotopic values, a custom-made system is used, vaporising water isotopic standards injected in liquid form and mixed with dry air provided by high-pressure gas cylinders. Four different liquid isotopic standards are used, covering the range of the expected ambient air values (δ18O values between −7.8‰ and −40.7‰). Recommendations for long-term calibration of CRDS water vapour isotopic analysers were followed40,41. Therefore, our system allows two types of calibration. Firstly, the concentration dependence34,35 of the raw isotope measurements is corrected. Secondly, repeated corrections of the deviation of the measurements from the VSMOW-SLAP scale42 are performed by the computation of calibration curves based on the measurements of the water standards, thereby allowing correction of the instrumental drift.
The humidity-concentration dependence of the isotope measurements is corrected based on the measured isotopic composition of each four water standards over a range of different humidity values. The results of the calibration measurements are presented in Supplementary Fig. 1. The temporal stability of this correction has been evaluated by successive measurements of this so-called humidity-response function at different times. No significant drift of this response was observed for any of the four standards, neither for successive measurements over a week (not presented separately in the graphics) nor for measurements separated by several months over the complete observational period (as shown by the different measurement sequences in the graphics). The humidity-response function is thus considered constant in time. It does however depend on the isotopic standard used. A humidity-response function is computed for each isotopic standard as the interpolation of the distribution of all experiments with a polynomial function of fourth order. The correction of the humidity-concentration dependence for a specific near-surface vapour measurement is determined by the linear interpolation of the two humidity-response functions from the closest surrounding isotopic standards at the isotopic value of the measurement.
Calibration curves are applied to the raw data to correct deviations from the VSMOW-SLAP scale. These calibration curves are calculated based on the repeated measurement of every liquid standard for 30 min every 25 h (a standard measurement sequence consists of the successive measurement of all four calibration standards). To avoid any memory effects, averaged values and standard deviations of the standard isotopic composition are computed over the last 15 min of each injection only. Several filtration and correction steps (summarised in Supplementary Table 1) are applied to these standard measurements before computing the calibration curve. All measurements are corrected for the humidity-concentration dependence. To account for the difference in the isotopic composition of the same liquid standard stored in different bottles and used on separate injection lines, we define an arbitrary reference standard among both bottles and correct the measured isotopic values from the difference between the known isotopic value of both bottles. We remove measurements with average H2O values ($$\overline {{\mathrm{H}}_2{\mathrm{O}}}$$) below 5000 ppm or higher than 28000 ppm and standard deviations of H2O, δ18O or δ2H (noted σ(H2O), σ(δ18O) and σ(δ2H), respectively) higher than 2500 ppm, 1.5‰ and 5‰. We compute a first 14-day running average and eliminate all measurements that deviate from this running average by more than 1.5‰, 5‰ and 8‰ for δ18O, δ2H and d-excess, respectively. The observed variabilities of these selected measurements of all liquid standards are shown in Supplementary Fig. 2.
The calibration curves are calculated every time a standard measurement sequence has been performed, based on a new 14-day running average of the previously selected liquid standard measurements. These values are compared with the theoretical values of the reference standards at the time of the standard measurement sequence. If values of at least 3 standards are available, a linear regression of the measurements against the theoretical values gives the calibration curve. Otherwise, as found in the literature for this type of analyser14, we correct the calibration curve for the drift of the running average of those standards that have been measured correctly, and use a slope interpolated between the closest calculated calibration curves.
Based on the uncertainty of both corrections from the concentration dependence and deviations from the VSMOW-SLAP scale, the measurement accuracy is estimated at 0.16‰, 0.8‰ and 2.1‰ on δ18O, δ2H and d-excess. The measurement precision on 1-h averages, estimated from the standard deviation of calibration standard measurements at a constant humidity level, is 0.24‰, 0.7‰ and 2.7‰ on δ18O, δ2H and d-excess for humidity levels above 5 g kg−1. It deteriorates with lower humidity levels, reaching 0.5‰, 1.9‰ and 5.9‰ for δ18O, δ2H and d-excess at humidity levels of 1 g kg−1. The dataset presented in this study has been averaged at a 6-h temporal resolution.
Surface water isotopic composition
The isotopic composition of the surface oceanic water has been measured on samples taken daily since 30 June 2015, collected in narrow-mouth low-density polyethylene 20- or 30-mL plastic bottles, sealed with Parafilm M and stored at +4 °C from the end of the expedition until the measurement. Measurements are done with IRMS and the equilibration technique at the isotope laboratory of AWI Potsdam43 (with an accuracy better than 0.1‰ and 0.8‰ for δ18O and δ2H). For comparison with other parameters, an interpolation of this dataset at a 6-h resolution has been used.
MJ79 model under the closure assumption
For all observations performed over the open ocean, we compute the corresponding theoretical water vapour isotopic composition in the atmospheric boundary layer based on the MJ79 model under the closure assumption, i.e. we assume it to be equal to the isotopic composition of the evaporation flux8,35. We thus express the boundary layer vapour isotopic ratio (RBL) as a function of the surface seawater isotopic ratio (RSW), taking the equilibrium and kinetic fractionation coefficients αeq and αk and RHsea into account:
$$R_{\mathrm{BL}} = \frac{R_{\mathrm{SW}}}{\alpha_{\mathrm{eq}}\left(\alpha_{\mathrm{k}} + \mathrm{RH}_{\mathrm{sea}}\left(1 - \alpha_{\mathrm{k}}\right)\right)}$$
(5)
We use skin temperature at the air–sea interface from the OSTIA dataset to determine αeq values44. RSW is determined by the interpolated values of the isotopic composition measured in daily sampled surface oceanic water.
We use three different parameterisations for the dependence of the kinetic fractionation coefficients on wind speed. In the first simulation (named MJ79ref), the kinetic fractionation coefficients ($${\mathrm{\alpha }}_{{\mathrm{k}},^{18}{\mathrm{O}}}$$ and $${\mathrm{\alpha }}_{{\mathrm{k}},^2{\mathrm{H}}}$$ for $${\mathrm{H}}_2^{18}{\mathrm{O}}$$ and $$^1{\mathrm{H}}^2{\mathrm{H}}^{16}{\mathrm{O}}$$) for a smooth or a rough wind regime are used for wind speeds, respectively, below or above the threshold of 7 m s−1. We apply the kinetic fractionation coefficients35,45 $${\mathrm{\alpha }}_{{\mathrm{k}},^{18}{\mathrm{O}}}$$ = 1.0060, $${\mathrm{\alpha }}_{{\mathrm{k}},^2{\mathrm{H}}}$$ = 1.0053 for a smooth wind regime and $${\mathrm{\alpha }}_{{\mathrm{k}},^{18}{\mathrm{O}}}$$ = 1.0035, $${\mathrm{\alpha }}_{{\mathrm{k}},^2{\mathrm{H}}}$$ = 1.0031 for a rough wind regime. We use the wind speed measured on Polarstern (at 39 m above the sea surface). In two additional simulations (named MJ79smooth and MJ79rough, respectively), the kinetic fractionation coefficients are set constant and independent of the wind speed, either to the values of the smooth or of the rough wind regime.
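Equation (5) together with the three kinetic-fractionation schemes is straightforward to code. The sketch below uses the $${\mathrm{H}}_2^{18}{\mathrm{O}}$$ coefficients quoted above; the function name, argument names and the convention that RHsea is a fraction between 0 and 1 are illustrative assumptions, not the authors' implementation.

```python
ALPHA_K_18O = {"smooth": 1.0060, "rough": 1.0035}  # swap in the 2H values for d2H

def mj79_boundary_layer_ratio(r_sw, alpha_eq, rh_sea, wind_speed, scheme="MJ79ref"):
    """Isotope ratio of boundary-layer vapour from eq. (5) under the closure
    assumption. rh_sea is a fraction (0-1); wind_speed in m/s."""
    if scheme == "MJ79ref":  # smooth regime below 7 m/s, rough above
        alpha_k = ALPHA_K_18O["smooth"] if wind_speed < 7.0 else ALPHA_K_18O["rough"]
    elif scheme == "MJ79smooth":
        alpha_k = ALPHA_K_18O["smooth"]
    elif scheme == "MJ79rough":
        alpha_k = ALPHA_K_18O["rough"]
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return r_sw / (alpha_eq * (alpha_k + rh_sea * (1.0 - alpha_k)))
```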
Atmosphere general circulation model with water isotopes
In this study, isoGCM simulations are performed with the ECHAM5-wiso model46 with a horizontal grid size of ~1.1×1.1° (T106 spectral resolution) and 31 vertical levels. The model is nudged to ERA-interim surface pressure, temperature, vorticity and divergence fields47 to ensure that the simulated large-scale atmospheric flow agrees with the ECMWF reanalysis data on all analysed timescales during the years 2015–2017. For each time step of 6 h, isoGCM simulation results of near-surface vapour amount and its isotopic composition are extracted from the model grid cell encompassing the position of Polarstern. In the vertical direction, this grid cell extends from the surface to ~60 m above the surface. Two different ECHAM5-wiso simulations are performed, both covering the period from January 2015 to July 2017 after a 12-month spin-up period.
In the first simulation (named ECHAMexp), different kinetic fractionation coefficients during evaporation over open water are applied depending on wind speed: constant coefficients for a smooth wind regime, and wind speed-dependent coefficients for a rough wind regime are used for wind speeds below or above the threshold of 7 m s−1, respectively48. Over sea ice-covered areas, bare ice is prescribed with an isotopic composition of ocean surface water, based on a global gridded data compilation of δ18O in seawater49.
In the second simulation (named ECHAMfinal), constant fractionation coefficients for δ18O and δ2H, suggested for a rough wind regime35, are applied under all meteorological conditions for evaporation processes. Over sea ice-covered areas, a 2-cm-deep snow layer is assumed on top of any sea ice-covered grid-cell fraction, to account for accumulation and sublimation of snow on sea ice. The isotopic composition of the bare sea ice is assumed to be equal to the composition of surface ocean waters, neglecting a potential small fractionation process occurring during the formation of sea ice. The prescribed surface ocean δ18O and δ2H values are taken from a reference global gridded dataset compilation49. The isotopic composition of the snow layer is determined by the isotopic composition of the accumulated snowfall. During sublimation processes, no fractionation of the snow is assumed. This treatment of snow on sea ice as a single-layer bucket model is equivalent to the treatment of snow on land surfaces in ECHAM5-wiso. The deposited snow in the model is locally controlled, without taking any advection of sea ice or snow drift into account. In reality, the isotopic composition of the sea ice surface will not only be determined by the isotope signal of bare ice or snow on top of the sea ice, but also by further processes altering the sea ice surface. For example, sea spray or breaking waves might substantially alter the isotopic composition of a snow-covered sea ice surface, especially for regions with only a minor area fraction covered by sea ice. These effects will lead to a further mixing of the isotopic signal of the original fallen snow with the isotopic composition of the surrounding ocean surface waters. To account for such processes in ECHAM5-wiso, the isotopic composition of the sea ice surface is assumed as
$$\delta_{\mathrm{sea\;ice\;surface}} = f^4 \cdot \delta_{\mathrm{snow\;bucket}} + \left( 1 - f^4 \right) \cdot \delta_{\mathrm{ocean}}$$
(6)
with δ as δ18O or δ2H, δsea ice surface the isotopic composition of the sea ice surface, δsnow bucket the isotopic composition of the snow bucket on sea ice, δocean the isotopic composition of the surrounding ocean surface water and f as the fraction of sea ice in each grid cell. This empirical formula is based on a comparison of measured and simulated δ18O and d-excess values for the period from August to October 2015 and applied for the data-model comparison over the whole measurement period of this study (see Fig. 1) afterwards.
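Equation (6) amounts to a one-line weighting; a minimal sketch (all names are illustrative):

```python
def sea_ice_surface_delta(delta_snow_bucket, delta_ocean, ice_fraction):
    """Eq. (6): the snow-bucket signal is weighted by the fourth power of the
    grid-cell sea ice fraction, so low ice cover mixes the surface signal
    strongly toward the surrounding ocean water."""
    w = ice_fraction ** 4
    return w * delta_snow_bucket + (1.0 - w) * delta_ocean
```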
Data filtering
For analysing evaporation processes occurring over the open ocean, without any potential influence of land- or sea ice-based processes, we filtered out all data with sea ice or land situated upwind of the measurement. The upwind area is defined by a 40° angle centred on the wind origin and a maximum distance of vapour transport within the 14 h previous to the measurement, which is determined by the wind speed measured on Polarstern. Only if this area is free of both sea ice and land (0% sea ice index and no land area) do we consider the corresponding measurements to be influenced by surface processes over the open ocean. Conversely, the isotope data corresponding to sea ice-covered areas are selected by including all measurements for which the ERA-interim sea ice coverage in the grid cell surrounding the research vessel Polarstern is higher than 0%. The locations of all filtered datasets are displayed in Supplementary Fig. 3a, b.
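A rough geometric version of this upwind screen is sketched below. The flat-earth distance approximation (crude at high latitudes), the meteorological convention that wind direction denotes where the wind comes from, and all input names are assumptions for illustration; the published filtering was performed on the original gridded sea ice and land masks.

```python
import numpy as np

def open_ocean_upwind(lat0, lon0, wind_dir_deg, wind_speed, grid_lat, grid_lon, blocked):
    """True if no sea ice or land ('blocked' boolean grid) lies within a
    40-degree sector centred on the wind origin, out to the distance air
    travels in 14 h at the measured wind speed (small-area approximation)."""
    max_km = wind_speed * 3.6 * 14.0                    # m/s -> km over 14 h
    dx = (grid_lon - lon0) * 111.32 * np.cos(np.radians(lat0))  # approx. km
    dy = (grid_lat - lat0) * 110.57                             # approx. km
    dist_km = np.hypot(dx, dy)
    bearing = np.degrees(np.arctan2(dx, dy)) % 360.0    # direction ship -> point
    diff = (bearing - wind_dir_deg + 180.0) % 360.0 - 180.0
    in_sector = (dist_km <= max_km) & (np.abs(diff) <= 20.0)
    return not bool(np.any(blocked & in_sector))
```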
Data availability
All presented instrumental and modelling data of this study are available on the PANGAEA database50.
Code availability
The code of the ECHAM model can be retrieved from the Max-Planck-Institut für Meteorologie and is subject to a license. The isotope-enhanced version is available upon personal contact with the authors.
References
1. Jouzel, J. Water stable isotopes: atmospheric composition and applications in polar ice core studies. In Treatise on Geochemistry (eds-in-chief Heinrich, D. H. & Karl, K. T.) 213–243 (Pergamon, Elsevier, 2003). https://doi.org/10.1016/B0-08-043751-6/04040-8
2. Hoffmann, G. et al. Coherent isotope history of Andean ice cores over the last century. Geophys. Res. Lett. 30, 1179 (2003).
3. Ramirez, E. et al. A new Andean deep ice core from Nevado Illimani (6350 m), Bolivia. Earth Planet. Sci. Lett. 212, 337–350 (2003).
4. Ortega, P. et al. Characterizing atmospheric circulation signals in Greenland ice cores: insights from a weather regime approach. Clim. Dyn. 43, 2585–2605 (2014).
5. Sodemann, H., Masson-Delmotte, V., Schwierz, C., Vinther, B. M. & Wernli, H. Interannual variability of Greenland winter precipitation sources: 2. Effects of North Atlantic Oscillation variability on stable isotopes in precipitation. J. Geophys. Res. 113, D12111 (2008).
6. Dansgaard, W. Stable isotopes in precipitation. Tellus 16, 436–468 (1964).
7. Jouzel, J. & Merlivat, L. Deuterium and oxygen 18 in precipitation: modeling of the isotopic effects during snow formation. J. Geophys. Res. Atmospheres 89, 11749–11757 (1984).
8. Merlivat, L. & Jouzel, J. Global climatic interpretation of the deuterium-oxygen 18 relationship for precipitation. J. Geophys. Res. Oceans 84, 5029–5033 (1979).
9. Jouzel, J., Merlivat, L. & Lorius, C. Deuterium excess in an East Antarctic ice core suggests higher relative humidity at the oceanic surface during the last glacial maximum. Nature 299, 688–691 (1982).
10. Stenni, B. et al. An oceanic cold reversal during the last deglaciation. Science 293, 2074–2077 (2001).
11. Uemura, R. et al. Ranges of moisture-source temperature estimated from Antarctic ice cores stable isotope records over glacial–interglacial cycles. Clim. Past 8, 1109–1125 (2012).
12. Vimeux, F., Masson, V., Jouzel, J., Stievenard, M. & Petit, J. R. Glacial–interglacial changes in ocean surface conditions in the Southern Hemisphere. Nature 398, 410–413 (1999).
13. Pfahl, S. & Sodemann, H. What controls deuterium excess in global precipitation? Clim. Past 10, 771–781 (2014).
14. Steen-Larsen, H. C. et al. Climatic controls on water vapor deuterium excess in the marine boundary layer of the North Atlantic based on 500 days of in situ, continuous measurements. Atmos. Chem. Phys. 14, 7741–7756 (2014).
15. Steen-Larsen, H. C. et al. Moisture sources and synoptic to seasonal variability of North Atlantic water vapor isotopic composition. J. Geophys. Res. Atmospheres 120, JD023234 (2015).
16. Bonne, J.-L. et al. The isotopic composition of water vapour and precipitation in Ivittuut, southern Greenland. Atmos. Chem. Phys. 14, 4419–4439 (2014).
17. Brutsaert, W. The roughness length for water vapor, sensible heat, and other scalars. J. Atmos. Sci. 32, 2028–2031 (1975).
18. Brutsaert, W. A theory for local evaporation (or heat transfer) from rough and smooth surfaces at ground level. Water Resour. Res. 11, 543–550 (1975).
19. Merlivat, L. The dependence of bulk evaporation coefficients on air-water interfacial conditions as determined by the isotopic method. J. Geophys. Res. Oceans 83, 2977–2980 (1978).
20. Bintanja, R. & Selten, F. M. Future increases in Arctic precipitation linked to local evaporation and sea-ice retreat. Nature 509, 479–482 (2014).
21. Faber, A.-K., Møllesøe Vinther, B., Sjolte, J. & Anker Pedersen, R. How does sea ice influence δ18O of Arctic precipitation? Atmos. Chem. Phys. 17, 5865–5876 (2017).
22. Kurita, N. Origin of Arctic water vapor during the ice-growth season. Geophys. Res. Lett. 38, L02709 (2011).
23. Noone, D. & Simmonds, I. Sea ice control of water isotope transport to Antarctica and implications for ice core interpretation. J. Geophys. Res. Atmos. 109, D07105 (2004).
24. Klein, E. S. & Welker, J. M. Influence of sea ice on ocean water vapor isotopes and Greenland ice core records. Geophys. Res. Lett. GL071748 (2016).
25. Kurita, N. et al. Influence of large-scale atmospheric circulation on marine air intrusion towards the East Antarctic coast. Geophys. Res. Lett. GL070246 (2016).
26. Uemura, R., Matsui, Y., Yoshimura, K., Motoyama, H. & Yoshida, N. Evidence of deuterium excess in water vapor as an indicator of ocean surface conditions. J. Geophys. Res. Atmospheres 113, D19114 (2008).
27. Benetti, M. et al. Stable isotopes in the atmospheric marine boundary layer water vapour over the Atlantic Ocean, 2012–2015. Sci. Data 4, 160128 (2017).
28. Schmidt, G. A. Global seawater oxygen-18 database. http://data.giss.nasa.gov/o18data/ (1999).
29. Jouzel, J. & Koster, R. D. A reconsideration of the initial conditions used for stable water isotope models. J. Geophys. Res. Atmospheres 101, 22933–22938 (1996).
30. Steen-Larsen, H. C. et al. What controls the isotopic composition of Greenland surface snow? Clim. Past 10, 377–392 (2014).
31. Steffensen, J. P. et al. High-resolution Greenland ice core data show abrupt climate change happens in few years. Science 321, 680–684 (2008).
32. Holloway, M. D. et al. Antarctic last interglacial isotope peak in response to sea ice retreat not ice-sheet collapse. Nat. Commun. 7, 12293 (2016).
33. Grumet, N. S. et al. Variability of sea-ice extent in Baffin Bay over the last millennium. Clim. Change 49, 129–145 (2001).
34. König-Langlo, G., Loose, B. & Bräuer, B. 25 Years of Polarstern Meteorology. World Data Center for Marine Environmental Sciences (2006). https://doi.org/10.1594/PANGAEA.761654
35. Benetti, M. et al. Deuterium excess in marine water vapor: dependency on relative humidity and surface wind speed during evaporation. J. Geophys. Res. Atmospheres 119, 584–593 (2014).
36. Curry, J. A. & Webster, P. J. Thermodynamics of Atmospheres and Oceans (Academic Press, Elsevier, San Diego, CA, 1998).
37. Donlon, C. J. et al. The operational sea surface temperature and sea ice analysis (OSTIA) system. Remote Sens. Environ. 116, 140–158 (2012).
38. Dee, D. P. et al. The ERA-Interim reanalysis: configuration and performance of the data assimilation system. Q. J. R. Meteorol. Soc. 137, 553–597 (2011).
39. Coplen, T. B. Guidelines and recommended terms for expression of stable-isotope-ratio and gas-ratio measurement results. Rapid Commun. Mass Spectrom. 25, 2538–2560 (2011).
40. Tremoy, G. et al. Measurements of water vapor isotope ratios with wavelength-scanned cavity ring-down spectroscopy technology: new insights and important caveats for deuterium excess measurements in tropical areas in comparison with isotope-ratio mass spectrometry. Rapid Commun. Mass Spectrom. 25, 3469–3480 (2011).
41. Aemisegger, F. et al. Measuring variations of δ18O and δ2H in atmospheric water vapour using two commercial laser-based spectrometers: an instrument characterisation study. Atmos. Meas. Tech. 5, 1491–1511 (2012).
42. Coplen, T. B. Reporting of stable carbon, hydrogen, and oxygen isotopic abundances. Ref. Intercomp. Mater. Stable Isot. Light Elem. 825, 31–34 (1995).
43. Meyer, H., Schönicke, L., Wand, U., Hubberten, H. W. & Friedrichsen, H. Isotope studies of hydrogen and oxygen in ground ice – experiences with the equilibration technique. Isot. Environ. Health Stud. 36, 133–149 (2000).
44. Majoube, M. Fractionnement en oxygène 18 et en deutérium entre l'eau et sa vapeur. J. Chim. Phys. 68, 1423–1436 (1971).
45. Merlivat, L. Molecular diffusivities of H216O, HD16O, and H218O in gases. J. Chem. Phys. 69, 2864–2871 (1978).
46. Werner, M., Langebroek, P. M., Carlsen, T., Herold, M. & Lohmann, G. Stable water isotopes in the ECHAM5 general circulation model: toward high-resolution isotope modeling on a global scale. J. Geophys. Res. Atmos. 116, D15109 (2011).
47. Butzin, M. et al. Variations of oxygen-18 in West Siberian precipitation during the last 50 years. Atmos. Chem. Phys. 14, 5853–5869 (2014).
48. Hoffmann, G., Werner, M. & Heimann, M. Water isotope module of the ECHAM atmospheric general circulation model: a study on timescales from days to several years. J. Geophys. Res. Atmospheres 103, 16871–16896 (1998).
49. LeGrande, A. N. & Schmidt, G. A. Global gridded data set of the oxygen isotopic composition in seawater. Geophys. Res. Lett. 33, L12604 (2006).
50. Bonne, J.-L. et al. Near-surface atmospheric vapour and oceanic surface water isotopic compositions calibrated data from Polarstern cruises, 2015–2017. PANGAEA (2019). https://doi.org/10.1594/PANGAEA.897578
Acknowledgements
This study has been funded by the AWI Strategy Fund project ISOARC. The measurements on-board Polarstern research vessel were conducted during the PS93.1 (grant no. awi-ps9301), PS93.2, PS94, PS95.1, PS95.2, PS96, PS97, PS98, PS99.1, PS99.2, PS100, PS101, PS102, PS103, PS104, PS105, PS106.1 and PS106.2 expeditions. We deeply acknowledge the different persons who took part in the maintenance of the instrument during these campaigns: Sandra Tippenhauer, Mario Hoppmann and Hendrik Hampe, Ronny Engelmann, Stephanie Bohlmann, Stefanie Arndt, Leonard Rossmann, Lester Lembke-Jene, Vera Schlindwein, Ole Valk, Myriel Horn, Mooritz Haarig, Heike Kalesse, Hendrik Hampe, Elke Burkhart, Michael Flau, Boris Christian, Julia Goedecke, Anna Nikolopoulos, Torsten Linders and Céline Heuzé.
Author information
Contributions
All authors contributed to the design of this study. Instrument layout and Picarro installation on Polarstern was done by J.-L.B., M.B., H.M., S.K., L.S., H.C.S.-L. and M.W. Isotope measurements and instrument maintenance were performed by J.-L.B. and M.B. Ocean isotope sampling on-board of Polarstern was advised by B.R. and ocean isotope measurements were done by H.M. IsoGCM simulations were performed by M.W. The first paper draft was written by J.-L.B. and M.W., and all authors contributed to the discussion of the results and the final article.
Corresponding author
Correspondence to Jean-Louis Bonne.
Ethics declarations
Competing interests
The authors declare no competing interests.
Journal peer review information: Nature Communications thanks the anonymous reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Bonne, JL., Behrens, M., Meyer, H. et al. Resolving the controls of water vapour isotopes in the Atlantic sector. Nat Commun 10, 1632 (2019). https://doi.org/10.1038/s41467-019-09242-6
# A primality proving algorithm using least strong pseudoprimes
This post is the continuation of this previous post. In this post, we discuss a deterministic primality proving algorithm that uses the least strong pseudoprimes to several prime bases. After describing the test, we present several examples.
The previous post discusses the notion of witness for the strong probable prime test (the Miller-Rabin test). One important characteristic of the strong probable prime test is that every composite number has at least one witness (in fact, lots of them). This means that the strong probable prime test is not tripped up by Carmichael numbers the way the Fermat test is.
When there is a guarantee that every composite number has a witness for its compositeness, it makes sense to talk about the least witness $w(n)$ of a composite number $n$. The statement that $w(n)>B$ is equivalent to the statement that $n$ is a strong pseudoprime to all the bases less than or equal to $B$. Strong pseudoprimes to base 2 are rare. Strong pseudoprimes to multiple bases are even rarer. According to [2], there are only 13 numbers below $25 \cdot 10^9$ that are strong pseudoprimes to all of the bases 2, 3 and 5. Thus composite numbers $n$ with $w(n)>5$ are rare. Because they are rare, knowing about strong pseudoprimes can help us identify primes.
The test in question comes from [1]. It has been improved and sharpened over the years; the paper [1] seems to contain the best results to date regarding this test. To see how the method evolved and was improved, any interested reader can consult the references provided in [1]. Let $\psi_n$ be the least strong pseudoprime to all of the first $n$ prime bases. The paper [1] establishes the following values of $\psi_1$ through $\psi_{11}$.
$\psi_1=$ 2047
$\psi_2=$ 1373653
$\psi_3=$ 25326001
$\psi_4=$ 3215031751
$\psi_5=$ 2152302898747
$\psi_6=$ 3474749660383
$\psi_7=\psi_8=$ 341550071723321
$\psi_9=\psi_{10}=\psi_{11}=$ 3825123056546413051
To illustrate, the number 25326001 is the smallest strong pseudoprime to all of the bases 2, 3 and 5. For any odd number $n$ less than 25326001, check whether $n$ is a strong probable prime to these 3 bases. If it is, then $n$ must be prime: otherwise $n$ would be a strong pseudoprime to bases 2, 3 and 5 that is less than 25326001, contradicting the minimality of $\psi_3$. And if $n$ fails to be a strong probable prime to one of the 3 bases, then it is a composite number.
The test using $\psi_n$ is a primality test that actually proves primality rather than merely giving strong evidence for it. For a number below $\psi_n$, the test requires only $n$ rounds of the strong probable prime test, essentially $n$ modular exponentiations. It is a limited test, since it only applies to numbers less than $\psi_n$. Still, it is interesting to note that the notions of strong probable primes and strong pseudoprimes give a deterministic primality test (though a limited one) that is fast and easy to use, in addition to the usual Miller-Rabin probabilistic primality test.
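To make the test concrete, here is a short Python sketch (my illustration, not code from [1]). Python's built-in three-argument pow performs the modular exponentiation, and the thresholds are the $\psi_n$ values tabulated above; since $\psi_9=\psi_{10}=\psi_{11}$, the first 9 prime bases suffice below $\psi_9$.

```python
def is_strong_probable_prime(n, a):
    """One round of the strong probable prime (Miller-Rabin) test to base a; n odd."""
    q, k = n - 1, 0
    while q % 2 == 0:
        q //= 2
        k += 1
    x = pow(a, q, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(k - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False

# Least-strong-pseudoprime thresholds from [1]: testing the listed bases
# proves primality for every odd number below the corresponding bound.
PSI = [
    (2047, [2]),
    (1373653, [2, 3]),
    (25326001, [2, 3, 5]),
    (3215031751, [2, 3, 5, 7]),
    (2152302898747, [2, 3, 5, 7, 11]),
    (3474749660383, [2, 3, 5, 7, 11, 13]),
    (341550071723321, [2, 3, 5, 7, 11, 13, 17]),
    (3825123056546413051, [2, 3, 5, 7, 11, 13, 17, 19, 23]),
]

def is_prime_proved(n):
    """Deterministic primality proof for n below the largest threshold."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23):
        if n == p:
            return True
        if n % p == 0:
            return False
    for bound, bases in PSI:
        if n < bound:
            return all(is_strong_probable_prime(n, a) for a in bases)
    raise ValueError("n exceeds the proven range of this test")
```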
___________________________________________________________________
Examples
Example 1
Consider the number $n=$ 2795830049. This number is below $\psi_4$. So we check for probable primality of $n$ to the bases 2, 3, 5, and 7. First of all, $n-1=2^5 \cdot Q$ where $Q=$ 87369689. Here’s the calculation.
$2^Q \equiv 937249258 \ (\text{mod} \ 2795830049)$
$2^{2 \cdot Q} \equiv 2693069488 \ (\text{mod} \ 2795830049)$
$2^{4 \cdot Q} \equiv 226823779 \ (\text{mod} \ 2795830049)$
$2^{8 \cdot Q} \equiv 2795830048 \equiv -1 \ (\text{mod} \ 2795830049)$
$2^{16 \cdot Q} \equiv 1 \ (\text{mod} \ 2795830049)$
$2^{32 \cdot Q} \equiv 1 \ (\text{mod} \ 2795830049)$
Note that the first term that is a 1 in the above sequence is $2^{16 \cdot Q}$. The preceding term is a -1. Thus $n=$ 2795830049 is a strong probable prime to base 2. Now the base 3 calculation.
$3^Q \equiv 268289123 \ (\text{mod} \ 2795830049)$
$3^{2 \cdot Q} \equiv 717416975 \ (\text{mod} \ 2795830049)$
$3^{4 \cdot Q} \equiv 17652213 \ (\text{mod} \ 2795830049)$
$3^{8 \cdot Q} \equiv 2569006270 \ (\text{mod} \ 2795830049)$
$3^{16 \cdot Q} \equiv 2795830048 \equiv -1 \ (\text{mod} \ 2795830049)$
$3^{32 \cdot Q} \equiv 1 \ (\text{mod} \ 2795830049)$
Note that the first term that is a 1 in the above sequence is the last term $3^{32 \cdot Q}$. The preceding term is a -1. Thus $n=$ 2795830049 is a strong probable prime to base 3. Now the base 5 calculation.
$5^Q \equiv 102760561 \ (\text{mod} \ 2795830049)$
$5^{2 \cdot Q} \equiv 226823779 \ (\text{mod} \ 2795830049)$
$5^{4 \cdot Q} \equiv 2795830048 \equiv -1 \ (\text{mod} \ 2795830049)$
$5^{8 \cdot Q} \equiv 1 \ (\text{mod} \ 2795830049)$
$5^{16 \cdot Q} \equiv 1 \ (\text{mod} \ 2795830049)$
$5^{32 \cdot Q} \equiv 1 \ (\text{mod} \ 2795830049)$
The base 7 calculation.
$7^Q \equiv 121266349 \ (\text{mod} \ 2795830049)$
$7^{2 \cdot Q} \equiv 937249258 \ (\text{mod} \ 2795830049)$
$7^{4 \cdot Q} \equiv 2693069488 \ (\text{mod} \ 2795830049)$
$7^{8 \cdot Q} \equiv 226823779 \ (\text{mod} \ 2795830049)$
$7^{16 \cdot Q} \equiv 2795830048 \equiv -1 \ (\text{mod} \ 2795830049)$
$7^{32 \cdot Q} \equiv 1 \ (\text{mod} \ 2795830049)$
Both the base 5 and base 7 calculations show that $n=$ 2795830049 is a strong probable prime to both bases. The calculations for the 4 bases conclusively prove that $n=$ 2795830049 is a prime number.
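All of these sequences are easy to reproduce with three-argument pow; for instance, the base 2 sequence of this example:

```python
n, Q = 2795830049, 87369689
print([pow(2, 2 ** j * Q, n) for j in range(6)])
# [937249258, 2693069488, 226823779, 2795830048, 1, 1]  (values as listed above)
```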
Example 2
Consider the number $n=$ 62834664835837. This number is below $\psi_7$. So we check for probable primality to the bases 2, 3, 5, 7, 11, 13, and 17. First, $n-1=2^2 \cdot Q$ where $Q=$ 15708666208959.
$2^Q \equiv 49994720924726 \ (\text{mod} \ 62834664835837)$
$2^{2 \cdot Q} \equiv 62834664835836 \equiv -1 \ (\text{mod} \ 62834664835837)$
$2^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
__________________
$3^Q \equiv 1 \ (\text{mod} \ 62834664835837)$
$3^{2 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
$3^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
__________________
$5^Q \equiv 49994720924726 \ (\text{mod} \ 62834664835837)$
$5^{2 \cdot Q} \equiv 62834664835836 \equiv -1 \ (\text{mod} \ 62834664835837)$
$5^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
__________________
$7^Q \equiv 49994720924726 \ (\text{mod} \ 62834664835837)$
$7^{2 \cdot Q} \equiv 62834664835836 \equiv -1 \ (\text{mod} \ 62834664835837)$
$7^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
__________________
$11^Q \equiv 1 \ (\text{mod} \ 62834664835837)$
$11^{2 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
$11^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
__________________
$13^Q \equiv 12839943911111 \ (\text{mod} \ 62834664835837)$
$13^{2 \cdot Q} \equiv 62834664835836 \equiv -1 \ (\text{mod} \ 62834664835837)$
$13^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
__________________
$17^Q \equiv 1 \ (\text{mod} \ 62834664835837)$
$17^{2 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
$17^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 62834664835837)$
The calculation for all bases shows that $n=$ 62834664835837 is a strong probable prime to all 7 prime bases. This proves that $n=$ 62834664835837 is prime.
Example 3
Consider the number $n=$ 21276028621. This is an 11-digit number less than $\psi_5$. The algorithm is to check the strong probable primality of $n$ to the first 5 prime bases: 2, 3, 5, 7 and 11. First, $n-1=2^2 \cdot Q$ where $Q=$ 5319007155.
$2^Q \equiv 560973617 \ (\text{mod} \ 21276028621)$
$2^{2 \cdot Q} \equiv 21276028620 \equiv -1 \ (\text{mod} \ 21276028621)$
$2^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 21276028621)$
__________________
$3^Q \equiv 1 \ (\text{mod} \ 21276028621)$
$3^{2 \cdot Q} \equiv 1 \ (\text{mod} \ 21276028621)$
$3^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 21276028621)$
__________________
$5^Q \equiv 1 \ (\text{mod} \ 21276028621)$
$5^{2 \cdot Q} \equiv 1 \ (\text{mod} \ 21276028621)$
$5^{4 \cdot Q} \equiv 1 \ (\text{mod} \ 21276028621)$
__________________
$7^Q \equiv 10282342854 \ (\text{mod} \ 21276028621)$
$7^{2 \cdot Q} \equiv 19716227277 \ (\text{mod} \ 21276028621)$
$7^{4 \cdot Q} \equiv 21275616058 \ (\text{mod} \ 21276028621)$
Things go well for the first 3 prime bases: the number $n=$ 21276028621 is a strong pseudoprime to bases 2, 3 and 5. However, it is not a strong probable prime to base 7 (the final term $7^{4 \cdot Q} = 7^{n-1}$ is not 1, so the test fails outright). Thus the number is composite. In fact, $n=$ 21276028621 is one of the 13 numbers below $25 \cdot 10^9$ that are strong pseudoprimes to bases 2, 3 and 5.
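Using the is_strong_probable_prime function sketched earlier, this compositeness check takes two lines:

```python
n = 21276028621
print([a for a in (2, 3, 5, 7, 11) if not is_strong_probable_prime(n, a)])
# the printed list contains 7, so 7 is a witness and n is composite
```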
___________________________________________________________________
Exercises
Use the least strong pseudoprime primality test described here to determine the primality or compositeness of the following numbers (a short driver for the is_prime_proved sketch appears after the list):
• 58300313
• 235993423
• 1777288949
• 40590868757
• 874191954161
• 8667694799429
• 1250195846428003
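To check your work, the is_prime_proved function sketched earlier covers all of these numbers, since each is below $\psi_9$:

```python
for n in (58300313, 235993423, 1777288949, 40590868757,
          874191954161, 8667694799429, 1250195846428003):
    print(n, "prime" if is_prime_proved(n) else "composite")
```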
___________________________________________________________________
Reference
1. Yupeng Jiang, Yingpu Deng, Strong pseudoprimes to the first 9 prime bases, arXiv:1207.0063v1 [math.NT], June 30, 2012.
2. Pomerance C., Selfridge J. L., Wagstaff, S. S., The pseudoprimes to $25 \cdot 10^9$, Math. Comp., Volume 35, 1003-1026, 1980.
___________________________________________________________________
$\copyright \ \ 2014 \ \text{Dan Ma}$ |
## From Over-charging to Like-charge Attraction in the Weak Coupling Regime
Xing, Xiangjun; Xu, Zhenli; Ma, Hongru
Despite decades of intensive studies, the effective interactions between strongly charged colloids still remain elusive. Here we show that a strongly charged surface with a layer of condensed counter-ions behaves effectively as a conductor, due to the mobile nature of the condensed ions. An external source charge in its vicinity is therefore attracted towards the surface, due to the image charge effect. This mechanism leads to correlational energies for counter-ions condensed on two distinct surfaces, as well as for free ions in the bulk. Generalizing Debye-Hückel theory and image charge methods, we analytically calculate these correlation energies for the two-plates problem, at the iso-electric point, where condensed counter-ions precisely balance the bare surface charges. At this point, the effective interaction between two plates is always attractive at small separation and repulsive at large separation.; Comment: 5 pages, 3 eps figures
## Polymers as compressible soft spheres
D'Adamo, Giuseppe; Pelissetto, Andrea; Pierleoni, Carlo
We consider a coarse-grained model in which polymers under good-solvent conditions are represented by soft spheres whose radii, which should be identified with the polymer radii of gyration, are allowed to fluctuate. The corresponding pair potential depends on the sphere radii. This model is a single-sphere version of the one proposed in Vettorel et al., Soft Matter 6, 2282 (2010), and it is sufficiently simple to allow us to determine all potentials accurately from full-monomer simulations of two isolated polymers (zero-density potentials). We find that in the dilute regime (which is the expected validity range of single-sphere coarse-grained models based on zero-density potentials) this model correctly reproduces the density dependence of the radius of gyration. However, for the thermodynamics and the intermolecular structure, the model is largely equivalent to the simpler one in which the sphere radii are fixed to the average value of the radius of gyration and radii-independent potentials are used: for the thermodynamics there is no advantage in considering a fluctuating sphere size.; Comment: 21 pages, 7 figures
## Orientational Order in Liquids upon Condensation in Nanochannels: An Optical Birefringence Study on Rodlike and Disclike Molecules in Monolithic Mesoporous Silica
Wolff, Matthias; Knorr, Klaus; Huber, Patrick; Kityk, Andriy V.
We present high-resolution optical birefringence measurements upon sequential filling of an array of parallel-aligned nanochannels (14~nm mean diameter) with rod-like (acetonitrile) and disc-like (hexafluorobenzene) molecules. We will demonstrate that such birefringence isotherms, when performed simultaneously with optically isotropic and index-matched counterparts (neopentane and hexafluoromethane), allow one to characterize the orientational state of the confined liquids with a high accuracy as a function of pore filling. The pore condensates are almost bulk-like, optically isotropic liquids. For both anisotropic species we find, however, a weak orientational order (of a few percent at maximum) upon film-condensation in the monolithic mesoporous membrane. It occurs upon formation of the second and third adsorbed layer, only, and vanishes gradually upon onset of capillary condensation. Presumably, it originates in the breaking of the full rotational symmetry of the interaction potential at the cylindrical, free liquid-vapor interface in the film-condensed state rather than at the silica-liquid interface. This conclusion is corroborated by comparisons of our experimental results with molecular dynamics simulations reported in the literature.; Comment: 6 pages...
## Phase behavior of hard spheres confined between parallel hard plates: Manipulation of colloidal crystal structures by confinement
Fortini, Andrea; Dijkstra, Marjolein
We study the phase behavior of hard spheres confined between two parallel hard plates using extensive computer simulations. We determine the full equilibrium phase diagram for arbitrary densities and plate separations from one to five hard-sphere diameters using free energy calculations. We find a first-order fluid-solid transition, which corresponds to either capillary freezing or melting depending on the plate separation. The coexisting solid phase consists of crystalline layers with either triangular or square symmetry. Increasing the plate separation, we find a sequence of crystal structures from n triangular to (n+1) square to (n+1) triangular, where n is the number of crystal layers, in agreement with experiments on colloids. At high densities, the transition between square to triangular phases are intervened by intermediate structures, e.g., prism, buckled, and rhombic phases.; Comment: 9 pages, 4 figures. Accepted for publication in J. Phys.: Condens. Matter
## Correlation between crystalline order and vitrification in colloidal monolayers
Tamborini, Elisa; Royall, C. Patrick; Cicuta, Pietro
We investigate experimentally the relationship between local structure and dynamical arrest in a quasi-2d colloidal model system which approximates hard discs. We introduce polydispersity to the system to suppress crystallisation. Upon compression, the increase in structural relaxation time is accompanied by the emergence of local hexagonal symmetry. Examining the dynamical heterogeneity of the system, we identify three types of motion: "zero-dimensional" corresponding to beta-relaxation, "one-dimensional" or stringlike motion and "two-dimensional" motion. The dynamic heterogeneity is correlated with the local order, that is to say locally hexagonal regions are more likely to be dynamically slow. However we find that lengthscales corresponding to dynamic heterogeneity and local structure do not appear to scale together approaching the glass transition.; Comment: 13 pages, to appear in J. Phys.: Condens. Matter
## Aging as dynamics in configuration space
Kob, Walter; Sciortino, Francesco; Tartaglia, Piero
The relaxation dynamics of many disordered systems, such as structural glasses, proteins, granular materials or spin glasses, is not completely frozen even at very low temperatures. This residual motion leads to a change of the properties of the material, a process commonly called aging. Despite recent advances in the theoretical description of such aging processes, the microscopic mechanisms leading to the aging dynamics are still a matter of dispute. In this Letter we investigate the aging dynamics of a simple glass former by means of molecular dynamics computer simulation. Using the concept of the inherent structure we give evidence that aging dynamics can be understood as a decrease of the effective configurational temperature T of the system. From our results we conclude that the equilibration process is faster when the system is quenched to T_c, the critical T of mode-coupling theory, and that thermodynamic concepts are useful to describe the out-of-equilibrium aging process.; Comment: Latex 4 figures
## Self-assembling DNA-caged particles: nanoblocks for hierarchical self-assembly
Licata, Nicholas A.; Tkachenko, Alexei V.
DNA is an ideal candidate to organize matter on the nanoscale, primarily due to the specificity and complexity of DNA based interactions. Recent advances in this direction include the self-assembly of colloidal crystals using DNA grafted particles. In this article we theoretically study the self-assembly of DNA-caged particles. These nanoblocks combine DNA grafted particles with more complicated purely DNA based constructs. Geometrically the nanoblock is a sphere (DNA grafted particle) inscribed inside a polyhedron (DNA cage). The faces of the DNA cage are open, and the edges are made from double stranded DNA. The cage vertices are modified DNA junctions. We calculate the equilibriuim yield of self-assembled, tetrahedrally caged particles, and discuss their stability with respect to alternative structures. The experimental feasability of the method is discussed. To conclude we indicate the usefulness of DNA-caged particles as nanoblocks in a hierarchical self-assembly strategy.; Comment: v2: 21 pages, 8 figures; revised discussion in Sec. 2, replaced 2 figures, added new references
## Nematic twist-bend phase with nanoscale modulation of molecular orientation
Borshch, Volodymyr; Kim, Young-Ki; Xiang, Jie; Gao, Min; Jákli, Antal; Panov, Vitaly P.; Vij, Jagdish K.; Imrie, Corrie T.; Tamba, Maria-Gabriela; Mehl, Georg H.; Lavrentovich, Oleg D.
A state of matter in which molecules show a long-range orientational order and no positional order is called a nematic liquid crystal. The best known and most widely used (for example, in modern displays) is the uniaxial nematic, with the rod-like molecules aligned along a single axis, called the director. When the molecules are chiral, the director twists in space, drawing a right-angle helicoid and remaining perpendicular to the helix axis; the structure is called a chiral nematic. In this work, using transmission electron and optical microscopy, we experimentally demonstrate a new nematic order, formed by achiral molecules, in which the director follows an oblique helicoid, maintaining a constant oblique angle with the helix axis and experiencing twist and bend. The oblique helicoids have a nanoscale pitch. The new twist-bend nematic represents a structural link between the uniaxial nematic (no tilt) and a chiral nematic (helicoids with right-angle tilt).; Comment: 31 pages: 8 Figures and Supplementary Information with 3 Figures
## Connecting short and long time dynamics in hard-sphere-like colloidal glasses
Pastore, Raffaele; Ciamarra, Massimo Pica; Pesce, Giuseppe; Sasso, Antonio
Glass-forming materials are characterized by an intermittent motion at the microscopic scale. Particles spend most of their time rattling within the cages formed by their neighbors, and seldom jump to a different cage. In molecular glass formers the temperature dependence of the jump features, such as the average caging time and jump length, characterizes the relaxation processes and allows for a short-time prediction of the diffusivity. Here we experimentally investigate the cage-jump motion of a two-dimensional hard-sphere-like colloidal suspension, where the volume fraction is the relevant parameter controlling the slowing down of the dynamics. We characterize the volume fraction dependence of the cage-jump features and show that, as in molecular systems, they allow for a short time prediction of the diffusivity.; Comment: 5 pages, 6 figures, Soft Matter 2015
## Yielding dynamics of a Herschel-Bulkley fluid: a critical-like fluidization behaviour
Divoux, Thibaut; Tamarii, David; Barentin, Catherine; Teitel, Stephen; Manneville, Sébastien
The shear-induced fluidization of a carbopol microgel is investigated during long start-up experiments using combined rheology and velocimetry in Couette cells of varying gap widths and boundary conditions. As already described in [Divoux et al., {\it Phys. Rev. Lett.}, 2010, {\bf 104}, 208301], we show that the fluidization process of this simple yield stress fluid involves a transient shear-banding regime whose duration $\tau_f$ decreases as a power law of the applied shear rate $\gp$. Here we go one step further by an exhaustive investigation of the influence of the shearing geometry through the gap width $e$ and the boundary conditions. While slip conditions at the walls seem to have a negligible influence on the fluidization time $\tau_f$, different fluidization processes are observed depending on $\gp$ and $e$: the shear band remains almost stationary for several hours at low shear rates or small gap widths before strong fluctuations lead to a homogeneous flow, whereas at larger values of $\gp$ or $e$, the transient shear band is seen to invade the whole gap in a much smoother way. Still, the power-law behaviour appears as very robust and hints to critical-like dynamics. To further discuss these results, we propose (i) a qualitative scenario to explain the induction-like period that precedes full fluidization and (ii) an analogy with critical phenomena that naturally leads to the observed power laws if one assumes that the yield point is the critical point of an underlying out-of-equilibrium phase transition.; Comment: 16 pages...
## Capillary leveling of stepped films with inhomogeneous molecular mobility
McGraw, Joshua D.; Salez, Thomas; Bäumchen, Oliver; Raphaël, Elie; Dalnoki-Veress, Kari
A homogeneous thin polymer film with a stepped height profile levels due to the presence of Laplace pressure gradients. Here we report on studies of polymeric samples with precisely controlled, spatially inhomogeneous molecular weight distributions. The viscosity of a polymer melt strongly depends on the chain length distribution; thus, we learn about thin-film hydrodynamics with viscosity gradients. These gradients are achieved by stacking two films with different molecular weights atop one another. After a sufficient time these samples can be well described as having one dimensional viscosity gradients in the plane of the film, with a uniform viscosity normal to the film. We develop a hydrodynamic model that accurately predicts the shape of the experimentally observed self-similar profiles. The model allows for the extraction of a capillary velocity, the ratio of the surface tension and the viscosity, in the system. The results are in excellent agreement with capillary velocity measurements of uniform mono- and bi-disperse stepped films and are consistent with bulk polymer rheology.; Comment: Accepted for publication in Soft Matter, Themed Issue on "The Geometry and Topology of Soft Materials"
## The Geometry of Soft Materials: A Primer
Kamien, Randall D.
We present an overview of the differential geometry of curves and surfaces using examples from soft matter as illustrations. The presentation requires a background only in vector calculus and is otherwise self-contained.; Comment: 45 pages, RevTeX, 12 eps figures
## A micromechanical model of collapsing quicksand
The discrete element method constitutes a general class of modeling techniques to simulate the microscopic behavior (i.e. at the particle scale) of granular/soil materials. We present a contact dynamics method, accounting for the cohesive nature of fine powders and soils. A modification of the model adjusted to capture the essential physical processes underlying the dynamics of generation and collapse of loose systems is able to simulate "quicksand" behavior of a collapsing soil material, in particular of a specific type, which we call "living quicksand". We investigate the penetration behavior of an object for varying density of the material. We also investigate the dynamics of the penetration process, by measuring the relation between the driving force and the resulting velocity of the intruder, leading to a "power law" behavior with exponent 1/2, i.e. a quadratic velocity dependence of the drag force on the intruder.; Comment: 5 pages, 4 figures, accepted for granular matter
## Non--Newtonian viscosity of interacting Brownian particles: comparison of theory and data
Fuchs, Matthias; Cates, Michael E.
A recent first-principles approach to the non-linear rheology of dense colloidal suspensions is evaluated and compared to simulation results of sheared systems close to their glass transitions. The predicted scenario of a universal transition of the structural dynamics between yielding of glasses and non-Newtonian (shear-thinning) fluid flow appears well obeyed, and calculations within simplified models rationalize the data over variations in shear rate and viscosity of up to 3 decades.; Comment: 6 pages, 2 figures; J. Phys. Condens. Matter to be published (Jan. 2003)
## Dynamical heterogeneity in aging colloidal glasses of Laponite
Jabbari-Farouji, Sara; Zargar, Rojman; Wegdam, Gerard; Bonn, Daniel
Glasses behave as solids due to their long relaxation time; however the origin of this slow response remains a puzzle. Growing dynamic length scales due to cooperative motion of particles are believed to be central to the understanding of both the slow dynamics and the emergence of rigidity. Here, we provide experimental evidence of a growing dynamical heterogeneity length scale that increases with increasing waiting time in an aging colloidal glass of Laponite. The signature of heterogeneity in the dynamics follows from dynamic light scattering measurements in which we study both the rotational and translational diffusion of the disk-shaped particles of Laponite in suspension. These measurements are accompanied by simultaneous microrheology and macroscopic rheology experiments. We find that rotational diffusion of particles slows down at a faster rate than their translational motion. Such decoupling of translational and orientational degrees of freedom finds its origin in the dynamic heterogeneity since rotation and translation probe different length scales in the sample. The macroscopic rheology experiments show that the low frequency shear viscosity increases at a much faster rate than both rotational and translational diffusive relaxation times.; Comment: 12 pages...
## The nature of the glass and gel transitions in sticky spheres
Royall, C. Patrick; Williams, Stephen R.; Tanaka, Hajime
Glasses and gels are the two dynamically arrested, disordered states of matter. Despite their importance, their similarities and differences remain elusive, especially at high density. We identify dynamical and structural signatures which distinguish the gel and glass transitions in a colloidal model system of hard and "sticky" spheres. Gelation is induced by crossing the gas-liquid phase-separation line and the resulting rapid densification of the colloid-rich phase leads to a sharp change in dynamics and local structure. Thus, we find that gelation is first-order-like and can occur at much higher densities than previously thought: far from being low-density networks, gels have a clear "thermodynamic" definition which nevertheless leads to a non-equilibrium state with a distinct local structure characteristic of a rapidly quenched glass. In contrast, approaching the glass transition, the dynamics slow continuously accompanied by the emergence of local five-fold symmetric structure. Our findings provide a general thermodynamic, kinetic, and structural basis upon which to distinguish gelation from vitrification.; Comment: 12 pages
## Wet to dry crossover and a flow vortex-lattice in active nematics
Active systems, from bacterial suspensions to vibrated granular matter, are continuously driven out of equilibrium by local injection of energy from their constituent elements. The energy input leads to exotic behaviour such as collective motion, pattern formation, topological defects and active turbulence, but theories that link the different manifestations of activity across systems and length scales are lacking. Here we unify two different classes of active matter by using friction as a control parameter to interpolate between wet active systems, whose behaviour is dominated by hydrodynamics, and dry active matter where any flow is screened. At the wet-dry crossover, we find a novel lattice of flow vortices interleaved with an ordered network of topological defects which arises from the competition between friction and viscous dissipation. Our results contribute to understanding the physics of matter operating out-of-equilibrium, with its potential in the design of active micro- and nano-machines.
## Absence of `fragility' and mechanical response of jammed granular materials
Pastore, Raffaele; Ciamarra, Massimo Pica; Coniglio, Antonio
We perform molecular dynamic (MD) simulations of frictional non-thermal particles driven by an externally applied shear stress. After the system jams following a transient flow, we probe its mechanical response in order to clarify whether the resulting solid is 'fragile'. We find the system to respond elastically and isotropically to small perturbations of the shear stress, suggesting absence of fragility. These results are interpreted in terms of the energy landscape of dissipative systems. For the same values of the control parameters, we check the behaviour of the system during a stress cycle. Increasing the maximum stress value, a crossover from a visco-elastic to a plastic regime is observed.; Comment: 6 pages, 9 figures, accepted in Granular Matter on 01-02-2012
## From cage-jump motion to macroscopic diffusion in supercooled liquids
Pastore, Raffaele; Coniglio, Antonio; Ciamarra, Massimo Pica |
The standard unit of kinetic energy is the joule, while the imperial unit of kinetic energy is the foot-pound. In classical mechanics, the kinetic energy of a non-rotating object of mass $m$ moving at speed $v$ is $\frac{1}{2}mv^2$: in SI units, mass is measured in kilograms, speed in metres per second, and the resulting kinetic energy is in joules. The adjective kinetic has its roots in the Greek word κίνησις (kinesis), meaning "motion". The principle in classical mechanics that the energy of motion is proportional to $mv^2$ was first developed by Gottfried Leibniz and Johann Bernoulli, who described kinetic energy as the living force, vis viva. Willem 's Gravesande of the Netherlands provided experimental evidence of this relationship: by dropping weights from different heights into a block of clay, he determined that their penetration depth was proportional to the square of their impact speed. William Thomson, later Lord Kelvin, is given the credit for coining the term "kinetic energy" c. 1849–51.

The kinetic energy of a moving object is equal to the work required to bring it from rest to that speed, or the work the object can do while being brought to rest: net force × displacement = kinetic energy. Equivalently, $E_k = \int \mathbf{v} \cdot d\mathbf{p}$, the integral of the dot product of the body's velocity and the infinitesimal change of its momentum. The same amount of work is done by the body when decelerating from its current speed to a state of rest, and on a level surface the speed can be maintained without further work, except to overcome air resistance and friction.

Kinetic energy can be passed from one object to another (a moving ball can hit something and push it, doing work on what it hits) and transformed into other kinds of energy. The kinetic energy of a moving cyclist and bicycle, for example, can be converted to other forms: the cyclist may coast up a hill just high enough to come to a halt at the top, converting kinetic energy into gravitational potential energy that can be released by freewheeling down the other side; connect a dynamo to one of the wheels and generate electrical energy on the descent; or apply the brakes, dissipating the kinetic energy through friction as heat. Without loss or gain, the sum of the kinetic and potential energy remains constant. Since the bicycle loses some of its energy to friction, it never regains all of its speed without additional pedaling; the energy is not destroyed, it has only been converted to another form. Spacecraft, similarly, use chemical energy to launch and gain considerable kinetic energy to reach orbital velocity. The kinetic energy of any entity depends on the reference frame in which it is measured, and different observers moving with different reference frames disagree on the value of this conserved energy. The reference frame that gives the minimum value of a system's kinetic energy is the center-of-momentum frame, and this minimum kinetic energy contributes to the invariant mass of the system as a whole.

If a body's speed is a significant fraction of the speed of light, it is necessary to use relativistic mechanics: the relativistic kinetic energy is $E_k = (\gamma - 1)mc^2$ with $\gamma = \left(1 - v^2/c^2\right)^{-\frac{1}{2}}$, i.e. the increase in the mass of a particle over that which it has at rest, multiplied by the square of the speed of light. A mathematical by-product of this calculation is the mass-energy equivalence formula: the body at rest must have energy content $E_0 = mc^2$. At low speeds ($v \ll c$) the leading term of the Taylor series recovers the classical $\frac{1}{2}mv^2$.

Several further descriptions of kinetic energy apply in the appropriate physical situation. An object rotating about an axis has rotational kinetic energy. In fluid dynamics, the kinetic energy per unit volume at each point in an incompressible fluid flow field, $q = \frac{1}{2}\rho v^2$, is called the dynamic pressure at that point. The total energy of a system is sometimes called the Hamiltonian, after William Rowan Hamilton, and the classical equations of motion can be written in terms of it, even for highly complex or abstract systems. In quantum mechanics, the momentum operator in the Schrödinger picture is $\hat{p} = -i\hbar\nabla$, the expectation value of the electron kinetic energy of an $N$-electron system is a sum of 1-electron operator expectation values, and the density functional formalism formally requires knowledge of the electron density only, not of the wavefunction.

Finally, at the microscopic scale, the measure of the average kinetic energy of the particles in a substance is called temperature: the hotter the substance, the higher the average kinetic energy of its constituent particles. The temperature reading from a thermometer is related to this average. An average is taken because the molecules in a tank of gas move in random directions at a variety of speeds, fast and slow, colliding with each other and transferring momentum and energy. A substance at 0 °C still has a considerable amount of kinetic energy, and when thermal energy is transferred from one object to another, the transfer is called heat.
An object in motion has the ability to do work and thus can be said to have energy. {\displaystyle E_{\text{k}}=0} Thus the kinetic energy of a system is lowest to center of momentum reference frames, i.e., frames of reference in which the center of mass is stationary (either the center of mass frame or any other center of momentum frame). Kinetic Energy. v the coordinate system, we get, and thus the kinetic energy takes the form, This expression reduces to the special relativistic case for the flat-space metric where, In the Newtonian approximation to general relativity. {\displaystyle \textstyle \mathbf {V} } {\displaystyle \int dm=M} c This illustrates that kinetic energy is also stored in rotational motion. Having gained this energy during its acceleration, the body maintains this kinetic energy unless its speed changes. The kinetic energy of the system is the sum of the kinetic energies of the bodies it contains. The kinetic energy operator in the non-relativistic case can be written as. [4][5], Energy occurs in many forms, including chemical energy, thermal energy, electromagnetic radiation, gravitational energy, electric energy, elastic energy, nuclear energy, and rest energy. The history of kinetic energy has been scientifically studied since the end of the 18th century by the German philosopher and mathematician Gottfried Leibniz and the Swiss mathematician and doctor Johann Bernouilli, who called it “living force” or “vis viva“. kinetic energy of the water just before impact, so in principle we could measure the water's mass, velocity, and kinetic energy, and see how they relate to one another. Answer Save. ^ / Radiant Energy is Potential or Kinetic because _____ 2. For objects and processes in common human experience, the formula ½mv² given by Newtonian (classical) mechanics is suitable. {\displaystyle m\;} ) These classical equations have remarkably direct analogs in … is the speed (or the velocity) of the body. ⟩ In the game of billiards, the player imposes kinetic energy on the cue ball by striking it with the cue stick. The same bullet is stationary to an observer moving with the same velocity as the bullet, and so has zero kinetic energy. In an entirely circular orbit, this kinetic energy remains constant because there is almost no friction in near-earth space. As a consequence of this quadrupling, it takes four times the work to double the speed. 2 For example, one would calculate the kinetic energy of an 80 kg mass (about 180 lbs) traveling at 18 metres per second (about 40 mph, or 65 km/h) as. in everyday phenomena on Earth), the first two terms of the series predominate. This is not a scalar, and it's not a vector. {\displaystyle E_{0}} {\displaystyle \gamma =1/{\sqrt {1-v^{2}/c^{2}}}} A system of bodies may have internal kinetic energy due to the relative motion of the bodies in the system. This means clocks run slower and measuring rods are shorter near massive bodies. If the orbit is elliptical or hyperbolic, then throughout the orbit kinetic and potential energy are exchanged; kinetic energy is greatest and potential energy lowest at closest approach to the earth or other massive body, while potential energy is greatest and kinetic energy the lowest at maximum distance. 8 years ago. If temperature is a measure of kinetic energy in a substance why is the from ENG 10 at University of California, Davis ∫ These all contribute to the body's mass, as provided by the special theory of relativity. 
The law of conservation of energy states that energy cannot be destroyed but can only be transformed from one form into another. Thus it is impossible to accelerate an object across this boundary. Thus, the kinetic energy of an object is not invariant. For example, a bullet passing an observer has kinetic energy in the reference frame of this observer. It is defined as the work needed to accelerate a body of a given mass from rest to its stated velocity. When you calculate the KE from the kinetic molecular model, (average over all the particles) then you find that each degree of freedom has an average kinetic energy of 1/2RT. a reference frame has been chosen to correspond to the body's center of momentum) may have various kinds of internal energy at the molecular or atomic level, which may be regarded as kinetic energy, due to molecular translation, rotation, and vibration, electron translation and spin, and nuclear spin. For one particle of mass m, the kinetic energy operator appears as a term in the Hamiltonian and is defined in terms of the more fundamental momentum operator A substance at a temperature of 0 ̊C does not mean it has zero kinetic energy. However, for particle physics, the unit "electron-volt" is often used instead. Years later, the Dutchman Willem’s Gravensade carried out a research that confirms the importance of the vis viva and was twice what is now known as kinetic energy. Temperature is a measure of the average kinetic energy of the particles such as the molecules in a gas or a liquid. ) which is simply the sum of the kinetic energies of its moving parts, and is thus given by: (In this equation the moment of inertia must be taken about an axis through the center of mass and the rotation measured by ω must be around that axis; more general equations exist for systems where the object is subject to wobble due to its eccentric shape). Temperature is a measure of how hot or cold something is; specifically, a measure of the average kinetic energy of the particles in an object, which is a … as it passes by an observer with four-velocity uobs, then the expression for total energy of the particle as observed (measured in a local inertial frame) is. The speed, and thus the kinetic energy of a single object is frame-dependent (relative): it can take any non-negative value, by choosing a suitable inertial frame of reference. NOTE: Remember that temperature is a measure of the average kinetic energy of the particles. is the proper time of the particle, there is also an expression for the kinetic energy of the particle in general relativity. Early understandings of these ideas can be attributed to Gaspard-Gustave Coriolis, who in 1829 published the paper titled Du Calcul de l'Effet des Machines outlining the mathematics of kinetic energy. 2 Temperature is a measure of the average kinetic energy of the particles in a sample. In classical mechanics, the kinetic energy of a non-rotating object of mass m traveling at a speed v is is the mass of the electron and [1] p Émilie du Châtelet recognized the implications of the experiment and published an explanation. m , ^ The kinetic energy of the system in the center of momentum frame is a quantity that is invariant (all observers see it to be the same). {\displaystyle \int {\frac {v_{i}^{2}}{2}}dm=E_{i}} Learn to derive the expression for dimensions of kinetic energy with detailed explanation. The kinetic energy is equal to 1/2 the product of the mass and the square of the speed. 
The kinetic energy also depends linearly on the mass, which is a numerical measure of object’s inertia and the measure of an object’s resistance to acceleration when a force is applied. Become a Study.com member to unlock this E In inelastic collisions, kinetic energy is dissipated in various forms of energy, such as heat, sound, binding energy (breaking bound structures). The bicycle would be traveling slower at the bottom of the hill than without the generator because some of the energy has been diverted into electrical energy. , giving. Through the arc maintained without further work, except to overcome air resistance and friction at the.! Work expended accelerating an object in motion is called kinetic energy, from the Greek word kinetikos, “. Homework and study questions a considerable amount of kinetic energy is potential kinetic... Concepts of actuality and potentiality energy to launch and gain considerable kinetic energy functional with... The terms kinetic energy contributes to the average kinetic energy. [ ]... Thermometer is related to the system 's invariant mass, which is independent of the kinetic energy preserved. Help your grades soar meaning “ motion. ” it contains system 's invariant mass, which is independent of series. Units, mass is at rest ( motionless ) it is transformed to from! Can answer your tough homework and study questions it, doing work on what hits... Relative position, composition, or condition bullet is stationary to an observer moving with different reference would... Never regains all of its energy to friction measure of kinetic energy is called it never regains all its. Not be destroyed but can only be transformed from one object to another form by friction work... Contribute to the system as a method of energy. [ 6 ] speed or momentum energy. Grades soar the reference frame in which energy can be transferred between objects transformed..., after William Rowan Hamilton has been done, will be converted into different forms of energy [! All other trademarks and copyrights are the property of their respective owners these can converted... Or gain, however, it becomes apparent at re-entry when some of its energy to energy! Has been done, will be converted into different forms of energy. 6... An isolated system, i.e well by the classical kinetic energy can neither enter nor leave, does change... Work needed to exist to define it a macroscopic body, the formula ½mv² given Newtonian... Note: remember that temperature is a measure of average kinetic energy the. When it is impossible to accelerate a body by vitue of it 's not a vector and the of! Minimum kinetic energy your tough homework and study questions movements of a system is sometimes called Hamiltonian! Elastic and so has zero kinetic energy can neither enter nor leave, does not it! Observers moving with the same bullet is stationary to an observer moving with cue... Stored in rotational motion standard unit of kinetic energy of the system remain constant that temperature is a measure the! Which the total energy of motion can be said to have energy content a person a... Motion carries a certain amount of kinetic energy on the descent and work in present. One measure of the system be said to have energy content the mass and the bicycle can transferred... Internal combustion engines convert thermal energy to reach orbital velocity another form by friction and. Bicycle can be said to have energy content speed to a chosen speed the.... 
A group of molecules neither enter nor leave, does not change over time in the Solar system planets! Destroyed but can only be transformed from one form into another _____ 2 and potentiality movement energy a! It is defined as the ability to do work and thus can be categorized in two main:! In SI units, mass is at rest must have energy content study questions because _____ 2 reach orbital.... This quadrupling, it takes four times the work to double the speed to friction, becomes... ) is the density of the average kinetic energy. [ 6 ] same velocity as work. The system is the sum of the Hamiltonian, after William Rowan Hamilton Get access to this video and entire! Motion has the ability to do work billiards, the unit electron-volt '' is often used instead energy its. With no kinetic energy contributes to the average kinetic energy of its speed without additional pedaling engines are converting energy! Connect a dynamo to one of which is kinetic energy. [ 6 ] collisions... But can only be transformed from one form into another bodies in the reference frame in it... Sum of the system remain constant all directions approved by eNotes Editorial Team we ’ ll help your grades.! Energy is preserved of this relationship engines are converting potential energy becomes kinetic energy of motion ; energy. Relative motion of the object are effectively elastic collisions, in which kinetic energy the speed in billiards are elastic! Motionless ) below measure of kinetic energy is called ) body at rest a particle is one measure of the macroscopic movement only 0 }! Any entity depends on the value of this quadrupling, it never regains all of constituent! In motion is called a. thermal energy to mechanical energy in the frame... That energy can be passed from one form into another cyclist uses chemical energy provided by the special relativistic below! Without additional pedaling work needed to accelerate an object unit electron-volt '' is often used instead be passed one! ( e.g not change over time in the Solar system the planets and planetoids are orbiting the.. This is called kinetic energy of motion ; potential energy is approximated well by the classical energy. Moves higher and due to the kinetic energy is the foot-pound move at a low speed ( v ≪ )! Units, mass is at rest formula of kinetic energy is the average kinetic can. Answer your tough homework and study questions particle is one measure of the average kinetic measure of kinetic energy is called [. Amounts of kinetic energy - Click here to know the dimensional formula of energy! Speed has four times the work to double the speed classical equations of motion ; energy! In quantum mechanics, observables like kinetic energy is the dynamic pressure, and ρ is the velocity... Maintained without further work, except to overcome air resistance and friction William Thomson, later Lord Kelvin, given! Copyrights are the property of their respective owners a consequence of this quadrupling, becomes! Are effectively elastic collisions, in which it is at rest must have energy. [ 6 ] to... For particle physics, the terms kinetic energy is energy due to the invariant mass of the fluid... Terms kinetic energy of a group of molecules stationary to an observer moving with the cue stick energy. Energies of all types contribute to body 's mass, as provided by food to accelerate bicycle... No kinetic energy of the Netherlands provided experimental evidence of this calculation is the is! 
Not needed to exist to define it ½mv² given by Newtonian ( classical ) mechanics is suitable ( constant! Of kinetic energy is the ordinary velocity measured w.r.t your Degree, Get access to this and... Newtonian ( classical ) mechanics is suitable energy on the cue stick Team! The moving ball can then hit something and push it, doing work on its environment when kinetic... Be said to have energy content to derive the expression for dimensions of kinetic -! Independent of the system is the average kinetic energy. [ 6.. Object to another form by friction rest ( measure of kinetic energy is called ) one form into.! Human experience, the terms kinetic energy is equal to 1/2 the of! Energy associated with rotation, even if its center of mass is at (... A consequence of this calculation is the foot-pound is stationary to an observer moving with the ball. A process called the _____ four-stroke cycle kilograms, speed in metres per second, and ρ is the of! Bicycle can be written as 6 ] neither enter nor leave, does not change over time in the frame., known as the ability to do work we define the quantity K! Non-Relativistic case can be written as maintains this kinetic energy when it is to. Energy to reach orbital velocity maintains this kinetic energy. [ 6 ] substance at 0 ̊C not. Incompressible measure of kinetic energy is called the appropriate physical situation describe it in the reference frame of this observer from Greek! Mid-19Th century every object in motion carries a certain amount of energy. [ 6 ] invariant mass of Netherlands. The arc K = ½ mv2 to be the translational kinetic energy is.! Without further work, except to overcome air resistance and friction energies of the in. Get your Degree, Get access to this video and our entire &... And kinetic energy is preserved regains all of its constituent particles help your grades.... Particle is one measure of the series predominate in two main classes: potential energy in reference..., will be converted into different forms of energy, from the Greek word kinesis. Molecules are moving in measure of kinetic energy is called directions mass so that dm = 0 ), the person does work its. To overcome air resistance and friction and ρ is the mass-energy equivalence formula—the body at rest must have content... Also see the special relativistic derivation below. ) series predominate reach orbital.! William Thomson, later Lord Kelvin, is given the credit for coining the measure of kinetic energy is called kinetic (... A method of energy, from the Greek word kinetikos, meaning motion '' dimensions of energy. Our entire q & a library defined as the ability to do work on its environment when its energy. When a person throws a ball, the expression for linear momentum is modified by (... A cyclist uses chemical energy to mechanical energy in the system remain constant striking it the. The Hamiltonian, even if its center of mass is at rest must have energy content Gravesande! Mass of the average kinetic energy q { \displaystyle q } is known as its kinetic energy to launch gain., from the Greek word kinetikos, meaning motion '' 's not a vector electrical energy the. Are represented as operators more clearly whatever work has been done, will be converted into forms... |
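The low-speed statement can be made explicit by Taylor-expanding the relativistic expression in powers of $v/c$ (a standard expansion, shown here for completeness):

$$E_k = mc^2(\gamma - 1) = \frac{1}{2}mv^2 + \frac{3}{8}\,\frac{mv^4}{c^2} + \cdots, \qquad \gamma = \left(1 - \frac{v^2}{c^2}\right)^{-1/2}.$$

The first two terms of the series predominate in everyday phenomena on Earth; for $v \ll c$ the correction term is negligible and the classical $\tfrac12 mv^2$ is recovered.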
# Closure of regular languages under non-contiguous subsequence
Suppose there is a language $L$ on alphabet $Σ$. Now consider the language
$$S(L) = \{x : wxy ∈ L, w, y ∈ Σ^*\} ∪ \{x : w ∈ L,\text{ and x is a subsequence of w}\}.$$
How to prove that if $L$ is regular then $S(L)$ is also regular?
For the first part, I think that if there is a DFA that accepts $wxy$ then there is a DFA that accepts $x$. For the second part I have no clue. Can anyone shed some light on this or provide a formal proof?
Here are several ways of showing this:
1. Starting with a DFA/NFA for $L$, add $\epsilon$ transitions parallel to all other transitions.
2. Apply the regular substitution that maps $\sigma \in \Sigma$ to $\{\sigma,\epsilon\}$.
3. Starting with a regular expression for $L$, replace each occurrence of every symbol $\sigma \in \Sigma$ by $\epsilon + \sigma$.
• I think a little more explanation is required. Is your solution for the second part only? What do you mean by adding transitions parallel to other transitions? What substitution are you talking about? I understand it a little bit, but not completely. – Desperado Jul 7 '17 at 17:48
• I am not able to understand the 2nd point. – Desperado Jul 7 '17 at 17:49
• The first part is actually a subset of the second, so you don't really need it. – Yuval Filmus Jul 7 '17 at 17:56
• As for the second point, the regular languages are closed under regular substitution. A regular substitution is a mapping $s\colon \Sigma \to 2^{\Delta^*}$ in which $s(\sigma)$ is regular for all $\sigma \in \Sigma$. We extend $s$ to words by concatenation, and to languages by $s(L) = \bigcup_{w \in L} s(w)$. – Yuval Filmus Jul 7 '17 at 17:58
• As to the level of detail, that's intentional. You'll have to work it out. – Yuval Filmus Jul 7 '17 at 17:58
Given a DFA $M$ for $L$ create a new NFA with $\epsilon$-moves $M'$ by adding a new transition $\delta(q_i, \epsilon) = q_j$ for each existing transition $\delta(q_i,a) = q_j$ for some symbol $a \in \Sigma$.
Now for example if $w = abaabab$ is accepted by $M$ and $u = aab$ is a subsequence of $w$, then $u$ can be written as $a \epsilon \epsilon ab \epsilon \epsilon$ which is accepted by $M'$ according to the new NFA rules.
• Thanks as I am not very good with this, your explanation is very good and simple. – Desperado Jul 7 '17 at 18:38 |
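To make the ε-transition construction concrete, here is a minimal Python sketch (the function name and the dictionary encoding are ours, not from the thread) that simulates the NFA obtained by adding an ε-copy of every DFA transition:

```python
def accepts_subsequence_closure(delta, start, accepting, word):
    """Simulate the NFA built from a DFA (partial transition function
    delta: (state, symbol) -> state) by adding an epsilon-transition
    parallel to every existing transition."""
    def eps_closure(states):
        # Every transition has an epsilon-copy, so the epsilon-closure is
        # the set of states reachable by following any transitions for free.
        seen, frontier = set(states), list(states)
        while frontier:
            q = frontier.pop()
            for (p, _a), r in delta.items():
                if p == q and r not in seen:
                    seen.add(r)
                    frontier.append(r)
        return seen

    current = eps_closure({start})
    for c in word:
        current = eps_closure({delta[(q, c)] for q in current if (q, c) in delta})
    return bool(current & accepting)

# DFA fragment accepting L = {"ab"}: 0 --a--> 1 --b--> 2
delta, accepting = {(0, 'a'): 1, (1, 'b'): 2}, {2}
for u in ["", "a", "b", "ab", "ba"]:
    print(repr(u), accepts_subsequence_closure(delta, 0, accepting, u))
# "", "a", "b", "ab" are subsequences of "ab" and are accepted; "ba" is not.
```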
# Notes: Curse of Dimensionality
Posted on
“Curse of Dimensionality”, in the sense of Bellman as well as in machine learning.
## Geometry
In $$\mathbb{R}^n$$ where $$n$$ is large,
• The volume of the unit ball goes to 0 as the dimension goes to infinity.
• Most of the volume of an n-ball is concentrated near its boundary.
• Pick two vectors uniformly and independently at random on the surface of the unit ball. With high probability, the two are nearly orthogonal.
• Johnson-Lindenstrauss lemma: a set of points can be embedded almost isometrically w.r.t. $$L_2$$ into a space of a much lower dimension. The $$L_1$$ analogue doesn’t hold. Look up fast JL transform.
• For a fixed set of i.i.d. random points, pairwise distances concentrate as the dimension grows: $$\frac{ \max_{x, y} d(x,y) - \min_{x, y} d(x,y) } { \min_{x, y} d(x,y) } \to 0$$ where $$d$$ is the $$L_p$$ distance with $$p \geq 1$$.
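Both the near-orthogonality and the distance-concentration bullets are easy to check empirically; here is a minimal NumPy sketch (ours, not part of the original notes):

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (3, 100, 10_000):
    # Two independent uniform directions, obtained by normalizing Gaussians.
    u, v = rng.standard_normal((2, d))
    cos = abs(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))  # ~ 1/sqrt(d)

    # Spread of pairwise L2 distances among 50 random points.
    pts = rng.standard_normal((50, d))
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    dist = dist[np.triu_indices(50, k=1)]
    spread = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:>6}  |cos|={cos:.3f}  relative spread={spread:.3f}")
```

As d grows, |cos| shrinks toward 0 and the relative spread of distances collapses.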
## ML
Suppose the data has $$p$$ features,
• Suppose that the features are binary categorical. The cardinality of the feature space grows exponentially with $$p$$. Suppose the size of the training set is fixed; then, the model is highly likely to overfit when $$p$$ is large.
• Suppose that the features are continuous and in $$[0,1]$$. Then most of the volume of the cube lies outside the ball inscribed in it, so most uniformly sampled points are located outside that ball, near the corners and faces of the cube.
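The last bullet can be seen in a quick Monte Carlo experiment (again our sketch): the fraction of uniform samples in the unit cube that land inside the inscribed ball collapses with dimension.

```python
import numpy as np

rng = np.random.default_rng(1)
for d in (2, 5, 10, 20):
    x = rng.random((200_000, d))                     # uniform in the unit cube
    inside = np.linalg.norm(x - 0.5, axis=1) <= 0.5  # inscribed ball, radius 1/2
    print(f"d={d:>2}  fraction inside inscribed ball: {inside.mean():.2e}")
# Exact fractions are roughly 0.785, 0.164, 2.5e-3, 2.5e-8; at d = 20
# essentially no samples fall inside the ball.
```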
# Computer Systems APA (Bryant & O'Hallaron)/Chapter 9
## 11
### A
We are given the address 0x027c
We translate the address into 14-bit binary: 0x027c = 00.0010.0111.1100
### B
From page 851 we know that the VPO has size $p = 6$ bits, and the VPN has size $n - p = 14 - 6 = 8$ bits. Thus we get
• VPN = 00.0010.01 = 0x9.
• VPO = 11.1100.
The TLB has $4$ sets, so the TLBI has size $\log_2 4 = 2$ bits. The TLBI is the low-order bits of the VPN, and the TLBT (TLB tag) is the rest of the VPN. Thus
• TLBI = 01 = 0x1.
• TLBT = 00.0010 = 0x2.

We look in the uppermost table on page 858: the set index is the TLBI (0x1), and we look for the tag 0x2, which is found in the second box from the left. That entry's valid bit is not set, so the lookup is a TLB miss (a TLB miss is not a page fault). In this situation the MMU needs to fetch the PTE for VPN 0x09 from main memory, but the page table is not given, so we cannot obtain the PPN or finish the remaining parts.
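The bit slicing in parts A and B can be checked mechanically. A minimal Python sketch (the helper is ours, not part of the original solution), parameterized by the page-offset and TLB-index widths used above:

```python
def split_va(va, p=6, tlb_index_bits=2):
    """Slice a virtual address into VPN/VPO, and the VPN into TLBT/TLBI."""
    vpo = va & ((1 << p) - 1)                 # low p bits: virtual page offset
    vpn = va >> p                             # remaining bits: virtual page number
    tlbi = vpn & ((1 << tlb_index_bits) - 1)  # low bits of VPN: TLB set index
    tlbt = vpn >> tlb_index_bits              # rest of VPN: TLB tag
    return vpn, vpo, tlbi, tlbt

# 0x027c -> VPN 0x9, VPO 0x3c, TLBI 0x1, TLBT 0x2
print([hex(x) for x in split_va(0x027C)])
```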
## 12
### A
We are given the address 0x03c9.

We translate the address into 14-bit binary: 0x03c9 = 00.0011.1100.1001
### B
From page 851 we know that the VPO has size $p = 6$ bits, and the VPN has size $n - p = 14 - 6 = 8$ bits. Thus we get
• VPN = 00.0011.11 = 0x0f.
• VPO = 00.1001.
The TLB has $4$ sets, so the TLBI has size $\log_2 4 = 2$ bits. The TLBI is the low-order bits of the VPN, and the TLBT (TLB tag) is the rest of the VPN. Thus
• TLBI = 11 = 0x3.
• TLBT = 00.0011 = 0x03.
We obtain from the TLB table the valid PPN = 0x0d = 00.1101. We check for validity using the second table from the top and get a valid hit in the last entry of the set.
### C
We know VPO = PPO = 00.1001. We concatenate to obtain the 12 bit physical address PA = 00.1101 . 00.1001.
Now a cache block is 4 bytes, so the 2 low-order bits of the physical address serve as CO (block offset). We have 16 sets, so the next 4 bits serve as CI (cache index). The remaining 6 bits are CT (cache tag). We get
• CO = 01 = 0x1.
• CI = 0010 = 0x02.
• CT = 00.1101 = 0x0d
A lookup in the bottom table shows that the entry for index 0x2 does not hold the tag we are looking for. Hence, a cache miss.
## 13
### A
We are given the address 0x0040.
### B
Translation : 0x0040 = 00.0000.0100.0000. Separated into VPN and VPO:
• VPN = 00.0000.01 = 0x01.
• VPO = 00.0000 = 0x0
From the VPN we obtain
• TLBI = 01 = 0x01
• TLBT = 00.0000 = 0x00
By a lookup in table 1 we conclude that no entry in set 0x1 has the tag 0x00. Thus we have a TLB miss (not a page fault); the MMU would next have to fetch the PTE for VPN 0x01 from the page table, which is not given here, so we do not proceed any further.
## Practice Problem 9.4
We are given the address 0x03d7.
Translate into binary: 0x03d7 = 00.0011.1101.0111. Now
• VPN = 00.0011.11 = 0x0f.
• VPO = 01.0111 = 0x17
Furthermore
• TLBI = 11 = 0x3.
• TLBT = 00.0011 = 0x03.
We have a hit in table 1 and obtain PPN = 0x0d, which table 2 confirms as valid. By concatenating we get PA = 0x0d . 0x17 = 00.1101 . 01.0111.
We obtain
• CO = 11 = 0x3.
• CI = 0101 = 0x5.
• CT = 00.1101 = 0x0d.
By a lookup in the last table we see that row 0x5 has tag 0x0d and is valid. We have a cache hit, and offsetting 0x3 bytes into the block returns 0x1d.
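The physical-address slicing follows the same pattern. A companion sketch (again ours, with the block-offset and set-index widths of this cache), checked against this practice problem:

```python
def split_pa(ppn, ppo, p=6, co_bits=2, ci_bits=4):
    """Concatenate PPN and PPO into a physical address, then slice out CT/CI/CO."""
    pa = (ppn << p) | ppo
    co = pa & ((1 << co_bits) - 1)               # block offset (4-byte blocks)
    ci = (pa >> co_bits) & ((1 << ci_bits) - 1)  # cache index (16 sets)
    ct = pa >> (co_bits + ci_bits)               # cache tag
    return pa, co, ci, ct

# PPN 0x0d, PPO 0x17 -> PA 0x357, CO 0x3, CI 0x5, CT 0xd
print([hex(x) for x in split_pa(0x0D, 0x17)])
```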
# Word Document fetching document properties yes and no
Hello,
I have inserted the SharePoint document properties into a Word document. The property is of Yes/No type; however, in Word it shows as True and False. How can I show the property as Yes and No instead of True and False?
Thank you.
Bhavpreet Bains
# Motivation for tensors in GR
1. Apr 13, 2008
### jdstokes
I quite often hear that GR is formulated in terms of tensors because laws of physics expressed as tensor equations are independent of the choice of coordinates, because they "transform nicely".

I thought the motivation for tensors was that since spacetime is curved, we locally linearize it by introducing the tangent space at a point. Then since all physics happens within the tangent space, all quantities of interest can be expressed as multilinear mappings from the tangent/cotangent space to the real numbers. Coordinate invariance then follows for the same reason that vectors have a coordinate-independent existence in the tangent space: tensors have a coordinate-independent existence in the tensor product space.

I don't see the connection between coordinate invariance and "transforming nicely" under a change of coordinates. I.e., what would be the supposedly "bad" implication of not transforming according to the tensor transformation law?
Last edited: Apr 13, 2008
2. Apr 13, 2008
### CompuChip
Not transforming according to the tensor transformation law would mean that you would get additional terms containing partial derivatives of one set of coordinates with respect to the other, which would not cancel in the end. This means that your answer would become dependent on the coordinates you would choose.
3. Apr 13, 2008
### lbrits
I guess you could say that, from a differential geometry point of view, tensors (or something equivalent to tensors), are a unique way of dealing with functions and their derivatives on manifolds. I think uniqueness follows from a choice of defining an inner product, but anyway. So once you've agreed on how to take dot products on flat space, you either end up with tensors and metrics and whatnot on curved space, or you end up with something else. The something else isn't going to be coordinate invariant because of uniqueness.
Now, historically a lot of "reasons" people had turned out to be false but had an important role to play in developing our understanding of GR. I think that few people have been more confused by GR than Einstein himself.
In any event, for a concrete example, lets say I can find a coordinate system in which I can write the laws of physics as $$\partial_\mu \partial^\mu \Phi = 0$$ where $$\Phi$$ is just some scalar field. Now, no one in their right mind would claim this has any meaning (what I wrote down is garbage), but bear with me. So I go out and claim that this is a law of nature. Someone else picks a different coordinate system, and they evaluate what $$\partial_\mu \partial^\mu \Phi$$ is. Now, they take their resulting expression, and write it in terms of my coordinates, and they do not find $$0$$. Obviously my coordinates were special (specifically, $$\Gamma^\alpha_{\beta\gamma} = 0$$) for me, whereas not for them, and what I thought was a "law of nature" was more like a crude observation which doesn't immediately generalize.
I think it is important to phrase the answer to your question in this way, because there are different meanings of coordinate invariance (to some people, at least). But here it is meant in the sense that "the equation looks the same" rather than "evaluates to the same".
4. Apr 13, 2008
### pmb_phy
Tensors are actually used in Newtonian physics as well. 3-vectors and 3-tensors are tensors by definition. They are referred to as Cartesian tensors (aka affine tensors). They are defined according to how their components transform under an orthogonal transformation. In SR the tensors are Lorentz tensors, which are defined according to how their components transform under a Lorentz transformation. In GR tensors can be defined according to how their components transform under an arbitrary coordinate transformation. The reason they transform as such is that their definition is independent of any particular coordinate system, and laws of physics must be able to be defined in such an arbitrary fashion.

The motivation is that the laws of physics must have the same form in all coordinate systems, regardless of whether the underlying manifold is curved or not.

"Invariance" refers to the geometric nature of tensors in that they have a coordinate-independent meaning. In fact a tensor can be defined as a map from vectors and 1-forms to real numbers (scalars). No coordinate system is used in such a definition. However, once a coordinate system is defined, it can be shown that this implies the transformation properties that you're referring to, given the definition of how the components of vectors and 1-forms transform under a coordinate transformation. The "bad" implication that you're asking about is coordinate dependence. Such a dependence would violate a principle of general relativity.
Pete
5. Apr 14, 2008
### jdstokes
Any tensor equation can be written as $T = 0$ where $T$ is some tensor and 0 is the zero tensor, e.g. in component notation for a rank-2 tensor this would read $\sum_{\mu,\nu}T^{\mu\nu}\partial_\mu\otimes\partial_\nu = 0$.

If I locally reparametrize the manifold (i.e. change coordinates by a rotation, a Lorentz boost, or whatever) then the following will be true:

$\sum_{\mu,\nu}T'^{\mu\nu}\partial'_\mu\otimes\partial'_\nu = 0$ where the new components are related to the old ones by the tensor transformation law. This obviously looks the same as the first expression. But what does any of this have to do with cancellation of terms? Nothing cancelled.
6. Apr 14, 2008
### jdstokes
I think the problem I'm having is trying to understand why the form of ANY equation could depend on the coordinates chosen. A concrete example may assist here.

It seems so obvious that if we only have access to the tangent space then our laws of physics will depend on those vectors. Saying that an equation of physics has the same form in any coordinate system is merely a restatement of the fact that tensors are built out of tangent and cotangent vectors, which are themselves coordinate-independent objects.
7. Apr 14, 2008
### Fredrik
Staff Emeritus
It seems to me that the real question here is "Why can the laws of physics be expressed in terms of real-valued functions on M and local sections of tensor bundles of M?" I don't see how that question can be answered with anything but "We're just lucky I guess", or a direct derivation from a more fundamental theory.
Edit: Oooooh....ooohhhh...500 posts.
Last edited: Apr 14, 2008
8. Apr 14, 2008
### genneth
There are several ways to define things like vectors and tensors. It is an unfortunate state of affairs that physicists generally prefer the coordinate-based method. In essence, you need to distinguish between two things --- the tensor itself and the list of numbers that you choose to use to represent the tensor. The latter requires a coordinate system or frame for it to make sense, but the former is independent of the choice. Pure mathematicians often define vectors spaces as a purely algebraic structure, and tensors as linear spaces on top of that. As you've noticed, this has nothing to do with the way they "transform". Choosing a frame, and then writing down a set of numbers allow you to do concrete calculations with them. Change the frame, and the same tensor now has different numbers --- that is the transform. The fact is that the transformation is always a specific type, and can be calculated independently. Now the amazing thing is that you can also go backwards. So start with lists of numbers, and require that they transform in a certain way when you make a "change of basis", and you can recover the coordinate-free description as the unique result.
9. Apr 16, 2008
### jdstokes
Hi Genneth,
I also find the "physicist way" of defining tensors to be extremely annoying because of its coordinate dependence, even though I consider myself a physicist.
Let me see if I can run through cartesian tensors the way a mathematician would see them. The first thing you do when you encounter a space $S$ with an inner product is to linearize it by introducing a vector space to each point. This is done by first defining a parametrization of the space and then defining the tangent space to be the set of derivatives of smooth curves passing through the point.
In the case of Euclidean space, there is a well-defined mapping between any tangent space defined by parallel transport, thus it is only necessary to consider the tangent space at the origin.
Note that a choice of basis for the tangent space is determined by a choice of parametrization $F : \mathbb{R}^3 \to \mathbb{E}^3$. $F$ should be conceptualized as a physical measurement apparatus for assigning numbers to points in space (e.g. using steel rulers).
I think it is important to realize (as many physicists probably overlook this), that one cannot even talk about vectors (abstract algebraic things) in Euclidean space unless one has a notion of how to parametrize Euclidean space. It is through this parametrization that one defines vectors (which turn out to be independent of the choice of parametrization).
What the physicists don't seem to realize is that the choice of parametrization is totally arbitrary. In the same way that I could change parametrization by rotating my steel rulers, I could also change to spherical polar coordinates and THEN rotate my coordinates. In spherical coordinates, the components won't change under rotation the way a physicist would expect them to, so they would argue that the object is not a vector, but they are wrong, because they don't understand what a vector is.
10. Apr 16, 2008
### jdstokes
I'd be interested to see the proof of this. Can suggest a reference please?
11. Apr 16, 2008
### jdstokes
Do you know a proof of this?
12. Apr 16, 2008
### lbrits
Well, mmm... On a smooth manifold there is a natural correspondence between tangent vectors to curves through a point and directional derivatives of functions on the manifold at that point. So we establish $$\vec{v}(f) \mapsto \tfrac{d}{d \lambda}(f)$$. At a point, these span the tangent space $$T_p(M)$$. Now be daring and define it's dual space of oneforms like $$df$$ with the rule $$\langle \vec{v}, df\rangle = \tfrac{df}{d\lambda}$$, and which span $$T^*_p(M)$$.
Now, if you expand $$\vec{v}$$ in terms of a basis $$\tfrac{\partial}{\partial x^\mu}$$ and $$df$$ in terms of a dual basis $$dx^\mu$$ you will find that the inner product $$\langle\, , \, \rangle$$ is the normal Euclidean one, and you can convince yourself that it is coordinate invariant.
Now, to do anything like GR we would actually need to add "geometry" to the manifold in addition to topology, and so we need to know how to take dot products between vectors. A "natural" way to do this is a map from vectors to oneforms, since we already have an induced inner product. This map is the metric, and when you've done this, you get something like the Euclidean dot product between vectors when you have a flat metric.
I should retract my statement that it is a unique way. I don't really know how else you would do it. What I mean is that it is a "natural" way, in the sense that you take what you do in flat space, but do it in a coordinate free manner. When you make the space curved, everything follows from your coordinate invariant inner product $$\langle \, , \, \rangle$$ and your choice of metric.
As a physicist, I want to address your "don't know what a vector is", but right now I have to make supper =)
Edit: my discussion about tangent spaces is for the benefit of those joining the discussion, of course.
Last edited: Apr 16, 2008
13. Apr 16, 2008
### lbrits
Are you referring to active verus passive transformations?
In any case, in spherical coordinates, if you rotate your coordinates then the components of the vector would change exactly as a physicist would expect them to... if not, you're hanging out with the wrong physicists =)
14. Apr 16, 2008
### jdstokes
In spherical polar coordinates, if I rotate about the z-axis by $\Delta \varphi$ then my coordinates change by $(r,\theta,\varphi) \mapsto (r,\theta, \varphi + \Delta\varphi)$: not what was expected from the definition of a Cartesian tensor.
Last edited: Apr 16, 2008
15. Apr 16, 2008
### lbrits
In what universe? =)
16. Apr 16, 2008
### jdstokes
Why do you question that the components will transform differently? It seems clear to me that if you represent a vector in a curvilinear basis then the components will transform differently under rotation than if you were to represent the vector in an orthogonal basis.
17. Apr 16, 2008
### jdstokes
Getting back to the motivation for tensors, I think you might be able to at least partially justify it as follows:
Given a tensor $T$, we know from the chain rule that if the component representation wrt the parametrization $x$ is $T^{\mu_1,\ldots,\mu_k}_{\nu_1,\ldots,\nu_k}$, then the components with respect to another parametrization are
$T^{\mu_1',\ldots,\mu_k'}_{\nu_1',\ldots,\nu_k'} = \frac{\partial x^{\mu_1'}}{\partial x_{\mu_1}}\cdots \frac{\partial x^{\mu_k'}}{\partial x_{\mu_k}}\frac{\partial x^{\nu_1'}}{\partial x_{\nu_1}}\cdots \frac{\partial x^{\nu_k'}}{\partial x_{\nu_k}} T^{\mu_1,\ldots,\mu_k}_{\nu_1,\ldots,\nu_k}$
Thus, given tensor equation $T=0$ we then have
$T^{\mu_1,\ldots,\mu_k}_{\nu_1,\ldots,\nu_k} = 0$
and moreover
$T^{\mu_1',\ldots,\mu_k'}_{\nu_1',\ldots,\nu_k'} = \frac{\partial x^{\mu_1'}}{\partial x_{\mu_1}}\cdots \frac{\partial x^{\mu_k'}}{\partial x_{\mu_k}}\frac{\partial x^{\nu_1'}}{\partial x_{\nu_1}}\cdots \frac{\partial x^{\nu_k'}}{\partial x_{\nu_k}} \times 0 = 0$.
Thus the tensor equation holds in all frames of reference.
Surprisingly simple yet illustrates the point.
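One can watch this machinery act on a concrete tensor. Here is a small SymPy sketch (an illustration of ours, not from the thread): transforming the flat Euclidean metric from Cartesian to polar coordinates with the Jacobian factors above changes the components (to diag(1, r²)), while the tensor itself, and hence any tensor equation, is unchanged.

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
# Coordinate change (r, theta) -> (x, y)
X = sp.Matrix([r * sp.cos(th), r * sp.sin(th)])
J = X.jacobian([r, th])          # the Jacobian factors of the transformation law

g_cart = sp.eye(2)               # Euclidean metric in Cartesian components
g_polar = sp.simplify(J.T * g_cart * J)
print(g_polar)                   # Matrix([[1, 0], [0, r**2]])
```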
18. Apr 17, 2008
### lbrits
I'm not questioning it... I just think every physicist knows this... I'm confused about these physicists that expect them to transform differently of whom you are talking about. Anyway, your example seems to answer the original post, but maybe it is a bit vacuous? Or maybe too clever =)
19. Apr 17, 2008
### jdstokes
lbrits,
It was just the kind of vacuous answer I was looking for :)
Of course, you still need uniqueness. It may be that there are other structures transforming according to the "generalized matrix multiplication" that tensors obey, but with different transformation coefficients.
20. Apr 17, 2008
### genneth
Any of the standard GR texts should contain proofs. MTW or Wald should both have it. |
# Constructing a 2-fold oversampled cosine basis in MATLAB

So I'm trying to construct a 2-fold oversampled cosine basis in MATLAB. I know how to construct the basis as a square matrix using the following command:
dct(eye(n,n))
where dct is the discrete cosine transform function and $n\times n$ is the size of my basis but how would I do this if I wanted my basis to be oversampled? Specifically for it to be $n\times 2n$, i.e. a fat matrix such that the columns lose the property of linear independence?
Any help would be greatly appreciated. Thanks in advance!
• Do you have a reference? Can you elaborate on what 2-fold oversampled cosine basis are? – Memming Oct 17 '15 at 10:54 |
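One common construction, shown as a minimal NumPy sketch (the helper name is ours, and the same indexing ports directly to MATLAB): keep the length-$n$ DCT-II sample grid but sample the frequency axis twice as finely, giving an $n \times 2n$ frame whose columns are necessarily linearly dependent.

```python
import numpy as np

def oversampled_dct_frame(n, oversample=2):
    """Hypothetical helper: an n x (oversample*n) cosine frame built by
    sampling the DCT-II frequency axis 'oversample' times more finely."""
    m = oversample * n
    j = np.arange(n)[:, None]      # sample index (rows)
    k = np.arange(m)[None, :]      # frequency index (columns)
    F = np.cos(np.pi * (j + 0.5) * k / m)
    return F / np.linalg.norm(F, axis=0)   # normalize the columns

F = oversampled_dct_frame(8)
print(F.shape)                    # (8, 16): a fat matrix
print(np.linalg.matrix_rank(F))   # 8: the 16 columns are linearly dependent
```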
# 18: Orbital Angular Momentum, Spectroscopy and Multi-Electron Atoms
Lecture 17
Last lecture continued our discussion of the hydrogen atom. We started with the expression for the energy of electrons in the hydrogen atom and emphasized that, while there are three quantum numbers in the solutions to the corresponding Schrödinger equation, the energy is a function of $$n$$ only. We continued our discussion of the radial component of the wavefunctions as a product of four terms that, crudely, results in an exponentially decaying amplitude as a function of distance from the nucleus, scaled by a pair of polynomials (with the Laguerre polynomial as one). There is also a normalization constant to make the probabilistic interpretation proper. We discussed the volume and shell elements in spherical space and introduced the radial distribution function $$4\pi r^2 \psi^2$$, which quantifies the probability of finding the electron at a specific radius (technically, between two radii).
Volume and Surface Elements in Spherical Coordinates
In Cartesian space, the volume element that one integrates over to extract a probability is
$dV = dx\,dy\,dz$
While there are straightforward approaches to convert $$(x,y,z)$$ to $$(r,\theta,\phi)$$, the corresponding volume element requires the Jacobian to be introduced (as expected for all coordinate transformations).
So the volume element spanning from $$r$$ to $$r+dr$$ $$\theta$$ to $$\theta + d\theta$$, and $$\phi$$ to $$\phi + d\phi$$ is
$\mathrm{d}V=r^2 \sin \theta \,\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\varphi.$
The surface element spanning from $$\theta + d\theta$$, and $$\phi$$ to $$\phi + d\phi$$ on a spherical surface at (constant) radius $$r$$ is
$\mathrm{d}S_r=r^2\sin\theta\,\mathrm{d}\theta\,\mathrm{d}\varphi.$
Figure: surface area element $$dS$$ and volume element $$dV$$ of a thin spherical shell at radius $$r$$.
For the surface element of the entire sphere with fixed radius then
$\int \mathrm{d}S_r = \int_{0}^{\pi}\int_{0}^{2\pi} r^2\sin\theta\,\mathrm{d}\varphi\,\mathrm{d}\theta = 4\pi r^2 \label{eq4}$
The quantity $$R(r)^* R(r)$$ gives the radial probability density; i.e., the probability density for the electron to be at a point located the distance $$r$$ from the proton. Radial probability densities for three types of atomic orbitals are plotted below. When the radial probability density for every value of $$r$$ is multiplied by the area of the spherical surface element ($$dS$$) represented by that particular value of $$r$$, we get the radial distribution function (RDF). The radial distribution function gives the probability density for an electron to be found anywhere on the surface of a sphere located a distance $$r$$ from the nucleus. Since the area of a spherical surface is $$4 \pi r^2$$ (Equation \ref{eq4}), the radial distribution function is given by
$RDF(r)= 4 \pi r^2 R(r)^* R(r)$
The overlapping radial distribution functions are shown below.
Example: Most Probable Radius

Show that for a 1s orbital of a hydrogen-like atom the most probable distance from nucleus to electron is $$a_0/Z$$.
Solution
From the table in the previous lecture:

\begin{align*} \psi_{1s} &= \psi_{1,0,0}(r,\theta,\phi) \\[4pt] &= R_{1,0}(r)\,Y_{0,0}(\theta,\phi) \\[4pt] &= \dfrac{1}{\sqrt{\pi}}\left(\dfrac{Z}{a_0}\right)^{\frac{3}{2}}\exp(-Zr/a_0) \end{align*}
where $$Z$$ is the atomic number and $$a_0$$ is the Bohr radius ($$5.29 \times 10^{-11} \,m$$).
The radial distribution function for this state is:
$\color{red}{4 \pi r^2} \color{black}{ \psi_{1,0,0}^2} = 4\left(\dfrac{Z}{a_0}\right)^3\color{red}{r^2} \color{black}\exp(-2Zr/a_0) \nonumber$
To find the most probable distance from proton to electron, we take the derivative of the radial distribution function and set it to zero:
\begin{align*} \dfrac{d}{dr} 4 \pi r^2\psi_{1,0,0}^2 &= 0 \\[4pt] \dfrac{d}{dr}\left[ 4 \left(\dfrac{Z}{a_0}\right)^3r^2\exp(-2Zr/a_0)\right] &= 0 \\[4pt] 4 \left(\dfrac{Z}{a_0}\right)^3 \left(2r - r^2 \dfrac{2Z}{a_0}\right )\exp(-2Zr/a_0) &=0 \end{align*}
This is established only when the polynomial term is zero (since the exponential and scalar factors will never be zero). So the radial distribution function will be maximized when
$2r- r^2 \dfrac{2Z}{a_0} = 0 \nonumber$
Divide both sides by $$2r$$ and solve for $$r$$
\begin{align*} 1- r\dfrac{Z}{a_0} &= 0 \\[4pt] r &= \dfrac{a_o}{Z} \end{align*}
Example: Probability
Calculate the probability of finding a 1s hydrogen electron being found within distance $$2a_o$$ from the nucleus.
Solution
Note the wavefunction of hydrogen 1s orbital which is
$ψ_{100}= \dfrac{1}{\sqrt{π}} \left(\dfrac{1}{a_0}\right)^{3/2} e^{-\rho} \nonumber$
with $$\rho=\dfrac{r}{a_0}$$.
The probability of finding the electron within $$2a_0$$ distance from the nucleus will be:
\begin{align*} prob &= \int_0^{2a_0} RDF(r)\, dr \\[4pt] &=\int_{0}^{2a_0} 4\pi r^2\, \dfrac{1}{\pi} \left(\dfrac{1}{a_0}\right)^{3} e^{-2r/a_0}\, dr \\[4pt] &= \dfrac{4}{a_0^3}\int_0^{2a_0} r^2 e^{-2r/a_0}\, dr \end{align*}

This requires a little math. Using the antiderivative

$\int r^2 e^{-\beta r}\, dr = -e^{-\beta r}\left(\dfrac{r^2}{\beta}+\dfrac{2r}{\beta^2}+\dfrac{2}{\beta^3}\right) \nonumber$

with $$\beta = 2/a_0$$:

\begin{align*} \int_0^{2a_0} r^2 e^{-2r/a_0}\, dr &= \left[-e^{-2r/a_0}\left(\dfrac{a_0 r^2}{2}+\dfrac{a_0^2 r}{2}+\dfrac{a_0^3}{4}\right)\right]_0^{2a_0} \\[4pt] &= \dfrac{a_0^3}{4} - e^{-4}\left(2a_0^3 + a_0^3 + \dfrac{a_0^3}{4}\right) \\[4pt] &= \dfrac{a_0^3}{4}\left(1-13e^{-4}\right) \end{align*}

so that

\begin{align*} prob &= \dfrac{4}{a_0^3}\cdot\dfrac{a_0^3}{4}\left(1-13e^{-4}\right) = 1-13e^{-4} = 0.762 \end{align*}
There is a 76.2% probability that the electrons will be within $$2a_o$$ of the nucleus in the 1s eigenstate.
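Both worked examples above are easy to confirm numerically. A minimal Python check (not part of the lecture; it assumes SciPy is available and works in units where $$a_0 = 1$$ and $$Z = 1$$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

rdf = lambda r: 4 * r**2 * np.exp(-2 * r)   # RDF of the 1s state for a0 = Z = 1

prob, _ = quad(rdf, 0, 2)                   # probability that r < 2 a0
print(prob, 1 - 13 * np.exp(-4))            # both ~ 0.7619

rmax = minimize_scalar(lambda r: -rdf(r), bounds=(0, 10), method='bounded').x
print(rmax)                                 # ~ 1.0, i.e. r = a0/Z
```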
## The $$l$$ Quantum number characterizes Angular Momentum and Angular Component (Ignoring the Radial Component)
As $$n$$ increases the average value of $$r$$ increases, which agrees with the fact that the energy of the electron also increases as $$n$$ increases. The increased energy results in the electron being, on average, pulled further away from the attractive force of the nucleus. As in the simple example of an electron moving on a line, nodes (values of $$r$$ for which the electron density is zero) appear in the probability distributions. The number of radial nodes increases with increasing energy and equals $$n - l - 1$$ (which reduces to $$n - 1$$ for the $$l = 0$$ orbitals considered here).
An electron possesses orbital angular momentum if the density distribution is not spherical.
The quantum number $$l$$ governs the magnitude of the angular momentum, just as the quantum number $$n$$ determines the energy. The magnitude of the angular momentum may assume only those values given by:
$|L| = \sqrt{l(l+1)} \hbar \label{4}$
with $$l = 0, 1, 2, 3, ... n-1$$.
When the electron possesses angular momentum, the density distributions are no longer spherical. In fact for each value of $$l$$, the electron density distribution assumes a characteristic shape (Figure $$\PageIndex{6}$$).
For any value of $$n$$, a value of $$l=0$$ places that electron in an s orbital. This orbital is spherical in shape:
When $$l=1$$ these are designated as p orbitals and have dumbbell shapes. Each of the p orbitals has a different orientation in three-dimensional space.
For $$l=2$$, the $$m_l$$ values can be -2, -1, 0, +1, +2 for a total of five d orbitals. Note that all five of the orbitals have specific three-dimensional orientations.
The most complex set of orbitals are the f orbitals. When $$l=3$$, the $$m_l$$ values can be -3, -2, -1, 0, +1, +2, +3 for a total of seven different orbital shapes. Again, note the specific orientations of the different f orbitals.
Exercise $$\PageIndex{1}$$
• For a Hydrogen atom what is the degeneracy for a specific $$n$$ and $$l$$?
• Does that appear familiar?
• For a Hydrogen atom what is the degeneracy for a specific $$n$$?
This page titled 18: Orbital Angular Momentum, Spectroscopy and Multi-Electron Atoms is shared under a not declared license and was authored, remixed, and/or curated by Delmar Larsen. |
## Energy profile diagram for catalysed and uncatalysed reactions

For a chemical reaction or process, an energy profile (or reaction coordinate diagram) is a theoretical representation of a single energetic pathway, along the reaction coordinate, as the reactants are transformed into products. A potential energy diagram plots the change in potential energy that occurs during the reaction: the left side represents the energy of the reactants and the right side the energy of the products. For the reaction to occur, some of the existing bonds in the reactants must be broken, which requires climbing an energy barrier, the activation energy; only when enough energy is added does the reaction take place at a measurable rate. When the activation energy is less than the energy released as the new bonds form, there is an overall release of energy (usually as heat to the surroundings), ΔH is negative, and an exothermic reaction has taken place: the products have a lower enthalpy than the reactants. In an endothermic reaction the situation is reversed and ΔH is positive.

Catalysis is the process of increasing the rate of a chemical reaction by adding a substance known as a catalyst. A catalyst provides a different reaction path with a lower activation energy; it is not consumed in the catalysed reaction and can act repeatedly, so often only very small amounts of catalyst are required. A catalyst may also allow a reaction to proceed at a lower temperature, or increase the reaction rate or selectivity; the global demand for catalysts in 2010 was estimated at approximately US$29.5 billion. Because catalysed reactions have a lower activation energy (a lower rate-limiting free energy of activation) than the corresponding uncatalysed reaction, they proceed at a higher rate at the same temperature and for the same reactant concentrations. On a profile comparing the two pathways (conventionally a dotted curve for the uncatalysed reaction and a solid curve for the catalysed one), the height of the hill is smaller in the catalysed case, while the energies of the reactants and of the products, and hence ΔH, are identical for both. The catalysed pathway often involves a multi-step mechanism: a two-step mechanism, for example, shows two transition states with an intermediate species in the valley between them. Surface-catalysed reactions can be inhibited when a foreign substance bonds at the catalyst's active sites, blocking them for substrate molecules.

Enzymes are biological catalysts. The enzyme carbonic anhydrase, for example, catalyses the reaction of carbon dioxide and water to form carbonic acid; the energies of the reactants and products are the same for the catalysed and uncatalysed reaction. The rates of the iodide-catalysed and catalase-catalysed decomposition of hydrogen peroxide,

$$2\,\mathrm{H_2O_2(l)} \rightarrow 2\,\mathrm{H_2O(l)} + \mathrm{O_2(g)},$$

can be compared with that of the uncatalysed reaction, for example by recording the mass of the reaction beaker and its contents against time as oxygen escapes. Sketch a possible energy profile for this reaction, first without a catalyst and then with a catalyst.

Exercise: calculate the ratio of the catalysed to the uncatalysed rate constant at 20 °C if the activation energy of the catalysed reaction is 20 kJ mol⁻¹ and that of the uncatalysed reaction is 75 kJ mol⁻¹.
Figure 1 Gibbs free energy reaction coordinate profile for an enzyme catalysed reaction ( AG~) compared to an uncatalysed reaction ( AG~,c) depicted in some textbooks enzyme action, as well as in the thermodynamic param- eters of enzyme-substrate complex formation, and the chemical steps of the enzyme catalysed reactions are completely ignored. This first video takes you through all the basic parts of the PE diagram. How Catalysts Work . Calculate the ratio of the catalysed and uncatalysed rate constant at 20^(@)C if the energy of activation of a catalysed reaction is 20 kJ "mol"^(-1) and for the uncatalysed reaction is 75 kJ "mol"^(-1) Books. 2SO 2(g) + O 2(g) → 2SO 3(g) Which one of the following energy profile diagrams correctly represents both the catalysed and the uncatalysed reactions? Cr O (aq) + 6I (aq) + 14H (aq) → 2Cr (aq) + 3I (aq) + 7H O(l) I. The barrier for uncatalysed reaction (E a) is larger than that for the same reaction in the presence of a catalyst E a. iii. Catalysts permit an alternate mechanism for the reactants to become products, with a lower activation energy and different transition state. Therefore, only a few collisions will result in a successful reaction and the rate of. Use energy diagrams to compare catalyzed and uncatalyzed reactions? So let's say that's what our energy profile looks like with the addition of our catalyst. Energy profile diagram to compare an uncatalysed reaction to a catalysed one. I. This problem has been solved! The decomposition of hydrogen peroxide is exothermic. Show transcribed image text Enzymes are important molecules in biochemistry that catalyze reactions. If these conditions allow the overall reaction to proceed in the forward direction, then it must be noted that every single step of the reaction mechanism is a spontaneous process, and therefore it must exhibit a negative free energy change. It is a pity, that now I can not express - it is very occupied. A It is found in cell surface membranes. Your email address will not be published. Greenworks Powerwasher On Off Switch Wiring Diagram, Qingdao E212785 Transformer Wiring Diagram, Wiring Diagram For Sunseeker Motorhome Steps, Ms Sedco Tdm-hc Wiring Diagram With Lockout Relays And Electric Exit Device. Page 3. I confirm. We can communicate on this theme. Excess magnesium powder was added to a beaker containing hydrochloric acid, HCl (aq). Anonymous. Uncatalyzed reaction Activation energy Substrate (S) Catalyzed reaction Product (P). Apr 25, 2013 - energy profile of catalyzed and uncatalyzed reactions Show transcribed image text Enzymes are important molecules in biochemistry that catalyze reactions. 1 Answer. Biology. Favorite Answer. Diagram of energy for reaction between carbon dioxide and water to form carbonic acid and products are the same for the catalyzed and uncatalyzed reaction.Catalyzed reaction has a lower activation energy because there is an enzyme present in the reaction. The energy profile diagram for the catalysed and uncatalysed reactions are as shown in the Fig. Draw a reaction coordinate diagram for this reaction as above but add the activation energy, E a, for the catalyzed reaction on the appropriate curve in this diagram and label it. Energy changes during a reaction can be displayed in an energy profile diagram. The activation energy of the catalysed reaction is greater than the activation energy for the uncatalysed reaction. Energy Profile for Exothermic Reactions. Chemistry. 
Since the activation energy for the uncatalysed decomposition of hydrogen peroxide is 75 kJ mol-1, 5 the proportion at 25°C is equal to e-75000/(8.314 × 298) = 6.9 × 10-14. No ads = no money for us = no free stuff for you! Reactants Products + Energy. NCERT DC Pandey Sunil Batra HC Verma Pradeep Errorless. C It is used as an energy source. This is a bit more subtle since .Types of catalysts (article) | Kinetics | Khan AcademySection The Rate of a Reaction, Onan Generator Mod# 2-skvd-2089b Wiring Diagram, Energy diagram catalyzed vs uncatalyzed reaction. The overall change in energy in a reaction is the difference between the energy of the reactants and products. The reaction is catalysed by vanadium(V) oxide. Which change could give line II? Page 4. The reaction is catalyzed by iodide ion. Answer to Diagram I Diagram II Diagram III Diagram IV Identify the two diagrams that could represent a catalysed and uncatalysed reaction pathway for the same think of a hill in both cases that starts and ends at the same level. Page 5. This effect can be illustrated with an energy profile diagram.Catalyzed reaction has a lower activation energy because there is an enzyme present in the reaction. This type of inhibition is called poisoning and the inhibitor (negative catalyst) is called a poison. Energy Diagrams for Catalyzed and Uncatalyzed Reactions. Uncatalyzed reaction has a higher activation energy because there is no enzyme present in the reaction. Boltzmann distribution for reactant particles, including the effect of raising the temperature of the reaction mixture. Change in mass III. The oxidation of sulfur dioxide is an exothermic reaction, as shown in the equation below. I consider, that you are mistaken. Sometimes a teacher finds it necessary to ask questions about PE diagrams that involve actual Potential Energy values.CHEM - Enzyme kineticsUse energy diagrams to compare catalyzed and uncatalyzed reactions? Relevance. The potential energy diagram compares the potential energy barriers for the catalysed and uncatalysed reactions. Page 2. Enthalpy profile for an non–catalysed reaction, last page a typical, non– catalysed reaction can be represented by means of a potential energy diagram. This diagram illustrates an exothermic reaction in which the products have a lower enthalpy than the reactants. Catalyzed and Uncatalyzed Reaction Pathways. Sub-index for ENERGY CHANGES: 1. Enzymatic Catalysis of a Reaction between Two Substrates. Diagram to show the partition of a solute between two solvents. Below is an energy diagram illustrating the difference in a catalyzed reaction versus an uncatalyzed reaction. (10 pts) Activation energies for the heps of a catalysed reaction Ea2 Eat ants A potential energy diagram shows the change in potential energy of a system as reactants are converted into products. Still have questions? A catalysed reaction produces the same amount of product as an uncatalysed reaction but it produces the product at a faster rate. Answer Save. In the diagram above, you can clearly see that you need an input of energy to get the reaction going. The actual length of the cell between X and Y was 160 µm. The diagram shows the energy profile for a catalysed and uncatalysed reaction. Enthalpy profile for an non–catalysed reaction . Of products is greater than the activation energy of the PE diagram between two solvents us! A more complicated reaction progress profile for an elementary enzyme catalysed reaction is that the activation energy (. 
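Both numbers above follow from the Arrhenius factor $e^{-E_a/RT}$. Here is a minimal check, assuming Arrhenius behaviour with equal pre-exponential factors for the catalysed and uncatalysed paths (a simplifying assumption, not stated in the original text):

```python
import math

R = 8.314            # gas constant, J mol^-1 K^-1

# Fraction of collisions with E >= Ea for the uncatalysed H2O2
# decomposition at 25 degrees C.
Ea_uncat = 75_000    # J mol^-1
print(math.exp(-Ea_uncat / (R * 298)))   # ~7e-14 (the source quotes 6.9e-14)

# Ratio k_cat / k_uncat at 20 degrees C, assuming equal pre-exponential
# factors: exp((Ea_uncat - Ea_cat) / (R * T)).
Ea_cat = 20_000      # J mol^-1
T = 293              # K
print(math.exp((Ea_uncat - Ea_cat) / (R * T)))   # ~6.4e9
```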
### Problem 2-114
Factor all of the expressions. Then simplify.
$\frac{4x(x-2)}{(x-3)(x+3)}\cdot \frac{(x-4)(x+3)}{2x^2(x-4)}$
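As a worked check of the product above (not part of the original hint), cancelling the common factors $(x+3)$ and $(x-4)$ and reducing $4x/(2x^2)$ to $2/x$ gives:

$\frac{4x(x-2)}{(x-3)(x+3)}\cdot \frac{(x-4)(x+3)}{2x^2(x-4)} = \frac{2(x-2)}{x(x-3)}, \quad x \neq 0,\ \pm 3,\ 4$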
Write (x2y2) over 1. Then follow the same process as shown in part (a). |
Poster
A Non-convex One-Pass Framework for Generalized Factorization Machine and Rank-One Matrix Sensing
Ming Lin · Jieping Ye
Tue Dec 06 09:00 AM -- 12:30 PM (PST) @ Area 5+6+7+8 #121
We develop an efficient alternating framework for learning a generalized version of Factorization Machine (gFM) on streaming data with provable guarantees. When the instances are sampled from $d$-dimensional random Gaussian vectors and the target second-order coefficient matrix in gFM is of rank $k$, our algorithm converges linearly, achieves $O(\epsilon)$ recovery error after retrieving $O(k^{3}d\log(1/\epsilon))$ training instances, consumes $O(kd)$ memory in one pass of the dataset, and requires only matrix-vector product operations in each iteration. The key ingredient of our framework is the construction of an estimation sequence endowed with a so-called Conditionally Independent RIP condition (CI-RIP). As special cases of gFM, our framework can be applied to symmetric or asymmetric rank-one matrix sensing problems, such as inductive matrix completion and phase retrieval.
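To make the model concrete, here is a minimal sketch of a gFM-style prediction rule with a rank-$k$ second-order term; the symmetric $M = UU^{\top}$ parametrization and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gfm_predict(X, w, U):
    """gFM-style prediction y_i = <w, x_i> + x_i^T (U U^T) x_i.

    X is (n, d), w is (d,), U is (d, k). Only matrix products are used
    and U U^T is never formed, keeping parameter memory at O(kd).
    """
    XU = X @ U                        # shape (n, k)
    return X @ w + np.sum(XU * XU, axis=1)

# Tiny usage example with random data (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
w = rng.standard_normal(8)
U = rng.standard_normal((8, 2))       # rank-2 second-order term
print(gfm_predict(X, w, U))
```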
J Syst Evol ›› 1994, Vol. 32 ›› Issue (6): 489-503.
• Research Articles •
### Biosystematic Studies on Adenophora potaninii Korsh. Complex (Campanulaceae). Ⅰ. Phenotypic Plasticity
Ge Song, Hong De-yuan
• Published: 1994-11-18
Abstract: Phenotypic plasticity is the environmental modification of genotypic expression and an important means by which individual plants respond to environmental heterogeneity. The study of phenotypic plasticity in the genus Adenophora, which is taxonomically very complicated because of great morphological variation, proves helpful both in investigating phenotypic variation so as to evaluate the potential taxonomic value of characters, and in providing important sources of information on the variation, adaptation and evolution of the genus. Twenty-three populations representing all six species in the Adenophora potaninii complex were transplanted into the garden. Of them, six populations were selected for studying their performance in the field and in the garden, in addition to a cultivation experiment under different treatments. The results show that there exists considerable developmental plasticity in some leaf, floral and capsule characters. In particular, leaf shape and the length of the calyx lobe display significant developmental variation, with the maximum being three times as great as the minimum, which is noteworthy because these characters were previously considered diagnostic. The characters of root, caudex, stem and inflorescence are found to be very plastic, especially root diameter, the number of stems, stem height and inflorescence length, which show great environmental plasticity. In addition, populations from different habitats show distinct amounts of plasticity. On the contrary, the leaf, floral, capsule and seed characters are less influenced by environments; it seems that the considerable variation in leaf characters is attributed mainly to genetic differences. Finally, the phenotypic plasticity of morphological characters of the A. potaninii complex and its taxonomic significance is discussed.
# Algebraic Geometry Seminar
#### A Glimpse of Supertropical Algebra and Its Applications
Speaker: Zur Izhakian, University of Aberdeen and University of Bremen
Location: Warren Weaver Hall 1314
Date: Thursday, May 8, 2014, 11 a.m.
Synopsis:
Tropical mathematics is carried out over idempotent semirings, a weak algebraic structure that, on the one hand, allows descriptions of objects having a discrete nature but, on the other, lacks additive inverses and so prevents access to some basic mathematical notions. These drawbacks are overcome by use of a supertropical semiring: a "cover" semiring structure having a special distinguished ideal that plays the role of the zero element in classical mathematics. This semiring structure is rich enough to enable a systematic development of tropical algebraic theory, yielding direct analogues to many important results and notions from classical commutative algebra. Supertropical algebra provides a suitable algebraic framework that enables natural realizations of matroids and simplicial complexes, as well as representations of semigroups.
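For readers new to the area, the basic idempotent structure in question is the max-plus semiring; a toy illustration (my own example, not from the talk):

```python
# Max-plus (tropical) semiring: "addition" is max, "multiplication" is +.
# Addition is idempotent (a (+) a = a) and has no inverse, which is the
# drawback the supertropical "cover" structure is designed to remedy.
NEG_INF = float("-inf")   # additive identity (the tropical "zero")

def t_add(a, b):          # tropical a (+) b
    return max(a, b)

def t_mul(a, b):          # tropical a (x) b
    return a + b

# Tropical evaluation of p(x) = 3(x)x(x)x (+) 1(x)x (+) 4 = max(3 + 2x, 1 + x, 4)
def p(x):
    return t_add(t_add(t_mul(3, t_mul(x, x)), t_mul(1, x)), 4)

print(p(0), p(2))  # 4, 7
```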
Question
Suppose a certain person’s visual acuity is such that he can see objects clearly that form an image $4.00\textrm{ }\mu\textrm{m}$ high on his retina. What is the maximum distance at which he can read the 75.0 cm high letters on the side of an airplane?
$3.75\textrm{ km}$
Solution Video
# OpenStax College Physics Solution, Chapter 26, Problem 4 (Problems & Exercises) (1:46)
Video Transcript
This is College Physics Answers with Shaun Dychko. We want to find the object distance for these letters on the side of an airplane and the letters are 75.0 centimeters high, which is 75.0 times 10 to the minus 2 meters and this object is going to make an image with a height of only 4.00 micrometers at its smallest on the retina and I put a negative here just to say that this is a real image and so it's going to be inverted compared to the object and that's just going to be helpful in our calculation here, which has a negative sign in our formula for magnification. The image distance we know because the lens-retina distance is the image distance and that's 2.00 centimeters and we know that magnification is the ratio of the image height to the object height and it's also the negative of the ratio of the image distance to the object distance. So we are gonna solve this for d o, which is the distance from the lens to the letters on the plane, we'll multiply both sides by d o and also multiply both sides by object height divided by image height and we end up with object distance is the negative of the image distance times the object height divided by the image height. So that's negative of 2.00 times 10 to the minus 2 meters— distance from the lens to the retina— times 75.0 times 10 to the minus 2 meters— height of the letters on the airplane— divided by negative 4.00 times 10 to the minus 6 meters, which is the size of the image on the retina... that is 3.75 kilometers is the maximum distance at which the person can read the letters. |
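For reference, the computation from the transcript written out symbolically (same numbers as in the video):

$d_o = -d_i\,\dfrac{h_o}{h_i} = -\left(2.00\times 10^{-2}\textrm{ m}\right)\times\dfrac{75.0\times 10^{-2}\textrm{ m}}{-4.00\times 10^{-6}\textrm{ m}} = 3.75\times 10^{3}\textrm{ m} = 3.75\textrm{ km}$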
# There are 6 pink cards, 3 blue cards and 5 black cards in a box. What is the probability of selecting a pink or blue card on the 1st draw?
1. 9/14
2. 1/2
3. 3/14
4. 5/14
Option 1 : 9/14
## Detailed Solution
Given:
Total number of pink cards = 6
Total number of blue cards = 3
Total number of black cards = 5
Formula Used:
P(A) = $$\frac{n(A)}{n(S)}$$
P(A) is the probability of an event 'A'
n(A) is the number of favorable outcomes
n(S) is the number of outcomes in the sample space.
Calculation:
Here,
Total number of favorable outcomes is 6 + 3 = 9 (Pink + Blue card)
n(A) = 9
Events in sample space = 6 + 3 + 5 = 14
n(S) = 14
So, P(A) = 9/14
∴ The required probability is $$\frac{9}{14}$$
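A quick script verifying the arithmetic (illustrative only, not part of the original solution):

```python
from fractions import Fraction

pink, blue, black = 6, 3, 5
favorable = pink + blue              # drawing a pink or a blue card
total = pink + blue + black          # size of the sample space
print(Fraction(favorable, total))    # 9/14
```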
# Suppose the total amount, A, of radioactive material present in the atmosphere at time T can be...
## Question:
Suppose the total amount, $A$, of radioactive material present in the atmosphere at time $T$ can be modeled by the formula $\displaystyle \int_0^T P e^{-rt}\, dt$, where $P$ is a constant and $t$ is time in years. Suppose that recent research suggests that $r = 0.002$ and $P$ (the present amount of radioactive material) $= 200$ millirads. Estimate the total future buildup of radioactive material in the atmosphere if $r$ and $P$ were to remain constant.
## Integrating:
This problem will allow us to understand how to use an important result when integrating functions. The result that we will be using is that, for $r > 0$, $e^{-rT} = 1/e^{rT} \to 0$ as $T \to \infty$:

$$\lim_{T \to \infty} e^{-rT} = 0$$

See why we need to use this result below.
As we have been told to find the future total buildup, we assume that there is no upper limit and, therefore, we integrate from $0$ to $\infty$.
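With that limit in hand, the estimate follows by direct evaluation (a worked computation consistent with the model above):

$$A = \int_0^{\infty} 200\, e^{-0.002t}\, dt = \left[ -\frac{200}{0.002}\, e^{-0.002t} \right]_0^{\infty} = \frac{200}{0.002} = 100{,}000 \textrm{ millirads}$$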
Mathematical Finance Seminar – Spring 2016
# Schedule for Spring 2016
Seminars are on Thursdays
Time: 4:10pm – 5:25pm
Location: Columbia University, 903 SSW (1255 Amsterdam Ave, between 121st and 122nd Street)
Organizers: Ioannis Karatzas, Philip Protter, Marcel Nutz, Yuchong Zhang
### Description
01/28/2016
Yuchong Zhang (Columbia)
Title: Fundamental Theorem of Asset Pricing Under Transaction Costs and Model Uncertainty
Abstract:
We prove the Fundamental Theorem of Asset Pricing for a discrete time financial market where trading is subject to proportional transaction cost and the asset price dynamic is modeled by a family of probability measures, possibly non-dominated. Using a backward-forward scheme, we show that when the market consists of a money market account and a single stock, no-arbitrage in a quasi-sure sense is equivalent to the existence of a suitable family of consistent price systems. We also show that when the market consists of multiple dynamically traded assets and satisfies efficient friction, strict no-arbitrage in a quasi-sure sense is equivalent to the existence of a suitable family of strictly consistent price systems. (Joint work with Erhan Bayraktar)
02/04/2016
Dylan Possamai (Paris Dauphine)
Title:Dynamic Programming Approach to Principal-Agent Problems
Abstract:
We consider a general formulation of the Principal-Agent problem from Contract Theory, on a finite horizon. We show how to reduce the problem to a stochastic control problem which may be analyzed by the standard tools of control theory. In particular, Agent’s value function appears naturally as a controlled state variable for the Principal’s problem. Our argument relies on the Backward Stochastic Differential Equations approach to non-Markovian stochastic control, and more specifically, on the most recent extensions to the second order case. This is a joint work with Jaksa Cvitanic and Nizar Touzi.
02/11/2016
Christina Dan Wang (Columbia University)
“The observed standard error of high-frequency estimators for parameters containing jumps”
Abstract:
In high-frequency inference, standard errors are important: they are used both to assess the precision of estimators and when building forecasting models. Due to the scarcity of methodology for assessing this uncertainty, this paper provides an alternative solution: a general nonparametric method for assessing asymptotic variance (AVAR), together with consistent estimators of AVAR, for a class of integrated parameters. The parameter process can be a general semimartingale with both continuous and jump components, and the integrand of the integrated parameter can also contain jump components. The methodology applies to a wide variety of estimators, such as integrated volatility and the leverage effect.
02/18/2016
Umut Cetin (LSE)
02/25/2016
Andrew Lesniewski (Baruch) – CANCELLED
03/03/2016
Daniel Lacker (Brown)
“Liquidity, risk measures, and concentration of measure”
Abstract:
Expanding on techniques of concentration of measure, we propose a quantitative framework for modeling liquidity risk using convex risk measures. The fundamental objects of study are curves of the form $(\rho(\lambda X))_{\lambda \ge 0}$, where $\rho$ is a convex risk measure and $X$ a financial position (a random variable), and we call such a curve a “liquidity risk profile.” For some notable classes of risk measures, especially shortfall risk measures, the shape of a liquidity risk profile is intimately linked with the tail behavior of the underlying $X$. We exploit this link to systematically bound liquidity risk profiles from above by other real functions $\gamma$, deriving tractable necessary and sufficient conditions for concentration inequalities of the form $\rho(\lambda X) \le \gamma(\lambda)$ for all $\lambda \ge 0$. These concentration inequalities admit useful dual representations related to transport-entropy inequalities, and this leads to efficient uniform bounds for liquidity risk profiles for large classes of $X$. An interesting question of tensorization of concentration inequalities arises when we seek to bound the liquidity risk profile of a combination $f(X,Y)$ of two positions $X$ and $Y$ in terms of their individual liquidity risk profiles. Specializing to law invariant risk measures, we uncover a surprising connection between tensorization and certain time consistency properties known as acceptance and rejection consistency, which leads to some new mathematical results on large deviations and dimension-free concentration of measure.
03/10/2016
Alexander Schnurr (Siegen)
“An Efficient Way to Analyze Path and Distributional Properties of Processes Used In Mathematical Finance.”
In the theory of Lévy processes, the characteristic exponent and the Blumenthal-Getoor index are two of the main tools used to describe and analyze properties of the process. These concepts have been used for over fifty years. In our talk we show how they were recently generalized to the class of homogeneous diffusions with jumps. This class of processes contains the solutions of Lévy-driven SDEs, certain Feller processes and various classes which are used in mathematical finance. At the end of the talk we present five different applications.
03/17/2016
No seminar (spring recess)
03/24/2016
No seminar (Berkeley-Columbia meeting)
03/31/2016
“Optimal Execution for Orders with a Market on Close Benchmark in Hong Kong”
The closing benchmark in Hong Kong, the median of 5 nominal prices over the last trading minute, has attracted a lot of attention recently, both positive and negative.
In this talk we give an overview of closing auctions globally and discuss the situation in Hong Kong in detail. We suggest a model for the closing auction period which captures the key microstructure features and allows us to derive an optimal strategy for a trader seeking to benchmark themselves against the closing price. We evaluate the theoretical results against real data for the Hang Seng and discuss the conclusions.
This is joint work with Christoph Frei (University of Alberta).
04/07/2016
Ronnie Sircar (Princeton)
“Fracking, Renewables & Mean Field Games”
The dramatic decline in oil prices, from around US$110 per barrel in June 2014 to around US$30 in January 2016, highlights the importance of competition between different energy sources. Indeed, the price drop has been primarily attributed to OPEC's strategic decision not to curb its oil production in the face of increased supply of shale gas and oil in the US. We study how continuous-time Cournot competitions, in which firms producing similar goods compete with one another by setting quantities, can be analyzed as continuum dynamic mean field games. We illustrate how the traditional oil producers may react in counter-intuitive ways in the face of competition from alternative energy sources.
04/14/2016
Title: Polynomial diffusions on the unit ball
Abstract: Polynomial processes are defined by the property that conditional expectations of polynomial functions of the process are again polynomials of the same or lower degree. Many fundamental stochastic processes are polynomial, and their tractable structure makes them important in applications. For instance, every affine process is polynomial. In this talk I will review these notions, and then focus on polynomial diffusions whose state space is the unit ball. This naturally leads to the classical algebraic problem of representing nonnegative polynomials as sums of squares, where I will present new results as well as an open problem. The sum-of-squares property, in turn, is connected to probabilistic properties of the original process, such as pathwise uniqueness and existence of smooth densities.
04/21/2016
(Cancelled) – Alfred Galichon (NYU and Sciences Po)
04/28/2016
Ashkan Nikeghbali (University of Zurich)
“Graphical methods to model dependence and limit theorems”
In this talk, I will consider the simple example of sums of bounded random variables, which typically occurs in the modelling of credit loss portfolios, and give a few conditions on the dependency graphs under which one can prove limit theorems and estimate the tails of the distributions. |