# All Questions
224 views
### XOR cipher with three different ciphertexts and repeated key, key length known. How do I find the plaintexts?
Let us say we have three different plaintexts (all alphabets, A-Z): $x$, $y$ and $z$, each of length $21$. Let the key, $a$, be also of length $21$. Now, what we have is $x \oplus a$, $y \oplus a$ ...
101 views
### Usage of GF(p^m) fields, where p != 2
$GF(2^m)$ Galois fields are widely used in different cryptographic algorithms, for example, in Rijndael. However, $GF(p^m)$ fields are possible with any prime $p$, not only 2, but $GF(2^m)$ fields ...
102 views
### Problem in understanding Blakley's Secret Sharing Scheme
I need to implement Blakley's Secret Sharing Scheme. I have read below mentioned two research papers but still unable to understand how to implement it. Safeguarding cryptographic keys Two Matrices ...
88 views
### Proving that an encryption scheme is susceptible to certain attacks
I'm currently trying to prove the following: where p is a prime number of cryptographic size, prove that e(m) = am + b (mod p), where a and b are private, is open to a known-plaintext attack; e(m) = ...
102 views
### display an image encrypted with ECB [closed]
On wikipedia there is an example of a picture encrypted with ECB: http://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Electronic_codebook_.28ECB.29 I just wanted to know how an encrypted ...
396 views
### Public key encryption and big files with NaCL
I am trying to encrypt big files using NaCL (actually PyNaCL) - see http://nacl.cr.yp.to/box.html After reading some docs, I came up with this prototype : Bob wants to send a big file to Sarah. They ...
124 views
### What kind of attack does the current brokenness of SHA-1 allow?
I understand that the following is becoming feasible, or already is: find any 2 inputs (d1 and d2) for which SHA1(d1) = SHA1(d2). However, it is not entirely clear ...
236 views
### Programming language for modular arithmetic over large numbers [closed]
I'm trying to implement algorithms on integer factorization. This involves dealing with integers of 200-500 digits and doing modular arithmetic over them. Which programming language has inbuilt support ...
159 views
### The perfect way of using IV in CTR mode
I understand that it is necessary to use the same IV for both encryption and decryption in the CTR mode. I'm thinking about the case when I concatenate the secret ...
232 views
### Strength of RSA with OAEP
In our current system we use an encryption solution based on RSA with OAEP padding. Key size is optional but default 2048. What is the general strength of RSA with OAEP? Are there know attacks that ...
36 views
### Question about key verification
I am studying for network security exam. And in the slide about Key Management, under a heading Key Verification I see this: Key Verification Almost all cryptographic algorithms have some weak keys ...
208 views
### CBC with a fixed or random IV
I have two probably quite naive questions: 1) Why, exactly, is it so bad to have a fixed (or predictable) IV in CBC mode? An example would be great! 2) Given 1), why is a random IV better? And if ...
161 views
### Transforming a key into a seed with the most entropy
I'm making a very simple encrypter in C# which is basically a stream cipher. The user enters a key (password), which is converted to a seed for a pseudo-random generator, and the pseudo-random generator ...
465 views
### Finding discrete logarithm with baby-step-giant-step algorithm
I am trying to use the Baby Step Giant Step algorithm to find discrete logarithm in: $$a^x= B \pmod p$$ with using BSGS: $$x = im+j$$ $$a^j = B a^{-im}$$ where $m = \sqrt{p}$ Wikipedia says: ...
234 views
### (Re-)Using deterministic IV in CTR mode / How to: deterministic AES
I am aware of the requirement of an IV to be unique in CTR mode (Why must IV/key-pairs not be reused in CTR mode?). However I wonder if I can use an IV depending on the plaintext deterministically. ...
52 views
### Is AES_CMAC specified to double ciphered zeroes during initialization?
Reading the SIV RFC https://tools.ietf.org/html/rfc5297#page-9, I found that the S2V function starts by calculating the MAC of a "zero" block, doubles it and then XORs further data to the result. Now ...
110 views
### KPA on Feistel cipher?
I heard that DES is technically “broken” because of attacks involving large amounts of known plaintext. These attacks are obviously academic and highly complicated, so for some intuition I was hoping ...
109 views
### Does a cryptosystem provide unconditional security if and only if it provides perfect secrecy?
Is unconditional security and perfect secrecy one and the same thing, i.e a cryptosystem provides unconditional security if and only if it provides perfect secrecy ? I've wondered about the above and ...
91 views
### Looking for a secure PRNG that I can implement in hardware
I'm trying to implement a (sort of) simple PRNG in hardware for fun. My idea would be to allow a user to enter a key using a keypad (or some dip switch settings that can be set and hidden) and to ...
116 views
### Group description in pairing based cryptography
Suppose we have a public key encryption scheme, in which public parameter contains $(p, G, G_t, e, g)$, where $p$ is prime number, $G$ is a (cyclic) group of prime order, $e:G \times G \mapsto G_t$. ...
72 views
### MD-compliant hashes don't really accept arbitrary length input, do they?
When people talk about hash functions, they usually say that they accept arbitrary-length input, but if you actually look at the padding (eg MD-strengthening padding), you see it's like ...
275 views
### Secure double encryption using CPA and CCA
Would you mind giving me any hints, links or ideas about how to improve the security of double encryption and decryption using the CPA and CCA games? It sounds like an interesting question, and ...
47 views
### Repeating something encrypted and non-encrypted?
If one wants to keep the receiver's name non encrypted, but it also appears in the encrypted message - will it leak information? (other than the receiver's name, of course.) Let's assume a "bad" case ...
109 views
### block cipher algorithms with variable block lengths
Rijndael supports block lengths of 128, 192 and 256. AES does not but Rijndael does. What other algorithms support variable block lengths? Or is Rijndael unique in that regard?
83 views
### simulating rc4-256 with rc4-128
OpenSSL supports rc4 with 128-bit keys and rc4 with 40-bit keys. It does not support rc4 with 256-bit keys. My question is... is it possible to modify the state of the pseudo-random generation ...
1k views
### What Java actually stores inside Keystore when generating Keys?
When we use Keytool to generate a Keystore to store private/public keys, what does Java actually store inside the Keystore file? I ...
616 views
### BruteForcer XOR (bfxor.exe) to attack 64-bit keys and longer
First of all, this is not a beginner's question since I already know a good deal about encryption and brute-force attacks. This is also no question on how to code programs for brute-force attacks ...
279 views
### OpenSSL RSA same plaintext but different ciphertext
How does OpenSSL RSA work? I generated a public key (n, e) and a private key (n, d), then I encrypted a file by: ...
56 views
### Condition on Vector Boolean Function to be Bijective
Suppose the vector boolean function be \begin{align} f:F^n_2 \longrightarrow F_2^n \\ (x_1,\dots ,x_n) \longrightarrow (x_2,\dots x_n,g) \\ \\ g:F^n_2 \longrightarrow F_2 \\ (x_1,\dots ,x_n) ...
356 views
### Blowfish ECB mode: Tools for known-plaintext attack?
I'm currently dealing with multiple blowfish-encrypted files that share the same key. All are encrypted using ECB mode judging from their appearance: I don't know what the key is but I know 64 byte ...
241 views
### Using bcrypt derived keys for encryption?
Sorry for this ignorant question but I am currently weighing up the advantages and disadvantages of BCrypt over PBKDF2 and from what I have read BCrypt is considered more secure but from what I have ...
325 views
### Correctness vs Completeness
What is the conceptual difference between the definition of correctness and completeness in verifiable cryptographic protocols? They justify that if a statement is correct then the verification should ...
|
# Math Help - Row operations to solve a system of equations
1. ## Row operations to solve a system of equations
Hi
I need help in solving a question involving row operations by converting the equations in a matrix.
i have attached the question
thank you
2. Originally Posted by rpatel
Hi
I need help in solving a question involving row operations by converting the equations in a matrix.
i have attached the question
thank you
write the augmented matrix of the system and do the row operations until your matrix is in an echelon form, which is: $\begin{bmatrix}1 & 1 & 1 & | & 1 \\ 0 & -1 & -3 & | & -1 \\ 0 & 0 & a-4 & | & 0 \end{bmatrix}.$ you should be able to finish the proof now.
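The attached question itself is not shown, but as an illustration of finishing from the echelon form, here is a small SymPy sketch that solves the quoted augmented matrix symbolically (the matrix is the one in the answer above; note that for $a = 4$ the last row vanishes and the system has a free variable):

    import sympy as sp

    a, x, y, z = sp.symbols('a x y z')
    # Augmented matrix in the echelon form quoted above
    M = sp.Matrix([[1,  1,     1,  1],
                   [0, -1,    -3, -1],
                   [0,  0, a - 4,  0]])

    # Generic solution (assumes a != 4, so the last pivot is nonzero)
    print(sp.linsolve((M[:, :3], M[:, 3]), x, y, z))   # {(0, 1, 0)}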
|
# D.Series of $$\sigma$$
What is special about the sequence below ?
$8505 , 85050 , 850500 , 8505000 , ......$
2 years, 1 month ago
All that i can see is your question and $$\displaystyle \sum _{ n=1 }^{ \infty }{ \frac { { \sigma }_{ 2 }\left( n \right) }{ { n }^{ 6 } } } =\frac { { \pi }^{ 6 } }{ 945 } \times \frac { { \pi }^{ 4 } }{ 90 } =\frac { { \pi }^{ 10 } }{ 85050 }$$
- 2 years ago
Go further .... Try with sigma_3
- 2 years ago
Can you actually find 8505000 or 8505?
- 2 years ago
It will give odd zetas, which is what you do not want, as they are irreducible
- 2 years ago
Sorry, not with odd zetas; try with even zetas, and also change the power of n along with the change in sigma_x
- 2 years ago
That will give odd zetas
- 2 years ago
Yes you are right Joel.
- 2 years ago
I see the pattern
- 2 years ago
It is a geometric progression with common ratio 10
- 2 years ago
- 2 years, 1 month ago
Not in that sense .
- 2 years, 1 month ago
Then, what is the next number?
- 2 years, 1 month ago
I think your number is correct, but I want its speciality
- 2 years, 1 month ago
no '0" after 5 in 1st...... 1 '0' after 5 in 2nd.....Therefore ans is 85050000.I did like this. :P
- 2 years, 1 month ago
Nope, it is related to a Dirichlet series.
- 2 years, 1 month ago
Something related to zeta as the sigma is the divisor function
- 2 years, 1 month ago
You are right ... the Dirichlet series of the sigma function...
- 2 years, 1 month ago
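For anyone landing here later: the identity being alluded to is the standard Dirichlet series of the divisor function (stated here for context),

$$\sum_{n=1}^{\infty} \frac{\sigma_a(n)}{n^s} = \zeta(s)\,\zeta(s-a),$$

so in particular $$\sum_{n=1}^{\infty} \frac{\sigma_2(n)}{n^6} = \zeta(6)\,\zeta(4) = \frac{\pi^6}{945}\cdot\frac{\pi^4}{90} = \frac{\pi^{10}}{85050},$$ which is where the factor 85050 comes from; the other terms of the sequence presumably arise from analogous choices of $$a$$ and $$s$$.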
|
# If the plasticity index of a soil is 45%, then the soil will be:
1. non-plastic
2. very highly plastic
3. low plastic
4. medium plastic
Option 2 : very highly plastic
## Index Properties MCQ Question 1 Detailed Solution
Plasticity index (PI) is the range of water content over which the soil remains in the plastic state. Mathematically defined as,
Plasticity Index = Liquid Limit (wL) – Plastic Limit (wp)
| Plasticity Index | Soil Description |
| --- | --- |
| 0 | Non-plastic |
| 1 – 5 | Slightly plastic |
| 5 – 10 | Low plastic |
| 10 – 20 | Medium plastic |
| 20 – 40 | Highly plastic |
| > 40 | Very highly plastic |
# The ratio of dry unit weight to unit weight of water represents.
1. Specific gravity of soil solids
2. Specific gravity of soil mass
3. Shrinkage limit
4. Shrinkage ratio
Option 4 : Shrinkage ratio
## Index Properties MCQ Question 2 Detailed Solution
Explanation:
Shrinkage ratio, R can be defined in the following ways:
1. It is the ratio of given volume change in soil, expressed as a percentage of the dry volume of soil to a corresponding change in water content above the shrinkage limit.
$$R = \;\frac{{\frac{{{V_1} - {V_2}}}{{{V_d}}}}}{{{w_1} - {w_2}}}$$
V1 is the volume of soil mass at water content w1.
V2 is the volume of soil mass at water content w2.
Vd is dry volume of soil
2. It can also be defined as the mass specific gravity of the soil in a dry state, i.e. the ratio of the dry unit weight to the unit weight of water.
$$R = \frac{{{γ _{d\;}}}}{{{γ _w}}}$$
∴ Statement ‘4’ is true.
3. It is also given by:
$$\frac{1}{{\rm{R}}} = {\rm{\;}}{{\rm{w}}_{\rm{s}}} + \frac{1}{{\rm{G}}}$$ (ws is shrinkage limit)
Specific gravity of soil solids, G = unit wt of soil solids (γs)/unit wt of water (γw)
Specific gravity of soil mass, Gm = unit weight of soil mass (γ)/unit wt of water (γw)
# A fine-grained soil has 60% (by weight) silt content. The soil behaves as semi-solid when water content is between 15% and 28%. The soil behaves fluid-like when the water content is more than 40%. The ‘Activity’ of the soil is
1. 3.33
2. 0.42
3. 0.30
4. 0.20
Option 3 : 0.30
## Index Properties MCQ Question 3 Detailed Solution
The Activity of the soil (A)
The activity of the soil is given by the ratio of the plasticity index (Ip) and the percentage of clay fraction (C) in the soil.
It indicates water absorption capacity or indicates swelling and shrinkage characteristics
$${\bf{A}} = \frac{{{{\bf{I}}_{\bf{P}}}}}{{\bf{C}}}$$
Where,
C = percentage of clay particles less than 2-micron size
IP = plasticity index = WL - WP
WL = liquid limit of the soil, WP = plastic limit of the soil
| Activity number | Type of soil |
| --- | --- |
| A < 0.75 | Inactive |
| A = 0.75 to 1.25 | Normal |
| A > 1.25 | Active |
Calculation:
Given,
Fine-grained soil has 60% (by weight) silt content
∴ % clay = 100 – silt content = 100 – 60 = 40%
Soil behaves as a semi solid between 15 % and 28 %
∴ WS = 15 %, WP = 28 %
Soil behaves fluid-like when the water content is more than 40%
∴ WL = 40 %
IP = 40 – 28 = 12 %
$$A = \frac{{{I_P}}}{{\%\ C}}$$
$$A = \frac{{{12}}}{{40}}$$
Activity = 0.3
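For reference, a minimal Python sketch of the calculation above (all values are taken from the question):

    # Activity of the soil: A = I_P / (% clay fraction)
    silt_pct = 60.0
    clay_pct = 100.0 - silt_pct          # fine-grained soil, so the rest is clay: 40%
    w_L, w_P = 40.0, 28.0                # liquid limit and plastic limit (%)

    I_P = w_L - w_P                      # plasticity index = 12%
    A = I_P / clay_pct
    print(A)                             # 0.3 -> inactive soil (A < 0.75)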
# In a shrinkage limit test, the volume and mass of a dry soil pat are found to be 50 cm3 and 88 g respectively. The specific gravity of the solids is 2.71 and the density of water is 1 g/cc. The shrinkage limit (in % up to two decimal places) is _____
## Index Properties MCQ Question 4 Detailed Solution
Concept:
Shrinkage limit: This is the minimum water content in a soil at which the soil remains in the semi-solid state.
Formula:
$$W_S = W_1 - \frac{\left(V_1 - V_d\right)\rho_w}{M_d}$$
$${{\rm{W}}_{\rm{S}}} = \frac{1}{{\rm{R}}} - \frac{1}{{\rm{G}}}$$
$${\rm{R}} = \frac{{{{\rm{\rho }}_{\rm{d}}}}}{{{{\rm{\rho }}_{\rm{w}}}}} = {\rm{Shrinkage\;ratio}}$$
ρd = Dry density of soil, ρw = Density of water, and G = Specific gravity of soil.
Calculation:
Md = 88 grams, Vd = 50 cm3
ρd = 88/50 g/cc = 1.76 g/cc
$${{\rm{W}}_{\rm{S}}} = \frac{1}{{\rm{R}}} - \frac{1}{{\rm{G}}} = \frac{1}{{\frac{{{{\rm{\rho }}_{\rm{d}}}}}{{{\rho _w}}}}} - \frac{1}{{\rm{G}}} = \frac{{{{\rm{\rho }}_{\rm{W}}}}}{{{{\rm{\rho }}_{\rm{d}}}}} - \frac{1}{{\rm{G}}} = \frac{1}{{1.76}} - \frac{1}{{2.71}} = 0.1990 = 19.90\;\%$$
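The same computation as a short Python sketch (values from the question):

    # Shrinkage limit from dry mass/volume: w_s = 1/R - 1/G, with R = rho_d / rho_w
    M_d, V_d = 88.0, 50.0                # dry mass (g), dry volume (cm^3)
    G, rho_w = 2.71, 1.0                 # specific gravity of solids, water density (g/cc)

    rho_d = M_d / V_d                    # 1.76 g/cc
    w_s = rho_w / rho_d - 1.0 / G
    print(round(100 * w_s, 2))           # ~19.92%, which the solution rounds to 19.90%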
# The water content in a soil at which just shear strength develops is called
1. liquid limit
2. plastic limit
3. elastic limit
4. shrinkage limit
Option 1 : liquid limit
## Index Properties MCQ Question 5 Detailed Solution
Concept:
The minimum water content at which soil is just fully saturated is called Shrinkage limit or it the maximum water content at which soil remains partially saturated.
The water content at which soil transit from semi-solid state to plastic state called plastic limit or it is the water content at which soil behave like plastic i.e. it deforms on application of load and once the load is removed it does not regain its shape i.e. permanently got deformed.
The minimum water content at which soil just starts developing its shear strength is called liquid limit or water content at which soil transit from plastic state to liquid state called liquid limit.
# The standard plasticity chart by Casagrande’s to classify fine-grained soils is shown in the figure.The marked P represents
1. Inorganic clays of high plasticity
2. Organic clays and highly plastic organic silts
3. Organic and inorganic silts and silt clays
4. Clays
Option 1 : Inorganic clays of high plasticity
## Index Properties MCQ Question 6 Detailed Solution
1. For a soil to be organic, its liquid limit on an oven-dried sample must be less than 3/4 of the given liquid limit. Since the oven-dry liquid limit is not given, the organic or inorganic nature cannot be predicted accurately. However, any soil which lies below the A-line is organic in nature; since P lies above the A-line, point P represents an inorganic soil.
2. Degree of Compressibility
WL < 35%: Low compressibility
35% ≤ WL ≤ 50%: Intermediate compressibility
WL > 50%: High compressibility
∴ Point P represents a highly compressible soil.
3. Type of soil (silt or clay)
Soil above the A-line is clay and soil below the A-line is either silt or organic, as per Casagrande.
∴ Point P represents clay.
Finally, point P represents an inorganic clay of high plasticity (highly compressible).
# Assuming specific gravity of soil solids to be 2.5 and dry unit weight of soil to be 19.62 kN/m3. what is the shrinkage limit?
1. 5%
2. 15%
3. 12.5%
4. 10%
Option 4 : 10%
## Index Properties MCQ Question 7 Detailed Solution
Given,
specific gravity of soil solids (G) = 2.5
dry unit weight (γd) = 19.62 kN/m3
Find shrinkage limit (ws) = ?
The shrinkage limit is the smallest value of water content (w) at which the soil mass is completely saturated (S = 100%).
So, at shrinkage limit
S = 1
For finding water content, we know the relation
Se = wsG
given, S = 1, G = 2.5
we know,
$$\gamma_d = \left( \frac G {1 + e}\right)\gamma_w$$
γd = 19.62 kN/m3
γw = 9.81 kN/m3
$$19.62 = \left(\frac {2.5}{1+e}\right)9.81$$
$$\frac {19.62}{9.81} = \frac {2.5}{1 + e}$$
$$1 + e = \frac {2.5} 2$$
e = 0.25
put value of e in relation
Se = wsG
S = 1, G = 2.5, e = 0.25
e = wsG
0.25 = ws × 2.5
$$w_s = \frac {0.25}{2.5} = \frac 1 {10}$$
ws = 10%
So, option 4 is correct.
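The same steps in a small Python sketch (values from the question):

    # Shrinkage limit from G and dry unit weight (fully saturated, S = 1)
    G, gamma_d, gamma_w = 2.5, 19.62, 9.81    # specific gravity; unit weights in kN/m^3

    e = G * gamma_w / gamma_d - 1.0           # from gamma_d = G*gamma_w/(1 + e): e = 0.25
    w_s = e / G                               # from S*e = w*G with S = 1
    print(w_s)                                # 0.1, i.e. 10%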
# A dry soil has mass specific gravity of 1.4. If the specific gravity of solids is 2.8, then the void ratio will be
1. 0.4
2. 0.8
3. 1.0
4. 1.2
Option 3 : 1.0
## Index Properties MCQ Question 8 Detailed Solution
Explanation:
Relation between Mass specific gravity (Gm), True specific gravity (Gs) and void ratio (e) will be:
$${G_m} = \frac{{{G_s}}}{{1 + e}}$$
Calculation:
Given,
Gm = 1.4
Gs = 2.8
$$1.4 = \frac{{2.8}}{{1 + e}}$$
∴ e = 1
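A one-line check in Python (values from the question):

    # Void ratio from mass specific gravity: G_m = G_s / (1 + e)
    G_m, G_s = 1.4, 2.8
    e = G_s / G_m - 1.0
    print(e)   # 1.0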
# Soil has liquid limit = 32, plastic limit = 18, shrinkage limit = 8 and natural moisture content = 22%. What will be its liquidity index and plasticity index?
1. 0.67 and 15%
2. 0.285 and 14%
3. 0.67 and 25%
4. 0.33 and 12%
Option 2 : 0.285 and 14%
## Index Properties MCQ Question 9 Detailed Solution
Concept:
The plasticity Index, IP of soil is given as:
IP = WL - Wp
The Liquidity Index, IL, is given as:
$$({I_L} = \frac{{{W_n} - \;{W_P}}}{{{W_L} - \;{W_P}}})$$
Where
WL is the liquid limit of soil
WP is the Plastic Limit of soil
Wn is water content of soil under given conditions or natural water content
Calculation
Given: WL = 32%; WP = 18 %; Wn = 22%
Now, Plasticity Index is calculated as:
IP = WL - Wp = 32 – 18 = 14%
The Liquidity Index, IL, is calculated as:
$$({I_L} = \frac{{22 - \;18}}{{32 - \;18}})$$ = 0.2857
Important Point:
1. The liquidity index of a soil indicates its degree of consistency: the higher the liquidity index, the softer the soil (the closer it is to the liquid state).
2. If IL ≥ 1, the soil is in the liquid state.
3. If IL = 0, the soil is at the plastic limit; for 0 < IL < 1 it is in the plastic state.
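A short Python sketch of both index calculations (values from the question; the shrinkage limit given there is not needed):

    # Plasticity index and liquidity index
    w_L, w_P, w_n = 32.0, 18.0, 22.0     # liquid limit, plastic limit, natural water content (%)

    I_P = w_L - w_P                      # 14%
    I_L = (w_n - w_P) / (w_L - w_P)      # ~0.286
    print(I_P, round(I_L, 3))            # 14.0 0.286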
# Which one of these methods is used to find the in-situ density?
1. Dry sieve analysis method
2. Chemical method
3. Rubber balloon method
4. Alcohol method
Option 3 : Rubber balloon method
## Index Properties MCQ Question 10 Detailed Solution
Explanation:
Methods to determine the in situ density of soil:
1) Core cutter method:
• It is a field method, suitable for soft and fine-grained clayey soil.
• Not suitable for Stoney, gravely soil, and dry soil.
2) Sand replacement method:
• Used for gravelly, sand, and dry soil.
3) Water replacement method:
• Suitable for cohesive soil only and paraffin wax is used in it.
4) Rubber ballon method:
• The volume of the pit is measured by covering the pit with a plastic sheet and then filling it with water.
• The volume of water used then gives the volume of the soil excavated.
• Used for finding the bulk density of in-situ soil.
Methods used for finding water content:
| Method | Properties |
| --- | --- |
| Oven drying method | Most accurate method and is a standard laboratory method |
| Pycnometer method | More suitable for cohesionless soil, as removal of entrapped air from cohesive soil is difficult |
| Sand bath method | A rapid method, hence not very accurate |
| Torsion balance method | Drying and weighing are done simultaneously, hence one of the more accurate methods |
| Calcium carbide method | Takes just 5–7 minutes; used as a field test |
| Alcohol test | A quick field test; not used for soils containing calcium or organic compounds |
| Radiation method | Gives the water content in the in-situ condition |
# At what temperature is the Hydrometer calibrated ?
1. 45°C
2. 20°C
3. 27°C
4. 32°C
Option 3 : 27°C
## Index Properties MCQ Question 11 Detailed Solution
Explanation:
Hydrometer:
• It is an instrument used for carrying out sedimentation analysis based on Stokes's law for particle size less than 75 μ.
• They are typically calibrated and graduated with one or more scales to measure the density or specific gravity of suspension.
Note:
The calibration temperature is 27°C.
Assumptions
• It is assumed that particles show discrete settling (i.e. grains of different sizes fall through a liquid at different velocities).
• Particles assumed spherical in shape.
• Medium is assumed infinite.
• Particles size range 0.2 μ to 0.2 mm.
Settling velocity,
$$V_s = \frac{(G - 1)\,\gamma_w\, d^2}{18\,\mu}$$
where,
G = specific gravity, γw = unit weight of water, d = diameter of particle, μ = dynamic viscosity
Density by a hydrometer
$$\rho = \frac{R_h}{1000} + 1$$ (Rh = reduced hydrometer reading)
% Finer by a hydrometer,
$$\% N = \frac{100 R_h}{M_D}\left(\frac{G}{G - 1}\right)$$ (MD = mass of solids)
# The plastic limit and liquid limit of a soil are 30% and 42% respectively. The percentage volume change from liquid limit to dry state is 35% of the dry volume. Similarly the percentage volume change from plastic limit to dry state is 22% of the dry volume. The shrinkage ratio will be nearly
1. 4.2
2. 3.1
3. 2.2
4. 1.1
Option 4 : 1.1
## Index Properties MCQ Question 12 Detailed Solution
Concept:
Shrinkage ratio: It is the ratio of a given volume change, expressed as a percentage of the dry volume, to the corresponding change in water content above the shrinkage limit, expressed as a percentage of the weight of the oven-dried soil.
$$\text{SR}=\frac{\frac{\left( {{\text{V}}_{1}}-{{\text{V}}_{2}} \right)}{{{\text{V}}_{\text{d}}}}\times 100}{{{\text{w}}_{1}}-{{\text{w}}_{2}}}=\frac{{{\text{V}}_{1}}-{{\text{V}}_{2}}}{{{\text{V}}_{\text{d}}}\left( {{\text{w}}_{1}}-{{\text{w}}_{2}} \right)}\times 100$$
Where,
V1 = Volume of soil mass at water content (w1)
V2 = Volume of soil mass at water content (w2)
Vd = Volume of dry soil mass
Note: Shrinkage ratio is equal to mass specific gravity of soil in its dry state.
Calculation:
WL = 42%
WP = 30%
As per the given data (with the volume changes expressed as percentages of the dry volume),
VL – Vd = 35
VP – Vd = 22
Also, $$\frac{{{V_L} - {V_d}}}{{{W_L} - {W_s}}} = \frac{{{V_p} - {V_d}}}{{{W_p} - {W_s}}}$$
$$\frac{{35}}{{0.42 - {W_s}}} = \frac{{22}}{{0.3 - {W_s}}}$$
Solving we get WS = 9.69%
Shrinkage Ratio, $$SR = {{{{{V_L} - {V_d}} \over {{V_d}}} \times 100} \over {{w_L} - {w_s}}}$$
∴ $$S.R = \frac{{35}}{{42 - 9.69}} = 1.1$$
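The same solution as a Python sketch (values from the question; the volume changes are percentages of the dry volume):

    # Solve dV_L/(w_L - w_s) = dV_P/(w_P - w_s) for w_s, then SR = dV_L/((w_L - w_s)*100)
    w_L, w_P = 0.42, 0.30
    dV_L, dV_P = 35.0, 22.0

    w_s = (dV_P * w_L - dV_L * w_P) / (dV_P - dV_L)
    SR = dV_L / ((w_L - w_s) * 100.0)
    print(round(100 * w_s, 2), round(SR, 2))   # 9.69 (%), 1.08 ~ 1.1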
# Let G be the specific gravity of soil solids, w the water content in the soil sample, γw the unit weight of water, and γd the dry unit weight of the soil. The equation for the zero air voids line in a compaction test plot is
1. $${{\rm{\gamma }}_{\rm{d}}} = \frac{{{\rm{G}}{{\rm{\gamma }}_{\rm{w}}}}}{{1 + {\rm{Gw}}}}$$
2. $${{\rm{\gamma }}_{\rm{d}}} = \frac{{{\rm{G}}{{\rm{\gamma }}_{\rm{w}}}}}{{{\rm{Gw}}}}$$
3. $${{\rm{\gamma }}_{\rm{d}}} = \frac{{{{\rm{G}}_{\rm{w}}}}}{{1 + {{\rm{\gamma }}_{\rm{w}}}}}$$
4. $${{\rm{\gamma }}_{\rm{d}}} = \frac{{{{\rm{G}}_{\rm{w}}}}}{{1 - {{\rm{\gamma }}_{\rm{w}}}}}$$
Option 1 : $${{\rm{\gamma }}_{\rm{d}}} = \frac{{{\rm{G}}{{\rm{\gamma }}_{\rm{w}}}}}{{1 + {\rm{Gw}}}}$$
## Index Properties MCQ Question 13 Detailed Solution
Concept:
The dry density (γd) of soil is given by,
$${{\rm{γ }}_{\rm{d}}} = \frac{{\left( {1 - {{\rm{η }}_{\rm{a}}}} \right){{\rm{G}}_{\rm{}}}{{\rm{γ }}_{\rm{w}}}}}{{1 + {\rm{e}}}}$$
ηa - Percentage of air voids
G - Specific gravity of soil solids
e - Voids ratio
For zero air voids ⇒ volume of air is zero, hence ηa = 0, ac = 0
Also, S + ac = 1
Hence, S = 1
So above equation becomes, $${{\rm{γ }}_{\rm{d}}} = \frac{{\left( {1 - 0} \right){{\rm{G}}_{\rm{}}}{{\rm{γ }}_{\rm{w}}}}}{{1 + \frac{{{\rm{w}}{{\rm{G}}_{\rm{}}}}}{{\rm{S}}}}}$$
⇒ $${{\rm{γ }}_{\rm{d}}} = \frac{{{{\rm{G}}_{\rm{}}}{{\rm{γ }}_{\rm{w}}}}}{{1 + {\rm{w}}{{\rm{G}}_{\rm{}}}}}$$
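A minimal Python sketch of the zero-air-voids line (the G and w values in the example call are illustrative, not from the question):

    # Zero-air-voids dry unit weight: gamma_d = G * gamma_w / (1 + w*G)
    def gamma_d_zav(G, w, gamma_w=9.81):
        """Dry unit weight (kN/m^3) on the zero-air-voids line at water content w (decimal)."""
        return G * gamma_w / (1.0 + w * G)

    print(round(gamma_d_zav(G=2.7, w=0.10), 2))   # ~20.86 kN/m^3 for these illustrative values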
# If the water content of a fully saturated soil mass is 100%, the void ratio of the sample is:
1. less than specific gravity of soil
2. equal to specific gravity of soil
3. greater than specific gravity of soil
4. independent of specific gravity of soil
Option 2 : equal to specific gravity of soil
## Index Properties MCQ Question 14 Detailed Solution
Concept:
Voids ratio:
It is defined as the ratio of total volume of voids to the volume of solids in given soil mass.
$${\rm{e}} = \frac{{{{\rm{V}}_{\rm{v}}}}}{{{{\rm{V}}_{\rm{s}}}{\rm{\;}}}}$$
e > 0, voids ratio has no upper limit
Water content (W):
Water content also called as moisture content is defined as the ratio of weight of water to the weight of soil solids in the soil mass
$$W = \frac{W_w}{W_s} \times 100$$
For dry soils, W = 0
For moist soils it is typically around W = 60 %
For saturated soils, W > 0
Degree of Saturation (S):
Degree of saturation of a soil is defined as the ratio of volume of water to the volume of voids in the soil mass
$${\rm{S}} = \frac{{{{\rm{V}}_{\rm{w}}}}}{{{{\rm{V}}_{\rm{v}}}}} × 100$$
For dry soils, S = 0
For saturated soils, S = 100 %
For partially saturated soils, 0 % < S < 100 %
The relation between the degree of saturation (S), voids ratio (e), moisture content (W), and specific gravity (G) is given by,
e × S = W × G
Calculation:
Given W% = 100%,
For fully saturated soil mass, we know that
S = 100%
e × S = w × G
In this case, 1 × e = 1 × G
e = G
Hence void ratio is equal to the specific gravity of the soil.
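A tiny Python check of the relation (G = 2.65 is an illustrative value, not from the question):

    # e*S = w*G, so for S = 1 and w = 100% (i.e. w = 1), e equals G
    def void_ratio(w, G, S=1.0):
        return w * G / S

    print(void_ratio(w=1.0, G=2.65))   # 2.65, equal to G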
# Active Clay has an activity greater than?
1. 1.4
2. 5
3. 3.5
4. 2
Option 1 : 1.4
## Index Properties MCQ Question 15 Detailed Solution
Explanation:
Activity of the soil(A):
The activity of the soil is given by the ratio of the plasticity index (Ip) and the percentage of clay fraction (C) in the soil. It indicates water absorption capacity or indicates swelling and shrinkage characteristics.
Activity $$= \frac{{Plasticity\;Index}}{{Per\;cent\;of\;clay\;particles\;finer\;than\;2\;\mu m}}$$
So, C = Percentage of clay fraction finer than 2 μ
Activity-based classification of clays:
| Activity | Classification |
| --- | --- |
| < 0.75 | Inactive |
| 0.75 – 1.25 | Normal |
| > 1.25 | Active |
# The plastic limit and liquid limit of a soil sample are 35% and 70% respectively. The percentage of soil fraction with grain size finer than 0.002 mm is 25. The activity ratio of the sample is
1. 0.6
2. 1.0
3. 1.4
4. 1.8
Option 3 : 1.4
## Index Properties MCQ Question 16 Detailed Solution
The activity of a soil is defined as the ratio of the plasticity index of the soil to the percentage of particles finer than 2 μ (0.002 mm).
$${A_t} = \frac{{{I_P}}}{c}$$
Ip = Plasticity index of the soil = wL – wP
wL = Liquid limit of the soil = 70 %
wP = Plastic limit of the soil = 35 %
Ip = 70 – 35 = 35%
c = 25 %
$${A_t} = \frac{{35}}{{25}} = 1.4$$
# A soil sample has a porosity of 40 percent, its void ratio is
1. 0.06
2. 0.28
3. 0.40
4. 0.66
Option 4 : 0.66
## Index Properties MCQ Question 17 Detailed Solution
Concept:
Void ratio (e): Void ratio is defined as the ratio of the volume of voids to the volume of soil solids.
Porosity (n): Porosity is defined as the ratio of the volume of voids to the total volume of the soil.
The relationship between void ratio and porosity are as follows:
$$\rm{e=\frac{n}{1-n}\; and\; n=\frac{e}{1+e}}$$
Solution:
Porosity, η = 40%
Void, Ratio, $$e = \frac{\eta }{{1 - \eta }}$$
$$e = \frac{{0.40}}{{1 - 0.40}}$$
$$e = \frac{{0.40}}{{0.60}} = \frac{2}{3} = 0.66$$
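The same conversion in Python (value from the question):

    # Void ratio from porosity: e = n / (1 - n)
    n = 0.40
    e = n / (1.0 - n)
    print(round(e, 2))   # 0.67 (listed as 0.66 among the options)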
# Identify the consistency limit corresponding to the smallest water content at which the soil is still in liquid state.
1. Consistency index
2. Shrinkage limit
3. Plastic limit
4. Liquid limit
Option 4 : Liquid limit
## Index Properties MCQ Question 18 Detailed Solution
Explanation:
The minimum water content at which the soil just starts behaving as a liquid is called the liquid limit; equivalently, it is the water content at which the soil transitions from the plastic state to the liquid state.
The minimum water content at which the soil is just fully saturated is called the shrinkage limit; equivalently, it is the maximum water content at which the soil remains partially saturated.
The water content at which the soil transitions from the semi-solid state to the plastic state is called the plastic limit.
Consistency Index, Ic = $$\frac{{{W_L} - \;W}}{{{W_L} - {W_P}}}$$
Where, WL is liquid limit; W = natural water content; WP = Plastic limit
# A pycnometer is used to determine
1. Voids ratio and dry density
2. Water content and void ratio
3. Specific gravity and dry density
4. Water content and specific gravity
Option 4 : Water content and specific gravity
## Index Properties MCQ Question 19 Detailed Solution
The pycnometer test is used to determine the specific gravity of cohesionless soils and the water content.
Dry density or in-situ unit weight is determined by using the following methods:
1. Sand Replacement
2. Core-Cutter
3. Water Displacement
Other methods to determine the water content are:
1. Oven Drying Method
2. Calcium Carbide/Rapid Moisture Method
3. Sand Bath Method
4. Torsion Balance Moisture Meter
# For a soil, if the sensitivity value varies from 2.0 to 4.0, then such a soil is classified as:
1. Extra sensitive
2. Sensitive
3. Moderately sensitive
4. Little sensitive
Option 3 : Moderately sensitive
## Index Properties MCQ Question 20 Detailed Solution
Explanation:
The sensitivity of the soil is given by:
$$S = \frac{{{q_u}}}{{{q_r}}}$$
qu = unconfined compressive strength of soil in an undisturbed state
qr = unconfined compressive strength of soil in a remolded state
Classification of Soil-based on Sensitivity:
| Sensitivity | Nature of soil |
| --- | --- |
| Less than 1 | Insensitive |
| 1 to 2 | Little sensitive |
| 2 to 4 | Moderately sensitive |
| 4 to 8 | Sensitive |
| 8 to 16 | Extra sensitive |
| > 16 | Quick |
∵ Sensitivity (S) = 2 to 4
∴ Soil is classified as Moderately Sensitive
|
# Sequential continuity of linear operators
Let $u\colon L\to M$ be a linear map of locally convex linear topological vector spaces.
Assume that $u$ is sequentially continuous, i.e. maps convergent sequences to convergent ones.
(This notion is formally weaker than the usual topological continuity in the case of non-metrizable spaces.) Let $L_0\subset L$ be a topologically dense linear subspace. Assume that
$u|_{L_0}\equiv 0$.
QUESTION: Does it follow that $u\equiv 0$?
I am interested in rather concrete examples of spaces: spaces of generalized functions on smooth manifolds (say $R ^n$) with the wave-front set contained in a given closed set.
Take $c(\Gamma)$ with $\Gamma$ uncountable under the topology of pointwise convergence. $c_0(\Gamma)$ is dense but not sequentially dense. Let $u$ be the linear functional that vanishes on $c_0(\Gamma)$ and is one at $1_\Gamma$.
|
# Thread: Problems with updating World
1. Originally Posted by mariow
Hello 125125, I downloaded SQLyog and I'm trying to execute my Merge-sql.sql, and it gives me errors: "There was an error while executing a query.
The query and the error message has been logged at:
C:\Users\Mario\AppData\Roaming\SQLyog\sqlyog.err.
Please click on "Open Error File..." to open the error file."
But my sqlyog.err file says: "Query:
INSERT INTO `gameobject` (`guid`, `id`, `map`, `spawnMask`, `phaseMask`, `position_x`, `position_y`, `position_z`, `orientation`, `rotation0`, `rotation1`, `rotation2`, `rotation3`, `spawntimesecs`, `animprogress`, `state`, `VerifiedBuild`) VALUES
(@GUID,188367,571,1,1,5048.28,-4817.63,219.778,2.49582,0,0,0,1,-60,255,1,0)
Error occured at:2014-08-27 11:59:55
Line no.:15
Error Code: 1054 - Unknown column 'VerifiedBuild' in 'field list'
(the same query and error are logged five more times, at 2014-08-27 12:00:06, 12:00:29, 12:02:12, 12:02:20 and 12:02:29)
"
Rename your WDBVerified column to VerifiedBuild.
You can use this query
Code:
`ALTER TABLE gameobject CHANGE WDBVerified VerifiedBuild smallint(5);`
Run this query in world DB :)
2. Originally Posted by 125125
Rename your WDBVerified column to VerifiedBuild.
You can use this query
Code:
`ALTER TABLE gameobject CHANGE WDBVerified VerifiedBuild smallint(5);`
Run this query in world DB :)
I used it, but it gives me this error
Code:
`/* SQL Error (1054): Unknown column 'WDBVerified' in 'gameobject' */`
3. Originally Posted by mariow
I used it, but it gives me this error
Code:
`/* SQL Error (1054): Unknown column 'WDBVerified' in 'gameobject' */`
Well, you told me on Skype that the last column in your gameobject table was state. The last column should be VerifiedBuild. That means you have to add another column to the table. So add a column, name it VerifiedBuild, and set its type to SMALLINT with a length of 5, i.e. smallint(5).
I guess you have even more errors, but this will solve the error for the gameobject table at least.
I'm not sure if you know how to add columns to a table so you can use this query
Code:
```ALTER TABLE gameobject
ADD VerifiedBuild SmallINT(5) NULL DEFAULT 0;```
4. Ok, now I will test it and tell you whether it works.
|
## Notes of Johan Hastad’s talk, jan. 20
Approximating linear threshold predicates
Joint with Cheragchi, Isaksson and Svensson.
1. Resistance
Let ${P}$ be a predicate on ${\{\pm 1\}}$-valued variables, i.e. a ${k}$-variable boolean function. The probability that a random element of ${\{\pm 1\}^k}$ satisfies ${P}$ is ${r_P =2^{-k}|P^{-1}(1)|}$. This gives an easy ${(c,r_P)}$-approximation to MAX P for any ${c}$ and ${P}$ (randomized, but easy to make deterministic using conditional expectations).
Question: is MAX P ${(1,r_P +\epsilon)}$-inapproximable ? is MAX P ${(1-\epsilon,r_P +\epsilon)}$-inapproximable ?
Say ${P}$ is approximation resistant (AR) in the second case, approximation resistant on satisfiable instances (ARSI) in the first case.
Example 1 3SAT is approximation resistant on satisfiable instances, 3LIN is approximation resistant.
AR looks sort of monotone: it is more likely for predicates that accept more inputs. This is not rigorously true, though.
Difference between AR and ARSI: SDP or LP do not care about perfect satisfiability, but hardness reductions do.
Steurer: what about BETWEEN US ?
2. Why study linear predicates ?
Austrin Mossel 2009:
Theorem 1 Take a measure ${\mu}$ on ${\{\pm 1\}^k}$ such that coordinates are independent, vanishing expectation. Assume that
$\displaystyle \begin{array}{rcl} \mathop{\mathbb P}_{\mu} (P(x)\textrm{ is true})=c. \end{array}$
Then MAX P is ${(c-\epsilon,r_P +\epsilon)}$-inapproximable under UGC.
This, like most of this talk, is contained in Raghavendra's work, except that it is not so easy to know when Raghavendra's theorem applies or not.
Lemma 2 A subset ${S\subseteq\{\pm 1\}^k}$ does not support a pairwise independent measure ${\mu}$ iff there exists a quadratic polynomial ${Q}$ without constant term such that ${Q>0}$ on ${S}$.
Proof: Let ${T:x\mapsto (x,x\otimes x)\in {\mathbb R}^{k(k+1)/2}}$. If ${0\in}$ the convex hull of ${T(S)}$, then ${0=\sum_{x\in S}\mu(x)T(x)}$ and ${\mu}$ is the required measure. If not, there is a separating hyperplane, and this provides the required quadratic polynomial. $\Box$
So the maximal ${P}$ not satisfying Austrin-Mossel's assumptions is ${P=sign(Q)}$, with ${Q}$ a quadratic polynomial without constant term. When ${P}$ is quadratic, SDPs seem to be efficient, e.g. for MAX CUT. One may believe that this ${P}$ can be handled using SDP, and that we get a characterization of inapproximable predicates. It does not work out that way. So we lower our ambition and study predicates of the form ${P=sign(L)}$ where ${L}$ is linear, e.g. Majority on an odd number of inputs.
3. Our LP approximations
Take Majority on ${k=3}$ inputs. This becomes a linear integer program, with constraints of the form
$\displaystyle \begin{array}{rcl} y_1 \pm y_2 \pm y_3 \geq 1 \end{array}$
and no objective. Note that ${r_P=1/2}$, by symmetry. Relax integrality into ${y_i \in [-1,1]}$. Let ${\alpha}$ be the LP solution.
Rounding: set ${x_i =1}$ with probability ${\frac{1+\alpha_i}{2}}$. For instance
$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(Maj(x_1 ,x_2 ,x_3))=\frac{1}{2}(\alpha_1 +\alpha_2 +\alpha_3 -\alpha_1 \alpha_2 \alpha_3). \end{array}$
Hard to compute for more variables; the result is a symmetric polynomial. So change the probability from ${\frac{1+\alpha_i}{2}}$ to ${\frac{1+\epsilon\alpha_i}{2}}$. Expanding, for ${\epsilon=\Theta(1/k)}$, one gets ${\mathop{\mathbb E}(Maj(x_1 ,\ldots ,x_k))\sim k^{-3/2}}$. For ${\epsilon}$ constant, with more work, one can get ${\mathop{\mathbb E}(Maj(x_1 ,\ldots ,x_k))\sim k^{-1/2}}$. This shows that the LP algorithm achieves a ${\frac{1}{2}+\Omega(k^{-1/2})}$-fraction of equations, and proves sharpness of Austrin-Mossel's theorem for Majority.
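As a sanity check of the three-variable identity above (note the minus sign on the cubic term), here is a minimal Monte Carlo sketch; the ${\alpha_i}$ values are hypothetical, and each ${x_i}$ is drawn independently as in the rounding scheme:

    import random

    alphas = [0.3, -0.5, 0.8]   # hypothetical LP values in [-1, 1]

    def sample(alpha):
        # x_i = +1 with probability (1 + alpha_i)/2, else -1
        return 1 if random.random() < (1 + alpha) / 2 else -1

    def maj3(x1, x2, x3):
        return 1 if x1 + x2 + x3 > 0 else -1

    trials = 200_000
    emp = sum(maj3(*(sample(a) for a in alphas)) for _ in range(trials)) / trials
    a1, a2, a3 = alphas
    exact = 0.5 * (a1 + a2 + a3 - a1 * a2 * a3)
    print(emp, exact)   # agree up to sampling noise (~0.36 here)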
For general linear functions ${L}$, same LP, same rounding,
$\displaystyle \begin{array}{rcl} \mathop{\mathbb E}(sign(L))=(c_1 \alpha_1 +\cdots+c_k \alpha_k)\epsilon+ O(\epsilon^2), \end{array}$
where the ${c_i}$‘s are the Fourier coefficients of ${P}$.
Theorem 3 If ${\hat{P}_i}$ are usable as weights, i.e. if
$\displaystyle \begin{array}{rcl} P(x)=sign(\sum \hat{P}_i x_i), \end{array}$
then we get a ${(1,\frac{1}{2}+\delta)}$-approximation for some ${\delta>0}$.
This happens roughly when the coefficients ${w_i}$ of ${L}$ satisfy ${\sum_{i}w_{i}^{3}-w_{i}\leq 3\sum_{i}w_{i}^{2}}$ up to some error.
Example 2 The Republic predicate ${P=sign(\frac{k}{3}x_1 +\sum_{i=2}^{k}x_i)}$ has ${\hat{P}_1 =1-c^k}$ and other ${\hat{P}_i =c^k}$. It is unknown whether it is AR or not.
The Monarchy predicate ${P=sign((k-2)x_1 +\sum_{i=2}^{k}x_i)}$ has ${\hat{P}_1 =1-c^k}$ and other ${\hat{P}_i =c^k}$. This has been solved by Austrin, Benabbas and Magen: it is not AR.
4. Lower bounds
We can show that
• Majority is ${(1-\frac{1}{k+1},\frac{1}{2}+\epsilon)}$-inapproximable.
• Majority is ${(1-\epsilon,\frac{1}{2}+\theta(k^{-1/2})+\epsilon)}$-inapproximable.
This is not sufficient to prove that Majority is AR. I wonder whether there is a linear predicate which is AR.
|
# The Naive Bayes Algorithm: Step by Step
The naive Bayes algorithm is fundamental to machine learning, combining good predictive power with a compact representation of the predictors. In many cases, naive Bayes is a good initial choice to estimate a baseline model, which can then be refined or compared to more sophisticated models.
In this post, I’m going to step through the naive Bayes algorithm using standard R functions. The goal is not to come up with production-level code–R already has that–but to use R for the “grunt work” needed to implement the algorithm. This lets us focus on how naive Bayes makes predictions, rather than the mechanics of probability calculations.
### Files Used in This Post
Files for post “The Naive Bayes Algorithm: Step by Step”
### The Idea Behind the Algorithm
We start with a standard machine learning problem: given a data set of features and targets, how can we predict the target of a new query based on its features? Assume we have:
1. A list of customers
2. Measures of their engagement with our website (each with the value Low, Moderate, or High)
3. Whether each customer signed up for Premium service
In classical statistics, we treat the values of the measurements as evidence, and ask whether there are more customers with specific values who signed up for Premium service, or who did not subscribe. For example, assume the measures of engagement are (Low, Low, Moderate), and that 4 customers with those values signed up for Premium service, while 10 did not. We would predict a customer with that level of engagement would not subscribe.
One complication is that there are relatively fewer customers who sign up compared to those that don't: in that case, given the same measurements, we're less likely to predict a customer will sign up because our data set has fewer cases where they did. Bayes' Theorem mitigates this by weighting the measurements with the unconditional probability that the customer did or did not sign up: that is, the probability that a customer subscribed, regardless of their engagement.
This revision leads to better predictions; however, it requires us to calculate the relationships between all the measures, as well as between each measure and the targets. In real data sets, this isn't feasible: instead, we assume the measures are independent of one another, and just calculate the relationship between the measures and the target. This is the naive Bayes model, so called because the assumption of independent measures is made whether or not it is literally true.
### A Step-By-Step Example
For this example, we assume than an accounting firm wants to predict the audit risk of a client based upon the characteristics of their tax return. There are four characteristics that we are interested in, along with their possible values.
1. Percentage of Business that is Cash-Based: “Less than 25%”, “Between 25% & 50%”, “More than 50%”
2. Number of Wire Transfers: “Zero”, “Less than 5”, “5 or More”
3. Home Office: “Yes”, “No”
4. Amount of Tax Owed: "Less than $1,000", "Between $1,000 and $5,000", "More than $5,000"
The target (Audit Risk) can be Low, Moderate, or High. We want to develop a naive Bayesian model that predicts the Audit Risk for a generic client based upon these characteristics.
#### Preliminaries
We start by removing all objects from the workspace, then creating the data set and assigning factor levels to make it easier to interpret.
# NaiveBayesStepByStep.R
# Illustrates the calculations used by the naive Bayes algorithm to predict a query target level given a training data set
# c. 2017 David Schwab
# Preliminaries
rm(list=ls())
options(digits=3) # This makes the numbers display better; feel free to change it.
# Construct the data set
pct.cash <- factor(c(1,1,1,2,2,1,2,2,1,3,3,3,3,2,3,3,1,1,2))
wire.transfers <- factor(c(1,1,2,1,2,1,2,2,1,3,3,3,3,2,3,3,1,1,2))
home.office <- factor(c(1,1,1,1,2,2,1,2,2,2,2,1,1,1,2,1,2,1,2))
tax.owed <- factor(c(1,1,1,2,2,2,1,1,2,3,3,3,3,1,3,3,2,1,1))
audit.risk <- factor(c(1,1,1,1,2,2,2,2,2,3,3,3,3,3,1,2,3,2,1))
# Add the factor levels and make the data frame
levels(pct.cash) <- c("Less than 25%", "Between 25% & 50%", "More than 50%")
levels(wire.transfers) <- c("Zero", "Less than 5", "5 or More")
levels(home.office) <- c("Yes", "No")
levels(tax.owed) <- c("Less than $1,000", "Between $1,000 and $5,000", "More than $5,000")
levels(audit.risk) <- c("Low", "Moderate", "High")
audit.risk.data <- data.frame(pct.cash,wire.transfers,home.office,tax.owed,audit.risk)
#### Calculate Conditional Probabilities for Each Feature
Next, we calculate the conditional probability that each feature is associated with each target. There are $9+9+6+9=33$ separate probabilities, so we will let R do the calculation using table() and apply().
To start, we count the number of times each target level occurs and store it as audit.risk.targets. We use table() to get the counts, then cast them to numeric.
Next, we call apply() with an in-line user-defined function: this function creates a contingency table between one of the features and the target. We transpose the table and divide it by audit.risk.targets, which R recycles for each row. The result, audit.risk.cond.prob, is a list of the conditional probabilities that each feature takes each target level (for housekeeping, we delete the final element, which has audit.risk in both rows and columns).
# Next, count the instances of each target level
audit.risk.targets <- as.numeric(table(audit.risk.data$audit.risk))
# Now, calculate the conditional probabilities needed to predict a target with the naive Bayes model
audit.risk.cond.prob <- apply(audit.risk.data, 2, function(x){
t(table(x, audit.risk.data$audit.risk)) / audit.risk.targets
})
audit.risk.cond.prob$audit.risk <- NULL # Remove extraneous data
#### Calculate Prior Probabilities for Each Target Level
All that's left is to calculate the prior probabilities for each target level: this is just the count of each level divided by the total number of data points. We also create a separate display variable for easier interpretation.
# Calculate the target priors
audit.risk.priors <- audit.risk.targets / nrow(audit.risk.data)
audit.risk.priors.display <- data.frame(target=c("Low", "Moderate", "High"), priors=audit.risk.priors)
#### Using the Model to Make a Prediction
To make a prediction, we display both the conditional and prior probabilities. Here is how they will look: note that we display each feature separately to arrange the columns in the correct order, using the levels defined earlier.
> audit.risk.priors.display
    target priors
1      Low  0.316
2 Moderate  0.368
3     High  0.316
> audit.risk.cond.prob$pct.cash[,levels(pct.cash)]
           Less than 25% Between 25% & 50% More than 50%
  Low              0.500             0.333         0.167
  Moderate         0.429             0.429         0.143
  High             0.167             0.167         0.667
> audit.risk.cond.prob$wire.transfers[,levels(wire.transfers)]
            Zero Less than 5 5 or More
  Low      0.500       0.333     0.167
  Moderate 0.429       0.429     0.143
  High     0.167       0.167     0.667
> audit.risk.cond.prob$home.office[,levels(home.office)]
             Yes    No
  Low      0.667 0.333
  Moderate 0.429 0.571
  High     0.500 0.500
> audit.risk.cond.prob$tax.owed[,levels(tax.owed)]
           Less than $1,000 Between $1,000 and $5,000 More than $5,000
  Low                 0.667                     0.167            0.167
  Moderate            0.429                     0.429            0.143
  High                0.167                     0.167            0.667
To make a prediction, consider the query q = (Between 25% & 50%, Less than 5, No, Less than $1,000). We need to calculate the naive Bayes estimate for each target level; the prediction is the target level with the greatest value.
By inspection, we can see that when the target level is Low, the conditional probabilities are .333, .333, .333, .667: the joint conditional probability is their product, which is 0.025. The prior probability of Low is 0.316, so we multiply the product by this to get the naive Bayes estimate of 0.008. Similar estimates for Moderate and High are 0.017 and 0.001, respectively. The level Moderate has the greatest value, so that is the naive Bayes prediction for this query.
NOTE: In this example, the naive Bayesian estimates are not the actual posterior probabilities of each target level: however, their relative ranking is identical to the actual probabilities, so we can be confident that Moderate is the correct estimate.
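As a quick cross-check of this arithmetic, here is a small sketch in Python (rather than R, purely for compactness), using the rounded probabilities from the tables above:

    # Naive Bayes estimates for q = (Between 25% & 50%, Less than 5, No, Less than $1,000)
    priors = {"Low": 0.316, "Moderate": 0.368, "High": 0.316}
    cond = {  # conditional probabilities read off the R tables above
        "Low":      [0.333, 0.333, 0.333, 0.667],
        "Moderate": [0.429, 0.429, 0.571, 0.429],
        "High":     [0.167, 0.167, 0.500, 0.167],
    }

    estimates = {}
    for level, probs in cond.items():
        joint = 1.0
        for p in probs:
            joint *= p           # naive independence assumption: multiply the conditionals
        estimates[level] = joint * priors[level]

    print(estimates)                          # Low ~0.008, Moderate ~0.017, High ~0.001
    print(max(estimates, key=estimates.get))  # Moderate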
### Conclusion
As we can see, the naive Bayes algorithm allows a complex data set to be represented by relatively few predictors. It also performs well in many different applications, and has the intuitive appeal of predicting the most probable target level based on the relative frequency of the different target levels, as well as the set of features. For production work, the e1071 package provides the naiveBayes() function: you can find a good overview of its use (as well as more detail on the theory behind the algorithm) here.
|
## What im working on right now
Discussion related to the exporter plugin for Autodesk 3ds Max.
### Re: What im working on right now
I updated the code in my branch on Mercurial. It breaks materials, but I should have that back soon. I now support .lxo files, and the scripted PLY exporter is more functional.
PerPixel
Posts: 107
Joined: Fri Jul 03, 2009 7:13 am
### Re: What im working on right now
is this latest version with materials support able to replace the old one from Hedphelym?
patro
Posts: 1988
Joined: Fri Feb 29, 2008 9:06 pm
Location: mount Etna
### Re: What im working on right now
tested.
the plugin is unable to export a plane with a light material. It always aborts the export if the light plane is on a hidden layer.
I suppose the omnis are supported... but they are written as omni instead of point.
Material is exported.
patro
Posts: 1988
Joined: Fri Feb 29, 2008 9:06 pm
Location: mount Etna
### Re: What im working on right now
Now I'm confused... Should I render hidden objects or not...?
Solved the omni thing. Objects with light material work just fine here. Anyway, I'll reconsider that concept. In Blender 2.5 you just have to enable the Emission value of any material to make it work.
PerPixel
Posts: 107
Joined: Fri Jul 03, 2009 7:13 am
### Re: What im working on right now
Skipping hidden objects on export is normal (mental ray / scanline do this too).
I have not had much time to test these past days due to 3 projects at the same time at work,
but I hope to find time soon. Also to get back to scripting.
Stig Atle Steffensen - http://stigatle.no | Follow me on GNU social: https://quitter.no/stigatle
LuxMax integrated:
http://www.luxrender.net/wiki/Luxmax_integrated
hedphelym
Posts: 1390
Joined: Mon Aug 18, 2008 7:37 am
Location: Kristiansand Norway
### Re: What im working on right now
PerPixel wrote:Objects with light material work just fine here.
there was an error in the lux export lights script, iirc at line 75 - $.transform. will check and report asap
btw: what is supported at the moment? geometries, max lights... thanks
patro
Posts: 1988
Joined: Fri Feb 29, 2008 9:06 pm
Location: mount Etna
### Re: What im working on right now
Sorry PerPixel, at Max start I was able to export omni and direct lights, but after the first export I'm no longer able to export the direct lights, hidden or unhidden. I always get the same error in fn_export_maxlights.ms at line 75: tm = $.transform
edited:
I have to start the scene manually because the LuxRender log shows an error that it is unable to read the .lxs file.
patro
Posts: 1988
Joined: Fri Feb 29, 2008 9:06 pm
Location: mount Etna
### Re: What im working on right now
Fixed the light export error. Do you have a space in your export path?
Now almost everything the previous script had is supported, I believe. Add a luxCamera and materials. I'm just missing the sky light and sun.
PerPixel
Posts: 107
Joined: Fri Jul 03, 2009 7:13 am
### Re: What im working on right now
If you mean space on the hard disk: yes, 300 GB.
If you mean a space in the name of the file: yes, "luxball5 & luxmax1_000000.lxs"
export path: C:\tmp
patro
Posts: 1988
Joined: Fri Feb 29, 2008 9:06 pm
Location: mount Etna
### Re: What im working on right now
I just updated the script to support spaces in filenames. Also try the Luxconsole option in the External Engine configuration: change Gui to Console in the drop-down menu.
Sun and Sky are back too.
PerPixel
Posts: 107
Joined: Fri Jul 03, 2009 7:13 am
|
# Homework Help: Inverse of exponential function
1. Sep 17, 2011
### mark187
1. The problem statement, all variables and given/known data
Give the inverse of this function
N = f(L) = 18^(16-8L)
The answer has to be filled in in Maple
2. Relevant equations
3. The attempt at a solution
N = 18^(16-8L)
16-8L=ln(N)/ln(18)
-8L=(ln(N)/ln(18))-16
L=-ln(N)/(8*ln(18))+2
Is this correct?
When i fill this in in maple it's incorrect.
I've really no clue why it is wrong.
2. Sep 17, 2011
### ehild
L=-ln(N)/(8*ln(18))+2 is the same function as the original. Choose L the independent variable and N the dependent one. (Just exchange L and N).
ehild
3. Sep 17, 2011
### Mentallic
Yes it's correct. What are you typing into maple exactly? Your syntax is probably slightly off.
edit: Sorry I didn't notice that you forgot to swap your variables.
4. Sep 17, 2011
### mark187
thanks! It's also mentioned that it should be simplified as much as possible.
Does that mean that the 8*ln(18) could be simplified to 8*ln(2*9) and more?
5. Sep 17, 2011
### Mentallic
It's as simple as it can get, unless you believe $\ln{\frac{1}{x}}$ is more simple than $-\ln{x}$ you shouldn't be changing anything.
$\ln{18}$ is definitely more desirable than $\ln{9\cdot2}$
6. Sep 17, 2011
### Ray Vickson
When I let Maple solve the equation for L it gives me exactly what you wrote. Why do you think Maple thinks it is incorrectÉ What type of error message are you receivingÉ
That annoying É is actually a question mark, but when I type it in it gives me that accented E; that only seems to happen when I access this website through Google Chrome!
RGV
7. Sep 17, 2011
### mark187
Well, this assignment was given in a little online test.
The point is that I can't see why the answer was wrong.
I have 2 chances, so for the second chance I will try to swap the variables.
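For the record, the algebra can be machine-checked. A small SymPy sketch, using the function as reconstructed above (N = 18^(16-8L)); as ehild notes, writing the inverse as a function of N is just a relabeling of variables:

    import sympy as sp

    L, N = sp.symbols('L N', positive=True)
    f = 18**(16 - 8*L)

    inv = sp.solve(sp.Eq(N, f), L)[0]
    print(inv)                           # 2 - log(N)/(8*log(18)), up to rearrangement
    print(sp.simplify(f.subs(L, inv)))   # N, confirming the inverse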
|
What is the right word to describe something more than "great"?
9
I need to write short email that will state something like:
..that will turn a good company into a ... company.
What would be a proper word that I can use?
8
".. into a supercalifragilisticexpialidocious company"
– CowperKettle – 2016-03-02T08:39:53.367
7You might want to focus on the qualities of a company this would improve, instead of just declaring that it would be better. For instance, 'a more successful company' or 'a more profitable company' Or '...that would make it a better place to work'. – AJFaraday – 2016-03-02T11:09:50.010
1An idiom for describing an increase in quality is "taking x to the next level"; e.g "Increasing our user base is key to taking our business to the next level." – Harrison Paine – 2016-03-02T14:57:06.187
Fantastic! – MikeTheLiar – 2016-03-02T15:05:11.430
if you are allowed to be informal maybe epic is a good choice – user13267 – 2016-03-03T01:54:22.027
13
Hrm, just a few observations on some of the answers here, from an American English speaker:
Many English versions of the word you're looking for were historically used to refer to things that are large in size, and can sometimes cause confusion on whether it's exceptionally good, or exceptionally large. In context the difference is often clear, but when describing a company (which could be made larger) the meaning may not be so clear. Be careful when using words like "tremendous" or even "great" / "grand".
Similarly, many English words come from descriptions of things that are so exceptional that they are better than real life. "Fantastic" can frequently be used to describe something that has an element of fantasy, which it's unlikely that your company would have. "Legendary" and "epic" can suffer from the same. Additionally, due to its (over)use in modern informal speech I think "epic" has lost much of its weight and may also appear informal, as user13267 mentioned.
"Marvelous" and "phenomenal" aren't bad suggestions at all, but CAN have an air of supernatural greatness (i.e. more "great" than something can ever be in real life) similar to the previous bunch. Personally, I think these would probably work in the given context.
"Exemplary" isn't bad, but may sometimes imply something that sets an example for others to follow. This might fit your case, however.
I like Thomas Mario Adams III's suggestion of "outstanding", particularly because this company would literally "stand out" from among other merely "great" companies. It's also a term that is very familiar to corporate readers. You may want to add emphasis by saying the company will become "truly outstanding".
Very nice answer! – Hanky Panky – 2016-03-03T08:44:24.043
Good observations of other aspects. Also: excellent (highly performing), wonderful/awesome (full of wonder/awe), magnificent (majestic, worthy of a ruler)... there are numerous positive words, but a common issue is having sort of extra little meaning attached to them. Even terrific looks like the word "terrifyingly". So there might be some cases where the word isn't quite the perfect match, whereas "great" is more universal (mostly meaning "good" although there is a smaller implication of being "large", using "great" to mean "good" about something "small" would typically seem just fine). – TOOGAM – 2016-03-03T14:18:42.857
16
Consider excellent
very good of its kind, eminently good
or outstanding.
extremely good or excellent
Note that these words start with vowels, so the 'a' turns into 'an'.
If we're talking about a company, I would definitely go with "outstanding". – daboross – 2016-03-03T02:47:21.180
8
You may consider
Exceptional
From the online dictionary, definition 2:
unusually excellent; superior: an exceptional violinist.
4
Phenomenal
highly extraordinary or prodigious; exceptional:
Source
astounding, exceptional
Source
Usage example:
...will turn a good company into a phenomenal success!
2
Consider
an exemplary company
as a way to express that this company will be turned into "something more than great"
1
Think of marvelous, fantastic, the best. The best is fine.
3
These are legitimate words, but I don't think you'll find marvelous or fantastic used to describe a corporation all that often. (There are a few, but not many.) These suggestions wouldn't be among my primary choices.
– J.R. – 2016-03-02T15:50:26.930
'Marvelous company' might also refer to a group of people. That Google Ngram won't show how big a percentage refers to 'companies' in the sense of 'business entities'. – CowperKettle – 2016-03-03T05:38:45.830
1
Superb, Epic, Legendary
This answer over on English.SE has a good amount of information on intensifiers like this that is worth a read.
0
Many companies today are referred to as "great" companies, but great has become cliche. What I'm saying is even better than great: it's an "outstanding" company, an outlier among the great ones.
0
I would suggest the word remarkable
Remarkable:
1. notably or conspicuously unusual; extraordinary: a remarkable change.
2. worthy of notice or attention.
|
# 0/0= 0?
0/0= 0?
Guest Oct 25, 2017
#1
No! 0/0 is undefined.
hectictar Oct 25, 2017
#2
When dividing, for example, $$a\over b$$, we ask the question, "What is x if $$a=bx$$?" For $$0\over0$$, this becomes $$0x=0$$. Here we have a problem. This has infinitely many solutions! Therefore, 0 divided by 0 is indeterminate.
Mathhemathh Oct 25, 2017
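To make the distinction between the two answers explicit, compare the two division questions side by side, using the same $$a=bx$$ reasoning as above:

For $$\frac{a}{0}$$ with $$a\ne 0$$, we need $$0\cdot x=a$$, which no $$x$$ satisfies, so the quotient is undefined.

For $$\frac{0}{0}$$, we need $$0\cdot x=0$$, which every $$x$$ satisfies, so no single value can be assigned; the form is indeterminate (and is also commonly called undefined).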
|
## careless850 3 years ago Simplify.. again!
1. karatechopper
what to simplify?
2. careless850
$\frac {x^{2}-12x+20}{x-10}$
3. phi
you have to factor the top.
4. careless850
So, it'd be, x - 3x + 5?
5. phi
List the factors of the 3rd number, 20: 1,20 2,10 4,5. Do any add up to 12? If so, that is the pair to use. The sign of the 2nd number (-12) tells you to make the factors negative. I would use (x-2)(x-10)
6. careless850
Alright.. I didn't know about that part.
7. phi
once you have it factored up top, we can cancel the (x-10) from top and bottom
8. careless850
but wait..
9. careless850
What if I don't know the factors with numbers?
10. phi
I think that means you have to practice.... Here is a video, when you have time http://www.khanacademy.org/math/algebra/polynomials/v/factoring-quadratic-expressions
11. careless850
but I'm Simplifying Algebraic Ratios
12. phi
That means they think you know algebra, including factoring quadratics
13. careless850
Would the answer be x-2?
14. phi
Yes. You can test the answer: pick a simple number like x=0. If your answer is correct, (x-2) with x=0 is 0-2 = -2. Replace x=0 into the original problem and you get 20/-10 = -2. It matches!
15. careless850
I see. And I lied about the factoring thing, I can factor numbers that are even or divisible by 5
16. phi
If the video helps, notice that Khan has LOTS of videos on pretty much any topic you might have a question on.
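A small sympy sketch can double-check both the factoring and phi's x=0 spot check:

```python
import sympy as sp

x = sp.symbols('x')
expr = (x**2 - 12*x + 20) / (x - 10)

print(sp.factor(x**2 - 12*x + 20))  # (x - 10)*(x - 2)
print(sp.cancel(expr))              # x - 2

# phi's sanity check: substitute a simple value like x = 0
print(expr.subs(x, 0))              # 20/(-10) = -2, matching x - 2 at x = 0
```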
|
# Handling movement on sloped surfaces - clamping character to sloped surface
I've noticed that a lot of people seem to have this issue but I've yet to find an actual working solution - when a rigidbody-based character controller (I'm not using Unity's character controller) moves down a sloped surface, they will bounce/bunny hop on the way down instead of staying on the surface. I'm building a 3d platformer and ran into this issue - I've tried a fair amount of things but nothing seems to work cleanly. I managed to find a 2D solution here but I can't seem to get it to work cleanly in 3D. Here is my code:
void NormalizeSlope()
{
// Attempt vertical normalization
if (isGrounded)
{
RaycastHit hit;
if (Physics.Raycast(floorCheck.position, Vector3.down, out hit, groundCheckDistance))
{
if(hit.collider != null && Mathf.Abs(hit.normal.x) > 0.1f)
{
// Apply the opposite force against the slope force
// You will need to provide your own slopeFriction to stabilize movement
rigidbody.velocity = new Vector3(rigidbody.velocity.x - (hit.normal.x * 0.4f), rigidbody.velocity.y, rigidbody.velocity.z); // 0.4f is a hand-tuned slope-friction constant
//Move Player up or down to compensate for the slope below them
Vector3 pos = transform.position;
pos.y += -hit.normal.x * Mathf.Abs(rigidbody.velocity.x) * Time.deltaTime * (rigidbody.velocity.x - hit.normal.x > 0 ? 1 : -1);
transform.position = pos;
}
}
}
}
With that, I get varying results on surfaces with different slopes. Also, my character jitters. On some slopes, my character even slowly inches their way up the surface. Has anyone else run into this issue? Does anyone have a working solution for this problem?
If you want your player to slide down, it would be better to forget about rotation (if it's a simulation of real-life sliding) and actually use the rigidbody and gravity. Then, right after the slope, when the normal changes, make your player stick to hit.point, where hit is a RaycastHit variable. That will make your character stick to any surface when the slope changes. But remember that you may want to preserve its speed when using velocity or AddForce; in that case, change only the y position and also set the y velocity to 0.
|
# Deep Beer Designer
This post is from Ieuan Evans, who has created a unique example combining deep learning with LSTM and beer. (Please drink responsibly!) I love craft beer. Nowadays, there are so many choices that it can be overwhelming, which is a great problem to have! Lately I have found myself becoming lazy when it comes to carefully selecting beers in a bar, and I tend to just go for the beer with the best sounding name. I started to wonder: Could MATLAB automatically analyze a list of names and select a beer for me? Why stop there? Could I get MATLAB to design a unique beer just for me? In this example, I will show how to classify beer styles given the name, how to generate new beer names, and even automatically generate some tasting notes too.
Happe Hill Hefeweizen
"A rich and fruity traditional Marthe Belgian yeast. Full-bodied with nutty undertones and a slightly sweet fruit flavor."
(MATLAB-generated name and tasting notes. Not bad!)
## Import Data
There are two data sources available for this example:
Load the craft beers data from Kaggle.
rng(0)
filename = "beers.csv";
dataKaggle = readtable(filename,'TextType','string','Encoding','UTF-8');
View a random sample of the data.
idx = randperm(size(dataKaggle,1),10);
disp(dataKaggle(idx,["name" "style"]))
Name                                Style
"Walloon (2014)"                    "Saison / Farmhouse Ale"
"Yoshi's Nectar"                    "California Common / Steam Beer"
"1327 Pod's ESB"                    "Extra Special / Strong Bitter (ESB)"
"Parade Ground Coffee Porter"       "American Porter"
"Perpetual Darkness"                "Belgian Strong Dark Ale"
"La Frontera Premium IPA"           "American IPA"
"Canyon Cream Ale"                  "Cream Ale"
"Pace Setter Belgian Style Wit"     "Witbier"
"Squatters Hop Rising Double IPA"   "American Double / Imperial IPA"
"Good Vibes IPA"                    "American IPA"
Load the data from the Cambridge Beer Festival, which in addition to names and styles, also contains tasting notes. Extract the data using the HTML parsing tools from Text Analytics Toolbox.
url = "https://www.cambridgebeerfestival.com/products/cbf44-beer";
code = webread(url);
tree = htmlTree(code);
Extract the beer names.
subtrees = findElement(tree,"span[class=""productname""]");
name = extractHTMLText(subtrees);
Extract the tasting notes.
subtrees = findElement(tree,"span[class=""tasting""]");
notes = extractHTMLText(subtrees);
dataCambridge = table(name,notes);
Visualize the tasting notes in a word cloud. The wordcloud function in Text Analytics Toolbox creates word clouds directly from string data.
figure
wordcloud(notes);
title("Tasting Notes")
## Classify Beer Style
First, using the Kaggle data, create a long short-term memory (LSTM) deep learning model to classify the beer style given the name. Visualize the distribution of the beer styles using a word cloud.
textData = dataKaggle.name;
labels = categorical(dataKaggle.style);
figure
wordcloud(labels);
title("Beer Styles")
As you can see in the wordcloud, the styles are very imbalanced, with some styles containing only a few instances. To improve the model, remove the styles with fewer than 5 instances, and then split the data into 90% training and 10% testing partitions. (The details of the data preparation can be found in the full example file) Convert each beer name to a sequence of integers, where each integer represents a character. The responses are the beer styles.
YTrain = labelsTrain;
YTest = labelsTest;
YTrain(1:6)
ans = 6x1 string array American Pale Lager American IPA American Double / Imperial IPA American IPA Oatmeal Stout
Next create the deep learning network architecture. Use a word embedding layer to learn an embedding of characters and map the integers to vectors. Use a bidirectional LSTM (BiLSTM) layer to learn bidirectional long-term dependencies between the characters in the beer names. To learn stronger interactions between the hidden units of the BiLSTM layer, include an extra fully connected layer of size 50. Use dropout layers to help prevent the network from overfitting.
numFeatures = 1;
embeddingDimension = 100;
numCharacters = max([XTrain{:}]);
numClasses = numel(categories(YTrain));
layers = [
sequenceInputLayer(numFeatures)
wordEmbeddingLayer(embeddingDimension,numCharacters)
bilstmLayer(200,'OutputMode','last')
dropoutLayer(0.5)
fullyConnectedLayer(50)
dropoutLayer(0.5)
fullyConnectedLayer(numClasses)
softmaxLayer
classificationLayer];
Specify the training options.
options = trainingOptions('adam', ...
'MaxEpochs',100, ...
'InitialLearnRate',0.01, ...
'Shuffle','every-epoch', ...
'ValidationData',{XTest,YTest}, ...
'ValidationFrequency',80, ...
'Plots','training-progress', ...
'Verbose',false);
Train the network.
beerStyleNet = trainNetwork(XTrain,YTrain,layers,options);
Here, we can see that the model overfits. The model has effectively memorized the training data, but not generalized well enough to get as high accuracy on the test data. This is perhaps expected: lots of beer names don't give that much away when it comes to the style, so the network has little to work with. Some are easy to classify since they contain the style of the beer in the name. For example, what style of beer do you think the following are? Can you beat the classifier?
idx = [1 4 5 8 9 10 12 14 15 17];
textDataTest(idx)
ans = 10x1 string array "Sophomoric Saison" "Divided Sky" "Honey Kolsch" "Alaskan Amber" "California Lager" "Brotherhood Steam" "Angry Orchard Apple Ginger" "Long Leaf" "This Season's Blonde" "Raja"
Compare your guesses vs. predictions made by the network vs. the correct labels
YPred = classify(beerStyleNet,XTest);
disp(table(textDataTest(idx),YPred(idx),YTest(idx),'VariableNames',["Name" "Prediction" "True"]))
Name                             Prediction                       True
"Sophomoric Saison"              Saison / Farmhouse Ale           Saison / Farmhouse Ale
"Divided Sky"                    American Amber / Red Ale         American IPA
"Honey Kolsch"                   Kölsch                           Kölsch
"Alaskan Amber"                  American Amber / Red Ale         Altbier
"California Lager"               American Amber / Red Lager       American Amber / Red Lager
"Brotherhood Steam"              American Pale Wheat Ale          California Common / Steam Beer
"Angry Orchard Apple Ginger"     Cider                            Cider
"Long Leaf"                      Munich Helles Lager              American IPA
"This Season's Blonde"           Cream Ale                        American Blonde Ale
"Raja"                           Fruit / Vegetable Beer           American Double / Imperial IPA
So, can I use this network to select a beer for me? Suppose the test set contains all the beers available at a bar. I tend to go for some kind of IPA. Let's see which of these beers are classified as an IPA. This could be any of the class labels containing "IPA".
classNames = string(beerStyleNet.Layers(end).Classes);
idx = contains(classNames,"IPA");
classNamesIPA = classNames(idx)
ans = 5x1 string array "American IPA" "American IPA" "American White IPA" "Belgian IPA" "English India Pale Ale (IPA)"
[YPred,scores] = classify(beerStyleNet,XTest);
idx = contains(string(YPred),"IPA");
selection = textDataTest(idx);
Let's see what proportion of these actually are labelled as some kind of IPA.
accuracyIPA = mean(contains(string(YTest(idx)),"IPA"))
accuracyIPA = 0.7241
View the top 10 predictions sorted by classification score. To make it even more exciting, let's exclude any names with "IPA" in the name.
topScores = max(scores(idx,:),[],2);
[~,idxSorted] = sort(topScores,'descend');
selectionSorted = selection(idxSorted);
% remove with IPA in the name
idx = contains(selectionSorted,["IPA" "India Pale Ale"]);
selectionSorted(idx) = [];
selectionSorted(1:10)
ans = 10x1 string array "American Idiot Ale (2012)" "Citra Faced" "Hopped on the High Seas (Calypso)" "Bengali Tiger" "The Sword Iron Swan Ale" "The 26th" "Isis" "En Parfaite Harmonie" "Sanctified" "Sockeye Maibock"
Looks like some good suggestions!
## Generate New Beer Names
We have created a deep network that does a reasonable job of finding a beer for me. My next desire is for MATLAB to design a beer for me. First it needs a name. To do this, I'll use an LSTM network for sequence forecasting which predicts the next character of a sequence. To improve the model, I'll also include the beer names from the Cambridge Beer Festival in the UK. Validation data is not helpful here, so we will train on all the data.
textData = [dataKaggle.name; dataCambridge.name];
To help with the generation, replace all the space characters with a "·" (middle dot) character, insert a start of text character at the beginning, and an end of text character at the end.
startOfTextCharacter = compose("\x0002");
whitespaceCharacter = compose("\x00B7");
endOfTextCharacter = compose("\x2403");
For the predictors, insert the start of text character before the beer names. For the responses, append the end of text character after the beer names. Here, the responses are the same as the predictors, shifted by one time step.
textDataPredictors = startOfTextCharacter + replace(textData," ",whitespaceCharacter);
textDataResponses = replace(textData," ",whitespaceCharacter) + endOfTextCharacter;
XTrain = cellfun(@double,textDataPredictors,'UniformOutput',false);
YTrain = cellfun(@(Y) categorical(cellstr(Y')'),textDataResponses,'UniformOutput',false);
View the first sequence of predictors and responses.
XTrain{1}
ans = 1x9 2 80 117 98 183 66 101 101 114
YTrain{1}
ans = 1x9 categorical P u b · B e e r ␃
Construct the network architecture.
numFeatures = 1;
numClasses = numel(categories([YTrain{:}]));
numCharacters = max([XTrain{:}]);
layers = [
sequenceInputLayer(numFeatures)
wordEmbeddingLayer(200,numCharacters)
lstmLayer(400)
dropoutLayer(0.5)
fullyConnectedLayer(numClasses)
softmaxLayer
classificationLayer];
Specify the training options.
options = trainingOptions('adam', ...
'InitialLearnRate',0.01, ...
'Shuffle','every-epoch', ...
'Plots','training-progress', ...
'Verbose',false);
Train the network.
beerNameNet = trainNetwork(XTrain,YTrain,layers,options);
Here, the network might look like it is not doing particularly well. Again, this might be expected. To get high accuracy, the network must generate the training data exactly. We don't want the network to overfit too much because the network will simply generate the training data. Generate some beer names using the generateText function, which is included in the full example file at the end of the post.
numBeers = 30;
generatedBeers = strings(numBeers,1);
for i = 1:numBeers
generatedBeers(i) = generateText(beerNameNet,startOfTextCharacter,whitespaceCharacter,endOfTextCharacter);
end
Sometimes, the network might simply predict beer names from the training data. Remove them.
idx = ismember(generatedBeers,textData);
generatedBeers(idx) = [];
View the generated beers.
generatedBeers
generatedBeers = "Firis Amber" "Sprecian Claisper" "Worther Pale Ale" "Ma's Canido Winter Ale" "Hop Roust" "Honey Fuddel Pilsner" "Slowneck Lager" "CuDas Colora Lager" "No Ryer Pilsner" "Dark Light IPA"
## Generate Tasting Notes
We have our beer names; now we need some tasting notes. Similar to the name generator, create a tasting note generator from the Cambridge Beer Festival notes.
textData = dataCambridge.notes;
As before, to help with the name generation, replace all the space characters with a "·" (middle dot) character, insert a start of text character at the beginning, and an end of text character at the end. Once again, define the network architecture, specify the training options, and train the network. (details are found in the main example file - link at the very end of this post) Generate some tasting notes using the generateText function, listed at the end of the example.
numBeers = 5;
for i = 1:numBeers
generatedNotes = generateText(beerNotesNet,startOfTextCharacter,whitespaceCharacter,endOfTextCharacter)
end
"This pale ale has a good assertive pale and full-bodied and lagerong aftertaste." "A full-bodied Imperial stout with flavour with a slight but fuity bite from The Fussion of roasted, malty flavours and a delicate character that is also present in the aftertaste with a silk stout. Unfined." "Light copper traditional bitter with good malt flavours. Brewed with the finest English Maris Otter taste and a rowner fruit and bitter sweet finish." "Stout brewed with a variety of flavoursomen. Unfined." "Mixed malt and fruit start thise in the boil."
Perfect! I can now get started on brewing my own perfect beer. You can run the code many times to generate more names and tasting notes. My favorite design that I have seen so far is:
Hopky Wolf IPA
"This Double IPA has a big malt backbone and flavours of grapefruit, orange and lemon with an underlying floral quality and tent complex. Well balanced aroma reflects its taste. It's hopped with a blend of Fuggle and Golding hops."
Now I just need MATLAB to automate the brewing process...
|
# MCU programming - C++ O2 optimization breaks while loop
I know people say code optimization should only bring out hidden bugs in a program, but hear me out. I am staying on a screen until some input via an interrupt is met.
Here is what I see in the debugger. Notice the line inspected and the expression value intercepted.
Code in image:
//...
while (true) {
if (choice != 0) //debugger pause
break;
}
ui::Context::removeEventListener(ui::EventType::JOYSTICK_DOWN, &constant_dynamic_handler);
if (choice == 1) goto constant;
else if (choice == 2) goto dynamic;
else if (choice == 3) goto reset;
else if (choice == 4) goto exit;
//...
//debugger view:
//expression: choice
//value: 1
The constant_dynamic_handler is a lambda function declared earlier, that just changes choice to some integer other than 0. The fact that I can pause in the loop means that the loop is not exited, but the value is in fact changed. I cannot step over one step in the debugger as it will fail to read the memory on the CPU and requires a restart to debug again.
choice is declared simply in the same scope as the if-statement block, as int choice = 0;. It is altered only within an interrupt listener triggered with a hardware input.
The program works with O0 flag instead of O1 or O2.
I'm using an NXP K60 and C++11, if that is relevant. Is it my problem? Could there be anything that I'm not aware of? I am a beginner at MCU programming, and I thought this code would work on a desktop (edit: just tried it, and it doesn't work there either).
• Have you compiled it with optimizations on your desktop? – Arsenal Jul 23 '18 at 8:08
• I wouldn't expect this to work on a desktop system either. C compilers are allowed to read a variable once, and then assume it doesn't change, unless it's declared volatile. Every compiler I've used in the last 20 years performs this optimisation. – Jules Jul 23 '18 at 8:18
• Others have already pointed out the reason why this doesn't work with -O2. I'd also recommend (as a matter of style) simplifying the loop - why loop forever, and then break when a condition is met, rather than just doing while (choice == 0) {} ? – psmears Jul 23 '18 at 9:52
• Post code, not images of code. – Wilson Jul 23 '18 at 10:30
• How is choice declared? – Wilson Jul 23 '18 at 10:30
A data race on a non-atomic variable [1] is Undefined Behaviour in C++11 [2], i.e. potentially-concurrent read+write or write+write without any synchronization to provide a happens-before relationship, e.g. a mutex or release/acquire synchronization.
The compiler is allowed to assume that no other thread has modified choice between two reads of it (because that would be UB), so it can CSE and hoist the check out of the loop.
This is in fact what gcc does (and most other compilers too):
while(!choice){}
optimizes into asm that looks like this:
if(!choice) // conditional branch outside the loop to skip it
while(1){} // infinite loop, like ARM .L2: b .L2
This happens in the target-independent part of gcc, so it applies to all architectures.
You want the compiler to be able to do this kind of optimization, because real code contains stuff like for (int i=0 ; i < some_global ; i++ ) { ... }. You want the compiler to be able to load the global outside the loop, not keep re-loading it every loop iteration, or for every access later in a function. Data has to be in registers for the CPU to work with it, not memory.
The compiler could even assume the code is never reached with choice == 0, because an infinite loop with no side effects is Undefined Behaviour. (Reads / writes of non-volatile variables don't count as side effects). Stuff like printf is a side-effect, but calling a non-inline function would also stop the compiler from optimizing away the re-reads of choice, unless it was static int choice. (Then the compiler would know that printf couldn't modify it, unless something in this compilation unit passed &choice to a non-inline function. i.e. escape analysis might allow the compiler to prove that static int choice couldn't be modified by a call to an "unknown" non-inline function.)
In practice real compilers don't optimize away simple infinite loops, they assume (as a quality-of-implementation issue or something) that you meant to write while(42){}. But an example in https://en.cppreference.com/w/cpp/language/ub shows that clang will optimize away an infinite loop if there was code with no side effects in it which it optimized away.
## Officially supported 100% portable / legal C++11 ways to do this:
You don't really have multiple threads, you have an interrupt handler. In C++11 terms, that's exactly like a signal handler: it can run asynchronously with your main program, but on the same core.
C and C++ have had a solution for that for a long time: volatile sig_atomic_t is guaranteed to be ok to write in a signal handler and read in your main program
An integer type which can be accessed as an atomic entity even in the presence of asynchronous interrupts made by signals.
volatile sig_atomic_t shared_choice;   // shared with the interrupt handler
auto handler = a lambda that sets shared_choice;
void reader() {
... register lambda as interrupt handler
sig_atomic_t choice; // non-volatile local to read it into
while((choice=shared_choice) == 0){
// if your CPU has any kind of power-saving instruction like x86 pause, do it here.
// or a sleep-until-next-interrupt like x86 hlt
}
... unregister it.
switch(choice) {
case 1: goto constant;
...
case 0: // you could build the loop around this switch instead of a separate spinloop
// but it doesn't matter much
}
}
Other volatile types are not guaranteed by the standard to be atomic (although in practice they are up to at least pointer width on normal architectures like x86 and ARM, because locals will be naturally aligned. uint8_t is a single byte, and modern ISAs can atomically store a byte without a read/modify/write of the surrounding word, despite any misinformation you may have heard about word-oriented CPUs).
What you'd really like is a way to make a specific access volatile, instead of needing a separate variable. You might be able to do that with *(volatile sig_atomic_t*)&choice, like the Linux kernel's ACCESS_ONCE macro, but Linux compiles with strict-aliasing disabled to make that kind of thing safe. I think in practice that would work on gcc/clang, but I think it's not strictly legal C++.
### With std::atomic<T> for lock-free T
C++11 introduced a standard mechanism to handle the case where one thread reads a variable while another thread (or signal handler) writes it.
It provides control over memory-ordering, with sequential-consistency by default, which is expensive and not needed for your case. std::memory_order_relaxed atomic loads/stores will compile to the same asm (for your K60 ARM Cortex-M4 CPU) as volatile uint8_t, with the advantage of letting you use a uint8_t instead of whatever width sig_atomic_t is, while still avoiding even a hint of C++11 data race UB.
(Of course it's only portable to platforms where atomic<T> is lock-free for your T; otherwise async access from the main program and an interrupt handler can deadlock. C++ implementations aren't allowed to invent writes to surrounding objects, so if they have uint8_t at all, it should be lock-free atomic. But for types too wide to be naturally atomic, atomic<T> will use a hidden lock. With regular code unable to ever wake up and release a lock while the only CPU core is stuck in an interrupt handler, you're screwed if a signal/interrupt arrives while that lock is taken.)
#include <atomic>
#include <stdint.h>
volatile uint8_t v;
std::atomic<uint8_t> a;
void a_reader() {
while (a.load(std::memory_order_relaxed) == 0) {}
// std::atomic_signal_fence(std::memory_order_acquire); // optional
}
void v_reader() {
while (v == 0) {}
}
Both compile to the same asm, with gcc7.2 -O3 for ARM, on the Godbolt compiler explorer
a_reader():
.L2: @ do {
ldrb r3, [r2] @ zero_extendqisi2
cmp r3, #0
beq .L2 @ }while(choice eq 0)
bx lr
.L7:
.word .LANCHOR0
void v_writer() {
v = 1;
}
void a_writer() {
// a = 1; // seq_cst needs a DMB, or x86 xchg or mfence
a.store(1, std::memory_order_relaxed);
}
ARM asm for both:
ldr r3, .L15
movs r2, #1
strb r2, [r3, #1]
bx lr
So in this case for this implementation, volatile can do the same thing as std::atomic. On some platforms, volatile might imply using special instructions necessary for accessing memory-mapped I/O registers. (I'm not aware of any platforms like that, and it's not the case on ARM. But that's one feature of volatile you definitely don't want).
With atomic, you can even block compile-time reordering with respect to non-atomic variables, with no extra runtime cost if you're careful.
Don't use .load(mo_acquire), that will make asm that's safe with respect to other threads running on other cores at the same time. Instead, use relaxed loads/stores and use atomic_signal_fence (not thread_fence) after a relaxed load, or before a relaxed store, to get acquire or release ordering.
A possible use-case would be an interrupt handler that writes a small buffer and then sets an atomic flag to indicate that it's ready. Or an atomic index to specify which of a set of buffers.
Note that if the interrupt handler can run again while the main code is still reading the buffer, you have data race UB (and an actual bug on real hardware). In pure C++ where there are no timing restrictions or guarantees, you might have theoretical potential UB (which the compiler should assume never happens).
But it's only UB if it actually happens at runtime; If your embedded system has realtime guarantees then you may be able to guarantee that the reader can always finish checking the flag and reading the non-atomic data before the interrupt can fire again, even in the worst-case where some other interrupt comes in and delays things. You might need some kind of memory barrier to make sure the compiler doesn't optimize by continuing to reference the buffer, instead of whatever other object you read the buffer into. The compiler doesn't understand that UB-avoidance requires reading the buffer once right away, unless you tell it that somehow. (Something like GNU C asm("":::"memory") should do the trick, or even asm(""::"m"(shared_buffer[0]):"memory")).
Of course, read/modify/write operations like a++ will compile differently from v++, to a thread-safe atomic RMW, using an LL/SC retry loop, or an x86 lock add [mem], 1. The volatile version will compile to a load, then a separate store. You can express this with atomics like:
uint8_t non_atomic_inc() {
uint8_t tmp = a.load(std::memory_order_relaxed); // one atomic load
uint8_t old_val = tmp;
tmp++;
a.store(tmp, std::memory_order_relaxed);
return old_val;
}
If you actually want to increment choice in memory ever, you might consider volatile to avoid syntax pain if that's what you want instead of actual atomic increments. But remember that every access to a volatile or atomic is an extra load or store, so you should really just choose when to read it into a non-atomic / non-volatile local.
Compilers don't currently optimize atomics, but the standard allows it in cases that are safe unless you use volatile atomic<uint8_t> choice.
Again, what we'd really like is atomic access while the interrupt handler is registered, then normal access afterwards.
## C++20 provides this with std::atomic_ref<>
But neither gcc nor clang actually support this in their standard library yet (libstdc++ or libc++). no member named 'atomic_ref' in namespace 'std', with gcc and clang -std=gnu++2a. There shouldn't be a problem actually implementing it, though; GNU C builtins like __atomic_load work on regular objects, so atomicity is on a per-access basis rather than a per-object basis.
void reader(){
uint8_t choice;
{ // limited scope for the atomic reference
std::atomic_ref<uint8_t> atomic_choice(choice);
auto choice_setter = [&atomic_choice] (int x) { atomic_choice = x; };
ui::Context::addEventListener(ui::EventType::JOYSTICK_DOWN, &choice_setter); // register the handler that writes through the atomic_ref
while(!atomic_choice) {}
ui::Context::removeEventListener(ui::EventType::JOYSTICK_DOWN, &choice_setter);
}
switch(choice) { // then it's a normal non-atomic / non-volatile variable
}
}
You probably end up with one extra load of the variable vs. while(!(choice = shared_choice)) ;, but if you're calling a function between the spinloop and when you use it, it's probably easier not to force the compiler to record the last read result in another local (which it may have to spill). Or I guess after the deregister you could do a final choice = shared_choice; to make it possible for the compiler to keep choice in a register only, and re-read the atomic or volatile.
Footnote 1: volatile
Even data-races on volatile are technically UB, but in that case the behaviour you get in practice on real implementations is useful, and normally identical to atomic with memory_order_relaxed, if you avoid atomic read-modify-write operations.
Compiler-generated code that loads or stores uint8_t is atomic on your ARM CPU. Read/modify/write like choice++ would not be an atomic RMW, just an atomic load, then a later atomic store which could step on other atomic stores.
Footnote 2: C++03:
Before C++11 the ISO C++ standard didn't say anything about threads, but older compilers worked the same way; C++11 basically just made it official that the way compilers already work is correct, applying the as-if rule to preserve the behaviour of a single thread only unless you use special language features.
• Re: "volatile might imply using special instructions necessary for accessing memory-mapped I/O registers." - It does on the Xtensa ISA of, e.g., those ESP-8266 chips: The docs say the compiler should insert the MEMW ("memory wait") instruction before reads and after writes to volatile variables to make sure the data has propagated through any/all pipelines or caches. IIRC, there was also a known silicon bug where multiple writes to the same memory location in quick succession (w/o MEMW) could cause earlier writes to be skipped and only propagate the later writes to off-core hardware/memory. – JimmyB Apr 10 at 11:51
The code optimizer has analyzed the code and from what it can see the value of choice will never change. And since it will never change, there's no point in checking it in the first place.
The fix is to declare the variable volatile so that the compiler is forced to emit code that checks its value regardless of the optimization level used.
• I once ran across an embedded compiler that ignored volatile when you turned the optimiser on.... Oh how we laughed; it took ages to find, and we wound up grovelling through the assembly output. All variables that are modified outside the normal control flow should be declared volatile, and yes, "volatile const" is a thing! – Dan Mills Jul 23 '18 at 9:58
• In modern C++, std::atomic<uint8_t> choice would be good for communication between an interrupt handler and other code. Use choice.store(value, std::memory_order_relaxed), and in this loop uint8_t tmp; while(0 == (tmp=choice.load(std::memory_order_relaxed))) {} would be good. (And probably compile to the same asm as volatile) – Peter Cordes Jul 23 '18 at 14:06
• @PeterCordes: using std::atomic<uint8_t> will very likely produce different assembly compared to volatile (unless you're using that weird MSVC extension, which IIRC only works for x86, x64 and possibly ARM). Atomics need to update the value atomically, i.e. no observer should be able to see any intermediate state. OTOH volatile only says "this value might have changed since you last read it", which is much less restrictive. Also, on some platforms there are special instructions for some special volatile values, e.g. memory mapped registers. – hoffmale Jul 23 '18 at 18:28
• @DanielCheung: A data race on a non-atomic variable is Undefined Behaviour in C++11 (concurrent read+write), which is why the compiler is allowed to convert while(!choice){} into if(!choice) infloop();, i.e. to hoist the load out of the loop. Lots of code repeatedly references the same global variable within a function, and forcing C++11 compilers to de-optimize it would suck a lot. – Peter Cordes Jul 23 '18 at 20:27
• @PeterCordes An infinite loop with no side effects is Undefined Behaviour. en.cppreference.com/w/cpp/language/ub As such the compiler is allowed to optimise further if it wants, and just simply remove it entirely. – UKMonkey Jul 24 '18 at 9:18
|
# A nonhomogeneous second-order linear equation and a complementary
A nonhomogeneous second-order linear equation and a complementary function ${y}_{c}$ are given. Find a particular solution of the equation.
Gia Edwards
Calculation:
It is known that,
${y}_{1}\left(x\right)={x}^{2}$ and ${y}_{2}\left(x\right)={x}^{3}$
Now, computing the Wronskian,
$W\left(x\right)=\begin{vmatrix}{x}^{2}&{x}^{3}\\ 2x&3{x}^{2}\end{vmatrix}=3{x}^{4}-2{x}^{4}={x}^{4}$
Dividing the differential equation by the leading term's coefficient,
${x}^{2}y{}^{″}-4x{y}^{\prime }+6y={x}^{3}$
$\frac{{x}^{2}y{}^{″}-4x{y}^{\prime }+6y}{{x}^{2}}=\frac{{x}^{3}}{{x}^{2}}$
$y{}^{″}-\frac{4{y}^{\prime }}{x}+\frac{6y}{{x}^{2}}=x$
Then,
$f\left(x\right)=x$
${v}_{1}\left(x\right)=-\int \frac{f\left(x\right){y}_{2}\left(x\right)}{W\left(x\right)}dx$ and ${v}_{2}\left(x\right)=\int \frac{f\left(x\right){y}_{1}\left(x\right)}{W\left(x\right)}dx$
The particular solution is given by,
${y}_{p}\left(x\right)={v}_{1}\left(x\right){y}_{1}\left(x\right)+{v}_{2}\left(x\right){y}_{2}\left(x\right)$
${v}_{1}\left(x\right)=-\int \frac{x\left({x}^{3}\right)}{{x}^{4}}dx=-\int 1dx=-x$
${v}_{2}\left(x\right)=\int \frac{x\left({x}^{2}\right)}{{x}^{4}}dx=\int \frac{1}{x}dx=\mathrm{ln}x$
${y}_{p}\left(x\right)=-x\left({x}^{2}\right)+\left(\mathrm{ln}x\right)\left({x}^{3}\right)$
${y}_{p}\left(x\right)={x}^{3}\left(\mathrm{ln}x-1\right)$
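A quick symbolic check confirms that this particular solution satisfies the original equation; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y_p = x**3 * (sp.log(x) - 1)  # particular solution found above

# Verify x^2*y'' - 4x*y' + 6y = x^3
lhs = x**2*sp.diff(y_p, x, 2) - 4*x*sp.diff(y_p, x) + 6*y_p
print(sp.simplify(lhs))  # x**3
```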
|
# Excluded Values for Rational Expressions
What if you had a rational expression like $\frac{x + 2}{x^2 + 3x + 2}$ ? How could you simplify it? After completing this Concept, you'll be able to reduce rational expressions like this one to their simplest terms and find their excluded values.
### Watch This
Watch this video for more examples of how to simplify rational expressions.
### Guidance
A simplified rational expression is one where the numerator and denominator have no common factors. In order to simplify an expression to lowest terms, we factor the numerator and denominator as much as we can and cancel common factors from the numerator and the denominator.
Simplify Rational Expressions
#### Example A
Reduce each rational expression to simplest terms.
a) $\frac{4x-2}{2x^2+x-1}$
b) $\frac{x^2-2x+1}{8x-8}$
c) $\frac{x^2-4}{x^2-5x+6}$
Solution
a) $\text{Factor the numerator and denominator completely:} \qquad \frac{2(2x-1)}{(2x-1)(x+1)}\!\\\\\text{Cancel the common factor} \ (2x - 1): \qquad \qquad \qquad \qquad \qquad \frac{2}{x+1}$
b) $\text{Factor the numerator and denominator completely:} \qquad \frac{(x-1)(x-1)}{8(x-1)}\!\\\\\text{Cancel the common factor}\ (x - 1): \qquad \qquad \qquad \qquad \qquad \ \ \frac{x-1}{8}$
c) $\text{Factor the numerator and denominator completely:} \qquad \frac{(x-2)(x+2)}{(x-2)(x-3)}\!\\\\\text{Cancel the common factor} (x - 2): \qquad \qquad \qquad \qquad \qquad \quad \frac{x+2}{x-3}$
When reducing fractions, you are only allowed to cancel factors that are common to the numerator and denominator, NOT common terms. For example, in the expression $\frac{(x+1) \cdot (x-3)}{(x+2) \cdot (x-3)}$ , we can cross out the $(x - 3)$ factor because $\frac{(x-3)}{(x-3)}=1$ . But in the expression $\frac{x^2+1}{x^2-5}$ we can’t just cross out the $x^2$ terms.
Why can’t we do that? When we cross out terms that are part of a sum or a difference, we’re violating the order of operations (PEMDAS). Remember, the fraction bar means division. When we perform the operation $\frac{x^2+1}{x^2-5}$ , we’re really performing the division $(x^2+1) \div (x^2-5)$ — and the order of operations says that we must perform the operations inside the parentheses before we can perform the division.
Using numbers instead of variables makes it more obvious that canceling individual terms doesn’t work. You can see that $\frac{9+1}{9-5}=\frac{10}{4}=2.5$ — but if we canceled out the 9’s first, we’d get $\frac{1}{-5}$ , or -0.2, instead.
Find Excluded Values of Rational Expressions
Whenever there’s a variable expression in the denominator of a fraction, we must remember that the denominator could be zero when the independent variable takes on certain values. Those values, corresponding to the vertical asymptotes of the function, are called excluded values. To find the excluded values, we simply set the denominator equal to zero and solve the resulting equation.
#### Example B
Find the excluded values of the following expressions.
a) $\frac{x}{x+4}$
b) $\frac{2x+1}{x^2-x-6}$
Solution
a) $\text{When we set the denominator equal to zero we obtain:} \quad \ \ x+4=0 \Rightarrow x=-4\!\\\\\text{So} \ \mathbf{-4} \ \text{is the excluded value.}$
b) $\text{When we set the denominator equal to zero we obtain:} \qquad x^2-x-6=0\!\\\\\text{Solve by factoring:} \qquad \qquad \qquad \qquad \qquad \qquad \ \qquad \qquad \qquad (x-3)(x+2)=0\!\\\\{\;} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \qquad \qquad \Rightarrow x=3 \ \text{and}\ x = -2\!\\\\\text{So}\ \mathbf{3}\ \mathbf{and}\ \mathbf{-2} \ \text{are the excluded values.}$
Removable Zeros
Removable zeros are excluded values of the original expression that are no longer excluded values of the simplified version of the expression. However, we have to keep track of them, because they were zeros of the denominator in the original expression. This is illustrated in the following examples.
#### Example C
Determine the removable values of $\frac{4x-2}{2x^2+x-1}$ .
Solution:
Notice that in the expressions in Example A, we removed a division by zero when we simplified the problem. For instance, we rewrote $\frac{4x-2}{2x^2+x-1}$ as $\frac{2(2x-1)}{(2x-1)(x+1)}$ . The denominator of this expression is zero when $x = \frac{1}{2}$ or when $x = -1$ .
However, when we cancel common factors, we simplify the expression to $\frac{2}{x+1}$ . This reduced form allows the value $x = \frac{1}{2}$ , so $x = -1$ is its only excluded value.
Technically the original expression and the simplified expression are not the same. When we reduce a rational expression to its simplest form, we should specify the removed excluded value. In other words, we should write our final answer as $\frac{4x-2}{2x^2+x-1}=\frac{2}{x+1}, x \neq \frac{1}{2}$ .
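A short sympy sketch makes the bookkeeping for Example C concrete: cancel() performs the simplification, and comparing the roots of the two denominators shows which excluded value was removed.

```python
import sympy as sp

x = sp.symbols('x')
expr = (4*x - 2) / (2*x**2 + x - 1)

print(sp.cancel(expr))              # 2/(x + 1)
print(sp.solve(2*x**2 + x - 1, x))  # [-1, 1/2]: excluded values of the original
print(sp.solve(x + 1, x))           # [-1]: only excluded value after simplifying
# So x = 1/2 is the removable (removed) excluded value.
```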
#### Example D
Determine the removable values of the expressions from Example A parts b and c.
Solution:
We should write the answer from Example A, part b as $\frac{x^2-2x+1}{8x-8}=\frac{x-1}{8}, x \neq 1$ .
The answer from Example A, part c as $\frac{x^2-4}{x^2-5x+6}=\frac{x+2}{x-3}, x \neq 2$ .
Watch this video for help with the Examples above.
### Vocabulary
• Whenever there’s a variable expression in the denominator of a fraction, we must remember that the denominator could be zero when the independent variable takes on certain values. Those values, corresponding to the vertical asymptotes of the function, are called excluded values.
• Removable zeros are excluded values of the original expression that are no longer excluded values of the simplified version of the expression.
### Guided Practice
Find the excluded values of $\frac{4}{x^2-5x}$ .
Solution
$\text{When we set the denominator equal to zero we obtain:} \quad \ \ x^2-5x=0\!\\\\\text{Solve by factoring:} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ x(x-5)=0\!\\\\{\;} \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ \Rightarrow x=0 \ \text{and} \ x = 5\!\\\\\text{So} \ \mathbf{0 \ and \ 5}\ \text{are the excluded values.}$
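The same procedure (set the denominator equal to zero and solve) is easy to automate; here is a small sympy helper, using the Guided Practice expression and Example B part b as checks:

```python
import sympy as sp

x = sp.symbols('x')

def excluded_values(expr):
    """Return the roots of the denominator, i.e. where the expression is undefined."""
    _, denom = sp.fraction(sp.together(expr))
    return sp.solve(denom, x)

print(excluded_values(4 / (x**2 - 5*x)))            # [0, 5]
print(excluded_values((2*x + 1) / (x**2 - x - 6)))  # [-2, 3]
```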
### Explore More
Reduce each fraction to lowest terms.
1. $\frac{4}{2x-8}$
2. $\frac{x^2+2x}{x}$
3. $\frac{9x+3}{12x+4}$
4. $\frac{6x^2+2x}{4x}$
5. $\frac{x-2}{x^2-4x+4}$
6. $\frac{x^2-9}{5x+15}$
7. $\frac{x^2+6x+8}{x^2+4x}$
8. $\frac{2x^2+10x}{x^2+10x+25}$
9. $\frac{x^2+6x+5}{x^2-x-2}$
10. $\frac{x^2-16}{x^2+2x-8}$
11. $\frac{3x^2+3x-18}{2x^2+5x-3}$
12. $\frac{x^3+x^2-20x}{6x^2+6x-120}$
Find the excluded values for each rational expression.
1. $\frac{2}{x}$
2. $\frac{4}{x+2}$
3. $\frac{2x-1}{(x-1)^2}$
4. $\frac{3x+1}{x^2-4}$
5. $\frac{x^2}{x^2+9}$
6. $\frac{2x^2+3x-1}{x^2-3x-28}$
7. $\frac{5x^3-4}{x^2+3x}$
8. $\frac{9}{x^3+11x^2+30x}$
9. $\frac{4x-1}{x^2+3x-5}$
10. $\frac{5x+11}{3x^2-2x-4}$
11. $\frac{x^2-1}{2x^2+x+3}$
12. $\frac{12}{x^2+6x+1}$
13. In an electrical circuit with resistors placed in parallel, the reciprocal of the total resistance is equal to the sum of the reciprocals of each resistance. $\frac{1}{R_c}=\frac{1}{R_1}+\frac{1}{R_2}$ . If $R_1 = 25 \ \Omega$ and the total resistance is $R_c = 10 \ \Omega$ , what is the resistance $R_2$ ?
14. Suppose that two objects attract each other with a gravitational force of 20 Newtons. If the distance between the two objects is doubled, what is the new force of attraction between the two objects?
15. Suppose that two objects attract each other with a gravitational force of 36 Newtons. If the mass of both objects was doubled, and if the distance between the objects was doubled, then what would be the new force of attraction between the two objects?
16. A sphere with radius $R$ has a volume of $\frac{4}{3} \pi R^3$ and a surface area of $4 \pi R^2$ . Find the ratio the surface area to the volume of a sphere.
17. The side of a cube is increased by a factor of 2. Find the ratio of the old volume to the new volume.
18. The radius of a sphere is decreased by 4 units. Find the ratio of the old volume to the new volume.
|
# AI News, Memristors power quick-learning neural network
## Memristors power quick-learning neural network
The network, called a reservoir computing system, could predict words before they are said during conversation, and help predict future outcomes based on the present.
The research team that created the reservoir computing system, led by Wei Lu, professor of electrical engineering and computer science at the University of Michigan, recently published their work in Nature Communications.
Reservoir computing systems, which improve on a typical neural network's capacity and reduce the required training time, have been created in the past with larger optical components.
In this process of what's called supervised learning, the connections between nodes are weighted more heavily or lightly to minimize the amount of error in achieving the correct answer.
For example, a system can process a new photo and correctly identify a human face, because it has learned the features of human faces from other photos in its training set.
When a set of data is inputted into the reservoir, the reservoir identifies important time-related features of the data, and hands it off in a simpler format to a second network.
This second network then only needs training like simpler neural networks, changing weights of the features and outputs that the first network passed on until it achieves an acceptable level of error.
Using only 88 memristors as nodes to identify handwritten versions of numerals, compared to a conventional network that would require thousands of nodes for the task, the reservoir achieved 91 percent accuracy.
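The division of labor described here, a fixed dynamic reservoir followed by a small trained readout, can be sketched in software. Below is a toy echo-state network in Python with illustrative parameter values; it is not the memristor hardware or the paper's actual network, just the general reservoir-computing recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: these weights are never trained
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W_res = rng.uniform(-0.5, 0.5, (n_res, n_res))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # spectral radius < 1

def run_reservoir(u_seq):
    """Project a scalar input sequence into the reservoir's high-dimensional state space."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W_in @ np.atleast_1d(u) + W_res @ x)
        states.append(x.copy())
    return np.array(states)

# Only the linear readout is trained, here for one-step-ahead prediction
u = np.sin(np.linspace(0, 20 * np.pi, 1000))
X = run_reservoir(u[:-1])                          # reservoir states
W_out, *_ = np.linalg.lstsq(X, u[1:], rcond=None)  # train the readout
print("train MSE:", np.mean((X @ W_out - u[1:]) ** 2))
```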
To train a neural network for a task, it takes in a large set of questions and the answers to those questions.
“We could actually predict what you plan to say next.” In predictive analysis, Lu hopes to use the system to take in signals with noise, like static from far-off radio stations, and produce a cleaner stream of data.
Reservoir computing using dynamic memristors for temporal information processing
Abstract: Reservoir computing systems utilize dynamic reservoirs having short-term memory to project features from the temporal inputs into a high-dimensional feature space.
We show that the internal ionic dynamic processes of memristors allow the memristor-based reservoir to directly process information in the temporal domain, and demonstrate that even a small hardware system with only 88 memristors can already be used for tasks, such as handwritten digit recognition.
We show experimentally that even a small reservoir consisting of 88 memristor devices can be used to process real-world problems such as handwritten digit recognition with performance comparable to those achieved in much larger networks.
A similar-sized network is also used to solve a second-order nonlinear dynamic problem and is able to successfully predict the expected dynamic output without knowing the form of the transfer function.
Indeed, adding vertical scan can improve the classification accuracy to 92.1% as verified through simulation using the device model, although the system also becomes larger and requires 672 inputs.
The computing capacity added by the memristor-based reservoir layer was analyzed by comparing the RC system performance with networks having the same connectivity patterns, by replacing the reservoir layer with a conventional nonlinear downsampling function.
For the second-order dynamic problem that is more naturally suited for the RC system, our analysis shows that the small RC system significantly outperforms a conventional linear network, with orders-of-magnitude improvements in prediction NMSE.
The demonstration of memristor-based RC systems will stimulate continued developments to further optimize the network performance toward broad applications in areas, such as speech analysis, action recognition and prediction.
Future algorithm and experimental advances that can take full advantage of the interconnected nature of the crossbar structures, by utilizing the intrinsic sneak paths and possible loops in the system may further enhance the computing capacity of the system.
## Memristors to Power Quick-Learning Neural Networks
Imagine a class full of artificially intelligent machines (let's say robot doctors) that have attended a lecture on a new surgical procedure.
As a key concept in training machines to think like humans, that is, without prior programming, neural networks are a prime target for research and improvement.
Now, the team is using a memristor chip, which requires minimal space and can be integrated straightforwardly and quickly into pre-existing silicon-based electronics.
Technically, this contrasts with usual computer systems, in which processors execute logic separately from memory modules. Wei Lu and his team employed a special memristor that memorizes recent events, which helps the system predict events in the near future.
For instance, a system can bring up the exact photo when asked to identify a human face; this is because it has learned the distinct features of human faces from the photos provided during the training session.
The interesting part is that reservoir-computing systems that use memristors can skip those expensive training processes and still give the network the capability to remember details with over 98 percent accuracy.
After a set of data is inputted, the reservoir identifies vital time-related features of the data, then hands it off in a new simpler format to the next network in-line.
Now, it is this second network that requires a bit of training, to alter the weights of the features and outputs passed on by the first network until it attains an acceptable level of error.
## Reservoir Computing: Harnessing a Universal Dynamical System
Gauthier
There is great current interest in developing artificial intelligence algorithms for processing massive data sets, often for classification tasks such as recognizing a face in a photograph.
Using a dynamical system to predict the dynamics of a desired system is one approach to this problem that is well-suited for a reservoir computer (RC): a recurrent artificial neural network for processing time-dependent information (see Figure 1).
While researchers have studied RCs for well over 20 years [1] and applied them successfully to a variety of tasks [2], there are still many open questions that the dynamical systems community may find interesting and be able to address.
Mathematically, an RC is described by the set of autonomous, time-delay differential equations given by $$\frac{dx_i}{dt} = -\gamma_i x_i + \gamma_i f_i \Big[\sum_{j=1}^{J} W^{in}_{i,j}u_j(t) + \sum_{n=1}^{N} W^{res}_{i,n}x_n(t - \tau_{i,n}) + b_i \Big], \quad y_k(t) = \sum_{m=1}^{N} W^{out}_{k,m} x_m(t), \quad i = 1, \dots, N, \quad k = 1, \dots, K, \tag{1}$$ with $$J$$ inputs $$u_j$$, $$N$$ reservoir nodes $$x_i$$, and $$K$$ outputs with values $$y_k$$.
Here, $$\gamma_i$$ are decay constants, $$W^{in}_{i,j} (W^{res}_{i,n})$$ are fixed input (internal) weights, $$\tau_{i,n}$$ are link time delays, $$b_i$$ are biases, and $$W^{out}_{k,m}$$ are the output weights whose values are optimized for a particular task.
We can solve $$(2)$$ in a least-squares sense using pseudo-inverse matrix routines that are often included in a variety of computer languages, some of which can take advantage of the matrices’ sparsity.
We can also find a solution to $$(2)$$ using gradient descent methods, which are helpful when the matrix dimensions are large, and leverage toolkits from the deep learning community that take advantage of graphical processing units.
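As a concrete illustration of the least-squares option, assuming reservoir states are collected as rows of a matrix X and the desired outputs as rows of Y, the readout weights can be computed in a few lines. This is a sketch; the small ridge term is a common stabilizer, not something the article specifies:

```python
import numpy as np

def train_readout(X, Y, ridge=1e-6):
    """Solve for W_out in Y = X @ W_out, in the least-squares sense."""
    # Regularized normal equations: (X^T X + ridge*I) W_out = X^T Y
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ Y)

# Equivalent pseudo-inverse form when no regularization is needed:
# W_out = np.linalg.pinv(X) @ Y
```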
Furthermore, we can utilize the predicted time series as an observer in a control system [4] or for data assimilation of large spatiotemporal systems without use of an underlying model [6].
The following is an open question: how can we optimize the parameters in $$(1)$$ and $$(2)$$ to obtain the most accurate prediction in either the prediction or classification tasks, while simultaneously allowing the RC to function well on data that is similar to the training data set?
Early studies focused on the so-called echo state property of the network—where the output should eventually forget the input—and the consistency property, where outputs from identical trials should be similar over some period.
However, this scenario ignores the input dynamics and is mostly a statement of the stability of $$\mathbf{X}=0$$. Recent work is beginning to address this shortcoming for the case of a single input channel, demonstrating that there must be a single entire output solution given the input [5].
|
# SpaceX Does It Again
#### JLP1
##### Well-Known Member
TRF Supporter
Looks like Elon's SpaceX just won the NASA contract to build the next Lunar Lander. It's being reported the contract is worth $2.9B.
#### Dipstick
##### Well-Known Member
TRF Supporter
Dragon capsule with landing legs
#### georgegassaway
Dragon capsule with landing legs
Nope. A unique version of Starship. Musk said that SS would make *hundreds* of landings before humans flew in it.
Loophole....he may not have said hundreds of *safe* landings.
This is good for SpaceX, not so good for NASA's scheduling. Not that 2024 was realistic anyway, but now NASA is directly tied into Musk's "time reality ELONgation effect".
#### Dotini
##### Well-Known Member
TRF Supporter
I'll admit I'm not a big fan of Elon Musk. But I have to agree that NASA and the traditional industry need his competition very badly.
Anyone want to bet on the actual date of a successful manned lunar landing? I would bet this decade, but only barely.
#### Peartree
##### Cyborg Rocketeer
Staff member
Global Mod
Anyone want to bet on the actual date of a successful manned lunar landing? I would bet this decade, but only barely.
As always, much depends on how much $$$ Congress actually comes up with. No bucks... no Buck Rogers.
#### rharshberger
##### Well-Known Member
As always, much depends on how much $$$ Congress actually comes up with.
No bucks... no Buck Rogers.
Maybe Elon has one secretly in the works and will let NASA have a version of it, Space X probably puts people on the moon before NASA.
#### teepot
##### Well-Known Member
TRF Supporter
Based on the lack of funding NASA has gotten in the past, I don't see Congress coming up with the money any time soon. Maybe SpaceX will fund it themselves. Or they could crowdsource some of it.
#### MetricRocketeer
##### Well-Known Member
TRF Supporter
Hi everyone,
I will say this much in favor of SpaceX. It uses the metric system wall to wall. I ask my rocketeer colleagues to take note of that. SpaceX measures lengths in kilometres, metres, centimetres, or millimetres -- never miles, feet, or inches; it measures speed in kilometres per hour or metres per second -- never miles per hour or feet per second; and it measures weight in kilograms or grams -- never pounds or ounces.
Stanley
#### JohnCoker
##### Well-Known Member
TRF Supporter
I will say this much in favor of SpaceX. It uses the metric system wall to wall. I ask my rocketeer colleagues to take note of that.
Good point. ThrustCurve.org already supports metric units, but I guess I could add more alternatives, like "cubits." There's an oddroc idea in there somewhere...
#### boatgeek
##### Well-Known Member
Nope. A unique version of Starship. Musk said that SS would make *hundreds* of landings before humans flew in it.
Loophole....he may not have said hundreds of *safe* landings.
This is good for SpaceX, not so good for NASA's scheduling. Not that 2024 was realistic anyway, but now NASA is directly tied into Musk's "time reality ELONgation effect".
I think there are many valid criticisms of Elon but I don’t think that schedule is really one of them in comparison to peers. I’d be more receptive to that criticism if SLS and New Glenn were flying and Starliner and New Shepard were flying with crew.
#### Mushtang
TRF Supporter
It doesn't surprise me that SpaceX got the contract over other companies either. They've actually performed a LOT of successful flights to orbit, to the space station, a couple with crew, etc. They're miles ahead of anyone else; it's not even a race at this point. I mean, kilometers ahead.
Not to slam Blue Origin, honestly, but I don't understand how they were seriously in the competition to win this contract having never entered a rocket into orbit at this point. Or if they have I haven't heard about it. As far as I know they've only done test flights up and down again and landed. Those are significant sure, and difficult yes, but they're still just tests and no actual missions flown. Maybe their inclusion in the consideration was political and they weren't really ever an option?
#### Peartree
##### Cyborg Rocketeer
Staff member
Global Mod
It doesn't surprise me that SpaceX got the contract over other companies either. They've actually performed a LOT of successful flights to orbit, to the space station, a couple with crew, etc. They're miles ahead of anyone else; it's not even a race at this point. I mean, kilometers ahead.
Not to slam Blue Origin, honestly, but I don't understand how they were seriously in the competition to win this contract having never entered a rocket into orbit at this point. Or if they have I haven't heard about it. As far as I know they've only done test flights up and down again and landed. Those are significant sure, and difficult yes, but they're still just tests and no actual missions flown. Maybe their inclusion in the consideration was political and they weren't really ever an option?
Yeah, but they were bidding on building a LANDER. They do have some knowledge on how to take off and land in a gravity well.
#### Mushtang
TRF Supporter
Yeah, but they were bidding on building a LANDER. They do have some knowledge on how to take off and land in a gravity well.
That's an excellent point!
#### Antares JS
##### Professional Amateur
Sounds like most of the reason for the selection came down to price. NASA has only gotten a fraction of the money they asked for to develop a lander, and SpaceX was the only proposal they could afford. The BO team's proposal was too expensive, and Dynetics was even more expensive and carried technical risk they didn't want.
#### Reinhard
##### Well-Known Member
TRF Supporter
Not sure about Blue Origin, but I doubt Lockheed Martin and Northrop Grumman would have entered a bid that would have turned out unprofitable for them. Dynetics is also a classical government contractor, and much smaller. They aren't in a good position to make a low offer either.
For SpaceX though, the project might make sense even as a kind of loss leader. They can develop lots of technology paid for by NASA that can later be re-purposed for Mars. And there will be some synergies with the rest of the Starship program, whereas the other teams would build theirs completely from the ground up.
Reinhard
#### Huxter
##### Well-Known Member
TRF Supporter
So is the SLS rocket and Orion still going up as one, then SpaceX launches Starship separately? Do they dock in Earth orbit... Just how does this work?
seems like all you need is a fully fueled Starship to go to the moon's surface and back to Earth - no?
#### Antares JS
##### Professional Amateur
So is the SLS rocket and Orion still going up as one, then SpaceX launches Starship separately? Do they dock in Earth orbit... Just how does this work?
seems like all you need is a fully fueled Starship to go to the moon's surface and back to Earth - no?
The plan seems to be to move a Starship to the Gateway and use it as a shuttle to and from the lunar surface. The lunar Starship won't have the aero control surfaces.
|
# Questions from Dr. Terry Allen
Recently Math.SE has had a spate of questions by Dr. Terry Allen. These questions involve topics like music topology and manifolds formed by sums of functions. So far, the "new" Dr. Terry Allen has had every single one of his questions closed. I don't mean to be offensive but can the moderators just delete such questions outright? This guy seems nothing more than a troll/crackpot and is not welcomed here.
-
"This guy seems nothing more than a troll/crackpot and is not welcomed here." While I appreciate your opinion and your concern, orbit-fold theory have had legitimate applications to music theory. I would not be so quick to dismiss poorly written/motivated posts directly as crackpottery. – Willie Wong Apr 10 '12 at 11:52
@Willie: Regardless of whether or not he's a crackpot, I think this has all gone far past the point where he's earned a suspension, if nothing else to force him to take the time to listen to your advice in the comments. Do you disagree? – Zev Chonoles Apr 10 '12 at 12:20
@Zev I agree. If a user keeps posting poor questions (large numbers of which get downvoted/closed), that itself warrants a suspension. (BTW, the only reason I didn't suspend him myself was because I expect the user will probably soon hit the automatic posting ban due to repeated posting of closed/downvoted questions. Once that kicks in, it is a bit more effective than us mods playing cat and mouse with him.) – Willie Wong Apr 10 '12 at 12:29
@WillieWong The automatic ban is not enabled on most sites, I don't think it is enabled here. – Mad Scientist Apr 10 '12 at 12:35
@Fabian: oh crud. There goes that plan. :-( – Willie Wong Apr 10 '12 at 12:39
I don't see why posting many downvoted questions poses a problem at all. People downvote for all sorts of reasons, and so what if the majority of people don't like your questions? Plenty of people probably would have "downvoted" questions from new mathematical areas when they first arose... set theory and calculus among them. Please try and live (online) with those who you find annoying instead of cavalierly shunning them. Questions that get closed, on the other hand, I think make for another story. – Doug Spoonwood Apr 10 '12 at 14:02
For background see this paper: Terry Allen, Camille Goudeseune, Topological Considerations for Tuning and Fingering Stringed Instruments. – Bill Dubuque Apr 10 '12 at 14:10
Note that the coauthor Camille Goudeseune has an (applied?) math background: University of Illinois at Urbana-Champaign 2001, Dissertation: Composing with Parameters for Synthetic Instruments, Mathematics Subject Classification: 68—Computer science, Advisor 1: Herbert Edelsbrunner. So anyone interested in pursuing such might wish to contact the coauthor. – Bill Dubuque Apr 10 '12 at 18:33
Only 6 days suspension?! I say suspend for 6 months. – user2468 Apr 11 '12 at 3:50
Just a reminder to all users that despite the fact that each posting reflects only the point-of-view of the user posting and editing them (and not necessarily that of the entire site), here on meta we do prefer to make "policy discussions in general" rather than "specific accusations and name-calling" when it comes to undesirable behaviour: while the motivation of the meta question may be to prod moderators into action and/or formulate a site policy, having individual users named in these circumstances may cause the discussion to focus too much on the individual. – Willie Wong Apr 11 '12 at 8:09
|
## Randomized Algorithms for Submodular Function Maximization with a $k$-System Constraint
### Shuang Cui · Kai Han · Tianshuai Zhu · Jing Tang · Benwei Wu · He Huang
##### Virtual
Keywords: [ Optimization ]
Tue 20 Jul 9 p.m. PDT — 11 p.m. PDT
Spotlight presentation: Optimization (Stochastic)
Tue 20 Jul 6 p.m. PDT — 7 p.m. PDT
Abstract: Submodular optimization has numerous applications such as crowdsourcing and viral marketing. In this paper, we study the problem of non-negative submodular function maximization subject to a $k$-system constraint, which generalizes many other important constraints in submodular optimization such as the cardinality constraint, matroid constraint, and $k$-extendible system constraint. The existing approaches for this problem are all based on deterministic algorithmic frameworks, and the best approximation ratio achieved by these algorithms (for a general submodular function) is $k+2\sqrt{k+2}+3$. We propose a randomized algorithm with an improved approximation ratio of $(1+\sqrt{k})^2$, while achieving nearly-linear time complexity, significantly lower than that of the state-of-the-art algorithm. We also show that our algorithm can be further generalized to address a stochastic case where the elements can be adaptively selected, and prove an approximation ratio of $(1+\sqrt{k+1})^2$ for the adaptive optimization case. The empirical performance of our algorithms is extensively evaluated in several applications related to data mining and social computing, and the experimental results demonstrate the superiority of our algorithms in terms of both utility and efficiency.
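For context, the sketch below implements the classic deterministic greedy baseline for monotone submodular maximization over an independence system, which the paper's randomized algorithm improves upon; it is not the authors' method, and the function and oracle names are illustrative.

```python
def greedy_k_system(elements, f, is_feasible):
    """Classic greedy for monotone submodular maximization under an
    independence-system constraint; gives a 1/(k+1) approximation on a
    k-system (Fisher, Nemhauser, Wolsey, 1978). f maps a set to a value,
    is_feasible tests independence."""
    S = set()
    candidates = set(elements)
    while candidates:
        base = f(S)
        best, best_gain = None, 0.0
        for e in candidates:
            if not is_feasible(S | {e}):
                continue
            gain = f(S | {e}) - base
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:
            break                    # no feasible element improves f
        S.add(best)
        candidates.discard(best)
    return S

# Toy coverage example under a cardinality constraint (a 1-system):
sets = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c"}}
cover = lambda S: len(set().union(*(sets[e] for e in S))) if S else 0
print(greedy_k_system(sets, cover, lambda S: len(S) <= 2))   # e.g. {1, 2}
```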
|
Physics, 2000, DOI: 10.1103/PhysRevLett.88.097003 Abstract: We argue that the Scanning Tunneling Microscope (STM) images of resonant states generated by doping Zn or Ni impurities into Cu-O planes of BSCCO are the result of quantum interference of the impurity signal coming from several distinct paths. The impurity image seen on the surface is greatly affected by interlayer tunneling matrix elements. We find that the optimal tunneling path between the STM tip and the metal (Cu, Zn, or Ni) $d_{x^2 - y^2}$ orbitals in the Cu-O plane involves intermediate excited states. This tunneling path leads to the four-fold nonlocal filter of the impurity state in the Cu-O plane that explains the experimental impurity spectra. Applications of the tunneling filter to the Cu vacancy defects and "direct" tunneling into Cu-O planes are also discussed.
Physics , 2000, Abstract: We have performed temperature dependent tunneling experiments through a single impurity in an asymmetric vertical double barrier tunneling structure. In particular in the charging direction we observe at zero magnetic field a clear shift in the onset voltage of the resonant tunneling current through the impurity. With a magnetic field applied the shift starts to disappear. The experimental observations are explained in terms of resonant tunneling through a spin degenerate impurity level.
Physics , 1996, DOI: 10.1103/PhysRevB.54.10614 Abstract: The tunneling between two parallel two-dimensional electron gases has been investigated as a function of temperature $T$, carrier density $n$, and the applied perpendicular magnetic field $B$. In zero magnetic field the equilibrium resonant lineshape is Lorentzian, reflecting the Lorentzian form of the spectral functions within each layer. From the width of the tunneling resonance the lifetime of the electrons within a 2DEG has been measured as a function of $n$ and $T$, giving information about the density dependence of the electron-impurity scattering and the temperature dependence of the electron-electron scattering. In a magnetic field there is a general suppression of equilibrium tunneling for fields above $B=0.6$ T. A gap in the tunneling density of states has been measured over a wide range of magnetic fields and filling factors, and various theoretical predictions have been examined. In a strong magnetic field, when there is only one partially filled Landau level in each layer, the temperature dependence of the conductance characteristics has been modeled with a double-Gaussian spectral density.
Physics , 2001, DOI: 10.1103/PhysRevLett.88.247203 Abstract: We study how the formation of the Kondo compensation cloud influences the dynamical properties of a magnetic impurity that tunnels between two positions in a metal. The Kondo effect dynamically generates a strong tunneling impurity-conduction electron coupling, changes the temperature dependence of the tunneling rate, and may ultimately result in the destruction of the coherent motion of the particle at zero temperature. We find an interesting two-channel Kondo fixed point as well for a vanishing overlap between the electronic states that screen the magnetic impurity. We propose a number of systems where the predicted features could be observed.
Physics , 2004, DOI: 10.1103/PhysRevLett.93.046603 Abstract: We report exact model calculations of the spin-dependent tunneling in double magnetic tunnel junctions in the presence of impurities in the well. We show that the impurity can tune selectively the spin channels giving rise to a wide variety of interesting and novel transport phenomena. The tunneling magnetoresistance, the spin polarization and the local current can be dramatically enhanced or suppressed by impurities. The underlying mechanism is the impurity-induced shift of the quantum well states (QWS) which depends on the impurity potential, impurity position and the symmetry of the QWS.
Physics , 2014, DOI: 10.1103/PhysRevLett.113.146601 Abstract: Injection of spins into semiconductors is essential for the integration of the spin functionality into conventional electronics. Insulating layers are often inserted between ferromagnetic metals and semiconductors for obtaining an efficient spin injection, and it is therefore crucial to distinguish between signatures of electrical spin injection and impurity-driven effects in the tunnel barrier. Here we demonstrate an impurity-assisted tunneling magnetoresistance effect in nonmagnetic-insulator-nonmagnetic and ferromagnetic-insulator-nonmagnetic tunnel barriers. In both cases, the effect reflects on/off switching of the tunneling current through impurity channels by the external magnetic field. The reported effect, which is universal for any impurity-assisted tunneling process, finally clarifies the controversy of a widely used technique that employs the same ferromagnetic electrode to inject and detect spin accumulation.
Advances in Mathematical Physics , 2011, DOI: 10.1155/2011/138358 Abstract: The quantum Langevin equation has been studied for dissipative system using the approach of Ford et al. Here, we have considered the inverted harmonic oscillator potential and calculated the effect of dissipation on tunneling time, group delay, and the self-interference term. A critical value of the friction coefficient has been determined for which the self-interference term vanishes. This approach sheds new light on understanding the ion transport at nanoscale.
Physics , 2007, DOI: 10.1103/PhysRevB.76.052506 Abstract: We report on the temperature dependence of the impurity-induced resonant state in Zn-doped Bi_2Sr_2CaCu_2O$_{8+\delta}$ by scanning tunneling spectroscopy at 30 mK < T < 52 K. It is known that a Zn impurity induces a sharp resonant peak in tunnel spectrum at an energy close to the Fermi level. We observed that the resonant peak survives up to 52 K. The peak broadens with increasing temperature, which is explained by the thermal effect. This result provides information to understand the origin of the resonant peak.
Physics, 2010, DOI: 10.1103/PhysRevB.79.241402 Abstract: We measure tunneling through a single quantum level in a carbon nanotube quantum dot connected to resistive metal leads. For the electrons tunneling to/from the nanotube, the leads serve as a dissipative environment, which suppresses the tunneling rate. In the regime of sequential tunneling, the height of the single-electron conductance peaks increases as the temperature is lowered, although it scales more weakly than the conventional 1/T. In the resonant tunneling regime (temperature smaller than the level width), the peak width approaches saturation, while the peak height starts to decrease. Overall, the peak height shows a non-monotonic temperature dependence. We associate this unusual behavior with the transition from the sequential to the resonant tunneling through a single quantum level in a dissipative environment.
Physics, 2001, Abstract: The effects of a non-magnetic Zn impurity substituting an in-plane Cu are studied by solving the Bogoliubov-de Gennes equation self-consistently, which is derived from the t-t'-J Hamiltonian with all the allowed order parameters included. The Zn impurity, modeled in terms of a potential scatterer in the unitary limit, induces local staggered magnetic moments around itself, and the calculated NMR shifts from the induced moments are in agreement with the experimental Cu NMR spectra. We also note that the experimentally observed negative slope of the tunneling conductance can result from the next-nearest hopping $t'$.
|
# Problem #1696
1696 Let $S$ be the square one of whose diagonals has endpoints $(0.1,0.7)$ and $(-0.1,-0.7)$. A point $v=(x,y)$ is chosen uniformly at random over all pairs of real numbers $x$ and $y$ such that $0 \le x \le 2012$ and $0\le y\le 2012$. Let $T(v)$ be a translated copy of $S$ centered at $v$. What is the probability that the square region determined by $T(v)$ contains exactly two points with integer coordinates in its interior? $\textbf{(A)}\ 0.125\qquad\textbf{(B)}\ 0.14\qquad\textbf{(C)}\ 0.16\qquad\textbf{(D)}\ 0.25 \qquad\textbf{(E)}\ 0.32$ This problem is copyrighted by the American Mathematics Competitions.
|
# zbMATH — the first resource for mathematics
A global existence result for the quasistatic frictional contact problem with normal compliance. (English) Zbl 0761.73104
Unilateral problems in structural analysis IV, Proc. 4th Meet., Capri/Italy 1989, ISNM 101, 85-111 (1991).
Summary: [For the entire collection see Zbl 0745.00040.]
We consider the quasistatic problem of the contact of an elastic body with a rigid foundation in the presence of friction. The contact condition is taken as a power-law normal compliance. We prove, for forces and initial data that are not too large, the existence of a solution $${\mathbf u}$$ such that $${\mathbf u}\in C([0,T];{\mathbf H}^1(\Omega))$$ and $$d{\mathbf u}/dt\in L^2([0,T];{\mathbf H}^2(\Omega))$$. The main tools are from the theory of differential inclusions.
##### MSC:
74A55 Theories of friction (tribology)
74M15 Contact in solid mechanics
74S30 Other numerical methods in solid mechanics (MSC2010)
74P10 Optimization of other properties in solid mechanics
49J40 Variational inequalities
|
### Vapor Pressure and Volatile Solutes: Ideal SolutionProblems #1 - 15
Problem #1: At 333 K, substance A has a vapor pressure of 1.0 atm and substance B has a vapor pressure of 0.20 atm. A solution of A and B is prepared and allowed to equilibrate with its vapor. The vapor is found to have equal moles of A and B. What was the mole fraction of A in the original solution?
Solution:
1) We know these statements are true:
$P_A = P^o_A \cdot \chi_A$
and
$P_B = P^o_B \cdot \chi_B$
2) Equal moles of A and B in the vapor means $P_A = P_B$. Therefore:
$P^o_A \cdot \chi_A = P^o_B \cdot \chi_B$
3) We set $\chi_A = x$ and $\chi_B = 1 - x$. Substituting, we obtain:
(1.0) (x) = (0.20) (1 − x)
x = 0.17
4) If the question had asked for the mole fraction of B, it would be 1 − 0.17 = 0.83.
Problem #2: 30.0 mL of pentane (C5H12, d = 0.626 g/mL, v.p. = 511 torr) and 45.0 mL of hexane (C6H14, d = 0.655 g/mL, v.p. = 150. torr) are mixed at 25.0 ° C to form an ideal solution.
(a) Calculate the vapor pressure of this solution.
(b) Calculate the composition (in mole fractions) of the vapor in contact with this solution.
Solution:
1) Calculate (then add) moles of pentane and hexane:
pentane:
(0.626 g/mL) (30.0 mL) = 18.78 g
18.78 g / 72.15 g/mol = 0.26029 mol
hexane:
(0.655 g/mL) (45.0 mL) = 29.475 g
29.475 g / 86.178 g/mol = 0.34203 mol
total moles = 0.26029 mol + 0.34203 mol = 0.60232 mol
2) Calculate mole fractions:
pentane ⇒ 0.26029 mol / 0.60232 mol = 0.432
hexane ⇒ 0.34203 mol / 0.60232 mol = 0.568
3) Calculate total pressure (the answer to part a):
$P_{solution} = P^o_{pent} \cdot \chi_{pent} + P^o_{hex} \cdot \chi_{hex}$
x = (511 torr) (0.432) + (150. torr) (0.568)
x = 220.75 torr + 85.20 torr
x = 306 torr (to three sig figs)
4) Calculate composition of the vapor (the answer to part b)
pentane ⇒ 220.75 torr / 305.95 torr = 0.722
hexane ⇒ 85.20 torr / 305.95 torr = 0.278
The substance with the higher vapor pressure (because of the weaker intermolecular forces) is present in the vapor to a larger mole fraction than it is present in the solution.
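Since the two-component Raoult's-law arithmetic recurs in the problems below, here is a small Python helper, assuming an ideal solution and Dalton's law for the vapor composition; the function name and interface are our own.

```python
def raoult(p_pure, x_solution):
    """Ideal-solution vapor pressure (Raoult's law) and vapor composition
    (Dalton's law). p_pure: pure-component vapor pressures; x_solution:
    solution-phase mole fractions. Any consistent pressure unit works."""
    partials = [p * x for p, x in zip(p_pure, x_solution)]
    total = sum(partials)
    return total, [p / total for p in partials]

# Check against Problem #2 (pentane, hexane):
total, y = raoult([511.0, 150.0], [0.432, 0.568])
print(round(total), [round(v, 3) for v in y])   # 306 and [0.722, 0.278]
```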
Problem #3: What is the vapor pressure (in mmHg) of a solution of 4.40 g of Br2 in 101.0 g of CCl4 at 300 K? The vapor pressure of pure bromine at 300 K is 30.5 kPa and the vapor pressure of CCl4 is 16.5 kPa.
Solution:
1) Calculate moles, then mole fraction of each substance:
bromine ⇒ 4.40 g / 159.808 g/mol = 0.027533 mol
CCl4 ⇒ 101.0 g / 153.823 g/mol = 0.6566 mol
χBr2 ⇒ 0.027533 mol / 0.684133 mol = 0.040245
χCCl4 ⇒ 0.6566 mol / 0.684133 mol = 0.959755
2) Calculate total pressure:
$P_{solution} = P^o_{Br_2} \cdot \chi_{Br_2} + P^o_{CCl_4} \cdot \chi_{CCl_4}$
Psolution = (30.5 kPa) (0.040245) + (16.5 kPa) (0.959755)
Psolution = 1.2275 + 15.8360 = 17.0635 kPa
3) Convert to mmHg:
17.0635 kPa x (760.0 mmHg / 101.325 kPa) = 128 mmHg (to three sig fig)
Problem #4: A solution has a 1:3 ratio of cyclopentane to cyclohexane. The vapor pressures of the pure compounds at 25 °C are 331 mmHg for cyclopentane and 113 mmHg for cyclohexane. What is the mole fraction of cyclopentane in the vapor above the solution?
Solution:
1) Mole fractions for each substance:
cyclopentane: 1/4 = 0.25
cyclohexane: 3/4 = 0.75
Note: one part cyclopentane and three parts cyclohexane means four total parts to the solution, hence four in the denominator.
2) Total pressure above the solution is:
$P_{solution} = P^o_{cyclopentane} \cdot \chi_{cyclopentane} + P^o_{cyclohexane} \cdot \chi_{cyclohexane}$
x = (331) (0.25) + (113) (0.75)
x = 82.75 + 84.75 = 167.5 mmHg
3) Mole fraction of cyclopentane in the vapor:
82.75 mmHg / 167.5 mmHg = 0.494
To 2 sig figs, write 0.49
Problem #5: Acetone and ethyl acetate are organic liquids often used as solvents. At 30.0 °C, the vapor pressure of acetone is 285 mmHg and the vapor pressure of ethyl acetate is 118 mmHg. What is the vapor pressure at 30.0 °C of a solution prepared by dissolving 25.0 g of acetone in 22.5 g of ethyl acetate?
Solution:
1) Determine moles of each compound in solution:
acetone: 25.0 g / 58.08 g/mol = 0.43044 mol
ethyl acetate: 22.5 g / 88.10 g/mol = 0.25539 mol
2) Determine mole fraction for each compound in solution:
acetone: 0.43044 mol / 0.68583 mol = 0.62762
ethyl acetate: 1 - 0.62762 = 0.37238
3) Determine vapor pressure of vapor above solution:
Psolution = (0.62762) (285 mmHg) + (0.37238) (118 mmHg)
Psolution = (178.872) + (43.941) = 222.813 mmHg
Psolution = 223 mmHg (to three sig figs)
Special bonus question: determine the composition (expressed in mole fraction) of the vapor above this solution.
Solution:
acetone: 178.872 / 222.813 = 0.8028
ethyl acetate: 1 − 0.8028 = 0.1972
Note how the vapor is richer than the solution in the component with the higher vapor pressure. This is the basis for fractional distillation.
Problem #6: A solution containing hexane and pentane has a pressure of 252.0 torr. Hexane has a pressure at 151.0 torr and pentane has a pressure of 425.0 torr. What is the mole fraction of pentane?
Solution:
$P_{solution} = P^o_{hex} \cdot \chi_{hex} + P^o_{pent} \cdot \chi_{pent}$
252 = (151) (1 − x) + (425) (x)
x = 0.3686
Problem #7: The vapor pressure above a solution of two volatile components is 745 torr and the mole fraction of component B (χB) in the vapor is 0.59. Calculate the mole fraction of B in the liquid if the vapor pressure of pure B is 637 torr.
Solution:
1) Find partial pressure of B in vapor:
$P_B = P_{total} \cdot \chi_B({\rm vapor})$
x = (745 torr) (0.59)
x = 439.55 torr
2) Determine mole fraction of B in solution that gives above partial pressure:
$P_B = P^o_B \cdot \chi_B$
439.55 torr = (637 torr) (y)
y = 0.69
We could calculate the vapor pressure of pure A, if we so desired. The solution is left to the reader. The answer is about 985 torr.
Notice also that the vapor is richer than the solution in A, the more volatile component. In the solution, the mole fraction of A is 0.31 and in the vapor it is 0.41.
Problem #8: Bromobenzene (MW: 157.02) steam distills at 95 °C. Its vapor pressure at 95 °C is 120. mmHg.
(a) What is the vapor pressure of water at 95 °C?
(b) How many grams of bromobenzene would steam distill with 20.0 grams of water?
Solution:
1) For (a), Teh Google™ yields:
633.9 mmHg
"When a mixture of two practically immiscible liquids is heated while being agitated to expose the surfaces of both the liquids to the vapor phase, each constituent independently exerts its own vapor pressure as a function of temperature as if the other constituent were not present."
3) The total pressure of the vapor phase is:
633.9 + 120 = 753.9 mmHg
4) The mole fraction of the water vapor:
633.9 / 753.9 = 0.8408
5) This means:
20.0 g/18.015 g/mol = 1.11018 moles of water represents 0.8408 of the vapor
also, the mole fraction of the bromobenzene is:
1 − 0.8408 = 0.1592
6) Set up a ratio and proportion:
$\dfrac{1.11018}{0.8408} = \dfrac{x}{0.1592}$
x = 0.210205 mol
(157.02 g/mol) (0.210205 mol) = 33.0 g
Problem #9: Given that the vapor above an aqueous solution contains 18.3 mg water per liter at 25.0 °C, what is the concentration of the solute within the solution in mole percent? Please assume ideal behavior.
Solution:
1) We need to know the pressure exerted by the vapor:
0.0183 g / 18.015 g/mol = 1.01582 x 10-3 mol
PV = nRT
(x) (1.00 L) = (1.01582 x 10-3 mol) (0.08206) (298 K)
x = 2.4841 x 10-2 atm
2) Let's convert the pressure to mmHg:
(2.4841 x 10-2 atm) (760. mmHg/atm) = 18.9 mmHg
3) Look up the vapor pressure for water at 25 °C:
23.8 mmHg
4) Now, we use Raoult's Law:
18.9 = (23.8) (χsolvent)
χsolvent = 0.794
χsolute = 1 − 0.794 = 0.206, which is 20.6 mole percent.
Problem #10:
1,1-Dichloroethane (CH3CHCl2) has a vapor pressure of 228 torr at 25.0 °C; at the same temperature, 1,1-dichlorotetrafluoroethane (CF3CCl2F) has a vapor pressure of 79 torr. What mass of 1,1-dichloroethane must be mixed with 240.0 g of 1,1-dichlorotetrafluoroethane to give a solution with vapor pressure 157 torr at 25 °C? Assume ideal behavior.
Solution:
1) State Raoult's Law:
Let Cl = 1,1-dichloroethane and F = 1,1-dichlorotetrafluoroethane
$P_{solution} = P^o_{Cl} \cdot \chi_{Cl} + P^o_F \cdot \chi_F$
2) Substitute values and solve:
157 torr = (228 torr) (x) + (79 torr) (1 − x)
where 'x' is the mole fraction of 1,1-dichloroethane and '1 − x' is the mole fraction of 1,1-dichlorotetrafluoroethane.
157 = 228x + 79 − 79x
149x = 78
x = 0.52349 (this is the mole fraction of 1,1-dichloroethane)
3) Set up a mole fraction equation:
0.52349 = (x / 98.9596) divided by [(x / 98.9596) + (240.0 / 170.92)]
where 'x' is the mass of 1,1-dichloroethane (which is our answer).
However, I will solve the other mole fraction expression. (I did solve the above equation on paper when I formatted this answer (Nov. 16, 2011) and I did get the answer below.)
4) Use the mole fraction of 1,1-dichlorotetrafluoroethane:
0.47651 = (240.0 / 170.92) divided by [(x / 98.9596) + (240.0 / 170.92)]
where 'x' is still the mass of 1,1-dichloroethane (which is our answer).
The reason? One less 'x' in the above equation.
5) Algebra!
1.40416 = (0.47651 / 98.9596)x + 0.6690963
0.0048152x = 0.7350637
x = 152.654864 g
Rounded to four significant figures, this is 152.7 g.
Problem #11: The vapor pressure of pure benzene (C6H6, symbolized by B) and toluene (C7H8, symbolized by T) at 25.0° C are 95.1 and 28.4 torr, respectively. A solution is prepared with a mole fraction of toluene of 0.75. Determine the mole fraction of toluene in the gas phase. Assume the solution to be ideal.
Solution:
1) Raoult's Law for a solution of two volatiles is this:
$P_{solution} = P^o_B \cdot \chi_B + P^o_T \cdot \chi_T$
Psolution = (95.1) (0.25) + (28.4) (0.75)
Psolution = 23.775 torr + 21.3 torr
Psolution = 45.075 torr
2) The mole fraction of toluene in the vapor is this:
21.3 torr / 45.075 torr = 0.472545757
Rounded to three sig figs, the answer is 0.472.
Problem #12: 1-propanol ($P^o_1$ = 20.9 torr at 25.0 °C) and 2-propanol ($P^o_2$ = 45.2 torr at 25.0 °C) form ideal solutions in all proportions. Let $\chi_1$ and $\chi_2$ represent the mole fractions of 1-propanol and 2-propanol in a liquid mixture, respectively. For a solution of these liquids with $\chi_1$ = 0.520, calculate the composition of the vapor phase at 25.0 °C.
Solution:
1) Use the mole fractions of each liquid to calculate the partial pressure of that component:
vapor pressure of 1-propanol: (20.9 torr) (0.520) = 10.868 torr
vapor pressure of 2-propanol: (45.2 torr) (0.480) = 21.696 torr
(the 0.480 comes from 1 − 0.520)
2) Use the partial pressures to determine the composition of the vapor:
total pressure of the vapor: 10.868 + 21.696 = 32.564 torr
mole fraction 1-propanol in vapor: 10.868 / 32.564 = 0.334
mole fraction 2-propanol in vapor: 1 − 0.334 = 0.666
Problem #13: Butanone (CH3CH2COCH3) has a vapor pressure of 100. torr at 25 °C. At the same temperature, propanone (CH3COCH3) has a vapor pressure of 222 torr. What mass of propanone must be mixed with 190. g of butanone to give a solution with a vapor pressure of 135 torr at 25 °C? Assume ideal behavior.
Solution:
1) Moles of each component:
butanone ---> 190. g/ 72.107 g/mol = 2.634973 mol
let z = moles of propanone
2) Mole fraction of each component:
butanone ---> 2.63 / (2.63 + z)
propanone ---> z / (2.63 + z)
3) Insert values into Raoult's Law for two volatile components:
$135 = 100. \times \dfrac{2.63}{2.63 + z} + 222 \times \dfrac{z}{2.63 + z}$
You might see it formatted like this:
135 = [(100.) (2.63 / (2.63 + z))] + [(222) (z / (2.63 + z))]
4) Solve for z:
$135 = \dfrac{263}{2.63 + z} + \dfrac{222z}{2.63 + z}$
$135 = \dfrac{263 + 222z}{2.63 + z}$
355.05 + 135z = 263 + 222z
92.05 = 87z
z = 1.058 mol of propanone
5) Determine mass of propanone required:
(1.058 mol) (58.0794 g/mol) = 61.4 g (to three sig figs)
6) We can check the answer:
Mole fraction of each component:
butanone ---> 2.63 / (2.63 + 1.058) = 0.713124
propanone ---> 1.058 / (2.63 + 1.058) = 0.286876
Insert into Raoult's Law:
Psolution = (100.) (0.713124) + (222) (0.286876)
Psolution = 135 torr
Yay!
Problem #14: At −100. °C, ethane and propane are liquids. At this temperature, the vapor pressure of pure ethane is 394 torr and that of pure propane is 22 torr. What is the vapor pressure at −100. °C over a solution containing equal molar amounts of these substances?
Solution:
If they have equal molar amounts, then each will have a partial pressure of exactly one-half of its pure vapor pressure:
197 torr for ethane, and 11 torr for propane.
So the total pressure is 208 torr.
Problem #15: At 60 °C, compound A has a vapor pressure of 96 mmHg. Benzene has a vapor pressure of 395 mmHg at 60 °C. A 50:50 mixture by mass of benzene and A has a vapor pressure of 281 mmHg. What is the molar mass of A?
Solution:
1) Dalton's law of partial pressures:
$P_{tot} = P_A + P_B$
where
Ptot = total pressure above the solution
PA = vapor pressure of component A above the solution
PB = vapor pressure of component B above the solution (the benzene, in our case)
2) Raoult's Law can be stated in an expanded form:
$P_{tot} = P^o_A \cdot \chi_A + P^o_B \cdot \chi_B$
where
$P^o_A$ = vapor pressure above pure A
$\chi_A$ = mole fraction of A in the solution
$P^o_B$ = vapor pressure above pure B
$\chi_B$ = mole fraction of B in the solution
3) More useful information (mw = molecular weight):
$\chi_A$ = mol A / (mol A + mol B), where mol A = mass A / mw$_A$
$\chi_B$ = mol B / (mol A + mol B), where mol B = mass B / mw$_B$
4) Putting all that together along with the rest of the data (do not include units):
$281 = 96 \times \dfrac{\frac{\text{mass A}}{\text{mw}_A}}{\frac{\text{mass A}}{\text{mw}_A} + \frac{\text{mass B}}{\text{mw}_B}} + 395 \times \dfrac{\frac{\text{mass B}}{\text{mw}_B}}{\frac{\text{mass A}}{\text{mw}_A} + \frac{\text{mass B}}{\text{mw}_B}}$
Here's another way to format the above:
281 = 96 x (mass A / mwA) / ((mass A / mwA) + (mass B / mwB)) + 395 x (mass B / mwB) / ((mass A / mwA) + (mass B / mwB))
5) From the problem statement, we know that mass A = mass B. Therefore, replace all mass B:
$281 = 96 \times \dfrac{\frac{\text{mass A}}{\text{mw}_A}}{\frac{\text{mass A}}{\text{mw}_A} + \frac{\text{mass A}}{\text{mw}_B}} + 395 \times \dfrac{\frac{\text{mass A}}{\text{mw}_B}}{\frac{\text{mass A}}{\text{mw}_A} + \frac{\text{mass A}}{\text{mw}_B}}$
Here's another way to format the above:
281 = 96 x (mass A / mwA) / ((mass A / mwA) + (mass A / mwB)) + 395 x (mass A / mwB) / ((mass A / mwA) + (mass A / mwB))
6) Each term has mass A in the numerator and denominator, so we can divide out all mass A:
$281 = 96 \times \dfrac{\frac{1}{\text{mw}_A}}{\frac{1}{\text{mw}_A} + \frac{1}{\text{mw}_B}} + 395 \times \dfrac{\frac{1}{\text{mw}_B}}{\frac{1}{\text{mw}_A} + \frac{1}{\text{mw}_B}}$
Here's another way to format the above:
281 = 96 x (1/mwA) / ((1/mwA) + (1/mwB)) + 395 x (1/mwB) / ((1/mwA) + (1/mwB))
7) We know mwB to be 78.1134 g/mol:
$281 = 96 \times \dfrac{\frac{1}{\text{mw}_A}}{\frac{1}{\text{mw}_A} + \frac{1}{78.1134}} + 395 \times \dfrac{\frac{1}{78.1134}}{\frac{1}{\text{mw}_A} + \frac{1}{78.1134}}$
Here's another way to format the above:
281 = 96 x (1/mwA) / ((1/mwA) + (1/78.1134)) + 395 x (1/78.1134) / ((1/mwA) + (1/78.1134))
and now we have 1 equation and 1 unknown.
8) Move the denominator to the other side:
281 x [(1/mwA) + (1/78.11)] = 96 x (1/mwA) + 395 x (1/78.11)
9) Multiply through by mwA
281 x [(1) + (mwA/78.11)] = 96 x (1) + 395 x (mwA / 78.11)
10) Simplify:
281 + 3.598 mwA = 96 + 5.057 mwA
11) Which will result in:
mwA = 185 / 1.459 = 127 g/mole
12) Here's another way. Return to the equation in step 7 and do the two multiplications on the right-hand side:
$281 = \dfrac{\frac{96}{\text{mw}_A}}{\frac{1}{\text{mw}_A} + \frac{1}{78.1134}} + \dfrac{\frac{395}{78.1134}}{\frac{1}{\text{mw}_A} + \frac{1}{78.1134}}$
13) Move the right-hand side denominator to the other side:
$\dfrac{281}{\text{mw}_A} + \dfrac{281}{78.1134} = \dfrac{96}{\text{mw}_A} + \dfrac{395}{78.1134}$
I also distributed the 281 over the two terms that moved to the left-hand side.
14) Gather like terms:
$\dfrac{185}{\text{mw}_A} = \dfrac{114}{78.1134}$
15) Cross-multiply and divide:
114 mwA = (185) (78.1134)
mwA = 127 g/mol
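A quick numeric cross-check of this algebra, as a sketch (variable names are ours; the rearrangement follows from the equal-mass condition):

```python
# Numeric cross-check of Problem #15:
# 281 = 96*xA + 395*(1 - xA) gives the solution-phase mole fraction of A;
# equal masses of A and B imply xA = mwB / (mwA + mwB).
xA = (281 - 395) / (96 - 395)        # = 0.381, mole fraction of A
mwB = 78.1134                        # benzene
mwA = mwB * (1 - xA) / xA            # rearranged equal-mass condition
print(round(xA, 3), round(mwA))      # 0.381 127
```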
|
geom_text_repel adds text directly to the plot. geom_label_repel draws a rectangle underneath the text, making it easier to read. The text labels repel away from each other and away from the data points.
geom_label_repel(
mapping = NULL,
data = NULL,
stat = "identity",
position = "identity",
parse = FALSE,
...,
label.r = 0.15,
label.size = 0.25,
min.segment.length = 0.5,
arrow = NULL,
force = 1,
force_pull = 1,
max.time = 0.5,
max.iter = 10000,
max.overlaps = 10,
nudge_x = 0,
nudge_y = 0,
xlim = c(NA, NA),
ylim = c(NA, NA),
na.rm = FALSE,
show.legend = NA,
direction = c("both", "y", "x"),
seed = NA,
inherit.aes = TRUE
)
geom_text_repel(
mapping = NULL,
data = NULL,
stat = "identity",
position = "identity",
parse = FALSE,
...,
min.segment.length = 0.5,
arrow = NULL,
force = 1,
force_pull = 1,
max.time = 0.5,
max.iter = 10000,
max.overlaps = 10,
nudge_x = 0,
nudge_y = 0,
xlim = c(NA, NA),
ylim = c(NA, NA),
na.rm = FALSE,
show.legend = NA,
direction = c("both", "y", "x"),
seed = NA,
inherit.aes = TRUE
)
## Arguments
- mapping: Set of aesthetic mappings created by aes or aes_. If specified and inherit.aes = TRUE (the default), is combined with the default mapping at the top level of the plot. You only need to supply mapping if there isn't a mapping defined for the plot.
- data: A data frame. If specified, overrides the default data frame defined at the top level of the plot.
- stat: The statistical transformation to use on the data for this layer, as a string.
- position: Position adjustment, either as a string, or the result of a call to a position adjustment function.
- parse: If TRUE, the labels will be parsed into expressions and displayed as described in ?plotmath.
- ...: Other arguments passed on to layer. There are three types of arguments you can use here: aesthetics, to set an aesthetic to a fixed value, like colour = "red" or size = 3; other arguments to the layer, for example to override the default stat associated with the layer; and other arguments passed on to the stat.
- box.padding: Amount of padding around bounding box, as unit or number. Defaults to 0.25. (Default unit is lines, but other units can be specified by passing unit(x, "units")).
- label.padding: Amount of padding around label, as unit or number. Defaults to 0.25. (Default unit is lines, but other units can be specified by passing unit(x, "units")).
- point.padding: Amount of padding around labeled point, as unit or number. Defaults to 0. (Default unit is lines, but other units can be specified by passing unit(x, "units")).
- label.r: Radius of rounded corners, as unit or number. Defaults to 0.15. (Default unit is lines, but other units can be specified by passing unit(x, "units")).
- label.size: Size of label border, in mm.
- min.segment.length: Skip drawing segments shorter than this, as unit or number. Defaults to 0.5. (Default unit is lines, but other units can be specified by passing unit(x, "units")).
- arrow: Specification for arrow heads, as created by arrow.
- force: Force of repulsion between overlapping text labels. Defaults to 1.
- force_pull: Force of attraction between a text label and its corresponding data point. Defaults to 1.
- max.time: Maximum number of seconds to try to resolve overlaps. Defaults to 0.5.
- max.iter: Maximum number of iterations to try to resolve overlaps. Defaults to 10000.
- max.overlaps: Exclude text labels that overlap too many things. Defaults to 10.
- nudge_x, nudge_y: Horizontal and vertical adjustments to nudge the starting position of each text label.
- xlim, ylim: Limits for the x and y axes. Text labels will be constrained to these limits. By default, text labels are constrained to the entire plot area.
- na.rm: If FALSE (the default), removes missing values with a warning. If TRUE silently removes missing values.
- show.legend: Logical. Should this layer be included in the legends? NA, the default, includes if any aesthetics are mapped. FALSE never includes, and TRUE always includes.
- direction: "both", "x", or "y" -- direction in which to adjust position of labels.
- seed: Random seed passed to set.seed. Defaults to NA, which means that set.seed will not be called.
- inherit.aes: If FALSE, overrides the default aesthetics, rather than combining with them. This is most useful for helper functions that define both data and aesthetics and shouldn't inherit behaviour from the default plot specification, e.g. borders.
## Details
These geoms are based on geom_text and geom_label. See the documentation for those functions for more details. Differences from those functions are noted here.
Text labels have height and width, but they are physical units, not data units. The amount of space they occupy on that plot is not constant in data units: when you resize a plot, labels stay the same size, but the size of the axes changes. The text labels are repositioned after resizing a plot.
## geom_label_repel
Currently geom_label_repel does not support the rot argument and is considerably slower than geom_text_repel. The fill aesthetic controls the background colour of the label.
## Alignment with hjust or vjust
The arguments hjust and vjust are supported, but they only control the initial positioning, so repulsive forces may disrupt alignment. Alignment with hjust will be preserved if labels only move up and down by using direction="y". For vjust, use direction="x".
## Examples
p <- ggplot(mtcars,
aes(wt, mpg, label = rownames(mtcars), colour = factor(cyl))) +
geom_point()
# Avoid overlaps by repelling text labels
p + geom_text_repel()
# Labels with background
p + geom_label_repel()
if (FALSE) {
p + geom_text_repel(family = "Times New Roman")
p + geom_text_repel(aes(alpha=wt, size=mpg))
p + geom_label_repel(aes(fill=factor(cyl)), colour="white", segment.colour="black")
# Draw all line segments
p + geom_text_repel(min.segment.length = 0)
# Omit short line segments (default behavior)
p + geom_text_repel(min.segment.length = 0.5)
# Omit all line segments
p + geom_text_repel(segment.colour = NA)
# Repel just the labels and totally ignore the data points
# Hide some of the labels, but repel from all data points
mtcars$label <- rownames(mtcars)
mtcars$label[1:15] <- ""
p + geom_text_repel(data = mtcars, aes(wt, mpg, label = label))
# Nudge the starting positions
p + geom_text_repel(nudge_x = ifelse(mtcars$cyl == 6, 1, 0), nudge_y = ifelse(mtcars$cyl == 6, 8, 0))
# Change the text size
p + geom_text_repel(aes(size = wt))
# Scale height of text, rather than sqrt(height)
p + geom_text_repel(aes(size = wt)) + scale_radius(range = c(3,6))
# You can display expressions by setting parse = TRUE. The
# details of the display are described in ?plotmath, but note that
# geom_text_repel uses strings, not expressions.
p + geom_text_repel(aes(label = paste(wt, "^(", cyl, ")", sep = "")),
parse = TRUE)
p +
geom_text_repel() +
annotate(
"text", label = "plot mpg vs. wt",
x = 2, y = 15, size = 8, colour = "red"
)
}
|
# Microscopic analysis of octupole shape transitions in neutron-rich actinides with relativistic energy density functional
Supported by the National Natural Science Foundation of China (11475140, 11575148)
## Abstract
Quadrupole and octupole deformation energy surfaces, low-energy excitation spectra, and electric transition rates in eight neutron-rich isotopic chains – Ra, Th, U, Pu, Cm, Cf, Fm, and No – are systematically analyzed using a quadrupole-octupole collective Hamiltonian model, with parameters determined by constrained reflection-asymmetric and axially-symmetric relativistic mean-field calculations based on the PC-PK1 energy density functional. The theoretical results for the low-lying negative-parity bands, odd-even staggering, average octupole deformations, and transition rates show evidence of a shape transition from nearly spherical to stable octupole-deformed, and finally octupole-soft equilibrium shapes in the neutron-rich actinides. A microscopic mechanism for the onset of stable octupole deformation is also discussed in terms of the evolution of single-nucleon orbitals with deformation.
Keywords: octupole deformation, negative-parity band, relativistic energy density functional, quadrupole-octupole collective Hamiltonian
PACS: 21.10.Re, 21.60.Ev, 21.60.Jz
## 1 Introduction
The study of octupole-deformed (reflection-asymmetric) shapes and shape transitions presents a recurrent theme in nuclear structure physics. Octupole-deformed shapes are characterized by the presence of low-lying negative-parity bands and by pronounced electric octupole transitions [2, 3, 4, 5]. In the case of static octupole deformation, for instance, the lowest positive-parity even-spin states and the negative-parity odd-spin states form an alternating-parity band, with states connected by enhanced E1 transitions. Recently, evidence for pronounced octupole deformation in 224Ra [6], 144Ba [7], and 146Ba [8] has been reported in Coulomb excitation experiments with radioactive ion beams. The renewed interest in studies of reflection-asymmetric nuclear shapes using accelerated radioactive beams points to the importance of a timely systematic theoretical analysis of quadrupole-octupole collective states of nuclei in different mass regions.
A series of theoretical models have been applied to the studies of octupole-deformed shapes and the evolution of the corresponding negative-parity collective states, including the energy density functionals or their simplest realization: self-consistent mean-field models [9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 27, 28, 30, 31, 32, 34, 33, 35, 36, 29, 20], macroscopic+microscopic (MM) models [37, 38, 39], algebraic (or interacting boson) models [40, 41], phenomenological collective models [42, 43, 44, 45, 46], and the reflection asymmetric shell model [47].
In particular, nuclear energy density functionals (EDFs) enable a complete and accurate description of ground-state properties and collective excitations over the entire chart of nuclides [48, 49, 50, 51, 52, 53]. Both non-relativistic and relativistic EDFs have successfully been applied to the description of the evolution of single-nucleon shell structures and related nuclear shapes and shape transitions. To compute excitation spectra and transition rates, however, the EDF framework has to be extended to take into account the restoration of symmetries broken in the mean-field approximation, and fluctuations in the collective coordinates. A straightforward approach is the generator coordinate method (GCM) combined with projection techniques, and recently it has been implemented for octupole-deformed shapes, based on both nonrelativistic [20] and relativistic [33, 35] EDFs. Using this method, however, it is rather difficult to perform a systematic study of low-lying quadrupole and octupole states in different mass regions, because GCM is very time-consuming for heavy systems. An alternative approach is the EDF-based quadrupole-octupole collective Hamiltonian (QOCH) [32, 34, 36]. The collective Hamiltonian can be derived from the GCM in the Gaussian overlap approximation [54], and the validity of this approximate method was recently demonstrated in a comparison with a full GCM calculation for the shape coexisting nucleus Kr [55].
Recently, we have applied the EDF-based QOCH to a systematic analysis of the spectroscopy of quadrupole and octupole states in fourteen isotopic chains: Xe, Ba, Ce, Nd, Sm, Gd, Rn, Ra, Th, U, Pu, Cm, Cf, and Fm. The microscopic QOCH model, based on the PC-PK1 energy density functional [56], is shown to accurately describe the empirical trend of low-energy quadrupole and octupole collective states, and the predicted spectroscopic properties are also consistent with recent microscopic calculations based on both relativistic and non-relativistic energy density functionals. The resulting low-energy negative-parity bands, average octupole deformations, and transition rates show evidence for octupole collectivity in both the lanthanide and actinide mass regions. The success of the EDF-based QOCH model in these mass regions enables us to search for the next possible octupole-deformed mass region by analyzing both deformation energy surfaces and low-lying spectroscopy.
Very recently, a systematic search for axial octupole deformation in the actinides and superheavy nuclei, covering proton numbers and neutron numbers from the two-proton drip line up to the neutron-rich side, was performed using the mean-field framework of relativistic density functional theory, and octupole-deformed minima were predicted in neutron-rich nuclei in this region. Therefore, in this study we employ the EDF-based QOCH to perform a systematic calculation of even-even neutron-rich heavy nuclei ($88 \le Z \le 102$ and $190 \le N \le 212$). Low-energy spectra and transition rates for both positive- and negative-parity states of 96 nuclei are calculated using the QOCH with parameters determined by self-consistent reflection-asymmetric relativistic mean-field calculations based on the PC-PK1 energy density functional [56]. The relativistic functional PC-PK1 was adjusted to the experimental masses of a set of 60 spherical nuclei along isotopic or isotonic chains, and to the charge radii of 17 spherical nuclei. PC-PK1 has been successfully employed in studies of nuclear masses [57, 58] and of the spectroscopy of low-lying quadrupole states [59].
The article is organized as follows. Section 2 presents a brief review of the EDF-based QOCH model. The systematics of collective deformation energy surfaces, excitation energies of low-lying positive- and negative-parity states, odd-even staggering, electric dipole, quadrupole, and octupole transition rates, calculated with the QOCH model, are discussed in Section 3. Section 4 contains a summary and concluding remarks.
## 2 Theoretical Framework
Detailed formalism of the quadrupole-octupole collective Hamiltonian has been presented in Refs. [34, 36]. In this section, for completeness, a brief introduction is presented. The QOCH, which can simultaneously treat the axial quadrupole-octupole vibrational and rotational excitations, is expressed in terms of two deformation parameters $\beta_2$ and $\beta_3$, and three Euler angles $\Omega$ that define the orientation of the intrinsic principal axes in the laboratory frame,
$$\hat{H}_{\rm coll} = -\frac{\hbar^2}{2\sqrt{wI}}\left[\frac{\partial}{\partial\beta_2}\sqrt{\frac{I}{w}}\,B_{33}\frac{\partial}{\partial\beta_2} - \frac{\partial}{\partial\beta_2}\sqrt{\frac{I}{w}}\,B_{23}\frac{\partial}{\partial\beta_3} - \frac{\partial}{\partial\beta_3}\sqrt{\frac{I}{w}}\,B_{23}\frac{\partial}{\partial\beta_2} + \frac{\partial}{\partial\beta_3}\sqrt{\frac{I}{w}}\,B_{22}\frac{\partial}{\partial\beta_3}\right] + \frac{\hat{J}^2}{2I} + V_{\rm coll}(\beta_2,\beta_3). \tag{1}$$
$\hat{J}$ denotes the component of angular momentum perpendicular to the symmetry axis in the body-fixed frame of a nucleus. The mass parameters $B_{22}$, $B_{23}$, and $B_{33}$, the moment of inertia $I$, and the collective potential $V_{\rm coll}$ depend on the quadrupole and octupole deformation variables $\beta_2$ and $\beta_3$. The additional quantity that appears in the vibrational kinetic energy, $w = B_{22}B_{33} - B_{23}^2$, determines the volume element in the collective space
$$\int d\tau_{\rm coll} = \int \sqrt{wI}\, d\beta_2\, d\beta_3\, d\Omega. \tag{2}$$
The eigenvalue problem of the collective Hamiltonian (1) is solved using an expansion of eigenfunctions in terms of a complete set of basis functions that depend on the collective coordinates $\beta_2$, $\beta_3$, and $\Omega$. The collective wave functions are thus obtained as
$$\Psi^{IM\pi}_{\alpha}(\beta_2,\beta_3,\Omega) = \psi^{I\pi}_{\alpha}(\beta_2,\beta_3)\,|IM0\rangle. \tag{3}$$
The reduced $B(E\lambda)$ transition probabilities are calculated from the relation
$$B(E\lambda, I_i \to I_f) = \langle I_i 0\,\lambda 0|I_f 0\rangle^2 \left|\int d\beta_2\, d\beta_3\, \sqrt{wI}\, \psi_i\, M_{E\lambda}(\beta_2,\beta_3)\, \psi^{*}_{f}\right|^2, \tag{4}$$
where $M_{E\lambda}(\beta_2,\beta_3)$ denotes the electric moment of order $\lambda$. In microscopic models it is calculated as $M_{E\lambda}(\beta_2,\beta_3) = \langle \Phi(\beta_2,\beta_3)|\hat{M}_{E\lambda}|\Phi(\beta_2,\beta_3)\rangle$, where $\Phi(\beta_2,\beta_3)$ is the nuclear mean-field wave function.
In the framework of the EDF-based QOCH model, the collective parameters of QOCH in Eq. (1) are all determined from the EDF microscopically. The moments of inertia are calculated according to the Inglis-Belyaev formula [60, 61]:
$$\mathcal{I} = \sum_{i,j}\frac{(u_i v_j - v_i u_j)^2}{E_i + E_j}\,|\langle i|\hat{J}|j\rangle|^2, \tag{5}$$
where $\hat{J}$ is the angular momentum along the axis perpendicular to the symmetry axis, and the summation runs over the proton and neutron quasiparticle states. The quasiparticle energies $E_i$, occupation probabilities $v_i^2$, and single-nucleon wave functions $\psi_i$ are determined by solutions of the constrained EDF.
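As an aside, once the quasiparticle energies, BCS amplitudes, and angular-momentum matrix elements are available, the Inglis-Belyaev sum (5) is straightforward to evaluate; the Python sketch below assumes these inputs as plain arrays and omits the proton/neutron bookkeeping of a production code.

```python
import numpy as np

def inglis_belyaev(E, u, v, J):
    """Schematic evaluation of Eq. (5): E quasiparticle energies, (u, v)
    BCS amplitudes, J the matrix <i|J_perp|j> as a 2D array. A real code
    sums protons and neutrons separately and enforces selection rules."""
    n = len(E)
    I = 0.0
    for i in range(n):
        for j in range(n):
            I += (u[i]*v[j] - v[i]*u[j])**2 / (E[i] + E[j]) * abs(J[i, j])**2
    return I
```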
$$B_{\lambda\lambda'}(q_2,q_3) = \frac{\hbar^2}{2}\left[\mathcal{M}_{(1)}^{-1}\mathcal{M}_{(3)}\mathcal{M}_{(1)}^{-1}\right]_{\lambda\lambda'}, \tag{6}$$
with
$$\mathcal{M}_{(n),\lambda\lambda'}(q_2,q_3) = \sum_{i,j}\frac{\langle i|\hat{Q}_{\lambda}|j\rangle\langle j|\hat{Q}_{\lambda'}|i\rangle}{(E_i + E_j)^n}\,(u_i v_j + v_i u_j)^2, \tag{7}$$
where $\hat{Q}_2$ and $\hat{Q}_3$ are the mass quadrupole and octupole operators, respectively, and $q_\lambda = \langle \hat{Q}_\lambda \rangle$.
The collective potential in Eq. (1) is obtained by subtracting the vibrational and rotational zero-point energy (ZPE) corrections from the total mean-field energy:
$$V_{\rm coll}(\beta_2,\beta_3) = E_{\rm MF}(\beta_2,\beta_3) - \Delta V_{\rm vib}(\beta_2,\beta_3) - \Delta V_{\rm rot}(\beta_2,\beta_3). \tag{8}$$
The vibrational and rotational ZPE corrections are calculated in the cranking approximation [62]:
$$\Delta V_{\rm vib}(\beta_2,\beta_3) = \frac{1}{4}{\rm Tr}\left[\mathcal{M}_{(3)}^{-1}\mathcal{M}_{(2)}\right], \tag{9}$$
and
$$\Delta V_{\rm rot}(\beta_2,\beta_3) = \frac{\langle\hat{J}^2\rangle}{2\mathcal{I}}, \tag{10}$$
respectively.
## 3 Results and discussion
The principal objective of this study is a systematic analysis that includes collective deformation energy surfaces (DESs), excitation energies and average quadrupole and octupole deformations of low-lying states, and electric dipole, quadrupole, and octupole transitions for even-even neutron-rich heavy nuclei ($88 \le Z \le 102$ and $190 \le N \le 212$) using the EDF-based QOCH model. To determine the collective input for the QOCH, we perform a constrained reflection-asymmetric relativistic mean-field plus BCS (RMF+BCS) calculation, with the effective interaction in the particle-hole channel defined by the relativistic density functional PC-PK1 [56], and a density-independent δ-force [63] in the particle-particle channel. The strength parameter of the δ-force, 333.9 MeV fm³ (397.0 MeV fm³) for neutrons (protons), is determined to reproduce the corresponding pairing gap of a reference spherical configuration, calculated using the relativistic Hartree-Bogoliubov (RHB) model with the finite-range separable pairing force [65]. This can be done because the essential effects of the off-diagonal parts of the pairing field neglected in the RMF+BCS calculations can be recovered by simply renormalizing the pairing strength, and consequently the low-energy structure is in good agreement with the predictions of the RHB model [64]. Moreover, the RHB model with the finite-range separable pairing force was successfully used in the description of octupole deformations [27] and low-energy excitation spectra [25, 34]. The solution of the single-nucleon Dirac equation in RMF+BCS is obtained by expanding the nucleon wave functions in an axially deformed harmonic oscillator basis with 20 major shells.
Figures 1, 2, 3, and 4 display the DESs of the even-even Ra, Th, U, Pu, Cm, Cf, Fm, and No isotopes in the $\beta_2$–$\beta_3$ plane, calculated with the RMF+BCS using the functional PC-PK1 and $\delta$-force pairing. The quadrupole and octupole deformations that correspond to the global minima, as well as the octupole deformation energies, defined as the energy differences between the non-octupole-deformed minima ($\beta_3=0$) and the global minima, are plotted in Fig. 5. The equilibrium quadrupole deformations for all the isotopic chains increase gradually, from nearly spherical to well-deformed shapes, as the neutron number increases from 190 to 212. All the isotopic chains except Ra exhibit a very interesting shape evolution: from nearly spherical, to octupole-deformed, and finally to octupole-soft equilibrium shapes. Stable equilibrium octupole deformations are calculated in the Th, U, Pu, Cm, Cf, Fm, and No chains. The octupole deformation energies exhibit peaks as functions of the neutron number, and the maximum values are observed in Pu and Cm. For the Ra isotopic chain, weak octupole deformation is predicted in some isotopes, but the energy surfaces are very shallow with respect to the octupole degree of freedom. Similar shape transitions in the actinides have also been obtained in studies based on different relativistic energy density functionals [28]. Some differences between these calculations are found in the exact location of the non-zero equilibrium octupole deformation and the corresponding octupole deformation energies. This can be attributed to the details of the single-particle spectra, especially of the levels coupled by $\Delta j=\Delta l=3$, and also to the different treatment of pairing correlations [28].
Fig. 1. Axially-symmetric quadrupole-octupole deformation energy surfaces of Ra and Th isotopes in the $\beta_2$–$\beta_3$ plane, calculated by the self-consistent RMF+BCS. For each nucleus, energy values are normalized with respect to the energy minimum of the ground state. The contours join points on the surface with the same energy, and the separation between neighboring contours is 0.5 MeV.
Fig. 2. Same as Fig. 1 but for U and Pu isotopes.
Fig. 3. Same as Fig. 1 but for Cm and Cf isotopes.
Fig. 4. Same as Fig. 1 but for Fm and No isotopes.
Fig. 5. Calculated values of the equilibrium quadrupole and octupole deformations, as well as the octupole deformation energy, as functions of the neutron number for the eight isotopic chains analyzed in the present study.
Fig. 6. Mean values of the quadrupole and octupole deformations, computed for the QOCH ground states $0^+_1$, as functions of the neutron number.
Figure 6 displays the expectation values of the quadrupole and octupole deformations in the QOCH ground states $0^+_1$ as functions of the neutron number. Initially the ground-state quadrupole deformation increases rapidly with neutron number, and then more gradually. The corresponding ground-state octupole deformation first increases and then decreases, with peaks at intermediate neutron numbers. In our calculation, a sizable average octupole deformation is predicted for the octupole-deformed nuclei.
Fig. 7. The energy spectra of the low-lying even-spin positive-parity states and odd-spin negative-parity states, as functions of the neutron number, for the eight isotopic chains analyzed in the present study.
Figure 7 displays the energy spectra of the lowest-lying even-spin positive-parity states and odd-spin negative-parity states for the eight isotopic chains. For the positive-parity bands, the excitation energies drop rapidly at first, and then vary slowly as more neutrons are added. The excitation energies of the negative-parity bands exhibit a parabolic behavior as functions of the neutron number, with minima whose positions gradually shift in going from the Ra and Th chains, through the U, Pu, Cm, Cf, and Fm chains, to the No chain. The lowest negative-parity state is found in a Cf isotope, with an excitation energy of 0.086 MeV. The evolution of the positive- and negative-parity bands with neutron number is correlated with that of the average quadrupole and octupole deformations in Fig. 6, respectively. The quadrupole deformation reflects the collectivity of a nucleus and, generally, a larger quadrupole deformation leads to a more compressed ground-state band. On the other hand, a larger octupole deformation corresponds to a stronger octupole correlation, which drives the negative-parity band closer to the ground state.
Another indication of the shape transition, from nearly spherical to octupole-deformed to octupole-soft, is the odd-even staggering shown in Fig. 8. For both positive and negative parity, we plot the calculated energy ratios for the yrast states of the eight isotopic chains as functions of the angular momentum $I$. For the lightest isotones the ratios are almost linear in $I$, indicating that these are nearly spherical nuclei. For the heaviest isotones, the odd-even staggering becomes more pronounced, which means that the negative-parity states form a separate rotational-like collective band built on the octupole vibrational state. In between, for the octupole-deformed Th, U, Pu, Cm, Cf, Fm, and No isotopes, the ratios are parabolic in $I$ and the odd-even staggering is negligible, indicating that positive- and negative-parity states form a single rotational band. Therefore, these isotopes are stably octupole-deformed. For the Ra isotopes, pronounced odd-even staggering is observed.
Fig. 8. The calculated energy ratios for the states of the positive-parity ground-state band ($I$ even) and the lowest negative-parity band ($I$ odd), as functions of the angular momentum $I$, for the eight isotopic chains analyzed in the present study.
Fig. 9. The calculated $B(E1)$, $B(E2)$, and $B(E3)$ values (in units of W.u.) as functions of the neutron number for the eight isotopic chains analyzed in the present study.
Low-energy $E3$ transition strengths are good measures of octupole collectivity. For low-lying states in nuclei, the ground-state transition probabilities $B(E3)$ reach a maximum in the region of octupole-deformed nuclei [2]. Figure 9 displays the calculated $B(E1)$, $B(E2)$, and $B(E3)$ values (in units of W.u.) as functions of the neutron number for the eight isotopic chains analyzed in the present study. The $B(E3)$ values first increase and then decrease with neutron number, with peaks that follow the evolution of the average octupole deformation (cf. Fig. 6). In our calculation, large $B(E3)$ values are predicted for the pronouncedly octupole-deformed nuclei. Large $B(E1)$ values are observed in the heavier Ra, Th, U, and Pu isotopes, and in the lighter Fm and No isotopes, a trend different from that of $B(E3)$. This may be because the electric dipole moment depends not only on the octupole correlations, but is also sensitive to shell effects and to the occupancy of different orbitals [2, 8]. The $B(E2)$ values increase gradually to more than 200 W.u. with increasing neutron number, indicating a shape transition from nearly spherical to well-deformed shapes for all the isotopic chains.
Fig. 10. Single-neutron levels (top panel) and single-proton levels (bottom panel) of a representative Cm isotope as functions of the deformation parameters, calculated by the RMF+BCS based on the PC-PK1 energy density functional. Each plot follows the quadrupole deformation parameter $\beta_2$ up to the position of the equilibrium minimum, with the octupole deformation parameter held at $\beta_3=0$ (left panels). With $\beta_2$ fixed at its equilibrium value, the panels on the right display the dependence of the single-nucleon energies on the octupole deformation $\beta_3$, starting from $\beta_3=0$. The thick dashed (black) curves denote the Fermi levels.
A microscopic picture of the onset of octupole deformation and octupole softness emerges from the dependence of the single-nucleon levels on the two deformation parameters $\beta_2$ and $\beta_3$. In Fig. 10 we plot the single-neutron and single-proton levels of a representative Cm isotope along a path in the $\beta_2$–$\beta_3$ plane, calculated by the RMF+BCS based on the PC-PK1 energy density functional. They are similar to the usual Nilsson orbitals, but we also show their evolution along the octupole direction. In the mean-field approach there is a close relation between the total binding energy and the level density around the Fermi level in the Nilsson diagram of single-particle energies. A lower-than-average density of single-particle levels around the Fermi energy results in extra binding, whereas a larger-than-average value reduces binding. Therefore, the onset of octupole minima (cf. Figs. 1-4) can be attributed to the low neutron-level density around the Fermi surface at the equilibrium deformation, induced by the repulsion between a pair of levels (blue curves) that originate from spherical neutron levels coupled by the octupole interaction. A low neutron-level density is also predicted at larger octupole deformation, which may cause the octupole softness of the heavier isotopes. Moreover, an octupole-deformed proton shell gap is obtained, which may enhance the octupole deformations in the heavier isotopic chains (cf. Figs. 3, 4).
## 4 Summary
In the present study we have performed a microscopic analysis of octupole shape transitions in eight isotopic chains: Ra, Th, U, Pu, Cm, Cf, Fm, and No with neutron numbers $190\le N\le 212$. Starting from self-consistent binding energy maps in the $\beta_2$–$\beta_3$ plane, calculated with the RMF+BCS model based on the functional PC-PK1 and $\delta$-force pairing, a recent implementation of the quadrupole-octupole collective Hamiltonian for vibrations and rotations has been used to calculate the spectroscopy of quadrupole and octupole states of the 96 even-even nuclei. The microscopic deformation energy surfaces exhibit transitions with increasing neutron number: from spherical quadrupole-vibrational to stable octupole-deformed nuclei, and finally to octupole vibrations characteristic of $\beta_3$-soft potentials in the neutron-rich actinides. The systematics of the energy spectra, odd-even staggering, and transition rates associated with both positive- and negative-parity yrast states points to the appearance of prominent octupole correlations at intermediate neutron numbers, and the corresponding lowering in energy of the negative-parity bands with respect to the positive-parity ground-state band.
A microscopic picture of the onset of octupole deformation emerges from the dependence of the single-nucleon levels on the two deformation parameters. The onset of octupole minima around the Cm isotopes can mainly be attributed to the low neutron-level density around the Fermi surface at the equilibrium deformation, which is induced by the repulsion between the pair of levels originating from spherical neutron levels coupled by the octupole interaction.
### Footnotes
1. Supported by the National Natural Science Foundation of China (11475140, 11575148)
2. Received 25 August 2017
3. © 2013 Chinese Physical Society and the Institute of High Energy Physics of the Chinese Academy of Sciences and the Institute of Modern Physics of the Chinese Academy of Sciences and IOP Publishing Ltd
### References
1. P. A. Butler and W. Nazarewicz, Rev. Mod. Phys. 68, 349 (1996).
2. I. Ahmad and P. A. Butler, Annu. Rev. Nucl. Part. Sci. 43, 71 (1993).
3. P. A. Butler and L. Willmann, Nucl. Phys. News 25, 12 (2015).
4. P. A. Butler, J. Phys. G 43, 073002 (2016).
5. L. P. Gaffney et al., Nature 497, 199 (2013).
6. B. Bucher et al., Phys. Rev. Lett. 116, 112503 (2016).
7. B. Bucher et al., Phys. Rev. Lett. 118, 152504 (2017).
8. P. Bonche, P.-H. Heenen, H. Flocard, and D. Vautherin, Phys. Lett. B 175, 387 (1986).
9. P. Bonche, in The Variation of Nuclear Shapes, edited by J. D. Garrett (World Scientific, Singapore, 1988), p. 302.
10. J. L. Egido and L. M. Robledo, Nucl. Phys. A 524, 65 (1991).
11. K. Rutz, J. A. Maruhn, P. G. Reinhard, and W. Greiner, Nucl. Phys. A 590, 680 (1995).
12. L. S. Geng, J. Meng, and H. Toki, Chin. Phys. Lett. 24, 1865 (2007).
13. J.-Y. Guo, P. Jiao, and X.-Z. Fang, Phys. Rev. C 82, 047301 (2010).
14. L. M. Robledo, M. Baldo, P. Schuck, and X. Viñas, Phys. Rev. C 81, 034315 (2010).
15. L. M. Robledo and G. F. Bertsch, Phys. Rev. C 84, 054302 (2011).
16. R. Rodríguez-Guzmán, L.M. Robledo, and P. Sarriguren, Phys. Rev. C 86, 034336 (2012).
17. L. M. Robledo and P. A. Butler, Phys. Rev. C 88, 051302 (2013).
18. L. M. Robledo, J. Phys. G 42, 055109 (2015).
19. Rémi N. Bernard, Luis M. Robledo, and Tomás R. Rodríguez, Phys. Rev. C 93, 061302(R) (2016).
20. J. Zhao, B.-N. Lu, E.-G. Zhao, and S.-G. Zhou, Phys. Rev. C 86, 057304 (2012).
21. S.-G. Zhou, Phys. Scr. 91, 063008 (2016).
22. J. Zhao, B.-N. Lu, E.-G. Zhao, and S.-G. Zhou, Phys. Rev. C 95, 014320 (2017).
23. K. Nomura, D. Vretenar, and B.-N. Lu, Phys. Rev. C 88, 021303 (2013).
24. K. Nomura, D. Vretenar, T. Nikšić, and B.-N. Lu, Phys. Rev. C 89, 024312 (2014).
25. K. Nomura, R. Rodríguez-Guzmán, and L. M. Robledo, Phys. Rev. C 92, 014312 (2015).
26. S. E. Agbemava, A. V. Afanasjev, and P. Ring, Phys. Rev. C 93, 044304 (2016).
27. S. E. Agbemava and A. V. Afanasjev, Phys. Rev. C 96, 024301 (2017).
28. S. Ebata and T. Nakatsukasa, Phys. Scr. 92, 064005 (2017).
29. W. Zhang, Z.-P. Li, and S.-Q. Zhang, Chin. Phys. C 34, 1094 (2010).
30. W. Zhang, Z. P. Li, S. Q. Zhang, and J. Meng, Phys. Rev. C 81, 034302 (2010).
31. Z. P. Li, B. Y. Song, J. M. Yao, D. Vretenar, and J. Meng, Phys. Lett. B 726, 866 (2013).
32. J. M. Yao, E. F. Zhou, and Z. P. Li, Phys. Rev. C 92, 041304(R) (2015).
33. Z. P. Li, T. Nikšić, and D. Vretenar, J. Phys. G 43, 024005 (2016).
34. E. F. Zhou, J. M. Yao, Z. P. Li, J. Meng, and P. Ring, Phys. Lett. B 753, 227 (2016).
35. S. Y. Xia, H. Tao, Y. Lu, Z. P. Li, T. Nikšić, and D. Vretenar, submitted to Phys. Rev. C.
36. W. Nazarewicz, P. Olanders, I. Ragnarsson, J. Dudek, G. A. Leander, P. Moller, and E. Ruchowsa, Nucl. Phys. A 429, 269 (1984).
37. P. Möller, R. Bengtsson, B. Carlsson, P. Olivius, T. Ichikawa, H. Sagawa, and A. Iwamoto, At. Data Nucl. Data Tables 94, 758 (2008).
38. H.-L. Wang, J. Yang, M.-L. Liu, and F.-R. Xu, Phys. Rev. C 92, 024303 (2015).
39. O. Scholten, F. Iachello, and A. Arima, Ann. Phys. (NY) 115, 325 (1978).
40. T. Otsuka and M. Sugita, Phys. Lett. B 209, 140 (1988).
41. P. G. Bizzeti and A. M. Bizzeti-Sona, Phys. Rev. C 70, 064319 (2004).
42. D. Bonatsos, D. Lenis, N. Minkov, D. Petrellis, and P. Yotov, Phys. Rev. C 71, 064309 (2005).
43. P. G. Bizzeti and A. M. Bizzeti-Sona, Phys. Rev. C 88, 011305(R) (2013).
44. N. Minkov, S. Drenska, M. Strecker, W. Scheid, and H. Lenske, Phys. Rev. C 85, 034306 (2012).
45. R. V. Jolos, P. von Brentano, and J. Jolie, Phys. Rev. C 86, 024319 (2012).
46. Y.-J. Chen, Z.-C. Gao, Y.-S. Chen, and Y. Tu, Phys. Rev. C 91, 014317 (2015).
47. M. Bender, P.-H. Heenen, and P.-G. Reinhard, Rev. Mod. Phys. 75, 121 (2003).
48. D. Vretenar, A. V. Afanasjev, G. A. Lalazissis, and P. Ring, Phys. Rep. 409, 101 (2005).
49. J. Meng, H. Toki, S. G. Zhou, S. Q. Zhang, W. H. Long, and L. S. Geng, Prog. Part. Nucl. Phys. 57, 470 (2006).
50. J. Stone and P.-G. Reinhard, Prog. Part. Nucl. Phys. 58, 587 (2007).
51. T. Nikšić, D. Vretenar, and P. Ring, Prog. Part. Nucl. Phys. 66, 519 (2011).
52. Relativistic Density Functional for Nuclear Structure, edited by J. Meng (World Scientific, Singapore, 2016).
53. P. Ring and P. Schuck, The Nuclear Many-Body Problem (Springer-Verlag, Heidelberg, 1980).
54. J. M. Yao, K. Hagino, Z. P. Li, J. Meng, and P. Ring, Phys. Rev. C 89, 054306 (2014).
55. P. W. Zhao, Z. P. Li, J. M. Yao, and J. Meng, Phys. Rev. C 82, 054319 (2010).
56. Q. S. Zhang, Z. M. Niu, Z. P. Li, J. M. Yao, and J. Meng, Front. Phys. 9, 529 (2014).
57. K. Q. Lu, Z. X. Li, Z. P. Li, J. M. Yao, and J. Meng, Phys. Rev. C 91, 027304 (2015).
58. S. Quan, Q. Chen, Z. P. Li, T. Nikšić, and D. Vretenar, Phys. Rev. C 95, 054321 (2017).
59. D. R. Inglis, Phys. Rev. 103, 1786 (1956).
60. S. T. Belyaev, Nucl. Phys. 24, 322 (1961).
61. M. Girod and B. Grammaticos, Nucl. Phys. A 330, 40 (1979).
62. M. Bender, K. Rutz, P.-G. Reinhard, and J. A. Maruhn, Eur. Phys. J. A 8, 59 (2000).
63. J. Xiang, Z. P. Li, J. M. Yao, W. H. Long, P. Ring, and J. Meng, Phys. Rev. C 88, 057301 (2013).
64. T. Nikšić, D. Vretenar, and P. Ring, Comp. Phys. Comm. 185, 1808 (2014).
|
# Weak (variational) formulation of Navier-Stokes equation
Computational Fluid Dynamics (CFD) is the field of studying the dynamics of fluid flow using mathematical and computational methods. The fluid flow is usually described by the Navier-Stokes or Stokes equations; appropriate numerical schemes are applied to them, and the resulting system of equations is solved on computers, yielding predictions of flow patterns and derived quantities such as the shear stress.
The concept of the weak formulation needed for solving partial differential equations (PDEs) numerically using the finite element method was already discussed here. In this post, we have a look at how to derive the weak form of the Navier-Stokes equations, which can be used in available open-source PDE solvers (like FreeFEM, FEniCS, and deal.ii) to simulate fluid flow in any desired domain.
In its general form, the Navier-Stokes equations describing the flow of an incompressible fluid with constant density $$\rho$$ in the domain $$\Omega \subset \mathbb{R}^{d}$$ (with $$d$$ being the dimension, so 2 or 3) can be written as:
$\left\{ {\begin{array}{*{20}{l}} \displaystyle {\frac{\partial \mathbf{u}}{\partial t} - {\nabla\cdot}[\nu(\nabla {\mathbf{u}} + \nabla {\mathbf{u}^T})] + ({\mathbf{u}}\cdot\nabla ){\mathbf{u}} + \nabla p = {\mathbf{f}},\quad x \in \Omega ,t > 0,} \\ \displaystyle {\nabla\cdot{\mathbf{u}} = 0,\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad x \in \Omega ,t > 0,} \end{array}} \right.$
in which $$\mathbf{u}$$ is the fluid velocity, $$\mathbf{p}$$ is the pressure (which is actually pressure divided by the density), $$\nu = \frac{\mu}{\rho}$$ is the kinematic viscosity (with $$\mu$$ being the dynamic viscosity), and $$\mathbf{f}$$ is a force term. The equations are conservation of linear momentum and conservation of mass (also called continuity equation), respectively. When $$\nu$$ is constant, the diffusion term can be simplified as:
$\text{div} [\nu(\nabla {\bf u}+\nabla {\bf u}^{T})] =\nu (\Delta {\bf u} + \nabla \text{div} {\bf u})=\nu \Delta {\bf u},$
which turns the general form into the following:
$\left\{ {\begin{array}{*{20}{l}} \displaystyle {\frac{\partial \mathbf{u}}{\partial t} - \nu\Delta{\mathbf{u}} + \left( {\mathbf{u} \cdot \nabla } \right) {\mathbf{u}} + \nabla p = {\mathbf{f}},\quad x \in \Omega ,t > 0,} \\ \displaystyle {\nabla\cdot{\mathbf{u}} = 0,\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad x \in \Omega ,t > 0,} \end{array}} \right.$
This equation satisfies the incompressibility condition $$\nabla\cdot\mathbf{u}=0$$ and needs proper initial and boundary conditions to be well-posed. The initial condition can be defined as:
${\bf u}({\bf x},0)={\bf u}_{0}({\bf x})\qquad \forall\,{\bf x}\in\Omega,$
where $${\bf u}_{0}$$ is a divergence-free velocity field. Various types of boundary conditions can be applied. For example, if $$\partial \Omega$$ is the boundary of $$\Omega$$, it can be split into three distinct parts, $$\partial \Omega=\Gamma_{1} \cup \Gamma_{2} \cup \Gamma_{3}$$, each with a different type of condition. On $$\Gamma_{1}$$, an inlet can be defined as a Dirichlet boundary condition on the velocity for a given velocity profile $${\bf g}$$:
${\bf u} = {\bf g} \quad \text{on } \Gamma_1$
On $$\Gamma_2$$, a wall boundary no-slip condition can be considered:
${\bf u} = 0 \quad \text{on } \Gamma_2$
On $$\Gamma_3$$, for the outlet condition, a homogeneous Neumann condition on velocity and a zero pressure condition can be defined like:
$\frac{\partial {\bf u}}{\partial n} = 0, \quad \mathbf{p} = 0, \quad \text{on } \Gamma_3$
with $$n$$ being the normal direction on the boundary $$\partial \Omega$$. Broadly speaking, these boundaries can be grouped into two sets: $$\Gamma_{D} = \Gamma_{1} \cup \Gamma_{2}$$ and $$\Gamma_{N} = \Gamma_{3}$$ for boundaries with Dirichlet and Neumann conditions, respectively.
The Navier-Stokes equations can be written componentwise for the individual components of the velocity field in Cartesian coordinates. Denoting the components by $$u_i, i=1,\ldots,d$$ (with $$d=2$$ in 2D and $$d=3$$ in 3D), the equations can be presented as:
$\left\{ {\begin{array}{*{20}{l}} \displaystyle {\frac{\partial {u_i}}{\partial t} - \nu \Delta {u_i} + \mathop \sum \limits_{j = 1}^d {u_j}\frac{\partial {u_i}}{\partial {x_j}} + \frac{\partial p}{\partial {x_i}} = {f_i},\qquad i = 1, \ldots ,d,} \\ \displaystyle {\mathop \sum \limits_{j = 1}^d \frac{\partial {u_j}}{\partial {x_j}} = 0.} \end{array}} \right.$
For deriving the weak formulation, the first equation of the Navier-Stokes system is multiplied by a test function $${\bf v}$$ from a proper function space $$V$$ in which the test functions vanish on the Dirichlet boundary:
$V = [H^{1}_{\Gamma_{D}}(\Omega)]^{d} = \lbrace{\bf v} \in [H^{1}(\Omega)]^{d} : {\bf v}|_{\Gamma_{D}} = {\bf 0}\rbrace,$
and integrated over $$\Omega$$, yielding:
$\int_{\Omega} \frac{\partial {\bf u}}{\partial t}\cdot{\bf v}\, d\omega - \int_{\Omega}\nu\Delta{\bf u}\cdot{\bf v}\, d\omega + \int_{\Omega}[({\bf u}\cdot\nabla){\bf u}]\cdot{\bf v}\, d\omega + \int_{\Omega}\nabla p\cdot{\bf v}\, d\omega = \int_{\Omega}{\bf f}\cdot{\bf v}\, d\omega.$
Applying Green’s divergence theorem (integration by parts) to the viscous and pressure terms results in:
$-\int_{\Omega} \nu \Delta \mathbf{u} \cdot \mathbf{v} d \omega=\int_{\Omega} \nu \nabla \mathbf{u} \cdot \nabla \mathbf{v} d \omega-\int_{\partial \Omega} \nu \frac{\partial \mathbf{u}}{\partial \mathbf{n}} \cdot \mathbf{v} d \gamma$
and
$\int_{\Omega} \nabla p \cdot \mathbf{v} d \omega=-\int_{\Omega} p \nabla\cdot \mathbf{v} d \omega+\int_{\partial \Omega} p \mathbf{v} \cdot \mathbf{n} d \gamma$
Substituting these two equations into the first equation yields: $$\begin{array}{r} \displaystyle\int_{\Omega} \frac{\partial \mathbf{u}}{\partial t} \cdot \mathbf{v} d \omega+\int_{\Omega} \nu \nabla \mathbf{u} \cdot \nabla \mathbf{v} d \omega+\int_{\Omega}[(\mathbf{u} \cdot \nabla) \mathbf{u}] \cdot \mathbf{v} d \omega-\int_{\Omega} p \nabla\cdot \mathbf{v} d \omega \\ \displaystyle=\int_{\Omega} \mathbf{f} \cdot \mathbf{v} d \omega+\int_{\partial \Omega}\left(\nu \frac{\partial \mathbf{u}}{\partial \mathbf{n}}-p \mathbf{n}\right) \cdot \mathbf{v} d \gamma \quad \forall \mathbf{v} \in V . \end{array}$$
The last term of this equation corresponds to the prescribed Neumann data: it vanishes on $$\Gamma_3$$ because of the homogeneous Neumann and zero-pressure conditions defined there, and it vanishes on the Dirichlet boundaries because the test functions in $$V$$ are zero there.
Similarly, the second (continuity) equation of the Navier-Stokes system is multiplied by a test function $$q$$ belonging to the function space $$Q$$, called the pressure space:
$Q = L^2_0(\Omega) = \lbrace q \in L^2(\Omega) : {\mathop{\int}_{\Omega}} q \ d\omega = 0\rbrace,$
resulting in:
${\mathop{\int}_{\Omega}} q \nabla\cdot{\bf u}\ d\omega = 0 \qquad \forall q \in Q.$
The last two equations constitute the so-called weak (variational) form of the Navier-Stokes equations.
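As a concrete illustration, the weak form above can be transcribed almost directly into one of the open-source solvers mentioned earlier. The following sketch uses legacy FEniCS (dolfin) with Taylor-Hood elements and a single semi-implicit time step; the unit-square domain, boundary markup, inflow profile g, and the constants nu and dt are assumptions made only for this example.

# A minimal sketch of the weak form in legacy FEniCS (dolfin); all geometry
# and physical constants here are illustrative assumptions.
from dolfin import *

mesh = UnitSquareMesh(32, 32)
P2 = VectorElement("Lagrange", mesh.ufl_cell(), 2)   # velocity space
P1 = FiniteElement("Lagrange", mesh.ufl_cell(), 1)   # pressure space
W = FunctionSpace(mesh, MixedElement([P2, P1]))

(u, p) = TrialFunctions(W)
(v, q) = TestFunctions(W)

nu = Constant(0.01)                     # kinematic viscosity
dt = Constant(0.01)                     # time-step size
f = Constant((0.0, 0.0))                # body force

w_old = Function(W)                     # solution at the previous time step
u_old = split(w_old)[0]

# Dirichlet conditions: inflow profile g on Gamma_1 (x = 0), no-slip on
# Gamma_2 (y = 0 and y = 1); the outflow Gamma_3 (x = 1) keeps the natural
# "do-nothing" condition, so the boundary integral vanishes there.
g = Expression(("4.0*x[1]*(1.0 - x[1])", "0.0"), degree=2)
bcs = [DirichletBC(W.sub(0), g, "near(x[0], 0.0)"),
       DirichletBC(W.sub(0), Constant((0.0, 0.0)),
                   "near(x[1], 0.0) || near(x[1], 1.0)")]

# The weak form derived above: time derivative, viscous term integrated by
# parts, convection linearized around u_old, pressure-divergence coupling,
# and the continuity equation tested with q.
F = (dot((u - u_old) / dt, v) * dx
     + nu * inner(grad(u), grad(v)) * dx
     + dot(dot(u_old, nabla_grad(u)), v) * dx
     - p * div(v) * dx
     + q * div(u) * dx
     - dot(f, v) * dx)

w = Function(W)
solve(lhs(F) == rhs(F), w, bcs)         # one time step of the linearized system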
|
auctex-devel
## [AUCTeX-devel] Math macros and `TeX-insert-macro'
From: Ralf Angeli Subject: [AUCTeX-devel] Math macros and `TeX-insert-macro' Date: Sat, 25 Oct 2008 18:33:34 +0200
Hi,
you've likely seen the (slightly obnoxious) message about math macros
not being available for completion with `TeX-insert-macro'. I've been
wondering myself why this is the case. Could somebody think of a reason
why we would not want them to be available for completion (in LaTeX
mode)?
I'd probably add them if nobody objects. An alternative would be to
make them available through a separate function and key binding but I
don't think this is necessary.
--
Ralf
|
# Why do automatic bone weights paint distant meshes
I have created a simple block-person from a few cube meshes, as well as an armature. When I parent the armature to the meshes with automatic weighting, it seems like Blender is assigning distant vertices to a bone while ignoring nearby vertices.
The docs on automatic weights say:
It calculates how much influence a particular bone would have on vertices based on the distance from those vertices to a particular bone (“bone heat” algorithm)
This doesn't seem to be what I observe. The calf.r and thigh.r bones are closer to the vertices in the right leg than to the vertices in the left leg, yet they entirely influence the latter (and vice versa for the left side).
I realize that I can manually assign vertex groups if needed, but I'm wondering if this unintuitive behavior means I set up my mesh or armature poorly.
|
# emacs: what is C-c % doing in auctex and how can I make it behave better?
C-c % is supposed to be the Emacs AUCTeX shortcut for commenting out stuff. (There's also C-c ;, which comments out the marked region, but that one works.) Now sometimes it comments out a single line, sometimes it comments out a line and the ones above it. It doesn't seem to have very consistent behaviour.
What I'd really like it to do is comment out the line the cursor is on unless it's on a begin or end tag, in which case comment out the whole environment. (Actually, I'd settle for just understanding the slightly odd behaviour of the comment macro...)
-
## migrated from superuser.com Nov 24 '10 at 16:47
This question came from our site for computer enthusiasts and power users.
This is about emacs, not about programming. Why did it get migrated? – Seamus Nov 24 '10 at 16:48
C-c % runs TeX-comment-or-uncomment-paragraph. For what exactly is considered a paragraph here, see the manual:
Command: TeX-comment-or-uncomment-paragraph
(C-c %) Add or remove % from the beginning of each line in the current paragraph. When removing % characters the paragraph is considered to consist of all preceding and succeeding lines starting with a %, until the first non-comment line.
Here's a commenting function that does more or less what you want. Uncommenting an environment only works if LaTeX-syntactic-comments is t (and not always very well even then).
(defun LaTeX-comment-environment-or-line (arg)
"Comment or uncomment the current line.
If the current line is the \\begin or \\end line of an environment, comment
or uncomment the whole environment."
(interactive "*P")
(save-match-data
(save-excursion
(beginning-of-line)
(cond
((looking-at (concat "\\s-*\\(" TeX-comment-start-regexp "\\)?\\s-*"
(regexp-quote TeX-esc) "begin"))
(let ((begin (point)))
(goto-char (match-end 0))
(LaTeX-find-matching-end)
(TeX-comment-or-uncomment-region begin (point) arg)))
((looking-at (concat "\\s-*\\(" TeX-comment-start-regexp "\\)?\\s-*"
(regexp-quote TeX-esc) "end"))
(let ((end (save-excursion (end-of-line) (point))))
(LaTeX-find-matching-begin)
(beginning-of-line)
(TeX-comment-or-uncomment-region (point) end arg)))
(t
(TeX-comment-or-uncomment-region
(point) (save-excursion (end-of-line) (point)) arg))))))
-
AucTeX actually defines a "mark environment" command: C-c . – Seamus Dec 1 '10 at 13:28
|
## College Physics (4th Edition)
The small piston must be pushed down a distance of $1.0~meter$
Let $F_A$ be the force exerted by the large piston on the car. The work that this piston does on the car is $F_A~d$, where $d$ is the distance through which the force is exerted. In order to lift the car, the work done on the small piston must be equal to the work done by the large piston. We can find the distance $d'$ through which the small piston must be pushed down with a force $F_a$: $F_a~d' = F_A~d$, so $d' = \frac{F_A~d}{F_a}$. By Pascal's principle, $\frac{F_A}{F_a} = \frac{A}{a}$, so: $d' = \frac{A~d}{a} = 100.0~d = (100.0)~(1.0~cm) = 1.0~m$. The small piston must be pushed down a distance of $1.0~meter$.
|
# Cons of retained earnings
Retained earnings (RE) are the portion of a business's net income left over after it has paid out dividends to its shareholders. The figure reflects the company's accumulated net income or loss, less cash dividends paid, plus prior-period adjustments, and it is reported as a separate line item in the shareholders' equity section of the balance sheet. It is calculated at the end of each accounting period as: retained earnings = beginning-period retained earnings + net income (or loss) − cash dividends − stock dividends.

The decision to retain the earnings or to distribute them among the shareholders is usually left to the company's management. Growth-focused companies may pay small dividends or none at all, preferring to use retained earnings to finance activities like research and development, marketing, working capital requirements, capital expenditures, and acquisitions. For the same reason, the retained earnings of a capital-intensive industry or of a company in a growth period will generally be higher than those of less capital-intensive or more stable companies.

Retained profits have several advantages. They are the most economical and convenient source of finance for an established profitable business: there is no issue cost attached to them, and none of the hassles of raising funds from external sources. Financing out of retained earnings is also known as ploughing back of profits, self-financing, or internal financing, and it gives the company safety and flexibility in meeting its debt obligations and funding expansion. Retained earnings are not free, however: their cost is usually estimated using one of three methods, namely (a) the capital-asset-pricing-model (CAPM) approach, (b) the bond-yield-plus-premium approach, or (c) the discounted-cash-flow approach.

The cons are just as real. The amount that can be raised this way is limited and tends to be highly variable from year to year, so a firm runs the risk of missing business opportunities while it builds up the necessary funds. Excessive reliance on retained earnings can starve shareholders of dividend income, and the attractiveness of retention also depends on the investor's tax position, since dividends and capital gains can be taxed differently. High retained earnings may indicate a track record of profitability, but they may also indicate that management is struggling to find profitable investment opportunities in which to use the money; under those circumstances, shareholders might prefer that the balance simply be paid out as dividends. Paying a cash dividend reduces the company's liquid assets, while a stock dividend (sometimes called a scrip dividend, a reward paid in additional shares rather than cash) transfers a part of retained earnings to common stock and decreases the per-share valuation. Finally, retained earnings can be negative: a deficit means the corporation has incurred more losses in its existence than profits.
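For a quick illustration of the formula above (all figures assumed): a company that begins the year with retained earnings of 100,000, earns a net income of 30,000, and pays 10,000 in cash dividends ends the year with retained earnings of 100,000 + 30,000 − 10,000 = 120,000.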
|
# Why does JLink lock unopened jar-files in Windows
Since the introduction of JLink (or maybe a bit later), it has been possible to put jar libraries inside the directory structure of packages. If such a package is installed in one of the places Mathematica searches, the jar libraries are automatically found by JLink.
The problem is that under 64 bit Windows XP, after loading JLink these files are locked and cannot be removed from within Mathematica, although none of the jars is actually used; they are only added to the class path of Java.
Let me give a short example: place a Test folder with a Java subfolder inside your $UserAddOnsDirectory and copy any jar file into the Java folder. This looks like
Test/
└── Java
└── blub.jar
When you now start a fresh kernel and load <<JLink you can verify, by looking at JavaClassPath[], that this directory and the jar were indeed added to the class path. When you try to remove the complete folder, you get a (misleading) error message
DeleteDirectory[path, DeleteContents -> True]
(*
DeleteDirectory::dirne: Directory D:\Documents and Settings\mscheibe\
Application Data\Mathematica\Applications\Test not empty. >>
*)
After quitting the kernel, or after calling UninstallJava[] or JLinkQuitJava[], the directory can be removed. This behavior does not occur on Linux or MacOSX, and I could only test it on Windows XP 64 bit.
Question: Why, if the jar path is only appended to the class path, does JLink lock the files as if they were opened for reading? Why does this happen on Windows only? Is there anyone with deeper insight into JLink who can suggest a better solution than calling JLinkQuitJava[]; Pause[2]; inside my module when I want to remove a jar file found by Mathematica?
Remark 1: This behavior is not restricted to automatically found jars. When you use java-code by manually adding a jar with AddToClassPath[path] the similar thing happens.
Remark 2: I forgot to mention that JLinkQuitJava[] does not really quit Java the way Quit would kill the kernel. Meaning, I can call it and, without doing anything else, still call my functions inside the jars. This seems to suggest that JLinkQuitJava[] kills the class-loader instance Leonid mentioned, which locks the files. But a call to e.g. JavaNew seems to set everything up correctly again.
|
# Tubular neighbourhood style theorem reference request
Let $X$ be a smooth manifold and $Y$ be a closed submanifold. Then there exists a neighbourhood $U$ of $Y$ in $X$ such that $Y$ is a deformation retract of $U$, right?
I can only find (stronger forms of) this in literature under the assumption that $Y$ is compact, which I don't think is necessary for the above statement. So what would be a reference?
Since you just want a reference request: see Theorem III.2.2, Corollary III.2.3, and the remark after Definition III.2.4 of Kosinski's Differential Manifolds.
Thanks for the reference. I am a bit confused. Does Kosinski say that in my situation $U$ can even be chosen to be diffeomorphic to the normal bundle of $Y$ in $X$? – Jan Mar 14 '12 at 18:55
@Jan: I believe that is exactly what Kosinski says. (Sorry, I don't have the book with me right now, so am going by memory, but when I looked yesterday I think he defined the tubular neighborhood in terms of being diffeomorphic to normal bundle.) – Willie Wong Mar 15 '12 at 9:14
Thank you again! – Jan May 15 '12 at 7:38
|
reversible vs irreversible expansions
$w=-P\Delta V$ (irreversible expansion against a constant external pressure)
and
$w=-\int_{V_{1}}^{V_{2}}P\,dV=-nRT\ln\frac{V_{2}}{V_{1}}$ (reversible isothermal expansion)
Elizabeth Bowen 1J
Posts: 53
Joined: Wed Nov 14, 2018 12:20 am
reversible vs irreversible expansions
If we need to calculate the work of expansion of a system, and we're not told whether it's a reversible or irreversible expansion, how do we know which equation should be used?
thanks
AArmellini_1I
Posts: 107
Joined: Fri Aug 09, 2019 12:15 am
Re: reversible vs irreversible expansions
Then it's based on your conditions regarding the system. Is there constant pressure? How about constant temperature? etc.
Elizabeth Bowen 1J
Posts: 53
Joined: Wed Nov 14, 2018 12:20 am
Re: reversible vs irreversible expansions
ok, thanks, that makes sense; if we're told that the pressure isn't constant but the temperature is, does that mean it's reversible?
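To make the distinction concrete, here is a worked comparison with assumed numbers: let 1 mol of an ideal gas at 298 K expand isothermally to twice its volume.

Reversible: $w=-nRT\ln\frac{V_{2}}{V_{1}}=-(1\ mol)(8.314\ J\,mol^{-1}K^{-1})(298\ K)\ln 2\approx -1.7\times 10^{3}\ J$

Irreversible against a constant external pressure equal to the final pressure: $w=-P_{ext}\Delta V=-\frac{nRT}{V_{2}}(V_{2}-V_{1})=-\frac{nRT}{2}\approx -1.2\times 10^{3}\ J$

The magnitude of the work is largest for the reversible path.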
|
If you’ve seen some of my work at flashandmath.com, then you know that I enjoy playing with particles, especially the 3D variety (for example, see here and here). Below is one of my first adventures with 3D particles in the HTML5 canvas: particles which begin their life on a sphere and then fly away. The effect looks like some sort of microscopic infectious agent, or perhaps just something from nature that would make you sneeze. Call it whatever you like, but I’ll just call it a dusty sphere.
It is important to note that I am making use of a 2D canvas context in this example, instead of the exciting but still not widely supported WebGL 3D context.
The code used here is built off of my earlier examples, and features linked lists and an object pool for efficiency. See my earlier post here for a little more discussion of the particle animation code. Also being reused here is the idea of “envelope” parameters (attack, hold, and decay) which control the evolution of the particles over time. In my earlier examples, the envelope parameters were used to change the size of the particles over time; here they control the alpha value, so each particle can fade in at the beginning of its life, and fade out at the end.
I have also made a few changes to the basic particle animation engine: before, the particles were defined using a constructor function so that they could inherit a prototype method, but I have simplified the code by eliminating this setup. The particles are simply JavaScript Objects, and parameters such as position and velocity components are added dynamically.
### Drawing in 3D the simple way
The 3D imaging used here is very simple for a couple of reasons. First of all, the objects to be drawn are simple dots. If they were planar images (like sides of cubes, flipping playing cards, or twirling snowflakes), we would have to worry about skewing the images properly when they are viewed obliquely. But simple dots suggest spherical objects which look the same no matter what angle you view them from. Also, to simplify matters the particles are all given the same color, so we don’t have to worry about depth sorting. If you’re not familiar with depth sorting, the idea is that when we draw objects in 3D, objects which are behind other objects have to be drawn first and then the nearer objects are painted over them. But if they are all the same color then the layering is not detectable, so we can draw the objects in whatever order we like.
So without having to worry about skewing images or depth sorting, drawing in 3D just comes down to proper scaling of coordinates and object sizes. Objects further away should appear smaller, and nearer objects should appear bigger. In the coordinate setup used here, the x and y axes are in their usual position for canvas drawing: x goes from left to right, y goes from top to bottom. The third axis, z, can be thought of as pointing out of the computer screen toward you. (The more mathematical-minded reader will note that this choice of z direction creates a left-handed coordinate system instead of the usual right-handed version. This was an arbitrary choice.)
The transformation to use is simple. First we must set a few parameters. The first parameter, fLen, can be thought of as the distance from the viewer’s eye to the origin, where the line of sight is along the z-axis. Second, we define two coordinates projCenterX and projCenterY which set the position in the 2D view plane where the 3D origin will be projected. Then anything which is to be drawn at the 3D point (x,y,z) should be drawn in the 2D plane at the projected x and y coordinates
projX = fLen/(fLen - z)*x, projY = fLen/(fLen - z)*y.
### Rotating and projecting
To make things a bit more interesting, the whole space containing the particles rotates slowly about a vertical axis. An equivalent way to think of this is that the viewer’s eye (or camera) is rotating around the display. For an extensive and well-written article explaining the mathematics behind 3D projections and rotations of coordinates, see Barbara Kaskosz’s excellent post at flashandmath here. In the example here, however, things are simplified because the rotation occurs automatically (not from user interaction), and the rotation only alters the x and z coordinates (because the rotation is about a vertical axis).
Here is how this all comes together in the code for the demo above. First, a current rotation angle turnAngle is set by adding a fixed amount turnSpeed on every frame:
turnAngle = (turnAngle + turnSpeed) % (2*Math.PI);
We will need the sine and cosine of this angle twice each, so we first calculate
sinAngle = Math.sin(turnAngle);
cosAngle = Math.cos(turnAngle);
We will now determine the rotated 3D coordinates rotX and rotZ for a particle p which is at a point with coordinates p.x, p.y, p.z (note that the y coordinate will remain unchanged). If we were rotating about the y-axis itself, the correct transformation would be
rotX = cosAngle*p.x + sinAngle*p.z;
rotZ = -sinAngle*p.x + cosAngle*p.z;
But in fact, we are rotating about a vertical axis at the center of our sphere, which is set to a z‑coordinate sphereCenterZ. So the rotational transformation has to be adjusted accordingly:
rotX = cosAngle*p.x + sinAngle*(p.z - sphereCenterZ);
rotZ = -sinAngle*p.x + cosAngle*(p.z - sphereCenterZ) + sphereCenterZ;
Finally, we project these new 3D coordinates (rotX, y, rotZ) to the 2D viewing plane using the transformation described above:
m = fLen/(fLen - rotZ);
p.projX = rotX*m + projCenterX;
p.projY = p.y*m + projCenterY;
It is at this point in the 2D viewing plane where we will draw our particle, and its size will be scaled by the same factor m.
### Depth based darkening
To add to the 3D effect, particles which are further away from the viewer are colored more darkly than nearer particles. This is achieved by simply lowering the alpha value of particles further back, which has the effect of darkening them as they are drawn on a black background. Again, because the particles all have the same color, the alpha blending makes depth-sorting unnecessary.
### Randomly distributing points on a sphere
Choosing a random point on a sphere is easy if you understand spherical coordinates (see the Wikipedia entry here). You simply need to choose a random angle theta ranging from 0 to 2π and another angle phi ranging from 0 to π (the radius coordinate will be the constant sphere radius). But if you set these random angles in the naive way:
//WRONG way
theta = Math.random()*2*Math.PI;
phi = Math.random()*Math.PI;
then points near the poles of the sphere will be more heavily weighted and this will not give you an even distribution of points. Counteracting this bias can be accomplished using the arccosine function:
//RIGHT way
theta = Math.random()*2*Math.PI;
phi = Math.acos(Math.random()*2-1);
This has the effect of more frequently choosing angles near the equator, which is what you want, because in an even distribution of points there are more points near the equator than there are near the poles.
### The particle motion and evolution
Each particle has a lifetime which is kept track of using its age property, which increments by one on every update of the screen. The particle will have different attributes and behaviors according to its age. If the age is still less than its stuckTime, the particle will remain stuck to its initial position. After this period of time, its velocity and position will be updated according to some acceleration factors. The initial velocity parameters will cause the particle to fly outwards away from the sphere center (after it becomes unstuck). Some random acceleration amounts will be added on each frame to create some irregular motion.
The particles also change their alpha value so that they fade in and out over time. These timing parameters are called the attack, hold, and decay times. A particle will go from alpha zero to its maximum alpha in the attack time, hold the maximum alpha for the duration of the hold time, then fade back to zero alpha in the decay time. When a particle reaches the end of this lifecycle, it is removed from the list of active particles and placed into a recycle bin (object pool) to be used again later when another particle needs to be added to the display.
### Experiment!
It is easy to modify this example to your liking. You simply need to decide how you want the particles to move through 3D space, and set the x, y, and z coordinates of the particles within the code. You can leave the projection computations in the code untouched, and change the rotational speed or remove the rotation completely. Just be aware that this is not the ultra-fast hardware accelerated 3D rendering that you’ll get from WebGL or Flash Stage3D. But for simple 3D drawing, a 2D canvas is sufficient.
|
Consider the following two equations:
$\frac{dx}{dt}+x+y=0$
$\frac{dy}{dt}-x=0$
The above set of equations is represented by
1. $\frac{d^{2}y}{dt^{2}}-\frac{dy}{dt}-y=0$
2. $\frac{d^{2}x}{dt^{2}}-\frac{dx}{dt}-y=0$
3. $\frac{d^{2}y}{dt^{2}}+\frac{dy}{dt}+y=0$
4. $\frac{d^{2}x}{dt^{2}}+\frac{dx}{dt}+y=0$
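A worked check (not part of the original question): the second equation gives $x=\frac{dy}{dt}$, hence $\frac{dx}{dt}=\frac{d^{2}y}{dt^{2}}$; substituting both into the first equation yields

$\frac{d^{2}y}{dt^{2}}+\frac{dy}{dt}+y=0,$

which is option 3.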
|
What is the indefinite integral?
Wiki User
2011-02-07 06:13:09
An indefinite integral is a version of an integral that, unlike a definite integral, returns an expression instead of a number.
The general form of a definite integral is:
$\int_a^b f(x)\,dx$
The general form of an indefinite integral is:
$\int f(x)\,dx$
An example of a definite integral is:
$\int_0^2 x^2\,dx$
An example of an indefinite integral is:
$\int x^2\,dx$
In the definite case, the answer is $2^3/3 - 0^3/3 = 8/3$.
In the indefinite case, the answer is $x^3/3 + C$, where $C$ is an arbitrary constant.
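For readers who want to check this with code, a quick sketch using sympy (not part of the original answer):
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x**2, x))           # x**3/3 -- sympy omits the constant C
print(sp.integrate(x**2, (x, 0, 2)))   # 8/3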
|
## checking for finite variance of importance samplers
Over a welcome curry yesterday night in Edinburgh I read this 2008 paper by Koopman, Shephard and Creal on testing the assumptions behind importance sampling, whose purpose is to check on-line for (in)finite variance in an importance sampler, based on the empirical distribution of the importance weights. To this goal, the authors use the upper tail of the weights and a limit theorem that provides the limiting distribution as a type of Pareto distribution
$\dfrac{1}{\beta}\left(1+\xi z/\beta \right)^{-1-1/\xi}$
over (0,∞). They then implement a series of asymptotic tests like the likelihood ratio, Wald and score tests to assess whether or not the shape parameter ξ of the Pareto distribution is below ½. While there is nothing wrong with this approach, which produces a statistically validated diagnosis, I still wonder at the added value from a practical perspective, as raw graphs of the estimation sequence itself should exhibit similar jumps and a similar lack of stabilisation as the ones seen in the various figures of the paper. Alternatively, a few repeated calls to the importance sampler should disclose the poor convergence properties of the sampler, as in the above graph, where the blue line indicates the true value of the integral.
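As an illustration of this kind of tail diagnostic, here is a minimal sketch (my own toy example, not the authors' code) that fits a generalized Pareto to the upper tail of the weights of a deliberately bad importance sampler; a shape estimate ξ at or above ½ signals infinite variance:
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
# target N(0,1), proposal N(0, 0.5**2): tails too light, weight variance infinite
x = rng.normal(0.0, 0.5, size=100_000)
log_p = -0.5 * x**2
log_q = -0.5 * (x / 0.5) ** 2 - np.log(0.5)
log_w = log_p - log_q
w = np.exp(log_w - log_w.max())          # normalised for numerical stability

u = np.quantile(w, 0.99)                 # high threshold for the tail
xi, loc, beta = genpareto.fit(w[w > u] - u, floc=0.0)
print("estimated shape xi:", xi)         # expect a value at or above 0.5 here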
|
## Charles’ and Gay-Lussac’s Law: Temperature and Volume
#### Learning Objective
• State Charles’ Law and its underlying assumptions
#### Key Points
• The lower the pressure of a gas, the greater its volume (Boyle’s Law); at low pressures, $\frac{V}{273}$ will have a larger value.
• Charles’ and Gay-Lussac’s Law can be expressed algebraically as $\frac{\Delta V}{\Delta T} = constant$ or $\frac{V_1}{T_1} = \frac{V_2}{T_2}$ .
#### Terms
• absolute zero: the theoretical lowest possible temperature; by international agreement, absolute zero is defined as 0 K on the Kelvin scale and as −273.15° on the Celsius scale
• Charles’ law: at constant pressure, the volume of a given mass of an ideal gas increases or decreases by the same factor as its temperature on the absolute temperature scale (i.e. gas expands as temperature increases)
## Charles’ and Gay-Lussac’s Law
Charles’ Law describes the relationship between the volume and temperature of a gas. The law was first published by Joseph Louis Gay-Lussac in 1802, but he referenced unpublished work by Jacques Charles from around 1787. This law states that at constant pressure, the volume of a given mass of an ideal gas increases or decreases by the same factor as its temperature (in Kelvin); in other words, temperature and volume are directly proportional. Stated mathematically, this relationship is:
$\frac {V_1}{T_1}=\frac{V_2}{T_2}$
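A one-line computation makes the proportionality concrete (a sketch; temperatures must be absolute, in kelvin):
def charles_v2(v1, t1_kelvin, t2_kelvin):
    # V1/T1 = V2/T2  =>  V2 = V1 * T2/T1
    return v1 * t2_kelvin / t1_kelvin

print(charles_v2(100.0, 283.0, 313.0))   # ~110.6 L, the tire example below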
## Example
• A car tire filled with air has a volume of 100 L at 10°C. What will the expanded volume of the tire be after driving the car has raised the temperature of the tire to 40°C?
• $\frac {V_1}{T_1}=\frac{V_2}{T_2}$
• $\frac {\text{100 L}}{\text{283 K}}=\frac{V_2}{\text{313 K}}$
• $V_2=\text{110 L}$
## V vs. T Plot and Charles’ Law
A visual expression of Charles’ and Gay-Lussac’s Law is shown in a graph of the volume of one mole of an ideal gas as a function of its temperature at various constant pressures. The plots show that the ratio $\frac{V}{T}$ (and thus $\frac{\Delta V}{\Delta T}$) is a constant at any given pressure. Therefore, the law can be expressed algebraically as $\frac{\Delta V}{\Delta T} = \text{constant}$ or $\frac{V_1}{T_1} = \frac{V_2}{T_2}$.
## Extrapolation to Zero Volume
If a gas contracts by 1/273 of its volume for each degree of cooling, it should contract to zero volume at a temperature of –273°C; this is the lowest possible temperature in the universe, known as absolute zero. This extrapolation of Charles’ Law was the first evidence of the significance of this temperature.
## Why Do the Plots for Different Pressures Have Different Slopes?
The lower a gas’s pressure, the greater its volume (Boyle’s Law), so at low pressures the fraction $\frac{V}{273}$ will have a larger value; therefore, the gas must “contract faster” to reach zero volume when its starting volume is larger.
|
The Nagoya Mathematical Journal is published quarterly with the cooperation of Nagoya University. Since its inception in 1950, the Nagoya Mathematical Journal has endeavored to publish research papers of the highest quality with appeal to the general mathematical audience.
In 2010 the Nagoya Mathematical Journal will move from an open-access to a subscription model to provide a more sustainable funding source for the journal. Please see the NMJ Subscriptions page for more details about this change.
### Volume 194
#### Publication Date: 2009
Nonrational weighted hypersurfaces
A packing problem for holomorphic curves
Masaki Tsukamoto; 33-68
On canonical modules of toric face rings
Bogdan Ichim and Tim Römer; 69-90
The absolute Galois group of the field of totally $S$-adic numbers
Dan Haran, Moshe Jarden and Florian Pop; 91-147
$C^{\infty}$-convergence of circle patterns to minimal surfaces
Dao-Qing Dai and Shi-Yi Lan; 149-167
Canonical bases of Borcherds-Cartan type
Yiqiang Li and Zongzhu Lin; 169-193
|
Title | Authors / Editors | Date
Measurement of the exclusive γγ → μ + μ − process in proton–proton collisions at √s = 13 TeV with the ATLAS detector ATLAS Collaboration 2018
Search for diboson resonances with boson-tagged jets in pp collisions at √s = 13 TeV with the ATLAS detector ATLAS Collaboration 2018
Measurement of the cross-section for producing a W boson in association with a single top quark in pp collisions at √s = 13 TeV with ATLAS ATLAS Collaboration 2018
Search for additional heavy neutral Higgs and gauge bosons in the ditau final state produced in 36 fb −1 of pp collisions at √s = 13 TeV with the ATLAS detector ATLAS Collaboration 2018
Search for heavy resonances decaying into WW in the eνμν final state in pp collisions at √s = 13 TeV with the ATLAS detector ATLAS Collaboration 2018
Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector ATLAS Collaboration 2018
Measurement of τ polarisation in Z/γ∗→ττ decays in proton–proton collisions at √s = 8 TeV with the ATLAS detector ATLAS Collaboration 2018
Search for the direct production of charginos and neutralinos in final states with tau leptons in √s = 13 TeV pp collisions with the ATLAS detector ATLAS Collaboration 2018
Measurement of longitudinal flow decorrelations in Pb+Pb collisions at √s_NN = 2.76 and 5.02 TeV with the ATLAS detector ATLAS Collaboration 2018
Direct top-quark decay width measurement in the tt̄ lepton+jets channel at √s = 8 TeV with the ATLAS experiment ATLAS Collaboration 2018
Measurement of long-range multiparticle azimuthal correlations with the subevent cumulant method in pp and p+Pb collisions with the ATLAS detector at the CERN Large Hadron Collider ATLAS Collaboration 2018
ZZ→ℓ+ℓ−ℓ′+ℓ′− cross-section measurements and search for anomalous triple gauge couplings in 13 TeV pp collisions with the ATLAS detector ATLAS Collaboration 2018
Search for B−L R-parity-violating top squarks in √s = 13 TeV pp collisions with the ATLAS experiment ATLAS Collaboration 2018
Measurement of the W -boson mass in pp collisions at √s = 7 TeV with the ATLAS detector ATLAS Collaboration 2018
Search for new phenomena in high-mass final states with a photon and a jet from pp collisions at √s = 13 TeV with the ATLAS detector ATLAS Collaboration 2018
Measurements of top quark spin observables in tt̄ events using dilepton final states in √s = 8 TeV pp collisions with the ATLAS detector ATLAS Collaboration 2017
Electron efficiency measurements with the ATLAS detector using 2012 LHC proton–proton collision data ATLAS Collaboration 2017
Search for new phenomena in events containing a same-flavour opposite-sign dilepton pair, jets, and large missing transverse momentum in √s = 13 TeV pp collisions with the ATLAS detector ATLAS Collaboration 2017
Search for triboson W±W±W∓ production in pp collisions at √s = 8 TeV with the ATLAS detector ATLAS Collaboration 2017
Probing the W tb vertex structure in t -channel single-top-quark production and decay in pp collisions at √s = 8 TeV with the ATLAS detector ATLAS Collaboration 2017
|
# Functions and Methods on Strings¶
Strings are a deceptively broad topic. We'll cover only the highlights in this section, and I will point you to sources of more information so that you can be familiar with what strings have to offer and avoid re-inventing the wheel.
## "Secret Codes"¶
Everything in computers relates back to binary ('0's and '1's), as that is how information is stored and processed. Computers don't inherently know what text is, much less English text. A standard needed to be created to map the letters and characters of a keyboard to their numeric binary equivalents.
Let's start with a string, below. The odd upper- and lower-casing is intentional.
In [16]:
my_str = "FoOBar"
Using the ord function, we can get the "ordinal" or ranking number of a character in the ASCII table. The letter 'F' is mapped to the decimal (base 10) number 70, which is equivalent to the binary (base 2) number 1000110. This binary representation is how the 'F' character is stored in your text file and in memory. We just built a system of mapped representations so humans can type and see the letter 'F' without worrying about typing "1000110". That's the ASCII standard.
In [17]:
ord(my_str[0])
Out[17]:
70
The bin() function is used to turn a base-10 integer (decimal) into a base-2 integer (binary).
In [18]:
bin(ord(my_str[0]))
Out[18]:
'0b1000110'
The paired function to ord() is the chr() function. When given an integer, it returns the character that is mapped to that integer. I can provide a decimal or binary integer for that (binary integers are a series of '1's and '0's prepended with '0b'; see below).
In [19]:
chr(70)
Out[19]:
'F'
In [20]:
chr(0b1000110)
Out[20]:
'F'
Let's look at all the characters in the string converted to their ASCII number in decimal and binary.
In [21]:
for c in my_str:
print(c, ord(c), bin(ord(c)))
F 70 0b1000110
o 111 0b1101111
O 79 0b1001111
B 66 0b1000010
a 97 0b1100001
r 114 0b1110010
Speaking of binary, Futurama is a great TV show. Bender uses a "binary Time Code" to travel through time in Bender's Big Score.
## String Methods¶
Every string you use is, again, an object. Not only do you have the string of characters that make up the string, but the useful methods (functions) included with it to act upon this string.
How do you know what's available to you? Use the online documentation at http://python.org, or the very helpful help() function built right into the interpreter. The help() function can take any literal or identifier (since they all refer to an object) and provide helpful documentation about that object's data type.
Note: Well, actually, in Python 3 you can't give a string object for help, you have to provide the class (str) instead. I'm not sure why this is the case, but it's the one outlier I've found.
In [22]:
help(str)
Help on class str in module builtins:
class str(object)
| str(object='') -> str
| str(bytes_or_buffer[, encoding[, errors]]) -> str
|
| Create a new string object from the given object. If encoding or
| errors is specified, then the object must expose a data buffer
| that will be decoded using the given encoding and error handler.
| Otherwise, returns the result of object.__str__() (if defined)
| or repr(object).
| encoding defaults to sys.getdefaultencoding().
| errors defaults to 'strict'.
|
| Methods defined here:
|
| __add__(self, value, /)
| Return self+value.
|
| __contains__(self, key, /)
| Return key in self.
|
| __eq__(self, value, /)
| Return self==value.
|
| __format__(...)
| S.__format__(format_spec) -> str
|
| Return a formatted version of S as described by format_spec.
|
| __ge__(self, value, /)
| Return self>=value.
|
| __getattribute__(self, name, /)
| Return getattr(self, name).
|
| __getitem__(self, key, /)
| Return self[key].
|
| __getnewargs__(...)
|
| __gt__(self, value, /)
| Return self>value.
|
| __hash__(self, /)
| Return hash(self).
|
| __iter__(self, /)
| Implement iter(self).
|
| __le__(self, value, /)
| Return self<=value.
|
| __len__(self, /)
| Return len(self).
|
| __lt__(self, value, /)
| Return self<value.
|
| __mod__(self, value, /)
| Return self%value.
|
| __mul__(self, value, /)
| Return self*value.
|
| __ne__(self, value, /)
| Return self!=value.
|
| __new__(*args, **kwargs) from builtins.type
| Create and return a new object. See help(type) for accurate signature.
|
| __repr__(self, /)
| Return repr(self).
|
| __rmod__(self, value, /)
| Return value%self.
|
| __rmul__(self, value, /)
| Return self*value.
|
| __sizeof__(...)
| S.__sizeof__() -> size of S in memory, in bytes
|
| __str__(self, /)
| Return str(self).
|
| capitalize(...)
| S.capitalize() -> str
|
| Return a capitalized version of S, i.e. make the first character
| have upper case and the rest lower case.
|
| casefold(...)
| S.casefold() -> str
|
| Return a version of S suitable for caseless comparisons.
|
| center(...)
| S.center(width[, fillchar]) -> str
|
| Return S centered in a string of length width. Padding is
| done using the specified fill character (default is a space)
|
| count(...)
| S.count(sub[, start[, end]]) -> int
|
| Return the number of non-overlapping occurrences of substring sub in
| string S[start:end]. Optional arguments start and end are
| interpreted as in slice notation.
|
| encode(...)
| S.encode(encoding='utf-8', errors='strict') -> bytes
|
| Encode S using the codec registered for encoding. Default encoding
| is 'utf-8'. errors may be given to set a different error
| handling scheme. Default is 'strict' meaning that encoding errors raise
| a UnicodeEncodeError. Other possible values are 'ignore', 'replace' and
| 'xmlcharrefreplace' as well as any other name registered with
| codecs.register_error that can handle UnicodeEncodeErrors.
|
| endswith(...)
| S.endswith(suffix[, start[, end]]) -> bool
|
| Return True if S ends with the specified suffix, False otherwise.
| With optional start, test S beginning at that position.
| With optional end, stop comparing S at that position.
| suffix can also be a tuple of strings to try.
|
| expandtabs(...)
| S.expandtabs(tabsize=8) -> str
|
| Return a copy of S where all tab characters are expanded using spaces.
| If tabsize is not given, a tab size of 8 characters is assumed.
|
| find(...)
| S.find(sub[, start[, end]]) -> int
|
| Return the lowest index in S where substring sub is found,
| such that sub is contained within S[start:end]. Optional
| arguments start and end are interpreted as in slice notation.
|
| Return -1 on failure.
|
| format(...)
| S.format(*args, **kwargs) -> str
|
| Return a formatted version of S, using substitutions from args and kwargs.
| The substitutions are identified by braces ('{' and '}').
|
| format_map(...)
| S.format_map(mapping) -> str
|
| Return a formatted version of S, using substitutions from mapping.
| The substitutions are identified by braces ('{' and '}').
|
| index(...)
| S.index(sub[, start[, end]]) -> int
|
| Return the lowest index in S where substring sub is found,
| such that sub is contained within S[start:end]. Optional
| arguments start and end are interpreted as in slice notation.
|
| Raises ValueError when the substring is not found.
|
| isalnum(...)
| S.isalnum() -> bool
|
| Return True if all characters in S are alphanumeric
| and there is at least one character in S, False otherwise.
|
| isalpha(...)
| S.isalpha() -> bool
|
| Return True if all characters in S are alphabetic
| and there is at least one character in S, False otherwise.
|
| isdecimal(...)
| S.isdecimal() -> bool
|
| Return True if there are only decimal characters in S,
| False otherwise.
|
| isdigit(...)
| S.isdigit() -> bool
|
| Return True if all characters in S are digits
| and there is at least one character in S, False otherwise.
|
| isidentifier(...)
| S.isidentifier() -> bool
|
| Return True if S is a valid identifier according
| to the language definition.
|
| Use keyword.iskeyword() to test for reserved identifiers
| such as "def" and "class".
|
| islower(...)
| S.islower() -> bool
|
| Return True if all cased characters in S are lowercase and there is
| at least one cased character in S, False otherwise.
|
| isnumeric(...)
| S.isnumeric() -> bool
|
| Return True if there are only numeric characters in S,
| False otherwise.
|
| isprintable(...)
| S.isprintable() -> bool
|
| Return True if all characters in S are considered
| printable in repr() or S is empty, False otherwise.
|
| isspace(...)
| S.isspace() -> bool
|
| Return True if all characters in S are whitespace
| and there is at least one character in S, False otherwise.
|
| istitle(...)
| S.istitle() -> bool
|
| Return True if S is a titlecased string and there is at least one
| character in S, i.e. upper- and titlecase characters may only
| follow uncased characters and lowercase characters only cased ones.
| Return False otherwise.
|
| isupper(...)
| S.isupper() -> bool
|
| Return True if all cased characters in S are uppercase and there is
| at least one cased character in S, False otherwise.
|
| join(...)
| S.join(iterable) -> str
|
| Return a string which is the concatenation of the strings in the
| iterable. The separator between elements is S.
|
| ljust(...)
| S.ljust(width[, fillchar]) -> str
|
| Return S left-justified in a Unicode string of length width. Padding is
| done using the specified fill character (default is a space).
|
| lower(...)
| S.lower() -> str
|
| Return a copy of the string S converted to lowercase.
|
| lstrip(...)
| S.lstrip([chars]) -> str
|
| Return a copy of the string S with leading whitespace removed.
| If chars is given and not None, remove characters in chars instead.
|
| partition(...)
| S.partition(sep) -> (head, sep, tail)
|
| Search for the separator sep in S, and return the part before it,
| the separator itself, and the part after it. If the separator is not
| found, return S and two empty strings.
|
| replace(...)
| S.replace(old, new[, count]) -> str
|
| Return a copy of S with all occurrences of substring
| old replaced by new. If the optional argument count is
| given, only the first count occurrences are replaced.
|
| rfind(...)
| S.rfind(sub[, start[, end]]) -> int
|
| Return the highest index in S where substring sub is found,
| such that sub is contained within S[start:end]. Optional
| arguments start and end are interpreted as in slice notation.
|
| Return -1 on failure.
|
| rindex(...)
| S.rindex(sub[, start[, end]]) -> int
|
| Return the highest index in S where substring sub is found,
| such that sub is contained within S[start:end]. Optional
| arguments start and end are interpreted as in slice notation.
|
| Raises ValueError when the substring is not found.
|
| rjust(...)
| S.rjust(width[, fillchar]) -> str
|
| Return S right-justified in a string of length width. Padding is
| done using the specified fill character (default is a space).
|
| rpartition(...)
| S.rpartition(sep) -> (head, sep, tail)
|
| Search for the separator sep in S, starting at the end of S, and return
| the part before it, the separator itself, and the part after it. If the
| separator is not found, return two empty strings and S.
|
| rsplit(...)
| S.rsplit(sep=None, maxsplit=-1) -> list of strings
|
| Return a list of the words in S, using sep as the
| delimiter string, starting at the end of the string and
| working to the front. If maxsplit is given, at most maxsplit
| splits are done. If sep is not specified, any whitespace string
| is a separator.
|
| rstrip(...)
| S.rstrip([chars]) -> str
|
| Return a copy of the string S with trailing whitespace removed.
| If chars is given and not None, remove characters in chars instead.
|
| split(...)
| S.split(sep=None, maxsplit=-1) -> list of strings
|
| Return a list of the words in S, using sep as the
| delimiter string. If maxsplit is given, at most maxsplit
| splits are done. If sep is not specified or is None, any
| whitespace string is a separator and empty strings are
| removed from the result.
|
| splitlines(...)
| S.splitlines([keepends]) -> list of strings
|
| Return a list of the lines in S, breaking at line boundaries.
| Line breaks are not included in the resulting list unless keepends
| is given and true.
|
| startswith(...)
| S.startswith(prefix[, start[, end]]) -> bool
|
| Return True if S starts with the specified prefix, False otherwise.
| With optional start, test S beginning at that position.
| With optional end, stop comparing S at that position.
| prefix can also be a tuple of strings to try.
|
| strip(...)
| S.strip([chars]) -> str
|
| Return a copy of the string S with leading and trailing
| whitespace removed.
| If chars is given and not None, remove characters in chars instead.
|
| swapcase(...)
| S.swapcase() -> str
|
| Return a copy of S with uppercase characters converted to lowercase
| and vice versa.
|
| title(...)
| S.title() -> str
|
| Return a titlecased version of S, i.e. words start with title case
| characters, all remaining cased characters have lower case.
|
| translate(...)
| S.translate(table) -> str
|
| Return a copy of the string S in which each character has been mapped
| through the given translation table. The table must implement
| lookup/indexing via __getitem__, for instance a dictionary or list,
| mapping Unicode ordinals to Unicode ordinals, strings, or None. If
| this operation raises LookupError, the character is left untouched.
| Characters mapped to None are deleted.
|
| upper(...)
| S.upper() -> str
|
| Return a copy of S converted to uppercase.
|
| zfill(...)
| S.zfill(width) -> str
|
| Pad a numeric string S with zeros on the left, to fill a field
| of the specified width. The string S is never truncated.
|
| ----------------------------------------------------------------------
| Static methods defined here:
|
| maketrans(x, y=None, z=None, /)
| Return a translation table usable for str.translate().
|
| If there is only one argument, it must be a dictionary mapping Unicode
| ordinals (integers) or characters to Unicode ordinals, strings or None.
| Character keys will be then converted to ordinals.
| If there are two arguments, they must be strings of equal length, and
| in the resulting dictionary, each character in x will be mapped to the
| character at the same position in y. If there is a third argument, it
| must be a string, whose characters will be mapped to None in the result.
As you can see, there are many methods for manipulating strings. Please be aware that "manipulation" of a string means creating a new string from an existing one, as strings are immutable (unchangeable) after they have been created.
For example, what if you want to capitalize all the characters in the string my_str?
You would use this form: my_str.upper(). This does not modify the string my_str! It creates a new string from that data, so you need to assign the new string to a variable or something to make it useful, e.g.:
In [23]:
upper_str = my_str.upper()
print(my_str, upper_str)
FoOBar FOOBAR
### .join()¶
Another immediately useful method is .join(). This string method joins together a list of strings (or a string of characters), using the string you call .join() on as the separator.
For example, this example really fubars everything by joining the characters of "Spam" with my_str as the separator:
In [24]:
my_str.join("Spam")
Out[24]:
'SFoOBarpFoOBaraFoOBarm'
This example is more practical and in line with what I would use as a programmer. Given a list of strings, ["Steve", "Jobs"], I use the string " " (a single space character) to join together that list of strings into one newly returned string "Steve Jobs".
In [25]:
" ".join(["Steve", "Jobs"])
Out[25]:
'Steve Jobs'
In [26]:
space_chr = " "
space_chr.join(["Steve", "Jobs"])
Out[26]:
'Steve Jobs'
In [27]:
space_chr.join(["James", "Tiberius", "Kirk"])
Out[27]:
'James Tiberius Kirk'
In [28]:
" ".join("SteveJobs")
Out[28]:
'S t e v e J o b s'
### .split()¶
The opposite of .join() is .split(). This takes a string and breaks it up into smaller strings. This method returns a list of strings as a result.
By default, if no parameter is provided to .split(), it breaks strings up anywhere "whitespace" occurs. This is typically the following characters:
• space
• tab
• newline
• carriage return
• A couple others…
Usually, you use this default form to break up words separated by spaces (word boundaries).
In [29]:
# We can use the helper module string to see what Python defines as 'whitespace'.
import string
string.whitespace
Out[29]:
' \t\n\r\x0b\x0c'
In [30]:
crazy = 'S t e v e J o b s'
crazy.split()
Out[30]:
['S', 't', 'e', 'v', 'e', 'J', 'o', 'b', 's']
In [31]:
name = 'Steve Jobs'
name.split()
Out[31]:
['Steve', 'Jobs']
In [32]:
name = 'Steve_Jobs'
name.split()
Out[32]:
['Steve_Jobs']
However, .split() can be quite flexible with the characters used for breaking up a string. You can provide a string of one or more characters that will break up the string; however, note that if you said:
name.split("*_|")
You would expect that the string name contained the character sequence "*_|" separating the pieces, e.g. Steve*_|Jobs. In other words, when you provide characters to .split(), it splits on the entire sequence, not on the individual characters in that sequence.
In [33]:
name = 'Steve_Jobs'
name.split("_")
Out[33]:
['Steve', 'Jobs']
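To see the whole-sequence behavior in action, a quick sketch (not from the original notebook):
name = 'Steve*_|Jobs'
name.split("*_|")   # splits on the entire sequence "*_|"
# ['Steve', 'Jobs']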
Please review the full manifest of string methods and be familiar with the general use of them!
|
Significant Result in Levene's Test
I am very confused right now. I ran Levene's test on my data and got a p-value of 0.000, meaning that variances are very heterogeneous. I transformed the data but no method can make them homogeneous. So that's it, ANOVA would be inappropriate to use. I was thinking that my data could be nonparametric so Kruskal-Wallis would be the best test to use. However, when I tried testing the data for Levene's test for nonparametric data, I still got a significant result.
In short, my data failed the Levene's test for both parametric and non-parametric assumption. What do I do now?
None of this follows ineluctably from the evidence you give.
I ran Levene's test on my data and got a p-value of 0.000, meaning that variances are very heterogeneous.
Possibly so, possibly not. The result is highly significant, but that may just mean that you have a large enough sample size to allow firm rejection of the null. It could be that the difference in variances is not fatal to ANOVA.
I transformed the data but no method can make them homogeneous.
Possibly so, possibly not. We can't tell without looking at the data and hearing what you tried. Perhaps you missed out a transformation that would help. (I've seen people try transformations that make their problem worse; that need not be you, but you don't give enough detail for us to be sure.)
So that's it, ANOVA would be inappropriate to use. I was thinking that my data could be nonparametric so Kruskal-Wallis would be the best test to use. However, when I tried testing the data for Levene's test for nonparametric data, I still got a significant result.
Data are not parametric or non-parametric, only techniques are. That's a misuse of terminology. See notably @Glen_b's answer here. More crucially, I don't know what "Levene's test for nonparametric data" means. What makes you think that Kruskal-Wallis requires any such prior test?
I'd recommend that you back up and show us your data, or at least informative graphs, and tell us what interests you about them.
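For reference, both tests mentioned in this thread are one-liners in scipy; a minimal sketch with made-up groups a, b, c of unequal spread:
import numpy as np
from scipy.stats import levene, kruskal

rng = np.random.default_rng(1)
a, b, c = rng.normal(0, 1, 200), rng.normal(0, 2, 200), rng.normal(0, 3, 200)

print(levene(a, b, c))    # tests equality of variances (heterogeneous here)
print(kruskal(a, b, c))   # rank-based comparison of the three groups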
|
Notes On Inequalities in a Triangle - CBSE Class 9 Maths
A triangle has six parts, namely three sides and three angles. If two sides of a triangle are equal then the angles opposite to them are also equal, and vice versa. In a triangle, the ordering of the angles matches the ordering of the lengths of the opposite sides. If two sides of a triangle are unequal, the angle opposite to the longer side is larger. In any triangle, the side opposite to the larger angle is longer. The sum of any two sides of a triangle is greater than the third side. If a, b and c are three sides of a triangle, then a + b > c, b + c > a, c + a > b.
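The side conditions translate directly into a check (a small sketch, not part of the original notes):
def is_triangle(a, b, c):
    # all three triangle inequalities must hold strictly
    return a + b > c and b + c > a and c + a > b

print(is_triangle(3, 4, 5))   # True
print(is_triangle(1, 2, 3))   # False: 1 + 2 is not greater than 3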
|
## Friday, March 13, 2015
### Kiel Like Total Pressure Probe
Video 1. Video Snippet of CFD Simulation
In this previous article we introduced Kiel total pressure probes. Within this post we will examine a basic design approach for the Kiel probe sizing. This probe is intended for total pressure measurement, necessary for airspeed calculations. From now on we will assume that airspeeds are well below 0.3 times the speed of sound, so it's safe to say that we are not facing any remarkable compressibility effects. Air density is denoted with $$\rho$$ and relative airspeed is $$V$$.
Let's start with a review of the total pressure probe described in NACA-TN2530, where you can find the section view visible in the following figure. After CFD simulation with the NACA design we will propose and analyze a custom design.
Total pressure $$P_t=P_s+q+\rho gz$$ is a sum of three terms. The first term corresponds to the static pressure, the second is the dynamic pressure $$q=\frac{1}{2}\rho V^2$$ and the third accounts for gravity effects. If we assume that our flow is adiabatic and incompressible, then in our case the Bernoulli's principle holds quite well and along the streamlines $$V^2/2+gz+P_s/ \rho\ =constant$$.
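In code, the incompressible relation above inverts directly to give airspeed from a pressure reading (a sketch with the gravity term neglected; the sea-level density and the sample pressures are illustrative):
import math

def airspeed_from_pressures(p_total, p_static, rho=1.225):
    q = p_total - p_static          # dynamic pressure, Pa
    return math.sqrt(2.0 * q / rho)

print(airspeed_from_pressures(101340.6, 101325.0))   # about 5 m/s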
If we add, only to drop it immediately afterwards, the inviscid flow hypothesis, the total pressure does not depend on the geometry of the probe; every point inside the outer tube has the same total pressure. The probe is composed of two main parts: the external shield and the enclosed total pressure port. In the next figures, in the direction from upstream to downstream, you can see the inlet convergent conical nozzle, the cylindrical throat, the divergent nozzle section and the outlet holes. The total pressure tap sits on the centerline of the inlet nozzle. The outlet is composed of 24 holes of 6.35 mm diameter, with a total outlet area of 7.6e-4 $$m^2$$. Inlet area is 5.1e-4 $$m^2$$. The outlet section has 50% more area than the inlet section, which helps minimize the downstream blockage effects.
Flow is axial, along the major tube axis. Mass is conserved across any section $$i$$ of the probe of area $$A_i$$, and the mass flow rate $$\dot{m}=\rho VA_i$$ is constant. Air enters the shield and accelerates until it reaches the maximum speed at the throat, where the minimum cross-section area is found, then decelerates through the divergent exit cone and finally exits the probe radially. Pressure is maximum at the inlet, decreases to its minimum value at the throat and then rises progressively through the divergent section. To evaluate the design performance a CFD approach will be used. In Figure 2 you can find the 3D model of the NACA probe ready to be processed by the Salome mesher. The cylindrical section and the trailing cone were added to obtain a smooth flow stream.
Figure 2a. 3D Model of Probe Under Test, Dimensions from Figure 1
Figure 2b. First-Guess Mesh. Shown to Visualize the Internal Geometry of the Probe.
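Returning to the continuity relation above: with the article's inlet and outlet areas it can be checked numerically, and the exit velocity comes out at roughly two thirds of the inlet velocity (a sketch, assuming incompressible flow):
def section_speed(v_inlet, a_inlet, a_section):
    # rho * V * A constant  =>  V_section = V_inlet * A_inlet / A_section
    return v_inlet * a_inlet / a_section

print(section_speed(5.0, 5.1e-4, 7.6e-4))   # ~3.4 m/s at the outlet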
All the simulations are conducted in a 3D domain, but for better visual representation many results are presented as 2D slices. The intention behind these simulations is to better understand the behavior of the probe under different angles of attack, not to verify the NACA results. In case of doubt, it is better to run the simulation at the same conditions using two or more meshes. Usually a good solution doesn't change with the use of two well-designed meshes.
We simulated the probe at zero angle of attack and obtained a total pressure value of 101340.1 Pa. The actual free-stream total pressure is 101340.6 Pa, so at 0 degrees the probe is performing well. Simulation-related problems apart, since we don't know some dimensions or details of the NACA probe there will be a certain baseline difference between our results and those of the NACA report. It is also worth noting that the magnitude of the relative wind for all our simulations is 5 m/s.
Figure 3a Total Pressure Probe at 0 degrees AOA
Figure 3b Velocity at 10 degrees AOA
Figure 3c Velocity at 30 degrees AoA. Total Pressure Value at 101340.40Pa
Figure 3d Velocity at 60 degrees AOA. Total Pressure Value = 101329.5Pa. According to definition NACA Error = 0.79.
The simulated probe is expected to work well up to about 40 degrees and this behaviour was verified with CFD. Probe measurements started to diverge from their ideal values at 35 degrees. You can find a limited subset of the results depicted in Figure 3.
Now that we are familiar with the probe design, let's address some issues. The position of the pressure tap is not arbitrary and it will have an impact on the probe's overall performance. Referring to Figure 3, at high angles of attack a significant wake will appear, which also interacts with the head of the probe. With that kind of effect in mind, we may prefer placing the pressure tap near the probe throat, in order to mitigate issues at the probe inlet. However, by inspection of Table I of the reference, "Tube 2-a" and "Tube 2-b" in particular, we see a contradicting indication. The table highlights the fact that, in terms of maximum angle of attack, the probe with the pressure port placed nearest the probe entry performs better. The stated Mach number for the lab test is 0.26. In such a regime our initial assumptions do not hold: the flow is compressible (air accelerates inside the convergent section), hence we are reading a table that does not address our design approach. The behavior of the Kiel probe at low airspeed will be different, so we should validate the impact of the pressure tap position at low speeds ourselves. Consider also the boundary layer: the farther you retract the total pressure port inside the shield, the closer the tap is to the walls.
Another important aspect is the shape of the inlet. The simpler conical geometry seems to perform worse compared to curved inlets. The shape of the inlet modifies the velocity profile, and a curved inlet will produce a smoother airspeed transition. With an elliptic nozzle, the area reduction gradient is greatest at the inlet and progressively decreases to zero at the throat. In this way, the velocity gradient is kept low where the velocity magnitude is at its maximum. At very low airspeeds the probe can manifest measurement issues related to the boundary layer; that effect needs a dedicated analysis.
Another aspect to consider is the divergent section length. In this section the pressure should grow from its minimum value to the exit value. The smoother the transition, the better the pressure recovery. At the exit cone the air is flowing from a low pressure zone to a high pressure zone, hence separation of the flow from the internal wall should be expected. Although we cannot use the Venturi tube formulas to predict the pressure loss as a function of the diverging cone angle, it is worth observing that the angles proposed for diverging cones in various design standards are between 5° and 15°. That range provides a valid initial estimate for the area gradient in our probe design.
Regarding the exhaust holes, they should be placed in a position that minimizes the internal pressure variation at high angles of attack. It is expected that at high AoA, an airflow will be established between the holes placed upstream and the holes placed downstream and the impact of this flow on probe performance should be investigated.
In this article we have become familiar with the total pressure probe and highlighted some key design points. In the next article we will present some results regarding the proposed BAD probe.
Figure 4. Generic Design of BasicAirData probe. Dimensions are not Definitive.
|
Simplify: $\cos\theta\begin{bmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{bmatrix}+\sin\theta\begin{bmatrix}\sin\theta & -\cos\theta \\ \cos\theta & \sin\theta\end{bmatrix}$
Text Solution
Solution: $\cos\theta\begin{bmatrix}\cos\theta & \sin\theta \\ -\sin\theta & \cos\theta\end{bmatrix}+\sin\theta\begin{bmatrix}\sin\theta & -\cos\theta \\ \cos\theta & \sin\theta\end{bmatrix}$
$=\begin{bmatrix}\cos^{2}\theta & \sin\theta\cos\theta \\ -\sin\theta\cos\theta & \cos^{2}\theta\end{bmatrix}+\begin{bmatrix}\sin^{2}\theta & -\sin\theta\cos\theta \\ \sin\theta\cos\theta & \sin^{2}\theta\end{bmatrix}$
$=\begin{bmatrix}\cos^{2}\theta+\sin^{2}\theta & \sin\theta\cos\theta-\sin\theta\cos\theta \\ -\sin\theta\cos\theta+\sin\theta\cos\theta & \cos^{2}\theta+\sin^{2}\theta\end{bmatrix}=\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}$
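A quick symbolic check of the result (a sketch using sympy, not part of the original solution):
import sympy as sp

t = sp.symbols('theta')
A = sp.Matrix([[sp.cos(t), sp.sin(t)], [-sp.sin(t), sp.cos(t)]])
B = sp.Matrix([[sp.sin(t), -sp.cos(t)], [sp.cos(t), sp.sin(t)]])
print(sp.simplify(sp.cos(t) * A + sp.sin(t) * B))   # Matrix([[1, 0], [0, 1]])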
Transcript
The given question is to simplify cos θ times the matrix [cos θ, sin θ; −sin θ, cos θ] plus sin θ times the matrix [sin θ, −cos θ; cos θ, sin θ]. Multiplying the scalars through: cos θ into cos θ is cos²θ, cos θ into sin θ is sin θ cos θ, and likewise for the second matrix, where sin θ into sin θ is sin²θ and sin θ into cos θ is sin θ cos θ.
Adding the two matrices entry by entry, the diagonal entries are cos²θ + sin²θ, while the off-diagonal entries are sin θ cos θ − sin θ cos θ = 0 and −sin θ cos θ + sin θ cos θ = 0. From the trigonometric identity sin²θ + cos²θ = 1, every diagonal entry equals 1, so the result is the matrix [1, 0; 0, 1]. Therefore the simplification of the given expression is the identity matrix.
|
# Fight Finance
The theory of fixed interest bond pricing is an application of the theory of Net Present Value (NPV). Also, a 'fairly priced' asset is not over- or under-priced. Buying or selling a fairly priced asset has an NPV of zero.
Considering this, which of the following statements is NOT correct?
A company selling charting and technical analysis software claims that independent academic studies have shown that its software makes significantly positive abnormal returns. Assuming the claim is true, which statement(s) are correct?
(I) Weak form market efficiency is broken.
(II) Semi-strong form market efficiency is broken.
(III) Strong form market efficiency is broken.
(IV) The asset pricing model used to measure the abnormal returns (such as the CAPM) had mis-specification error so the returns may not be abnormal but rather fair for the level of risk.
Select the most correct response:
A person is thinking about borrowing $100 from the bank at 7% pa and investing it in shares with an expected return of 10% pa. One year later the person will sell the shares and pay back the loan in full. Both the loan and the shares are fairly priced. What is the Net Present Value (NPV) of this one year investment? Note that you are asked to find the present value ($V_0$), not the value in one year ($V_1$).

Your friend claims that by reading 'The Economist' magazine's economic news articles, she can identify shares that will have positive abnormal expected returns over the next 2 years. Assuming that her claim is true, which statement(s) are correct?
(i) Weak form market efficiency is broken.
(ii) Semi-strong form market efficiency is broken.
(iii) Strong form market efficiency is broken.
(iv) The asset pricing model used to measure the abnormal returns (such as the CAPM) is either wrong (mis-specification error) or is measured using the wrong inputs (data errors) so the returns may not be abnormal but rather fair for the level of risk.
Select the most correct response:

Suppose that the US government recently announced that subsidies for fresh milk producers will be gradually phased out over the next year. Newspapers say that there are expectations of a 40% increase in the spot price of fresh milk over the next year. Option prices on fresh milk trading on the Chicago Mercantile Exchange (CME) reflect expectations of this 40% increase in spot prices over the next year. Similarly to the rest of the market, you believe that prices will rise by 40% over the next year. What option trades are likely to be profitable, or to be more specific, result in a positive Net Present Value (NPV)? Assume that:
• Only the spot price is expected to increase and there is no change in expected volatility or other variables that affect option prices.
• No taxes, transaction costs, information asymmetry, bid-ask spreads or other market frictions.

A very low-risk stock just paid its semi-annual dividend of $0.14, as it has for the last 5 years. You conservatively estimate that from now on the dividend will fall at a rate of 1% every 6 months.
If the stock currently sells for $3 per share, what must be its required total return as an effective annual rate? If risk free government bonds are trading at a yield of 4% pa, given as an effective annual rate, would you consider buying or selling the stock? The stock's required total return is:

Select the most correct statement from the following. 'Chartists', also known as 'technical traders', believe that:

Fundamentalists who analyse company financial reports and news announcements (but who don't have inside information) will make positive abnormal returns if:

A man inherits $500,000 worth of shares.
He believes that by learning the secrets of trading, keeping up with the financial news and doing complex trend analysis with charts that he can quit his job and become a self-employed day trader in the equities markets.
What is the expected gain from doing this over the first year? Measure the net gain in wealth received at the end of this first year due to the decision to become a day trader. Assume the following:
• He earns $60,000 pa in his current job, paid in a lump sum at the end of each year.
• He enjoys examining share price graphs and day trading just as much as he enjoys his current job.
• Stock markets are weak form and semi-strong form efficient.
• He has no inside information.
• He makes 1 trade every day and there are 250 trading days in the year. Trading costs are $20 per trade. His broker invoices him for the trading costs at the end of the year.
• The shares that he currently owns and the shares that he intends to trade have the same level of systematic risk as the market portfolio.
• The market portfolio's expected return is 10% pa.
Measure the net gain over the first year as an expected wealth increase at the end of the year.
Economic statistics released this morning were a surprise: they show a strong chance of consumer price inflation (CPI) reaching 5% pa over the next 2 years.
This is much higher than the previous forecast of 3% pa.
A vanilla fixed-coupon 2-year risk-free government bond was issued at par this morning, just before the economic news was released.
What is the expected change in bond price after the economic news this morning, and in the next 2 years? Assume that:
• Inflation remains at 5% over the next 2 years.
• Investors demand a constant real bond yield.
• The bond price falls by the (after-tax) value of the coupon the night before the ex-coupon date, as in real life.
A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the start-of-year amount, but it is paid at the end of every year.
This fee is charged regardless of whether the fund makes gains or losses on your money.
The fund offers to invest your money in shares which have an expected return of 10% pa before fees.
You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire. What is the Net Present Value (NPV) of investing your money in the fund? Note that the question is not asking how much money you will have in 40 years, it is asking: what is the NPV of investing in the fund? Assume that:
• The fund has no private information.
• Markets are weak and semi-strong form efficient.
• The fund's transaction costs are negligible.
• The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible.

A residential real estate investor believes that house prices will grow at a rate of 5% pa and that rents will grow by 2% pa forever. All rates are given as nominal effective annual returns. Assume that:
• His forecast is true.
• Real estate is and always will be fairly priced and the capital asset pricing model (CAPM) is true.
• Ignore all costs such as taxes, agent fees, maintenance and so on.
• All rental income cash flow is paid out to the owner, so there is no re-investment and therefore no additions or improvements made to the property.
• The non-monetary benefits of owning real estate and renting remain constant.

Which one of the following statements is NOT correct? Over time:

A managed fund charges fees based on the amount of money that you keep with them. The fee is 2% of the end-of-year amount, paid at the end of every year. This fee is charged regardless of whether the fund makes gains or losses on your money.

The fund offers to invest your money in shares which have an expected return of 10% pa before fees.

You are thinking of investing $100,000 in the fund and keeping it there for 40 years when you plan to retire.
How much money do you expect to have in the fund in 40 years? Also, what is the future value of the fees that the fund expects to earn from you? Give both amounts as future values in 40 years. Assume that:
• The fund has no private information.
• Markets are weak and semi-strong form efficient.
• The fund's transaction costs are negligible.
• The cost and trouble of investing your money in shares by yourself, without the managed fund, is negligible.
• The fund invests its fees in the same companies as it invests your funds in, but with no fees.
The below answer choices list your expected wealth in 40 years and then the fund's expected wealth in 40 years.
A fairly priced unlevered firm plans to pay a dividend of $1 next year (t=1) which is expected to grow by 3% pa every year after that. The firm's required return on equity is 8% pa. The firm is thinking about reducing its future dividend payments by 10% so that it can use the extra cash to invest in more projects which are expected to return 8% pa, and have the same risk as the existing projects. Therefore, next year's dividend will be $0.90. No new equity or debt will be issued to fund the new projects, they'll all be funded by the cut in dividends.
What will be the stock's new annual capital return (proportional increase in price per year) if the change in payout policy goes ahead?
Assume that payout policy is irrelevant to firm value (so there's no signalling effects) and that all rates are effective annual rates.
One year ago a pharmaceutical firm floated by selling its 1 million shares for $100 each. Its book and market values of equity were both $100m. Its debt totalled $50m. The required return on the firm's assets was 15%, equity 20% and debt 5% pa. In the year since then, the firm:
• Earned net income of $29m.
• Paid dividends totaling $10m.
• Discovered a valuable new drug that will lead to a massive 1,000 times increase in the firm's net income in 10 years after the research is commercialised. News of the discovery was publicly announced. The firm's systematic risk remains unchanged.

Which of the following statements is NOT correct? All statements are about current figures, not figures one year ago.

Hint: Book return on assets (ROA) and book return on equity (ROE) are ratios that accountants like to use to measure a business's past performance.
$$\text{ROA}= \dfrac{\text{Net income}}{\text{Book value of assets}}$$
$$\text{ROE}= \dfrac{\text{Net income}}{\text{Book value of equity}}$$
The required return on assets $r_V$ is a return that financiers like to use to estimate a business's future required performance which compensates them for the firm's assets' risks. If the business were to achieve realised historical returns equal to its required returns, then investment into the business's assets would have been a zero-NPV decision, which is neither good nor bad but fair.
$$r_\text{V, 0 to 1}= \dfrac{\text{Cash flow from assets}_\text{1}}{\text{Market value of assets}_\text{0}} = \dfrac{CFFA_\text{1}}{V_\text{0}}$$
Similarly for equity and debt.

A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Assume that there are no dividend payments so the entire 15% total return is all capital return.
Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% return lasts for the next 100 years (t=0 to 100), then reverts to 10% pa after that time? Also, what is the NPV of the investment if the 15% return lasts forever?
In both cases, assume that the required return of 10% remains constant. All returns are given as effective annual rates.
The answer choices below are given in the same order (15% for 100 years, and 15% forever):
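One way to frame the computation for the all-capital-return case (a sketch, not the site's official solution): the price grows at the expected 15% pa, and the eventual sale is discounted at the required 10% pa.
def npv_underpriced(price=1000.0, r_exp=0.15, r_req=0.10, years=100):
    sale = price * (1 + r_exp) ** years          # fair sale price at t = years
    return -price + sale / (1 + r_req) ** years  # discounted at the required return

print(npv_underpriced())   # large and positive; as the horizon grows without bound,
                           # (1.15/1.10)**years diverges, so the forever case is unbounded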
In general, stock prices tend to rise. What does this mean for futures on equity?
Which of the following statements about futures contracts on shares is NOT correct, assuming that markets are efficient?
When an equity future is first negotiated (at t=0):
The efficient markets hypothesis (EMH) and no-arbitrage pricing theory are most closely related to which of the following concepts?
A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Of the 15% pa total expected return, the dividend yield is expected to always be 7% pa and the rest is the capital yield. Assuming that the company's statements are correct, what is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time? Also, what is the NPV of the investment if the 15% return lasts forever? In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever):

Question 668: buy and hold, market efficiency, idiom

A quote from the famous investor Warren Buffet: "Much success can be attributed to inactivity. Most investors cannot resist the temptation to constantly buy and sell." Buffet is referring to the buy-and-hold strategy which is to buy and never sell shares. Which of the following is a disadvantage of a buy-and-hold strategy? Assume that share markets are semi-strong form efficient.

Which of the following is NOT an advantage of the strict buy-and-hold strategy? A disadvantage of the buy-and-hold strategy is that it reduces:

Which of the following statements about returns is NOT correct? A stock's:

A company advertises an investment costing $1,000 which they say is underpriced. They say that it has an expected total return of 15% pa, but a required return of only 10% pa. Of the 15% pa total expected return, the dividend yield is expected to be 4% pa and the capital yield 11% pa. Assume that the company's statements are correct.
What is the NPV of buying the investment if the 15% total return lasts for the next 100 years (t=0 to 100), then reverts to 10% after that time? Also, what is the NPV of the investment if the 15% return lasts forever?
In both cases, assume that the required return of 10% remains constant, the dividends can only be re-invested at 10% pa and all returns are given as effective annual rates. The answer choices below are given in the same order (15% for 100 years, and 15% forever):
The following quotes are most closely related to which financial concept?
• “Opportunity is missed by most people because it is dressed in overalls and looks like work” -Thomas Edison
• “The only place where success comes before work is in the dictionary” -Vidal Sassoon
• “The safest way to double your money is to fold it over and put it in your pocket” - Kin Hubbard
You work in Asia and just woke up. It looked like a nice day but then you read the news and found out that last night the American share market fell by 10% while you were asleep due to surprisingly poor macro-economic world news. You own a portfolio of liquid stocks listed in Asia with a beta of 1.6. When the Asian equity markets open, what do you expect to happen to your share portfolio? Assume that the capital asset pricing model (CAPM) is correct and that the market portfolio contains all shares in the world, of which American shares are a big part. Your portfolio beta is measured against this world market portfolio.
When the Asian equity market opens for trade, you would expect your portfolio value to:
Examine the graphs below. Assume that asset A is a single stock. Which of the following statements is NOT correct? Asset A:
The famous investor Warren Buffet is one of few portfolio managers who appears to have consistently beaten the market. His company Berkshire Hathaway (BRK) appears to have outperformed the US S&P500 market index, shown in the graph below.
Read the below statements about Warren Buffet and the implications for the Efficient Markets Hypothesis (EMH) theory of Eugene Fama. Assume that the first sentence is true. Analyse the second sentence and select the answer option which is NOT correct. In other words, find the false statement in the second sentence.
|
Acceleration stress-energy tensor
The acceleration stress-energy tensor is a symmetric four-dimensional tensor of second rank (valence) that describes the density and flux of the energy and momentum of the acceleration field in matter. In the covariant theory of gravitation this tensor enters the equation for determining the metric, along with the gravitational stress-energy tensor, the pressure stress-energy tensor, the dissipation stress-energy tensor and the stress-energy tensor of the electromagnetic field. The covariant derivative of the acceleration stress-energy tensor determines the density of the four-force acting on the matter.
Covariant theory of gravitation
Definition
In the covariant theory of gravitation (CTG) the acceleration field is not a scalar field but a 4-vector field, whose 4-potential consists of a scalar and a 3-vector component. In CTG the acceleration stress-energy tensor was defined by Fedosin through the acceleration tensor ${\displaystyle ~u_{ik}}$ and the metric tensor ${\displaystyle ~g^{ik}}$ by the principle of least action: [1]
${\displaystyle ~B^{ik}={\frac {c^{2}}{4\pi \eta }}\left(-g^{im}u_{nm}u^{nk}+{\frac {1}{4}}g^{ik}u_{mr}u^{mr}\right),}$
where ${\displaystyle ~\eta }$ is the acceleration field constant defined in terms of the fundamental constants and physical parameters of the system. Acceleration field is considered as a component of the general field.
Components of the acceleration stress-energy tensor
Since the acceleration tensor consists of the components of the acceleration field strength ${\displaystyle ~\mathbf {S} }$ and the solenoidal acceleration vector ${\displaystyle ~\mathbf {N} }$, the acceleration stress-energy tensor can be expressed through these components. In the limit of special relativity the metric tensor ceases to depend on the coordinates and time, and in this case the acceleration stress-energy tensor takes its simplest form:
${\displaystyle ~B^{ik}={\begin{vmatrix}\varepsilon _{a}&{\frac {K_{x}}{c}}&{\frac {K_{y}}{c}}&{\frac {K_{z}}{c}}\\cP_{ax}&\varepsilon _{a}-{\frac {S_{x}^{2}+c^{2}N_{x}^{2}}{4\pi \eta }}&-{\frac {S_{x}S_{y}+c^{2}N_{x}N_{y}}{4\pi \eta }}&-{\frac {S_{x}S_{z}+c^{2}N_{x}N_{z}}{4\pi \eta }}\\cP_{ay}&-{\frac {S_{x}S_{y}+c^{2}N_{x}N_{y}}{4\pi \eta }}&\varepsilon _{a}-{\frac {S_{y}^{2}+c^{2}N_{y}^{2}}{4\pi \eta }}&-{\frac {S_{y}S_{z}+c^{2}N_{y}N_{z}}{4\pi \eta }}\\cP_{az}&-{\frac {S_{x}S_{z}+c^{2}N_{x}N_{z}}{4\pi \eta }}&-{\frac {S_{y}S_{z}+c^{2}N_{y}N_{z}}{4\pi \eta }}&\varepsilon _{a}-{\frac {S_{z}^{2}+c^{2}N_{z}^{2}}{4\pi \eta }}\end{vmatrix}}.}$
The time-like components of the tensor denote:
1) The volumetric energy density of acceleration field
${\displaystyle ~B^{00}=\varepsilon _{a}={\frac {1}{8\pi \eta }}\left(S^{2}+c^{2}N^{2}\right).}$
2) The vector of momentum density of acceleration field ${\displaystyle ~\mathbf {P_{a}} ={\frac {1}{c^{2}}}\mathbf {K} ,}$ where the vector of energy flux density of acceleration field is
${\displaystyle ~\mathbf {K} ={\frac {c^{2}}{4\pi \eta }}[\mathbf {S} \times \mathbf {N} ].}$
Due to the symmetry of the tensor indices, ${\displaystyle B^{01}=B^{10},B^{02}=B^{20},B^{03}=B^{30}}$, so that ${\displaystyle {\frac {1}{c}}\mathbf {K} =c\mathbf {P_{a}} .}$
3) The space-like components of the tensor form a submatrix 3 x 3, which is the 3-dimensional acceleration stress tensor, taken with a minus sign. The acceleration stress tensor can be written as
${\displaystyle ~\sigma ^{pq}={\frac {1}{4\pi \eta }}\left(S^{p}S^{q}+c^{2}N^{p}N^{q}-{\frac {1}{2}}\delta ^{pq}(S^{2}+c^{2}N^{2})\right),}$
where ${\displaystyle ~p,q=1,2,3,}$ the components ${\displaystyle S^{1}=S_{x},}$ ${\displaystyle S^{2}=S_{y},}$ ${\displaystyle S^{3}=S_{z},}$ ${\displaystyle N^{1}=N_{x},}$ ${\displaystyle N^{2}=N_{y},}$ ${\displaystyle N^{3}=N_{z},}$ the Kronecker delta ${\displaystyle ~\delta ^{pq}}$ equals 1 if ${\displaystyle ~p=q,}$ and equals 0 if ${\displaystyle ~p\not =q.}$
The three-dimensional divergence of the stress tensor of the acceleration field connects the force density and the rate of change of the momentum density of the acceleration field:
${\displaystyle ~\partial _{q}\sigma ^{pq}=-f^{p}+{\frac {1}{c^{2}}}{\frac {\partial K^{p}}{\partial t}},}$
where ${\displaystyle ~f^{p}}$ denotes the components of the three-dimensional acceleration force density, and ${\displaystyle ~K^{p}}$ the components of the energy flux density of the acceleration field.
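The component formulas above are straightforward to evaluate numerically. Below is a minimal sketch (an illustration added here, not from the article: the field values and the value chosen for the acceleration field constant ${\displaystyle ~\eta }$ are made-up placeholders) computing the energy density, the energy flux density and the stress tensor from given S and N:

import numpy as np

c = 299_792_458.0  # speed of light, m/s
eta = 1.0          # placeholder value for the acceleration field constant
S = np.array([1.0, 0.0, 0.0])     # field strength (arbitrary test values)
N = np.array([0.0, 2.0e-9, 0.0])  # solenoidal vector (arbitrary test values)

# volumetric energy density: (S^2 + c^2 N^2) / (8 pi eta)
energy_density = (S @ S + c**2 * (N @ N)) / (8 * np.pi * eta)

# energy flux density: K = c^2 / (4 pi eta) * (S x N)
K = c**2 / (4 * np.pi * eta) * np.cross(S, N)

# acceleration stress tensor sigma^{pq}
sigma = (np.outer(S, S) + c**2 * np.outer(N, N)
         - 0.5 * np.eye(3) * (S @ S + c**2 * (N @ N))) / (4 * np.pi * eta)

print(energy_density, K, sigma, sep="\n")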
4-force density and field equation
The principle of least action implies that the 4-vector of force density ${\displaystyle ~f_{\alpha }}$ can be found either through the acceleration stress-energy tensor or through the product of the acceleration tensor and the mass 4-current:
${\displaystyle ~f_{\alpha }=\nabla _{\beta }{B_{\alpha }}^{\beta }=-u_{\alpha k}J^{k}.\qquad (1)}$
The field equations of acceleration field are as follows:
${\displaystyle ~\nabla _{n}u_{ik}+\nabla _{i}u_{kn}+\nabla _{k}u_{ni}=0,}$
${\displaystyle ~\nabla _{k}u^{ik}=-{\frac {4\pi \eta }{c^{2}}}J^{i}.}$
In the special theory of relativity, according to (1), the components of the four-force density can be written as:
${\displaystyle ~f_{\alpha }=(-{\frac {\mathbf {S} \cdot \mathbf {J} }{c}},-\mathbf {f} ),}$
where ${\displaystyle ~\mathbf {f} =-\rho \mathbf {S} -[\mathbf {J} \times \mathbf {N} ]}$ is the 3-vector of the force density, ${\displaystyle ~\rho }$ is the density of the moving matter, ${\displaystyle ~\mathbf {J} =\rho \mathbf {v} }$ is the 3-vector of the mass current density, ${\displaystyle ~\mathbf {v} }$ is the 3-vector of velocity of the matter unit.
In Minkowski space, the field equations are transformed into four equations for the acceleration field strength ${\displaystyle ~\mathbf {S} }$ and solenoidal acceleration vector ${\displaystyle ~\mathbf {N} }$
${\displaystyle ~\nabla \cdot \mathbf {S} =4\pi \eta \rho ,}$
${\displaystyle ~\nabla \times \mathbf {N} ={\frac {1}{c^{2}}}{\frac {\partial \mathbf {S} }{\partial t}}+{\frac {4\pi \eta \rho \mathbf {v} }{c^{2}}},}$
${\displaystyle ~\nabla \cdot \mathbf {N} =0,}$
${\displaystyle ~\nabla \times \mathbf {S} =-{\frac {\partial \mathbf {N} }{\partial t}}.}$
Equation for the metric
In the covariant theory of gravitation, in accordance with the principles of the metric theory of relativity, the acceleration stress-energy tensor is one of the tensors that determine the metric inside bodies through the equation for the metric:
${\displaystyle ~R_{ik}-{\frac {1}{4}}g_{ik}R={\frac {8\pi G\beta }{c^{4}}}\left(B_{ik}+P_{ik}+U_{ik}+W_{ik}\right),}$
where ${\displaystyle ~\beta }$ is the coefficient to be determined, ${\displaystyle ~B_{ik}}$, ${\displaystyle ~P_{ik}}$, ${\displaystyle ~U_{ik}}$ and ${\displaystyle ~W_{ik}}$ are the stress-energy tensors of the acceleration field, pressure field, gravitational and electromagnetic fields, respectively, ${\displaystyle ~G}$ is the gravitational constant.
Equation of motion
The equation of motion of a point particle inside or outside matter can be represented in tensor form, either with the acceleration stress-energy tensor ${\displaystyle B^{ik}}$ or with the acceleration tensor ${\displaystyle u_{nk}}$:
${\displaystyle ~-\nabla _{k}\left(B^{ik}+U^{ik}+W^{ik}+P^{ik}\right)=g^{in}\left(u_{nk}J^{k}+\Phi _{nk}J^{k}+F_{nk}j^{k}+f_{nk}J^{k}\right)=0.\qquad (2)}$
where ${\displaystyle ~\Phi _{nk}}$ is the gravitational tensor, ${\displaystyle ~F_{nk}}$ is the electromagnetic tensor, ${\displaystyle ~f_{nk}}$ is the pressure field tensor, ${\displaystyle ~j^{k}=\rho _{0q}u^{k}}$ is the charge 4-current, ${\displaystyle ~\rho _{0q}}$ is the density of electric charge of the matter unit in the reference frame at rest, and ${\displaystyle ~u^{k}}$ is the 4-velocity.
We now recognize that ${\displaystyle ~J^{k}=\rho _{0}u^{k}}$ is the mass 4-current and the acceleration tensor is defined through the covariant 4-potential as ${\displaystyle ~u_{nk}=\nabla _{n}U_{k}-\nabla _{k}U_{n}.}$ This gives the following: [2]
${\displaystyle ~\nabla _{\beta }{B_{n}}^{\beta }=-u_{nk}J^{k}=-\rho _{0}u^{k}(\nabla _{n}U_{k}-\nabla _{k}U_{n})=\rho _{0}{\frac {DU_{n}}{D\tau }}-\rho _{0}u^{k}\nabla _{n}U_{k}.\qquad (3)}$
Here the proper-time derivative operator ${\displaystyle ~u^{k}\nabla _{k}={\frac {D}{D\tau }}}$ is used, where ${\displaystyle ~D}$ is the symbol of the 4-differential in curved spacetime, ${\displaystyle ~\tau }$ is the proper time, and ${\displaystyle ~\rho _{0}}$ is the mass density in the comoving frame.
Accordingly, the equation of motion (2) becomes:
${\displaystyle ~\rho _{0}{\frac {DU_{n}}{D\tau }}-\rho _{0}u^{k}\nabla _{n}U_{k}=-\nabla ^{k}\left(U_{nk}+W_{nk}+P_{nk}\right)=\Phi _{nk}J^{k}+F_{nk}j^{k}+f_{nk}J^{k}.}$
The time-like component of the equation, at ${\displaystyle ~n=0}$, describes the rate of change of the scalar potential of the acceleration field, while the spatial components, at ${\displaystyle ~n=1{,}2{,}3}$, connect the rate of change of the vector potential of the acceleration field with the force density.
Conservation laws
For the time-like component of equation (2), i.e. with the index ${\displaystyle ~i=0}$, the vanishing of the left side of (2) in the limit of special relativity gives:
${\displaystyle ~\nabla \cdot (\mathbf {K} +\mathbf {H} +\mathbf {P} +\mathbf {F} )=-{\frac {\partial (B^{00}+U^{00}+W^{00}+P^{00})}{\partial t}},}$
where ${\displaystyle ~\mathbf {K} }$ is the vector of the acceleration field energy flux density, ${\displaystyle ~\mathbf {H} }$ is the Heaviside vector, ${\displaystyle ~\mathbf {P} }$ is the Poynting vector, ${\displaystyle ~\mathbf {F} }$ is the vector of the pressure field energy flux density.
This equation can be regarded as a local conservation law of energy-momentum of the four fields. [3]
The integral form of the law of conservation of energy-momentum is obtained by integrating (2) over the 4-volume. By the divergence theorem, the integral of the 4-divergence of a tensor over 4-space can be replaced by the integral of the time-like tensor components over 3-volume. As a result, in Lorentz coordinates one obtains an integral vector that equals zero: [4]
${\displaystyle ~\mathbb {Q} ^{i}=\int {\left(B^{i0}+U^{i0}+W^{i0}+P^{i0}\right)dV}.}$
Vanishing of the integral vector allows us to explain the 4/3 problem, according to which the mass-energy of the field contained in the field momentum of a moving system is 4/3 times larger than the mass-energy contained in the field energy of the system at rest. On the other hand, according to, [3] the generalized Poynting theorem and the integral vector should be considered differently inside the matter and beyond its limits. As a result, the occurrence of the 4/3 problem is associated with the fact that the time components of the stress-energy tensors do not form four-vectors, and therefore they cannot, in principle, define the same mass in the field's energy and momentum.
Relativistic mechanics
In relativistic mechanics and in general relativity (GR) the acceleration stress-energy tensor is not used. Instead, the so-called stress-energy tensor of matter is used, which in the simplest case has the following form: ${\displaystyle ~\phi _{n\beta }=\rho _{0}u_{n}u_{\beta }}$. In GR the tensor ${\displaystyle ~\phi _{n\beta }}$ is substituted into the equation for the metric, and its covariant derivative gives the following:
${\displaystyle ~\nabla ^{\beta }\phi _{n\beta }=\nabla ^{\beta }(\rho _{0}u_{n}u_{\beta })=u_{n}\nabla ^{\beta }J_{\beta }+\rho _{0}u_{\beta }\nabla ^{\beta }u_{n}.}$
In GR it is assumed that the continuity equation holds in the form ${\displaystyle ~\nabla ^{\beta }J_{\beta }=0.}$ Then, using the proper-time derivative operator, the covariant derivative of the tensor ${\displaystyle ~\phi _{n\beta }}$ gives the product of the mass density and the four-acceleration, i.e. the density of the 4-force:
${\displaystyle ~\nabla ^{\beta }\phi _{n\beta }=\rho _{0}u_{\beta }\nabla ^{\beta }u_{n}=\rho _{0}{\frac {Du_{n}}{D\tau }}.\qquad (4)}$
However, the continuity equation is valid only in the special theory of relativity, in the form ${\displaystyle ~\partial ^{\beta }J_{\beta }=\partial _{\beta }J^{\beta }=0.}$ In curved space-time the equation ${\displaystyle ~\nabla ^{\beta }J_{\beta }=0}$ would have to hold instead, but in it an additional non-zero term containing the Riemann curvature tensor appears in place of the zero on the right-hand side. [1] Consequently, (4) is not an exact expression, and the tensor ${\displaystyle ~\phi _{n\beta }}$ determines the properties of the matter only in the special theory of relativity. In contrast, in the covariant theory of gravitation equation (3) is written in covariant form, so that the acceleration stress-energy tensor ${\displaystyle ~B_{n\beta }}$ describes the acceleration field of matter particles well in curved Riemannian space-time.
|
# Categorical algebra
This page discusses the object called a categorical algebra; for categorical generalizations of algebra theory, see Category:Monoidal categories.
In category theory, a field of mathematics, a categorical algebra is an associative algebra, defined for any locally finite category and commutative ring with unity. It generalizes the notions of group algebra and incidence algebra, just as category generalizes the notions of group and partially ordered set.
## Definition
Infinite categories are conventionally treated differently for group algebras and incidence algebras; the definitions agree for finite categories. We first present the definition that generalizes the group algebra.
### Group algebra-style definition
Let C be a category and R be a commutative ring with unit. Then as a set and as a module, the categorical algebra RC (or R[C]) is the free module on the maps of C.
The multiplication on RC can be understood in several ways, depending on how one presents a free module.
Thinking of the free module as formal linear combinations (which are finite sums), the multiplication is the multiplication (composition) of the category, where defined:
$\left(\sum_i a_i f_i\right)\left(\sum_j b_j g_j\right) = \sum_{i,j} a_i b_j \, f_i g_j$
where $f_i g_j=0$ if their composition is not defined. This is defined for any finite sum.
Thinking of the free module as finitely supported functions, the multiplication is defined as a convolution: if $a, b \in RC$ (thought of as functionals on the maps of C), then their product is defined as:
$(a * b)(h) := \sum_{fg=h} a(f)b(g).$
The latter sum is finite because the functions are finitely supported.
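To make the convolution product concrete, here is a minimal sketch (an illustration added here, not from the article): morphisms are modelled as hashable tokens, elements of RC as finitely supported coefficient dictionaries, and a helper compose(f, g) is assumed to return the composite when it is defined and None otherwise.

def convolve(a, b, compose):
    """Product in RC: (a * b)(h) = sum of a(f) * b(g) over pairs with fg = h."""
    out = {}
    for f, af in a.items():
        for g, bg in b.items():
            h = compose(f, g)
            if h is not None:  # composition defined
                out[h] = out.get(h, 0) + af * bg
    return out

# One-object example: the group algebra of Z/2 = {e, s} over the integers,
# encoding e as 0 and s as 1 so that composition is XOR (always defined).
compose = lambda f, g: f ^ g
a = {0: 2, 1: 3}   # 2e + 3s
b = {0: 1, 1: -1}  # e - s
print(convolve(a, b, compose))  # {0: -1, 1: 1}, i.e. -e + s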
### Incidence algebra-style definition
The definition used for incidence algebras assumes that the category C is locally finite, is dual to the above definition, and defines a different object. This isn't a useful assumption for groups, as a group that is locally finite as a category is finite.
A locally finite category is one where every map can be written only finitely many ways as a product of non-identity maps. The categorical algebra (in this sense) is defined as above, but allowing all coefficients to be non-zero.
In terms of formal sums, the elements are all formal sums
$\sum_{f_i \in \mathrm{Hom}(C)} a_i f_i,$
where there are no restrictions on the $a_i$ (they can all be non-zero).
In terms of functions, the elements are any functions from the maps of C to R, and multiplication is defined as convolution. The sum in the convolution is always finite because of the local finiteness assumption.
### Dual
The module dual of the category algebra (in the group algebra sense of the definition) is the space of all maps from the maps of C to R, denoted F(C), and has a natural coalgebra structure. Thus for a locally finite category, the dual of a categorical algebra (in the group algebra sense) is the categorical algebra (in the incidence algebra sense), and has both an algebra and coalgebra structure.
|
# On the Number of Sum-Free Triplets of Sets
@article{Araujo2021OnTN,
title={On the Number of Sum-Free Triplets of Sets},
author={Igor Araujo and J{\'o}zsef Balogh and Ramon I. Garcia},
journal={Electron. J. Comb.},
year={2021},
volume={28}
}
We count the ordered sum-free triplets of subsets in the group $\mathbb{Z}/p\mathbb{Z}$, i.e., the triplets $(A,B,C)$ of sets $A,B,C \subset \mathbb{Z}/p\mathbb{Z}$ for which the equation $a+b=c$ has no solution with $a\in A$, $b \in B$ and $c \in C$. Our main theorem improves on a recent result by Semchankau, Shabanov, and Shkredov using a different and simpler method. Our proof relates previous results on the number of independent sets of regular graphs by Kahn; Perarnau and Perkins; and…
## References
• Sum-free sets in abelian groups. 2003.
• Sum-free sets in abelian groups. 2001.
• On independent sets in hypergraphs. Random Structures & Algorithms, 2014.
• Counting independent sets in cubic graphs of given girth. Journal of Combinatorial Theory, Series B, 2018.
• J. Kahn. An Entropy Approach to the Hard-Core Model on Bipartite Graphs. Combinatorics, Probability and Computing, 2001.
• N. Ruozzi. The Bethe Partition Function of Log-supermodular Graphical Models. NIPS, 2012.
• Sharp bound on the number of maximal sum-free subsets of integers. Journal of the European Mathematical Society, 2018.
• Hypergraph containers. Inventiones mathematicae, 2015.
|
# Math Help - Determine when f(x)=48
1. ## Determine when f(x)=48
We have the equation x^4 - 5x^2 + 4, and to find the values of x that give us 48 we do this: x^4 - 5x^2 + 4 = 48, then we try to factor. In this case we cannot, so do we use the quadratic formula? The answers in the textbook are 48 and -3.10? They are wrong... So what do I do?
2. ## Re: Determine when f(x)=48
Use the substitution $y=x^2$
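A worked continuation of that hint (a sketch added here, not part of the original thread): with $y = x^2$ the equation becomes

$x^4 - 5x^2 + 4 = 48 \;\Longrightarrow\; y^2 - 5y - 44 = 0 \;\Longrightarrow\; y = \frac{5 \pm \sqrt{201}}{2}.$

Discarding the negative root (since $y = x^2 \ge 0$) gives $y \approx 9.59$, so $x = \pm\sqrt{y} \approx \pm 3.10$, which matches the $-3.10$ quoted from the textbook.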
|
# Tanh-Sinh integration (a,b) Calculator
## Calculates a table of the successive integral estimates of the given function f(x) over the interval (a,b) by doubling partitions from two to N using the Tanh-Sinh method.
This method is suitable for functions with endpoint singularities. The integrand f(x) is assumed to be analytic and non-periodic. The estimate is computed by doubling the number of partitions from 2 up to N.
Tanh-Sinh integration:

(1) Substitute $x = \frac{b-a}{2}y + \frac{b+a}{2}$:

$$\int_a^b f(x)\,dx = \int_{-1}^{1} f\!\left(\frac{b-a}{2}y + \frac{b+a}{2}\right)\frac{b-a}{2}\,dy$$

Then substitute $y = \tanh\left(\frac{\pi}{2}\sinh(t)\right)$:

$$\int_a^b f(x)\,dx \simeq \int_{-t_a}^{t_a} f\!\left(\frac{b-a}{2}y(t) + \frac{b+a}{2}\right) y'(t)\,\frac{b-a}{2}\,dt, \qquad y(t)=\tanh\left(\frac{\pi}{2}\sinh(t)\right), \qquad y'(t)=\frac{\frac{\pi}{2}\cosh(t)}{\cosh^2\left(\frac{\pi}{2}\sinh(t)\right)}$$

(2) Trapezoid rule:

$$S = \sum_{i=1}^{n} f\!\left(\frac{b-a}{2}y_i + \frac{b+a}{2}\right) w_i$$

with nodes $y_i = \tanh\left(\frac{\pi}{2}\sinh(t_i)\right)$, $t_i = -t_a + (i-1)h$, and weights

$$w_i = \frac{\frac{\pi}{2}\cosh(t_i)}{\cosh^2\left(\frac{\pi}{2}\sinh(t_i)\right)}\,\frac{b-a}{2}\,h, \qquad h = \frac{2t_a}{n-1}.$$
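The node/weight formulas above translate directly into code. A minimal sketch (an illustration added here, not the calculator's implementation; the truncation point t_a = 3 and the node count are arbitrary choices):

import math

def tanh_sinh(f, a, b, n=129, t_a=3.0):
    """Approximate the integral of f over (a, b) with n tanh-sinh nodes."""
    h = 2 * t_a / (n - 1)
    half, mid = (b - a) / 2, (b + a) / 2
    total = 0.0
    for i in range(n):
        t = -t_a + i * h
        y = math.tanh(0.5 * math.pi * math.sinh(t))           # node in (-1, 1)
        w = (0.5 * math.pi * math.cosh(t)
             / math.cosh(0.5 * math.pi * math.sinh(t)) ** 2)  # dy/dt
        total += f(half * y + mid) * w
    return total * half * h

# Endpoint-singular example: the integral of 1/sqrt(1-x^2) over (-1, 1) is pi.
print(tanh_sinh(lambda x: 1 / math.sqrt(1 - x * x), -1.0, 1.0))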
|
# Lower bound on approximation degree in Nisan-Szegedy
In Nisan and Szegedy's 1994 paper "On the degree of boolean functions as real polynomials" [1], Lemma 3.8, how does the proof work for $\widetilde{\deg}(f)\geq \sqrt{\,\tfrac16\mathrm{bs}(f)\,}$? It clearly works for $\deg(f)$, and only this portion is shown in Jukna's book [2, Theorem 14.11]. Why does the proof also work for $\widetilde{\deg}(f)$?
Here $f$ is a boolean function, $\widetilde{\deg}(f)$ is the $\tfrac13$-approximation degree of $f$, $\deg(f)$ is the real degree of $f$, and $\mathrm{bs}(f)$ is the block sensitivity of $f$.
[1] Noam Nisan and Mario Szegedy, "On the degree of boolean functions as real polynomials". Computational Complexity 4(4):301–313, 1994 (SpringerLink)
[2] Stasys Jukna, Boolean Function Complexity. Volume 27 of Algorithms and Combinatorics, Springer, 2012. (Homepage)
• Turbo, I added full citations to the paper and book. I think I got the right one of Jutka's books but please fix it if I didn't! Either way, flag this comment as obsolete once you've dealt with it. – David Richerby Jan 7 '15 at 14:05
If you believe Lemma 3.8 for $\deg f$, you should also believe it for $\widetilde{\deg} f$. An earlier lemma states that if $f$ is a Boolean function on $n$ variables such that $f(\mathbf{0}) = 0$ and $f(\mathbf{x})=1$ for all $|\mathbf{x}|=1$ then $\deg f \geq \sqrt{n/2}$ and $\widetilde{\deg} f \geq \sqrt{n/6}$. Applying the first part of this lemma, we conclude the first part of Lemma 3.8, namely $\deg f \geq \sqrt{\operatorname{bs}(f)/2}$. Applying the second part of this lemma, we conclude the second part of Lemma 3.8, namely $\widetilde{\deg} f \geq \sqrt{\operatorname{bs}(f)/6}$.
• Yes. But if you notice, you pass only $f'$ from Lemma 3.8 to the previous lemma. So it proves something about $\widetilde{\deg}(f')$. How does it transfer to $\widetilde{\deg}(f)$? – 1.. Jan 7 '15 at 19:23
• @Turbo I believe it's within your powers to answer these questions yourself. The proof does work. Still, here are some hints. Just like $\deg f' \leq \deg f$, also $\widetilde{\deg} f' \leq \widetilde{\deg} f$. Regarding symmetry, the proof of the earlier lemma goes around this issue by symmetrizing the function. – Yuval Filmus Jan 7 '15 at 19:41
• Ok. I will spend some time understanding $\widetilde{\deg}(f')$ and $\widetilde{\deg}(f)$. So you pass the approximating polynomial to the previous lemma when you use widetilde instead of the exact polynomial? (But the function $f'$ now is not $0-1$; that is where I was confused.) I will try to let this sink in much more carefully. – 1.. Jan 7 '15 at 19:47
• $x_1x_2$ with $x_1$ replaced by $(1+y_1)$ yields $x_2+y_1x_2$, which is not symmetric, while $x_1x_2$ was symmetric. – 1.. Jan 8 '15 at 4:41
|
# How to control which nodes get placed at each radius with GraphLayout → “RadialEmbedding”?
I have a directed graph each of whose nodes belongs to exactly one of three groups: "sources", "sinks", and "both". As these names suggest, nodes in the "sources" group have only outgoing edges; those in the "sinks" group have only incoming edges; and those in the "both" group can have both outgoing and incoming edges.
I want to lay out this graph using a radial embedding, with the nodes arranged in three concentric circles depending on which group they belong to: "sinks" in the outermost circle, "sources" in the innermost circle, and "both" in the circle in-between.
If I use the option GraphLayout → "RadialEmbedding" I indeed get a radial layout for the graph, but I don't get to specify which nodes go where (nor, for that matter, the number of possible radii from the center). Is there a way to do this?
(Note, as long as the nodes get placed along the appropriate circles, I'd like to leave it to the layout algorithm to optimize the relative position of the nodes so as to minimize the number of edge-crossings.)
• You can choose the root node as a suboption as in GraphLayout -> {"RadialEmbedding", "RootVertex" -> "a"}. See, for example, this answer to a related question. – kglr Oct 28 '14 at 14:00
• if you can add some examples, that will be very helpful... – halmir Oct 28 '14 at 14:48
ClearAll[circularKPartiteF];
circularKPartiteF = With[{rl = Range[Length@{##}]},
Join @@ (2^rl (GraphComputation`CircularEmbedding /@ {##}))] &;
Example 1: a complete graph
Graph[Range[25], EdgeList[CompleteGraph[{3, 7, 15}]],
VertexSize -> Large,
VertexCoordinates -> circularKPartiteF[3, 7, 15]]
Example 2: David's example
Graph[Range@40, DeleteDuplicates[Join[myFirstRules, mySecondRules]],
VertexSize -> Large,
VertexCoordinates -> circularKPartiteF[10, 10, 20]]
It appears to be very difficult to use GraphLayout -> "RadialEmbedding" in your case, but you can place your vertices using VertexCoordinates.
mySources = Range[1, 10];
myBoth = Range[11, 20];
mySinks = Range[21, 40];
myFirstRules =
Table[Rule[RandomChoice[mySources], RandomChoice[myBoth]], {30}];
mySecondRules =
Table[Rule[RandomChoice[myBoth], RandomChoice[mySinks]], {60}];
myVertexCoordinates =
Join[Table[{Cos[\[Theta]], Sin[\[Theta]]}, {\[Theta], 0,
2 \[Pi] - (2 \[Pi] )/Length[mySources], (2 \[Pi] )/
Length[mySources]}],
Table[2 {Cos[\[Theta]], Sin[\[Theta]]}, {\[Theta], 0,
2 \[Pi] - (2 \[Pi] )/Length[myBoth], (2 \[Pi] )/Length[myBoth]}],
Table[3 {Cos[\[Theta]], Sin[\[Theta]]}, {\[Theta], 0,
2 \[Pi] - (2 \[Pi] )/Length[mySinks], (2 \[Pi] )/
Length[mySinks]}]];
Graph[mySources \[Union] myBoth \[Union] mySinks, myFirstRules \[Union] mySecondRules,
VertexCoordinates -> myVertexCoordinates]
This only works when the graph is well structured, like David's example, and the source, sink, and both groups are known:
g = Graph[Range@40, DeleteDuplicates[Join[myFirstRules, mySecondRules]]];
Select[myBoth, (VertexOutDegree[g, #] == 0 &&
VertexInDegree[g, #] != 0) &]
|
# Lexicographic preferences
In economics, lexicographic preferences or lexicographic orderings describe comparative preferences where an agent prefers any amount of one good (X) to any amount of another (Y). Specifically, if offered several bundles of goods, the agent will choose the bundle that offers the most X, no matter how much Y there is. Only when there is a tie between bundles with regard to the number of units of X will the agent start comparing the number of units of Y across bundles. Lexicographic preferences extend utility theory analogously to the way that nonstandard infinitesimals extend the real numbers. With lexicographic preferences, the utility of certain goods is infinitesimal in comparison to others.
## Etymology
Lexicography refers to the compilation of dictionaries, and is meant to invoke the fact that a dictionary is organized alphabetically: with infinite attention to the first letter of each word, and only in the event of ties with attention to the second letter of each word, etc.
## Example
As an example, if for a given bundle (X;Y;Z) an agent orders his preferences according to the rule X>>Y>>Z, then the bundles {(5;3;3), (5;1;6), (3;5;3)} would be ordered, from most to least preferred:
1. 5;3;3
2. 5;1;6
3. 3;5;3
• Even though the first option contains fewer total goods than the second option, it is preferred because it has more Y. Note that the number of X's is the same, and so the agent is comparing Y's.
• Even though the third option has the same total goods as the first option, the first option is still preferred because it has more X.
• Even though the third option has far more Y than the second option, the second option is still preferred because it has more X.
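Since lexicographic comparison of bundles works exactly like the lexicographic ordering of tuples built into many programming languages, the ranking above can be reproduced directly. A minimal sketch (an illustration added here, not from the article), using Python's built-in tuple ordering:

# With X >> Y >> Z, ranking bundles is ordinary lexicographic tuple comparison.
bundles = [(5, 3, 3), (5, 1, 6), (3, 5, 3)]
ranked = sorted(bundles, reverse=True)  # most preferred first
print(ranked)  # [(5, 3, 3), (5, 1, 6), (3, 5, 3)]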
## Discontinuity
A lexicographic preference relation is not a continuous relation. This is because, for a decreasing convergent sequence ${\displaystyle x_{n}\rightarrow 0}$ we have ${\displaystyle (x_{n},0)>(0,1)}$ , while the limit (0,0) is smaller than (0,1).
## Utility function representation
A distinctive feature of such lexicographic preferences is that a multivariate real domain of an agent's preferences does not map into a real-valued range. That is, there is no real-valued representation of a preference relation by a utility function, whether continuous or not.[1] Lexicographic preferences are the classical example of rational preferences that are not representable by a utility function.
Proof: suppose by contradiction that there exists a utility function U representing lexicographic preferences, e.g. over two goods. Then U(x,1)>U(x,0) must hold, so the intervals [U(x,0),U(x,1)] must have non-zero width. Moreover, since U(x,1)<U(z,0) whenever x<z, these intervals must be disjoint for all x. This is not possible for an uncountable set of x-values.
If there are a finite number of goods, and amounts can only be rational numbers, utility functions do exist, simply by taking 1/N to be the size of the infinitesimal, where N is sufficiently large, to approximate nonstandard numbers.
In terms of real valued utility, one would say that the utility of Y and Z is infinitesimal compared with X, and the utility of Z is infinitesimal compared to Y. Thus, lexicographic preferences can be represented by utility functions returning nonstandard real numbers.
## Equilibrium in economies with lexicographic preferences
If all agents have the same lexicographic preferences, then general equilibrium cannot exist because agents won't sell to each other (as long as the price of the less preferred good is positive). But if the price of the less wanted good is zero, then all agents want an infinite amount of that good. Equilibrium cannot be attained with standard prices. The utilities are infinitesimal, but the prices are not. Allowing infinitesimal prices resolves this.
Lexicographic preferences can still exist with general equilibrium. For example,
• Different people have different bundles of lexicographic preferences such that different individuals value items in different orders.
• Some, but not all people have lexicographic preferences.
• Lexicographic preferences extend only to a certain quantity of the good.
The nonstandard (infinitesimal) equilibrium prices for exchange can be determined for lexicographic order using standard equilibrium methods, except using nonstandard reals as the range of both utilities and prices. All the theorems regarding existence of prices and equilibria extend to the case of nonstandard utilities, since the nonstandard reals form a conservative extension, meaning that any theorem which is true for reals can be extended to the nonstandard reals and remains true.
|
# Find the Number of Ways to Traverse an N-ary Tree using C++
Given an N-ary tree, we are tasked to find the total number of ways to traverse it.

For the example tree constructed in the code below, the output will be 192.

For this problem, we need some knowledge of combinatorics. At each node, the children can be visited in any order, so counting all the possible orderings over every node gives us our answer.
## Approach to Find the Solution
In this approach, we perform a level order traversal, check the number of children each node has, and multiply the factorial of that count into the answer.
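In other words (a summary of the approach, stated here for clarity): if node $v$ has $c(v)$ children, those children can be visited in $c(v)!$ orders independently of every other node, so the total number of traversals is

$$\prod_{v \in \text{tree}} c(v)!$$

For the tree built in the code below this is $4! \cdot 2! \cdot 2! \cdot 2! = 192$.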
## Example
C++ Code for the Above Approach
#include<bits/stdc++.h>
using namespace std;

struct Node{ // structure of our node
    char key;
    vector<Node *> child;
};

Node *createNode(char key){ // function to initialize a new node
    Node *temp = new Node;
    temp->key = key;
    return temp;
}

long long fact(int n){ // factorial of n: the number of orderings of n children
    if(n <= 1)
        return 1;
    return n * fact(n-1);
}

int main(){
    // build the example tree
    Node *root = createNode('A');
    (root->child).push_back(createNode('B'));
    (root->child).push_back(createNode('F'));
    (root->child).push_back(createNode('D'));
    (root->child).push_back(createNode('E'));
    (root->child[2]->child).push_back(createNode('K'));
    (root->child[1]->child).push_back(createNode('J'));
    (root->child[3]->child).push_back(createNode('G'));
    (root->child[0]->child).push_back(createNode('C'));
    (root->child[2]->child).push_back(createNode('H'));
    (root->child[1]->child).push_back(createNode('I'));
    (root->child[2]->child[0]->child).push_back(createNode('N'));
    (root->child[2]->child[0]->child).push_back(createNode('M'));
    (root->child[1]->child[1]->child).push_back(createNode('L'));

    queue<Node*> q; // BFS queue for the level order traversal
    q.push(root);
    long long ans = 1;
    while(!q.empty()){
        auto z = q.front();
        q.pop();
        // multiply in the number of orderings of this node's children
        ans *= fact(z->child.size());
        cout << z->child.size() << " ";
        for(auto x : z->child)
            q.push(x);
    }
    cout << ans << "\n";
    return 0;
}
## Output
4 1 2 2 1 0 0 1 2 0 0 0 0 0 192
## Explanation of the Above Code
In this approach, we apply BFS (Breadth-First Search), i.e. level order traversal, and check the number of children each node has. Then we multiply the factorial of that number into our answer.
## Conclusion
In this tutorial, we found the number of ways to traverse an N-ary tree using combinatorics and BFS. We also walked through the C++ program for this problem and the complete approach by which we solved it.

We can write the same program in other languages such as C, Java, Python, and others. We hope you find this tutorial helpful.
|
# How do you find the range of f(x)=x^2 + 3?
The domain is the set of all real numbers $R$ and the range is $\left[3 , + \infty\right)$: since $x^2 \ge 0$ for every real $x$, we have $x^2 + 3 \ge 3$, and every value of at least 3 is attained.
|
6. Process or Product Monitoring and Control
6.2. Test Product for Acceptability: Lot Acceptance Sampling
6.2.3. How do you Choose a Single Sampling Plan?
## Choosing a Sampling Plan with a given OC Curve
Sample OC curve
We start by looking at a typical OC curve. The OC curve for a (52, 3) sampling plan is shown below.
Number of defectives is approximately binomial
It is instructive to show how the points on this curve are obtained, once we have a sampling plan ($$n,c$$) - later we will demonstrate how a sampling plan ($$n,c$$) is obtained.
We assume that the lot size $$N$$ is very large, as compared to the sample size $$n$$, so that removing the sample doesn't significantly change the remainder of the lot, no matter how many defects are in the sample. Then the distribution of the number of defectives, $$d$$, in a random sample of $$n$$ items is approximately binomial with parameters $$n$$ and $$p$$, where $$p$$ is the fraction of defectives per lot.
The probability of observing exactly $$d$$ defectives is given by
The binomial distribution
$$P_d = f(d) = \frac{n!}{d! (n-d)!} p^d (1-p)^{n-d} \, .$$ The probability of acceptance is the probability that $$d$$, the number of defectives, is less than or equal to $$c$$, the accept number. This means that $$P_a = P(d \le c) = \sum_{d=0}^c \frac{n!}{d!(n-d)!} p^d(1-p)^{n-d} \, .$$
Sample table for $$P_a$$, $$P_d$$ using the binomial distribution
Using this formula with $$n=52$$, $$c=3$$, and $$p = 0.01, \, 0.02, \, \ldots, \, 0.12$$, we find:

 $$P_a$$   $$P_d$$
 0.998     0.01
 0.980     0.02
 0.930     0.03
 0.845     0.04
 0.739     0.05
 0.620     0.06
 0.502     0.07
 0.394     0.08
 0.300     0.09
 0.223     0.10
 0.162     0.11
 0.115     0.12
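The $$P_a$$ column can be reproduced directly from the binomial formula. A minimal sketch (an illustration added here, not part of the handbook):

from math import comb

def accept_prob(n, c, p):
    # P(d <= c) for d ~ Binomial(n, p): the probability of accepting the lot
    return sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))

for k in range(1, 13):
    p = 0.01 * k
    print(f"p = {p:.2f}  P_a = {accept_prob(52, 3, p):.3f}")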
Solving for (n,c)
Equations for calculating a sampling plan with a given OC curve
In order to design a sampling plan with a specified OC curve one needs two designated points. Let us design a sampling plan such that the probability of acceptance is $$1-\alpha$$ for lots with fraction defective $$p_1$$ and the probability of acceptance is $$\beta$$ for lots with fraction defective $$p_2$$. Typical choices for these points are: $$p_1$$ is the AQL, $$p_2$$ is the LTPD and $$\alpha, \, \beta$$ are the Producer's Risk (Type I error) and Consumer's Risk (Type II error), respectively.
If we are willing to assume that binomial sampling is valid, then the sample size $$n$$, and the acceptance number $$c$$ are the solution to $$\begin{eqnarray} 1 - \alpha & = & \sum_{d=0}^c \frac{n!}{d!(n-d)!} p_1^d (1-p_1)^{n-d} \\ \beta & = & \sum_{d=0}^c \frac{n!}{d!(n-d)!} p_2^d (1-p_2)^{n-d} \, . \end{eqnarray}$$ These two simultaneous equations are nonlinear so there is no simple, direct solution. There are however a number of iterative techniques available that give approximate solutions so that composition of a computer program poses few problems.
Average Outgoing Quality (AOQ)
Calculating AOQs
We can also calculate the AOQ for a ($$n,c$$) sampling plan, provided rejected lots are 100% inspected and defectives are replaced with good parts.
Assume all lots come in with exactly a $$p_0$$ proportion of defectives. After screening a rejected lot, the final fraction defectives will be zero for that lot. However, accepted lots have fraction defective $$p_0$$. Therefore, the outgoing lots from the inspection stations are a mixture of lots with fractions defective $$p_0$$ and 0. Assuming the lot size is $$N$$, we have $$\mbox{AOQ} = \frac{P_a p(N-n)}{N} \, .$$ For example, let $$N=10000$$, $$n=52$$, $$c=3$$, and $$p$$, the quality of incoming lots, equal 0.03. Now at $$p = 0.03$$, we glean from the OC curve table that $$P_a = 0.930$$ and $$\mbox{AOQ} = \frac{(0.930)(0.03)(10000-52)}{10000} = 0.02775 \, .$$
Sample table of AOQ versus $$p$$
Setting $$p = 0.01, \, 0.02, \, \ldots, \, 0.12$$, we can generate the following table.

 AOQ      $$p$$
 0.0010   0.01
 0.0196   0.02
 0.0278   0.03
 0.0338   0.04
 0.0369   0.05
 0.0372   0.06
 0.0351   0.07
 0.0315   0.08
 0.0270   0.09
 0.0223   0.10
 0.0178   0.11
 0.0138   0.12
Sample plot of AOQ versus $$p$$
A plot of the AOQ versus $$p$$ is given below.

Interpretation of AOQ plot
From examining this curve we observe that when the incoming quality is very good (very small fraction of defectives coming in), then the outgoing quality is also very good (very small fraction of defectives going out). When the incoming lot quality is very bad, most of the lots are rejected and then inspected. The "duds" are eliminated or replaced by good ones, so that the quality of the outgoing lots, the AOQ, becomes very good. In between these extremes, the AOQ rises, reaches a maximum, and then drops.
The maximum ordinate on the AOQ curve represents the worst possible quality that results from the rectifying inspection program. It is called the average outgoing quality limit, (AOQL ).
From the table we see that the $$\mbox{AOQL} = 0.0372$$ at $$p=0.06$$ for the above example.
One final remark: if $$N \gg n$$, then the $$\mbox{AOQ} \approx P_a p$$.
The Average Total Inspection (ATI)
Calculating the Average Total Inspection
What is the total amount of inspection when rejected lots are screened?
If all lots contain zero defectives, no lot will be rejected.
If all items are defective, all lots will be inspected, and the amount to be inspected is $$N$$.
Finally, if the lot quality is $$0 \lt p \lt 1$$, the average amount of inspection per lot will vary between the sample size $$n$$, and the lot size $$N$$.
Let the quality of the lot be $$p$$ and the probability of lot acceptance be $$P_a$$, then the ATI per lot is $$\mbox{ATI} = n + (1 - P_a) (N - n) \, .$$ For example, let $$N=10000$$, $$n=52$$, $$c=3$$, and $$p = 0.03$$. We know from the OC table that $$P_a = 0.93$$. Then $$\mbox{ATI} = 52 + (1-0.930)(10000 - 52) = 753$$. (Note that while 0.930 was rounded to three decimal places, 753 was obtained using more decimal places.)
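The ATI figure just quoted can be reproduced numerically. A self-contained sketch (an illustration added here, not part of the handbook):

from math import comb

N, n, c, p = 10000, 52, 3, 0.03
Pa = sum(comb(n, d) * p**d * (1 - p)**(n - d) for d in range(c + 1))
print(n + (1 - Pa) * (N - n))  # about 752.9, i.e. the 753 quoted above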
Sample table of ATI versus $$p$$
Setting $$p = 0.01, \, 0.02, \, \ldots, \, 0.14$$ generates the following table.

 ATI    $$p$$
 70     0.01
 253    0.02
 753    0.03
 1584   0.04
 2655   0.05
 3836   0.06
 5007   0.07
 6083   0.08
 7012   0.09
 7779   0.10
 8388   0.11
 8854   0.12
 9201   0.13
 9453   0.14
Plot of ATI versus $$p$$
A plot of ATI versus $$p$$, the Incoming Lot Quality (ILQ), is given below.
|
# What's the difference between background field and dynamical gauge field?
Dynamical gauge fields are assumed to be able to respond to sources.
What's the difference in the Lagrangians between a background field and a dynamical field?
Typically, in the path integral formalism
$$Z~=~\int \!{\cal D}\phi~ \exp\left(\frac{i}{\hbar}S[\phi]\right),$$
the dynamical$^1$ quantum field variables
$$\phi^{\alpha}(x)~=~B^{\alpha}(x)+\eta^{\alpha}(x)$$
are split into two parts:
1. a classical background field configuration $B^{\alpha}(x)$, which satisfies the classical equations$^2$ of motion (with classical background sources$^3$ $J_{\alpha}(x)$),
2. and a quantum fluctuation part $\eta^{\alpha}(x)$, which is (perturbatively) integrated over in the path integral $$Z[B]~=~\int \!{\cal D}\eta~ \exp\left(\frac{i}{\hbar}S[B+\eta]\right).$$
--
$^1$ The word dynamical will here for simplicity just mean that the field variable is an integration variable in the path integral. (Often the word dynamical is only used for propagating fields (as opposed to a non-propagating auxiliary field, e.g. Lagrange multipliers and ghosts)). In contrast, the word background is typically assigned to variables that are not integrated over in the path integral.
$^2$ We assume for simplicity that there (with the given boundary conditions and the given classical sources) is a unique classical solution for $B^{\alpha}(x)$, i.e. we ignore instantons here.
$^3$ The classical source configuration $J_{\alpha}(x)$ should typically satisfy certain consistency conditions, such as, e.g., a continuity equation.
|
# Without Actual Division Show that Each of the Following Rational Numbers is a Non-terminating Repeating Decimal. (i) 9/35
Without actual division show that each of the following rational numbers is a non-terminating repeating decimal.
(i) 9/35
#### Solution
9/35 = 9/(5 × 7)

Neither 5 nor 7 is a factor of 9, so the fraction is in its simplest form.

Moreover, (5 × 7) ≠ (2^m × 5^n) for any non-negative integers m and n.

Hence, the given rational number is a non-terminating repeating decimal.
Concept: Euclid’s Division Lemma
|
# Quasi-projectivity, Artin-Tits Groups, and Pencil Maps
Type: Preprint
Publication Date: May 28, 2010
Submission Date: May 28, 2010
Source: arXiv
## Abstract
We consider the problem of deciding if a group is the fundamental group of a smooth connected complex quasi-projective (or projective) variety using Alexander-based invariants. In particular, we solve the problem for large families of Artin-Tits groups. We also study finiteness properties of such groups and exhibit examples of hyperplane complements whose fundamental groups satisfy $\text{F}_{k-1}$ but not $\text{F}_k$ for any $k$.
|
# Exercises - Higher Order Derivatives
1. Find $y''$ in simplest form for each:
1. $\displaystyle{y = x^{1/3} (x+1)}$
2. $\displaystyle{y = \frac{x^2 - 1}{(x+4)^2}}$
3. $\displaystyle{y = \frac{x^2-4}{x^2+1}}$
4. $\displaystyle{y = \frac{2x^3}{x^3+1}}$
5. $\displaystyle{y = \sqrt{x^2+9}}$
6. $\displaystyle{y = \frac{6x}{x^2+1}}$
7. $\displaystyle{y = \frac{2x^3}{x+1}}$
2. Find $y'''$ in simplest form for each:
1. $\displaystyle{y = 4x^3 - 3e^x - \sin x}$
2. $\displaystyle{y = e^{\ln \cos x} - \ln \left( \frac{x}{e^x} \right)}$
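For calibration, here is a worked sketch of exercise 1(a) (added here; not part of the original exercise set). Writing $y = x^{1/3}(x+1) = x^{4/3} + x^{1/3}$:

$y' = \tfrac{4}{3}x^{1/3} + \tfrac{1}{3}x^{-2/3}, \qquad y'' = \tfrac{4}{9}x^{-2/3} - \tfrac{2}{9}x^{-5/3} = \frac{2(2x-1)}{9x^{5/3}}.$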
|
# The center of a circle is at (4, -1) and it has a radius of 6. What is the equation of the circle?
Mar 6, 2016
${\left(x - 4\right)}^{2} + {\left(y + 1\right)}^{2} = 36$
#### Explanation:
The standard form of the equation of a circle is :
${\left(x - a\right)}^{2} + {\left(y - b\right)}^{2} = {r}^{2}$
where (a, b) are the coordinates of the centre and r is the radius.

Here (a, b) = (4, -1) and r = 6.
substitute these values into the standard equation
$\Rightarrow {\left(x - 4\right)}^{2} + {\left(y + 1\right)}^{2} = 36 \text{ is the equation }$
|
import torch
model = torch.hub.load('huawei-noah/ghostnet', 'ghostnet_1x', pretrained=True)
model.eval()
All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
Here’s a sample execution.
# Download an example image from the pytorch website
import urllib
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
try: urllib.URLopener().retrieve(url, filename)
except: urllib.request.urlretrieve(url, filename)
# sample execution (requires torchvision)
from PIL import Image
from torchvision import transforms
input_image = Image.open(filename)
preprocess = transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
input_tensor = preprocess(input_image)
input_batch = input_tensor.unsqueeze(0) # create a mini-batch as expected by the model
# move the input and model to GPU for speed if available
if torch.cuda.is_available():
input_batch = input_batch.to('cuda')
model.to('cuda')
output = model(input_batch)
# Tensor of shape 1000, with confidence scores over Imagenet's 1000 classes
print(output[0])
# The output has unnormalized scores. To get probabilities, you can run a softmax on it.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
print(probabilities)
# Download ImageNet labels
!wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
# Read the categories
with open("imagenet_classes.txt", "r") as f:
categories = [s.strip() for s in f.readlines()]
# Show top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
print(categories[top5_catid[i]], top5_prob[i].item())
### Model Description
The GhostNet architecture is based on a Ghost module structure which generates more features from cheap operations. Based on a set of intrinsic feature maps, a series of cheap operations are applied to generate many ghost feature maps that can fully reveal the information underlying the intrinsic features. Experiments conducted on benchmarks demonstrate the superiority of GhostNet in terms of the speed-accuracy tradeoff.
The corresponding accuracy on ImageNet dataset with pretrained model is listed below.
| Model structure | FLOPs | Top-1 acc | Top-5 acc |
| --------------- | ----- | --------- | --------- |
| GhostNet 1.0x   | 142M  | 73.98     | 91.46     |
### References
You can read the full paper at this link.
@inproceedings{han2019ghostnet,
  title={GhostNet: More Features from Cheap Operations},
  author={Kai Han and Yunhe Wang and Qi Tian and Jianyuan Guo and Chunjing Xu and Chang Xu},
  booktitle={CVPR},
  year={2020},
}
|
# Serialization¶
Serialization is much simpler than it used to be. Before, this set of libraries had its own serialization scheme. This was generally a bad idea. It was alright for sending strings, and good for a proof of concept, but it would have taken far too much effort to maintain.
Now we rely on msgpack, with some small modifications. Essentially, the process is capturable with the following JavaScript:
function serialize(msg, compressions) {
    // gather the header fields and payload into a single array, then pack it
    let to_serialize = [msg.type, msg.sender, msg.time, ...msg.payload];
    let bare_serialized = msgpack.encode(to_serialize);
    let checksum = SHA256(bare_serialized); // should return binary digest
    // prepend the checksum and apply the negotiated compression methods
    let payload = compress(checksum + bare_serialized, compressions);
    return payload;
}
|
# Formal definition of limits as x approaches infinity used to prove a limit
1. Oct 15, 2012
### aegiuscutter
1. The problem statement, all variables and given/known data
use the formal definition to show that lim as t goes to infinity of (1-2t-3t^2)/(3+4t+5t^2) = -3/5
2. Relevant equations
given epsilon > 0, we want to find N such that if t > N then the absolute value of ((1-2t-3t^2)/(3+4t+5t^2) + 3/5) is less than epsilon
3. The attempt at a solution
I assume that t > N > 0 and that the numerator and denominator can't be equal to zero.
Do I have to limit the domain? Not sure how to proceed from here.
absolute value of ((1-2t-3t^2)/(3+4t+5t^2)) < epsilon - 3/5
absolute value of (-(3t+1)(t-1)/5t^2+4t+3)) < epsilon -3/5
2. Oct 15, 2012
### Zondrina
For this problem, begin by massaging your |f(x) - L| into the form x>N so that you will get a particular value of N which may work.
Then re-state your definition except say $\forall ε>0, \exists N = something \space | \space x > something \Rightarrow |f(x)-L| < ε$
Then proceed to prove that the particular value of N you found satisfies the definition.
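To make the "massaging" concrete, here is one way the bound can be worked out (a sketch added here, not part of the original thread). Combining the fractions,

$\left|\frac{1-2t-3t^2}{3+4t+5t^2} + \frac{3}{5}\right| = \left|\frac{5(1-2t-3t^2) + 3(3+4t+5t^2)}{5(5t^2+4t+3)}\right| = \frac{2t+14}{5(5t^2+4t+3)}$ for $t > 0$.

For $t \ge 1$ we have $2t + 14 \le 16t$ and $5(5t^2+4t+3) \ge 25t^2$, so the expression is at most $\frac{16}{25t}$. Hence choosing $N = \max\left(1, \frac{16}{25\varepsilon}\right)$ guarantees that $t > N$ makes the quantity less than $\varepsilon$.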
|
# Changeset 3233 for Deliverables
Timestamp:
Apr 30, 2013, 5:00:37 PM (8 years ago)
Message:
passed spell checker, added description of the cerco wrapper, minor changes to the cost plug-in section
File:
1 edited
r3109 \documentclass[11pt, epsf, a4wide]{article} \documentclass[11pt]{article} \usepackage{../style/cerco} \begin{large} Main Authors:\\ XXXX %Dominic P. Mulligan and Claudio Sacerdoti Coen Roberto M. Amadio, Nicolas Ayache, François Bobot, Jaap Boender, Brian Campbell, Ilias Garnier, Antoine Madet, James McKinna, Dominic P. Mulligan, Mauro Piccolo, Yann R\'egis-Gianas, Claudio Sacerdoti Coen, Ian Stark and Paolo Tranquilli \end{large} \end{center} \vspace*{7cm} \paragraph{Abstract} The Trusted CerCo Prototype is meant to be the last piece of software produced The trusted CerCo Prototype is meant to be the last piece of software produced in the CerCo project. It consists of \begin{enumerate} user by producing an instrumented C code obtained injecting in the original source code some instructions to update three global variables that record the current clock, the current stack usage and the max stack usage so far. the current clock, the current stack usage and the maximum stack usage so far. \item A plug-in for the Frama-C Software Analyser architecture. The plug-in takes in input a C file, compiles it with the CerCo compiler and then provers to automatically verify them. Those that are not automatically verified can be manually checked by the user using a theorem prover. \item A wrapper that interfaces and integrates the two pieces of software above with the Frama-C Jessie plugin and with the Why3 platform. \end{enumerate} The Grant Agreement describes deliverable D5.2 as follows: \begin{quotation} \textbf{Trusted CerCo Prototype}: Final, fully trustable version of the \textbf{Trusted CerCo Prototype}: Final, fully trusted version of the system. \end{quotation} } \end{verbatim} \caption{A simple C program~\label{test1}} \caption{A simple C program.} \label{test1} \end{figure} } \end{verbatim} \caption{The instrumented version of the program in Figure~\ref{test1}\label{itest1}} \caption{The instrumented version of the program in \autoref{test1}.} \label{itest1} \end{figure} Let the file \texttt{test.c} contains the code presented in Figure~\ref{test1}. Let the file \texttt{test.c} contains the code presented in \autoref{test1}. By calling \texttt{acc -a test.c}, the user obtains the following files: \begin{itemize} \item \texttt{test-instrumented.c} (Figure~\ref{itest1}): the file is a copy \item \texttt{test-instrumented.c} (\autoref{itest1}): the file is a copy of \texttt{test.c} obtained by adding code that keeps track of the amount of time and space used by the source code. \item \texttt{test.hex}: the file is an Intel HEX representation of the object code for an 8051 microprocessor. The file can be loaded in any 8051 emulator (like the MCU 8051 IDE) for running or dissassemblying it. 8051 emulator (like the MCU 8051 IDE) for running or disassembling it. Moreover, an executable semantics (an emulator) for the 8051 is linked with the CerCo compilers and can be used to run the object code at the compilers for the intermediate passes are not the same and we do not output yet any intermediate syntax for the assembly code. The latter can be obtained by disassemblying the object code, e.g. by using the MCU 8051 IDE. by disassembling the object code, e.g. by using the MCU 8051 IDE. \subsection{Code structure} This is the code that will eventually be fully certified using Matita. It is fully contained in the \texttt{extracted} directory (not comprising the subdirectory \texttt{unstrusted}). the subdirectory \texttt{untrusted}). 
It translates the source C-light code to: \begin{itemize} at the beginning of basic blocks. The emitted labels are the ones that are observed calling \texttt{acc -is}. They will be replaced in the final instrumented code with incrementes of the \texttt{\_\_cost} final instrumented code with increments of the \texttt{\_\_cost} variable. \item The object code for the 8051 as a list of bytes to be loaded in code memory. The code is coupled with a trie over bitvectors that maps code memory. The code is coupled with a trie over bit vectors that maps program counters to cost labels \texttt{k}. The intended semantics is that, when the current program counter is associated to \texttt{k}, describe. \item Untrusted code called by trusted compiler (directory \texttt{extracted/unstrusted}). The two main untrusted components in this \texttt{extracted/untrusted}). The two main untrusted components in this directory are \begin{itemize} which pseudo-registers need spilling; finally it colours it again using real registers and spilling slots as colours to assign to each pseudoregister either a spilling slot or a colours to assign to each pseudo-register either a spilling slot or a real register. \end{itemize} Clight sources. The reason for the slowness is currently under investigation. It is likely to be due to the quality of the extracted code (see Section~\ref{quality}). \autoref{quality}). \item The back-ends of both compilers do not handle external functions because we have not implemented a linker. The trusted compiler fails during compilation, while the untrusted compiler silently turns every external function call into a \texttt{NOP}. \item The untrusted compiler had ad-hoc options to deal with C files generated from a Lustre compiler. The ad-hoc code simplified the C files \item The untrusted compiler had \emph{ad hoc} options to deal with C files generated from a Lustre compiler. The \emph{ad hoc} code simplified the C files by avoiding some calls to external functions and it was adding some debugging code to print the actual reaction time of every Lustre node. The trusted compiler does not implement any ad-hoc Lustre option yet. The trusted compiler does not implement any \emph{ad hoc} Lustre option yet. \end{itemize} \subsection{Implementative differences w.r.t. the untrusted compiler}\label{quality} \subsection{Implementation differences w.r.t.\ the untrusted compiler} The code of the trusted compiler greatly differs from the code of the untrusted prototype. The main architectural difference is the one of a unified syntax (data-structure), semantics and pretty-printing for every back-end language. In order to accomodate the differences between the original languages, the In order to accommodate the differences between the original languages, the syntax and semantics have been abstracted over a number of parameters, like the type of arguments of the instructions. For example, early languages use pseudo-registers to hold data while late languages store data into real machine registers or stack locations. The unification of the languages have bringed registers or stack locations. The unification of the languages have brought a few benefits and can potentially bring new ones in the future: \begin{itemize} reducing to 1/6th the number of lemmas to be proved (6 is the number of back-end intermediate languages). 
Moreover, several back-end passes ---a pass between two alternative semantics for RTL, the RTL to ERTL pass and the ERTL to LTL pass--- transform a graph instance of the generic back-end intermediate language to another graph instance. The graph-to-graph transformation has also been generalized and parametrised over the pass-specific details. While the code saved in this way is not much, several significant lemmas are provided once and for all on the
\item Some passes and several proofs can be given in greater generality, allowing more reuse. For example, in the untrusted prototype the LTL to LIN pass was turning a graph language into a linearised language with the very same instructions and similar semantics. In particular, the two semantics shared the same execute phase, while fetching was different.
one of the two representations and a static single assignment (SSA) one. As a final observation, the insertion of another graph-based language after the LTL one is now made easy: the linearisation pass need not be redone for the new pass.
\item Pass commutation and reuse. Every pass is responsible for reducing a difference between the source and target languages. For example, the RTL to ERTL pass
However, following Coq's tradition, detection of the useless parts is not done according to the computationally expensive algorithm by Berardi~\cite{berardi1,berardi2}. Instead, the user decides which data structures should be assigned computational interest by declaring them in one of the \texttt{Type\_i} sorts of the Calculus of (Co)Inductive Constructions.
is employed, which is our choice for CerCo. Under this discipline, terms can be passed to type formers. For example, the data type for back-end languages in CerCo is parametrised over the list of global variables that may be referred to by the code. Another example is the type of vectors that is parametrised over a natural which is the size of the vector, or the type of vector tries which is parametrised over the fixed height of the tree and that can be read and updated only using keys (vectors of bits) whose length is the height of the trie. Functions that compute these dependent types also have to compute the new indexes (parameters) for the types, even if this information is used only for typing. For example, appending two vectors requires the computation of the length of the result vector just to type the result. In turn, this computation requires the lengths of the two vectors in input.
Therefore, functions that call append have to compute the length of the vectors to append even if the lengths will actually be ignored by the extracted code of the append function. In the literature there are proposals for allowing the user to more accurately distinguish computational from non computational arguments of functions. The proposals introduce two different types of $\lambda$-abstractions and \emph{ad hoc} typing rules to ensure that computationally irrelevant bound variables are not used in computationally relevant positions. An OCaml prototype that implements these ideas for Coq is available~\cite{implicitcoq}, but it is heavily bugged. We have not yet tried to do anything along these lines in Matita. To avoid modifying the system, another approach based on the explicit use of a non computational monad has also been proposed in the literature, but it introduces many complications in the formalization and it cannot be used in every situation. Improvement of the code extraction to more aggressively remove irrelevant code from code extracted from Matita is left as future work. At the moment, it seems that useless computations are indeed responsible
the upper-right corner of the Barendregt cube, can be re-typed in System $F_\omega$, the corresponding typed lambda calculus without dependent types~\cite{coctofomega}. The calculi implemented by Coq and Matita, however, are more expressive than CoC, and several type constructions have no counterparts in System $F_\omega$. Moreover, core OCaml does not even
both as term formers and type formers, according to the arguments that are passed to them. In System $F_\omega$, instead, terms and types are syntactically distinct. Extracting terms according to all their possible uses may be impractical because the number of uses is exponential in the number of arguments of sort $Type_i$ with $i \geq 2$.
\item Case analysis and recursion over inhabitants of primitive inductive
$F_\omega$. In the CerCo compiler we largely exploit type formers declared in this way, for example to provide the same level of type safety achieved in the untrusted compiler via polymorphic variants~\cite{garrigue98}. In particular, we have terms to syntactically describe as first class citizens the large number of combinations of operand modes of object code
fragment. Sometimes simple code transformations could be used to make the function typeable, but the increased extraction code complexity would outweigh the benefits.
\end{enumerate}
\end{itemize}
\item The two most recent versions of OCaml have introduced first class modules, which are exactly the feature needed for extracting code that uses records containing both types and term declarations.
However, the syntax required for first class modules is extremely cumbersome and it requires the explicit introduction of type expressions to make manifest the type
\texttt{-cost-acc} flag of the plug-in can be used to select the executable to be used for compilation.
The code of the plug-in has been modified w.r.t.\ D5.1 to address two issues. On the one side, the analysis of the stack-size consumption has been integrated into it. From the user point of view, time and space cost annotations and invariants are handled in a similar way. Nevertheless, we expect automated theorem provers to face more difficulties
Most C programs, and in particular those used in time critical systems, avoid recursive functions. On the other side, the plug-in has been updated to take advantage of the new Why3 platform.
\section{The \texttt{cerco} Wrapper}
The Why3 platform is a complete rewrite of the old Why2 one. The update has triggered several additional passages to enable the use of the cost plug-in in conjunction with the Jessie one and the automatic and interactive theorem provers federated by the Why3 platform, mainly because the Jessie plug-in still uses Why2. These passages, which required either tedious manual commands or a complicated makefile, have prompted us to write a script wrapping all the functionalities provided by the software described in this deliverable.
\begin{verbatim}
Syntax: cerco [-ide] [-untrusted] filename.c
\end{verbatim}
The C file provided is processed via the cost plug-in and then passed to the Why3 platform. The two available options control the following features.
\begin{itemize}
\item \verb+-ide+: launch the Why3 interactive graphical interface for fine-grained control over proving the synthesised program invariants. If not provided, the script will launch all available automatic theorem provers with a 5 second time-out, and just report failure or success.
\item \verb+-untrusted+: if it is installed, use the untrusted prototype rather than the trusted one (which is the default behaviour).
\end{itemize}
The minimum dependencies for the use of this script are
\begin{itemize}
\item either the trusted or the untrusted \verb+acc+ CerCo compiler;
\item both Why2 and Why3;
\item the cost and Jessie plug-ins.
\end{itemize}
However it is recommended to install as many Why3-compatible automatic provers as possible to maximise the effectiveness of the command. The provers provided by default were not very effective in our experience.
\section{Connection with other deliverables}
\end{itemize}
\bibliographystyle{unsrt}
\bibliography{report}
\end{document}
|
# Particle-in-cell simulation of plasma emission in solar radio bursts by T. M. Li et al.
Solar radio radiation is one of the most sensitive signatures of solar eruptions, and type III radio bursts in particular can provide clues about electron acceleration and propagation. Type III radio bursts are widely accepted to result from plasma emission, in which beam-driven Langmuir waves, whose presence is supported observationally, excite electromagnetic waves at the fundamental and harmonic frequencies through nonlinear wave coupling (Ginzburg & Zhelezniakov 1958). This mechanism has been further developed in analytical and numerical studies (Yoon 1997). However, it has sometimes been argued that the mechanism might not work efficiently when the plasma is more strongly magnetized. Moreover, third or even higher harmonic emission has been observed in some events, and it is not clear whether the plasma emission mechanism can explain these features.
Particle-in-cell (PIC) simulations provide a powerful approach to studying solar radio bursts from first principles. We performed 2.5D PIC simulations to investigate type III radio bursts in which electron beams with different pitch angles are injected into a strongly magnetized plasma (plasma beta much smaller than 1).
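For reference, the two standard quantities behind this setup (my gloss, not notation taken from the paper) are the electron plasma frequency and the plasma beta,

$$\omega_{pe} = \left(\frac{n_e e^2}{\varepsilon_0 m_e}\right)^{1/2}, \qquad \beta = \frac{n k_B T}{B^2 / 2\mu_0},$$

so $\beta \ll 1$ means that magnetic pressure dominates thermal pressure, i.e. a strongly magnetized plasma.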
## Simulation results
Figure 1 shows the dispersion diagrams of the electrostatic field Ex (left panels) for electron-beam pitch angles $\theta=0^{\circ}$, $\theta=30^{\circ}$, and $\theta=80^{\circ}$. The black lines represent the electron beam mode. The blue curves indicate the dispersion relation of the fundamental Langmuir waves. The black dashed curves indicate the dispersion relation of the harmonic Langmuir waves. It clearly illustrates that Langmuir waves at the fundamental (L) and harmonic frequencies (2L, 3L, …) are excited by the electron beam–plasma interaction, and that backward Langmuir waves ($L'$, $L^{\prime \prime}$, and $L^{\prime \prime \prime}$) are, for the first time, found to be generated up to the third harmonic frequency in the case of pitch angle $\theta=80^{\circ}$.
Figure 1Dispersion diagrams of the electrostatic field Ex (left panels) and the electromagnetic field Ez (right panels) in the cases of the following pitch angles (from top to bottom): $\theta=0^{\circ}$, $\theta=30^{\circ}$, and $\theta=80^{\circ}$.
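For orientation, the overplotted theoretical curves correspond to standard dispersion relations (textbook expressions, my gloss rather than formulas quoted from the paper): the beam mode $\omega = k v_b$ (with $v_b$ the beam velocity), the Bohm–Gross relation for Langmuir waves,

$$\omega^2 = \omega_{pe}^2 + 3 k^2 v_{th}^2,$$

and the electromagnetic mode $\omega^2 = \omega_{pe}^2 + k^2 c^2$, where $v_{th}$ is the electron thermal speed; the harmonic Langmuir branches lie near integer multiples of $\omega_{pe}$.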
To understand how these wave modes are produced, we compare in Figure 2 the energy distributions of the third harmonic Langmuir modes to the density fluctuations along the x-direction, in the case of pitch angle $\theta=80^{\circ}$. We find both density bumps and cavities; the density cavities generally correspond to the maxima of wave energy (left panel), and the anti-correlation coefficient is derived to be ~0.52 (right panel). This indicates that the reflection and scattering of the Langmuir wave packets by the density fluctuations probably play a key role in generating harmonic Langmuir modes and exciting electromagnetic modes.
Figure 2. Spatial distributions of the third harmonic Langmuir wave energy and the density fluctuations (left panel) and their anti-correlation (right panel) in the case of pitch angle $\theta=80^{\circ}$.
The right panels of Figure 1 display the dispersion diagrams of the electromagnetic field Ez. The solid and dashed curves indicate the theoretical linear fundamental and nonlinear second harmonic electromagnetic dispersion relations. Based on the above results, the radiation processes of the harmonic electromagnetic emission can be explicated as follows. The decay of a Langmuir wave produces a backward Langmuir wave and a fundamental electromagnetic wave, $L \rightarrow L' + F$, and then the coalescence of forward and backward Langmuir waves produces the second harmonic electromagnetic emission, i.e., $L+L' \rightarrow 2H$. For the third harmonic electromagnetic emission, three possible processes might be involved: $L + 2H \rightarrow 3H$, $L+2L \rightarrow 3H$, or $L+L'+L^{\prime \prime} \rightarrow 3H$ in the cases of pitch angles $\theta=30^{\circ}$ and $\theta=80^{\circ}$. In order to confirm which process is dominant in our simulations, we conducted a bi-spectral analysis, which can provide potential clues about nonlinear wave coupling. The results reveal the nonlinear coupling process $f_{pe}+f_{pe}\rightarrow 2f_{pe}$, indicating that the second harmonic electromagnetic emission probably corresponds to the coupling of forward and backward Langmuir waves. The higher harmonics may arise from wave coupling as well, that is, $f_{pe}+2f_{pe}\rightarrow 3f_{pe}$. The bi-spectral analysis at different pitch angles shows that the fundamental and harmonic waves are primary, and the other harmonics are all produced by their coalescence. It is therefore clear that the electron beam-driven Langmuir waves act as a kind of "pump" to generate electromagnetic waves via mode conversion and nonlinear wave coupling. There is another possibility that cannot be completely ruled out, i.e., the coalescence of electrostatic waves for the third harmonic electromagnetic emission, $L+L'+L^{\prime \prime} \rightarrow 3H$, especially in the cases of large electron pitch angles.
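As an aside, the bi-spectral step can be made concrete with the standard squared-bicoherence estimator (a minimal sketch under my own assumptions about segmenting and windowing; this is not the authors' code):

```python
import numpy as np

def bicoherence(x, nfft=256, noverlap=128):
    """Squared bicoherence b^2(f1, f2): normalized magnitude of the averaged
    bispectrum <X(f1) X(f2) X*(f1+f2)>. Values near 1 indicate phase-coupled
    triads such as f_pe + f_pe -> 2 f_pe."""
    step = nfft - noverlap
    win = np.hanning(nfft)
    half = nfft // 2
    f = np.arange(half)
    i1, i2 = np.meshgrid(f, f, indexing="ij")
    i3 = i1 + i2  # index of the sum frequency f1 + f2 (always < nfft)
    num = np.zeros((half, half), dtype=complex)
    d12 = np.zeros((half, half))
    d3 = np.zeros((half, half))
    for start in range(0, len(x) - nfft + 1, step):
        seg = x[start:start + nfft]
        X = np.fft.fft((seg - seg.mean()) * win)
        prod12 = X[i1] * X[i2]
        num += prod12 * np.conj(X[i3])       # accumulate the bispectrum
        d12 += np.abs(prod12) ** 2           # normalization terms
        d3 += np.abs(X[i3]) ** 2
    return np.abs(num) ** 2 / (d12 * d3 + 1e-30)
```

A peak of this estimator at (f_pe, f_pe) is the signature of the $f_{pe}+f_{pe}\rightarrow 2f_{pe}$ coupling discussed above.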
## Summary
Our numerical results indicate the following possible radiation processes of the radio emission: (1) The fundamental electromagnetic emission results from the scattering of Langmuir waves off thermal ions, namely, $L+i \rightarrow F$, due to the fact that a hot plasma was adopted in the simulations. Another possible explanation is the decay of Langmuir waves, namely, $L \rightarrow S+F$, where S represents the ion acoustic waves. (2) The second harmonic electromagnetic emission is generated first via the decay of Langmuir waves, namely, $L \rightarrow L'+F$, and then via the coupling of forward and backward Langmuir waves, namely, $L+L' \rightarrow 2H$. (3) The higher harmonic electromagnetic emission probably results from the coupling of Langmuir waves and harmonic electromagnetic waves, namely, $L+(n-1)H\rightarrow nH$ (Cairns 1988). It is argued that large pitch angles of the electron beam facilitate the density fluctuations, which further enhance the wave coupling, enabling the higher-order harmonic radio emissions.
Based on the recent paper: T. M. Li, C. Li, P. F. Chen and W. J. Ding. Particle-in-cell simulation of plasma emission in solar radio bursts, A&A 653, A169 (2021). DOI: https://doi.org/10.1051/0004-6361/202140973
References:
Cairns, I. H. 1988, J. Geophys. Res., 93, 858
Ginzburg, V. L., & Zhelezniakov, V. V. 1958, Sov. Astron., 2, 653
Yoon, P. H. 1997, Phys. Plasmas, 4, 3863
|
# Do kernel functions of integral transforms have any special properties?
...an integral transform is any transform $T$ of the following form: $$(Tf)(u)=\int^{t_2}_{t_1}K(t,u)f(t)dt$$ ...There are numerous useful integral transforms. Each is specified by a choice of the function $K$ of two variables, the kernel function or nucleus of the transform.
I was wondering if the kernel function has to have any special properties in order for the integral transform to work. On the bottom of the Wikipedia page, it shows some of the common kernel functions such as $\frac{e^{-iut}}{\sqrt{2\pi}}$ and $e^{-ut}$ for the Fourier Transform and the Laplace Transform. However, as an example, with the kernel function $$K(t,u)=\ln|t+u|,\quad t_1=0,\quad t_2=\infty$$ the integral transform would diverge for almost all $f(t)$. Or, for another example, the kernel function $$K(t,u)=ut,\quad t_1=-\infty,\quad t_2=0$$ also diverges for almost all $f(t)$. My thinking is that there has to be some property of the Fourier transform and Laplace transform kernels that makes them not diverge for almost all $f(t)$. So, my basic question is whether a certain property has to hold in order for the integral transform not to diverge to infinity for a given function $f(t)$.
Depends on what kind of $f$ you plan on applying it to. – Qiaochu Yuan Aug 24 '12 at 3:53
No, it doesn't. For example, $e^{t^2}$ doesn't have a Laplace transform. The problem is that if you want to take the integral over all of $\mathbb{R}$ then you need to specify some growth condition on $K$, and exactly what growth condition you need is dictated by the growth conditions you specify on the $f$ you want. – Qiaochu Yuan Aug 24 '12 at 4:12
So $K$ can't be or isn't easy to find given a group of functions for $f$ unless they all have the same growth conditions? Or am I misinterpreting what you said? – Envious Page Aug 24 '12 at 4:20
What I'm saying is that if you want some conditions on $K$ for the integral operator to be well-defined you have to specify a class of functions $f$ that you want to integrate $K$ against. Without this data, you can't say anything. If the functions $f$ you want to integrate against grow very quickly then $K$ needs to decay very quickly. – Qiaochu Yuan Aug 24 '12 at 4:26
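To make the growth-condition point concrete, here is a worked instance (my example, standard calculus) with the Laplace kernel $K(t,u)=e^{-ut}$, $t_1=0$, $t_2=\infty$:

$$\int_0^\infty e^{-ut}e^{at}\,dt = \int_0^\infty e^{(a-u)t}\,dt = \frac{1}{u-a} \quad\text{for } u>a,$$

and the integral diverges for $u \le a$. So any $f$ with $|f(t)| \le Ce^{at}$ has a Laplace transform on the half-line $u > a$, while $f(t)=e^{t^2}$ eventually outgrows $e^{ut}$ for every fixed $u$ and has none. Kernels that do not decay, such as $\ln|t+u|$ or $ut$, converge only against $f$ that decay fast enough on their own, e.g. $f(t)=e^{-t^2}$.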
|
# Derivation of wave equation and wave speed
Tags:
1. Mar 2, 2015
### alexao1111
Hi,
I'm trying to wrap my head around the derivation of the wave equation and wave speed.
For starters I'm looking at the derivation done on this site: http://www.animations.physics.unsw.edu.au/jw/wave_equation_speed.htm
I could maybe explain what I understand at this point
Given a string with linear density μ and applied tension T: dx represents the length of a small segment of the string, and the angles θ1 and θ2 are the angles of the string's slope at the two ends of the segment, as far as I have understood (correct me if I'm wrong). They are approximated through the small angle approximation such that sin θ ≅ θ and cos θ ≅ 1, and due to the definition of tangent, tan θ would also approximate to θ.
By the definition of Newton's law, F = ma, and since we want the force acting in the vertical direction, the force would be the mass multiplied by the vertical acceleration.
The net force would then be the sum of the vertical components of the tension (instead of taking the difference I interpret it as up being the positive axis and down negative, therefore the sum is equivalent to the net force).
For these reasons we could write the net vertical force as:
Fy = Tsinθ1 + Tsinθ2
As stated above, θ is approximately the slope of the string, and for that reason sin θ could be written as a first derivative with respect to x.
Fy = T(∂y/∂x)1 + T(∂y/∂x)2
This basically means that the resultant force is dependent on the difference in the slope of the two ends of the segment.
Carrying on with further use of Newton's second law: if I'm honest I am not sure what dm represents, but what it is equated to makes sense. The linear density μ multiplied by the length of the segment dx is the mass of the segment, so dm = μ dx. The vertical acceleration is the rate of change of the vertical velocity; as stated in the link it would be ∂²y/∂t². These two things together give mass times acceleration, μ dx (∂²y/∂t²). Then, equating it to the previous equation for the vertical force, it would be possible to write
Fy = T(∂y/∂x)1 + T(∂y/∂x)2
as
μ dx (∂²y/∂t²) = T(∂y/∂x)1 + T(∂y/∂x)2
Factoring out the tension and solving for acceleration;
∂²y/∂t² = (T/μ) ((∂y/∂x)1 + (∂y/∂x)2) / dx
The change in the first derivative between positions x and x+dx, divided by dx, is the rate of change of the first derivative, which by definition is again the second derivative. Therefore it can be written as;
∂²y/∂t² = (T/μ) ∂²y/∂x²
Now I think I get everything up to this point.
More specifically, I fell off a bit on the segment with "A solution to the wave equation". What I generally want to do is to derive the equation for wave speed (v = (T/μ)^(1/2)). I have issues understanding the equations etc. and also the idea of wave number; I've read that the angular frequency divided by the wave number is equal to the speed or the "wave velocity", but that's about it, and I do not really get how the equations are equated at the very end right before the final equation is derived:
I know this is a long post but I hope someone could explain the missing pieces for me.
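For what it's worth, the step usually taken at that point is the travelling-wave substitution (a standard textbook sketch, not part of the original thread): assume $y(x,t) = A\sin(kx - \omega t)$, where $k$ is the wave number and $\omega$ the angular frequency. Then

$$\frac{\partial^2 y}{\partial t^2} = -\omega^2 y, \qquad \frac{\partial^2 y}{\partial x^2} = -k^2 y,$$

and substituting into $\frac{\partial^2 y}{\partial t^2} = \frac{T}{\mu}\frac{\partial^2 y}{\partial x^2}$ gives

$$-\omega^2 y = \frac{T}{\mu}(-k^2 y) \;\Rightarrow\; \omega^2 = \frac{T}{\mu}k^2 \;\Rightarrow\; v = \frac{\omega}{k} = \sqrt{\frac{T}{\mu}}.$$

Since $k = 2\pi/\lambda$ and $\omega = 2\pi f$, this is also $v = f\lambda$.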
2. Mar 2, 2015
### Svein
The two forces are acting in different directions. Since $\frac{d}{d\theta}sin(\theta)=cos(\theta)$, you get $Fcos(\theta)d\theta=\mu ds \frac{\partial^{2}y}{\partial t^{2}}$. Now you need to connect up $cos(\theta)d\theta$ to $\frac{\partial y}{\partial x} dx$.
3. Mar 2, 2015
|
Difficult
Laws of Motion: When Will the Penguin Slip?
APPHMC-EJA1VZ
A penguin is situated on an icy slope which has shape $y(x) = 0.95 - x^2/12$ where $x$ is the distance away from the center of the slope (in meters).
If the penguin, which is initially at $x = 0$, starts to ever so slowly walk away towards the positive $x$ direction, at which $x$ coordinate will it lose traction and start to slip?
Ignore the effect of the penguin's motion, and assume the only factor relevant is where it is on the slope. Take the coefficient of friction between the penguin's feet and the icy slope to be $\mu_{static} = 0.2$.
A. $0.07\ \text{m}$
B. $0.1\ \text{m}$
C. $1.2\ \text{m}$
D. $2.4\ \text{m}$
E. $3\ \text{m}$
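A worked sketch of the intended solution (my addition; the original page only lists the choices): on a slope, static friction can hold the penguin as long as $\tan\theta \le \mu_{static}$, where $\tan\theta = |dy/dx|$ is the local slope. Here

$$\left|\frac{dy}{dx}\right| = \frac{x}{6}, \qquad \frac{x}{6} = \mu_{static} = 0.2 \;\Rightarrow\; x = 1.2\ \text{m},$$

which corresponds to choice C.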
|
# How would you simplify sqrt2 + sqrt3 + sqrt5?
Jun 8, 2016
You cannot.
#### Explanation:
The sum of square roots does not obey any particular algebraic simplification rule.
The only possibility is to factor the numbers and collect common factors: for example, sqrt2 + sqrt8 = sqrt2 + 2sqrt2 = 3sqrt2. But in this case the three numbers under the square roots are distinct primes, so they cannot be factored.
The conclusion is that you cannot simplify this expression beyond its original form.
|
# zbMATH — the first resource for mathematics
## Messing, William
Author ID: messing.william Published as: Messing, W.; Messing, William
Documents Indexed: 23 Publications since 1972, including 4 Books
#### Co-Authors
6 single-authored
4 Berthelot, Pierre
3 Breen, Lawrence S.
2 Katz, Nicholas Michael
2 Mazur, Barry
1 Artin, Michael
1 Breuil, Christophe
1 Cristante, Valentino
1 de Jong, Aise Johan
1 Fontaine, Jean-Marc
1 Gillet, Henri A.
1 Illusie, Luc
1 Jackson, Allyn
1 Kato, Kazuya
1 Ladegaillerie, Yves
1 Lichtenbaum, Stephen
1 Lochak, Pierre
1 Mumford, David Bryant
1 Murre, Jacob P.
1 Reiner, Victor
1 Schneps, Leila
1 Scholze, Peter
1 Sibuya, Yasutaka
1 Tate, John Torrence jun.
#### Serials
3 Lecture Notes in Mathematics
2 Advances in Mathematics
2 Notices of the American Mathematical Society
1 Bulletin de la Société Mathématique de France
1 Duke Mathematical Journal
1 Illinois Journal of Mathematics
1 Inventiones Mathematicae
1 Journal of Mathematics of Kyoto University
1 Tohoku Mathematical Journal. Second Series
1 Journal of Commutative Algebra
#### Fields
21 Algebraic geometry (14-XX)
3 Number theory (11-XX)
2 History and biography (01-XX)
2 Field theory and polynomials (12-XX)
2 Commutative algebra (13-XX)
2 Several complex variables and analytic spaces (32-XX)
1 General and overarching topics; collections (00-XX)
1 Ordinary differential equations (34-XX)
1 Geometry (51-XX)
1 Differential geometry (53-XX)
1 Algebraic topology (55-XX)
#### Citations contained in zbMATH
19 Publications have been cited 593 times in 455 Documents
The crystals associated to Barsotti-Tate groups: with applications to Abelian schemes. Zbl 0243.14013
Messing, William
1972
Some consequences of the Riemann hypothesis for varieties over finite fields. Zbl 0275.14011
Katz, Nicholas M.; Messing, William
1974
Universal extensions and one dimensional crystalline cohomology. Zbl 0301.14016
Mazur, Barry; Messing, William
1974
Théorie de Dieudonne cristalline. II. Zbl 0516.14015
Berthelot, P.; Breen, L.; Messing, W.
1982
$$p$$-adic periods and $$p$$-adic étale cohomology. Zbl 0632.14016
Fontaine, Jean-Marc; Messing, William
1987
Differential geometry of gerbes. Zbl 1102.14013
Breen, Lawrence; Messing, William
2005
Théorie de Dieudonné cristalline. III: Théorèmes d’équivalence et de pleine fidélité. (Dieudonné crystalline theory. III: Theorems of equivalence and of full faith). Zbl 0753.14041
Berthelot, Pierre; Messing, William
1990
Cycle classes and Riemann-Roch for crystalline cohomology. Zbl 0651.14014
Gillet, Henri; Messing, William
1987
Torsion étale and crystalline cohomologies. Zbl 1035.14005
Breuil, Christophe; Messing, William
2002
Théorie de Dieudonne cristalline. I. Zbl 0414.14014
Berthelot, Pierre; Messing, William
1979
Combinatorial differential forms. Zbl 1084.14510
Breen, Lawrence; Messing, William
2001
Syntomic cohomology and $$p$$-adic étale cohomology. Zbl 0792.14008
Kato, Kazuya; Messing, William
1992
Short sketch of Deligne’s proof of the hard Lefschetz theorem. Zbl 0321.14013
Messing, William
1975
Crystalline Dieudonné theory over excellent schemes. Zbl 0963.14008
de Jong, Aise Johan; Messing, William
1999
The universal extension of an abelian variety by a vector group. Zbl 0285.14009
Messing, William
1973
On the nilpotence of the hypergeometric equation. Zbl 0275.14002
Messing, William
1972
Travaux de Zink. Zbl 1197.14050
Messing, William
2007
Alexandre Grothendieck 1928–2014, part 2. Zbl 1338.01023
Artin, Michael (ed.); Jackson, Allyn (ed.); Mumford, David (ed.); Tate, John (ed.); Ladegaillerie, Yves; Lichtenbaum, Stephen; Lochak, Pierre; Mazur, Barry; Messing, William; Murre, Jacob; Schneps, Leila
2016
Differentials of the first, second and third kinds. Zbl 0321.14012
Messing, William
1975
#### Cited by 397 Authors
7 Andreatta, Fabrizio 7 Martins, João Nuno Goncalves Faria 7 Sämann, Christian 7 Zucchini, Roberto 6 Bertapelle, Alessandra 6 Hartl, Urs T. 5 de Jong, Aise Johan 5 Etesse, Jean-Yves 5 Hattori, Shin 5 Messing, William 5 Nizioł, Wiesława 5 Suh, Junecue 5 Trihan, Fabien 5 Wolf, Martin 5 Zink, Thomas 4 Coleman, Robert Frederick 4 Crew, Richard M. 4 Hamacher, Paul 4 Iovita, Adrian 4 Jannsen, Uwe 4 Jurčo, Branislav 4 Kato, Kazuya 4 Kisin, Mark 4 Lau, Eike 4 Milne, James Stuart 4 Ogus, Arthur 4 Picken, Roger F. 4 Pilloni, Vincent 4 Taylor, Richard Lawrence 4 Tian, Yichao 4 Vasiu, Adrian 4 Yu, Chia-Fu 3 Aldrovandi, Ettore 3 André, Yves 3 Baez, John C. 3 Barbieri Viale, Luca 3 Berthelot, Pierre 3 Besser, Amnon 3 Bhatt, Bhargav 3 Borger, James M. 3 Breuil, Christophe 3 Brylinski, Jean-Luc 3 Buium, Alexandru 3 Candilera, Maurizio 3 Chiarellotto, Bruno 3 Conrad, Brian 3 Esnault, Hélène 3 Gros, Michel 3 Helm, David 3 Howard, Benjamin 3 Hyodo, Osamu 3 Illusie, Luc 3 Kahn, Bruno 3 Katz, Nicholas Michael 3 Kim, Wansu 3 Kock, Anders 3 Kurihara, Masato 3 Lan, Kai-Wen 3 Madapusi Pera, Keerthi 3 Mantovan, Elena 3 Mazur, Barry 3 McLaughlin, Dennis A. 3 Mokrane, Abdellah 3 Morrow, Matthew 3 Ramachandran, Niranjan 3 Sato, Kanetomo 3 Schneider, Peter 3 Scholze, Peter 3 Soulé, Christophe 3 Srinivas, Vasudevan 3 Tabuada, Gonçalo 3 Viehmann, Eva 3 Wedhorn, Torsten 3 Wildeshaus, Jörg 3 Xiao, Liang 3 Zarkhin, Yuriĭ Gennad’evich 2 Artin, Michael 2 Bloch, Spencer J. 2 Bost, Jean-Benoît 2 Breen, Lawrence S. 2 Brinon, Olivier 2 Brion, Michel 2 Burns, David John 2 Cais, Bryden 2 Caruso, Xavier 2 Chambert-Loir, Antoine 2 Charles, François 2 Chatterjee, Saikat 2 Cheng, Chuangxun 2 Cirio, Lucio Simone 2 Cristante, Valentino 2 Deligne, Pierre René 2 Diamond, Fred 2 Faltings, Gerd 2 Gee, Toby 2 Gillet, Henri A. 2 Goresky, Robert Mark 2 Gross, Benedict Hyman 2 Gurney, Lance 2 Hida, Haruzo ...and 297 more Authors
#### Cited in 91 Serials
45 Inventiones Mathematicae 32 Compositio Mathematica 28 Duke Mathematical Journal 24 Journal of Number Theory 19 Advances in Mathematics 17 Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 17 Mathematische Annalen 13 Journal of Algebra 12 Journal of the American Mathematical Society 11 Annales de l’Institut Fourier 11 Mathematische Zeitschrift 10 Rendiconti del Seminario Matematico della Università di Padova 10 Journal of Algebraic Geometry 10 Annals of Mathematics. Second Series 9 Journal of Geometry and Physics 9 Bulletin de la Société Mathématique de France 9 Publications Mathématiques 9 Transactions of the American Mathematical Society 8 Communications in Mathematical Physics 7 Journal of High Energy Physics 6 Documenta Mathematica 6 Forum of Mathematics, Sigma 5 Israel Journal of Mathematics 5 Journal of Mathematical Physics 5 Journal of Pure and Applied Algebra 5 Manuscripta Mathematica 5 Journal de Théorie des Nombres de Bordeaux 5 Comptes Rendus. Mathématique. Académie des Sciences, Paris 4 Journal für die Reine und Angewandte Mathematik 4 Journal of the Institute of Mathematics of Jussieu 4 Algebra & Number Theory 3 Nagoya Mathematical Journal 3 Pacific Journal of Mathematics 3 Bulletin of the American Mathematical Society. New Series 3 Finite Fields and their Applications 3 Journal of the European Mathematical Society (JEMS) 2 Nuclear Physics. B 2 Mathematische Nachrichten 2 Michigan Mathematical Journal 2 Proceedings of the American Mathematical Society 2 Proceedings of the Japan Academy. Series A 2 Publications of the Research Institute for Mathematical Sciences, Kyoto University 2 Tohoku Mathematical Journal. Second Series 2 Forum Mathematicum 2 Differential Geometry and its Applications 2 Applied Categorical Structures 2 Selecta Mathematica. New Series 2 International Journal of Geometric Methods in Modern Physics 2 Japanese Journal of Mathematics. 3rd Series 2 Science China. Mathematics 2 Kyoto Journal of Mathematics 1 International Journal of Modern Physics A 1 Communications in Algebra 1 General Relativity and Gravitation 1 Jahresbericht der Deutschen Mathematiker-Vereinigung (DMV) 1 Letters in Mathematical Physics 1 Rocky Mountain Journal of Mathematics 1 Arkiv för Matematik 1 Reviews in Mathematical Physics 1 Fortschritte der Physik 1 Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV 1 Archiv der Mathematik 1 Canadian Journal of Mathematics 1 Canadian Mathematical Bulletin 1 Functional Analysis and its Applications 1 Journal of Functional Analysis 1 Journal of Soviet Mathematics 1 Kodai Mathematical Journal 1 Meccanica 1 Notre Dame Journal of Formal Logic 1 Osaka Journal of Mathematics 1 Theoretical Computer Science 1 Tokyo Journal of Mathematics 1 SIAM Journal on Algebraic and Discrete Methods 1 $$K$$-Theory 1 Annals of Physics 1 Mémoires de la Société Mathématique de France. Nouvelle Série 1 Indagationes Mathematicae. New Series 1 Annales de la Faculté des Sciences de Toulouse. Mathématiques. Série VI 1 Bulletin des Sciences Mathématiques 1 Izvestiya: Mathematics 1 Algebras and Representation Theory 1 LMS Journal of Computation and Mathematics 1 Algebraic & Geometric Topology 1 Central European Journal of Mathematics 1 SIGMA. Symmetry, Integrability and Geometry: Methods and Applications 1 Forum of Mathematics, Pi 1 Journal de l’École Polytechnique – Mathématiques 1 Annals of $$K$$-Theory 1 Transactions of the American Mathematical Society. Series B 1 Higher Structures
#### Cited in 34 Fields
352 Algebraic geometry (14-XX)
187 Number theory (11-XX)
32 Differential geometry (53-XX)
29 Category theory; homological algebra (18-XX)
29 Quantum theory (81-XX)
21 $$K$$-theory (19-XX)
20 Commutative algebra (13-XX)
17 Algebraic topology (55-XX)
14 Group theory and generalizations (20-XX)
14 Topological groups, Lie groups (22-XX)
14 Global analysis, analysis on manifolds (58-XX)
12 Manifolds and cell complexes (57-XX)
11 Associative rings and algebras (16-XX)
10 Field theory and polynomials (12-XX)
9 Several complex variables and analytic spaces (32-XX)
6 Mechanics of particles and systems (70-XX)
6 Relativity and gravitational theory (83-XX)
5 Geometry (51-XX)
4 Mathematical logic and foundations (03-XX)
4 Nonassociative rings and algebras (17-XX)
3 History and biography (01-XX)
2 Ordinary differential equations (34-XX)
2 Dynamical systems and ergodic theory (37-XX)
2 Functional analysis (46-XX)
1 General and overarching topics; collections (00-XX)
1 Combinatorics (05-XX)
1 Order, lattices, ordered algebraic structures (06-XX)
1 Special functions (33-XX)
1 Partial differential equations (35-XX)
1 Difference and functional equations (39-XX)
1 Fluid mechanics (76-XX)
1 Optics, electromagnetic theory (78-XX)
1 Statistical mechanics, structure of matter (82-XX)
1 Information and communication theory, circuits (94-XX)
|
# Recent Changes
## Updates since 2014-05-16 20:05 UTC
2015-09-03
2015-08-29
2015-08-28
2015-08-27
2015-08-25
• 19:26 UTC (diff) (history) Comments on 2015-08-24 Russian Maps . . . . The situation in Russia, in the Ukraine, and all the other former Republics of the Soviet Union makes me dread the coming years. I'm just hoping that nobody does something really stupid. For now, we'll just note that the US wants to send F22 fighters to Europe. [en]
2015-08-24
2015-08-21
2015-08-18
• 08:51 UTC (diff) (history) Comments on 2015-07-30 Hayao Miyazaki . . . . Soon I will have to write another page on this topic. In short: From up on Poppy Hill and Whisper of the Heart do entirely without fantastical elements and are heart-warming teenage love stories that I liked very much. [de]
2015-08-17
2015-08-16
2015-08-12
• 20:53 UTC (diff) (history) List of Open Books . . . . - resurrected Richie's additions (rev 457 – conflicting edits??) - made Travels with Charley a suggestion - removed August from 'next up' list [en]
2015-08-11
2015-08-09
2015-08-03
2015-07-30
2015-07-29
2015-07-26
2015-07-24
2015-07-21
2015-07-17
• 20:10 UTC (diff) (history) 2015-12 Book Club . . . . What: The Golden Notebook by Doris Lessing When: 9 December, 19:30 – RSVP on Meetup (optional) Where: [Bistro Lochergut] (tram 2+3 'Lochergut') The landmark novel of the Sixties – a powerful account of a woman searching for her personal, political and professional identity while facing rejection… [en]
• 20:02 UTC (new) (history) 2015-11 Book Club . . . . New meeting entry [en]
• 20:00 UTC (new) (history) 2015-10 Book Club . . . . New meeting entry [en]
• 19:59 UTC (new) (history) 2015-09 Book Club . . . . New meeting entry [en]
• 19:55 UTC (new) (history) 2015-08 Book Club . . . . New meeting entry [en]
2015-07-16
2015-07-14
• 09:52 UTC (new) (history) 2015-07-14 Monsters . . . . How we came to live here… [en]
• 08:26 UTC (new) (history) 2015-07-14 No Copyright . . . . This notion of a public domain, of democratic access to a common cultural inheritance on which no particular claim could be made, bore the traces not of Diderot, but of Condorcet's faith that truths were given in nature and, although mediated through individual minds, belonged ultimately to all. [en]
2015-07-13
2015-07-12
2015-07-06
2015-07-01
2015-06-30
2015-06-29
2015-06-26
2015-06-25
2015-06-24
2015-06-22
2015-06-21
2015-06-16
2015-06-15
2015-06-14
2015-06-12
2015-06-11
2015-06-08
• 07:18 UTC (diff) (history) Comments on 2015-05-14 Podcasts . . . . What else I am currently listening to: * The First World War in 261 weeks (261 biographical vignettes on the First World War) * The Moby-Dick Big Read (audiobook with a different reader for each chapter) * 99% Invisible (design, architecture and other invisible influences on our lives) * Planet… [de]
2015-06-07
2015-06-03
2015-06-02
• 21:27 UTC (diff) (history) Comments on 2015-03-01 GPG makes me want to cry . . . . Now that I want to add a new identity to my key, I find that I cannot. Using my super USB stick: {{{ $gpg --import public-ACECFEAE.asc secret-ACECFEAE.asc gpg: key ACECFEAE: "Alex Schroeder <[email protected]>" not changed gpg: key ACECFEAE: already in secret keyring gpg: Total number processed:… [en]
2015-05-31
2015-05-29
2015-05-21
2015-05-20
2015-05-17
• 18:50 UTC (diff) (history) Comments on Raspberry Pi als privater Email Server . . . . Alternatively, Mailbox.org (https:/mailbox.org) also works; behind it is a company that has been working with Linux for years. I think both are trustworthy, because it would be rather stupid to go to such lengths if you actually wanted to get at the data. But of course that has to… [de, en]
2015-05-15
2015-05-14
2015-05-11
2015-05-09
2015-05-07
2015-05-05
2015-05-04
• 07:47 UTC (new) (history) Comments on 2015-05-03 Domain Game Goals . . . . [On Google+], Andy wondered about moving some of the 'beancountery' aspects of domain play from the table to the downtime between sessions, to email, to G+ or to the referee playing 'solitaire'. This is a good question. How much indeed? What I can say is that there is very little interaction between… [en]
2015-05-03
2015-04-30
2015-04-28
2015-04-23
• 09:34 UTC (diff) (history) Comments on 2013-05-14 When To Roll . . . . Interestingly, Arnold K. also appears to use this for Knowledge checks (Just-In-Time Compilation). Players express a plan (set this jelly bear on fire), and at some later point they throw the torch and roll their Intelligence check. If they succeed, then jelly bears do in fact catch fire. If they… [en]
• 07:43 UTC (diff) (history) Comments on 2012-03-24 How Emacs Wiki Works . . . . Perhaps you are right and eventually Stackoverflow will degenerate into a "worse is better" than -> Usenet (en). Old answers will disappear. The same questions will be asked again and again. But at least we'll have rudimentary scoring. Perhaps this would be a simple thing for a wiki to add. Like… [en]
2015-04-20
2015-04-17
• 07:32 UTC (new) (history) 2015-04-17 Gate . . . . The shock and awe powers, however, basically just describe the kind of total defeat you'll experience if you make the wrong choices, if you don't prepare for your battles. [en]
2015-04-14
2015-04-12
2015-04-11
• 20:35 UTC (diff) (history) Comments on 2015-04-11 Buying CDs . . . . Note to self: Silencio, "Silencio is a meditative collection of 20th-century works for string orchestra, including works by Arvo Pärt, Philip Glass, and Vladimir Martynov."
• 13:18 UTC (new) (history) 2015-04-11 Buying CDs . . . . Today I was in a real, physical CD shop, browsing CDs. I looked at the albums with works by Arvo Pärt. I looked at recordings of historical organs. I wondered about ancient music by Jordi Savall. About liturgical music. And then I didn't want to ask the sales people for advice and decided I'd look… [en]
2015-04-09
2015-04-08
2015-04-06
2015-04-04
2015-04-02
2015-04-01
2015-03-31
2015-03-29
2015-03-26
2015-03-23
• 22:29 UTC (new) (history) 2015-03-23 Sagas of the Icelanders . . . . The game ended with the players securing half the whale for Snorri's clan, one non-player character dead, a thirteen year old young man impressed, the low intensity fight with the Tindur clan continuing and new found friendship with the Halfdann clan. [en]
2015-03-22
2015-03-18
2015-03-16
2015-03-15
2015-03-12
2015-03-11
• 09:47 UTC (diff) (history) Comments on 2015-03-10 Fighting Wiki Spam . . . . I think we can talk about two different kinds of attacks: # a long term infiltration under the radar # a massive attack on multiple levels Both attacks need to be stopped by fish-bowling the wiki: making it read-only. In order to detect long term infiltration, you need constant peer review. I… [en]
2015-03-10
• 15:43 UTC (diff) (history) Comments on 2008-10-29 YSlow . . . . I was confused by all the stuff I had added. I commented the entire section again. I really want to make sure I understand what's going on. At the moment, my setup added two cache control headers and prevented Etags from working. What a mess. Before: {{{ alex@kallobombus:~$ curl -I… [en]
• 09:37 UTC (new) (history) 2015-03-10 Fighting Wiki Spam . . . . I recently wrote something about fighting Emacs Wiki spam on Google+. [en]
2015-03-09
2015-03-08
• 22:18 UTC (diff) (history) 2015-03-08 Jupiter Ascending . . . . The action scenes were long and they involved a lot of woman-as-victim, the biology was crazy (speaking as a zoologist, here) — but the visual design was marvelous, the soundtrack worked for me, the sets, the costumes, the masks, all of it wonderful. I liked it! Tags: [en]
2015-03-06
2015-03-04
2015-03-02
• 12:49 UTC (new) (history) 2015-03-02 RPG Blogs . . . . So, with blog rolls decreasing in importance and a lot of the RPG talk having moved to Google+, what are the blogs people recommend? [en]
2015-03-01
2015-02-26
2015-02-25
2015-02-24
2015-02-23
2015-02-22
2015-02-20
2015-02-19
2015-02-17
2015-02-16
2015-02-15
2015-02-14
2015-02-11
2015-02-10
• 11:25 UTC (diff) (history) 2015-02-10 Gridmapper . . . . Gridmapper is my take on Daniel R. Collins' original GridMapper 1.0. I recently saw a link to ANAmap. It's just as simple to use and very beautiful. My Gridmapper does the following: * the output is scalable vector graphics (SVG) * it's free software (the SVG file contains all the Javascript… [en]
2015-02-09
2015-02-07
2015-02-06
2015-01-21
2015-01-20
2015-01-18
• 22:17 UTC (diff) (history) Comments on Emacs Wiki Migration . . . . Yeah that's true. You'd need a github account to edit the source. Depending on how much content is added anonymously this might be ok (or might not be). I love rtfd because it's got awesome search and navigation is also easy. [en]
2015-01-16
2015-01-13
2015-01-11
2015-01-10
2015-01-08
2015-01-06
2015-01-03
2014-12-31
2014-12-30
2014-12-28
2014-12-27
2014-12-26
2014-12-25
2014-12-24
2014-12-23
2014-12-22
2014-12-21
2014-12-20
2014-12-19
2014-12-18
2014-12-17
2014-12-14
2014-12-12
2014-12-11
2014-12-10
2014-12-08
2014-12-03
• 22:56 UTC (diff) (history) Comments on 2014-12-02 Fluidity in Rules and Setting . . . . I can confirm that even ShadowRun is capable of supporting genre drift. It is a kitchen-sink setting with future technology, magic, and the matrix as the three big focal points. In the SR campaign I ran two decades ago, we even had different players focus on different parts of the genre, including… [en]
2014-12-02
2014-11-30
2014-11-25
2014-11-14
2014-11-13
2014-11-11
2014-11-10
2014-11-08
2014-11-05
• 21:50 UTC (diff) (history) Comments on 2014-11-05 Donations . . . . I actually like the policy of allowing reasonable ads. After all, whenever I use free services, I provided with some service. I'd like the web to work without ads, but there are enough services I use which rely on ads. I mostly disagree with tracking, invasion of my privacy, animated ads, large… [en]
• 15:00 UTC (new) (history) 2014-11-05 Donations . . . . Recently I made some $10 donations, and I'm planning on some more. * Mozilla * This American Life * Ad Block And I regularly donate about$10/month to the EFF, the FSF and the FSFE. Tags: [en]
2014-11-02
2014-11-01
• 00:20 UTC (new) (history) Magische Gegenstände . . . . I am trying to write a list of magic items that fit my campaign exactly. <journal search tag "Magische Gegenstände">
2014-10-31
2014-10-30
2014-10-29
• 15:33 UTC (new) (history) 2014-10-29 Skills . . . . A recent discussion on Google+ prompted me to explain how I use skills: I usually don't use skills and backgrounds explicitly in my B/X game but rather I ask the player: do you think your character would know this? Is this something they have done in the past? If so, auto success and write it on… [en]
2014-10-24
2014-10-23
• 10:28 UTC (diff) (history) Comments on 2014-09-20 . . . . Unfortunately there are still a lot of boxes standing around, but it is slowly becoming more livable. Our problem really is that we have too many things… [de]
2014-10-15
• 10:34 UTC (diff) (history) Comments on 2014-10-15 Con Games . . . . Yes, practice sessions are definitely a good thing. In my case, I love running a few indie games that keep presenting the same characters and the same situations:Lady Blackbird, Darkening Skies, The Mountain Witch. These games make it particularly easy for me. I've run them before, they're great.… [en]
• 08:45 UTC (new) (history) 2014-10-15 fail2ban . . . . First public disclosure 2014-09-24, activity starting 2014-10-06. [en]
• 08:16 UTC (new) (history) 2014-10-15 Adventure Prep . . . . All in all this took maybe an hour or two of scribbling and resulted in a three hour session. Not very efficient by my old standard (1h prep for a 4h game) but I guess I enjoy this sort of prep. [en]
• 06:31 UTC (new) (history) 2014-10-15 Con Games . . . . Somebody was recently wondering about con games on Google+ in a private post and I left the following comment: * bring pre gens, have pictures of the characters or a strong, bold three word characterization which allows people to pick characters without reading the entire character sheet * start… [en]
2014-10-14
2014-10-09
2014-09-25
2014-09-20
• 20:49 UTC (new) (history) 2014-09-20 . . . . We have moved. Further out, where the people are older, where the hipsters and yuppies are missing, where there is less "two incomes, no kids" – so practically into foreign territory. Somehow it is also nice to be greeted in the neighbourhood. In Kreis 4 the neighbours already knew us in the… [de]
2014-09-14
2014-09-13
2014-09-12
2014-09-11
2014-09-10
2014-09-03
2014-08-26
2014-08-25
2014-08-23
2014-08-22
2014-08-21
• 08:13 UTC (new) (history) 2014-08-21 No Dice . . . . Just recently, I looked over my reaction roll table and started thinking about adding some more words to help me improvise better. [en]
2014-08-16
• 09:37 UTC (diff) (history) Comments on 2014-08-14 Who to Follow . . . . I don't. 😥 That's the other side of this coin. I still remember when I was a teenager and later, in my early twenties, I subscribed to various daily and weekly newspapers. And it was just so much to read! I was frustrated, and so I canceled many of my subscriptions. These days, the same is true for… [en]
2014-08-15
2014-08-14
2014-08-12
• 07:43 UTC (new) (history) Comments on 2012-12-20 Monsters . . . . In a fight, I imagine the shroom lord to jump from mushroom to mushroom like a magic hare, grow to the size of a rhino, manifest swirling colors, huge eyes with spiral patterns, ethereal hands growing from his back and caressing you as he attempts to grab your head and stick his long tongue through… [en]
2014-08-11
2014-08-10
2014-08-06
2014-08-01
2014-07-30
2014-07-28
2014-07-26
2014-07-25
• 22:01 UTC (diff) (history) Comments on 2014-07-18 Kleriker . . . . And now a blog post in which clerics are praised: Misunderstood and Improperly Played - the Cleric. : "the roles of the Big Four are - fighter is physical offense, magic-user is magical offense, thief is scouting and intelligence, and the cleric is physical and magical defense. : Or,… [de, en]
2014-07-21
2014-07-20
2014-07-18
2014-07-17
2014-07-16
2014-07-15
2014-07-13
2014-07-12
2014-07-11
2014-07-10
2014-07-08
• 14:56 UTC (new) (history) 2014-07-08 Character Names . . . . For a while, my character generator used a name list based on German saints and an English list of saints. Later, I switched to a different list of names but now I feel that maybe I should have kept it. So, I'm posting it. 1664 items… I dunno, roll 1d1664? Or help me pad the list in order to get to… [en]
2014-07-07
• 18:31 UTC (diff) (history) Comments on 2014-06-20 Rewarding a Thing . . . . I am going to respond to only one thing, your question about a sandbox. The way I see it (and this is just one point of view), in a sandbox game, the players are essentially "exploring" an unknown map. In some cases this is the literal act of mapping out unrevealed hexes on a map. In other cases… [en]
2014-07-06
2014-07-05
2014-07-03
2014-07-02
2014-07-01
2014-06-30
• 14:08 UTC (new) (history) Comments on 2014-06-30 Mapping . . . . I think a lot of players/DMs worry too much about the exact layout. A brief shorthand description works fine if the players get it and it doesn't cause them to blunder into situations or tactics that a more vivid description would eliminate. The "you are here" flashcard sketch saves a lot of time and… [en]
• 09:39 UTC (new) (history) 2014-06-30 Mapping . . . . Don't train your players to care about the boring stuff. [en]
2014-06-29
2014-06-28
2014-06-26
2014-06-25
2014-06-24
• 14:49 UTC (new) (history) 2014-06-24 Emacs and Dice . . . . If I roll one of the five standard gaming dice (d4, d6, d8, d10, d12) and you also roll one of those, what are my chances of rolling higher, equal or lower than you, for each possible combination of dice. [de, en]
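A quick sketch of the computation this entry asks about (the entry itself is about doing it in Emacs; this Python version is only an illustration, and all names in it are mine):
{{{
from fractions import Fraction
from itertools import product

DICE = [4, 6, 8, 10, 12]  # the five standard gaming dice

def versus(m, n):
    """Chances that a dm roll is higher, equal or lower than a dn roll."""
    pairs = list(product(range(1, m + 1), range(1, n + 1)))
    total = len(pairs)                      # m * n equally likely outcomes
    higher = sum(a > b for a, b in pairs)
    equal = sum(a == b for a, b in pairs)
    return (Fraction(higher, total), Fraction(equal, total),
            Fraction(total - higher - equal, total))

for m, n in product(DICE, DICE):
    h, e, l = versus(m, n)
    print(f"d{m} vs d{n}: higher {h}, equal {e}, lower {l}")
}}}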
• 10:36 UTC (new) (history) 2014-06-24 Distractions . . . . The techniques to be used against improvising the entire adventure or railroading the party through the vision I had earlier that day… [en]
2014-06-23
2014-06-22
2014-06-20
2014-06-19
2014-06-17
• 09:51 UTC (new) (history) 2014-06-17 Isotope . . . . We spent half an hour after the game talking about it, comparing it to Apocalypse World (which was deemed longer and harder to get into for little benefit), Lady Blackbird (which was deemed to promise better character development via keys and locked tags) and Traveller (which was deemed too similar… [en]
2014-06-12
2014-06-07
2014-06-04
2014-06-03
• 14:32 UTC (diff) (history) Comments on 2014-06-03 Tiribazos . . . . That was unfortunately already it! On game night, as mentioned, I had moved the action to Korinthos. The players arrive, insult the captain of the Spartans so badly that he throws his spear at a character, the envoy of the Spartans shows up, they part ways, the group goes to the king, the… [de]
• 08:03 UTC (new) (history) 2014-06-03 Tiribazos . . . . A one-shot for Fate [de]
2014-05-31
2014-05-28
• 06:07 UTC (diff) (history) Comments on 2014-05-22 Stat Blocks . . . . Same here, sometimes I'll have extra stuff happen on a natural 20. If monsters have a breath attack they don't use every round, I also like to give the chance for them to use it (since no d20 is rolled). I like 50% or 1–3/6 better than "every 1d4 rounds". In your stat block the "Atk" label is the… [en]
2014-05-26
• 07:06 UTC (diff) (history) Comments on 2013-07-02 Initiative . . . . Recently Robert Fisher had posted a video talking about weapon length to Google+. My first comment was this: «Part of the problem is that characters survive hit point loss without adverse effect. That's why a "initiative using weapon length in the first round" isn't a great rule for D&D.»… [en]
2014-05-22
• 08:50 UTC (new) (history) 2014-05-22 Stat Blocks . . . . Perhaps if more people posted their favorite monster stat notation and argued for their differences, we could start building said "rough consensus". [en]
|
# A Study on a Method for Drawing Lineament Density Maps Considering the REA
• Kim, Gyu-Beom (Korea Water Resources Corporation) ;
• Cho, Min-Jo (Korea Institute of Geoscience and Mineral Resources) ;
• Lee, Kang-Kun (School of Earth and Environmental Sciences, Seoul National University)
• Published : 2003.04.01
#### Abstract
Lineament density maps can be used for the quantitative evaluation of the relationship between lineaments and groundwater occurrence. There are several kinds of lineament density maps, including lineament length density, lineament cross-points density, and lineament counts density maps. This paper reports the usefulness of the representative elementary area (REA) concept for lineament analysis. This concept refers to the area of the unit circle used to calculate the lineament density factors distributed within the circle: length, counts and cross-points counts. The circle is a unit circle within which the sum of the lineament length, the lineament counts and the number of cross-points is calculated. The REA is needed to obtain the most representative lineament density map prior to the analysis of the relation between lineaments and groundwater well yield or other groundwater characteristics. A basic lineament map for the Yongsangang-Seomjingang watershed of Korea, drawn from aerial black-and-white photographs at 1:20,000 scale, was used for demonstrating the concept. From this study, the conclusions were as follows: (1) the REA concept can be efficiently applied to lineament density analysis and mapping; (2) for the whole Yongsangang-Seomjingang watershed, which has 6,502 lineaments with an average lineament length of 3.3 km, the lower limits of each REA used for drawing the three density maps were about 1.77 $\textrm{km}^2$ (r=750 m) for lineament length density, 7.07 $\textrm{km}^2$ (r=1,500 m) for lineament counts density, and 4.91 $\textrm{km}^2$ (r=1,250 m) for lineament cross-points density, respectively; (3) the lineament densities are inversely proportional to the size of the REA, and the REA can be calculated with this inverse linear regression model; (4) if the average lineament density values for the whole study area are known, the most accurate density maps can be drawn using the REAs obtained from each linear regression model; and (5) but critical attention should be paid when drawing lineament counts density and lineament cross-points density maps because.
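To make the moving-circle computation behind these density maps concrete, here is a minimal sketch of the lineament length density step (the helper below, its shapely-based approach, and the input format are illustrative assumptions, not the paper's code):

```python
import numpy as np
from shapely.geometry import LineString, Point

def length_density(lineaments, xs, ys, r):
    """Lineament length density on a grid: for every grid node, the total
    lineament length falling inside the unit circle of radius r (the REA
    candidate), divided by the circle area."""
    lines = [LineString(coords) for coords in lineaments]
    area = np.pi * r ** 2
    density = np.zeros((len(ys), len(xs)))
    for i, y in enumerate(ys):
        for j, x in enumerate(xs):
            circle = Point(x, y).buffer(r)  # the moving unit circle
            total = sum(line.intersection(circle).length for line in lines)
            density[i, j] = total / area
    return density
```

Repeating this for several radii and regressing the resulting densities against the circle area is then one way to locate the REA lower limit described above.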
|
#### Vol. 2, No. 5, 2009
ISSN: 1944-4184 (e-only), 1944-4176 (print)
Markov partitions for hyperbolic sets
### Todd Fisher and Himal Rathnakumara
Vol. 2 (2009), No. 5, 549–557
##### Abstract
We show that if $f$ is a diffeomorphism of a manifold to itself, $\Lambda$ is a mixing (or transitive) hyperbolic set, and $V$ is a neighborhood of $\Lambda$, then there exists a mixing (or transitive) hyperbolic set $\tilde{\Lambda}$ with a Markov partition such that $\Lambda \subset \tilde{\Lambda} \subset V$. We also show that in the topologically mixing case the set $\tilde{\Lambda}$ will have a unique measure of maximal entropy.
##### Keywords
Markov partitions, hyperbolic, entropy, specification, finitely presented
##### Mathematical Subject Classification 2000
Primary: 37A35, 37D05, 37D15
##### Milestones
Received: 13 January 2009
Revised: 1 September 2009
Accepted: 28 October 2009
Published: 13 January 2010
Communicated by Kenneth S. Berenhaut
##### Authors
Todd Fisher, Department of Mathematics, Brigham Young University, Provo, UT 84602, United States, http://math.byu.edu/~tfisher/
Himal Rathnakumara, Department of Mathematics, Brigham Young University, Provo, UT 84602, United States
|
## G = C3×D10⋊Q8, order 480 = 2⁵·3·5
### Direct product of C3 and D10⋊Q8
Series: Derived Chief Lower central Upper central
Derived series C1 — C2×C10 — C3×D10⋊Q8
Chief series C1 — C5 — C10 — C2×C10 — C2×C30 — D5×C2×C6 — D5×C2×C12 — C3×D10⋊Q8
Lower central C5 — C2×C10 — C3×D10⋊Q8
Upper central C1 — C2×C6 — C3×C4⋊C4
Generators and relations for C3×D10⋊Q8
G = < a,b,c,d,e | a3=b10=c2=d4=1, e2=d2, ab=ba, ac=ca, ad=da, ae=ea, cbc=dbd-1=ebe-1=b-1, dcd-1=b3c, ece-1=b8c, ede-1=d-1 >
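As a quick cross-check of this presentation (a sketch using SymPy's finitely presented groups, assuming SymPy is available; the page's own GAP/Magma/Sage snippets appear further below), coset enumeration can confirm that these relations define a group of order 480. Enumerating 480 cosets may take a few seconds.

```python
from sympy.combinatorics.free_groups import free_group
from sympy.combinatorics.fp_groups import FpGroup

F, a, b, c, d, e = free_group("a, b, c, d, e")

# relators encoding a^3 = b^10 = c^2 = d^4 = 1, e^2 = d^2, a central,
# cbc = dbd^-1 = ebe^-1 = b^-1, dcd^-1 = b^3 c, ece^-1 = b^8 c, ede^-1 = d^-1
G = FpGroup(F, [
    a**3, b**10, c**2, d**4, e**2 * d**-2,
    a*b*a**-1*b**-1, a*c*a**-1*c**-1, a*d*a**-1*d**-1, a*e*a**-1*e**-1,
    c*b*c*b, d*b*d**-1*b, e*b*e**-1*b,
    d*c*d**-1*c**-1*b**-3, e*c*e**-1*c**-1*b**-8, e*d*e**-1*d,
])
print(G.order())  # expected: 480
```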
Subgroups: 480 in 148 conjugacy classes, 66 normal (58 characteristic)
C1, C2, C2, C3, C4, C22, C22, C5, C6, C6, C2×C4, C2×C4, Q8, C23, D5, C10, C12, C2×C6, C2×C6, C15, C22⋊C4, C4⋊C4, C4⋊C4, C22×C4, C2×Q8, Dic5, Dic5, C20, D10, D10, C2×C10, C2×C12, C2×C12, C3×Q8, C22×C6, C3×D5, C30, C22⋊Q8, Dic10, C4×D5, C2×Dic5, C2×C20, C22×D5, C3×C22⋊C4, C3×C4⋊C4, C3×C4⋊C4, C22×C12, C6×Q8, C3×Dic5, C3×Dic5, C60, C6×D5, C6×D5, C2×C30, C10.D4, D10⋊C4, C5×C4⋊C4, C2×Dic10, C2×C4×D5, C3×C22⋊Q8, C3×Dic10, D5×C12, C6×Dic5, C2×C60, D5×C2×C6, D10⋊Q8, C3×C10.D4, C3×D10⋊C4, C15×C4⋊C4, C6×Dic10, D5×C2×C12, C3×D10⋊Q8
Quotients: C1, C2, C3, C22, C6, D4, Q8, C23, D5, C2×C6, C2×D4, C2×Q8, C4○D4, D10, C3×D4, C3×Q8, C22×C6, C3×D5, C22⋊Q8, C22×D5, C6×D4, C6×Q8, C3×C4○D4, C6×D5, C4○D20, D4×D5, Q8×D5, C3×C22⋊Q8, D5×C2×C6, D10⋊Q8, C3×C4○D20, C3×D4×D5, C3×Q8×D5, C3×D10⋊Q8
Smallest permutation representation of C3×D10⋊Q8
On 240 points
Generators in S240
(1 70 50)(2 61 41)(3 62 42)(4 63 43)(5 64 44)(6 65 45)(7 66 46)(8 67 47)(9 68 48)(10 69 49)(11 225 205)(12 226 206)(13 227 207)(14 228 208)(15 229 209)(16 230 210)(17 221 201)(18 222 202)(19 223 203)(20 224 204)(21 51 31)(22 52 32)(23 53 33)(24 54 34)(25 55 35)(26 56 36)(27 57 37)(28 58 38)(29 59 39)(30 60 40)(71 111 91)(72 112 92)(73 113 93)(74 114 94)(75 115 95)(76 116 96)(77 117 97)(78 118 98)(79 119 99)(80 120 100)(81 121 101)(82 122 102)(83 123 103)(84 124 104)(85 125 105)(86 126 106)(87 127 107)(88 128 108)(89 129 109)(90 130 110)(131 171 151)(132 172 152)(133 173 153)(134 174 154)(135 175 155)(136 176 156)(137 177 157)(138 178 158)(139 179 159)(140 180 160)(141 181 161)(142 182 162)(143 183 163)(144 184 164)(145 185 165)(146 186 166)(147 187 167)(148 188 168)(149 189 169)(150 190 170)(191 231 211)(192 232 212)(193 233 213)(194 234 214)(195 235 215)(196 236 216)(197 237 217)(198 238 218)(199 239 219)(200 240 220)
(1 2 3 4 5 6 7 8 9 10)(11 12 13 14 15 16 17 18 19 20)(21 22 23 24 25 26 27 28 29 30)(31 32 33 34 35 36 37 38 39 40)(41 42 43 44 45 46 47 48 49 50)(51 52 53 54 55 56 57 58 59 60)(61 62 63 64 65 66 67 68 69 70)(71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90)(91 92 93 94 95 96 97 98 99 100)(101 102 103 104 105 106 107 108 109 110)(111 112 113 114 115 116 117 118 119 120)(121 122 123 124 125 126 127 128 129 130)(131 132 133 134 135 136 137 138 139 140)(141 142 143 144 145 146 147 148 149 150)(151 152 153 154 155 156 157 158 159 160)(161 162 163 164 165 166 167 168 169 170)(171 172 173 174 175 176 177 178 179 180)(181 182 183 184 185 186 187 188 189 190)(191 192 193 194 195 196 197 198 199 200)(201 202 203 204 205 206 207 208 209 210)(211 212 213 214 215 216 217 218 219 220)(221 222 223 224 225 226 227 228 229 230)(231 232 233 234 235 236 237 238 239 240)
(1 21)(2 30)(3 29)(4 28)(5 27)(6 26)(7 25)(8 24)(9 23)(10 22)(11 231)(12 240)(13 239)(14 238)(15 237)(16 236)(17 235)(18 234)(19 233)(20 232)(31 50)(32 49)(33 48)(34 47)(35 46)(36 45)(37 44)(38 43)(39 42)(40 41)(51 70)(52 69)(53 68)(54 67)(55 66)(56 65)(57 64)(58 63)(59 62)(60 61)(71 85)(72 84)(73 83)(74 82)(75 81)(76 90)(77 89)(78 88)(79 87)(80 86)(91 105)(92 104)(93 103)(94 102)(95 101)(96 110)(97 109)(98 108)(99 107)(100 106)(111 125)(112 124)(113 123)(114 122)(115 121)(116 130)(117 129)(118 128)(119 127)(120 126)(131 150)(132 149)(133 148)(134 147)(135 146)(136 145)(137 144)(138 143)(139 142)(140 141)(151 170)(152 169)(153 168)(154 167)(155 166)(156 165)(157 164)(158 163)(159 162)(160 161)(171 190)(172 189)(173 188)(174 187)(175 186)(176 185)(177 184)(178 183)(179 182)(180 181)(191 205)(192 204)(193 203)(194 202)(195 201)(196 210)(197 209)(198 208)(199 207)(200 206)(211 225)(212 224)(213 223)(214 222)(215 221)(216 230)(217 229)(218 228)(219 227)(220 226)
(1 85 27 72)(2 84 28 71)(3 83 29 80)(4 82 30 79)(5 81 21 78)(6 90 22 77)(7 89 23 76)(8 88 24 75)(9 87 25 74)(10 86 26 73)(11 185 232 172)(12 184 233 171)(13 183 234 180)(14 182 235 179)(15 181 236 178)(16 190 237 177)(17 189 238 176)(18 188 239 175)(19 187 240 174)(20 186 231 173)(31 98 44 101)(32 97 45 110)(33 96 46 109)(34 95 47 108)(35 94 48 107)(36 93 49 106)(37 92 50 105)(38 91 41 104)(39 100 42 103)(40 99 43 102)(51 118 64 121)(52 117 65 130)(53 116 66 129)(54 115 67 128)(55 114 68 127)(56 113 69 126)(57 112 70 125)(58 111 61 124)(59 120 62 123)(60 119 63 122)(131 206 144 193)(132 205 145 192)(133 204 146 191)(134 203 147 200)(135 202 148 199)(136 201 149 198)(137 210 150 197)(138 209 141 196)(139 208 142 195)(140 207 143 194)(151 226 164 213)(152 225 165 212)(153 224 166 211)(154 223 167 220)(155 222 168 219)(156 221 169 218)(157 230 170 217)(158 229 161 216)(159 228 162 215)(160 227 163 214)
(1 145 27 132)(2 144 28 131)(3 143 29 140)(4 142 30 139)(5 141 21 138)(6 150 22 137)(7 149 23 136)(8 148 24 135)(9 147 25 134)(10 146 26 133)(11 112 232 125)(12 111 233 124)(13 120 234 123)(14 119 235 122)(15 118 236 121)(16 117 237 130)(17 116 238 129)(18 115 239 128)(19 114 240 127)(20 113 231 126)(31 158 44 161)(32 157 45 170)(33 156 46 169)(34 155 47 168)(35 154 48 167)(36 153 49 166)(37 152 50 165)(38 151 41 164)(39 160 42 163)(40 159 43 162)(51 178 64 181)(52 177 65 190)(53 176 66 189)(54 175 67 188)(55 174 68 187)(56 173 69 186)(57 172 70 185)(58 171 61 184)(59 180 62 183)(60 179 63 182)(71 193 84 206)(72 192 85 205)(73 191 86 204)(74 200 87 203)(75 199 88 202)(76 198 89 201)(77 197 90 210)(78 196 81 209)(79 195 82 208)(80 194 83 207)(91 213 104 226)(92 212 105 225)(93 211 106 224)(94 220 107 223)(95 219 108 222)(96 218 109 221)(97 217 110 230)(98 216 101 229)(99 215 102 228)(100 214 103 227)
G:=sub<Sym(240)| (1,70,50)(2,61,41)(3,62,42)(4,63,43)(5,64,44)(6,65,45)(7,66,46)(8,67,47)(9,68,48)(10,69,49)(11,225,205)(12,226,206)(13,227,207)(14,228,208)(15,229,209)(16,230,210)(17,221,201)(18,222,202)(19,223,203)(20,224,204)(21,51,31)(22,52,32)(23,53,33)(24,54,34)(25,55,35)(26,56,36)(27,57,37)(28,58,38)(29,59,39)(30,60,40)(71,111,91)(72,112,92)(73,113,93)(74,114,94)(75,115,95)(76,116,96)(77,117,97)(78,118,98)(79,119,99)(80,120,100)(81,121,101)(82,122,102)(83,123,103)(84,124,104)(85,125,105)(86,126,106)(87,127,107)(88,128,108)(89,129,109)(90,130,110)(131,171,151)(132,172,152)(133,173,153)(134,174,154)(135,175,155)(136,176,156)(137,177,157)(138,178,158)(139,179,159)(140,180,160)(141,181,161)(142,182,162)(143,183,163)(144,184,164)(145,185,165)(146,186,166)(147,187,167)(148,188,168)(149,189,169)(150,190,170)(191,231,211)(192,232,212)(193,233,213)(194,234,214)(195,235,215)(196,236,216)(197,237,217)(198,238,218)(199,239,219)(200,240,220), (1,2,3,4,5,6,7,8,9,10)(11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30)(31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50)(51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70)(71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110)(111,112,113,114,115,116,117,118,119,120)(121,122,123,124,125,126,127,128,129,130)(131,132,133,134,135,136,137,138,139,140)(141,142,143,144,145,146,147,148,149,150)(151,152,153,154,155,156,157,158,159,160)(161,162,163,164,165,166,167,168,169,170)(171,172,173,174,175,176,177,178,179,180)(181,182,183,184,185,186,187,188,189,190)(191,192,193,194,195,196,197,198,199,200)(201,202,203,204,205,206,207,208,209,210)(211,212,213,214,215,216,217,218,219,220)(221,222,223,224,225,226,227,228,229,230)(231,232,233,234,235,236,237,238,239,240), (1,21)(2,30)(3,29)(4,28)(5,27)(6,26)(7,25)(8,24)(9,23)(10,22)(11,231)(12,240)(13,239)(14,238)(15,237)(16,236)(17,235)(18,234)(19,233)(20,232)(31,50)(32,49)(33,48)(34,47)(35,46)(36,45)(37,44)(38,43)(39,42)(40,41)(51,70)(52,69)(53,68)(54,67)(55,66)(56,65)(57,64)(58,63)(59,62)(60,61)(71,85)(72,84)(73,83)(74,82)(75,81)(76,90)(77,89)(78,88)(79,87)(80,86)(91,105)(92,104)(93,103)(94,102)(95,101)(96,110)(97,109)(98,108)(99,107)(100,106)(111,125)(112,124)(113,123)(114,122)(115,121)(116,130)(117,129)(118,128)(119,127)(120,126)(131,150)(132,149)(133,148)(134,147)(135,146)(136,145)(137,144)(138,143)(139,142)(140,141)(151,170)(152,169)(153,168)(154,167)(155,166)(156,165)(157,164)(158,163)(159,162)(160,161)(171,190)(172,189)(173,188)(174,187)(175,186)(176,185)(177,184)(178,183)(179,182)(180,181)(191,205)(192,204)(193,203)(194,202)(195,201)(196,210)(197,209)(198,208)(199,207)(200,206)(211,225)(212,224)(213,223)(214,222)(215,221)(216,230)(217,229)(218,228)(219,227)(220,226), 
(1,85,27,72)(2,84,28,71)(3,83,29,80)(4,82,30,79)(5,81,21,78)(6,90,22,77)(7,89,23,76)(8,88,24,75)(9,87,25,74)(10,86,26,73)(11,185,232,172)(12,184,233,171)(13,183,234,180)(14,182,235,179)(15,181,236,178)(16,190,237,177)(17,189,238,176)(18,188,239,175)(19,187,240,174)(20,186,231,173)(31,98,44,101)(32,97,45,110)(33,96,46,109)(34,95,47,108)(35,94,48,107)(36,93,49,106)(37,92,50,105)(38,91,41,104)(39,100,42,103)(40,99,43,102)(51,118,64,121)(52,117,65,130)(53,116,66,129)(54,115,67,128)(55,114,68,127)(56,113,69,126)(57,112,70,125)(58,111,61,124)(59,120,62,123)(60,119,63,122)(131,206,144,193)(132,205,145,192)(133,204,146,191)(134,203,147,200)(135,202,148,199)(136,201,149,198)(137,210,150,197)(138,209,141,196)(139,208,142,195)(140,207,143,194)(151,226,164,213)(152,225,165,212)(153,224,166,211)(154,223,167,220)(155,222,168,219)(156,221,169,218)(157,230,170,217)(158,229,161,216)(159,228,162,215)(160,227,163,214), (1,145,27,132)(2,144,28,131)(3,143,29,140)(4,142,30,139)(5,141,21,138)(6,150,22,137)(7,149,23,136)(8,148,24,135)(9,147,25,134)(10,146,26,133)(11,112,232,125)(12,111,233,124)(13,120,234,123)(14,119,235,122)(15,118,236,121)(16,117,237,130)(17,116,238,129)(18,115,239,128)(19,114,240,127)(20,113,231,126)(31,158,44,161)(32,157,45,170)(33,156,46,169)(34,155,47,168)(35,154,48,167)(36,153,49,166)(37,152,50,165)(38,151,41,164)(39,160,42,163)(40,159,43,162)(51,178,64,181)(52,177,65,190)(53,176,66,189)(54,175,67,188)(55,174,68,187)(56,173,69,186)(57,172,70,185)(58,171,61,184)(59,180,62,183)(60,179,63,182)(71,193,84,206)(72,192,85,205)(73,191,86,204)(74,200,87,203)(75,199,88,202)(76,198,89,201)(77,197,90,210)(78,196,81,209)(79,195,82,208)(80,194,83,207)(91,213,104,226)(92,212,105,225)(93,211,106,224)(94,220,107,223)(95,219,108,222)(96,218,109,221)(97,217,110,230)(98,216,101,229)(99,215,102,228)(100,214,103,227)>;
G:=Group( (1,70,50)(2,61,41)(3,62,42)(4,63,43)(5,64,44)(6,65,45)(7,66,46)(8,67,47)(9,68,48)(10,69,49)(11,225,205)(12,226,206)(13,227,207)(14,228,208)(15,229,209)(16,230,210)(17,221,201)(18,222,202)(19,223,203)(20,224,204)(21,51,31)(22,52,32)(23,53,33)(24,54,34)(25,55,35)(26,56,36)(27,57,37)(28,58,38)(29,59,39)(30,60,40)(71,111,91)(72,112,92)(73,113,93)(74,114,94)(75,115,95)(76,116,96)(77,117,97)(78,118,98)(79,119,99)(80,120,100)(81,121,101)(82,122,102)(83,123,103)(84,124,104)(85,125,105)(86,126,106)(87,127,107)(88,128,108)(89,129,109)(90,130,110)(131,171,151)(132,172,152)(133,173,153)(134,174,154)(135,175,155)(136,176,156)(137,177,157)(138,178,158)(139,179,159)(140,180,160)(141,181,161)(142,182,162)(143,183,163)(144,184,164)(145,185,165)(146,186,166)(147,187,167)(148,188,168)(149,189,169)(150,190,170)(191,231,211)(192,232,212)(193,233,213)(194,234,214)(195,235,215)(196,236,216)(197,237,217)(198,238,218)(199,239,219)(200,240,220), (1,2,3,4,5,6,7,8,9,10)(11,12,13,14,15,16,17,18,19,20)(21,22,23,24,25,26,27,28,29,30)(31,32,33,34,35,36,37,38,39,40)(41,42,43,44,45,46,47,48,49,50)(51,52,53,54,55,56,57,58,59,60)(61,62,63,64,65,66,67,68,69,70)(71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90)(91,92,93,94,95,96,97,98,99,100)(101,102,103,104,105,106,107,108,109,110)(111,112,113,114,115,116,117,118,119,120)(121,122,123,124,125,126,127,128,129,130)(131,132,133,134,135,136,137,138,139,140)(141,142,143,144,145,146,147,148,149,150)(151,152,153,154,155,156,157,158,159,160)(161,162,163,164,165,166,167,168,169,170)(171,172,173,174,175,176,177,178,179,180)(181,182,183,184,185,186,187,188,189,190)(191,192,193,194,195,196,197,198,199,200)(201,202,203,204,205,206,207,208,209,210)(211,212,213,214,215,216,217,218,219,220)(221,222,223,224,225,226,227,228,229,230)(231,232,233,234,235,236,237,238,239,240), (1,21)(2,30)(3,29)(4,28)(5,27)(6,26)(7,25)(8,24)(9,23)(10,22)(11,231)(12,240)(13,239)(14,238)(15,237)(16,236)(17,235)(18,234)(19,233)(20,232)(31,50)(32,49)(33,48)(34,47)(35,46)(36,45)(37,44)(38,43)(39,42)(40,41)(51,70)(52,69)(53,68)(54,67)(55,66)(56,65)(57,64)(58,63)(59,62)(60,61)(71,85)(72,84)(73,83)(74,82)(75,81)(76,90)(77,89)(78,88)(79,87)(80,86)(91,105)(92,104)(93,103)(94,102)(95,101)(96,110)(97,109)(98,108)(99,107)(100,106)(111,125)(112,124)(113,123)(114,122)(115,121)(116,130)(117,129)(118,128)(119,127)(120,126)(131,150)(132,149)(133,148)(134,147)(135,146)(136,145)(137,144)(138,143)(139,142)(140,141)(151,170)(152,169)(153,168)(154,167)(155,166)(156,165)(157,164)(158,163)(159,162)(160,161)(171,190)(172,189)(173,188)(174,187)(175,186)(176,185)(177,184)(178,183)(179,182)(180,181)(191,205)(192,204)(193,203)(194,202)(195,201)(196,210)(197,209)(198,208)(199,207)(200,206)(211,225)(212,224)(213,223)(214,222)(215,221)(216,230)(217,229)(218,228)(219,227)(220,226), 
(1,85,27,72)(2,84,28,71)(3,83,29,80)(4,82,30,79)(5,81,21,78)(6,90,22,77)(7,89,23,76)(8,88,24,75)(9,87,25,74)(10,86,26,73)(11,185,232,172)(12,184,233,171)(13,183,234,180)(14,182,235,179)(15,181,236,178)(16,190,237,177)(17,189,238,176)(18,188,239,175)(19,187,240,174)(20,186,231,173)(31,98,44,101)(32,97,45,110)(33,96,46,109)(34,95,47,108)(35,94,48,107)(36,93,49,106)(37,92,50,105)(38,91,41,104)(39,100,42,103)(40,99,43,102)(51,118,64,121)(52,117,65,130)(53,116,66,129)(54,115,67,128)(55,114,68,127)(56,113,69,126)(57,112,70,125)(58,111,61,124)(59,120,62,123)(60,119,63,122)(131,206,144,193)(132,205,145,192)(133,204,146,191)(134,203,147,200)(135,202,148,199)(136,201,149,198)(137,210,150,197)(138,209,141,196)(139,208,142,195)(140,207,143,194)(151,226,164,213)(152,225,165,212)(153,224,166,211)(154,223,167,220)(155,222,168,219)(156,221,169,218)(157,230,170,217)(158,229,161,216)(159,228,162,215)(160,227,163,214), (1,145,27,132)(2,144,28,131)(3,143,29,140)(4,142,30,139)(5,141,21,138)(6,150,22,137)(7,149,23,136)(8,148,24,135)(9,147,25,134)(10,146,26,133)(11,112,232,125)(12,111,233,124)(13,120,234,123)(14,119,235,122)(15,118,236,121)(16,117,237,130)(17,116,238,129)(18,115,239,128)(19,114,240,127)(20,113,231,126)(31,158,44,161)(32,157,45,170)(33,156,46,169)(34,155,47,168)(35,154,48,167)(36,153,49,166)(37,152,50,165)(38,151,41,164)(39,160,42,163)(40,159,43,162)(51,178,64,181)(52,177,65,190)(53,176,66,189)(54,175,67,188)(55,174,68,187)(56,173,69,186)(57,172,70,185)(58,171,61,184)(59,180,62,183)(60,179,63,182)(71,193,84,206)(72,192,85,205)(73,191,86,204)(74,200,87,203)(75,199,88,202)(76,198,89,201)(77,197,90,210)(78,196,81,209)(79,195,82,208)(80,194,83,207)(91,213,104,226)(92,212,105,225)(93,211,106,224)(94,220,107,223)(95,219,108,222)(96,218,109,221)(97,217,110,230)(98,216,101,229)(99,215,102,228)(100,214,103,227) );
G=PermutationGroup([[(1,70,50),(2,61,41),(3,62,42),(4,63,43),(5,64,44),(6,65,45),(7,66,46),(8,67,47),(9,68,48),(10,69,49),(11,225,205),(12,226,206),(13,227,207),(14,228,208),(15,229,209),(16,230,210),(17,221,201),(18,222,202),(19,223,203),(20,224,204),(21,51,31),(22,52,32),(23,53,33),(24,54,34),(25,55,35),(26,56,36),(27,57,37),(28,58,38),(29,59,39),(30,60,40),(71,111,91),(72,112,92),(73,113,93),(74,114,94),(75,115,95),(76,116,96),(77,117,97),(78,118,98),(79,119,99),(80,120,100),(81,121,101),(82,122,102),(83,123,103),(84,124,104),(85,125,105),(86,126,106),(87,127,107),(88,128,108),(89,129,109),(90,130,110),(131,171,151),(132,172,152),(133,173,153),(134,174,154),(135,175,155),(136,176,156),(137,177,157),(138,178,158),(139,179,159),(140,180,160),(141,181,161),(142,182,162),(143,183,163),(144,184,164),(145,185,165),(146,186,166),(147,187,167),(148,188,168),(149,189,169),(150,190,170),(191,231,211),(192,232,212),(193,233,213),(194,234,214),(195,235,215),(196,236,216),(197,237,217),(198,238,218),(199,239,219),(200,240,220)], [(1,2,3,4,5,6,7,8,9,10),(11,12,13,14,15,16,17,18,19,20),(21,22,23,24,25,26,27,28,29,30),(31,32,33,34,35,36,37,38,39,40),(41,42,43,44,45,46,47,48,49,50),(51,52,53,54,55,56,57,58,59,60),(61,62,63,64,65,66,67,68,69,70),(71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90),(91,92,93,94,95,96,97,98,99,100),(101,102,103,104,105,106,107,108,109,110),(111,112,113,114,115,116,117,118,119,120),(121,122,123,124,125,126,127,128,129,130),(131,132,133,134,135,136,137,138,139,140),(141,142,143,144,145,146,147,148,149,150),(151,152,153,154,155,156,157,158,159,160),(161,162,163,164,165,166,167,168,169,170),(171,172,173,174,175,176,177,178,179,180),(181,182,183,184,185,186,187,188,189,190),(191,192,193,194,195,196,197,198,199,200),(201,202,203,204,205,206,207,208,209,210),(211,212,213,214,215,216,217,218,219,220),(221,222,223,224,225,226,227,228,229,230),(231,232,233,234,235,236,237,238,239,240)], [(1,21),(2,30),(3,29),(4,28),(5,27),(6,26),(7,25),(8,24),(9,23),(10,22),(11,231),(12,240),(13,239),(14,238),(15,237),(16,236),(17,235),(18,234),(19,233),(20,232),(31,50),(32,49),(33,48),(34,47),(35,46),(36,45),(37,44),(38,43),(39,42),(40,41),(51,70),(52,69),(53,68),(54,67),(55,66),(56,65),(57,64),(58,63),(59,62),(60,61),(71,85),(72,84),(73,83),(74,82),(75,81),(76,90),(77,89),(78,88),(79,87),(80,86),(91,105),(92,104),(93,103),(94,102),(95,101),(96,110),(97,109),(98,108),(99,107),(100,106),(111,125),(112,124),(113,123),(114,122),(115,121),(116,130),(117,129),(118,128),(119,127),(120,126),(131,150),(132,149),(133,148),(134,147),(135,146),(136,145),(137,144),(138,143),(139,142),(140,141),(151,170),(152,169),(153,168),(154,167),(155,166),(156,165),(157,164),(158,163),(159,162),(160,161),(171,190),(172,189),(173,188),(174,187),(175,186),(176,185),(177,184),(178,183),(179,182),(180,181),(191,205),(192,204),(193,203),(194,202),(195,201),(196,210),(197,209),(198,208),(199,207),(200,206),(211,225),(212,224),(213,223),(214,222),(215,221),(216,230),(217,229),(218,228),(219,227),(220,226)], 
[(1,85,27,72),(2,84,28,71),(3,83,29,80),(4,82,30,79),(5,81,21,78),(6,90,22,77),(7,89,23,76),(8,88,24,75),(9,87,25,74),(10,86,26,73),(11,185,232,172),(12,184,233,171),(13,183,234,180),(14,182,235,179),(15,181,236,178),(16,190,237,177),(17,189,238,176),(18,188,239,175),(19,187,240,174),(20,186,231,173),(31,98,44,101),(32,97,45,110),(33,96,46,109),(34,95,47,108),(35,94,48,107),(36,93,49,106),(37,92,50,105),(38,91,41,104),(39,100,42,103),(40,99,43,102),(51,118,64,121),(52,117,65,130),(53,116,66,129),(54,115,67,128),(55,114,68,127),(56,113,69,126),(57,112,70,125),(58,111,61,124),(59,120,62,123),(60,119,63,122),(131,206,144,193),(132,205,145,192),(133,204,146,191),(134,203,147,200),(135,202,148,199),(136,201,149,198),(137,210,150,197),(138,209,141,196),(139,208,142,195),(140,207,143,194),(151,226,164,213),(152,225,165,212),(153,224,166,211),(154,223,167,220),(155,222,168,219),(156,221,169,218),(157,230,170,217),(158,229,161,216),(159,228,162,215),(160,227,163,214)], [(1,145,27,132),(2,144,28,131),(3,143,29,140),(4,142,30,139),(5,141,21,138),(6,150,22,137),(7,149,23,136),(8,148,24,135),(9,147,25,134),(10,146,26,133),(11,112,232,125),(12,111,233,124),(13,120,234,123),(14,119,235,122),(15,118,236,121),(16,117,237,130),(17,116,238,129),(18,115,239,128),(19,114,240,127),(20,113,231,126),(31,158,44,161),(32,157,45,170),(33,156,46,169),(34,155,47,168),(35,154,48,167),(36,153,49,166),(37,152,50,165),(38,151,41,164),(39,160,42,163),(40,159,43,162),(51,178,64,181),(52,177,65,190),(53,176,66,189),(54,175,67,188),(55,174,68,187),(56,173,69,186),(57,172,70,185),(58,171,61,184),(59,180,62,183),(60,179,63,182),(71,193,84,206),(72,192,85,205),(73,191,86,204),(74,200,87,203),(75,199,88,202),(76,198,89,201),(77,197,90,210),(78,196,81,209),(79,195,82,208),(80,194,83,207),(91,213,104,226),(92,212,105,225),(93,211,106,224),(94,220,107,223),(95,219,108,222),(96,218,109,221),(97,217,110,230),(98,216,101,229),(99,215,102,228),(100,214,103,227)]])
102 conjugacy classes
class 1 2A 2B 2C 2D 2E 3A 3B 4A 4B 4C 4D 4E 4F 4G 4H 5A 5B 6A ··· 6F 6G 6H 6I 6J 10A ··· 10F 12A 12B 12C 12D 12E 12F 12G 12H 12I 12J 12K 12L 12M 12N 12O 12P 15A 15B 15C 15D 20A ··· 20L 30A ··· 30L 60A ··· 60X
order 1 2 2 2 2 2 3 3 4 4 4 4 4 4 4 4 5 5 6 ··· 6 6 6 6 6 10 ··· 10 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 15 15 15 15 20 ··· 20 30 ··· 30 60 ··· 60
size 1 1 1 1 10 10 1 1 2 2 4 4 10 10 20 20 2 2 1 ··· 1 10 10 10 10 2 ··· 2 2 2 2 2 4 4 4 4 10 10 10 10 20 20 20 20 2 2 2 2 4 ··· 4 2 ··· 2 4 ··· 4
102 irreducible representations
dim 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 4 4 4 4
type + + + + + + + - + + + -
image C1 C2 C2 C2 C2 C2 C3 C6 C6 C6 C6 C6 D4 Q8 D5 C4○D4 D10 C3×D4 C3×Q8 C3×D5 C3×C4○D4 C6×D5 C4○D20 C3×C4○D20 D4×D5 Q8×D5 C3×D4×D5 C3×Q8×D5
kernel C3×D10⋊Q8 C3×C10.D4 C3×D10⋊C4 C15×C4⋊C4 C6×Dic10 D5×C2×C12 D10⋊Q8 C10.D4 D10⋊C4 C5×C4⋊C4 C2×Dic10 C2×C4×D5 C3×Dic5 C6×D5 C3×C4⋊C4 C30 C2×C12 Dic5 D10 C4⋊C4 C10 C2×C4 C6 C2 C6 C6 C2 C2
# reps 1 2 2 1 1 1 2 4 4 2 2 2 2 2 2 2 6 4 4 4 4 12 8 16 2 2 4 4
Matrix representation of C3×D10⋊Q8 in GL6(𝔽61)
 1  0  0  0  0  0
 0  1  0  0  0  0
 0  0 13  0  0  0
 0  0  0 13  0  0
 0  0  0  0 47  0
 0  0  0  0  0 47
,
60 17  0  0  0  0
44 44  0  0  0  0
 0  0 60  0  0  0
 0  0  0 60  0  0
 0  0  0  0 60  0
 0  0  0  0  0 60
,
60 17  0  0  0  0
 0  1  0  0  0  0
 0  0  1  0  0  0
 0  0  6 60  0  0
 0  0  0  0 60  0
 0  0  0  0 16  1
,
 1  0  0  0  0  0
17 60  0  0  0  0
 0  0 60 41  0  0
 0  0  0  1  0  0
 0  0  0  0 40  5
 0  0  0  0 58 21
,
 1  0  0  0  0  0
17 60  0  0  0  0
 0  0  1  0  0  0
 0  0  0  1  0  0
 0  0  0  0 11  0
 0  0  0  0  7 50
G:=sub<GL(6,GF(61))| [1,0,0,0,0,0,0,1,0,0,0,0,0,0,13,0,0,0,0,0,0,13,0,0,0,0,0,0,47,0,0,0,0,0,0,47],[60,44,0,0,0,0,17,44,0,0,0,0,0,0,60,0,0,0,0,0,0,60,0,0,0,0,0,0,60,0,0,0,0,0,0,60],[60,0,0,0,0,0,17,1,0,0,0,0,0,0,1,6,0,0,0,0,0,60,0,0,0,0,0,0,60,16,0,0,0,0,0,1],[1,17,0,0,0,0,0,60,0,0,0,0,0,0,60,0,0,0,0,0,41,1,0,0,0,0,0,0,40,58,0,0,0,0,5,21],[1,17,0,0,0,0,0,60,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,0,0,11,7,0,0,0,0,0,50] >;
C3×D10⋊Q8 in GAP, Magma, Sage, TeX
C_3\times D_{10}\rtimes Q_8
% in TeX
G:=Group("C3xD10:Q8");
// GroupNames label
G:=SmallGroup(480,689);
// by ID
G=gap.SmallGroup(480,689);
# by ID
G:=PCGroup([7,-2,-2,-2,-3,-2,-2,-5,176,1598,555,268,18822]);
// Polycyclic
G:=Group<a,b,c,d,e|a^3=b^10=c^2=d^4=1,e^2=d^2,a*b=b*a,a*c=c*a,a*d=d*a,a*e=e*a,c*b*c=d*b*d^-1=e*b*e^-1=b^-1,d*c*d^-1=b^3*c,e*c*e^-1=b^8*c,e*d*e^-1=d^-1>;
// generators/relations
|
# 2002 Math OSP, Number 1
Algebra Level pending
Suppose $A = (-1)^{-1}$, $B = (-1)^{1}$, $C = 1^{-1}$. What is the value of $A + B + C$?
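For reference, one direct evaluation (the page itself leaves the answer pending): a negative exponent denotes a reciprocal, so

$$A = (-1)^{-1} = \frac{1}{-1} = -1, \qquad B = (-1)^{1} = -1, \qquad C = 1^{-1} = 1,$$

and hence $A + B + C = -1 - 1 + 1 = -1$.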
|
## Chemistry 9th Edition
(a) As the formula of water is $H_2O$, there are two hydrogen atoms and one oxygen atom per molecule of water. The molar mass of oxygen is 16, and that of two hydrogen atoms is 2. Thus the given statement is false, because the mass of hydrogen is less than that of oxygen, not twice it.
(b) As the formula of the water molecule is $H_2O$, there are two hydrogen atoms and one oxygen atom, so the given statement is true.
(c) From $H_2O$, we take the mass of oxygen as 16 and that of hydrogen as 2; 16:2 gives 8:1. Thus the given statement is false.
(d) The given statement is false because there are not two oxygen atoms and one hydrogen atom; instead, there are two hydrogen atoms and one oxygen atom, as the formula of the water molecule is $H_2O$.
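As a quick check of the arithmetic behind (a) and (c), using approximate molar masses of 1 for hydrogen and 16 for oxygen:

$$\frac{m_{\mathrm{O}}}{m_{\mathrm{H}_2}} = \frac{16}{2 \times 1} = \frac{8}{1}, \qquad \frac{m_{\mathrm{H}_2}}{m_{\mathrm{H_2O}}} = \frac{2}{18} \approx 11\%, \qquad \frac{m_{\mathrm{O}}}{m_{\mathrm{H_2O}}} = \frac{16}{18} \approx 89\%.$$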
|
# Geometry construction problem
Hello everybody, I am new to this forum. I am not a physicist or a mathematician, just an architect, and I have a geometrical problem that has driven me crazy. I thought I would sign up JUST in case someone can help find a solution.
I am working with AutoCAD, which means using geometric constructions all the time. I have many automatic constructions (like a circle tangent to 3 elements) but I came across this problem I could NOT construct.
I'll try to describe it the best I can, since I don't know the exact terms in English:
-I have two lines that are not parallel: LineA and LineB
-A specific point pA on lineA
-A specific point pB on lineB
I want to construct two circles (cA) and (cB) that have the same radius, where
circle A is tangent to line A, tangent to circle B, and passes through point A
circle B is tangent to line B, tangent to circle A, and passes through point B
The final result I want to construct would be like an S connecting the two lines.
I don't know if this is a darn toooo easy problem, but honestly, with my level of knowledge I CAN NOT solve it! As you understand, I don't want to solve it algebraically.
Thank you very much in advance
PS: sorry if this post is in the wrong thread.
PS2: I found how to attach images..... 1.jpg is the problem, 2.jpg is what I want to construct, 3.jpg is the final result.
#### Attachments
• 1.jpg
4.3 KB · Views: 401
• 2.jpg
11.9 KB · Views: 410
• 3.jpg
4.6 KB · Views: 395
Last edited:
Have you found a solution yet?
I've determined that there are 2 types of situations for non-parallel lines:
1) The line through points A & B is perpendicular to the line which bisects the angle formed by lines A & B
2) The line through points A & B is NOT perpendicular to the line which bisects the angle formed by lines A & B.
Situation #1 is easy.
I'm still working on situation #2
coolul007
Gold Member
There are 3 line segments.
AO: the perpendicular from line A to the center O of the circle tangent to line A. Length r.
BQ: the perpendicular from line B to the center Q of the circle tangent to line B. Length r.
OQ: the segment connecting the centers of the two tangent circles. Length 2r.
r is a range dependent on the distance from point A to point B.
More later
Does this sketch help?
RST is the common tangent between the two circles
S and T are intersection points for two tangents to a circle. Therefore the tangents are equal.
That is SA = SR and TR = TB
#### Attachments
• ip1.jpg
10.5 KB · Views: 405
OK, I have been 'playing' a bit with the problem (whenever I feel like I can actually find the solution), just in case I find some properties that can help me, and found this..... I don't know if it helps, but it might give you a clue about the solution.
I've drawn a set of two equal random circles from my points A & B.
Found the middle of the.... the... arc (?), M1.
Then I did the same thing with another set (different radius).
Found point M2.
I performed the same with a third set of circles.
I found there is a line (M1-2) that passes through all these points. This means (I think) the tangent point I am looking for is on this line.
I also found that the angle this line forms with line A and line B is half of angle AB.
This is certain because, since I am testing them in AutoCAD, I have total accuracy in angles and distances.
I am attaching again some images as examples.
P.S. Thanks for putting your minds on this
#### Attachments
• 1.jpg
10.5 KB · Views: 393
• 2.jpg
9.3 KB · Views: 358
• 3.jpg
8.5 KB · Views: 387
And I forgot to say that the distance from point A to the intersection point A1 of line A and line M1-2 is the same as from B to the intersection point B1 of line B and line M1-2.
See the image again.
#### Attachments
• 4.jpg
8.3 KB · Views: 395
I am not quite sure what you meant by your last two posts, but my calculations do not put the common tangent point of the two circles on the bisector of the angle formed by the intersection of your two straight lines.
Did you make anything of the stuff I posted?
You haven't yet posted quite enough information to solve the problem, since you must know both the inclination of the lines and the distance AB and their distances from the intersection point, i.e. their coordinates if you will.
Also, what exactly do you mean by no calculation?
Last edited:
coolul007
Gold Member
I am not quite sure what you meant by your last two posts, but my calculations do not put the common tangent point of the two circles on the bisector of the angle formed by the intersection of your two straight lines.
Did you make anything of the stuff I posted?
You haven't yet posted quite enough information to solve the problem, since you must know both the inclination of the lines and the distance AB and their distances from the intersection point, i.e. their coordinates if you will.
Also, what exactly do you mean by no calculation?
It seems to me even knowing AB distance and angle, there would still be a range of radii that would work.
I am not quite sure what you meant by your last two posts, but my calculations do not put the common tangent point of the two circles on the bisector of the angle formed by the intersection of your two straight lines.
Did you make anything of the stuff I posted?
You haven't yet posted quite enough information to solve the problem, since you must know both the inclination of the lines and the distance AB and their distances from the intersection point, i.e. their coordinates if you will.
Also, what exactly do you mean by no calculation?
Look, first, maybe you are totally right. I am just experimenting, trying to find properties; I am never sure if they are correct, and that's why I always ask people more advanced in math.
So in any case it's good to know I was wrong (a good opportunity to experiment and find where I was wrong).
As for your sketch, I saw it and I understand it, but I can't figure out how I'll find points S & T, since I only know A & B and not r.
PS: I must also point out that since I know only plain English (I am Greek) and I am not familiar with technical math terms (I had to look up in a dictionary what 'inclination' is), sketches are much appreciated!
Ah... Greek too :tongue:
First welcome to PF, Maria.
Second
(I am Greek)
That is very good since this is mainly about Euclid, who was a very famous Greek.
My current sketch is too large for the scanner so I will have to make it small enough to post here.
Meanwhile I can tell you that in my first sketch (sorry it is so faint)
$$\theta$$ is the angle subtended by the first arc (it is arc, not arch - that is a bridge) to the centre of its circle.
$$\phi$$ is the angle subtended by the reverse arc in the second circle
My first sketch is not large enough to show the intersection (crossing) point of the two lines but I have called the angle between them $$\alpha$$.
It can be shown that $$\alpha = \phi - \theta$$.
There are some other formulae to do with the shapes, but I will leave posting these until the next sketch to make them clear.
Yes, maybe that's why I like geometry (very useful in my work), and maybe that's why this is driving me CRAAAAAZY.....
I just like geometry and I like searching for solutions to logic problems... consider it a hobby.
The last problem I was exploring was how to construct a circle tangent to a line and passing through 2 points.
I know, after hours of reading I discovered the problem of Apollonius and got the solution (more hours of reading).
I also learned in the process how to geometrically construct the square root of a distance!
My colleagues and boss say that I have psychological math problems, but anyway...
OK, back to your sketch... CD is the line I am searching for, and ST is a line I can not construct yet, therefore I don't know either of the angles Φ or Θ.
Also I can't find angle α in the sketch.
I understand that TR is a 'mirror' of TB (axis TC), and the same happens with SR and AC.
pffff, I feel stupid in cases like these!!!!!
OK here is a solution using trigonometry.
Please note that this is a different drawing from my previous one, so I have called the angles by new names to avoid confusion.
You do know the following
Where A and B are so you know the distance AB and the direction of the line AB
The direction of the line through A
The direction of the line through B
You can therefore calculate the angle between line AB and line A as the difference of these directions. I have called this angle alpha.
You can therefore calculate the angle between line AB and line B as the difference of these directions. I have called this angle gamma.
You need to know the radius of the circles, which you do not yet know.
To do this, first calculate the auxiliary angle I have called beta, which is the difference in the directions of the line joining the circle centres and the line AB.
Since you do not yet know where the centres are you must calculate this angle as shown in the attachment.
Once you have this angle you can use it to calculate the radius, again as shown in the attachment.
In principle if you can calculate it you can draw it, but I have yet to figure out a drawing construction to replace this.
For your further information, it is also possible to calculate reverse curves when there are two different radii involved.
go well
#### Attachments
• tangents1.jpg
14.2 KB · Views: 424
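The relation behind this calculation can also be set up directly as a quadratic. The following is a minimal numeric sketch (one possible formulation, not necessarily the formulas in the attachment): since circle cA is tangent to line A at A itself, its centre is C1 = A + r·nA for a unit normal nA of line A, and likewise C2 = B + r·nB; tangency of the two circles then forces |C1 - C2| = 2r.

```python
import numpy as np

def equal_tangent_radius(A, nA, B, nB):
    """Radius r > 0 such that circles centred at A + r*nA and B + r*nB touch.

    A, B lie on their lines; nA, nB are unit normals pointing toward the
    other line.  |C1 - C2| = 2r expands to a quadratic in r:
        (m.m - 4) r^2 + 2 (d.m) r + d.d = 0,  with  d = A - B,  m = nA - nB.
    For non-parallel lines nA != -nB, so m.m != 4 and the quadratic is genuine.
    """
    A, nA, B, nB = (np.asarray(v, dtype=float) for v in (A, nA, B, nB))
    d, m = A - B, nA - nB
    a, b, c = m @ m - 4.0, 2.0 * (d @ m), d @ d
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None                     # no real solution for this configuration
    s = np.sqrt(disc)
    roots = [(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)]
    positive = [r for r in roots if r > 0.0]
    return min(positive) if positive else None

# example: line A is the x-axis through A = (0, 0); line B passes through
# B = (4, 3) with unit normal (-0.6, -0.8) pointing back toward line A
r = equal_tangent_radius((0, 0), (0, 1), (4, 3), (-0.6, -0.8))
print(r)  # ~1.5417; the circles meet at the midpoint of their centres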
Ah.... mind food!!!
Yes, I see your sketch and understand it (it took me a while).
Thanks so much for dealing with this; although I am also looking for the drawing construction, I am right now 'processing' your solution and experimenting a little bit....
I found this. I don't know if it's correct, but it seems like it might be:
I know angles alpha and gamma, and am looking for beta (sketch 1).
If I construct a helper line from point B using angle alpha, it seems like I have found angle beta (sketch 2).
I haven't proven it, it's just experimenting, but if it's true then I might be able to construct by drawing the line I want that connects the two centers C1 & C2.
(I don't know how exactly I'll do that, but anyway.)
#### Attachments
• 1.jpg
13.4 KB · Views: 398
• 2.jpg
17.7 KB · Views: 374
Nope, I did some testing and I think my last post doesn't apply to all cases.
Last edited:
Sorry to disappoint you but I'm not convinced that your angle is beta.
See my attachment.
Edit: there is an arithmetic error in the last line I need to change.
Ok it's correct now. Sorry.
#### Attachments
• tangents2.jpg
8.6 KB · Views: 421
Last edited:
OK here is a solution using trigonometry.
Please note that this is a different drawing from my previous one, so I have called the angles by new names to avoid confusion.
You do know the following
Where A and B are so you know the distance AB and the direction of the line AB
The direction of the line through A
The direction of the line through B
You can therefore calculate the angle between line AB and line A as the difference of these directions. I have called this angle alpha.
You can therefore calculate the angle between line AB and line B as the difference of these directions. I have called this angle gamma.
You need to know the radius of the circles, which you do not yet know.
To do this, first calculate the auxiliary angle I have called beta, which is the difference in the directions of the line joining the circle centres and the line AB.
Since you do not yet know where the centres are you must calculate this angle as shown in the attachment.
Once you have this angle you can use it to calculate the radius, again as shown in the attachment.
In principle if you can calculate it you can draw it, but I have yet to figure out a drawing construction to replace this.
For your further information, it is also possible to calculate reverse curves when there are two different radii involved.
go well
Your work is completely correct. However, it is not a "solution."
According to the OP, they are only given the 2 lines and the respective tangent points, A & B. The goal is to determine what the radius of the circles needs to be in order for them to not only be tangent to their respective lines, but tangent to each other, as well.
In other words:
Given two non-parallel lines and a point on each line, we need to construct an s-curve consisting of 2 arcs having the same radius, such that the distance between the centers of those arcs is twice that radius.
Your "solution" assumes you already have the circles and lines, but don't know the radius of the circles.
Your work is completely correct. However, it is not a "solution."
The problem is a bog standard one in surveying for railway, mining and road engineering as is the solution I offered.
The problem is a bog standard one in surveying for railway, mining and road engineering as is the solution I offered.
First of all, I never gave a solution. I'm actually still looking for one.
Secondly, I misstated why I feel you didn't give an adequate solution:
The OP is looking for a geometric solution that doesn't involve calculations that are subject to rounding errors. (Think straightedge and compass). For example, it is easy to construct the bisector of an angle using only those tools. Granted, we could also measure the angle with a protractor, divide that measurement in half and draw in the bisector by measuring this new angle with the protractor, but this method is much less accurate (even in AutoCAD).
The OP is looking for a geometric solution
No, I think Maria was working in a drawing office and looking for a draughting solution, which is quite different.
I also stated that I didn't have any sort of drawn solution, only a calculated one, which I thought was better than none.
I would, however, be interested in a drawing solution, since it must be possible to draw what can be calculated. I am just not a first-class draughtsman (or even second class, really).
So please post if you come up with anything.
If it helps, I can let you have the surveyors' screed on the subject.
coolul007
Gold Member
This problem seems so easy on the surface but is difficult in practice. I have started an approach to the problem by solving it with constructions as if the lines were parallel. Of course I know they are not. I will post my approach in hopes of it sparking someone else while I continue working on it.
#### Attachments
• arc.jpg
21.5 KB · Views: 433
coolul007
Gold Member
Here is a solution. It is not a solution with both circles tangent to each other, but they intersect in equal arcs with equal radii. This seems to me to be adequate for any construction job. All of the following steps correspond to the attached drawing:
1) Draw line segment AB
2) Draw perpendicular from point A
3) Draw perpendicular from point B
4) Extend line A
5) Extend line B
6) The measure of angle CAB = a
7) The measure of angle ABD = a-θ
8) To make the angles from AC and BD equal, solve a-x = a-θ+x, which gives x = θ/2
9) Add θ/2 to angle DBA, subtract θ/2 from CAB
10) Draw the new angle line segments to intersect at point F, This forms isosceles triangle AFB.
11) Bisect segments BF and AF
12) Draw perpendiculars from the midpoints of the segments to intersect BD and AC
13) Points H and J are the center of the two circles that are tangent to line A and line B
14) The circles are not tangent to each other but intersect at a point to create equal arcs
Yes, the above requires some cleanup and formalization, however, you get the point.
#### Attachments
• geometry.jpg
14.3 KB · Views: 441
I have discovered and proven the following, which reveals one point that lies on the line tangent to both circles (also known as the perpendicular bisector of the line segment connecting the circles' centers). I can do this using ONLY drafting techniques. The problem is, I haven't found a way to locate a 2nd point on that line.
But, it's a start; maybe it will help.
Here are the steps:
1) extend lines A and B so that they intersect at point P
2) draw the angle bisector of angle APB
3) draw perpendicular bisector of line segment AB
4) the intersection of these 2 lines lies on the common tangent line to the 2 circles
If we can find a 2nd point on that line, then we can find the common tangent (obviously), which means we can find the angle of the line connecting the circles' centers. Using this information, I would be able to solve the problem.
Last edited:
coolul007
Gold Member
I have discovered and proven the following, which reveals one point that lies on the line tangent to both circles (also known as the perpendicular bisector of the line segment connecting the circles' centers). I can do this using ONLY drafting techniques. The problem is, I haven't found a way to locate a 2nd point on that line.
But, it's a start; maybe it will help.
Here are the steps:
1) extend lines A and B so that they intersect at point P
2) draw the angle bisector of angle APB
3) draw perpendicular bisector of line segment AB
4) the intersection of these 2 lines lies on the common tangent line to the 2 circles
If we can find a 2nd point on that line, then we can find the common tangent (obviously), which means we can find the angle of the line connecting the circles' centers. Using this information, I would be able to solve the problem.
What is the proof that the intersection lies on the tangent line?
I think that the radii have a minimum and maximum that can make this true. That is why this becomes such a difficult problem, as 3 simultaneous events must occur.
Here is the proof (albeit, not a completely formal one):
To show this is true, I start with 2 circles and the tangent lines which intersect at point P.
As per the problem requirements, the circles have the same radius and intersect at only one point.
A couple of definitions:
- Let the distance from a point to a line be defined as the length of the segment drawn from the point perpendicularly to the line.
- Any point lying on the bisector of an angle is equidistant from each ray of the angle
- Any point lying on the perpendicular bisector of a line segment is equidistant to each of the segment’s endpoints.
Line AP is tangent to the upper circle (circle C) at point A.
Likewise, line BP is tangent to the lower circle (circle D) at point B.
- Draw the line segment AB
- Draw the perpendicular bisector of segment AB
- Draw the line segment CD
- Draw the perpendicular bisector of CD. This line is tangent to both circles
- Label the intersection of the 2 bisectors as point E
Now, all I have to do is show that point E lies on the angle bisector of angle APB.
- Draw segment EF perpendicular to AP with point F on AP
- Draw segment EG perpendicular to BP with point G on BP
(for clarity, since I have located point E, I will remove segment AB, circles C and D, and the perpendicular bisectors from the drawing)
So, if I can show that EF = EG then I’ve shown that it lies on the bisector of angle APB.
- Draw segments AC and BD
- We know that AC = BD since they represent the radius of the circles
- We know that AC is perpendicular to AP and BD is perpendicular to BP (since AP is tangent to circle C and BP is tangent to circle D)
- Draw segments EA and EB
- Since E lies on the perpendicular bisector of AB, then EA = EB
-Draw segments EC and ED
- Since E lies on the perpendicular bisector of CD, then EC = ED
- Therefore, by SSS, we know that triangle ACE = triangle BDE
- EF is parallel to AC since both are perpendicular to AP
- EG is parallel to BD since both are perpendicular to BP
- Angle AEF = angle CAE by alternate interior angles
- Angle BEG = angle DBE by alternate interior angles
- Angle CAE = angle DBE by congruent triangles
- Therefore, angle AEF = angle BEG by transitivity
- Angle EFA = angle EGB = 90 degrees, by definition of perpendicular
- Triangle AEF = triangle BEG by AAS
- Therefore EF = EG by congruent triangles
Last edited:
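A quick numeric spot-check of this proof, under assumed example data (the same configuration as the earlier radius sketch): build the two equal tangent circles, intersect the angle bisector at P with the perpendicular bisector of AB to get E, and confirm that E is equidistant from the two centres, i.e. lies on the common tangent.

```python
import numpy as np

# assumed example: line A is the x-axis through A = (0,0); line B passes
# through B = (4,3) with unit normal nB = (-0.6,-0.8); the lines meet at P = (8,0)
A, nA = np.array([0.0, 0.0]), np.array([0.0, 1.0])
B, nB = np.array([4.0, 3.0]), np.array([-0.6, -0.8])
P = np.array([8.0, 0.0])

# radius of the equal tangent circles: (m.m - 4) r^2 + 2 (d.m) r + d.d = 0
d, m = A - B, nA - nB
qa, qb, qc = m @ m - 4.0, 2.0 * (d @ m), d @ d
s = np.sqrt(qb * qb - 4.0 * qa * qc)
r = min(x for x in [(-qb - s) / (2 * qa), (-qb + s) / (2 * qa)] if x > 0)
C1, C2 = A + r * nA, B + r * nB

# E = intersection of the angle bisector at P with the perp. bisector of AB
w = (A - P) / np.linalg.norm(A - P) + (B - P) / np.linalg.norm(B - P)
mid = (A + B) / 2.0
v = np.array([-(B - A)[1], (B - A)[0]])        # direction perpendicular to AB
st = np.linalg.solve(np.column_stack([w, -v]), mid - P)
E = P + st[0] * w

print(np.linalg.norm(E - C1), np.linalg.norm(E - C2))  # equal: E on the common tangent
```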
|
# definition - sunlight
sunlight (n.)
1. the rays of the sun: "the shingles were weathered by the sun and wind"
Merriam-Webster: Sun"light`, n. The light of the sun. Milton.
# Sunlight
"Sunshine" redirects here. For natural lighting of interior spaces by admitting sunlight, see Daylighting. For solar energy available from sunlight, see Insolation. For other uses, see Sunlight (disambiguation) and Sunshine (disambiguation).
Sunlight shining through clouds, giving rise to crepuscular rays.
Sunlight, in the broad sense, is the total frequency spectrum of electromagnetic radiation given off by the Sun, particularly infrared, visible, and ultraviolet light. On Earth, sunlight is filtered through the Earth's atmosphere, and solar radiation is obvious as daylight when the Sun is above the horizon.
When the direct solar radiation is not blocked by clouds, it is experienced as sunshine, a combination of bright light and radiant heat. When it is blocked by the clouds or reflects off of other objects, it is experienced as diffused light.
The World Meteorological Organization uses the term "sunshine duration" to mean the cumulative time during which an area receives direct irradiance from the Sun of at least 120 watts per square meter.[1]
Sunlight may be recorded using a sunshine recorder, pyranometer or pyrheliometer. Sunlight takes about 8.3 minutes to reach the Earth.
On average, it takes energy between 10,000 and 170,000 years to leave the sun's interior and then be emitted from the surface as light.[2]
Direct sunlight has a luminous efficacy of about 93 lumens per watt of radiant flux. Bright sunlight provides illuminance of approximately 100,000 lux or lumens per square meter at the Earth's surface.
Sunlight is a key factor in photosynthesis, a process vital for many living beings on Earth.
## Composition
Solar irradiance spectrum above atmosphere and at surface
The spectrum of the Sun's solar radiation is close to that of a black body with a temperature of about 5,800 K.[3] The Sun emits EM radiation across most of the electromagnetic spectrum. Although the Sun produces gamma rays as a result of the nuclear fusion process, these super-high-energy photons are converted to lower-energy photons before they reach the Sun's surface and are emitted out into space. As a result, the Sun doesn't give off any gamma rays. The Sun does, however, emit X-rays, ultraviolet, visible light, infrared, and even radio waves.[4] When ultraviolet radiation is not absorbed by the atmosphere or other protective coating, it can cause damage to the skin known as sunburn or trigger an adaptive change in human skin pigmentation.
The spectrum of electromagnetic radiation striking the Earth's atmosphere spans a range of 100 nm to about 1 mm. This can be divided into five regions in increasing order of wavelengths:[5]
• Ultraviolet C or (UVC) range, which spans a range of 100 to 280 nm. The term ultraviolet refers to the fact that the radiation is at higher frequency than violet light (and, hence also invisible to the human eye). Owing to absorption by the atmosphere very little reaches the Earth's surface (Lithosphere). This spectrum of radiation has germicidal properties, and is used in germicidal lamps.
• Ultraviolet B or (UVB) range spans 280 to 315 nm. It is also greatly absorbed by the atmosphere, and along with UVC is responsible for the photochemical reaction leading to the production of the ozone layer.
• Ultraviolet A or (UVA) spans 315 to 400 nm. It has been traditionally held as less damaging to the DNA, and hence used in tanning and PUVA therapy for psoriasis.
• Visible range or light spans 380 to 780 nm. As the name suggests, it is this range that is visible to the naked eye.
• Infrared range that spans 700 nm to 10⁶ nm (1 mm). It is responsible for an important part of the electromagnetic radiation that reaches the Earth. It is also divided into three types on the basis of wavelength:
• Infrared-A: 700 nm to 1,400 nm
• Infrared-B: 1,400 nm to 3,000 nm
• Infrared-C: 3,000 nm to 1 mm
## Calculation
To calculate the amount of sunlight reaching the ground, both the elliptical orbit of the Earth and the attenuation by the Earth's atmosphere have to be taken into account. The extraterrestrial solar illuminance (Eext), corrected for the elliptical orbit by using the day number of the year (dn), is given by[6]
$E_{\rm ext}= E_{\rm sc} \cdot \left(1+0.033412 \cdot \cos\left(2\pi\frac{{\rm dn}-3}{365}\right)\right),$
where dn=1 on January 1; dn=2 on January 2; dn=32 on February 1, etc. In this formula dn-3 is used, because in modern times Earth's perihelion, the closest approach to the Sun and therefore the maximum Eext occurs around January 3 each year. The value of 0.033412 is determined knowing that the ratio between the perihelion (0.98328989 AU) squared and the aphelion (1.01671033 AU) squared should be approximately 0.935338.
The solar illuminance constant (Esc) is equal to 128×10³ lx. The direct normal illuminance (Edn), corrected for the attenuating effects of the atmosphere, is given by:
$E_{\rm dn}=E_{\rm ext}\,e^{-cm},$
where c is the atmospheric extinction coefficient and m is the relative optical airmass.
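A small Python sketch of these two formulas (the extinction coefficient c and airmass m below are illustrative values, not taken from the article):

```python
import numpy as np

E_SC = 128e3  # solar illuminance constant, lux

def extraterrestrial_illuminance(dn):
    """E_ext for day number dn (dn = 1 on January 1), per the formula above."""
    return E_SC * (1.0 + 0.033412 * np.cos(2.0 * np.pi * (dn - 3) / 365.0))

def direct_normal_illuminance(dn, c, m):
    """E_dn = E_ext * exp(-c*m) for extinction coefficient c and airmass m."""
    return extraterrestrial_illuminance(dn) * np.exp(-c * m)

# e.g. around the June solstice (dn = 172) with assumed c = 0.21, m = 1.5
print(direct_normal_illuminance(172, c=0.21, m=1.5))  # ~9.0e4 lux
```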
## Solar constant
The solar constant, a measure of flux density, is the amount of incoming solar electromagnetic radiation per unit area that would be incident on a plane perpendicular to the rays, at a distance of one astronomical unit (AU) (roughly the mean distance from the Sun to the Earth). The "solar constant" includes all types of solar radiation, not just the visible light. Its average value was thought to be approximately 1.366 kW/m²,[7] varying slightly with solar activity, but recent recalibrations of the relevant satellite observations indicate a value closer to 1.361 kW/m² is more realistic.[8]
## Total (TSI) and spectral solar irradiance (SSI) upon Earth
Total Solar Irradiance upon Earth (TSI) was earlier measured by satellite to be roughly 1.366 kilowatts per square meter (kW/m²),[7][9][10] but most recently NASA cites TSI as "1361 W/m² as compared to ~1366 W/m² from earlier observations [Kopp et al., 2005]", based on regular readings from NASA's Solar Radiation and Climate Experiment (SORCE) satellite, active since 2003,[11] noting that this "discovery is critical in examining the energy budget of the planet Earth and isolating the climate change due to human activities." Furthermore, the Spectral Irradiance Monitor (SIM) has found in the same period that spectral solar irradiance (SSI) at UV (ultraviolet) wavelengths corresponds in a less clear, and probably more complicated, fashion with Earth's climate responses than earlier assumed, fueling broad avenues of new research in "the connection of the Sun and stratosphere, troposphere, biosphere, ocean, and Earth's climate".[11]
## Intensity in the Solar System
Different bodies of the Solar System receive light of an intensity inversely proportional to the square of their distance from Sun. A rough table comparing the amount of solar radiation received by each planet in the Solar System follows (from data in [1]):
| Planet | Perihelion – aphelion distance (AU) | Maximum – minimum irradiance (W/m²) |
|---|---|---|
| Mercury | 0.3075 – 0.4667 | 14,446 – 6,272 |
| Venus | 0.7184 – 0.7282 | 2,647 – 2,576 |
| Earth | 0.9833 – 1.017 | 1,413 – 1,321 |
| Mars | 1.382 – 1.666 | 715 – 492 |
| Jupiter | 4.950 – 5.458 | 55.8 – 45.9 |
| Saturn | 9.048 – 10.12 | 16.7 – 13.4 |
| Uranus | 18.38 – 20.08 | 4.04 – 3.39 |
| Neptune | 29.77 – 30.44 | 1.54 – 1.47 |
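The table rows follow from the inverse-square law; here is a short sketch reproducing a few of them (using the older ~1,366 W/m² solar constant at 1 AU, which the table values appear to be based on):

```python
S0 = 1366.0  # solar constant at 1 AU, W/m² (older value; see the text above)

def irradiance(d_au):
    """Solar irradiance at distance d_au (in AU) by the inverse-square law."""
    return S0 / d_au**2

for planet, (peri, aph) in {
    "Mercury": (0.3075, 0.4667),
    "Earth":   (0.9833, 1.017),
    "Saturn":  (9.048, 10.12),
    "Neptune": (29.77, 30.44),
}.items():
    print(f"{planet}: {irradiance(peri):.1f} – {irradiance(aph):.1f} W/m²")
# Mercury: 14445.8 – 6271.6 W/m², matching the table to rounding
```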
The actual brightness of sunlight that would be observed at the surface also depends on the presence and composition of an atmosphere. For example, Venus' thick atmosphere reflects more than 60% of the solar light it receives. The actual illumination of the surface is about 14,000 lux, comparable to that on Earth "in the daytime with overcast clouds".[12]
Sunlight on Mars would be more or less like daylight on Earth seen through sunglasses, and as can be seen in the pictures taken by the rovers, there is enough diffuse sky radiation that shadows would not seem particularly dark. Thus it would look and "feel" very much like Earth daylight.
For comparison purposes, sunlight on Saturn is slightly brighter than Earth sunlight at the average sunset or sunrise (see daylight for comparison table). Even on Pluto the sunlight would still be bright enough to almost match the average living room. To see sunlight as dim as full moonlight on the Earth, a distance of about 500 AU (~69 light-hours) is needed; there are only a handful of objects in the solar system known to orbit farther than such a distance, among them 90377 Sedna and (87269) 2000 OO67.
## Surface illumination
The spectrum of surface illumination depends upon solar elevation due to atmospheric effects, with the blue spectral component from atmospheric scatter dominating during twilight before and after sunrise and sunset, respectively, and red dominating during sunrise and sunset. These effects are apparent in natural light photography where the principal source of illumination is sunlight as mediated by the atmosphere.
According to Craig Bohren, "preferential absorption of sunlight by ozone over long horizon paths gives the zenith sky its blueness when the sun is near the horizon".[13]
See diffuse sky radiation for more details.
## Climate effects
On Earth, solar radiation is obvious as daylight when the sun is above the horizon. This is during daytime, and also in summer near the poles at night, but not at all in winter near the poles. When the direct radiation is not blocked by clouds, it is experienced as sunshine, combining the perception of bright white light (sunlight in the strict sense) and warming. The warming on the body, the ground and other objects depends on the absorption (electromagnetic radiation) of the electromagnetic radiation in the form of heat.
The amount of radiation intercepted by a planetary body varies inversely with the square of the distance between the star and the planet. The Earth's orbit and obliquity change with time (over thousands of years), sometimes forming a nearly perfect circle, and at other times stretching out to an orbital eccentricity of 5% (currently 1.67%). The total insolation remains almost constant due to Kepler's second law,
$\tfrac{2A}{r^2}dt = d\theta,$
where $A$ is the "areal velocity" invariant. That is, the integration over the orbital period (also invariant) is a constant.
$\int_{0}^{T} \tfrac{2A}{r^2}dt = \int_{0}^{2\pi} d\theta = \mathrm{constant}$
If we assume the solar radiation power $P$ as a constant over time and the solar irradiation given by the inverse-square law, we obtain also the average insolation as a constant.
But the seasonal and latitudinal distribution and intensity of solar radiation received at the Earth's surface also varies.[14] For example, at latitudes of 65 degrees the change in solar energy in summer & winter can vary by more than 25% as a result of the Earth's orbital variation. Because changes in winter and summer tend to offset, the change in the annual average insolation at any given location is near zero, but the redistribution of energy between summer and winter does strongly affect the intensity of seasonal cycles. Such changes associated with the redistribution of solar energy are considered a likely cause for the coming and going of recent ice ages (see: Milankovitch cycles).
## Past variations in solar irradiance
Space-based observations of solar irradiance started in 1978. These measurements show that the solar constant is not constant. It varies with the 11-year sunspot solar cycle. When going further back in time, one has to rely on irradiance reconstructions, using sunspots for the past 400 years or cosmogenic radionuclides for going back 10,000 years. Such reconstructions have been done.[15][16][17][18] These studies show that solar irradiance varies with distinct periodicities such as 11 years (Schwabe cycle), 88 years (Gleissberg cycle), 208 years (DeVries cycle) and 1,000 years (Eddy cycle).
## Life on Earth
This short film explores the vital connection between Earth and the Sun.
The existence of nearly all life on Earth is fueled by light from the sun. Most autotrophs, such as plants, use the energy of sunlight, combined with carbon dioxide and water, to produce simple sugars—a process known as photosynthesis. These sugars are then used as building blocks and in other synthetic pathways which allow the organism to grow.
Heterotrophs, such as animals, use light from the sun indirectly by consuming the products of autotrophs, either by consuming autotrophs, by consuming their products or by consuming other heterotrophs. The sugars and other molecular components produced by the autotrophs are then broken down, releasing stored solar energy, and giving the heterotroph the energy required for survival. This process is known as cellular respiration.
In prehistory, humans began to further extend this process by putting plant and animal materials to other uses. They used animal skins for warmth, for example, or wooden weapons to hunt. These skills allowed humans to harvest more of the sunlight than was possible through glycolysis alone, and human population began to grow.
During the Neolithic Revolution, the domestication of plants and animals further increased human access to solar energy. Fields devoted to crops were enriched by inedible plant matter, providing sugars and nutrients for future harvests. Animals which had previously only provided humans with meat and tools once they were killed were now used for labour throughout their lives, fueled by grasses inedible to humans.
The more recent discoveries of coal, petroleum and natural gas are modern extensions of this trend. These fossil fuels are the remnants of ancient plant and animal matter, formed using energy from sunlight and then trapped within the earth for millions of years. Because the stored energy in these fossil fuels has accumulated over many millions of years, they have allowed modern humans to massively increase the production and consumption of primary energy. As the amount of fossil fuel is large but finite, this cannot continue indefinitely, and various theories exist as to what will follow this stage of human civilization (e.g. alternative fuels, Malthusian catastrophe, new urbanism, peak oil).
## Cultural aspects
Claude Monet: Le déjeuner sur l'herbe
The effect of sunlight is relevant to painting, evidenced for instance in works of Claude Monet on outdoor scenes and landscapes.
Many people find direct sunlight to be too bright for comfort, especially when reading from white paper upon which the sun is directly shining. Indeed, looking directly at the sun can cause long-term vision damage. To compensate for the brightness of sunlight, many people wear sunglasses. Cars, many helmets and caps are equipped with visors to block the sun from direct vision when the sun is at a low angle. Sunshine is often blocked from entering buildings through the use of walls, window blinds, awnings, shutters or curtains, or by nearby shade trees.
In colder countries, many people prefer sunnier days and often avoid the shade. In hotter countries the converse is true; during the midday hours many people prefer to stay inside to remain cool. If they do go outside, they seek shade which may be provided by trees, parasols, and so on.
In Hinduism the sun is considered to be a god as it is the source of life and energy on earth.
### Sunbathing
Sunbathing is a popular leisure activity in which a person sits or lies in direct sunshine. People often sunbathe in comfortable places where there is ample sunlight. Some common places for sunbathing include beaches, open air swimming pools, parks, gardens, and sidewalk cafés. Sunbathers typically wear limited amounts of clothing or some simply go nude. For some, an alternative to sunbathing is the use of a sunbed that generates ultraviolet light and can be used indoors regardless of outdoor weather conditions and amount of sunlight.
For many people with pale or brownish skin, one purpose for sunbathing is to darken one's skin color (get a sun tan) as this is considered in some cultures to be beautiful, associated with outdoor activity, vacations/holidays, and health. Some people prefer naked sunbathing so that an "all-over" or "even" tan can be obtained, sometimes as part of a specific lifestyle.
For people suffering from psoriasis, sunbathing is an effective way of healing the symptoms.
Skin tanning is achieved by an increase in the dark pigment melanin, produced by skin cells called melanocytes; it is an automatic response mechanism of the body to sufficient exposure to ultraviolet radiation from the sun or from artificial sunlamps. Thus, the tan gradually disappears with time, when one is no longer exposed to these sources.
## Effects on human health
The body produces vitamin D from sunlight (specifically from the UVB band of ultraviolet light), and excessive seclusion from the sun can lead to deficiency unless adequate amounts are obtained through diet.
Sunburn can have mild to severe inflammatory effects on skin; this can be avoided by using a proper sunscreen cream or lotion, or by gradually building up melanin with increasing exposure. Another detrimental effect of UV exposure is accelerated skin aging (also called skin photodamage), which produces a difficult-to-treat cosmetic effect. Some people are concerned that ozone depletion is increasing the incidence of such health hazards: a 10% decrease in ozone could cause a 25% increase in skin cancer.[19]
A lack of sunlight, on the other hand, is considered one of the primary causes of seasonal affective disorder (SAD), a serious form of the "winter blues". SAD occurrence is more prevalent in locations further from the tropics, and most of the treatments (other than prescription drugs) involve light therapy, replicating sunlight via lamps tuned to specific wavelengths of visible light, or full-spectrum bulbs.
A recent study indicates that more exposure to sunshine early in a person's life is associated with a lower risk of multiple sclerosis (MS) later in life.[20]
## References
2. "NASA: The 8-minute travel time to Earth by sunlight hides a thousand-year journey that actually began in the core". NASA, sunearthday.nasa.gov. Retrieved 2012-02-12.
3. NASA Solar System Exploration – Sun: Facts & Figures. Retrieved 27 April 2011. "Effective Temperature ... 5777 K".
4. "The Multispectral Sun". National Earth Science Teachers Association, Windows2universe.org. 2007-04-18. Retrieved 2012-02-12.
5. Naylor, Mark & Farmer, Kevin C. (1995). "Sun damage and prevention". Electronic Textbook of Dermatology. The Internet Dermatology Society. Retrieved 2008-06-02.
6. Kandilli, C. & Ulgen, K. "Solar Illumination and Estimating Daylight Availability of Global Solar Irradiance". Energy Sources.
7. "Satellite observations of total solar irradiance". Acrim.com. Retrieved 2012-02-12.
8. Kopp, G. & Lean, J. (2011). "A new, lower value of total solar irradiance: Evidence and climate significance". Geophys. Res. Lett. 38, L01706. doi:10.1029/2010GL045777.
9. Willson, R. C. & Mordvinov, A. V. (2003). "Secular total solar irradiance trend during solar cycles 21–23". Geophys. Res. Lett. 30(5), 1199. doi:10.1029/2002GL016038.
11. "NASA Goddard Space Flight Center: Solar Radiation". Atmospheres.gsfc.nasa.gov. 2012-02-08. Retrieved 2012-02-12.
12. "The Unveiling of Venus: Hot and Stifling". Science News 109(25), 388 (1976-06-19). JSTOR 3960800. "100 watts per square meter ... 14,000 lux ... corresponds to ... daytime with overcast clouds".
14. "Graph of variation of seasonal and latitudinal distribution of solar radiation". Museum.state.il.us. 2007-08-30. Retrieved 2012-02-12.
15. Wang et al. (2005). Astrophys. J. 625(1), 522–538. doi:10.1086/429689.
16. Steinhilber et al. (2009). Geophys. Res. Lett. 36, L19704. http://dx.doi.org/10.1051/0004-6361/200811446.
17. Vieira et al. (2011). Astronomy & Astrophysics 531, A6. doi:10.1051/0004-6361/201015843.
18. Steinhilber et al. (2012). Proc. Natl. Acad. Sci., Early Edition. doi:10.1073/pnas.1118965109.
19. Ozone Hole Consequences. Retrieved 30 October 2008.
20. "Neurology 2007;69:381–388". Neurology.org. 2007-07-24. Retrieved 2012-02-12.
|
# Entanglement and entropy squeezing in the system of two qubits interacting with a two-mode field in the context of power-law potentials
## Abstract
We study the dynamics of two non-stationary qubits, allowing for dipole-dipole and Ising-like interplays between them, coupled to quantized fields in the framework of two-mode pair coherent states of power-law potentials. We focus on three particular cases of the coherent states through the exponent parameter, taking infinite square, triangular and harmonic potential wells. We examine the possible effects of such features on the evolution of some quantities of current interest, such as the population inversion, entanglement among subsystems and entropy squeezing. We show how these quantities can be affected by the qubit-qubit interaction and the exponent parameter during the time evolution for both stationary and non-stationary qubits. The obtained results suggest insights into the capability of quantum systems composed of non-stationary qubits to maintain resources in comparison with stationary qubits.
## Introduction
Atom–photon interactions offer a practical way to manipulate and generate quantum entanglement, coherence and squeezing. The two-level atom inside a cavity field is the simplest case of the atom–photon interaction, described by the famous Jaynes–Cummings model (JCM)1. Since its introduction, the model has received great attention in the fields of quantum optics and laser physics for both experimental and theoretical studies2,3,4,5,6,7,8,9,10,11,12,13,14,15; this interest is partly due to its apparent simplicity and, most importantly, to its remarkable predictions about the dynamical characteristics of subsystems. The model has inspired a wide range of generalizations linked to more general situations with realistic circumstances. Most of them concentrate on multiphoton transitions and multiple fields16,17, and on noninteracting or interacting sets of atoms in the same cavity18,19, described by the famous Tavis–Cummings model (TCM)20. In recent years, heightened interest has been paid to the decoherence and quantum entanglement properties of light–matter interaction models for bipartite and multipartite systems interacting with a cavity field and also with each other through dipole-dipole and Ising-like interactions21,22,23. In this regard, an important application focused on the resonant two-qubit JCM has been considered with the aim of executing quantum protocols for unambiguous Bell-state discrimination of two qubits24.
One of the principal aspects of quantum physics is the quantum entanglement between two spatially separated objects sharing a common non-local wave function. Entanglement, as a physical resource, has been used to implement various tasks in information processing, communication and quantum computing25,26,27, including information entropy27,28, the behavior of charge oscillations29, quantum cryptography30, etc. Several efforts have been carried out to quantify the entanglement between atoms and fields. Entanglement between photons and qubits has so far been studied at optical frequencies with single atoms31 and electron spins32,33, to interface stationary and flying qubits34, to implement quantum communication35 and to realize nodes for quantum repeaters36 and networks37. Rapid advances in superconducting quantum circuits have also made it possible to measure quantum correlations between artificial atoms and itinerant photons38,39,40,41,42.
The concept of squeezed states has been widely examined for various radiation field schemes. Squeezing in a quantized electromagnetic field has received considerable attention and produced intriguing works in the literature43. This concept has been extended to atomic systems with definitions analogous to those for radiation fields44,45,46,47. The atom–photon interaction was used to determine the conditions under which the squeezing effect is present48. Atomic squeezing in three-level atoms placed in a two-mode cavity has been analyzed in the presence of the dipole-dipole interaction49. The squeezed atomic model was considered on the basis of Raman scattering with a strong laser pulse, to describe the transfer of the change in correlation between the atom and light50. The effect of squeezing in the cases of nonlinear and optimal spin states was studied51,52,53. In addition, an experimental implementation for a set of V-type atoms was considered54,55. In all these cases, atomic squeezing has been investigated in the context of the Heisenberg uncertainty relation (HUR). However, the HUR cannot provide enough information on atomic squeezing, especially when the atomic inversion takes a zero value56. This difficulty was overcome by applying the entropic uncertainty relation (EUR)57.
This work is in keeping with the aforementioned spirit of putting forward another extension of the TCM, namely an interactive version of it describing two identical non-stationary qubits. The qubits interact with each other via dipole-dipole and Ising-like interactions and with a two-mode quantized field in the framework of pair coherent states of power-law potentials (PCSPLPs). A characteristic feature of the proposed model is that the coupling between the qubit system and the field is a time-dependent function, and the field is associated with PLPs that prescribe the level energy differences. It is worth noting that the set of results reported here, regarding the aforesaid nonlinear coupling scheme, may also be of some relevance in light of novel experimental and theoretical research on the optical simulation of the Tavis–Cummings and Rabi models in current designs of architectures intended for quantum computation and communication. Motivated by these considerations, we strive to understand how the time-dependent coupling and the exponent parameter influence the dynamics of qubits–field entanglement, qubit–qubit entanglement and qubit squeezing in the presence of the dipole-dipole and Ising-like interactions.
The manuscript is organized as follows. In “Physical model”, the Hamiltonian of the system and the general solution for a two-qubit system coupled to PCSPLPs with dipole-dipole and Ising interactions are introduced. In “Measures and numerical results”, we present the numerical results on the possible effects of such features on the evolution of some quantities of current interest, such as the population inversion, entanglement among subsystems and entropy squeezing. In “Conclusion”, some conclusions are given.
## Physical model
Let the Hamiltonian model of the system under study be described as follows:
\begin{aligned} H=H_F+H_{A}+H_{AF}+H_{AI}, \end{aligned}
(1)
where the constituent Hamiltonians are explicitly given by
\begin{aligned} H_F= & {} \sum _{L=A,B}^{{}} \hbar \omega _{L}{\hat{n}}_{L},\end{aligned}
(2)
\begin{aligned} H_{A}= & {} \sum _{L=A,B}^{{}}\frac{\hbar \Omega }{2} {\hat{\sigma }}_{z}^{(L)}, \end{aligned}
(3)
\begin{aligned} H_{AF}= & {} \sum _{L=A,B}^{{}}\hbar \lambda (t)\left( {\hat{a}}{\hat{b}}\ {\hat{\sigma }}_{+}^{(L)}+ {\hat{a}}^{\dagger }{\hat{b}}^{\dagger }\ {\hat{\sigma }}_{-}^{(L)}\right) , \end{aligned}
(4)
\begin{aligned} H_{AI}&= {} \hbar \lambda _{D}\left( {\hat{\sigma }}_{+}^{(A)}{\hat{\sigma }}_{-}^{(B)}+ {\hat{\sigma }}_{-}^{(A)}{\hat{\sigma }}_{+}^{(B)}\right) +\hbar \lambda _{S}{\hat{\sigma }}_{Z}^{(A)}{\hat{\sigma }} _{Z}^{(B)}. \end{aligned}
(5)
Here, $$H_F$$ and $$H_A$$ describe the energy operators of the two-mode field and qubits, respectively, the interplay between the qubit system and the quantized field is prescribed by $$H_{AF}$$, and $$H_{AI}$$ is the qubit-qubit interaction. The single field mode frequency is $$\omega _L$$, $$\Omega$$ is the qubit transition frequency, $$\lambda (t)$$ is the time-dependent coupling term, which is considered to be the same for both qubits, and $$\lambda _D$$ and $$\lambda _S$$ are the dipole-dipole and Ising parameters, respectively. The photon number operators are $${\hat{n}}_A={\hat{a}}^{\dagger }{\hat{a}}$$ and $${\hat{n}}_{B}={\hat{b}}^{\dagger }{\hat{b}}$$, where $${\hat{a}}^{\dagger }$$ ($${\hat{b}}^{\dagger }$$) and $${\hat{a}}$$ ($${\hat{b}}$$) are, respectively, the photon creation and annihilation operators for the field mode A (B), such that $$[{\hat{X}},{\hat{X}}^{\dagger }]={\hat{I}}$$ ($$X=a,b$$); on the other side, $${\hat{\sigma }}_{+}^{(L)}$$ ($${\hat{\sigma }}_{-}^{(L)}$$) and $${\hat{\sigma }}_{z}^{(L)}$$ $$(L=A,B)$$ denote the standard qubit transition operators satisfying the commutation relations $$[{\hat{\sigma }}_{z}^{(L)},{\hat{\sigma }}_{\pm }^{(L)}]=\pm 2{\hat{\sigma }}_{\pm }^{(L)}$$, $$[{\hat{\sigma }}_{+}^{(L)},{\hat{\sigma }}_{-}^{(L)}]={\hat{\sigma }}_{z}^{(L)}$$.
Large varieties of quantum systems can be described by PLPs58,59,60,61,62 through a convenient choice of the exponent parameter, denoted by $$\ell$$. This parameter dictates and characterizes the level energy differences. For $$\ell >2$$, the level energy differences $$\Delta E_n$$ increase with the energy level n, and inversely so for $$\ell <2$$. For $$\ell =2$$, all $$\Delta E_n$$ are independent of n, the energy levels being equally spaced. Here, we introduce quantized fields for which the potentials and their corresponding energies are given by63
\begin{aligned} U(x,\ell )=U_0\left| {x\over a}\right| ^\ell ,\qquad E_n=\left( n+{\varphi \over 4}\right) ^{2\ell /(\ell +2)}, \end{aligned}
(6)
where $$U_0$$ ($$a$$) sets the scale of energy (length).
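As a quick numerical illustration of the level spacings implied by Eq. (6), the following minimal sketch evaluates $$\Delta E_n=E_{n+1}-E_n$$ for the three wells considered below (the value of n_max and the use of $$\ell =50$$ as a stand-in for the infinite square well are our illustrative assumptions):

```python
# Level spacings Delta E_n = E_{n+1} - E_n from Eq. (6),
# E_n = (n + phi/4)^(2*l/(l+2)).
import numpy as np

def spacings(ell, phi, n_max=5):
    n = np.arange(n_max + 1)
    E = (n + phi / 4) ** (2 * ell / (ell + 2))
    return np.diff(E)

print(spacings(2, 2))    # harmonic well: equally spaced levels
print(spacings(1, 3))    # triangular well: spacing decreases with n
print(spacings(50, 4))   # large ell ~ infinite square well: spacing grows
```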
Let the initial state be such that both qubits are in their excited state, $$|++\rangle$$, and the radiation field is in the two-mode PCSPLPs, $$|z,\ell ,q \rangle$$,
\begin{aligned} |\psi (0)\rangle =|++\rangle \otimes |z,\ell ,q \rangle , \end{aligned}
(7)
with the following correspondence63,64
\begin{aligned} |z,\ell ,q \rangle= & {} \left( \sum _{n=0}^{\infty }{\frac{|z|^{2n}}{\upsilon (n,\ell )\upsilon (n+q,\ell )}}\right) ^{-{\frac{1}{2}}}\sum _{n=0}^{\infty } \frac{z^{n}}{\sqrt{\upsilon (n,\ell )\upsilon (n+q,\ell )}}|n,n+q\rangle \nonumber \\= & {} \sum _{n=0}^{\infty }Q_{n}|n,n+q\rangle , \end{aligned}
(8)
where
\begin{aligned} \upsilon (n,\ell )=\prod _{k=1}^{n}\left\{ \left( k+{\frac{\varphi }{4}} \right) ^{{\frac{2\ell }{\ell +2}}}-\left( {\frac{\varphi }{4}}\right) ^{{ \frac{2\ell }{\ell +2}}}\right\} ,\quad \upsilon (0,\ell )=1. \end{aligned}
(9)
Given these initial conditions, we work out that the wave function $$|\psi (t)\rangle$$ takes the form
\begin{aligned} |\psi (t)\rangle= & {} \sum _{n=0}^{\infty }(X_{1}(n,t)|e,e\rangle |n,n+q\rangle +X_{2}(n,t)|e,g\rangle |n+1,n+q+1\rangle \nonumber \\&+\, X_{3}(n,t)|g,e\rangle |n+1,n+q+1\rangle +X_{4}(n,t)|g,g\rangle |n+2,n+q+2\rangle ). \end{aligned}
(10)
It follows straightforwardly from the Schrödinger equation that the time-dependent coefficients can be determined; tackling this problem entails the numerical solution of the system of differential equations
\begin{aligned} i{dX\over dt}=\Lambda X, \end{aligned}
(11)
where
\begin{aligned} X=\left( \begin{array}{c} X_1 \\ X_2 \\ X_3 \\ X_4 \\ \end{array} \right) ,\quad \Lambda =\left( \begin{array}{cccc} 0 &{} \lambda (t)\nu _1(n) &{} \lambda (t)\nu _1(n) &{} 0 \\ \lambda (t)\nu _1(n) &{} \lambda _S &{} \lambda _D &{} \lambda (t)\nu _2(n) \\ \lambda (t)\nu _1(n) &{} -\lambda _D &{} -\lambda _S &{} \lambda (t)\nu _2(n) \\ 0 &{} \lambda (t)\nu _2(n) &{} \lambda (t)\nu _2(n) &{} 0 \\ \end{array} \right) , \end{aligned}
(12)
with
\begin{aligned} \nu _{j}(n)=\sqrt{(n+q+j)(n+j)}. \end{aligned}
(13)
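A minimal numerical sketch of Eqs. (11)–(13) for a single photon-number block n is given below; the parameter values and the modulation form lam(t) = lam0·cos(eps·t) are illustrative assumptions, not values taken from the paper:

```python
# Integrate i dX/dt = Lambda(t) X, Eq. (11), for one photon-number block n.
import numpy as np
from scipy.integrate import solve_ivp

lam0, eps = 1.0, 0.5        # coupling amplitude and modulation rate (assumed)
lam_D, lam_S = 0.3, 0.2     # dipole-dipole and Ising parameters (assumed)
n, q = 0, 4                 # photon number and mode difference (assumed)

def nu(j):
    # nu_j(n) = sqrt((n+q+j)(n+j)), Eq. (13)
    return np.sqrt((n + q + j) * (n + j))

def lam(t):
    # time-dependent qubit-field coupling (assumed form)
    return lam0 * np.cos(eps * t)

def rhs(t, X):
    L = lam(t)
    Lam = np.array([
        [0.0,        L * nu(1),  L * nu(1),  0.0      ],
        [L * nu(1),  lam_S,      lam_D,      L * nu(2)],
        [L * nu(1), -lam_D,     -lam_S,      L * nu(2)],
        [0.0,        L * nu(2),  L * nu(2),  0.0      ],
    ])
    return -1j * (Lam @ X)  # Schroedinger equation: i dX/dt = Lambda X

X0 = np.array([1, 0, 0, 0], dtype=complex)  # qubits initially in |e,e>
sol = solve_ivp(rhs, (0.0, 20.0), X0, dense_output=True, rtol=1e-9, atol=1e-9)
```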
The density matrix of the two-qubit system can be obtained by taking the trace over the radiation field
\begin{aligned} {\hat{\rho }}_{AB}(t)=Tr_{F}{\hat{\rho }}(t),\quad \text {with}\quad {\hat{\rho }}(t)=|\psi (t)\rangle \langle \psi (t)|, \end{aligned}
(14)
and for a single qubit system
\begin{aligned} {\hat{\rho }}_{A(B)}(t)&= {} Tr_{B(A)}{\hat{\rho }}_{AB}(t). \end{aligned}
(15)
Based on this set of results, we are able to examine the influence of the PCSPLPs, considering the cases of an infinite square-well ($$\ell \rightarrow \infty$$, $$\varphi =4$$) potential, a triangular well ($$\ell =1$$, $$\varphi =3$$) and a harmonic oscillator ($$\ell =2$$, $$\varphi =2$$), together with the time-dependent coupling, on some properties of physical interest relating to the time evolution of the qubit system in the presence of the dipole–dipole and Ising-like interactions: the population inversion, the qubits–field entanglement, the qubit–qubit entanglement dynamics based on the negativity, and the qubit squeezing with the help of the EUR.
## Measures and numerical results
### Population inversion
Now we are ready to consider the population inversion and discuss the collapse and revival phenomena of the system governed by Hamiltonian (1). The population inversion is the difference between the probabilities of finding the qubits in the excited and ground states. The population inversion W(t) of the qubits is given by
\begin{aligned} W(t)=\rho _{11}^{AB}+\rho _{22}^{AB}-\rho _{33}^{AB}-\rho _{44}^{AB}. \end{aligned}
(16)
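Continuing the numerical sketch after Eq. (13), the contribution of one n-block to W(t) follows directly from the squared moduli of the coefficients; the full inversion sums such blocks weighted by $$|Q_n|^2$$ from Eq. (8):

```python
# Population inversion of Eq. (16) from the integrated coefficients;
# reuses `sol` from the sketch after Eq. (13).
import numpy as np

ts = np.linspace(0.0, 20.0, 400)
X = sol.sol(ts)                              # shape (4, len(ts))
W = (np.abs(X[0])**2 + np.abs(X[1])**2
     - np.abs(X[2])**2 - np.abs(X[3])**2)    # single-block W(t)
```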
In Fig. 1 the behavior of the function W(t) is drawn with fixed parameters $$z=8$$ and $$q=4$$. For the harmonic oscillator ($$\ell =2$$), neglecting the qubit motion (i.e. for a time-independent coupling), we find that the function W(t) ranges between $$-1$$ and 1 around the horizontal axis. The collapses occur at $$\frac{n\pi }{2}$$ while the revivals occur at $$n\pi$$. We also note that there are small-amplitude oscillations between the collapse periods, as observed in Fig. 1a. After taking the time dependence into account, we notice that the collapse periods double, whereas the revival periods halve, as seen in Fig. 1b. For the triangular well ($$\ell =1$$, $$\varphi =3$$), excluding the time dependence, we find that the revival periods are shortened and the fluctuations between the collapse periods seen in the previous case fade. We also note that the amplitude of the oscillations expands and becomes more regular compared to the previous case, see Fig. 1c. After adding the dependence on time, we find once again that the collapse periods increase while the revival periods decrease, consistent with the previous case, see Fig. 1d. For the infinite square-well ($$\ell \rightarrow \infty$$, $$\varphi =4$$), the regular oscillations become chaotic and the amplitude of the oscillations is reduced. The collapse and revival phenomena achieved in the previous cases vanish in the case of the infinite square-well, as shown in Fig. 1e,f. In Fig. 2 we show the time evolution of the population inversion in the presence of the dipole-dipole and Ising interactions. From the figure, it is clear that the dynamical behavior of W is affected by the parameters $$\lambda _D$$ and $$\lambda _S$$ with respect to the physical parameters of the model. We observe that the qubit–qubit interactions damage the periodicity of W, accompanied by an enhancement of the oscillations and a change in the time intervals in which the revival and collapse phenomena occur. Moreover, the presence of these interactions decreases the effect of the qubit–field coupling parameter $$\lambda$$.
### Qubits–field entanglement
To quantify the degree of the entanglement of the qubits–field state, we use the von Neumann entropy defined by
\begin{aligned} S_{AB}(t)=-\text {Tr}\left( \rho _{AB}\ln \rho _{AB}\right) . \end{aligned}
(17)
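Numerically, $$S_{AB}(t)$$ is conveniently evaluated from the eigenvalues of the reduced density matrix; a minimal sketch:

```python
# Von Neumann entropy S = -Tr(rho ln rho), Eq. (17), from eigenvalues;
# eigenvalues below a small cutoff are dropped, since x ln x -> 0 as x -> 0.
import numpy as np

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log(evals)))
```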
In Fig. 3 the behavior of the von Neumann entropy is drawn with the same parameters as above. For the harmonic oscillator ($$\ell =2$$), excluding the time dependence, we see the entanglement fluctuating regularly between weak and strong; the function $$S_{AB}(t)$$ reaches its smallest values at the extreme points of the population inversion, while it reaches its maximum values at the centers of the collapse regions, as seen in Fig. 3a. The fluctuations decrease and the function $$S_{AB}(t)$$ regularly reaches pure states ($$S_{AB}(t)=0$$) after taking the time dependence into account, as observed in Fig. 3b. For the triangular well ($$\ell =1$$, $$\varphi =3$$) and in the absence of the dependence on time, the fluctuations slow down, which means that the entanglement becomes weak. The function $$S_{AB}(t)$$ reaches its maximum and minimum values regularly compared to the previous case, see Fig. 3c. In general, the entanglement between the parts of the system increases and the small values of the function ($$S_{AB}(t)=0$$) are reduced after taking the time dependence into account, as seen in Fig. 3d. For the infinite square-well ($$\ell \rightarrow \infty$$, $$\varphi =4$$), the minimum values are raised and the function $$S_{AB}(t)$$ no longer reaches the pure state. The fluctuations of the function $$S_{AB}(t)$$ increase in the case of the infinite square-well and the entanglement becomes strong compared to the previous two cases, see Fig. 3e. The oscillations of the function $$S_{AB}(t)$$ become regular and reach the pure state periodically after adding the dependence on time in the interaction, as observed in Fig. 3f. In order to observe how the dipole-dipole and Ising interactions affect the time variation of the qubits–field entanglement, in Fig. 4 we show the time evolution of the function $$S_{AB}(t)$$ for different values of the model parameters. We observe that the amount of entanglement is strongly affected by the qubit–qubit interaction during the time evolution. The presence of the parameters $$\lambda _D$$ and $$\lambda _S$$ enhances the oscillations of the function $$S_{AB}$$ and increases its value during the evolution. On the other hand, the existence of these parameters reduces the effect of the qubit–field coupling parameter $$\lambda$$ on the behavior of the entanglement.
### Qubit–qubit entanglement
In order to quantify the qubit-qubit entanglement, we use the negativity measure introduced as65,66:
\begin{aligned} N_{AB}=\frac{1}{2}\left\{ Tr\left[ \sqrt{\rho _{AB}^{T_{q}}(\rho _{AB}^{T_{q}})^{*}}\right] -1\right\} , \end{aligned}
(18)
where $$\rho _{AB}^{T_{q}}$$ is the partial transpose of $$\rho _{AB}$$ with respect to the qubit subsystem q, defined by
\begin{aligned} \left\langle k_{q},j_{f}|\rho ^{T_{q}}|r_{q},l_{f}\right\rangle =\left\langle r_{q},j_{f}|\rho |k_{q},l_{f}\right\rangle . \end{aligned}
(19)
The negativity vanishes for a separable state and reaches its maximum value for maximally entangled (EPR) states.
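For completeness, a minimal numerical sketch of Eqs. (18)–(19) for a 4×4 two-qubit density matrix, evaluating the trace norm through singular values:

```python
# Negativity N = (||rho^T_q||_1 - 1)/2 via the partial transpose over
# the first qubit of a 4x4 density matrix.
import numpy as np

def negativity(rho_AB):
    r = rho_AB.reshape(2, 2, 2, 2)                 # indices (k_A, k_B, r_A, r_B)
    r_pt = r.transpose(2, 1, 0, 3).reshape(4, 4)   # transpose qubit A indices
    trace_norm = np.linalg.svd(r_pt, compute_uv=False).sum()
    return 0.5 * (trace_norm - 1.0)

# sanity check: a Bell state has the maximal two-qubit negativity, 1/2
bell = np.zeros((4, 4)); bell[[0, 0, 3, 3], [0, 3, 0, 3]] = 0.5
assert abs(negativity(bell) - 0.5) < 1e-12
```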
In Fig. 5, the negativity is plotted to illustrate the time variation of the entanglement between the two qubits under the above conditions. For the harmonic oscillator ($$\ell =2$$), the function $$N_{AB}(t)$$ generally fluctuates between the minimum (0) and the maximum (0.4), indicating a partial entanglement between the two qubits. We note that the function $$N_{AB}(t)$$ reaches its maximum values periodically at $$n\pi$$, while it reaches the separable state at some points, as shown in Fig. 5a. After adding the dependence on time, the previously chaotic oscillations become more uniform and the maximum values of $$N_{AB}(t)$$ decrease. This indicates that both the amount of entanglement and the number of separable-state points are reduced after taking the time dependence into account, as seen in Fig. 5b. For the triangular well ($$\ell =1$$, $$\varphi =3$$) and in the absence of the dependence on time, the function $$N_{AB}(t)$$ becomes more chaotic, the maximum values decrease and the entanglement becomes weak, as is evident from Fig. 5c. The negativity decreases considerably after taking the dependence on time into account and the entanglement becomes weaker than in the previous case, see Fig. 5d. For the infinite square-well ($$\ell \rightarrow \infty$$, $$\varphi =4$$) and in the absence of the dependence on time, the negativity varies between 0 and 0.4 and is more regular than in the previous cases. Minimum values are achieved over many periods, but the fluctuations are reduced compared to the previous cases, see Fig. 5e. The fluctuations of the negativity decrease and the periods of disentanglement between the parts of the system increase after taking the time dependence into account, as observed in Fig. 5f. In order to observe how the dipole-dipole and Ising interactions affect the time variation of the qubit–qubit entanglement, the numerical results for the negativity in this case are displayed in Fig. 6, where we show the negativity in terms of $$\epsilon t$$ for different values of the physical parameters. We observe that as we turn on the dipole-dipole and Ising interactions, the negativity is substantially increased at some specific times, with an enhancement of the oscillations. This can be expected from the system's Hamiltonian, whose interaction part involving the qubit operators naturally turns a separable state of the type $$|++\rangle$$ into an entangled state.
### Single qubit squeezing phenomena
The uncertainty principle, first introduced by Heisenberg, is one of the most fundamental statements of quantum theory; it sets the limits of error in the joint measurement of non-commuting operators on quantum states67,68,69. In general, for any two Hermitian operators $${\hat{A}}$$ and $${\hat{B}}$$ obeying the relation $$[{\hat{A}},{\hat{B}}]=i{\hat{C}},$$ the Heisenberg uncertainty inequality is given by
\begin{aligned} \langle (\Delta {\hat{A}})^{2}\rangle \langle (\Delta {\hat{B}})^{2}\rangle \ge \frac{1}{4}|\langle {\hat{C}}\rangle |^{2}, \end{aligned}
(20)
where $$\langle (\Delta {\hat{A}})^{2}\rangle =\langle {\hat{A}}^{2}\rangle -\langle {\hat{A}}\rangle ^{2}.$$ An important application involves the Pauli operators $${\hat{\sigma }}_{X},$$ $${\hat{\sigma }}_{Y}$$ and $${\hat{\sigma }}_{Z}$$, which describe the interaction between a two-level atom and the electromagnetic field; since $$[{\hat{\sigma }}_{X},{\hat{\sigma }}_{Y}]=i{\hat{\sigma }}_{Z},$$ the uncertainty relation can be written as $$\Delta {\hat{\sigma }}_{X}\Delta {\hat{\sigma }}_{Y}\ge \frac{1}{2}|\langle {\hat{\sigma }}_{Z}\rangle |$$.
The single-qubit entropy squeezing for the component $${\hat{\sigma }}_{\alpha }$$ is quantified by70
\begin{aligned} E_{\alpha }(t)=\delta H({\hat{\sigma }}_{\alpha })-\frac{2}{\sqrt{\delta H({\hat{\sigma }}_{z})}}, \end{aligned}
(21)
where $$\delta H({\hat{\sigma }}_{\alpha })=\exp \{H({\hat{\sigma }}_{\alpha })\}$$ and $$H({\hat{\sigma }}_{\alpha })$$ is the Shannon information entropy of the atomic operator $${\hat{\sigma }}_{\alpha }$$ ($$\alpha =x,y,z$$); squeezing in the component $${\hat{\sigma }}_{\alpha }$$ occurs when $$E_{\alpha }(t)<0$$.
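A minimal sketch of Eq. (21) for a single-qubit density matrix, using the two-outcome measurement probabilities $$P_{\pm }=(1\pm \langle {\hat{\sigma }}_{\alpha }\rangle )/2$$:

```python
# Entropy squeezing factor E_alpha = dH(sigma_alpha) - 2/sqrt(dH(sigma_z)),
# with dH = exp(H); squeezing in sigma_alpha corresponds to E_alpha < 0.
import numpy as np

PAULI = {"x": np.array([[0, 1], [1, 0]], dtype=complex),
         "y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "z": np.array([[1, 0], [0, -1]], dtype=complex)}

def shannon_H(rho, axis):
    mean = np.real(np.trace(rho @ PAULI[axis]))
    probs = np.array([(1 + mean) / 2, (1 - mean) / 2])
    probs = probs[probs > 1e-12]                 # drop zero-probability outcomes
    return float(-np.sum(probs * np.log(probs)))

def entropy_squeezing(rho, alpha):
    dH = np.exp(shannon_H(rho, alpha))
    dHz = np.exp(shannon_H(rho, "z"))
    return dH - 2.0 / np.sqrt(dHz)
```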
In Fig. 7, we display the entropy squeezing as a function of time, considering the same conditions as in the previous sections. Generally, squeezing is achieved in $$E_{X}(t)$$ and never in $$E_{Y}(t)$$. In the first case we note that squeezing occurs regularly and periodically before and after the centers of the collapse regions, as shown in Fig. 7a. The squeezing regions shrink after adding the time dependence to the interaction: the squeezing occurs at the beginning and the end of the collapse periods and disappears in the middle of these periods, as can be seen by comparing Figs. 1b and 7b. In the second case, the squeezing periods increase and the squeezing deepens, reaching $$-0.4$$ periodically at $$\frac{n\pi }{4}$$, as seen in Fig. 7c. Once again, the squeezing decreases after adding the dependence on time; it occurs at the beginning and the end of the collapse periods and disappears in the middle of these periods, see Fig. 7d. In the last case, the squeezing disappears, both with and without the time dependence, as seen in Fig. 7e,f. In order to examine the dynamical behavior of the entropy squeezing of the qubit system in the presence of the qubit–qubit interaction, the time evolution of the entropies $$E_X$$ and $$E_Y$$ versus the dimensionless quantity $$\epsilon t$$ is displayed in Fig. 8 for different values of the physical parameters of the model. The presence of the dipole–dipole and Ising interactions reduces the squeezing effect and enhances the oscillations of the functions $$E_X$$ and $$E_Y$$ during the time evolution. On the other hand, the existence of these parameters decreases the effect of the qubit–field coupling parameter $$\lambda$$ on the behavior of the entropies.
## Conclusion
In summary, we have introduced a useful model describing the dynamics of two non-stationary qubits, allowing for dipole–dipole and Ising-like interplays between them, coupled to quantized fields in the framework of two-mode pair coherent states of power-law potentials. We have considered three particular cases of the coherent states through the exponent parameter, taking infinite square, triangular and harmonic potential wells. We have examined the possible effects of such features on the evolution of some quantities of current interest, such as the population inversion, entanglement among subsystems and entropy squeezing. We have shown how these quantities can be affected by the qubit–qubit interaction and the exponent parameter during the time evolution for both stationary and non-stationary qubits. Moreover, we have explored the dependence of these quantities on the main parameters of the physical model. The obtained results suggest insights into the capability of quantum systems composed of non-stationary qubits to maintain resources in comparison with stationary qubits.
## References
1. Jaynes, E. T. & Cummings, F. W. Comparison of quantum and semiclassical radiation theories with application to the beam maser. Proc. IEEE 51, 89–109 (1963).
2. Wang, Y. et al. Enhancing atom-field interaction in the reduced multiphoton Tavis-Cummings model. Phys. Rev. A 101, 053826 (2020).
3. Fiscelli, G., Rizzuto, L. & Passante, R. Dispersion interaction between two hydrogen atoms in a static electric field. Phys. Rev. Lett. 124, 013604 (2020).
4. Hood, J. D. et al. Multichannel interactions of two atoms in an optical tweezer. Phys. Rev. Res. 2, 023108 (2020).
5. Cortiñas, R. G. et al. Laser trapping of circular Rydberg atoms. Phys. Rev. Lett. 124, 123201 (2020).
6. Chávez-Carlos, J., López-del-Carpio, B., Bastarrachea-Magnani, M. A. & Stránský, P. Quantum and classical Lyapunov exponents in atom-field interaction systems. Phys. Rev. Lett. 122, 024101 (2019).
7. Scully, M. O. & Zubairy, M. S. Quantum Optics (Cambridge University Press, Cambridge, 1997).
8. Eberly, J. H., Narozhny, N. B. & Sanchez-Mondragon, J. Periodic spontaneous collapse and revival in a simple quantum model. Phys. Rev. Lett. 44, 1323 (1980).
9. Cummings, F. W. Stimulated emission of radiation in a single mode. Phys. Rev. 140, A1051 (1965).
10. Han, Y. et al. Interacting dark states with enhanced nonlinearity in an ideal four-level tripod atomic system. Phys. Rev. A 77, 023824 (2008).
11. Baghshahi, H. R. & Tavassoly, M. K. Entanglement, quantum statistics and squeezing of two Ξ-type three-level atoms interacting nonlinearly with a single-mode field. Phys. Scr. 89, 075101 (2014).
12. Cordero, S. & Recamier, J. Selective transition and complete revivals of a single two-level atom in the Jaynes-Cummings Hamiltonian with an additional Kerr medium. J. Phys. B 44, 135502 (2011).
13. Cordero, S. & Recamier, J. Algebraic treatment of the time-dependent Jaynes-Cummings Hamiltonian including nonlinear terms. J. Phys. A 45, 385303 (2012).
14. Chaichian, M., Ellinas, D. & Kulish, P. Quantum algebra as the dynamical symmetry of the deformed Jaynes-Cummings model. Phys. Rev. Lett. 65, 980 (1990).
15. de los Santos-Sánchez, O. & Récamier, J. The f-deformed Jaynes-Cummings model and its nonlinear coherent states. J. Phys. B 45, 015502 (2012).
16. Parkins, A. S. Resonance fluorescence of a two-level atom in a two-mode squeezed vacuum. Phys. Rev. A 42, 6873 (1990).
17. Joshi, A. & Puri, R. R. Characteristics of Rabi oscillations in the two-mode squeezed state of the field. Phys. Rev. A 42, 4346 (1990).
18. Joshi, A., Puri, R. R. & Lawande, S. V. Effect of dipole interaction and phase-interrupting collisions on the collapse-and-revival phenomenon in the Jaynes-Cummings model. Phys. Rev. A 44, 2135 (1991).
19. Chilingaryan, S. A. & Rodríguez-Lara, B. M. Searching for structure beyond parity in the two-qubit Dicke model. J. Phys. A 46, 335301 (2013).
20. Tavis, M. & Cummings, F. W. Approximate solutions for an N-molecule-radiation-field Hamiltonian. Phys. Rev. 188, 692 (1969).
21. Hartmann, M. J., Brandão, F. G. S. L. & Plenio, M. B. Effective spin systems in coupled microcavities. Phys. Rev. Lett. 99, 160501 (2007).
22. Torres, J. M., Sadurni, E. & Seligman, T. H. Two interacting atoms in a cavity: Exact solutions, entanglement and decoherence. J. Phys. A 43, 192002 (2010).
23. Porras, D. & Cirac, J. I. Effective quantum spin systems with trapped ions. Phys. Rev. Lett. 92, 207901 (2004).
24. Torres, J. M., Bernad, J. Z. & Alber, G. Unambiguous atomic Bell measurement assisted by multiphoton states. Appl. Phys. B 122, 1 (2016).
25. Wang, X. & Wilde, M. M. Cost of quantum entanglement simplified. Phys. Rev. Lett. 125, 040502 (2020).
26. Klco, N. & Savage, M. J. Minimally entangled state preparation of localized wave functions on quantum computers. Phys. Rev. A 102, 012612 (2020).
27. Nielsen, M. A. & Chuang, I. L. Quantum Computation and Quantum Information, Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000).
28. Alber, G. et al. Quantum Information (Springer, Berlin, 2001), Chap. 5.
29. Benatti, F., Floreanini, R. & Realpe-Gomez, J. Entropy behaviour under completely positive maps. J. Phys. A 41, 235304 (2008).
30. Horodecki, R., Kilin, S. Y. & Kowalik, J. Quantum Cryptography and Computing: Theory and Implementation (NATO Science for Peace and Security Series, 2010).
31. Blinov, B. B., Moehring, D. L., Duan, L.-M. & Monroe, C. Observation of entanglement between a single trapped atom and a single photon. Nature 428, 153–157 (2004).
32. Togan, E. et al. Quantum entanglement between an optical photon and a solid-state spin qubit. Nature 466, 730 (2010).
33. Castelano, L. K., Fanchini, F. F. & Berrada, K. Open quantum system description of singlet-triplet qubits in quantum dots. Phys. Rev. B 94, 235433 (2016).
34. Wilk, T., Webster, S. C., Kuhn, A. & Rempe, G. Single-atom single-photon quantum interface. Science 317, 488 (2007).
35. Olmschenk, S. et al. Quantum teleportation between distant matter qubits. Science 323, 486–489 (2009).
36. Yuan, Z.-S. et al. Experimental demonstration of a BDCZ quantum repeater node. Nature 454, 1098–1101 (2008).
37. Ritter, S. et al. An elementary quantum network of single atoms in optical cavities. Nature 484, 195–200 (2012).
38. Houck, A. et al. Generating single microwave photons in a circuit. Nature 449, 328–331 (2007).
39. Mooney, G. J., Hill, C. D. & Hollenberg, L. C. L. Entanglement in a 20-qubit superconducting quantum computer. Sci. Rep. 9, 13465 (2019).
40. Tsujimoto, M. et al. Mutually synchronized macroscopic Josephson oscillations demonstrated by polarization analysis of superconducting terahertz emitters. Phys. Rev. Appl. 13, 051001 (2020).
41. Hofheinz, M. et al. Synthesizing arbitrary quantum states in a superconducting resonator. Nature 459, 546–549 (2009).
42. Eichler, C. et al. Observation of entanglement between itinerant microwave photons and a superconducting qubit. Phys. Rev. Lett. 109, 240501 (2012).
43. Drummond, P. D. & Ficek, Z. Quantum Squeezing (Springer, Berlin, 2004).
44. Wodkiewicz, K. Reduced quantum fluctuations in the Josephson junction. Phys. Rev. B 32, 4750–4752 (1981).
45. Agarwal, G. S. & Puri, R. R. Cooperative behavior of atoms irradiated by broadband squeezed light. Phys. Rev. A 41, 3782–3791 (1990).
46. Ashraf, M. M. & Razmi, M. S. K. Atomic-dipole squeezing and emission spectra of the nondegenerate two-photon Jaynes-Cummings model. Phys. Rev. A 45, 8121–8128 (1992).
47. Kitagawa, M. & Ueda, M. Squeezed spin states. Phys. Rev. A 47, 5138–5143 (1993).
48. Civitarese, O. & Reboiro, M. Atomic squeezing in three level atoms. Phys. Lett. A 357, 224–228 (2006).
49. Civitarese, O., Reboiro, M., Rebón, L. & Tielas, D. Atomic squeezing in three-level atoms with effective dipole-dipole atomic interaction. Phys. Lett. A 374, 2117–2121 (2010).
50. Poulsen, U. V. & Mølmer, K. Squeezed light from spin-squeezed atoms. Phys. Rev. Lett. 87, 123601 (2001).
51. Wang, X. Spin squeezing in nonlinear spin-coherent states. J. Opt. B: Quantum Semiclass. Opt. 3, 93–96 (2001).
52. Rojo, A. G. Optimally squeezed spin states. Phys. Rev. A 68, 013807 (2003).
53. Wang, X. & Sanders, B. C. Relations between bosonic quadrature squeezing and atomic spin squeezing. Phys. Rev. A 68, 033821 (2003).
54. Dicke, R. H. Coherence in spontaneous radiation processes. Phys. Rev. 93, 99–110 (1954).
55. El-Orany, F. A. A., Wahiddin, M. R. B. & Obada, A.-S. F. Single-atom entropy squeezing for two two-level atoms interacting with a single-mode radiation field. Opt. Commun. 281, 2854–2863 (2008).
56. Kuzmich, A., Mølmer, K. & Polzik, E. S. Spin squeezing in an ensemble of atoms illuminated with squeezed light. Phys. Rev. Lett. 79, 4782–4785 (1997).
57. Sanchez-Ruiz, J. Improved bounds in the entropic uncertainty and certainty relations for complementary observables. Phys. Lett. A 201, 125–131 (1995).
58. Iqbal, S., Rivière, P. & Saif, F. Space-time dynamics of Gazeau-Klauder coherent states in power-law potentials. Int. J. Theor. Phys. 49, 2540–2557 (2010).
59. Hall, R. L. Spectral geometry of power-law potentials in quantum mechanics. Phys. Rev. A 39, 5500 (1989).
60. Berrada, K. Improving quantum phase estimation via power-law potential systems. Laser Phys. 24, 065201 (2014).
61. Jena, S. N., Panda, P. & Tripathy, T. C. Ground states and excitation spectra of baryons in a non-Coulombic power-law potential model. Phys. Rev. D 63, 014011 (2000).
62. Jena, S. N. & Rath, D. P. Magnetic moments of light, charmed, and b-flavored baryons in a relativistic logarithmic potential. Phys. Rev. D 34, 196 (1986).
63. Berrada, K., El Baz, M. & Hassouni, Y. Generalized Heisenberg algebra coherent states for power-law potentials. Phys. Lett. A 375, 298–302 (2011).
64. Agarwal, G. S. Nonclassical statistics of fields in pair coherent states. J. Opt. Soc. Am. B 5, 1940–1947 (1988).
65. Zyczkowski, K., Horodecki, P., Sanpera, A. & Lewenstein, M. Volume of the set of separable states. Phys. Rev. A 58, 883–892 (1998).
66. Vidal, G. & Werner, R. F. Computable measure of entanglement. Phys. Rev. A 65, 032314 (2002).
67. Riccardi, A., Macchiavello, C. & Maccone, L. Tight entropic uncertainty relations for systems with dimension three to five. Phys. Rev. A 95, 032109 (2017).
68. Abdalla, M. S., Obada, A.-S. F. & Abdel-Khalek, S. Entropy squeezing of time dependent single-mode Jaynes-Cummings model in presence of non-linear effect. Chaos Solitons Fract. 36, 405–417 (2008).
69. Khalil, E. M., Abdalla, M. S. & Obada, A.-S. F. Entropy and variance squeezing of two coupled modes interacting with a two-level atom: Frequency converter type. Ann. Phys. 321, 421–434 (2006).
70. Fang, M.-F., Zhou, P. & Swain, S. Entropy squeezing for a two-level atom. J. Mod. Opt. 47, 1043–1053 (2000).
## Acknowledgements
Taif University Researchers Supporting Project number (TURSP-2020/17), Taif University, Taif, Saudi Arabia.
## Author information
Authors
### Contributions
E.K., K.B. and S.A. wrote the manuscript. A.A. and J.P. reviewed the manuscript.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
### Publisher's note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Khalil, E.M., Berrada, K., Abdel-Khalek, S. et al. Entanglement and entropy squeezing in the system of two qubits interacting with a two-mode field in the context of power low potentials. Sci Rep 10, 19600 (2020). https://doi.org/10.1038/s41598-020-76059-5
|
## ABSTRACT
We present the application of the fast independent component analysis (fastica) technique for blind component separation to polarized astrophysical emission. We study how the cosmic microwave background (CMB) polarized signal, consisting of E and B modes, can be extracted from maps affected by substantial contamination from diffuse Galactic foreground emission and instrumental noise. We implement Monte Carlo chains varying the CMB and noise realizations in order to assess the average capabilities of the algorithm and their variance. We perform the analysis of all‐sky maps simulated according to the Planck satellite capabilities, modelling the sky signal as a superposition of the CMB and of the existing simulated polarization templates of Galactic synchrotron. Our results indicate that the angular power spectrum of the CMB E mode can be recovered on all scales up to ℓ≃ 1000, corresponding to the fourth acoustic oscillation, while the B‐mode power spectrum can be detected, up to its turnover at ℓ≃ 100, if the ratio of tensor to scalar contributions to the temperature quadrupole exceeds 30 per cent. The power spectrum of the cross‐correlation between total intensity and polarization, TE, can be recovered up to ℓ≃ 1200, corresponding to the seventh TE acoustic oscillation.
## 1 INTRODUCTION
We are right now in the epoch in which cosmological observations are revealing the finest structures in the cosmic microwave background (CMB) anisotropies. After the first discovery of CMB total intensity fluctuations as measured by the Cosmic Background Explorer (COBE) satellite (see Smoot 1999, and references therein), several balloon‐borne and ground‐based operating experiments were successful in detecting CMB anisotropies on degree and subdegree angular scales (Lee et al. 2001; Padin et al. 2001; De Bernardis et al. 2002; Halverson et al. 2002; see also Hu & Dodelson 2002, and references therein). The Wilkinson Microwave Anisotropy Probe (WMAP; see Bennett et al. 2003a) satellite released the first‐year, all‐sky CMB observations mapping anisotropies down to an angular scale of about 16 arcmin in total intensity and its correlation with polarization, on five frequency channels extending from 22 to 90 GHz. In the future, balloon‐borne and ground‐based observations will attempt to measure the CMB polarization on sky patches (see Kovac et al. 2002, for a first detection); the Planck satellite, scheduled for launch in 2007 (Mandolesi et al. 1998; Puget et al. 1998), will provide total intensity and polarization full‐sky maps of CMB anisotropy with resolution ≳5 arcmin and a sensitivity of a few μK, on nine frequencies in the range 30–857 GHz. A future satellite mission for polarization is currently under study.
Correspondingly, the data analysis science faces entirely new and challenging issues in order to handle the amount of incoming data, with the aim of extracting all the relevant physical information about the cosmological signal and the other astrophysical emissions, coming from extragalactic sources as well as from our own Galaxy. The sum of these foreground emissions, in total intensity, is minimum at about 70 GHz, according to the first‐year WMAP data (Bennett et al. 2003b). In the following we refer to low and high frequencies meaning the ranges below and above that of minimum foreground emission.
At low frequencies, the main Galactic foregrounds are synchrotron (see Haslam et al. 1982 for an all‐sky template at 408 MHz) and free–free (traced by Hα emission; see Haffner, Reynolds & Tufte 1999; Finkbeiner 2003, and references therein) emissions, as confirmed by the WMAP observations (Bennett et al. 2003b). At high frequencies, Galactic emission is expected to be dominated by thermal dust (Schlegel, Finkbeiner & Davies 1998; Finkbeiner, Davis & Schlegel 1999). Moreover, several populations of extragalactic sources, with different spectral behaviour, show up at all the frequencies, including radio sources and dusty galaxies (see Toffolatti et al. 1998), and the Sunyaev–Zel'dovich effect from clusters of galaxies (Moscardini et al. 2002). Because the various emission mechanisms have generally different frequency dependences, it is conceivable to combine multifrequency maps in order to separate them.
Much work has been recently dedicated to provide algorithms devoted to the component separation task, exploiting different ideas and tools from signal processing science. Such algorithms generally deal separately with point‐like objects such as extragalactic sources (Tenorio et al. 1999; Vielva et al. 2001), and diffuse emissions from our own Galaxy. In this work we focus on techniques developed to handle diffuse emissions; such techniques can be broadly classified into two main categories.
The ‘non‐blind’ approach consists of assuming priors on the signals to recover, on their spatial pattern and frequency scalings, in order to regularize the inverse filtering going from the noisy, multifrequency data to the separated components. Wiener filtering (WF; Tegmark & Efstathiou 1996; Bouchet, Prunet & Sethi 1999) and the maximum entropy method (MEM; Hobson et al. 1998) have been tested with good results, even if applied to the whole sky (Stolyarov et al. 2002). Part of the priors can be obtained from complementary observations, and the remaining ones have to be guessed. The WMAP group (Bennett et al. 2003b) exploited the available templates mentioned above as priors for a successful MEM‐based component separation.
The ‘blind’ approach consists instead of performing separation by only assuming the statistical independence of the signals to recover, without priors either for their frequency scalings or for their spatial statistics. This is possible by means of a novel technique in signal processing science, the independent component analysis (ica; see Amari & Cichocki 1998, and references therein). The first astrophysical application of this technique (Baccigalupi et al. 2000) exploited an adaptive (i.e. capable of self‐adjusting on time streams with varying signals) ica algorithm, working successfully on limited sky patches for ideal noiseless data. Maino et al. (2002) implemented a fast, non‐adaptive version of this algorithm (fastica; see Hyvärinen 1999), which was successful in separating CMB and foregrounds for several combinations of simulated all‐sky maps in conditions corresponding to the nominal performance of Planck, for total intensity measurements. Recently, Maino et al. (2003) were able to reproduce the main scientific results out of the COBE data exploiting the fastica technique. The blind techniques for component separation represent the most unbiased approach, because they only assume the statistical independence between cosmological and foreground emissions. Thus, they not only provide an independent check on the results of non‐blind separation procedures, but are likely to be the only viable way to go when the foreground contamination is poorly known.
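As a toy illustration of this blind philosophy (using scikit-learn's FastICA on synthetic one-dimensional signals, not the authors' map-level pipeline; the signals and the mixing matrix below are invented for the demonstration):

```python
# Blind separation demo: mix two independent, non-Gaussian signals through
# an unknown matrix and recover them with FastICA, up to permutation/scale.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))       # first independent component (non-Gaussian)
s2 = rng.laplace(size=t.size)     # second independent component (non-Gaussian)
S = np.c_[s1, s2]

A = np.array([[1.0, 0.5],         # unknown mixing ("frequency scalings")
              [0.7, 1.2]])
X = S @ A.T                       # two observed "frequency channels"

S_est = FastICA(n_components=2, random_state=0).fit_transform(X)
```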
In this paper we apply the fastica technique to astrophysical polarized emission. CMB polarization is expected to arise from Thomson scattering of photons and electrons at decoupling. Because of the tensor nature of polarization, physical information is coded in an entirely different way with respect to total intensity. Cosmological perturbations may be divided into scalars, such as density perturbations, vectors, for example vorticity, and tensors, i.e. gravitational waves (see Kodama & Sasaki 1984). Total intensity CMB anisotropies simply sum up contributions from all kinds of cosmological perturbations. For polarization, two non‐local combinations of the Stokes parameters Q and U can be built, commonly known as E and B modes (see Zaldarriaga & Seljak 1997 and Kamionkowski, Kosowsky & Stebbins 1997, featuring a different notation, namely gradient G for E and curl C for B). It can be shown that the E component sums up the contributions from all three kinds of cosmological perturbations mentioned above, while the B modes are excited via vectors and tensors only. Also, scalar modes of total intensity, which we label with T in the following, are expected to be strongly correlated with E modes. Indeed, the latter are merely excited by the quadrupole of density perturbations, coded in the total intensity of CMB photons, as seen from the rest frame of charged particles at last scattering (see Hu et al. 1999, and references therein). Therefore, for the CMB, the correlation TE between T and E modes is expected to be the strongest signal from polarization. The latter expectation has been confirmed by WMAP (Kogut et al. 2003) with a spectacular detection on degree and superdegree angular scales; moreover, a first detection of CMB E modes has been obtained (Kovac et al. 2002).
This phenomenology is clearly much richer with respect to total intensity, and has motivated great interest in CMB polarization, not only as a new data set in addition to total intensity, but as the best potential carrier of cosmological information via electromagnetic waves. Unfortunately, as we describe in the next section, foregrounds are even less known in polarization than in total intensity; see De Zotti (2002) and references therein for reviews. For this reason, it is likely that a blind technique will be required to clean CMB polarization from contaminating foregrounds. The first goal of this work is to present a first implementation of the ica techniques on polarized astrophysical maps. Secondly, we want to estimate the precision with which CMB polarized emission will be measured in the near future. We exploit the fastica technique at low frequencies, where some foreground models have been carried out (Baccigalupi et al. 2001; Giardino et al. 2002).
The paper is organized as follows. In Section 2 we describe how the simulations of the synchrotron emission were obtained. In Section 3 we describe our approach to component separation for polarized radiation. In Section 4 we study the fastica performance on our simulated sky maps. In Section 5 we apply our technique to the Planck simulated data, studying its capabilities for polarization measurements in presence of foreground emission. Finally, Section 6 contains the concluding remarks.
## 2 SIMULATED POLARIZATION MAPS AT MICROWAVE FREQUENCIES
We adopt a background cosmology close to the model which best fits the WMAP data (Spergel et al. 2003). We assume a flat Friedman–Robertson–Walker (FRW) metric with a Hubble constant H0= 100 h km s−1 Mpc−1 with h= 0.7. The cosmological constant represents 70 per cent of the critical density today, ΩΛ= 0.7, while the energy density in baryons is given by Ωbh2= 0.022; the remaining fraction is in cold dark matter (CDM); we allow for a reionization with optical depth τ= 0.05 (Becker et al. 2001). Note that this is a factor of 2–3 lower than found in the first‐year WMAP data (see Bennett et al. 2003a, and references therein), because we built our reference CMB template before the release of the WMAP data. Cosmological perturbations are Gaussian, with a spectral index for the scalar component leading to a not perfectly scale‐invariant spectrum, nS= 0.96, and including tensor perturbations giving rise to a B mode in the CMB power spectrum. We assume a ratio R= 30 per cent between tensor and scalar amplitudes, and the tensor spectral index is taken to be nT=−R/6.8 according to the simplest inflationary models of the very early Universe (see Liddle & Lyth 2000, and references therein). The cosmological parameters leading to our CMB template can be summarized as follows:
$$\Omega_{\rm tot} = 1,\quad \Omega_\Lambda = 0.7,\quad \Omega_{\rm b}h^2 = 0.022,\quad h = 0.7,\quad \tau = 0.05,\quad n_S = 0.96,\quad R = 0.3,\quad n_T = -R/6.8. \tag{1}$$
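For readers wishing to reproduce a comparable CMB template, the following is a minimal sketch of the cosmology of equation (1) using the Python interface of the camb code; this is an assumption on our part, since the paper itself used cmbfast, and normalization conventions differ between codes.

```python
# Sketch: CMB spectra for the cosmology of equation (1), using camb
# in place of cmbfast (which the paper actually used).
import camb

R = 0.3                                        # tensor-to-scalar ratio
pars = camb.CAMBparams()
pars.set_cosmology(H0=70.0,                    # h = 0.7
                   ombh2=0.022,                # baryon density
                   omch2=0.3 * 0.7**2 - 0.022, # CDM fills up Omega_m = 0.3
                   tau=0.05)                   # reionization optical depth
pars.InitPower.set_params(ns=0.96, r=R, nt=-R / 6.8)
pars.WantTensors = True                        # include the tensor B mode

results = camb.get_results(pars)
cls = results.get_total_cls(lmax=2000)         # columns: TT, EE, BB, TE
```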
We simulate whole‐sky maps of Q and U out of the theoretical $C_\ell^E$ and $C_\ell^B$ coefficients as generated by cmbfast (Seljak & Zaldarriaga 1996), in the healpix environment (Górski et al. 1999). The maps are in antenna temperature, which is obtained at any frequency ν by multiplying the thermodynamical fluctuations by a factor of $x^2\,{\rm e}^{x}/({\rm e}^{x}-1)^2$, where $x = h\nu/kT_{\rm CMB}$, h and k are the Planck and Boltzmann constants, respectively, and TCMB = 2.726 K is the CMB thermodynamical temperature.
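As an illustration of this conversion, a short helper (a sketch; the constants and the conversion factor are the ones quoted above) could read:

```python
import numpy as np

H_PLANCK = 6.626e-34   # Planck constant [J s]
K_BOLTZ = 1.381e-23    # Boltzmann constant [J/K]
T_CMB = 2.726          # CMB thermodynamical temperature [K]

def thermo_to_antenna(dT_thermo, freq_ghz):
    """Convert thermodynamical fluctuations to antenna temperature at
    freq_ghz, using the factor x^2 e^x / (e^x - 1)^2 with x = h nu / k T."""
    x = H_PLANCK * freq_ghz * 1e9 / (K_BOLTZ * T_CMB)
    return dT_thermo * x**2 * np.exp(x) / np.expm1(x) ** 2
```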
The polarized emission from diffuse Galactic foregrounds in the frequency range which will be covered by the Planck satellite is very poorly known. On the high‐frequency side, the Galactic contribution to the polarized signal should be dominated by dust emission (Lazarian & Prunet 2002). The first detection of the diffuse polarized dust emission has been carried out recently (Benoit et al. 2003), and indicates a 3–5 per cent polarization on large angular scales and at low Galactic latitudes. On the low‐frequency side, the dominant diffuse polarized emission is Galactic synchrotron. Observations in the radio band cover about half of the sky at degree resolution (Brouw & Spoelstra 1976), and limited regions at low and medium Galactic latitudes with 10‐arcmin resolution (Duncan et al. 1997, 1999; Uyaniker et al. 1999). Analyses of the angular power spectrum of polarized synchrotron emission have been carried out by several authors (Tucci et al. 2000, 2002; Baccigalpi et al. 2001; Giardino et al. 2002; Bernardi et al. 2003).
Polarized foreground contamination is particularly challenging for CMB B‐mode measurements. In fact, the CMB B mode arises from tensor perturbations (see Liddle & Lyth 2000, for reviews), which are subdominant with respect to the scalar component (Spergel et al. 2003). In addition, tensor perturbations vanish on subdegree angular scales, corresponding to subhorizon scales at decoupling. On such scales, some B‐mode power could be introduced by weak lensing (see Hu 2002, and references therein). In any case, the cosmological B‐mode power is always expected to be much lower than the E‐mode power, while foregrounds are expected to have approximately the same power in the two modes (Zaldarriaga 2001).
Baccigalpi et al. (2001) estimated the synchrotron power spectrum from two main data sets. As we already mentioned, on superdegree angular scales, corresponding to multipoles ℓ < 200, the foreground contamination is determined from the Brouw & Spoelstra (1976) data, covering roughly half of the sky with degree resolution. The $C_\ell$ behaviour on smaller angular scales has been obtained by analysing more recent data reaching a resolution of about 10 arcmin (Duncan et al. 1997, 1999; Uyaniker et al. 1999). These data, reaching Galactic latitudes up to b ≃ 20°, yield a flatter slope, $C_\ell \simeq \ell^{-(1.5\text{–}1.8)}$ (see also Tucci et al. 2000; Giardino et al. 2002). Interestingly, Fosalba et al. (2002) provided evidence of a similar slope for the angular power spectrum of the polarization degree induced by the Galactic magnetic field, as measured from starlight data. The synchrotron spectrum at higher frequencies was then inferred by scaling the one obtained in the radio band with a typical synchrotron spectral index of −2.9 (in antenna temperature).
Giardino et al. (2002) built a full‐sky map of polarized synchrotron emission based on the total intensity map of Haslam et al. (1982), assuming a synchrotron polarized component at the theoretical maximum level of 75 per cent, and a Gaussian distribution of polarization angles with a power spectrum estimated from the high‐resolution radio‐band data (Duncan et al. 1997, 1999). The polarization map obtained, reaching a resolution of about 10 arcmin, was then scaled to higher frequencies by considering either a constant or a space‐varying spectral index, as inferred from multifrequency radio observations.
In this paper we concentrate on low frequencies, modelling the diffuse polarized emission as a superposition of CMB and synchrotron. We used the synchrotron spatial template by Giardino et al. (2002), hereafter the SG model, as well as another synchrotron template, hereafter indicated as SB, obtained by scaling the spherical harmonic coefficients of the SG model to match the spectrum found by Baccigalpi et al. (2001). The Q Stokes parameter for the two spatial templates, in antenna temperature at 100 GHz, is shown in Fig. 1, plotted on a non‐linear scale to highlight the behaviour at high Galactic latitudes. Note how the contribution on smaller angular scales is larger in the SG model. This is evident in Fig. 2, where we compare the power spectra of the SG and SB models with the CMB model, for the cosmological parameters of equation (1). Both models imply a severe contamination of the CMB E mode on large angular scales, say ℓ ≲ 200, which remains serious even if the Galactic plane is cut out. Cutting the region |b| ≤ 20° decreases the SG and SB signals by about a factor of 10 and 3, respectively; the difference comes from the fact that the SB power is more concentrated on large angular scales, which extend well beyond the Galactic plane (see Fig. 2).
Figure 1. Q Stokes parameter for the emission of Galactic synchrotron according to Giardino et al. (2002, left‐hand panel) and Baccigalpi et al. (2001, right‐hand panel). The maps are in antenna temperature, at 100 GHz.
Figure 2. E (solid) and B (dotted) angular power spectra of CMB and synchrotron polarization emission, in antenna temperature, at 100 GHz, according to the SG (left‐hand panel) and SB (right‐hand panel) polarized synchrotron templates.
In the SG case, the contamination is severe also for the first CMB acoustic oscillation in polarization, as a result of the enhanced power on small angular scales with respect to the SB model (see also Fig. 1). On smaller scales, both models predict the dominance of CMB E modes. On the other hand, CMB B modes are dominated by foreground emission, even if the region around the Galactic plane is cut out, as noted above.
To obtain the synchrotron emission at different frequencies, we consider either a constant antenna temperature spectral index of −2.9, slightly shallower than indicated by the WMAP first‐year measurements (Bennett et al. 2003b), or a spatially varying spectral index. In Fig. 3 we show the map of synchrotron spectral indices, in antenna temperature, which we adopt following Giardino et al. (2002). Note that this aspect is especially relevant for component separation, because all methods developed so far require a 'rigid' frequency scaling of all the components, which means that all components should have separable dependences on sky direction and frequency. In practice this requirement is hardly satisfied by real signals, and by synchrotron in particular. However, as we see in the next section, fastica results turn out to be quite stable as this assumption is relaxed, at least for the level of variation in Fig. 3. This makes the technique very promising for application to real data. A more quantitative study of how the fastica performance degrades when realistic signals as well as instrumental systematics are taken into account will be carried out in a future work.
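The frequency scaling just described amounts to multiplying the template by (ν/ν0)^β, with β either a constant or a per-pixel map. A minimal sketch, with toy stand-in templates whose names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
npix = 12 * 128**2                                  # HEALPix nside = 128
q_30 = rng.standard_normal(npix)                    # stand-in 30-GHz Q template
beta_map = -2.9 + 0.1 * rng.standard_normal(npix)   # toy spectral-index map

def scale_synchrotron(template, nu_ref, nu, beta):
    """Scale an antenna-temperature template from nu_ref to nu with spectral
    index beta (a scalar gives rigid scaling, an array per-pixel scaling)."""
    return template * (nu / nu_ref) ** beta

q_100_rigid = scale_synchrotron(q_30, 30.0, 100.0, -2.9)     # rigid scaling
q_100_vary = scale_synchrotron(q_30, 30.0, 100.0, beta_map)  # space-varying
```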
Figure 3. Map of synchrotron spectral indices (Giardino et al. 2002).
## 3 COMPONENT SEPARATION FOR POLARIZED RADIATION
Component separation has been implemented so far for the total intensity signal (see Maino et al. 2002; Stolyarov et al. 2002, and references therein). In this section we describe how we extend the ica technique to treat polarization measurements.
### 3.1 E and B modes
As we stressed in the previous section, the relevant information for the CMB polarized signal can be conveniently read in a non‐local combination of the Q and U Stokes parameters, represented by the E and B modes (see Zaldarriaga & Seljak 1997; Kamionkowski et al. 1997). There are conceptually two ways of performing component separation on polarization observations. Q and U can be treated separately, i.e. performing separation for each of them independently. Alternatively, under the hypothesis that Q and U have the same statistical properties, separation can conveniently be performed on a data set combining the Q and U maps. The latter is surely the appropriate strategy if one is sure that the choice of polarization axes of the instrumental set‐up does not bias the signal distribution. In general, however, the instrumental polarization axes may happen to be related to preferred directions of the underlying signal, making Q and U statistically different, so that merging them into a single data set would not be appropriate. While we do not expect such an occurrence for the Gaussian CMB statistics, it may happen for foregrounds, especially if separation is performed on sky patches. For example, the Galactic polarized signal does possess large‐scale structures with preferred directions, such as the Galactic plume discovered by Duncan et al. (1998) in the radio band, extending up to 15° across the sky and reaching high Galactic latitudes. Therefore, in general, the most conservative approach to component separation in polarization is to perform it for Q and U separately. Note that no such coherence of the polarization directions is present in our Galactic model. This allows us to verify, in the next section, that the results obtained by merging Q and U into a single data set are equivalent to, or more accurate than, those obtained by treating them separately. In the following we report the relevant fastica formalism for the latter case; when Q and U form a single template, the same formulae developed in Maino et al. (2002) apply.
Let these multifrequency maps be represented by xQ and xU, respectively, where x carries two indices, labelling frequency on rows and pixels on columns. If the unknown components to be recovered from the input data scale rigidly in frequency, which means that each of them can be represented by a product of two functions depending on frequency and space separately, we can define a spatial pattern for each of them, which we indicate with sQ and sU. Then we can express the inputs xQ,U as
$$\mathbf{x}_{Q,U} = B\left(A_{Q,U}\,\mathbf{s}_{Q,U}\right) + \mathbf{n}_{Q,U}, \tag{2}$$
where the matrix AQ,U scales the spatial patterns of the unknown components to the input frequencies, thus having a number of rows equal to the number of input frequencies. The instrumental noise n has the same dimensions as x. The matrix B represents the beam smoothing operation; we recall that, at the present level of architecture, an ica‐based component separation requires us to deal with maps having equal beams at all frequencies. Separation is achieved in real space, by estimating two separation matrices, WQ and WU, having a number of rows corresponding to the number of independent components and a number of columns equal to the number of frequency channels, which produce a copy of the independent components present in the input data:
$$\mathbf{y}_{Q,U} = W_{Q,U}\,\mathbf{x}_{Q,U}. \tag{3}$$
All the details of the way the separation matrix for fastica is estimated are given in Maino et al. (2002). yQ and yU can be combined together to obtain the E and B modes of the independent components present in the input data (see Kamionkowski et al. 1997; Zaldarriaga & Seljak 1997). Note that a failure in separation for even one of Q and U in general affects both E and B, because each of them receives contributions from both Q and U. It is possible, even in the noisy case, to check the quality of the resulting separation by looking at the product W A, which should be the identity in the best case. This means that the frequency scalings of the recovered components can be estimated. Following Maino et al. (2002), by denoting as $x^{Q,U}_{\nu j}$ the jth component in the data x at frequency ν, it can easily be seen that the frequency scalings are simply the ratios of the column elements of the matrices W−1:
$$\frac{x^{Q,U}_{\nu j}}{x^{Q,U}_{\nu' j}} = \frac{\left(W_{Q,U}^{-1}\right)_{\nu j}}{\left(W_{Q,U}^{-1}\right)_{\nu' j}}. \tag{4}$$
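As a toy illustration of equations (2)–(4), the following sketch mixes two non-Gaussian toy components at two frequencies and recovers their frequency scalings from the estimated mixing matrix. It uses scikit-learn's FastICA rather than the authors' own implementation, and all numbers are purely illustrative:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
npix = 100_000
cmb = rng.standard_normal(npix)          # toy Gaussian "CMB" component
sync = rng.laplace(size=npix)            # toy non-Gaussian "synchrotron"

# Rigid scalings: CMB flat across channels, synchrotron ~ nu^-2.9.
A = np.array([[1.0, (70.0 / 100.0) ** -2.9],   # row 0: 70-GHz channel
              [1.0, 1.0]])                     # row 1: 100-GHz channel
x = A @ np.vstack([cmb, sync])                 # equation (2), no beam/noise

ica = FastICA(n_components=2, random_state=0)
y = ica.fit_transform(x.T).T                   # equation (3): recovered components

# Equation (4): frequency scalings as ratios of the mixing-matrix columns.
scalings = ica.mixing_[0] / ica.mixing_[1]
print(scalings)  # one entry ~ (70/100)^-2.9, the other ~ 1 (up to ordering)
```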
However, it is important to note that even if separation is virtually perfect, meaning that W A is exactly the identity, equations (2) and (3) imply that noise is transmitted to the fastica outputs, although it can be estimated and, to some extent, taken into account during the separation process.
### 3.2 Instrumental noise
Our method of dealing with instrumental noise in a fastica‐based separation approach is described in Maino et al. (2002) for total intensity maps. Before starting the separation process, the noise correlation matrix, which for Gaussian, uniformly distributed noise is null except for the noise variances at each frequency on the diagonal, is subtracted from the total signal correlation matrix; the 'denoised' signal correlation matrix then enters as an input to the algorithm performing separation. The same is done here, for Q and U separately. Moreover, in Maino et al. (2002) we described how to estimate the noise on the fastica outputs. In a similar way, let us indicate the input noise patterns as nQ,U. Then, from equations (2) and (3), it can easily be seen that the noise on the fastica outputs is given by
$$\mathbf{n}^{\rm out}_{Q,U} = W_{Q,U}\,\mathbf{n}_{Q,U}. \tag{5}$$
This means that, if the noises on different channels are uncorrelated, and indicating as $\sigma_{\nu_j}$ the input noise rms at frequency νj, the noise rms on the ith fastica output is
$$\sigma^{\rm out}_{i} = \left[\sum_j \left(W_{Q,U}\right)_{ij}^{2}\,\sigma_{\nu_j}^{2}\right]^{1/2}. \tag{6}$$
Note that the above equation describes the amount of noise which is transmitted to the outputs after the separation matrix has been found, not how much the separation matrix itself is affected by the noise. As treated in detail in Maino et al. (2002), if the noise correlation matrix is known, it is possible to subtract it from the signal correlation matrix, greatly reducing the influence of the noise on the estimation of WQ,U; however, sample variance and, in general, any systematics will make the separation matrix noisy in a way which is not accounted for by equation (6).
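Equation (6) is straightforward to evaluate once WQ,U and the channel noise levels are known; a minimal sketch (illustrative numbers only):

```python
import numpy as np

def output_noise_rms(W, sigma_channels):
    """Noise rms of each fastica output, equation (6), for uncorrelated,
    uniform Gaussian noise with rms sigma_channels per input channel."""
    W = np.asarray(W, dtype=float)
    sigma = np.asarray(sigma_channels, dtype=float)
    return np.sqrt((W**2 * sigma**2).sum(axis=1))

# e.g. a 2x2 separation matrix and channel noise rms of 10 and 15 (arbitrary units)
print(output_noise_rms([[0.9, 0.1], [-0.4, 1.2]], [10.0, 15.0]))
```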
For whole‐sky signals, the contamination of the angular power spectrum coming from a uniformly distributed Gaussian noise characterized by rms σ is $C_\ell = 4\pi\sigma^2/N$, where N is the number of pixels on the sphere. The noise contamination of $C_\ell$ for Q and U can therefore be estimated easily on the fastica outputs once the output noise levels $\sigma^{\rm out}$ in equation (6) are known. Gaussianity and uniformity also make it very easy to calculate the noise level on the E and B modes, because Q and U contribute at the same level. Thus, we can estimate the noise contamination in the E and B channels as
$$C^{E,B}_{\rm noise} = 4\,\frac{4\pi\left(\sigma^{\rm out}\right)^{2}}{N}, \tag{7}$$
where the factor 4 is due to the normalization adopted in the healpix scheme (version 1.10 and earlier), which follows the conventions of Kamionkowski et al. (1997); the other common convention (Zaldarriaga & Seljak 1997) would yield a factor of 2. The quantities defined in equation (7) represent the average noise power, which can simply be subtracted from the output power spectra because noise and signal are uncorrelated; the noise contamination is then represented by the power of the noise fluctuations around the mean (equation 7):
$$\Delta C^{E,B}_{\rm noise}(\ell) = \sqrt{\frac{2}{2\ell+1}}\;C^{E,B}_{\rm noise}. \tag{8}$$
Note that the noise estimation is greatly simplified by our assumptions: a non‐Gaussian and/or non‐uniform noise, as well as a non‐zero Q/U noise correlation, etc., could lead, for instance, to non‐flat noise spectra $C_\ell$, as well as unequal noise in the E and B modes. However, if a good model of these effects is available, a Monte Carlo pipeline is still conceivable, calculating many realizations of the noise to find the average contamination of the E and B modes to be subtracted from the outputs instead of the simple forms of equations (7) and (8).
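A compact sketch of the noise subtraction of equations (7) and (8); the sqrt(2/(2ℓ+1)) scaling in equation (8) is the standard Gaussian sample-variance expression, which we take to be the intended form:

```python
import numpy as np

def noise_power_eb(sigma_out, npix, norm_factor=4.0):
    """Average E/B noise power, equation (7): norm_factor is 4 for the
    healpix (<= 1.10) normalization, 2 for the Zaldarriaga & Seljak one."""
    return norm_factor * 4.0 * np.pi * sigma_out**2 / npix

def noise_power_scatter(c_noise, ell):
    """Rms fluctuation of the noise power around its mean, equation (8)."""
    return np.sqrt(2.0 / (2.0 * np.asarray(ell) + 1.0)) * c_noise

# e.g. debias a measured output spectrum cl_out for an output with rms sigma_out:
# cl_debiased = cl_out - noise_power_eb(sigma_out, npix)
```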
## 4 PERFORMANCE STUDY
In this section we apply our approach to simulated skies to assess: (i) the ultimate capability of fastica to clean the CMB maps from synchrotron in ideal noiseless conditions; (ii) how the results are degraded by noise.
### 4.1 Noiseless separation
We work with an angular resolution of 3.5 arcmin, corresponding to nside = 1024 in the healpix environment (Górski et al. 1999); this is enough to test the performance of the CMB polarization reconstruction, in particular for the undamped subdegree acoustic oscillations, extending up to ℓ ≃ 2000 in Fig. 2. In all cases the computing time to achieve separation was of the order of a few minutes on a Pentium IV 1.8‐GHz processor with 512 MB of RAM. We perform separation by considering the CMB model defined by equation (1) and both the SG (Giardino et al. 2002) and SB (Baccigalpi et al. 2001) models for synchrotron emission. We have considered Q and U maps both separately and combined.
As we have already mentioned, the fastica performance turns out to be stable against relaxation of rigid frequency scalings, at least for the spectral index variations shown in Fig. 3. In order to illustrate this point quantitatively, we compare the quality of the CMB reconstruction assuming either constant or varying synchrotron spectral indices, β. In Fig. 4 we plot the original (dotted) and reconstructed (solid) $C_\ell^{E,B}$ for the CMB, in the case of the SB foreground model with constant (left‐hand panels) or spatially varying (right‐hand panels) β. The upper (lower) panels refer to the frequency combination of the 70‐ and 100‐GHz (30‐ and 44‐GHz) channels. Fig. 5 shows the results of the same analysis, but using the SG model.
Figure 4. Original (dotted) and reconstructed (solid) power spectra for E and B CMB modes in the case of the SB model, in the absence of noise: (left‐hand panels) constant spectral index; (right‐hand panels) space‐varying spectral index. Upper panels are from inputs at the 70‐ and 100‐GHz channels, while bottom panels correspond to inputs at 30 and 44 GHz. The outputs are conventionally plotted at the highest input frequency.
Figure 5. Original (dotted) and reconstructed (solid) power spectra for E and B CMB modes in the case of the SG model, in the absence of noise: (left‐hand panels) constant spectral index; (right‐hand panels) space‐varying spectral index. Upper panels are from inputs at the 70‐ and 100‐GHz channels, while bottom panels correspond to inputs at 30 and 44 GHz. The outputs are conventionally plotted at the highest input frequency.
As can be seen, the CMB signal is well reconstructed on all relevant scales, down to the pixel size. The same is true for the synchrotron emission, not shown. Per-cent-level precision in the frequency scaling recovery for CMB and synchrotron is achieved (see Table 1). As we stressed in the previous section, the precision of the frequency scaling recovery corresponds to the precision of the estimation of the elements of the inverses of the matrices WQ and WU. Remarkably, fastica is able to recover the CMB B modes on all the relevant angular scales, even though they are largely subdominant with respect to the foreground emission, as can be seen in Fig. 2. This is due to two main reasons: the difference in the underlying statistics describing the distributions of CMB and foreground emission, and the high angular resolution of the templates (3.5 arcmin). Such resolution allows the algorithm to converge close to the right solution by exploiting the wealth of statistical information contained in the maps. Note also that in the noiseless case with constant synchrotron spectral index, the CMB power spectrum is reconstructed at the same good level both for the 70‐ and 100‐GHz and for the 30‐ and 44‐GHz channel combinations, although in this frequency range the synchrotron emission changes amplitude by a factor of about 10. Indeed, by comparing the top‐left and bottom‐left‐hand panels of Figs 4 and 5, we can note that there is only a minimal difference between the B spectra at 44 and 100 GHz, arising at high ℓ, while the E spectra exhibit no appreciable difference at all.
Table 1. Percentage errors on the frequency scaling reconstruction in the noiseless case.
In the case of a spatially varying spectral index (right‐hand panels of Figs 4 and 5) a rigorous component separation is virtually impossible, because the basic assumption of rigid frequency scaling is badly violated. However, fastica is able to approach convergence by estimating a sort of 'mean' foreground emission, scaling roughly with the mean value of the spectral index distribution. Some residual synchrotron contamination of the reconstructed CMB maps cannot be avoided, however. This residual is proportional to the difference between the 'true' synchrotron emission and that corresponding to the 'mean' spectral index, and is thus less relevant at the higher frequencies, where the synchrotron emission is weaker. As shown by the upper right‐hand panels of Figs 4 and 5, when the 70–100 GHz combination is used, the power spectrum of the CMB E mode is still well reconstructed on all scales, and even that of the CMB B mode is recovered at least up to ℓ ≃ 100. On smaller scales, synchrotron contamination of the B mode is strong in the SG case, but not in the SB case (at least up to ℓ ≃ 1000). As expected, the separation quality degrades substantially if the 30–44 GHz combination is used (right‐hand bottom panels of Figs 4 and 5; see also Table 1, where the quoted error on the frequency scaling for a varying spectral index is the percentage difference between the average values $\langle(\nu_1/\nu_2)^{\beta}\rangle$ computed on the input and reconstructed synchrotron maps).
To compare the fastica performance when the Q and U maps are dealt with separately or together (case QU), we have carried out a Monte Carlo chain over CMB realizations, referring to the 70‐ and 100‐GHz channels, and we have computed the rms errors on the CMB frequency scaling reconstruction, σQ, σU and σQU. The results are shown in Table 2. Again, the reconstruction is better when the weaker synchrotron model SB is considered. The slight difference between σQ and σU is probably due to the particular realization of the synchrotron model we have used (not changed through this Monte Carlo chain). The fact that the difference is present for both the SB and SG models is not surprising, because the two models have different power as a function of angular scale but the same distribution of polarization angles. The reason why we have not varied the foreground templates in our chain is the present poor knowledge of the underlying signal statistics.
Table 2. Percentage rms for the CMB frequency scaling reconstruction resulting from a Monte Carlo chain of fastica applied to 50 different CMB realizations at 70 and 100 GHz. σQ and σU are obtained by treating Q and U separately, while σQU is the result when they are considered as a single array. The 1σ error on the parameter estimation, assuming Gaussianity, is also indicated.
The separation precision when Q and U are considered together is equivalent to, or better than, that obtained when they are treated separately, as expected because the statistical information in the maps processed by fastica is greater. On the other hand, the fact that σQU is so close to σQ and σU clearly indicates that, for a pixel size of about 3.5 arcmin, the statistical information in the maps is such that the results are not greatly improved if the pixel number is doubled. In the following we consider only the most general case, in which the Q and U maps are treated separately.
### 4.2 Effect of noise
To study the effect of noise on fastica component separation we use a map resolution of about 7 arcmin, corresponding to nside = 512 in the healpix scheme (Górski et al. 1999). At this resolution, the all‐sky separation runs take a few seconds. Moreover, we consider only the combination of the 70‐ and 100‐GHz channels, and a space‐varying synchrotron spectral index. We give the results for one particular noise realization and then show that the quoted results are representative of the typical fastica performance within the present assumptions. Moreover, we investigate how the foreground emission affects the recovered CMB map. The noise is assumed to be Gaussian and uniformly distributed, with rms parametrized by the signal‐to‐noise (S/N) ratio, where the signal stands for the CMB. As we have already stressed, the noise is subtracted both during the separation process and from the reconstructed $C_\ell$, according to the estimate in equation (7). The results are affected by the residual noise fluctuations, with power given by equation (8).
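For definiteness, noise of this kind can be generated as in the following sketch, where the S/N ratio fixes the noise rms relative to the CMB map rms (the function name is ours):

```python
import numpy as np

def add_noise_at_snr(sky_map, snr, seed=0):
    """Add uniform Gaussian noise whose rms is the CMB (signal) map rms
    divided by the requested S/N ratio; returns the noisy map and the rms."""
    rng = np.random.default_rng(seed)
    sigma_noise = sky_map.std() / snr
    noisy = sky_map + sigma_noise * rng.standard_normal(sky_map.shape)
    return noisy, sigma_noise
```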
As expected, the noise primarily affects the reconstruction of the CMB B mode. In Fig. 6 we plot the reconstructed and original CMB E‐ and B‐mode power spectra, for the SG (left) and SB (right) foreground emission, in the case S/N = 2. With this level of noise, separation is still successful: the E‐mode power spectrum is recovered very well, while that of the B mode is well reconstructed up to the characteristic peak at ℓ ≃ 100. Table 3 shows that the error on the frequency scaling recovery for the CMB remains, in the noisy case, at the per cent level for both Q and U.
Figure 6. Original (dotted) and reconstructed (solid) 100‐GHz power spectra for E (top) and B (bottom) CMB modes for the SG (left) and SB (right) cases, assuming S/N = 2 and considering the 70‐ and 100‐GHz channels; the synchrotron spectral index is space‐varying.
Table 3. Percentage errors on the CMB frequency scaling reconstruction in the noisy case, considering the 70‐ and 100‐GHz channels.
If the S/N ratio is decreased, the B modes are quickly lost, while the algorithm is still successful in recovering the E power spectrum for S/N ≳ 0.2 (see Fig. 7). The algorithm starts failing at low multipoles, say ℓ ≲ 100, where synchrotron dominates over the CMB in both the SG and SB cases. The results in Fig. 7 for the SG case are averages over eight multipoles, to avoid excessive oscillations of the recovered spectrum. For the S/N ratios in this figure, the B modes are lost on all scales. Because the SG model has a higher amplitude, fastica is able to capture its statistics more efficiently than for SB, and can thus work at a lower S/N. Table 3 shows the degradation of the separation matrix for the S/N ratios of Fig. 7, compared to the case with S/N = 2.
Figure 7. Input (dotted) and reconstructed (solid) 100‐GHz power spectra for E (top) and B (bottom) CMB modes for the SG case (left, assuming S/N = 0.2, averaged over eight‐multipole intervals) and the SB case (right, assuming S/N = 0.5), using the 70‐ and 100‐GHz channels. The synchrotron spectral index is space‐varying.
We stress that the noise levels quoted here are not the maximum that the algorithm can support. The quality of the separation depends on the noise level as well as on the number of channels considered; adding more channels, while keeping the number of components to recover constant, generally improves the statistical sample available to fastica, and hence both the quality of the reconstruction and the amount of noise that can be tolerated. In the next section we show an example where a satisfactory separation can be obtained with higher noise by considering a combination of three frequency channels.
We now investigate to what extent the results quoted here are representative of the typical fastica performance, and we study how the foreground emission biases the CMB maps recovered by fastica. To this end, we have performed a Monte Carlo chain of separation runs, building for each run a map of residuals by subtracting the input CMB template from the recovered one, and studying the ensemble of these residual maps.
The residuals in the noiseless case are just a copy of the foreground emission. Their amplitude is greatly reduced with respect to the true foreground amplitude, in proportion to the accuracy of the recovered separation matrix. Because the latter accuracy is at the level of a per cent or better, as can be seen in Table 1, the residual foreground emission in the recovered CMB map is roughly the true one divided by 100. In terms of the angular power spectrum (see, for example, Fig. 2), the residual foreground contamination of the recovered CMB power spectrum is roughly a factor of $10^4$ below the true one.
In the noisy separation, a key feature is that, at the present level of architecture, the fastica outputs are just a linear combination of the input channels. Thus, even if the separation goes perfectly, the noise appears in the output as the same linear combination of the input noise templates. Note, however, that this does not mean that the noise is transmitted linearly to the outputs: the way the separation matrix is found depends non‐linearly on the input data, including the noise. In other words, the noise directly affects the estimation of the separation matrix, as we explained in Section 3.2. Equation (6) describes only the amount of noise which affects the outputs after the separation matrix is found. As we shall see in a moment, at least in the case S/N = 2, the main effect of the noise is that given by equation (6), which dominates over the error induced by the noise on the estimation of the separation matrix. Moreover, the noise in the outputs reflects the input noise statistics, which is Gaussian and uniformly distributed on the sky. As we show now, this is verified even if the foreground contamination is the stronger SG one.
The results presented in Table 4 show the ensemble average of the mean of the residuals, together with its Gaussian expectation, and the mean rms error on the CMB frequency scaling recovery, σ. The most important feature is that a non‐zero mean value, at almost 10σ with respect to its Gaussian expectation, is detected. This is the only foreground contamination we find in the residuals. Note that the separation matrix is recovered with per-cent-level precision. This means that the present amount of noise does not significantly affect the accuracy of the separation process. Of course, if the noise is increased, the estimation of the separation matrix starts to be affected and eventually the foreground residual in the reconstructed CMB map becomes relevant.
Table 4. Statistics of the CMB residuals, in Kelvin, and percentage errors on the frequency scaling reconstruction for the case SG, S/N = 2, 70‐ and 100‐GHz channels, over 50 different noise and CMB realizations. The 1σ error on the parameter estimation, assuming Gaussianity, is also indicated.
We made a further check by verifying that the residuals obey Gaussian statistics with rms given by equation (6) at all Galactic latitudes. We constructed a map having in each pixel the variance built out of the 50 residual maps in our Monte Carlo chain. In Fig. 8 we show the rms of this map, plus/minus the standard deviation, calculated on rings of constant latitude, 1° wide. Together with the curves built out of our Monte Carlo chain, we report the theoretical values according to Gaussian statistics, i.e. the average given by equation (6), equal to (3.90 × 10−6 K)2, and the standard deviation over N = 50 samples, given by $\sigma^2\sqrt{2/N}$. The agreement demonstrates that the Gaussian expectation is satisfied at all latitudes, especially at the lowest, where the foreground contamination is expected to be maximal. Note that the fluctuations around the Gaussian theoretical levels are larger near the poles because of the enhanced sample variance.
Figure 8. Latitude analysis of the variance map calculated out of the residual maps in our Monte Carlo chain, in the case SG, S/N = 2, 70‐ and 100‐GHz channels. The solid line is the average at the corresponding latitude, while the dashed curves represent the average plus/minus the standard deviation. The dotted lines are derived assuming Gaussian statistics.
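The latitude check of Fig. 8 can be sketched as follows: build the per-pixel variance over the Monte Carlo residuals and compare it with the Gaussian expectations, i.e. the mean σ² from equation (6) and the scatter σ²√(2/N) over N samples (a sketch under those Gaussian assumptions):

```python
import numpy as np

def variance_map_check(residual_maps, sigma_out):
    """Per-pixel variance over a Monte Carlo chain of residual maps,
    with the Gaussian expectations for its mean and scatter."""
    res = np.asarray(residual_maps)        # shape (N_realizations, N_pix)
    n_real = res.shape[0]
    var_map = res.var(axis=0)
    expected_mean = sigma_out**2                          # from equation (6)
    expected_scatter = sigma_out**2 * np.sqrt(2.0 / n_real)
    return var_map, expected_mean, expected_scatter
```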
We conclude that, within the present assumptions, for a successful separation the residual foreground contamination in the recovered CMB map is subdominant with respect to the noise. On the other hand, further tests are needed to check this result against a more realistic noise model, featuring the most important systematic effects such as a non‐uniform sky distribution, the presence of non‐Gaussian features, etc.
## 5 AN APPLICATION TO PLANCK
In this section, we study how fastica behaves in conditions corresponding to the instrumental capabilities of Planck. While this work was being completed, the polarization capabilities at 100 GHz were lost because of a funding problem of the Low Frequency Instrument (LFI), but they could be restored if the 100‐GHz channel of the High Frequency Instrument (HFI) is upgraded, as is presently under discussion. The Planck polarization sensitivity in all its channels is of crucial importance, and we intend to support this case. Thus, we work assuming Planck polarization sensitivity at 100 GHz, stressing that our results have been obtained under this assumption. At 30, 44, 70 and 100 GHz, the Planck beams have full width at half‐maximum (FWHM) of 33, 23, 14 and 10 arcmin, respectively. We study the fastica effectiveness in recovering the E, B and TE modes separately. For polarization, we adopted the nominal noise level for total intensity measurements increased by a factor of √2 (note also that, owing to the healpix (version 1.10 and earlier) convention of normalizing Q and U following the prescription of Kamionkowski et al. 1997, a further factor of √2 has to be taken into account when generating Q and U maps out of a given power in E and B). We neglect all instrumental systematics in this work. The Planck instrumental features assumed here, with noise rms in antenna temperature calculated for a pixel size of about 3.52 arcmin, corresponding to nside = 1024 in the healpix scheme, are summarized in Table 5. Looking at the numbers, it is immediately clear that the level of noise is appreciably higher than that considered in the previous section, so that the same method would not work in this case and an improved analysis, involving more channels as described below, is necessary.
Table 5. Planck polarization performance assumed in this work.
### 5.1 E mode
Because of the high noise level, we found it convenient to include in the analysis the lower‐frequency channels together with those at 70 and 100 GHz. Because the fastica algorithm is unable to deal with channels having different FWHM, as in Maino et al. (2002) we had to degrade the maps, containing both signal and noise, to the worst resolution among the channels considered. However, a satisfactory recovery of the CMB E modes, extending over all scales up to the instrument's best resolution, is still possible by making use of the different angular scale properties of synchrotron and CMB. Indeed, as can be seen in Fig. 2, the Galaxy is likely to be a substantial contaminant at low multipoles, say ℓ ≲ 200.
For the present application, we found it convenient to use a combination of three Planck channels: 44, 70 and 100 GHz for the SG model, and 30, 70 and 100 GHz for the SB model. The reason for the difference is that the SB contamination is weaker, and the 30‐GHz channel is necessary for fastica to catch synchrotron with enough accuracy. Including a fourth channel does not bring relevant improvements. The maps, including signals properly smoothed and noise according to Table 5, were simulated at 3.52‐arcmin resolution, corresponding to nside = 1024. Higher‐frequency maps were then smoothed to the FWHM of the lowest‐frequency channel and re‐gridded to nside = 128, corresponding to a pixel size of about 28 arcmin and to a maximum multipole ℓ ≃ 400. In all the cases shown, the synchrotron spectral index was taken to be spatially varying. Fig. 9 shows the resulting CMB E‐mode power spectrum after separation, for the SG (left) and SB (right) synchrotron models. An average over every four (left) and three (right) multipoles was applied to eliminate fluctuations going negative in the lowest‐signal region at ℓ ≃ 10. The agreement between the original spectrum and the reconstructed one is good on all the scales probed at the present resolution, up to ℓ ≃ 400. The reionization bump is clearly visible, as well as the first polarization acoustic oscillation at ℓ ≃ 100. Moreover, there is no evident difference in the quality of the reconstruction between the two synchrotron models adopted.
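The pre-processing just described (smoothing every channel to the worst beam, then re-gridding) can be sketched with healpy as follows, assuming Gaussian beams so that the extra smoothing adds in quadrature (the real Planck beams are not exactly Gaussian):

```python
import numpy as np
import healpy as hp

def to_common_resolution(maps, fwhm_arcmin, nside_out=128):
    """Smooth each map to the worst (largest) beam among the channels and
    re-grid to nside_out, as required by the present fastica architecture."""
    worst = max(fwhm_arcmin)
    out = []
    for m, fwhm in zip(maps, fwhm_arcmin):
        extra = np.sqrt(worst**2 - fwhm**2)   # quadrature for Gaussian beams
        if extra > 0:
            m = hp.smoothing(m, fwhm=np.radians(extra / 60.0))
        out.append(hp.ud_grade(m, nside_out))
    return out
```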
Figure 9. Original (dotted) and reconstructed (solid) CMB $C_\ell^E$ obtained by applying the fastica algorithm to the combination of the 44‐, 70‐ and 100‐GHz channels for the SG synchrotron case (left) and of the 30‐, 70‐ and 100‐GHz channels for the SB model (right).
Let us turn now to the degree and subdegree angular scales, ℓ≳ 200. As we have already stressed, the Galaxy is expected to yield approximately equal power on E and B modes (see Fig. 2). On the other hand, CMB E and B modes are dramatically different on subdegree angular scales. Summarizing, on ℓ≳ 200, we expect to have
$$C_\ell^{E} \simeq C_\ell^{E,\rm CMB} + C_\ell^{\rm Gal},\qquad C_\ell^{B} \simeq C_\ell^{\rm Gal}. \tag{9}$$
Therefore, on ℓ ≳ 200, where the CMB B‐mode power is expected to be irrelevant, the power spectrum of the CMB E modes can be estimated by simply subtracting, together with the noise, the B power:
$$C_\ell^{E,\rm CMB} \simeq C_\ell^{E,\rm map} - C_\ell^{B,\rm map}, \tag{10}$$
where the total map, CMB+Gal, is used without any separation procedure; since the noise contributes equally to E and B (equation 7), it is removed by the same subtraction. In other words, there is no need to perform separation to obtain the CMB E modes at high multipoles: they are simply obtained by subtracting the B modes of the sky maps, because the latter are dominated by synchrotron, which has almost equal power in E and B. Fig. 10 shows the results of this technique applied to the Planck HFI 100‐GHz channel, assumed to have polarization capabilities, for both the SG and SB synchrotron models. Residual fluctuations are higher in the SG case because the synchrotron contamination is stronger. In both cases, the CMB E modes are successfully recovered in the whole interval 100 ≲ ℓ ≲ 1000. Note also that the same subtraction technique would not help at the lower multipoles considered before, because there the foreground contamination is so strong that the tiny fluctuations making the Galactic E and B modes different are likely to hide the CMB signal anyway.
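A sketch of this E-from-B subtraction with healpy, applied to a total (CMB plus Galaxy plus noise) map with no separation step (the function name is ours):

```python
import healpy as hp

def cmb_e_spectrum_by_b_subtraction(i_map, q_map, u_map, lmax=1500):
    """Estimate the CMB E spectrum at high ell (equation 10): the Galaxy
    has nearly equal E and B power and the noise contributes equally to
    both, so subtracting the measured B power leaves the CMB E modes."""
    cl_tt, cl_ee, cl_bb, cl_te, cl_eb, cl_tb = hp.anafast(
        [i_map, q_map, u_map], lmax=lmax, pol=True)
    return cl_ee - cl_bb
```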
Figure 10. Input (dotted) and reconstructed (solid) CMB $C_\ell^E$ obtained by subtracting the expected level of noise as well as the synchrotron contamination, SG (left) and SB (right), assumed to be matched by the B‐mode map. The adopted instrumental capabilities are those of the Planck HFI channel at 100 GHz, assumed to be polarization sensitive.
Our results here can be summarized as follows. With Planck capabilities, the fastica technique makes it possible to substantially remove the foreground contamination in the regions where it is expected to be relevant. Planck is likely to measure the CMB E modes over all multipoles up to ℓ ≃ 1000.
### 5.2 B mode
In Fig. 11 the B‐mode power spectrum after the fastica separation described in the previous section is shown for the SG (left) and SB (right) synchrotron models. An average over 13 multipoles has been applied in both cases in order to avoid fluctuations going negative. The reconstructed signal approaches the original one at very low multipoles, say ℓ ≲ 5. At higher multipoles, where the B signal is generated by gravitational waves, the overall amplitude appears to be recovered, albeit with major contamination, especially where the signal is low, i.e. right between the reionization bump and the rise towards the peak at ℓ ≃ 100. Needless to say, such contamination is due to residual foreground emission.
Figure 11. Original (dotted) and reconstructed (solid) CMB $C_\ell^B$ obtained by applying the fastica algorithm to the combination of the Planck 44‐, 70‐ and 100‐GHz channels for the SG synchrotron case (left) and to the combination of the 30‐, 70‐ and 100‐GHz channels for the SB model (right). The data points in the insets show the recovered B‐mode power spectrum in the range 30 ≤ ℓ ≤ 120, averaged over 20 multipoles; the error bars are given by equation (8).
In the insets of Fig. 11 we show (data points) the recovered B‐mode power spectrum in the range 30 ≤ ℓ ≤ 120, averaged over 20 multipoles, with error bars given by equation (8). Although the contamination is substantial, especially for ℓ ≥ 100 and for the SG case, the results show a hint of the characteristic rise of the spectrum due to cosmological gravitational waves.
In conclusion, our results indicate that the fastica technique is able to substantially remove the foreground contamination of the B mode, up to the peak at ℓ ≃ 100, if the tensor-to-scalar perturbation ratio is at least 30 per cent.
### 5.3 TE mode
While the cosmological TE power spectrum is substantially stronger than that of any other polarized CMB mode, the opposite should happen in the case of foregrounds. On degree and subdegree angular scales, a measure of the synchrotron TE power spectrum can be achieved in the radio band. In the Parkes data at 1.4 GHz, Uyaniker et al. (1999) were able to isolate a region exhibiting low rotation measures, called the ‘fan region’, which is therefore expected to be only weakly affected by Faraday depolarization. This and other regions from the existing surveys in the radio band were used to predict the synchrotron power for the SB scenario (Baccigalpi et al. 2001).
In Fig. 12 we show the T, E, B and TE power spectra for the fan region. Total intensity anisotropies are represented by the upper curve (solid). The E and B modes (light lines) have very similar behaviour. The TE mode (heavy solid line) is the weakest and, as can easily be seen by scaling the TE amplitude in Fig. 12 with the typical spectral index for synchrotron, it is markedly below the expected cosmological TE signal at CMB frequencies. Both synchrotron models, SG and SB, are consistent with this result, as illustrated in Fig. 13. It is straightforward to check that our models for the synchrotron emission have a TE power spectrum not far from the one in Fig. 12, when scaled to the appropriate frequency.
Figure 12. Power spectra of synchrotron T (solid line), E and B (light lines) and TE (heavy solid line) modes in the fan region at medium Galactic latitudes, at 1.4 GHz (Uyaniker et al. 1999).
Figure 13. $C_\ell^{TE}$ of the CMB (dotted) compared with that of synchrotron according to the SG (solid, left) and SB (solid, right) models, at 100 GHz.
From the point of view of CMB observations, this means that, if the synchrotron contamination at microwave frequencies is well represented by its signal in the radio band, the contamination from synchrotron is almost absent at least on degree and subdegree angular scales, owing to the change of the magnetic field orientation along the line of sight. On larger scales, on the other hand, as can be seen in Fig. 13, the contamination could be relevant in both the SG and SB cases, and we perform component separation as described in Section 5.1 for the Planck case. In Fig. 14 we show the recovery of the CMB TE mode, obtained by combining the Q and U templates recovered by fastica as in Section 5.1 with the CMB T template obtained, again with a fastica‐based component separation strategy, in Maino et al. (2002). Oscillations due to residual noise are visible in the recovered $C_\ell^{TE}$. However, as in the case of the E mode, the procedure succeeded in substantially removing the contamination.
Figure 14. $C_\ell^{TE}$ of the original (dotted) and recovered (solid) CMB emission obtained with fastica applied to simulated Planck maps, considering the SG (left) and SB (right) foreground models, respectively. Results are shown at 100 GHz.
In both the SG and SB cases, the synchrotron contamination is almost absent in the acoustic oscillation region of the spectrum, as is evident again from Fig. 13; neglecting it, we obtain the results shown in Fig. 15. The combination of Planck angular resolution and sensitivity allows the recovery of the TE power spectrum up to ℓ ≃ 1200, corresponding roughly to the seventh CMB acoustic oscillation.
Figure 15. $C_\ell^{TE}$ of the original (dotted) and recovered (solid) CMB emission adopting the SG (left) and SB (right) synchrotron models, considering the Planck performance at 100 GHz.
## 6 CONCLUDING REMARKS
Forthcoming experiments are expected to measure CMB polarization. The first detections have been obtained of pure polarization (Kovac et al. 2002), as well as of its correlation with total intensity CMB anisotropies, by the WMAP satellite (Kogut et al. 2003).
The foreground contamination is only moderately well known for total intensity measurements, and poorly known for polarization (see De Zotti 2002, and references therein). It is therefore crucial to develop data analysis tools able to clean the polarized CMB signal from foreground emission with a minimum number of a priori assumptions. In this work, we extended the fastica technique for blind component separation (Amari & Chichocki 1998; Hyvärinen 1999; Baccigalpi et al. 2000; Maino et al. 2002) to deal with astrophysical polarized radiation.
In our scheme, component separation is performed both on the Stokes parameter Q and U maps independently, and by joining them into a single data set. The E and B modes, coding the CMB physical content in the most suitable way (see Kamionkowski et al. 1997; Zaldarriaga & Seljak 1997), are then built out of the separation outputs. We have described how to estimate the noise on the fastica outputs, on Q and U as well as on E and B.
We tested this strategy on simulated all‐sky polarization maps at microwave frequencies containing a mixture of CMB and Galactic synchrotron. The CMB is modelled close to the current best fit (Spergel et al. 2003), with a component of cosmological gravitational waves at the 30 per cent level with respect to density perturbations. We also included reionization, although with an optical depth lower than indicated by the WMAP results (Bennett et al. 2003a), which came out while this work was being completed, but consistent with the Gunn–Peterson measurements by Becker et al. (2001). Galactic synchrotron was modelled with the two existing templates by Giardino et al. (2002) and Baccigalpi et al. (2001). These models yield approximately equal power on angular scales above a degree, dominating over the expected CMB power. On subdegree angular scales, the Giardino et al. (2002) model predicts a higher power, but still subdominant compared to the CMB E‐mode acoustic oscillations. Note that, at microwave frequencies, the fluctuations at high multipoles (ℓ ≳ 1000), corresponding to angular scales of a few arcmin, are likely dominated by compact or flat‐spectrum radio sources (Baccigalpi et al. 2002b; Mesa et al. 2002). Their signal is included in the maps used to estimate the synchrotron power spectrum.
We studied in detail the limiting performance in the noiseless case, as well as the degradation induced by Gaussian, uniformly distributed noise, considering two frequency combinations: 30 and 44 GHz, and 70 and 100 GHz. In the noiseless case, the algorithm is able to recover the CMB E and B modes on all the relevant scales. In particular, this result is stable against the space variations of the synchrotron spectral index indicated by the existing data. In this case, fastica converges to an average synchrotron component, characterized by a 'mean' spectral index across the sky, and removes it efficiently from the map. The output CMB map, which also contains residual synchrotron due to the space‐varying spectral index, is good provided the frequencies considered are those where the synchrotron contamination is weaker.
By switching on the noise we found that separation, at least as far as the CMB E mode is concerned, is still satisfactory for noise exceeding the CMB but not the foreground emission. The reason is that, in these conditions, the algorithm is still able to catch and remove the synchrotron component efficiently. We implemented a Monte Carlo chain varying the CMB and noise realizations to show that the performance quoted above is typical and does not depend on the particular case studied. Moreover, we studied how the foreground emission biases the recovered CMB map, by computing maps of residuals, i.e. by subtracting the true CMB map from the recovered one. In the noiseless case, the residual is just a copy of the foreground emission, with amplitude decreased in proportion to the accuracy of the separation matrix. In the noisy case, for interesting noise amplitudes, the residual maps are dominated by the noise in the input data, linearly mixed by the separation matrix. The situation is obviously worse for the weaker CMB B mode.
We applied these tools with reference to the Planck polarization capabilities, in terms of frequencies, angular resolution and noise, to provide a first example of how the fastica technique could be relevant for high-precision, large polarization data sets. We addressed the analysis of the CMB E, B and TE modes separately. While this work was being completed, the LFI lost its 100‐GHz channel, which had polarization sensitivity. However, polarimetry at this frequency could be restored if the 100‐GHz channel of the HFI is upgraded, as is presently under discussion. Because of the scientific content of the CMB polarization signal, the Planck polarization sensitivity deserves great attention. In this context, it is our intention to support the importance of having polarization capabilities in all the cosmological channels of Planck, and in particular at 100 GHz. Our results have been obtained under this assumption.
To improve the signal statistics, we found it convenient to consider at least three frequency channels in the separation procedure, including those where the CMB is strongest, 70 and 100 GHz, plus one of the two lower-frequency channels, at 30 and 44 GHz. Because the latter have lower resolution, we had to degrade the higher‐frequency maps, since the present fastica architecture cannot deal with maps having different resolutions. The CMB E and TE modes were accurately recovered for both synchrotron models considered. The B‐mode power spectrum is recovered on very large angular scales in the presence of a conspicuous reionization bump. On smaller scales, where the B‐mode power mainly comes from cosmological gravitational waves, the recovery is only marginal for a 30 per cent tensor-to-scalar perturbation ratio.
On subdegree angular scales, the contamination from synchrotron is almost irrelevant according to both models (Baccigalpi et al. 2001; Giardino et al. 2002). Moreover, Galactic E and B modes are expected to have approximately the same power (Zaldarriaga 2001), while for the CMB the latter are severely damped because they are associated with vector and tensor perturbations, vanishing on subhorizon scales at decoupling, corresponding to a degree or less in the sky (see Hu et al. 1999). This argument also holds if the B‐mode power is enhanced by weak lensing from matter structures along the line of sight (see Hu 2002, and references therein). Therefore, on these scales, we expect the E power spectrum to be a sum of Galactic and CMB contributions, while the B power comes essentially from foregrounds only. In these conditions, the CMB E power spectrum is recovered by simply subtracting the B power spectrum.
We also estimated the TE contamination from synchrotron, finding it irrelevant for the CMB because of the strength of the CMB TE component, which is due to the intrinsic correlation between the scalar total intensity anisotropies and the quadrupole exciting E polarization. By applying these considerations on subdegree angular scales, together with the results of the fastica procedure described above on larger scales, we showed how the Planck instrument is capable of recovering the CMB E and TE spectra on all scales down to the instrumental resolution, corresponding to a few arcmin. In terms of multipoles, the E and TE angular power spectra are recovered up to ℓ ≃ 1000 and 1200, respectively.
Summarizing, we found that the fastica algorithm, when applied to a Planck‐like experiment, could substantially clean the foreground contamination on the relevant multipoles, corresponding to degree angular scales and above, where such contamination, as presently known or predicted, is expected to be severe. Because the foreground contamination on subdegree angular scales is expected to be subdominant, the CMB TE and E modes are recovered on all scales, from the whole sky down to a few arcmin. In particular, the fastica algorithm can clean the B‐mode power spectrum up to the peak due to primordial gravitational waves, if the cosmological tensor amplitude is at least 30 per cent of the scalar one.
Still, despite these good results, the main limitation of the present approach is the neglect of any instrumental systematics. While it is important to assess the performance of a given data analysis tool in the presence of the nominal instrumental features, as we do here, a crucial test is checking the stability of such a tool when the assumptions about the most common sources of systematic error (beam asymmetry, non‐uniform and/or non‐Gaussian noise distribution, etc.), as well as about the idealized behaviour of the signals to be recovered, are relaxed. In this work we obtained a good hint about the second aspect, because we showed that fastica is stable against relaxation of the assumption, common to all component separation algorithms developed so far, of separability between the spatial and frequency dependence of each signal to be recovered. In a forthcoming work we will investigate how ica‐based algorithms for blind component separation deal with maps affected by the most important systematic errors.
## ACKNOWLEDGMENTS
C. Baccigalupi warmly thanks R. Stompor for several useful discussions. We are also grateful to G. Giardino for providing all‐sky maps of simulated synchrotron emission (Giardino et al. 2002), referred to as the SG model in this paper. The healpix sphere pixelization scheme, by A. J. Banday, M. Bartelmann, K. M. Górski, F. K. Hansen, E. F. Hivon and B. D. Wandelt, has been used extensively.
## REFERENCES
Amari S., Cichocki A., 1998, Proc. IEEE, 86, 2026
Baccigalupi C. et al., 2000, MNRAS, 318, 769
Baccigalupi C., Burigana C., Perrotta F., De Zotti G., La Porta L., Maino D., Maris M., Paladini R., 2001, A&A, 372, 8
Baccigalupi C., De Zotti G., Burigana C., Perrotta F., 2002b, in Cecchini S., Cortiglioni S., Sault R., Sbarra C., eds, AIP Conf. Proc. 609, Astrophysical Polarized Backgrounds. Am. Inst. Phys., New York, p. 84
Becker R. H. et al., 2001, AJ, 122, 2850
Bennett C. L. et al., 2003a, ApJ, 583, 1
Bennett C. L. et al., 2003b, ApJS, 148, 97
Benoit A. et al., 2003, A&A, 399, L19
Bernardi G., Carretti E., Cortiglioni S., Sault R. J., Kesteven M. J., Poppi S., 2003, ApJ, 594, L5
Bouchet F. R., Prunet S., Sethi S. K., 1999, MNRAS, 302, 663
Brouw W. N., Spoelstra T. A. T., 1976, A&AS, 26, 129
De Bernardis P. et al., 2002, ApJ, 564, 559
De Zotti G., 2002, in Cecchini S., Cortiglioni S., Sault R., Sbarra C., eds, AIP Conf. Proc. 609, Astrophysical Polarized Backgrounds. Am. Inst. Phys., New York, p. 295
Duncan A. R., Haynes R. F., Jones K. L., Stewart R. T., 1997, MNRAS, 291, 279
Duncan A. R., Haynes R. F., Reich W., Reich P., Grey A. D., 1998, MNRAS, 299, 942
Duncan A. R., Reich P., Reich W., Fürst E., 1999, A&A, 350, 447
Finkbeiner D. P., 2003, ApJS, 146, 407
Finkbeiner D. P., Davis M., Schlegel D. J., 1999, ApJ, 524, 867
Fosalba P., Lazarian A., Prunet S., Tauber J. A., 2002, in Cecchini S., Cortiglioni S., Sault R., Sbarra C., eds, AIP Conf. Proc. 609, Astrophysical Polarized Backgrounds. Am. Inst. Phys., New York, p. 44
Giardino G., Banday A. J., Górski K. M., Bennett K., Jonas J. L., Tauber J., 2002, A&A, 387, 82
Górski K. M., Wandelt B. D., Hansen F. K., Hivon E., Banday A. J., 1999
Haffner L. M., Reynolds R. J., Tufte S. L., 1999, ApJ, 523, 223
Halverson N. W. et al., 2002, ApJ, 568, 38
Haslam C. G. T., Stoffel H., Salter C. J., Wilson W. E., 1982, A&AS, 47, 1
Hobson M. P., Jones A. W., Lasenby A. N., Bouchet F., 1998, MNRAS, 300, 1
Hu W., 2002, Phys. Rev. D, 65, 023003
Hu W., Dodelson S., 2002, ARA&A, 40, 171
Hu W., White M., Seljak U., Zaldarriaga M., 1999, Phys. Rev. D, 57, 3290
Hyvärinen A., 1999, IEEE Signal Process. Lett., 6, 145
Kamionkowski M., Kosowsky A., Stebbins A., 1997, Phys. Rev. D, 55, 7368
Kodama H., Sasaki M., 1984, Prog. Theor. Phys. Suppl., 78, 1
Kogut A. et al., 2003, ApJS, 148, 161
Kovac J., Leitch E. M., Pryke C., Carlstrom J. E., Halverson N. W., Holzapfel W. L., 2002, Nat, 420, 772
Lazarian A., Prunet S., 2002, in Cecchini S., Cortiglioni S., Sault R., Sbarra C., eds, AIP Conf. Proc. 609, Astrophysical Polarized Backgrounds. Am. Inst. Phys., New York, p. 32
Lee A. T. et al., 2001, ApJ, 561, L1
Liddle A., Lyth D. H., 2000, Cosmological Inflation and Large Scale Structure. Cambridge Univ. Press, Cambridge
Maino D. et al., 2002, MNRAS, 334, 53
Maino D., Banday A. J., Baccigalupi C., Perrotta F., Górski K., 2003, MNRAS, 344, 544
Mandolesi N. et al., 1998, Planck Low Frequency Instrument, proposal submitted to ESA
Mesa D., Baccigalupi C., De Zotti G., Gregorini L., Mack K. L., Vigotti M., Klein U., 2002, A&A, 396, 463
Moscardini L., Bartelmann M., Matarrese S., Andreani P., 2002, MNRAS, 335, 984
Padin S. et al., 2001, ApJ, 549, L1
Puget J. L. et al., 1998, High Frequency Instrument for the Planck mission, proposal submitted to ESA
Schlegel D. J., Finkbeiner D. P., Davis M., 1998, ApJ, 500, 525
Seljak U., Zaldarriaga M., 1996, ApJ, 469, 437
Smoot G. F., 1999, in Maiani L., Melchiorri F., Vittorio N., eds, AIP Conf. Proc. 476, 3K Cosmology. Am. Inst. Phys., New York, p. 1
Spergel D. N. et al., 2003, ApJS, 148, 175
Stolyarov V., Hobson M. P., Ashdown M. A. J., Lasenby A. N., 2002, MNRAS, 336, 97
Tegmark M., Efstathiou G., 1996, MNRAS, 281, 1297
Tenorio L., Jaffe A. H., Hanany S., Lineweaver C. H., 1999, MNRAS, 310, 823
Toffolatti L., Argueso Gomez F., De Zotti G., Mazzei P., Franceschini A., Danese L., Burigana C., 1998, MNRAS, 297, 117
Tucci M., Carretti E., Cecchini S., Fabbri R., Orsini M., Pierpaoli E., 2000, New Astron., 5, 181
Tucci M., Carretti E., Cecchini S., Nicastro L., Fabbri R., Gaensler B. M., Dickey J. M., McClure‐Griffiths N. M., 2002, ApJ, 579, 607
Uyaniker B., Fürst E., Reich W., Reich P., Wielebinski R., 1999, A&AS, 138, 31
Vielva P., Martínez‐González E., Cayón L., Diego J. M., Sanz J. L., Toffolatti L., 2001, MNRAS, 326, 181
Zaldarriaga M., 2001, Phys. Rev. D, 64, 103001
Zaldarriaga M., Seljak U., 1997, Phys. Rev. D, 55, 1830
## Footnotes
1. See for a list and details of operating and planned CMB experiments.
4. CMBpol.
5. See for a collection of presently operating and future CMB experiments.
|
Weighted Representation Asymptotic Basis of Integers
Received: December 26, 2014; Revised: March 20, 2015
Keywords: additive basis; representation function
Fund Project: Supported by the National Natural Science Foundation of China (Grant No. 11471017).
Authors: Yujie WANG and Min TANG, School of Mathematics and Computer Science, Anhui Normal University, Anhui 241003, P. R. China
Let $k_{1}, k_{2}$ be nonzero integers with $(k_{1}, k_{2})=1$ and $k_{1}k_{2}\neq-1$. Let $R_{k_{1}, k_{2}}(A, n)$ be the number of solutions of $n=k_{1}a_{1}+k_{2}a_{2}$, where $a_{1}, a_{2}\in A$. Recently, Xiong proved that there is a set $A\subseteq\mathbb{Z}$ such that $R_{k_{1}, k_{2}}(A, n)=1$ for all $n\in \mathbb{Z}$. Let $f: \mathbb{Z}\longrightarrow \mathbb{N}_{0}\cup\{\infty\}$ be a function such that $f^{-1}(0)$ is finite. In this paper, we generalize Xiong's result and prove that there exist uncountably many sets $A\subseteq \mathbb{Z}$ such that $R_{k_{1},k_{2}}(A, n)=f(n)$ for all $n\in\mathbb{Z}$.
|
The $K_{sp}$ of $PbBr_2$ is $1.84 \times 10^{-7}$.
If 0.020 M lead(II) nitrate and 0.010 M sodium bromide are mixed, what will be observed in the lab?
A. A precipitate will form at the bottom of the beaker, and the remaining solution will be clear and colorless.
B. The solution will be clear and colorless, but a precipitate will form if five more drops of lead(II) nitrate are added.
C. The solution will be clear and colorless, but a precipitate will form if five more drops of sodium bromide are added.
D. The solution will be clear and colorless. No precipitate will form if five more drops of either solution are added.
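Deciding among these choices amounts to comparing the ion product $Q$ with $K_{sp}$: a precipitate forms when $Q > K_{sp}$. As a sketch, assuming the stated concentrations are the concentrations in the combined solution (the problem does not give mixing volumes):

$$Q = [\mathrm{Pb^{2+}}][\mathrm{Br^{-}}]^{2} = (0.020)(0.010)^{2} = 2.0\times10^{-6} > 1.84\times10^{-7} = K_{sp}.$$

Even under an equal-volume mix, where both concentrations halve, $Q = (0.010)(0.0050)^{2} = 2.5\times10^{-7}$ still exceeds $K_{sp}$.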
|
# Sum and Product Confusion Puzzle
Sum Sam and Product Pete are in class when their teacher gives Sam the sum of two numbers and Pete the product of the same two numbers (both numbers are greater than or equal to 2). They must figure out the two numbers.
• Sam: I don’t know what the numbers are Pete.
• Pete: I knew you didn’t know the numbers… But neither do I.
• Sam: In that case, I do know the numbers.
What are the numbers?
## Solution
Assume the numbers are $a$ and $b$.
Sam is given the sum, $a + b = x$, and Pete is given the product, $ab = y$. Since
$$(a + b)^{2} = a^{2} + b^{2} + 2ab,$$
we can find $a^{2} + b^{2}$ from $x$ and $y$, and therefore
$$a - b = \sqrt{a^{2} + b^{2} - 2ab} = \sqrt{x^{2} - 4y}.$$
Solving $a + b = x$ together with $a - b = \sqrt{x^{2} - 4y}$ gives $a$ and $b$; which pairs are consistent depends on the particular values of $x$ (the sum) and $y$ (the product). In short, anyone who knows both the sum and the product can determine the numbers.
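Equivalently, $a$ and $b$ are the roots of $t^{2} - xt + y = 0$. A minimal sketch of this recovery in Python (the example values are arbitrary):

```python
import math

def recover(x, y):
    """Recover (a, b) from their sum x and product y (assumes real roots)."""
    d = math.sqrt(x * x - 4 * y)  # d = a - b, since (a - b)^2 = x^2 - 4y
    return (x + d) / 2, (x - d) / 2

print(recover(7, 12))  # sum 7, product 12 -> (4.0, 3.0)
```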
|
# Consider the following statements regarding a transmission line:
1. Its attenuation is constant and is independent of frequency.
2. Its attenuation varies linearly with frequency.
3. Its phase shift varies linearly with frequency.
4. Its phase shift is constant and is independent of frequency.

Which of the above statements are correct for a distortionless line?
This question was previously asked in
ESE Electronics 2010 Paper 1: Official Paper
1. 1, 2, 3 and 4
2. 2 and 3 only
3. 1 and 3 only
4. 3 and 4 only
## Answer (Detailed Solution Below)
Option 3 : 1 and 3 only
## Detailed Solution
Lossless transmission line:
The conductors of the line are perfect and the dielectric medium separating them is lossless.
∴ R = 0 = G
Propagation constant:
$$\gamma = \sqrt{(R + j\omega L)(G + j\omega C)} = \sqrt{(j\omega L)(j\omega C)} = j\omega\sqrt{LC}$$
γ = α + jβ
∴ Attenuation constant α = 0 and phase constant β = ω√(LC)
The attenuation does not depend on frequency, and the phase constant depends linearly on frequency.
Distortionless transmission line:
$$\frac{R}{L} = \frac{G}{C}$$
Propagation constant:
$$\gamma = \sqrt{(R + j\omega L)(G + j\omega C)} = \sqrt{RG\left(1 + \frac{j\omega L}{R}\right)\left(1 + \frac{j\omega C}{G}\right)} = \sqrt{RG}\left(1 + \frac{j\omega C}{G}\right)$$
γ = α + jβ
∴ Attenuation constant α = √(RG) and phase constant β = ω√(LC)
The attenuation does not depend on frequency (a flat response), and the phase constant depends linearly on frequency, so statements 1 and 3 are correct.
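As a quick numerical sanity check of these two expressions, here is a short Python sketch (the line parameters below are made up for illustration):

```python
import numpy as np

R, L = 0.1, 1e-6        # ohm/m and H/m (illustrative values)
C = 1e-10               # F/m
G = R * C / L           # enforce the distortionless condition R/L = G/C

omega = np.logspace(5, 8, 50)
gamma = np.sqrt((R + 1j * omega * L) * (G + 1j * omega * C))
alpha, beta = gamma.real, gamma.imag

print(np.allclose(alpha, np.sqrt(R * G)))         # True: attenuation is flat
print(np.allclose(beta, omega * np.sqrt(L * C)))  # True: phase shift is linear in omega
```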
|
# Introduction to Stacking Classifier
I already introduced you to the Adaptive Boosting Classifier and the Gradient Boosting Classifier, so now it is time for another ensemble classifier which I used in my research while writing my master's thesis: Stacking. This post is a theoretical introduction to the algorithm and an example of its implementation in Python using the MLxtend and scikit-learn libraries.
Theoretical introduction
The main idea behind Stacking is to explore the space of different models for the same problem. You try to solve the problem using various types of models, each of which is able to deal with some part of it. The goal is to build multiple learners that produce intermediate predictions, and then to add a new model that learns from those intermediate predictions of the same target. The input of the algorithm is the training set $S = \{(x_i, y_i)\}_{i=1}^{n}$ and the output is an ensemble classifier $H$ [1].
The algorithm can be presented in the following steps:
1. Training classifiers from the first level.
2. Creating a new set of predictions: $S_h = \{(x'_i, y_i)\}_{i=1}^{n}$, where $x'_i = \{h_1(x_i), \ldots, h_T(x_i)\}$.
3. Training the second-level classifier $H$ and producing the final prediction.
Choosing the classifiers
As mentioned earlier, a Stacking classifier consists of several base classifiers, which you need to choose. Popular choices in this setting are the following:
• the level one classifiers: Decision Tree classifier, Random Forest Classifier, Nearest Neighbour Classifier
• the level two classifiers: Neural Network, Random Forest Classifier, Support Vector Classifier
Example in Python
If you want to use this algorithm in Python, you can do so with scikit-learn and MLxtend. In this case, we use the k Nearest Neighbours algorithm for k equal to 1 and 3, as well as the Decision Tree algorithm. The second-level (meta) model is a Random Forest. You need to import the following:
```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from mlxtend.classifier import StackingClassifier
```
Let me skip the part of the implementation responsible for preparing the training and test sets; you can find it in my other article.
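For a self-contained example, here is a hypothetical stand-in for that step, using scikit-learn's built-in iris data (not the data set used in the original post):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(
    x, y, test_size=0.3, random_state=1)
```

With the data in place, we can proceed to using the classifier: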
```python
c1 = KNeighborsClassifier(n_neighbors=1)
c2 = KNeighborsClassifier(n_neighbors=3)
c3 = DecisionTreeClassifier()
c4 = RandomForestClassifier(random_state=1)

stacking = StackingClassifier(classifiers=[c1, c2, c3],
                              meta_classifier=c4)
stacking.fit(x_train, y_train.ravel())
predictions = stacking.predict(x_test)
```
The libraries allow you to set certain parameters for the classifiers, for example, the number of nearest neighbours for kNN, or the randomness of the bootstrapping of the samples used when building trees in the Random Forest. All the details can be found in the documentation of these libraries.
Summary
I wanted to introduce you to the Stacking Classifier. This is another ensemble classifier I have written about here. If you want to read more on this method, I encourage you to check the book Combining Pattern Classifiers: Methods and Algorithms by Ludmila I. Kuncheva, or the paper that I added in the sources of this blog post. I invite you to read my other articles connected with Machine Learning.
Sources
[1] Rising Odegua, An Empirical Study of Ensemble Techniques (Bagging, Boosting and Stacking), Conference: Deep Learning IndabaX, 2019
Scroll to top
|
# Translating grid with extrusion speed
I am putting into MATLAB code the equations that describe a plastic extrusion process. From a paper, I found that I should use a spatial grid that translates with the extrusion speed, with the reference coordinate system on the centerline of the extrudate.
In particular, $$s = 0$$ is the extruder die, and the length of the domain is time-dependent, as the extrudate gets longer and longer. In the paper, new grid points are continually added to the solution domain at $$s = 0$$ (the extruder die) to account for the extrusion of fresh fluid. How would I do this?
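One common implementation trick (a sketch of my own, not from the paper; the question is about MATLAB, but the indexing logic carries over) is to track how much the extrudate has grown and to prepend a node at $$s = 0$$ each time it has grown by one grid spacing:

```python
import numpy as np

ds = 0.01           # grid spacing (m), illustrative
v = 0.05            # extrusion speed (m/s), illustrative
dt = 1e-3           # time step (s)
inlet_value = 1.0   # state of the fresh fluid at the die (placeholder)

u = np.array([inlet_value])   # solution values on the translating grid
grown = 0.0
for _ in range(2000):
    grown += v * dt
    while grown >= ds:                    # domain lengthened by one cell
        u = np.insert(u, 0, inlet_value)  # new grid point at s = 0 (the die)
        grown -= ds

print(len(u))  # number of grid points now in the domain
```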
|
# Series divergence and convergence
The series is $\sum_{n=1}^{\infty} \frac{2n}{8n+11}$.
I know this series diverges by the divergence test because the limit of the terms is $\frac 14$ (not zero), but how can I know what it diverges to? Is it infinity?
• If $\lim_{n\to\infty}a_n>0$, then $\sum_{n=1}^Na_n\to+\infty$ – Simply Beautiful Art Mar 24 '17 at 21:59
• "Divergent" means that it doesn't converge, so it either goes to $\pm\infty$ or oscillates between values. – The Count Mar 24 '17 at 22:01
• Does the series move in one direction (positive vs. negative) or does it change directions? If the series diverges, and it does not change direction, then it must go to... – abiessu Mar 24 '17 at 22:12
• @TheCount "oscillate between values" may be a problematic intuition. for example take an enumeration $(q_n)_n$ of $\Bbb Q$, then $\sum_{n=1}^N (q_{n+1}-q_n)=q_{N+1}-q_1$ is not really oscillating (but generates a dense set) – tofurind Mar 24 '17 at 22:20
• @TheCount Yes, I were aware of that. ;) My comment were more meant as additional information for the OP not to memorize a maybe wrong interpretation of your comment. – tofurind Mar 24 '17 at 22:44
As you said, $\lim_{n\to\infty} \frac{2n}{8n+11} = \frac{1}{4}$, therefore the series does not converge. To answer your question we need two observations: the series is a sum of positive terms, and the sequence of terms is increasing (the next term being $\frac{2n+2}{8n+19}$). In such a case, the series diverges to positive infinity, since its partial sums become arbitrarily large.
To see that $\frac{2n}{8n+11}$ is positive, it is sufficient to observe that $n$ runs over the positive integers. On the other hand, $\frac{2n+2}{8n+19}>\frac{2n}{8n+11} \Leftrightarrow (2n+2)(8n+11)-2n(8n+19)>0 \Leftrightarrow 22>0$, which gives us what we aimed for.
One way to investigate the convergence/divergence of a series is to use the comparison test. For this series, one can compare: $\frac{2}{19}=\frac{2n}{19n} \leq \frac{2n}{8n+11} \leq \frac{2n}{8n}=\frac{1}{4}$. Now summing:
$$\sum_{n=1}^{\infty} \frac{2}{19} \leq \sum_{n=1}^{\infty} \frac{2n}{8n+11} \leq \sum_{n=1}^{\infty} \frac{1}{4},$$
and since the left-hand series already diverges to $+\infty$, so does the middle one.
Another way is (mentioned by AlexT above) to see that it is an increasing positive series: $a_n<a_{n+1}.$
Now, here is a challenge for you: does a series with decreasing positive terms, $a_n>a_{n+1}>0$, necessarily converge?
|
# Tag Info
12
If the multiplier takes two voltages as input and returns a voltage as output, then there is necessarily a constant involved, with units of [1/V]. Take for example, AD633 (which was the first search result). The output is the product of the 2 inputs times a constant: $V_{out} = \frac{V_1 \times V_2}{10V}$ So the output units are Volts.
4
In addition to Juancho's answer for the general mixer, I would like to give an example for a more simpler frequency mixer most commonly used in communication systems to shift the frequency spectrum of a message signal up or down for transmission or reception etc. The simplest understanding of a physical realisation of a mixer assumes an on-off switching ...
1
Mathematically speaking $V^2$ is perfectly alright to use. Physically I am not sure you can multiply electrical signals like that.
1
I can't find anything on the Internet regarding its practical applications. Translation, in general, occurs when a series is multiplied with (or "modulated by") a sinusoid. I write "series" because the translation effect can be applied to both the time domain and the frequency domain. The main thing to keep in mind here is that when you multiply a time ...
1
You can use a Sinc (or windowed Sinc) kernel for interpolation between FFT/DFT result bins. Note that near DC and Fs/2, the interpolation needs to be circular. After shifting a peak up and determining its phase, you can again use Sinc interpolation to figure out how it should be represented in some number of nearby (non-fractional) bins. Added: Linear ...
|
# Algebraic numbers are countable
Basically this question has many solutions in MSE for example proof 1, proof 2 etc. I have also tried to prove it and wanted to checked. It is as follows:
1. My first claim is that there are only countably many polynomials in $\mathbb{Z}[x]$, i.e. the set $\mathbb{Z}[x]$ is countable, which is probably easy to see as we can see it as $\mathbb{Z}^n$.
2. Since each polynomial has at most finitely many roots, the set of all the roots is a countable union of finite sets, which is countable.
Hence, the set of algebraic numbers is countable.
• We can see $\mathbb{Z}[X]$ as $\mathbb{Z}^n$ needs to be thought again. What is $n$? – Gribouillis Aug 12 '17 at 7:06
• easy to see as we can see it as Z^n Not obvious what you mean by that (what's $n$?). See however Prove that the set of integer coefficients polynomials is countable. Then step 2 is the easy part. – dxiv Aug 12 '17 at 7:07
• You can view $\mathbb{Z}[X]$ as $\bigcup_{n\geq 0}\mathbb{Z}^n$, but not as you claim. – Mathematician 42 Aug 12 '17 at 7:08
• Okay thanks, now I got it. $n$ to be chosen. It's not obvious. – Sachchidanand Prasad Aug 12 '17 at 7:11
• Essentially right - but the set of polynomials is the union of the set of polynomials of each degree. Each of those sets is countable and a countable union of countable sets is countable. – Ethan Bolker Aug 12 '17 at 14:03
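To make the enumeration concrete, here is a small illustrative sketch (the "height" measure, the bound, and the rounding tolerance are ad hoc choices of mine) that lists integer polynomials by height and collects their roots; each height contributes only finitely many polynomials, and each polynomial only finitely many roots:

```python
import itertools
import numpy as np

def polys_of_height(h):
    """Nonconstant integer polynomials (coefficient tuples, leading != 0)
    with degree + sum(|coefficients|) == h."""
    for deg in range(1, h):
        budget = h - deg
        for c in itertools.product(range(-budget, budget + 1), repeat=deg + 1):
            if c[0] != 0 and sum(abs(v) for v in c) == budget:
                yield c

algebraic = set()
for h in range(2, 7):                # only finitely many polynomials per height
    for coeffs in polys_of_height(h):
        for r in np.roots(coeffs):   # each polynomial has finitely many roots
            algebraic.add((round(r.real, 9), round(r.imag, 9)))

print(len(algebraic))  # a countable union of finite sets
```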
|
## Archipelago
A graph theory problem from the Riddler blog. Here it goes:
You live on the volcanic archipelago of Riddleria. Your archipelago is connected via a network of bridges, forming one unified community. In an effort to conserve resources, the ancient Riddlerians who built this network opted not to build bridges between any two islands that were already connected to the community otherwise. Hence, there is exactly one path from any one island to any other island.
Each island contains exactly one volcano. You know that if a volcano erupts, the subterranean pressure change will be so great that the volcano will collapse in on itself, causing its island — and any connected bridges — to crumble into the ocean. Remarkably, other islands will be spared unless their own volcanoes erupt. But if enough bridges go down, your once-unified archipelagic community could split into several smaller, disjointed communities.
If there were N islands in the archipelago originally and each volcano erupts independently with probability p, how many disjointed communities can you expect to find when you return? What value of p maximizes this number?
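Since the bridges form a tree, the surviving network is a forest, and its number of communities is simply (surviving islands) minus (surviving bridges). That makes a quick Monte Carlo check easy. The sketch below (an illustration, not the original write-up) uses a path-shaped archipelago; the expected count is the same for any tree, because every tree on N islands has N - 1 bridges and each bridge survives with probability (1 - p)^2:

```python
import random

def simulate(N, p, trials=100_000):
    total = 0
    for _ in range(trials):
        alive = [random.random() > p for _ in range(N)]  # island survives its volcano
        islands = sum(alive)
        bridges = sum(alive[i] and alive[i + 1] for i in range(N - 1))
        total += islands - bridges  # components of a forest = vertices - edges
    return total / trials

N, p = 10, 0.3
print(simulate(N, p))                        # Monte Carlo estimate
print(N * (1 - p) - (N - 1) * (1 - p) ** 2)  # closed form for the expectation
```

Setting the derivative of this closed form to zero gives the maximizing value p = (N - 2) / (2(N - 1)).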
## Pool hall robots
This Riddler puzzle is about arranging pool balls using a robot!
You own a start-up, RoboRackers™, that makes robots that can rack pool balls. To operate the robot, you give it a template, such as the one shown below. (The template only recognizes the differences among stripes, solids and the eight ball. None of the other numbers matters.)
First, the robot randomly corrals all of the balls into the wooden triangle. From there, the robot can either swap the location of two balls or rotate the entire rack 120 degrees in either direction. The robot continues performing these operations until the balls’ formation matches the template, and it always uses the fewest number of operations possible to do so.
Using the template given above — a correct rack for a standard game of eight-ball — what is the maximum number of operations the robot would perform? What starting position would yield this? How about the average number of operations?
Extra credit: What is the maximum number of operations the robot would perform using any template? Which template and starting position would yield this?
Consider four towns arranged to form the corners of a square, where each side is 10 miles long. You own a road-building company. The state has offered you \$28 million to construct a road system linking all four towns in some way, and it costs you \$1 million to build one mile of road. Can you turn a profit if you take the job?
Extra credit: How does your business calculus change if there were five towns arranged as a pentagon? Six as a hexagon? Etc.?
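For the four-town square, a useful benchmark (this is the classical Steiner network, not necessarily the full argument of the omitted write-up) is a road system with two interior junctions at which three roads meet at 120-degree angles. For side length $s$ its total length is $(1+\sqrt{3})\,s \approx 27.32$ miles when $s = 10$, so the build would cost about \$27.32 million against the \$28 million offer: a profit of roughly \$0.68 million.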
This Riddler problem considers the classical map-coloring problem with an adversarial twist! One player draws countries and the other player colors them.
Allison and Bob decide to play a map-coloring game. Each turn, Allison draws a simple closed curve on a piece of paper, and Bob must then color the interior of the “country” that curve creates with one of his many crayons. If the new country borders any pre-existing countries, Bob must color the new country with a color that is different from the ones he used for the bordering ones.
Allison wins the game when she forces Bob to use a sixth color. If they both play optimally, how many countries will Allison have to draw to win?
## Pokémon Go Efficiency
This Riddler puzzle is about a topic near and dear to many hearts: Pokémon!
Your neighborhood park is full of Pokéstops — places where you can restock on Pokéballs to, yes, catch more Pokémon! You are at one of them right now and want to visit them all. The Pokéstops are located at points whose (x, y) coordinates are integers on a fixed coordinate system in the park.
For any given pair of Pokéstops in your park, it is possible to walk from one to the other along a path that always goes from one Pokéstop to another Pokéstop adjacent to it. (Two Pokéstops are considered adjacent if they are at points that are exactly 1 unit apart. For example, Pokéstops at (3, 4) and (4, 4) would be considered adjacent.)
You’re an ambitious and efficient Pokémon trainer, who is also a bit of a homebody: You wish to visit each Pokéstop and return to where you started, while traveling the shortest possible total distance. In this open park, it is possible to walk in a straight line from any point to any other point — you’re not confined to the coordinate system’s grid. It turns out that this is a really hard problem, so you seek an approximate solution.
If there are N Pokéstops in total, find the upper and lower bounds on the total length of the optimal walk. (Your objective is to find bounds whose ratio is as close to 1 as possible.)
Advanced extra credit: For solvers who prefer a numerical question with this theme, suppose that the Pokéstops are located at every point with coordinates (x, y), where x and y are relatively prime positive integers less than or equal to 1,000. Find upper and lower bounds for the length of the optimal walk, again seeking bounds whose ratio is as close to 1 as possible.
The problem of visiting a set of locations while minimizing total distance traveled is known as a Traveling Salesman Problem (TSP), and it is indeed a famous and notoriously difficult problem in computer science. That being said, bounding the solution to a particular TSP instance can be easy if we take advantage of its structure.
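One generic way to get bounds of the kind the puzzle asks for (a sketch of a standard approach, not necessarily the omitted solution): any closed walk through all the points weighs at least as much as a minimum spanning tree, and walking around an MST and shortcutting gives a valid tour, so MST ≤ OPT ≤ 2 · MST. For a small, hypothetical instance:

```python
from math import gcd

import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

# Hypothetical small instance: coprime coordinate pairs up to 20 (not 1000).
pts = np.array([(x, y) for x in range(1, 21) for y in range(1, 21) if gcd(x, y) == 1])

dist = squareform(pdist(pts))  # pairwise Euclidean distances
mst_weight = minimum_spanning_tree(dist).sum()

print(f"lower bound (MST):   {mst_weight:.1f}")
print(f"upper bound (2*MST): {2 * mst_weight:.1f}")
```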
|
# Converse, Inverse, and Contrapositive
This is the third post in a series on logic, with a focus on how it is expressed in English. We’ve looked at basic ideas of translating between English and logical symbols, and in particular at negation (stating the opposite). Now we are ready to consider how to change a given statement into one of three related statements.
## A conditional statement and its converse
Math Logic - Determining Truth
A number divisible by 2 is divisible by 4. I'm supposed to figure out the hypothesis, the conclusion, and a converse statement, say whether the converse statement is true or false, and if it is false give a counterexample. I don't understand.
Ricky has been asked to break down the statement, “A number divisible by 2 is divisible by 4,” into its component parts, and then rearrange them to find the converse of the statement. I took the question:
You're asking about the terminology of logic, which is important in math to help us talk about proofs and how we know something is true. Words such as "converse" allow us to talk about our reasoning and see whether we are really making sense.
A statement such as "any number divisible by 2 is divisible by 4" (I've changed "a" to "any" to clarify the statement a little) can be rewritten as
IF a number N is divisible by 2, THEN the number N is divisible by 4
The hypothesis, or premise, is what is given or supposed, the "if":
a number N is divisible by 2
The conclusion is what is concluded from that, the "then":
the number N is divisible by 4
We commonly write such a statement symbolically as “$$p\rightarrow q$$“, where the hypothesis is p and the conclusion is q. I rewrote each part slightly to allow it to exist outside of the sentence, naming the number N to avoid needing pronouns. What was important was to rewrite the statement in if/then form.
The converse of this statement swaps the hypothesis and conclusion, making “$$q\rightarrow p$$“:
The converse of the statement "IF a THEN b" is "IF b THEN a", turning the statement around so that the conclusion becomes the hypothesis and the hypothesis becomes the conclusion. In this case, the converse is
IF a number N is divisible by 4, THEN the number N is divisible by 2
Ricky was asked to decide whether the converse is true or not, and then prove it, whichever way it goes. This part goes beyond mere logic and enters the realm of “number theory”; but commonly this sort of question is first asked in cases where the proof is not too hard, which is the case here.
Now we have to consider whether either statement is true. A statement and its converse may be either both true, or both false, or one true and the other false; knowing whether one is true says nothing about whether the other is true. In this case, the original statement is false. (This makes me wonder if you copied the problem wrong; it doesn't sound like this possibility was considered in the question.) How do I know it's false? Because I can give a counterexample: a number N for which the hypothesis is true but the conclusion is false. Can you see what I can use for N, which is even but not divisible by 4?
To show that a statement is not always true, we only need to find an example for which it is false. In this case, an easy example is 2, or we could use 6, or 102, or whatever we like.
But the question was about the converse:
However, the converse is true. See if you can see why. You might just try listing lots of numbers that are divisible by 4, and see whether they are all even. If all your examples are even, you haven't proven anything; but the list may suggest to you a reason why you will never be able to find a counterexample. That reason would be the basis of a proof.
I didn’t give a proof, in part because Ricky needed to think about that for himself, but also because I didn’t know what level of proof Ricky is expected to handle. One approach is to see that any multiple of 4 can be written as 4k for some integer k; but that can be written as 2(2k), which is clearly a multiple of 2.
## Converse, inverse, and contrapositive
Now we can review the meanings of all three terms, in this 1999 question, which again uses an example from basic number theory:
Contrapositive, Converse, Inverse
Let m and n be whole numbers, and consider the statement p implies q given by "if m + n is even, then m and n are even."
A) Express the contrapositive, the converse and the inverse of the given conditional.
B) For the statements that are true, give a proof.
C) For the statements that are false, give a counterexample.
I have part A (I think) but I'm having trouble deciding which statements are true and which are false, and I'm completely lost on the proofs.
Doctor Kate could have asked Hollye for her answers to part A, to make sure she understands that part; but she chose to provide them:
I'll give you what I got for the first part, to see if it's the same as what you got. First, though, here's what my "p" and "q" are:
p is "m + n is even"
q is "m and n are even"
~p is "m + n is odd" ("~p" means "NOT p")
~q is "either m or n is odd"
It’s important to identify the parts of a conditional statement (if p then q); and since two of the new statements require negations, that also might as well be done early. Notice that the negation of “is even” could have been written as “is not even”, but since every number (integer) is either odd or even, writing “is odd” is cleaner. Also, the negation of “both are even” is “at least one is not even”; this is an application of De Morgan’s law, or can be seen by considering that if it is not true that both are even, then there must be one that is not even. These ideas were discussed last time.
Now here are the new statements:
A. Contrapositive (if ~q then ~p):
"If either m or n is odd, then m + n is odd."
B. Converse (if q then p):
"If m and n are even, then m + n is even."
C. Inverse (if ~p then ~q):
"If m + n is odd, then either m or n is odd."
We saw the converse above; there we just swap p and q. The inverse keeps each part in place, but negates it. The contrapositive both swaps and negates the parts.
To check out which of these are true, it's best to experiment a little. Try some numbers.
Let's look at and pick some numbers where m or n is odd:
2 and 3
3 and 7
1 and 8
Notice that I tried to pick a variety of numbers - sometimes both odd, sometimes only one. That is because the opposite of "m and n are even" is "at least one of m or n is odd, and maybe both are." You can figure that out by imagining all sorts of things that don't satisfy "m and n are even." It could be really false (both m and n are not even) or just a bit false (only n is not even or only m is not even).
Anyway, let's take a look at these numbers.
Is 2 + 3 odd? Yes.
Is 3 + 7 odd? That's 10... no, it's not.
Wait, statement A says 3 + 7 WOULD be odd. This is a counterexample.
So now we know that the contrapositive, “If either m or n is odd, then m + n is odd,” is false, because there is at least one case, 3 and 7, where the hypothesis is true but the conclusion is false.
Remember that a statement like "<BLAH> is always true" can be proven false by just one example of when <BLAH> could be false. If I claim all dogs are black, all you have to do is bring me a Dalmatian, and I am wrong, even if a lot of dogs are black. Statement A is claiming that ALL the time, if one or both of n or m is odd, n+m is ALWAYS odd. But look, we found an example where it isn't. So statement A is false.
That’s the essence of a counterexample.
Doctor Kate continued, showing a way to prove that B and C (the converse and inverse) are both false. You can read that on your own, since my goal here is just to look at the logic. (We’ll have a series on proofs some time in the future.)
## Rewriting the statement
Continuing, here is a similar question, where statements must first be written in conditional form:
Converse, Inverse, Contrapositive
For the directions it says "Write the converse, inverse, and contrapositive of each conditional. Determine if the converse, inverse, and contapositive are true or false. If false, give a counterexample."
I can't seem to do these:
All squares are quadrilaterals.
If a ray bisects an angle, then the two angles formed are congruent.
Vertical angles are congruent.
Thank you!
The second statement is straightforward, but the others need thought. Doctor Achilles first defined the three forms, as we’ve already seen, and then dealt with the first case:
The problem with your questions are that they don't neatly fit into the "if p, then q" format, so you need to first find EQUIVALENT sentences that are "if p, then q."
Your first example says "all squares are quadrilaterals." That is the same as saying "if x is a square, then x is a quadrilateral."
Thus, “all” (the universal quantifier) translates directly to a conditional. The answer, left for Hana to do, will be:
• Converse: “If x is a quadrilateral, then x is a square”; i.e. “Any quadrilateral is a square.”
• Inverse: “If x is not a square, then x is not a quadrilateral”; i.e. “Anything that is not a square is not a quadrilateral.”
• Contrapositive: “If x is not a quadrilateral, then x is not a square”; i.e. “Anything that is not a quadrilateral is not a square.”
The original statement, and the contrapositive, are true, because a square is a kind of quadrilateral; the converse and inverse are false, and a counterexample would be an oblong rectangle, which is not a square but is a quadrilateral.
The questions so far, where they dealt with truth at all, only asked about specific examples. Our last two questions will look more broadly at when these statements are equivalent.
## Which can I use in a proof?
Consider this question, from 2002:
Contrapositive
I have a logic proof that I'm trying to solve. I'm up to the point after I've written down all my givens. One of the givens is p-->q. I want to say ~p-->~q, with my reason being inverse. Am I allowed to do this?
If we know a statement is true, can we conclude that the inverse is true? Doctor TWE answered with a counterexample:
No. Although the statement ~p --> ~q is called the inverse of p --> q, it does not necessarily follow.
Let's look at an example. Suppose that:
p = "X is 2"
and
q = "X is an even number"
Clearly, p --> q is true ("If X is 2 then X is an even number."). But is the inverse, ~p --> ~q, also true? This statement reads, "If X is NOT 2 then X is NOT an even number." Suppose X = 4. Then the "if" part, X is NOT 2, is true, but the "then" part, X is NOT an even number, is false. So the statement as a whole is false.
Here we are using logic to talk about logic: The statement “For all p and q, $$(p\rightarrow q)\rightarrow(\lnot p\rightarrow\lnot q)$$” is false! Sometimes both original and inverse are true, but we can’t conclude the latter from the former.
What you *are* allowed to use in a logic proof is the contrapositive. The contrapositive of p --> q is ~q --> ~p. It turns out that any conditional proposition ("if-then" statement) and its contrapositive are logically equivalent. In our example, the contrapositive of "If X is 2 then X is an even number" would read, "If X is NOT an even number then X is NOT 2." We can see that this is also true.
Giving one example where the contrapositive is true does not prove that it is always equivalent; we’ll prove it below.
A third possible "switching" of the statement p --> q is q --> p. This is called the converse, but like the inverse, it does not follow logically from the original statement. The converse of our original statement would read, "If X is an even number then X is 2." Clearly, not all even numbers are 2. So the converse statement is false. (It turns out that the inverse and converse statements are logically equivalent to each other - but not logically equivalent to the original statement.)
To summarize, given the statement p --> q:
The converse is q --> p, NOT equivalent to p --> q
The inverse is ~p --> ~q, NOT equivalent to p --> q
The contrapositive is ~q --> ~p, IS equivalent to p --> q
In fact, the converse and inverse turn out to be equivalent to one another, though not to the original.
## Why is the contrapositive equivalent?
Let’s look at one more, from 2003:
Truth of the Contrapositive
The inverse of a statement's converse is the statement's contrapositive. True, but why?
I don't know how to explain it. I tried an example:
p: I like cats.
q: I have cats.
Converse If I have cats, then I like cats.
Inverse If I don't like cats, then I don't have cats.
Contrapositive If I don't have cats, then I don't like cats.
I still can't explain the answer "true" that I came up with. Maybe it is wrong.
The opening statement describes the contrapositive as the inverse of the converse. What that means is this: Suppose we start with “$$p\rightarrow q$$“. Its converse is “$$q\rightarrow p$$” (swapping the order), and the inverse of that is “$$\lnot q\rightarrow\lnot p$$” (negating each part). This is the contrapositive. In the example, the converse of “If I like cats, then I have cats” is “If I have cats, then I like cats”, and the inverse of that is “If I don’t have cats, then I don’t like cats”, which is the contrapositive.
Doctor Achilles, perhaps misreading the question, answered the bigger question: Which of these are true?
The contrapositive is true if and only if the original statement is true. It is false if and only if the original statement is false. So it is logically equivalent to the original statement.
Let's say you have a conditional statement: "if I like cats, then I have cats." What does this mean? When is it true? When is it false?
Well, for starters, if you like cats and you have cats, then the conditional will come out true. That is, (P -> Q) is true when P and Q are both true.
Also, if you don't like cats and you don't have cats, then the conditional will come out true. That is, (P -> Q) is true when P and Q are both false.
Also, if you don't like cats and you have cats, then the conditional still comes out true. Remember, it says that if you like cats, then you will have them; it makes NO claim at all about what will happen if you don't like cats. So, (P -> Q) is true when P is false and Q is true.
However, if you like cats and you don't have cats, then the conditional will come out false. It says that if you like cats, then you will have them. So it is proven wrong if you like cats, but you still don't have any. So, (P -> Q) is false when P is true and Q is false.
In effect, he has made a truth table:
P Q P->Q
--- --- ------
T T T
F F T
F T T
T F F
If you are unconvinced by any of the reasoning, see Why, in Logic, Does False Imply Anything?.
So to review, (P -> Q) is true under any of these conditions:
P is true and Q is true
P is false and Q is true
P is false and Q is false
And it is only false under this one condition:
P is true and Q is false
You can say that another way, using ~P and ~Q (not-P and not-Q).
(P -> Q) is true under any of these conditions:
~P is false and ~Q is false
~P is true and ~Q is false
~P is true and ~Q is true
And it is only false when:
~P is false and ~Q is true
Is there another sentence that uses P and Q that is only false when ~P is false and ~Q is true? Yes, the sentence is:
(~Q -> ~P)
You can go through the same analysis of this sentence as I did for (P -> Q) and you'll find that it has the same truth conditions.
So the truth table for the contrapositive is the same as for the original; this is what we mean when we say that two statements are logically equivalent.
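This equivalence can also be checked mechanically. As a small illustration (not part of the original exchange), a few lines of Python that enumerate all four truth assignments:

```python
from itertools import product

def implies(a, b):
    # material conditional: "a -> b" is false only when a is true and b is false
    return (not a) or b

for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)  # original == contrapositive
    assert implies(q, p) == implies(not p, not q)  # converse == inverse
print("p -> q matches ~q -> ~p on all four rows")
```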
We can instead just think through the example:
You can also understand this more intuitively:
The sentence:
"If I like cats, then I have cats."
says that as long as the first part, "I like cats," is true, the second part, "I have cats," will definitely be true. In this case, what does "I don't have cats" mean? The only way "I don't have cats" can happen is if "I like cats" is false. That is, the only way "I don't have cats" can be true is if "I don't like cats" is true. Therefore, "If I don't have cats, then I don't like cats."
Which is more convincing? That depends upon you.
|
# 17. Retrieving information about compilation and execution
When developing models for the IPU, it is important to be able to see how compute tiles are being used and what the balance of memory use across them is. In certain cases, such as when investigating memory over-consumption of a model or investigating any tile imbalance issues, it is useful to produce a trace report that will show a number of different aspects of graph deployment on the IPU.
To retrieve trace information about the Poplar IPU compilation and execution, there are environment variables provided by Poplar itself to dump the compilation and execution reports into a file. See the Capturing IPU Reports chapter in the PopVision User Guide for more information. To enable time-based profiling of events, see the Capturing Execution Information chapter in the PopVision User Guide for more information.
## 17.1. TensorFlow options for reporting
Some tracing and reporting options are provided by TensorFlow as standard, and can be useful when developing graphs for the IPU.
TF_CPP_MIN_VLOG_LEVEL is an environment variable that enables the logging of the main C++ backend. Setting TF_CPP_MIN_VLOG_LEVEL=1 will show a lot of output. Included in this is the compilation and execution of the IPU code. The output of TF_CPP_MIN_VLOG_LEVEL can be overwhelming. If only the Poplar backend specific files are of interest, setting TF_POPLAR_VLOG_LEVEL=1 will filter the logging such that only those files produce outputs. Note that increasing the VLOG_LEVEL of either of those environment variables will increase the verbosity of the logs.
TF_CPP_VMODULE provides a mechanism to reduce the logging to certain translation units (source files). This combination is quite useful:
TF_CPP_VMODULE='poplar_compiler=1,poplar_executable=1'
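These variables can be exported in the shell before running your script; as a minimal sketch, they can also be set from Python, provided this happens before TensorFlow is imported (the particular combination below is just an example):

```python
import os

# These must be set before TensorFlow is imported, since the C++ backend
# reads them when its logging is initialised.
os.environ["TF_CPP_VMODULE"] = "poplar_compiler=1,poplar_executable=1"
os.environ["TF_POPLAR_VLOG_LEVEL"] = "1"

import tensorflow as tf  # noqa: E402 (import after setting the env vars)
```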
Finally, there is an environment variable called XLA_FLAGS which provides options to the general XLA backend. For example, the follow will produce a Graphviz DOT file of the optimised HLO graph which is passed to the Poplar compiler.
XLA_FLAGS='--xla_dump_to=. --xla_dump_hlo_as_dot --xla_dump_hlo_pass_re=forward-allocation --xla_hlo_graph_sharding_color'
The HLO pass forward-allocation is one of the final passes to run before the HLO instructions are scheduled for passing to the Poplar graph compiler. Running with these options will create a file called something like module_0001.0001.IPU.after_forward-allocation.before_hlo-memory-scheduler.dot. (The way that the file names are generated is explained in XLA graph file naming.) The Graphviz dot command can be used to convert this data to an image.
More information on the XLA flags can be found in the definition of the XLA proto here: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/xla/xla.proto
## 17.2. XLA graph file naming
The number of files produced depends on the number of TensorFlow HLO modules generated. This can generally be predicted from the number of sess.run calls on distinct graphs that you make. For example, if your program contains a variable initialisation then this will be compiled as a separate XLA graph and appear as a separate file when dumped. If your program creates a report operation, then that will also be compiled as a separate XLA graph.
When you use ipu_compiler.compile, you force everything inside the compile call to be compiled into a single XLA graph. If you don’t use ipu_compiler.compile, then the results depend on the XLA scheduler, which will combine or split up parts of the TensorFlow graph as it sees fit, creating many arbitrary distinct XLA graphs. If you do not use ipu_compiler.compile, expect to see a larger number of XLA graphs generated. Please note, there is no guarantee your compiled op will only produce one XLA graph. Sometimes others are created for operations such as casting.
The following description provides a break down of the names of the generated files. These are of the general form:
module_XXXX.YYYY.IPU.after_allocation-finder.before_forward-allocation.dot
• There is always a module_ prefix, which indicates that this is the graph for an HLO Module.
• The first XXXX is the HLO module’s unique ID, generated here: https://github.com/tensorflow/tensorflow/blob/r2.1/tensorflow/compiler/xla/service/dump.cc#L263
There is no guarantee about the spacing between IDs, only that they are unique and increasing.
• To understand the rest of the name, YYYY.IPU.......dot, we need to understand that the XLA graph is operated on by multiple different HLO passes, each modifying the XLA graph by optimizing, shuffling or otherwise rewriting it. After these passes, the graph is then lowered to Poplar. There are some TensorFlow native HLO passes, and there are some IPU specific ones.
When dumping the XLA graphs, we can render the XLA graph before and after any HLO pass (for example, to see the effect of that pass on the graph) by supplying the argument --xla_dump_hlo_pass_re=xxxx, where xxxx is a regular expression describing which passes you want. TensorFlow will then render the XLA graph before and after every pass whose name matches that regex. For example, if you wanted to see the effect of every XLA HLO IPU pass involving while loops, you could use --xla_dump_hlo_pass_re=*While*.
The number YYYY is simply an ID related to the order in which these graphs are generated.
• Finally, the passes which the graph was “between” when it was rendered are appended to the filename.
The before_optimizations graph is always rendered if dumping XLA.
• The HLO modules have CamelCase class names by convention. For the file names, these are converted to snake_case.
|