## Problem G. Grand Theft Auto Wheel
Author: ACM ICPC 2009-2010, NEERC, Northern Subregional Contest
Time limit: 3 sec
Memory limit: 256 Mb
Input file: gtaw.in
Output file: gtaw.out
### Statement
Tommy is a wheel thief. His job was formerly as easy as pie: you lift a car, turn off the wheel bolts, take the wheel, and run away. But now everybody uses "anti-theft" bolts.
An anti-theft bolt is designed in such a way that it cannot be turned off with a usual wrench. Its head is a cylinder with a hole. To turn an anti-theft bolt off you need the right wrench. The wrench has a ring with a lug that exactly matches the shape of the bolt head.
Of course Tommy cannot get wrenches for all possible anti-theft bolts. But sometimes it is possible to turn off the bolt with a wrench that does not match it exactly.
More formally, the wrench can turn off the bolt if and only if the following two conditions are satisfied:
• the ring of the wrench can be joined with the cylinder of the bolt head in such a way that the lug of the wrench is inside the hole of the bolt head;
• the wrench cannot make a full turn when the bolt is fixed.
Due to technical reasons, the shapes of both the hole of the bolt head and the lug of the wrench are always star-shaped polygons with their centers at the center of the bolt or wrench. So if such a shape is described in polar coordinates as a sequence of pairs (ri, φi), then φi < φi+1 and φi+1 − φi < 180°.
Help Tommy find out whether it is possible to turn off the bolt with the wrenches he has.
### Input file format
The first line of the input file contains two integer numbers n and R — the number of wrenches and the radius of the bolt head and of the wrenches' rings (1 ≤ n ≤ 10, 1 ≤ R ≤ 1000).
The following lines describe the bolt head. The description consists of an integer number m — the number of vertices (3 ≤ m ≤ 100) — and m pairs of integer numbers (ri, φi) (1 ≤ ri < R; 0° ≤ φi < 360°; φi < φi+1; φi+1 − φi < 180°; φm − φ1 > 180°).
The remaining lines describe the wrenches in the same format.
### Output file format
The first line of the output file must contain the number of wrenches that can be used to turn off the bolt. The following lines must contain wrench numbers in increasing order.
### Sample tests
No. 1

Input file (gtaw.in):

3 10
4
9 0
9 90
9 180
9 270
4
8 45
8 135
8 225
8 315
4
6 45
6 135
6 225
6 315
3
7 0
7 90
6 225

Output file (gtaw.out):

2
1 3
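As a building block for a solution, note that because both shapes are star-shaped about a common center, the lug fits inside the hole at a given relative rotation exactly when the lug's boundary radius never exceeds the hole's along any direction. Below is a rough Python sketch of that radial test — my own illustration, not the judge's reference solution; the sampling resolution and function names are arbitrary choices, and an exact solution would need exact geometric checks rather than sampling.

```python
import math

def radius_at(vertices, phi):
    """Boundary radius of a star polygon at angle phi (radians).
    vertices: list of (r, phi) pairs sorted by angle, angles in radians."""
    n = len(vertices)
    two_pi = 2.0 * math.pi
    phi %= two_pi
    for i in range(n):
        r1, a1 = vertices[i]
        r2, a2 = vertices[(i + 1) % n]
        span = (a2 - a1) % two_pi          # angular width of this sector
        off = (phi - a1) % two_pi          # offset of phi inside the sector
        if off <= span:
            # Polar equation of the chord through (r1, a1) and (r2, a2):
            # r(phi) = r1*r2*sin(span) / (r1*sin(off) + r2*sin(span - off))
            denom = r1 * math.sin(off) + r2 * math.sin(span - off)
            return r1 * r2 * math.sin(span) / denom
    return vertices[0][0]  # unreachable for a well-formed polygon

def lug_fits(lug, hole, rotation, samples=3600):
    """Crude sampled check: does the lug, rotated by `rotation`, fit
    inside the hole? True if its radius never exceeds the hole's."""
    for k in range(samples):
        phi = 2.0 * math.pi * k / samples
        if radius_at(lug, phi) > radius_at(hole, phi + rotation) + 1e-9:
            return False
    return True
```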
# How is the slope of a capital market line (Sharpe Ratio) defined?
The slope of a capital market line of a portfolio is its Sharpe Ratio. We know that the greater the returns of a portfolio, the greater the risk. The optimal and the best portfolio is often described as the one that earns the maximum return taking the least amount of risk.
One method used by professionals to increase returns while taking minimal risk is the eponymous "Sharpe Ratio". The Sharpe Ratio is a measure of risk-adjusted return: how good the investment's return is relative to the amount of risk taken. A higher Sharpe Ratio for an investment means a better risk-adjusted return.
## How to Calculate?
The Sharpe Ratio is easy to calculate, as it takes only three variables −
• Risk-free rate,
• Expected return, and
• Standard deviation (SD).
SD is the most popular way to calculate the risk of a portfolio, as it shows the variation of returns from the average. Risk usually goes up with increasing SD.
The risk-free rate is the rate of a theoretical investment with no risk and a typical proxy is a short-duration government bond yield.
The Sharpe Ratio is calculated using the formula −
$$\mathrm{\frac{Expected\:Return\:of\:Portfolio − Risk\:free\:Rate}{SD\:of\:Portfolio}}$$
## Different Assets in a Portfolio Matters
Assume that portfolio A had a 17 percent rate of return last year, while the overall market returned only 11 percent. The initial thought would be that portfolio A is better than the overall market because of the added return. However, once the risk of the portfolio is taken into account using the Sharpe Ratio, it turns out that portfolio A actually assumed much more risk. Hence, portfolio A was not optimal.
Let's assume that portfolio A had an SD of 14 percent versus 6 percent for the overall market, and that the risk-free rate was 2 percent.
Sharpe Ratio for portfolio A −
$$\mathrm{\frac{(17 − 2)}{14}= 1.07}$$
Sharpe Ratio for the overall market −
$$\mathrm{\frac{(11 − 2)}{6}= 1.5}$$
In this example, we see that portfolio A's Sharpe Ratio is lower even though it earned more than the market. The market portfolio, with the better Sharpe Ratio, was more optimal even though its return was less than portfolio A's. Therefore, portfolio A assumed excess risk without any additional compensation, while the overall market, with its higher Sharpe Ratio, had the better risk-adjusted return.
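To make the arithmetic concrete, here is a small Python sketch (my own addition, not part of the original article) that reproduces both ratios:

```python
def sharpe_ratio(expected_return, risk_free_rate, std_dev):
    """Risk-adjusted return: excess return per unit of volatility."""
    return (expected_return - risk_free_rate) / std_dev

# Portfolio A: 17% return, 14% SD; overall market: 11% return, 6% SD;
# risk-free rate: 2%.
print(round(sharpe_ratio(17, 2, 14), 2))  # 1.07
print(round(sharpe_ratio(11, 2, 6), 2))   # 1.5
```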
## Not Everything Is Normal
The Sharpe Ratio relies on the SD as a measure of risk; however, the standard deviation assumes a normal distribution, in which the mode, mean, and median are all equal. Recent history has shown that market returns are not usually normally distributed in the short term. In fact, market returns are actually skewed.
In a skewed distribution, the SD becomes less useful because the mean can be greater than or less than the other measures of central tendency. In addition, when short-term volatility spikes with large swings in both directions, the SD rises and causes the Sharpe Ratio to fall.
## Why Diversification is Useful
Standard Deviation of a portfolio of multiple assets is calculated using each asset’s standard deviation. The correlation coefficient among the assets and the weight of the asset in the portfolio is considered before calculating the SD of the portfolio.
When assets with low correlations are mixed to form a portfolio, the portfolio SD is lower than the weighted sum of the individual SDs. As a result, the Sharpe Ratio is higher, since the denominator of the ratio is lower.
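For the two-asset case, the portfolio standard deviation the article alludes to is the textbook identity (added here for concreteness; not stated in the original):

$$\sigma_p = \sqrt{w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2 w_1 w_2 \rho_{12}\,\sigma_1\sigma_2}$$

where the $w_i$ are portfolio weights, the $\sigma_i$ are the assets' SDs, and $\rho_{12}$ is their correlation coefficient. Whenever $\rho_{12} < 1$, $\sigma_p$ is strictly less than the weighted sum $w_1\sigma_1 + w_2\sigma_2$, which is why low correlation lifts the Sharpe Ratio.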
# Assistant Professor - Mathematics or Science
Faculty Position
Applications are invited for one or more tenure-track faculty positions at the rank of Assistant Professor, and in special cases Associate or Full Professor, at the Institute for Quantum Computing (IQC) and any department in the Faculties of Mathematics and Science. IQC is a collaborative research institute focused on quantum information science and technology, ranging from the theory of quantum information to practical applications. Membership in IQC is renewable, with an initial appointment of 5 years, and comes with research space, a teaching reduction of one course, and a stipend. Information about research at IQC can be found at http://uwaterloo.ca/iqc/research.
A PhD and significant evidence of excellence in research in quantum information science and technology and the potential for effective teaching are required. Responsibilities include the supervision of graduate students and teaching at the undergraduate and graduate levels. Based on qualifications, a salary range of $78,500 to $155,000 will be considered. Negotiations beyond this salary range will be considered for exceptionally qualified candidates. Effective date of appointment is September 1, 2018. The search is open to all areas of quantum information that connect with the goals and ongoing research at IQC.
The University of Waterloo is host to the Institute for Quantum Computing. At present, IQC has a complement of 28 faculty members (growing to 39) from the Faculties of Engineering, Mathematics and Science. Interested individuals should upload their application via the faculty application form at: https://uwaterloo.ca/institute-for-quantum-computing/positions.
Full consideration for these positions is assured only for applications received by December 1, 2017.
The University of Waterloo respects, appreciates and encourages diversity and is committed to accessibility for persons with disabilities. We welcome applications from all qualified individuals including women, members of visible minorities, Aboriginal peoples and persons with disabilities. All qualified candidates are encouraged to apply; however, Canadian citizens and permanent residents will be given priority in the recruitment process.
“Three reasons to apply: http://uwaterloo.ca/fauw/why.”
# Coulomb force and potential energy in water vs vacuum
If you have 2 ions of equal but opposite charge, will the force between them be larger in a vacuum and smaller in water? Would this be because the relative permittivity of water is greater than 1 (around 80)? Also, what does this mean for the interaction energy? Would an interaction be larger in vacuum? I am trying to figure out whether a reaction is more likely to take place in the two different media between the two ions...
Yes, much larger in vacuum. The interaction is proportional to the reciprocal of the dielectric constant (relative permittivity), $1/\epsilon$, so it is weaker in water. Think of it as an attenuation of the electric field around an ion: the larger $\epsilon$ is, the more the field is attenuated. This is true whatever the charges on the ions, whether the interaction is attractive or repulsive. The force at distance r between charges $q_1, q_2$ is
$$F=\frac{q_1q_2}{(4\pi\epsilon_0)\cdot\epsilon r^2}$$
in SI units; $\epsilon_0$ is the permittivity of free space. The charges are $q=ze$ where e is the charge on the electron and z the ionic valency, $\pm 1, 2 \cdots$ etc.
The interaction energy is
$$U=\frac{q_1q_2}{(4\pi\epsilon_0)\cdot\epsilon r} \qquad \mathrm{Joule}$$
The electric field around charge $q_1$ is
$$E=\frac{q_1}{(4\pi\epsilon_0)\cdot\epsilon r^2}$$
in V/m. The force acting on a second charge $q_2$ is $F=Eq_2$
It's worth plugging in some numbers for the energy for different $\epsilon$ and comparing this to the thermal energy $k_BT$.
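For instance, here is a rough Python sketch of that exercise (my own addition; the 0.5 nm separation and the variable names are illustrative choices): the interaction energy of two monovalent ions in vacuum versus water, compared with $k_BT$ at room temperature.

```python
import math

e = 1.602e-19        # elementary charge, C
eps0 = 8.854e-12     # vacuum permittivity, F/m
kB = 1.381e-23       # Boltzmann constant, J/K
T = 298.0            # temperature, K
r = 0.5e-9           # ion separation, m

def interaction_energy(z1, z2, eps_r):
    """U = q1*q2 / (4*pi*eps0*eps_r*r), in joules."""
    return (z1 * e) * (z2 * e) / (4.0 * math.pi * eps0 * eps_r * r)

for eps_r, medium in [(1.0, "vacuum"), (80.0, "water")]:
    U = interaction_energy(+1, -1, eps_r)
    print(f"{medium}: U = {U:.2e} J = {U / (kB * T):.1f} kT")
# In vacuum: roughly -4.6e-19 J, about -112 kT.
# In water: 80x weaker, only about -1.4 kT, comparable to thermal energy.
```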
• thank you – very informative. Just to check, if I were to calculate a force in water, would I use 80 in place of $\epsilon$ in the "$\epsilon_0\epsilon$" term? Also, if I had two +1 charges and I wanted them to react, would it be best in water, based on your argument of weakening the field? – gamma1 Jun 2 '17 at 21:07
• yes use $80$ for water as its the relative permittivity no units, $\epsilon_0 = 8.854\cdot 10^{-12} \pu{F\,m^{-1}}$. Yes also if charges are similar high dielectric is best. The rate const is $\ln(k)=\ln(k_0)-U/k_BT$ where $k_0$ is rate const at infinite dielectric const. You might also want to consider increasing the ionic strength of the solution, look up 'primary kinetic salt effect'. – porphyrin Jun 2 '17 at 21:31
• @porphyrin It could also be worth mentioning the reason for dielectric constant. Water molecules are polar and orient themselves to become antiparallel with the field. – Pritt Balagopal Jun 3 '17 at 4:15
• @Pritt, both polar and non-polar, i.e. all molecules have a dielectric constant. In non-polar ones this is $\epsilon-1 \sim n\alpha$ where n is the number density and $\alpha$ the polarisability. In polar ones there is additionally a term $\epsilon-1 \sim n\mu^2/(k_BT)$ where $\mu$ is the dipole moment. Some partial alignment takes place but this is opposed by thermal motion via the $k_BT$ term. It is too strong a statement to say that molecules become aligned; they are only ever partially so in a liquid. – porphyrin Jun 3 '17 at 8:48
PresburgerRelation.h (MLIR 16.0.0git)
//===- PresburgerRelation.h - MLIR PresburgerRelation Class -----*- C++ -*-===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// A class to represent unions of IntegerRelations.
//
//===----------------------------------------------------------------------===//

#ifndef MLIR_ANALYSIS_PRESBURGER_PRESBURGERRELATION_H
#define MLIR_ANALYSIS_PRESBURGER_PRESBURGERRELATION_H

#include "mlir/Analysis/Presburger/IntegerRelation.h"

namespace mlir {
namespace presburger {

/// The SetCoalescer class contains all functionality concerning the coalesce
/// heuristic. It is built from a PresburgerRelation and has the coalesce()
/// function as its main API.
class SetCoalescer;

/// A PresburgerRelation represents a union of IntegerRelations that live in
/// the same PresburgerSpace with support for union, intersection, subtraction,
/// and complement operations, as well as sampling.
///
/// The IntegerRelations (disjuncts) are stored in a vector, and the set
/// represents the union of these relations. An empty list corresponds to
/// the empty set.
///
/// Note that there are no invariants guaranteed on the list of disjuncts
/// other than that they are all in the same PresburgerSpace. For example, the
/// relations may overlap with each other.
class PresburgerRelation {
public:
  /// Return a universe set of the specified type that contains all points.
  static PresburgerRelation getUniverse(const PresburgerSpace &space);

  /// Return an empty set of the specified type that contains no points.
  static PresburgerRelation getEmpty(const PresburgerSpace &space);

  explicit PresburgerRelation(const IntegerRelation &disjunct);

  unsigned getNumDomainVars() const { return space.getNumDomainVars(); }
  unsigned getNumRangeVars() const { return space.getNumRangeVars(); }
  unsigned getNumSymbolVars() const { return space.getNumSymbolVars(); }
  unsigned getNumLocalVars() const { return space.getNumLocalVars(); }
  unsigned getNumVars() const { return space.getNumVars(); }

  /// Return the number of disjuncts in the union.
  unsigned getNumDisjuncts() const;

  const PresburgerSpace &getSpace() const { return space; }

  /// Set the space to oSpace. oSpace should not contain any local ids.
  /// oSpace need not have the same number of ids as the current space;
  /// it could have more or less. If it has less, the extra ids become
  /// locals of the disjuncts. It can also have more, in which case the
  /// disjuncts will have fewer locals. If its total number of ids
  /// exceeds that of some disjunct, an assert failure will occur.
  void setSpace(const PresburgerSpace &oSpace);

  /// Return a reference to the list of disjuncts.
  ArrayRef<IntegerRelation> getAllDisjuncts() const;

  /// Return the disjunct at the specified index.
  const IntegerRelation &getDisjunct(unsigned index) const;

  /// Mutate this set, turning it into the union of this set and the given
  /// disjunct.
  void unionInPlace(const IntegerRelation &disjunct);

  /// Mutate this set, turning it into the union of this set and the given set.
  void unionInPlace(const PresburgerRelation &set);

  /// Return the union of this set and the given set.
  PresburgerRelation unionSet(const PresburgerRelation &set) const;

  /// Return the intersection of this set and the given set.
  PresburgerRelation intersect(const PresburgerRelation &set) const;

  /// Return true if the set contains the given point, and false otherwise.
  bool containsPoint(ArrayRef<MPInt> point) const;
  bool containsPoint(ArrayRef<int64_t> point) const {
    return containsPoint(getMPIntVec(point));
  }

  /// Return the complement of this set. All local variables in the set must
  /// correspond to floor divisions.
  PresburgerRelation complement() const;

  /// Return the set difference of this set and the given set, i.e.,
  /// return this \ set. All local variables in set must correspond
  /// to floor divisions, but local variables in this need not correspond to
  /// divisions.
  PresburgerRelation subtract(const PresburgerRelation &set) const;

  /// Return true if this set is a subset of the given set, and false otherwise.
  bool isSubsetOf(const PresburgerRelation &set) const;

  /// Return true if this set is equal to the given set, and false otherwise.
  /// All local variables in both sets must correspond to floor divisions.
  bool isEqual(const PresburgerRelation &set) const;

  /// Return true if all the sets in the union are known to be integer empty,
  /// false otherwise.
  bool isIntegerEmpty() const;

  /// Find an integer sample from the given set. This should not be called if
  /// any of the disjuncts in the union are unbounded.
  bool findIntegerSample(SmallVectorImpl<MPInt> &sample);

  /// Compute an overapproximation of the number of integer points in the
  /// disjunct. Symbol vars are currently not supported. If the computed
  /// overapproximation is infinite, an empty optional is returned.
  ///
  /// This currently just sums up the overapproximations of the volumes of the
  /// disjuncts, so the approximation might be far from the true volume in the
  /// case when there is a lot of overlap between disjuncts.
  Optional<MPInt> computeVolume() const;

  /// Simplifies the representation of a PresburgerRelation.
  ///
  /// In particular, removes all disjuncts which are subsets of other
  /// disjuncts in the union.
  PresburgerRelation coalesce() const;

  /// Check whether all local ids in all disjuncts have a div representation.
  bool hasOnlyDivLocals() const;

  /// Compute an equivalent representation of the same relation, such that all
  /// local ids in all disjuncts have division representations. This
  /// representation may involve local ids that correspond to divisions, and may
  /// also be a union of convex disjuncts.
  PresburgerRelation computeReprWithOnlyDivLocals() const;

  /// Print the set's internal state.
  void print(raw_ostream &os) const;
  void dump() const;

protected:
  /// Construct an empty PresburgerRelation with the specified number of
  /// dimension and symbols.
  explicit PresburgerRelation(const PresburgerSpace &space) : space(space) {
    assert(space.getNumLocalVars() == 0 &&
           "PresburgerRelation cannot have local vars.");
  }

  PresburgerSpace space;

  /// The list of disjuncts that this set is the union of.
  SmallVector<IntegerRelation, 2> disjuncts;

  friend class SetCoalescer;
};

class PresburgerSet : public PresburgerRelation {
public:
  /// Return a universe set of the specified type that contains all points.
  static PresburgerSet getUniverse(const PresburgerSpace &space);

  /// Return an empty set of the specified type that contains no points.
  static PresburgerSet getEmpty(const PresburgerSpace &space);

  /// Create a set from a relation.
  explicit PresburgerSet(const IntegerPolyhedron &disjunct);
  explicit PresburgerSet(const PresburgerRelation &set);

  /// These operations are the same as the ones in PresburgerRelation, they
  /// just forward the argument and return the result as a set instead of a
  /// relation.
  PresburgerSet unionSet(const PresburgerRelation &set) const;
  PresburgerSet intersect(const PresburgerRelation &set) const;
  PresburgerSet complement() const;
  PresburgerSet subtract(const PresburgerRelation &set) const;
  PresburgerSet coalesce() const;

protected:
  /// Construct an empty PresburgerSet with the specified number of
  /// dimension and symbols.
  explicit PresburgerSet(const PresburgerSpace &space)
      : PresburgerRelation(space) {
    assert(space.getNumDomainVars() == 0 &&
           "Set type cannot have domain vars.");
    assert(space.getNumLocalVars() == 0 &&
           "PresburgerRelation cannot have local vars.");
  }
};

} // namespace presburger
} // namespace mlir

#endif // MLIR_ANALYSIS_PRESBURGER_PRESBURGERRELATION_H
# Show that the Altitude of the Right Circular Cone of Maximum Volume that Can Be Inscribed in a Sphere of Radius r Is 4r/3 - Mathematics
Show that the altitude of the right circular cone of maximum volume that can be inscribed in a sphere of radius r is 4r/3.
#### Solution
A sphere of fixed radius (r) is given.
Let R and h be the radius and the height of the cone respectively.
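The worked steps did not survive extraction; the following is a sketch of the standard argument (a reconstruction, not the textbook's verbatim solution). The volume of the cone is

$$V = \frac{1}{3}\pi R^2 h$$

and, since the cone is inscribed in the sphere, $R^2 = r^2 - (h - r)^2 = 2rh - h^2$, so

$$V(h) = \frac{\pi}{3}\left(2rh^2 - h^3\right), \qquad \frac{dV}{dh} = \frac{\pi}{3}\left(4rh - 3h^2\right) = 0 \implies h = \frac{4r}{3}$$

Since $\frac{d^2V}{dh^2} = \frac{\pi}{3}(4r - 6h) < 0$ at $h = \frac{4r}{3}$, this critical point is a maximum. Hence the altitude of the cone of maximum volume is $\frac{4r}{3}$.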
#### APPEARS IN
NCERT Class 12 Maths
Chapter 6 Application of Derivatives
Q 15, Page 243
# Velocity obstacle
The velocity obstacle $VO_{A|B}$ for a robot $A$, with position $\mathbf{x}_A$, induced by another robot $B$, with position $\mathbf{x}_B$ and velocity $\mathbf{v}_B$.
In robotics and motion planning, a velocity obstacle, commonly abbreviated VO, is the set of all velocities of a robot that will result in a collision with another robot at some moment in time, assuming that the other robot maintains its current velocity.[1] If the robot chooses a velocity inside the velocity obstacle, then the two robots will eventually collide; if it chooses a velocity outside the velocity obstacle, such a collision is guaranteed not to occur.[1]
This algorithm for robot collision avoidance has been repeatedly rediscovered and published under different names: in 1989 as a maneuvering-board approach,[2] in 1993 it was first introduced as the "velocity obstacle",[3] in 1998 as collision cones,[4] and in 2009 as forbidden velocity maps.[5] The same algorithm has been used in maritime port navigation since at least 1903.[6]
The velocity obstacle for a robot $A$ induced by a robot $B$ may be formally written as
$VO_{A|B} = \{ \mathbf{v}\,|\, \exists t > 0 : (\mathbf{v} - \mathbf{v}_B)t \in D(\mathbf{x}_B - \mathbf{x}_A, r_A + r_B) \}$
where $A$ has position $\mathbf{x}_A$ and radius $r_A$, and $B$ has position $\mathbf{x}_B$, radius $r_B$, and velocity $\mathbf{v}_B$. The notation $D(\mathbf{x}, r)$ represents a disc with center $\mathbf{x}$ and radius $r$.
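As a concrete illustration of this definition, membership of a velocity $\mathbf{v}$ in $VO_{A|B}$ reduces to asking whether the ray $\{(\mathbf{v} - \mathbf{v}_B)t : t > 0\}$ intersects the disc $D(\mathbf{x}_B - \mathbf{x}_A, r_A + r_B)$. A minimal Python sketch follows (my own illustration, not code from the cited papers; the function name and the use of NumPy are arbitrary choices):

```python
import numpy as np

def in_velocity_obstacle(v, x_a, r_a, x_b, r_b, v_b):
    """True if velocity v for robot A lies in VO_{A|B}: some t > 0 puts
    (v - v_b) * t inside the disc D(x_b - x_a, r_a + r_b)."""
    p = np.asarray(x_b, float) - np.asarray(x_a, float)  # relative position
    u = np.asarray(v, float) - np.asarray(v_b, float)    # relative velocity
    r = r_a + r_b
    if p @ p <= r * r:
        return True   # already overlapping: any t > 0 collides
    a = u @ u
    if a == 0.0:
        return False  # zero relative velocity never closes the gap
    # |u t - p|^2 <= r^2  <=>  a t^2 - 2 (p.u) t + |p|^2 - r^2 <= 0
    b = -2.0 * (p @ u)
    c = p @ p - r * r
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False  # the line of the ray misses the disc entirely
    t_max = (-b + np.sqrt(disc)) / (2.0 * a)
    return t_max > 0.0  # some positive collision time exists

# Example: A at the origin moving toward a stationary B straight ahead.
print(in_velocity_obstacle([1, 0], [0, 0], 0.5, [5, 0], 0.5, [0, 0]))  # True
```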
Variations include common velocity obstacles (CVO),[7] finite-time-interval velocity obstacles (FVO),[8] generalized velocity obstacles (GVO),[9] hybrid reciprocal velocity obstacles (HRVO),[10] nonlinear velocity obstacles (NLVO),[11] reciprocal velocity obstacles (RVO),[12] and recursive probabilistic velocity obstacles (PVO).[13]
## References
1. ^ a b Fiorini, P.; Shiller, Z. (July 1998). "Motion planning in dynamic environments using velocity obstacles". The International Journal of Robotics Research (Thousand Oaks, Calif.: SAGE Publications) 17 (7): 760–772. doi:10.1177/027836499801700706. ISSN 0278-3649.
2. ^ Tychonievich, L. P.; Zaret, D.; Mantegna, R.; Evans, R.; Muehle, E.; Martin, S. (1989). A maneuvering-board approach to path planning with moving obstacles. International Joint conference on Artificial Intelligence (IJCAI). pp. 1017–1021.
3. ^ Fiorini, P.; Shiller, Z. (1993). Motion planning in dynamic environments using the relative velocity paradigm. IEEE Conference on Robotics and Automation. pp. 560–565.
4. ^ Chakravarthy, A.; Ghose, D. (September 1998). "Obstacle avoidance in a dynamic environment: A collision cone approach". IEEE Transactions on Systems, Man and Cybernetics—Part A: Systems and Humans 28 (5): 562–574. doi:10.1109/3468.709600.
5. ^ Damas, B.; Santos-Victor, J. (2009). Avoiding moving obstacles: the forbidden velocity map. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). pp. 4393–4398.
6. ^ Miller, F. S.; Everett, A. F. (1903). Instructions for the Use of Martin’s Mooring Board and Battenberg’s Course Indicator. Authority of the Lords Commissioners of the Admiralty.
7. ^ Abe, Y.; Yoshiki, M. (November 2001). Collision avoidance method for multiple autonomous mobile agents by implicit cooperation. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 01). New York, N.Y.: IEEE. pp. 1207–1212. doi:10.1109/IROS.2001.977147.
8. ^ Guy, S. J.; Chhugani, J.; Kim, C.; Satish, N.; Lin, M.; Manocha, D.; Dubey, P. (August 2009). ClearPath: Highly parallel collision avoidance for multi-agent simulation. ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA 09). New York, N.Y.: ACM. pp. 177–187. doi:10.1145/1599470.1599494.
9. ^ Wilkie, D.; v.d. Berg, J.; Manocha, D. (October 2009). Generalized velocity obstacles. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 09). New York, N.Y.: IEEE. doi:10.1109/IROS.2009.5354175.
10. ^ Snape, J.; v.d. Berg, J.; Guy, S. J.; Manocha, D. (October 2009). Independent navigation of multiple mobile robots with hybrid reciprocal velocity obstacles. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 09). New York, N.Y.: IEEE.
11. ^ Large, F.; Sekhavat, S.; Shiller, Z.; Laugier, C. (December 2002). Using non-linear velocity obstacles to plan motions in a dynamic environment. IEEE International Conference on Control, Automation, Robotics and Vision (ICARCV 02). New York, N.Y.: IEEE. pp. 734–739. doi:10.1109/ICARCV.2002.1238513.
12. ^ v.d. Berg, J.; Lin, M.; Manocha, D. (May 2008). Reciprocal velocity obstacles for real-time multi-agent navigation. IEEE International Conference on Robotics and Automation (ICRA 08). New York, N.Y.: IEEE. pp. 1928–1935. doi:10.1109/ROBOT.2008.4543489.
13. ^ Fulgenzi, C.; Spalanzani, A.; Laugier, C. (April 2007). Dynamic obstacle avoidance in uncertain environment combining PVOs and occupancy grid. IEEE International Conference on Robotics and Automation (ICRA 07). New York, N.Y.: IEEE. pp. 1610–1616. doi:10.1109/ROBOT.2007.363554. |
# How to prove that a curve has no rational points
1. Feb 5, 2012
### AndreAo
Hello, I'm trying to do exercise 20 from chapter 6 of this http://www.people.vcu.edu/~rhammack/BookOfProof/index.html; it asks to show that the curve x² + y² − 3 = 0 has no rational points. The answer gives this tip: first show that a² + b² = 3c² has no solutions other than the trivial one. To do this, investigate the remainders of a sum of squares (mod 4). After you've done this, prove that the only solution is indeed the trivial solution...
I'm in trouble with this part, how can I use the information from the tip?
Thanks.
2. Feb 5, 2012
### AlephZero
Writing the hint a different way:
If there is a rational point x = p/q, y = r/s, then $(ps)^2 + (qr)^2 = 3(qs)^2$.
Any square has remainder 0 or 1 mod 4 (consider $(2k)^2$ and $(2k+1)^2$).
3. Feb 5, 2012
### AndreAo
Hi AlephZero, thanks for rewriting it.
I'd like an opinion about the proof I did.
First I proved, at least I think, that $x^2 \equiv 0 \pmod 4$ or $x^2 \equiv 1 \pmod 4$.
Proof. Suppose $x \in \mathbb{Z}$. Then either x is even or x is odd. We consider these cases separately.
Case 1: Suppose x is even. Then x = 2a for some integer a. Squaring both sides, $x^2 = 4a^2$. By definition of divisibility, $4 \mid x^2$. Thus $x^2 \equiv 0 \pmod 4$.
Case 2: Suppose x is odd. Then x = 2b + 1 for some integer b. Squaring both sides, $x^2 = 4b^2 + 4b + 1 = 4(b^2 + b) + 1$, which means $x^2 \equiv 1 \pmod 4$.
So $x^2 \equiv 0 \pmod 4$ or $x^2 \equiv 1 \pmod 4$, as we wanted to prove.
The problem: Show that the curve $x^2 + y^2 - 3 = 0$ has no rational points.
Proof. Suppose for the sake of contradiction that there exists a rational point $(x_0, y_0) \in \mathbb{Q}^2$. Then we can write $x_0 = \frac{p}{q}$ and $y_0 = \frac{r}{s}$, with $p, q, r, s \in \mathbb{Q}$ and $q, s \neq 0$.
Replacing $(x_0, y_0)$ in the equation, $(ps)^2 + (rq)^2 = 3(qs)^2$.
As we already proved $x^2 \equiv 0 \pmod 4$ or $x^2 \equiv 1 \pmod 4$, it's easy to see that $x^2 + y^2 \equiv 0$, $1$, or $2 \pmod 4$.
Let's analyze $3(qs)^2$. We have two cases: qs is even or qs is odd.
Case 1: Suppose qs is even. Then qs = 2a for some integer a. Then $3(qs)^2 = 3(2a)^2 = 12a^2 = 4(3a^2)$. So $3(qs)^2$ is divisible by 4, which means $3(qs)^2 \equiv 0 \pmod 4$.
Case 2: Suppose qs is odd. Then qs = 2b + 1 for some integer b. Then $3(qs)^2 = 3(2b+1)^2 = 12b^2 + 12b + 3 = 4(3b^2 + 3b) + 3$, which means $3(qs)^2 \equiv 3 \pmod 4$.
But we know that $x^2 + y^2 \equiv 0$, $1$, or $2 \pmod 4$ for any $x, y \in \mathbb{Z}$, and that $(ps)^2 + (rq)^2 \equiv 3(qs)^2 \pmod 4$, so it must be the case that $3(qs)^2 = 0$; but this would only be true if q or s equals zero, and as we already said that q, s must be different from zero, we have a contradiction.
Therefore, there is no rational point on the curve $x^2 + y^2 - 3 = 0$.
Are these proofs right? What do you think?
Thanks.
4. Feb 5, 2012
### willem2
You only proved $3(qs)^2 \equiv 0 \pmod 4$ and not $3(qs)^2 = 0$.
5. Feb 5, 2012
### AlephZero
If might help to be a bit more precise about what you assume for a solution.
For example you can put both fractions over the same denominator and write
x = p/q, y = r/q
You can also cancel out any common factors between p q and r, so at least one of the three numbers must be odd...
6. Feb 5, 2012
### AndreAo
Thanks both!
I made some mistakes.
1. $p, q, r, s \in \mathbb{Z}$
AlephZero, as you suggested, I think I should have said that p, q have no common factors, and r, s have no common factors, so they're reduced fractions.
Up to the point willem2 mentioned, I think it's OK.
Then $3(qs)^2 \equiv 0 \pmod 4$.
Thus, q or s has to be even. Suppose that q is even. Then p must be odd, as we said the fraction is already reduced. We know that $(ps)^2 + (rq)^2 \equiv 0 \pmod 4$.
Then $(ps)^2 + (rq)^2 = 4f$ for some natural f.
Then $(ps)^2 + (rq)^2 = 2(2f)$, which implies that both squares have to be even. We have rq even, as we supposed q to be even, so we must have s even, as p can't be. Up to now we found that p is odd, q is even, s is even, so r is odd.
I can't figure out any way to show a contradiction from this. Any ideas?
Thanks!
7. Feb 6, 2012
### willem2
Actually I can prove this:
by considering the remainder of a sum of squares (mod 3)
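For readers left hanging, here is a sketch of the mod-3 argument being hinted at (an addition for completeness, not part of the original thread). Squares are $\equiv 0$ or $1 \pmod 3$, so $a^2 + b^2 = 3c^2 \equiv 0 \pmod 3$ forces $a \equiv b \equiv 0 \pmod 3$. Writing $a = 3a'$ and $b = 3b'$ gives $3(a'^2 + b'^2) = c^2$, so $3 \mid c$ as well; with $c = 3c'$ this yields $a'^2 + b'^2 = 3c'^2$, a strictly smaller solution. By infinite descent, the only integer solution is $a = b = c = 0$, which rules out any rational point on the curve.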
# KSEEB Solutions for Class 6 Maths Chapter 5 Understanding Elementary Shapes Ex 5.2
Students can Download Chapter 5 Understanding Elementary Shapes Ex 5.2 Questions and Answers, Notes Pdf, KSEEB Solutions for Class 6 Maths helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.
## Karnataka State Syllabus Class 6 Maths Chapter 5 Understanding Elementary Shapes Ex 5.2
Question 1.
What fraction of a clock wise revolution does the hour hand of a clock turn through, When it goes from
Solution:
We may observe that in 1 complete clockwise revolution, the hour hand will rotate by 360°.
a) 3 to 9
When the hour hand goes from 3 to 9 clockwise, it will rotate by 2 right angles or 180°.
b) 4 to 7
When the hour hand goes from 4 to 7 clockwise, it will rotate by 1 right angle or 90°.
c) 7 to 10
When the hour hand goes from 7 to 10 clockwise, it will rotate by 1 right angle or 90°.
d) 12 to 9
When the hour hand goes from 12 to 9 clockwise, it will rotate by 3 right angles or 270°.
e) 1 to 10
When the hour hand goes from 1 to 10 clockwise, it will rotate by 3 right angles or 270°.
f) 6 to 3
When the hour hand goes from 6 to 3 clockwise, it will rotate by 3 right angles or 270°.
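The pattern behind all these answers is mechanical: each hour mark is 360°/12 = 30° apart. A tiny Python sketch (my own illustration, not part of the textbook solutions) that reproduces them:

```python
def hour_hand_turn(start, end):
    """Degrees the hour hand turns moving clockwise from `start` to `end`
    (hour marks 1-12); each of the 12 marks is 360/12 = 30 degrees apart."""
    marks = (end - start) % 12
    return marks * 30

print(hour_hand_turn(3, 9))   # 180
print(hour_hand_turn(1, 10))  # 270
```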
Question 2.
Where will the hand of a clock stop if it –
In 1 complete clockwise revolution, the hand of a clock will rotate by 360°.
Solution:
a) Starts at 12 and makes $$\frac{1}{2}$$ of a revolution, clockwise?
If the hand of the clock starts at 12 and makes $$\frac{1}{2}$$ of a revolution clockwise, it will rotate by 180° and hence it will stop at 6.
b) Starts at 2 and makes $$\frac{1}{2}$$ of a revolution, clockwise?
If the hand of the clock starts at 2 and makes $$\frac{1}{2}$$ of a revolution clockwise, then it will rotate by 180° and hence it will stop at 8.
c) Starts at 5 and makes $$\frac{1}{4}$$ of a revolution, clockwise?
If the hand of the clock starts at 5 and makes $$\frac{1}{4}$$ of a revolution clockwise, then it will rotate by 90° and hence it will stop at 8.
d) Starts at 5 and makes $$\frac{3}{4}$$ of a revolution, clockwise?
If the hand of the clock starts at 5 and makes $$\frac{3}{4}$$ of a revolution clockwise, then it will rotate by 270° and hence it will stop at 2.
Question 3.
Which direction will you face if you start facing
a) East and make $$\frac{1}{2}$$ of a revolution clockwise?
If we start facing east and make $$\frac{1}{2}$$ of a revolution clockwise, then we will face the west direction.
b) East and make $$1 \frac{1}{2}$$ of a revolution clockwise?
If we start facing east and make $$1 \frac{1}{2}$$ revolutions clockwise, then we will face the west direction.
c) West and make $$\frac{3}{4}$$ of a revolution anti-clockwise?
If we start facing west and make $$\frac{3}{4}$$ of a revolution anti-clockwise, then we will face the north direction.
d) South and make one full revolution (should we specify clockwise or anti-clockwise for this last question? Why not?)
There is no need to specify clockwise or anti-clockwise in the last question, as turning by one full revolution (i.e., two straight angles) brings us back to the original position.
When revolving by 1 complete round, the direction in which we revolve does not matter; in both cases, clockwise or anti-clockwise, we will be back at our initial position.
Question 4.
What part of a revolution have you turned through if you stand facing
Solution:
If we revolve one complete round in either the clockwise or the anti-clockwise direction, then we will revolve by 360°, and two adjacent directions will be 90°, or $$\frac{1}{4}$$ of a complete revolution, away from each other.
a) East and turn clockwise to face north?
If we start facing east and then turn clockwise to face north, then we have to make $$\frac{3}{4}$$ of a revolution.
b) South and turn clockwise to face east?
If we start facing south and turn clockwise to face east, then we have to make $$\frac{3}{4}$$ of a revolution.
c) West and turn clockwise to face east?
If we start facing west and turn clockwise to face east, then we have to make $$\frac{1}{2}$$ of a revolution.
Question 5.
Find the number of right angles turned through by the hour hand of a clock when it goes from
Solution:
a) 3 to 6
In one complete round, the hour hand of a clock revolves by 360° or 4 right angles. From 3 to 6, it revolves by 90° or 1 right angle.
b) 2 to 8
The hour hand of a clock revolves by 180° or 2 right angles when it goes from 2 to 8.
c) 5 to 11
The hour hand of a clock revolves by 180° or 2 right angles when it goes from 5 to 11.
d) 10 to 1
The hour hand of a clock revolves by 90° or 1 right angle when it goes from 10 to 1.
e) 12 to 9
The hour hand of a clock revolves by 270° or 3 right angles when it goes from 12 to 9.
f) 12 to 6
The hour hand of a clock revolves by 180° or 2 right angles when it goes from 12 to 6.
Question 6.
How many right angles do you make if you start facing
Solution:
If we revolve one complete round in either direction, then we will revolve by 360° or 4 right angles, and two adjacent directions will be 90° or 1 right angle away from each other.
a) South and turn clockwise to west?
If we start facing south and turn clockwise to west, then we make 1 right angle.
b) North and turn anti-clockwise to east?
If we start facing north and turn anti-clockwise to east, then we make 3 right angles.
c) West and turn to west?
If we start facing west and turn to west, then we make 1 complete round or 4 right angles.
d) South and turn to north?
If we start facing south and turn to north, then we make 2 right angles.
Question 7.
Where will the hour hand of a clock stop if it starts
Solution:
In 1 complete revolution (clockwise or anti-clockwise), the hour hand of a clock will rotate by 360° or 4 right angles.
a) From 6 and turns through 1 right angle?
If the hour hand of a clock starts from 6 and turns through 1 right angle, then it will stop at 9
b) From 8 and turns through 2 right angles?
If the hour hand of a clock starts from 8 and turns through 2 right angles, then it will stop at 2.
c) From 10 and turns through 3 right angles?
If the hour hand of a clock starts from 10 and turns through 3 right angles, then it will stop at 7.
d) From 7 and turns through 2 straight angles?
If the hour hand of a clock starts from 7 and turns through 2 straight angles, then it will stop at 7.
# Prove $f(x) = \frac 1 {\sqrt{2\pi}} \int_{\mathbb R} \hat f(t) e^{itx} \ \lambda(dt)$ for every $x \in \mathbb R$.
Let $f \in \mathcal L_{\mathbb C}^1(\lambda)$ be such that $\hat f \in L_{\mathbb C}^1(\lambda)$ ($\hat f$ denoting the Fourier transform).
I've proven that $f(x) = \frac 1 {\sqrt{2\pi}} \int_{\mathbb R} \hat f(t) e^{itx} \ \lambda(dt)$ for $\lambda$-almost every $x \in \mathbb R$ (this is also equal to the double Fourier transform of $f(-x)$).
Now suppose $f$ is also continuous. Then I want to show the above formula holds for all $x \in \mathbb R$.
I know a result that states that if $f, g$ are two continuous functions, then $f = g$ $\lambda$-almost everywhere $\iff$ $f(x) = g(x)$ for all $x \in \mathbb R$.
But how do I prove $\frac 1 {\sqrt{2\pi}} \int_{\mathbb R} \hat f(t) e^{itx} \ \lambda(dt)$ is continuous ?
• Use the dominated convergence theorem and continuity of $x \mapsto e^{itx}$. – copper.hat Dec 9 '14 at 17:34
• Could you sketch how you would apply it ? – Shuzheng Dec 9 '14 at 17:36
• Let $x \mapsto \phi(x)$ be the right hand side above. If $x_n \to x$ you want to show that $\phi(x_n) \to \phi(x)$. The function $t \mapsto \hat{f}(t) e^{ixt}$ is bounded by the integrable function $t \mapsto |\hat{f}(t)|$, and $\hat{f}(t) e^{ix_nt} \to \hat{f}(t) e^{ixt}$. – copper.hat Dec 9 '14 at 17:57
Theorem: Let $(X,\mathcal{A},\mu)$ be a measure space and $u: X \times (a,b) \to \mathbb{C}$, $- \infty \leq a < b \leq \infty$, such that
• $x \mapsto u(t,x)$ is continuous for (almost every) $t \in X$
• There exists $w \in L^1$ such that $|u(t,x)| \leq w(t)$ for all $x \in (a,b)$, $t \in X$.
Then the function $V$ defined by $$V(x) := \int_X u(t,x) \, d\mu(t), \qquad x \in (a,b)$$ is continuous.
Here, we have $X=\mathbb{R}$, $(a,b) = (-\infty,\infty)$, and $u(t,x) = e^{\imath \ t \cdot x} \hat{f}(t)$. Then $$|u(t,x)| \leq |\hat{f}(t)| =: w(t) \in L^1_{\mathbb{C}}$$ and $x \mapsto u(t,x)$ is continuous for every $t$. Therefore, the claim follows from the above theorem.
You people are smart.
1. May 11, 2005
QuantumTheory
How you learn quantum physics I don't know. The math is complicated. I want to know a really hard equation that involves quantum physics and is really hard to solve. A really confusing looking one. I don't know calculus yet, but I will someday.
2. May 11, 2005
ayalam
look up the Schrödinger equation
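For reference, the standard time-dependent form of that equation (added here for convenience; not part of the original thread):

$$i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r},t) = \left[-\frac{\hbar^2}{2m}\nabla^2 + V(\mathbf{r},t)\right]\Psi(\mathbf{r},t)$$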
3. May 11, 2005
marlon
QM is more difficult conceptually than it is mathematically. If you want to do hard-core math stuff as well, try topological field theory or string theory.
regards
marlon
4. May 11, 2005
chroot
Staff Emeritus
There's no sense in learning "a really hard equation" in vacuo, because it really won't mean anything to you. You should just work on your math skills. You don't need much beyond linear algebra and single-variable calculus to understand the bulk of QM, and most scientific or technical degree programs include those classes in the first couple of years.
- Warren
5. May 11, 2005
marlon
Warren is correct,
QM is not that hard mathematically, and your university will make sure that you have completed the necessary calculus/algebra courses before you embark on your actual QM journey... Don't worry about the math; worry about the "counter-intuitive" nature of QM. It really proves our intuition is a bad thing to follow when doing science.
regards
marlon
6. May 11, 2005
Kruger
The equations aren't the hard stuff. But you have to understand an equation not in the sense of its derivation but in the sense of its meaning: how it describes nature.
7. May 11, 2005
ArielGenesis
so where can i learn online, FREE
8. May 11, 2005
Tom Mattson
Staff Emeritus
9. May 11, 2005
dextercioby
Though it's better if you go to the library and get Morse & Feshbach. It's all one needs.
Daniel.
10. May 11, 2005
Tom Mattson
Staff Emeritus
Are you trying to say that Morse and Feshbach requires no prerequisites?
11. May 11, 2005
dextercioby
Nope, but I still think that some chapters of that book are highly useful before jumping to Schrödinger and functional analysis...
Daniel.
12. May 11, 2005
You're talking about _Methods of Theoretical Physics_? OK, but I can't find a copy of that book for less than $200.00 (!). There was a rumor that the publisher was working on a paperback set, is that still happening?

13. May 11, 2005

dextercioby

Of course i meant that book. I know it's expensive, even if it was written in 1953 (!), but borrowing from the library is supposed to be free.

Daniel.

14. May 11, 2005

juvenal

You can get Methods of Classical and Quantum Physics by Byron and Fuller for $9 used. It's a Dover publication. Good book too.
However, you need to learn calculus first.
15. May 11, 2005
dextercioby
I dunno about that one, I don't have it, but I was giving him the best there is. I'm sure that all book recommendations on methods of mathematical physics require some linear algebra & calculus as prerequisites.
Daniel.
16. May 12, 2005
ArielGenesis
wow, such great resources. thx
How do I find the horizontal pressure level that divides the atmosphere into 2 layers of equal mass?
The surface pressure given is p = 1000 hPa; the temperature throughout the atmosphere is constant (T = −10 °C) and hydrostatic equilibrium is assumed. I tried using the hypsometric equation to find the height of the atmosphere's top and then working from that result, but it's not giving me the results I expect. I expect 500 hPa because I've seen the solution, but I obtained 700-ish hPa.
• If your homework is about the mass, then try an equation that involves the mass. Two hints: What quantity would you need to know to calculate the total mass of the atmosphere? And what law governs this quantity that also involves pressure? Feb 4 '18 at 23:21
• What results do you expect? What makes you suspect the different answer? Feb 5 '18 at 5:28
• Show your logic. Feb 5 '18 at 8:32
• casey and Communisty, I'll try to put my thought process, in detail, as soon as possible, give me three days. Trully busy right now. Feb 5 '18 at 23:37
• I posted an answer, I figured it out today. I was confused about the idea that exactly half pressure divided exactly half of the atmospheric mass. A pressure level isn't the same as height level. As the atmosphere, here is assumed isothermic, using the hypsometric equation we can conclude that the height obtained for this pressure level is really low comparing to the atmosphere's top height, which proves this result to be successful. Feb 12 '18 at 17:31
Simple?
Only assumption: the hydrostatic relation $dp = -g \rho \, dz$.
Expanding the RHS yields
$dp = -g \frac{dM}{dV} dz \frac{dA}{dA} = -\frac{g}{dA} dM$
(since $dA \, dz \equiv dV$)
So any $\Delta p = c \cdot \Delta M$, which means that
• a pressure difference is proportional to a difference in mass
($c$ is effectively constant as long as the gravitational acceleration is; otherwise you'd have to integrate the hydrostatic equation.)
So if the total mass corresponds to a $\Delta p$ of x hPa, then the pressure level which separates equal parts is 1/2 of that value.
I'll answer my own question. If there is hydrostatic equilibrium, then we have

$$\frac{dp}{dz} = -\rho g$$

So, assuming that the surface pressure is $p_0$ (in this case 1000 hPa) and that the atmosphere's top has zero (0) pressure and infinite height, then integrating from the surface to the top (with $g$ constant) gives

$$p_0 = g \int_0^\infty \rho \, dz$$

So, as the atmospheric mass above a surface is the integral of $\rho \, dz$ times the area, and assuming that this area is 1 m² to simplify ($dx\,dy = 1$):

$$M = \int_0^\infty \rho \, dz = \frac{p_0}{g} \approx \frac{100000}{9.81} \approx 1.02 \times 10^4 \ \mathrm{kg}, \quad \text{i.e. } Mg = 100000 \ \mathrm{Pa}$$

If we want half of that mass M above the level, then

$$p = \frac{Mg}{2} = 50000 \ \mathrm{Pa},$$

which is indeed 500 hPa.
• Why do you need all the equations? It seems obvious that the answer is 500 hPa, just from knowing the behavior of gasses under compression. That is, the pressure at any level is equal to the weight of atmosphere above it. Perhaps more obvious when you're used to thinking of pressure as pounds per square inch (or the equivalent kilograms per square meter for metric), instead of using a named unit. Feb 13 '18 at 3:39
• While the answer is correct, @Lukas in his answer was able to get there with fewer steps. Feb 14 '18 at 10:50
• Not that getting a result in a different way is necessarily a bad thing. In fact it shows creativity and resilience :-) Hopefully you can learn and improve your methods and insight given the other answer(s), but much appreciated coming back to provide your answer, wish more would do so :-) Feb 16 '18 at 7:53
• You can accept your own answer (or any other) by clicking on the tick mark on the left side near the top.
– Pont
Feb 20 '18 at 7:29
I always found this style question interesting as a meteorology graduate teaching assistant, as there often seemed to be a variety of ways to attempt to answer it.
It's also often one of the earlier abstract type of questions in lower-level meteorology courses, and thus one I really encouraged my students to avoid resorting to help with the answer too quickly on, as exercises like these prove vital to developing persistence and creativity in trying to derive meteorological equations - and if you can't learn to fight through questions like these, you'll be lost in dynamics/thermodynamics/etc and beyond...
So if you're in a course with this type of question assigned, as much as it's painful, if haven't gotten your own good answer yet, please keep retrying and rereading on your own until you come up with something, even if only partial or uncertain. You'll learn more that way than by reading answers in the long run. And then come back after the assignment is over and learn some more here!
But, call me crazy or handwavy, but I believe you can get there even easier by recalling back to earlier equations:
$$\text{Pressure} \equiv \frac{\text{Force}}{\text{Area}}$$
Assuming we're talking atmospheric pressure, we should be talking only about the force due to the weight of air above, so $$\text{Force} = m_{above}\cdot g$$
And setting up the two levels:
$$\text{Pressure}_{(Level\,=\,Ground)} = \frac{m_{(Full\,Atmospheric\,Column)}\cdot g}{\text{Area}}$$ $$\text{Pressure}_{(Value\,Where\,1/2\,Mass)} = \frac{0.5 \cdot m_{(Full\,Atmospheric\,Column)}\cdot g}{\text{Area}}$$
I'll leave the one or two fairly basic high school algebra substitution steps for future readers needing proper completeness, but it quickly becomes the simple:
$${\text{Pressure}_{(Value\,Where\,1/2\,Mass)}} \ = \ 0.5 \cdot {\text{Pressure}_{(Level\,=\,Ground)}}$$
$$\text{So} \ 0.5 \cdot 1000 \ {\text{mb}} = \fbox{500 mb}$$
Naively the only place I can really see anyone even trying to take issue is in defining atmospheric pressure as simply the weight of air above it, overlooking things like vertical perturbation pressures. But of course those are just that, perturbations, and thus not a part of the mean atmospheric pressure. But perhaps I'm wrong somewhere along the way???
Pressure in a fluid is, literally, the weight of the fluid above the point of measure, per unit area, so if you have half the amount of air (or water, for the matter) above you, then you will measure exactly half the amount of pressure.
So the answer is 500 hPa.
That is valid under the standard hydrostatic approach (no turbulence forces; no dynamic forces).
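As a quick numerical cross-check of this conclusion (my own sketch, not part of the thread; the constants and grid resolution are arbitrary choices), one can integrate an isothermal hydrostatic column and locate the level that splits the column mass in half:

```python
import numpy as np

g, Rd, T, p0 = 9.81, 287.0, 263.15, 1.0e5   # SI units; T = -10 C
H = Rd * T / g                              # scale height, roughly 7.7 km

z = np.linspace(0.0, 60.0e3, 600001)        # 0-60 km in 0.1 m steps
rho = p0 * np.exp(-z / H) / (Rd * T)        # isothermal density profile

mass = np.cumsum(rho) * (z[1] - z[0])       # column mass below each level, kg/m^2
total = p0 / g                              # total column mass, kg/m^2

i = np.searchsorted(mass, total / 2.0)      # level splitting the mass in half
p_half = p0 * np.exp(-z[i] / H)
print(p_half / 100.0)                       # ~500 hPa
```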
### An optimal algorithm for intersecting line segments in the plane

Access Restriction: Subscribed

Author: Chazelle, Bernard; Edelsbrunner, Herbert
Source: ACM Digital Library
Content type: Text
Publisher: Association for Computing Machinery (ACM)
File Format: PDF
Copyright Year: ©1992
Language: English
Subject Domain (in DDC): Computer science, information & general works; Data processing & computer science

Abstract: The main contribution of this work is an O(n log n + k)-time algorithm for computing all k intersections among n line segments in the plane. This time complexity is easily shown to be optimal. Within the same asymptotic cost, our algorithm can also construct the subdivision of the plane defined by the segments and compute which segment (if any) lies right above (or below) each intersection and each endpoint. The algorithm has been implemented and performs very well. The storage requirement is on the order of n + k in the worst case, but it is considerably lower in practice. To analyze the complexity of the algorithm, an amortization argument based on a new combinatorial theorem on line arrangements is used.

ISSN: 00045411
e-ISSN: 1557735X
Journal: Journal of the ACM (JACM), Volume 39, Issue 1, Pages 1–54
Publisher Date: 1992-01-02
Publisher Place: New York
Age Range: 18 to 22 years; above 22 years
Educational Use: Research
Education Level: UG and PG
Learning Resource Type: Article
# A review of Taguette – an open source alternative for qualitative data coding
## Motivation and context
As you might know, I’m currently undertaking a PhD program at Australian National University’s School of Cybernetics, looking at voice dataset documentation practices, and what we might be able to improve about them to reduce statistical and experienced bias in voice technologies like speech recognition and wake words. As part of this journey, I’ve learned an array of new research methods – surveys, interviews, ethics approaches, literature review and so on. I’m now embarking on some early qualitative data analysis.
The default tool in the qualitative data analysis space is NVIVO, made by Melbourne-based company QSR. However, NVIVO has both a steep learning curve and a hefty price tag. I'm lucky enough that this pricing is abstracted away from me – ANU provides NVIVO for free to HDR students and staff – but reports suggest that the enterprise licensing starts at around USD 85 per user. NVIVO operates predominantly as a desktop-based piece of software and is only available for Mac or Windows. My preferred operating system is Linux – which is what my academic writing toolchain, based on LaTeX, Atom and Pandoc, runs on – and I wanted to see if there was a tool with equivalent functionality that aligned with this toolchain.

## About Taguette

Taguette is a BSD-3 licensed qualitative coding tool, positioned as an alternative to NVIVO. It's written by a small team of library specialists and software developers, based in New York. The developers are very clear about their motivation in creating Taguette:

"Qualitative methods generate rich, detailed research materials that leave individuals' perspectives intact as well as provide multiple contexts for understanding the phenomenon under study. Qualitative methods are used in a wide range of fields, such as anthropology, education, nursing, psychology, sociology, and marketing. Qualitative data has a similarly wide range: observations, interviews, documents, audiovisual materials, and more. However – the software options for qualitative researchers are either far too expensive, don't allow for the seminal method of highlighting and tagging materials, or actually perform quantitative analysis, just on text. It's not right or fair that qualitative researchers without massive research funds cannot afford the basic software to do their research. So, to bolster a fair and equitable entry into qualitative methods, we've made Taguette!"

– Taguette.org website, "About" page

This motivation spoke to me, and aligned with my own interest in free and open source software.

## Running Taguette and identifying its limitations

For reproducibility, I ran Taguette version 1.1.1 on Ubuntu 20.04 LTS with Python 3.8.10.

Taguette can be run in the cloud, and the website provides a demo server so that you can explore the cloud offering. However, I was more interested in the locally-hosted option, which runs on a combination of Python, Calibre, and I believe SQLite as the database backend, with SQLAlchemy for mappings. The install instructions recommend running Taguette in a virtual environment, and this worked well for me – presumably running the binary from the command line spawns a Flask- or Gunicorn-type web application, which you can then access in your browser. This locally hosted feature was super helpful for me, as my ethics protocol has restrictions on what cloud services I could use.

To try Taguette, I first created a project, then uploaded a Word document in docx format, and began highlighting. This was smooth and seamless. However, I soon ran into my first limitation. My coding approach is to use nested codes. Taguette has no functionality for nested codes, and no concomitant functionality for "rolling up" nested codes. This was a major blocker for me. However, I was impressed that I could add tags in multiple languages, including non-Latin orthographies, such as Japanese and Arabic.
Presumably, although I didn't check this, Taguette uses Unicode under the hood – so it's foreseeable that you could use emojis as tags as well, which might be useful for researchers of social media.

Taguette has no statistical analysis tools built in, such as word frequency distributions, clustering or other corpus-type methods. While these weren't as important for me at this stage of my research, they are functions that I envisage using in the future.

Taguette's CodeBook export and import functions work really well, and I was impressed with the range of formats that could be imported or exported.

## What I would like Taguette to do in the future

I really need nested tags that have aggregation functionality for Taguette to be a viable software tool for my qualitative data analysis – this is a high priority feature, followed by statistical analysis tools.

## Some thoughts on the broader academic software ecosystem

Even though I won't be adopting Taguette, I admire and respect the vision it has – to free qualitative researchers from being anchored to expensive, limiting tools. While I'm fortunate enough to be afforded an NVIVO license, many smaller, less wealthy or less research-intensive universities will struggle to provide a license seat for all qualitative researchers.

This is another manifestation of universities becoming increasingly beholden to large software manufacturers, rather than having in-house capabilities to produce and manage software that directly adds value to a university's core capability of generating new knowledge. We've seen it in academic journals – with companies like EBSCO, Sage and Elsevier intermediating the publication of journals, hoarding copyrights to articles and collecting a tidy profit in the process – and we're increasingly seeing it in academic software. Learning Management Systems such as Desire2Learn and Blackboard are now prohibitively expensive, while open source alternatives such as Moodle still require skilled (and therefore expensive) staff to be maintained and integrated – a challenge when universities are shedding staff in the post-COVID era.

Moreover, tools like NVIVO are imbricated in other structures which reinforce their dominance. University HDR training courses and resource guides are devoted to software tools which are in common use. Additionally, supervisors and senior academics are likely to use the dominant software, and so are in an influential position to recommend its use to their students. This support infrastructure reinforces their dominance by ascribing them a special, or reified, status within the institution.

At a broader level, even though open source has become a dominant business model, the advocacy behind free and open source software (FOSS) appears to be waning; open source is now the mainstream, and it no longer requires a rebel army of misfits, nerds and outliers (myself included) to be its flag-bearers. This begs the question – who advocates for FOSS within the academy? And more importantly – what influence do they have compared with a slick marketing and sales effort from a global multinational? I'm reminded here of Eben Moglen's wise words at linux.conf.au 2015 in Auckland, in the context of opposing patent trolls through collective efforts – "freedom itself depends upon how we make use of the technologies we are creating". That is, universities themselves have created the dependence on academic technologies which now restrict them.

There is hope, however.
Platforms like arXiv – the free distribution service and open access archive for nearly two million pre-prints in mathematics, computer science and other (primarily quantitative) fields – are starting to challenge the status quo. For example, the Australian Research Council recently overturned their prohibition on the citation of pre-prints in competitive grant applications. Imagine if universities combined their resources – like they have done with arXiv – to provide an open source qualitative coding tool, locally hosted and accessible to everyone.

In the words of Freire, "Reading is not walking on the words; it's grasping the soul of them."

Paulo Freire, Pedagogy of the Oppressed

Qualitative analysis tools allow us to grasp the soul of the artefacts we create through research; and that ability should be afforded to everyone – not just those that can afford it.

# State of my toolchain 2021

I've been doing a summary of the state of my toolchain for around five years now (2019, 2018, 2016). Tools, platforms and techniques evolve over time; the type of work that I do has shifted; and the environment in which that work is done has changed due to the global pandemic. Documenting my toolchain has been a useful exercise on a number of fronts; it's made explicit what I actually use day-to-day and, equally, what I don't. In an era of subscription-based software, this has allowed me to make informed decisions about what to drop – such as Pomodone. It's also helped me to identify niggles or gaps in my existing toolchain, and to deliberately search for better alternatives.

## At a glance

### Hardware, wearables and accessories

### Software

### Techniques

• Pomodoro (no change since last report)
• Passion Planner for planning (no change since last report)

## What's changed since the last report?

### Writing workflow

Since the last report in 2019, I've graduated from a Masters in Applied Cybernetics at the School of Cybernetics at Australian National University. I was accepted into the first cohort of their PhD program. This shift has meant an increased focus on in-depth, academic-style writing. To help with this, I've moved to a Pandoc, Atom, Zotero and LaTeX-based workflow, which has been documented separately. This workflow has been working solidly for me for about a year. Although it took about a weekend's worth of setup time, it's definitely saving me a lot of time. Atom in particular is my predominant IDE, and also my key writing tool. I use it with a swathe of plugins for LaTeX, document structure, and Zotero-based academic citations. It took me a while to settle on a UI and syntax theme for Atom, but in the end I went with Atom Solarized. My strong preference is to write in Markdown, and then export to a target format such as PDF or LaTeX. Pandoc handles this beautifully, but I do have to keep a file of command line snippets handy for advanced functionality.

### Primary machine

I had an ASUS Zenbook UX533FD – small, portable, and great battery life, even with an MX150 GPU running. Unfortunately, the keyboard started to malfunction just over a year after purchase (I know, right). I gave up trying to get it repaired because I had to keep chasing my local repair shop for updates on getting a replacement. I lodged a repair request in October, and it's now May, so I'm not holding out hope…

That necessitated getting a new machine – and it was a case of getting whatever was available during the coronavirus pandemic. I settled on an ASUS ROG Zephyrus G15 GA502IV.
I was a little cautious, having never had an AMD Ryzen-based machine before, but I haven't looked back. It has 16 Ryzen 4900 cores, and an NVIDIA GeForce RTX 2060 with 6GB of RAM. It's a powerful workhorse and is reasonably portable, if a little noisy. It gets about 3 hours' battery life in class. Getting NVIDIA dependencies installed under Ubuntu 20.04 LTS was a little tricky – especially cudnn – but that seems to be normal for anything NVIDIA under Linux. Because the hardware was so new, it lacked support in the 20.04 kernel, so I had to pull in experimental Wi-Fi drivers (it uses Realtek). To be honest, I was somewhat smug that my hardware was ahead of the kernel.

One little niggle I still have is that the machine occasionally green-screens. This has been reported with other ROG models and I suspect it's an HDMI-under-Linux driver issue, but I haven't gone digging too far into driver diagnostics. Yet.

One idiosyncrasy of the Zephyrus G15 is that it doesn't have a built-in web camera; for me, that was a feature. I get to choose when I do and don't connect the web camera. And yes – I'm firmly in the web-cameras-shouldn't-have-to-be-on-by-default camp.

### Machine learning work, NVIDIA dependencies and utilities

Over the past 18 months, I've been doing a lot more work with machine learning, specifically in building the DeepSpeech PlayBook. Creating the PlayBook has meant training a lot of speech recognition models in order to document hyperparameters and tacit knowledge around DeepSpeech. In particular, the DeepSpeech PlayBook uses a Docker image to abstract away Python, TensorFlow and other dependencies. However, this still requires all NVIDIA dependencies, such as drivers and cudnn, to be installed beforehand. NVIDIA has made this somewhat easier with the Linux CUDA installation guide, which advises on which version to install with other dependencies, but it's still tough to get all the dependencies installed correctly. In particular, the nvtop utility – which is super handy for monitoring GPU operations, such as identifying blocking I/O or other bottlenecks – had to be compiled from source. As an aside, the developer experience of getting NVIDIA dependencies installed under Linux is a major hurdle for developers. It's something I want NVIDIA to put some effort into going forward.

### Colour customisation of the terminal with Gogh

I use Ubuntu Linux for 99% of my work now – and rarely boot into Windows. A lot of that work is based in the Linux terminal: spinning up Docker containers for machine learning training, running Python scripts, or even pandoc builds. At any given time I might have 5-6 open terminals, and so I needed a way to easily distinguish between them. Enter Gogh – an easy-to-install set of terminal profiles. One bugbear that I still have with the Ubuntu 20.04 terminal is that the fonts that can be used with terminal profiles are restricted to mono-spaced fonts only. I haven't been able to find where to alter this setting – or how the terminal identifies which fonts are mono-spaced for inclusion. If you know how to alter this, let me know!

### Linux variants of Microsoft software intended for Windows

ANU has adopted Microsoft primarily for communications. This means not only Outlook for mail – for which there are no good Linux alternatives (and so I use the web version) – but also the use of Teams and OneNote. I managed to find an excellent alternative in OneNote for Linux by @patrikx3, which is much more usable than the web version of OneNote.
Teams on Linux is usable for messaging, but for videoconferencing I've found that I can't use USB or Bluetooth headphones or microphones – which essentially renders it useless. Zoom is much better on Linux.

### Better microphone for videoconferencing and conference presentations

As we've travelled through the pandemic, we're all using a lot more videoconferencing instead of face-to-face meetings, and the majority of conferences have gone online. I've recently presented at both PyCon AU 2020 and linux.conf.au 2021 on voice and speech recognition. Both conferences used the VenueLess platform. I decided to upgrade my microphone for better audio quality – after all, research has shown that speakers with better audio are perceived as more trustworthy. I've been very happy with the Stadium USB microphone.

### Taskwarrior over Pomodone for tasks

I tried Pomodone for about 6 months – and it was great for integrating tasks from multiple sources such as Trello, GitHub and GitLab. However, I found it very expensive (around AUD 80 per year) and the Linux version suddenly stopped working. The scripting options also only support Windows and Apple, not Linux. So I didn't renew my subscription.
Instead, I’ve moved to Taskwarrior via Paul Fenwick‘s recommendation. This has some downsides – it’s a command line utility rather than a graphical interface, and it only works on a single machine. But it’s free, and it does what I need – prioritises the tasks that I need to complete.
## What hasn’t changed
### Wearables and hearables
My Mobvoi TicWatch Pro is still going strong, and Google appears to be giving Wear OS some love. It’s the longest I’ve had a smart watch, and given how rugged and hardy the TicWatch has been, it will definitely be my first choice when this one reaches end of life. My Plantronics BB Pro 2 are still going strong, and I got another pair on sale as my first pair are now four years old and the battery is starting to degrade.
### Quantified self
I’ve started using Sleep as Android for sleep tracking, which uses data from the TicWatch. This has been super handy for assessing the quality of sleep, and making changes such as adjusting going-to-bed times. Sleep as Android exports data to Google Drive. BeeMinder ingests that data into a goal, and keeps me accountable for getting enough sleep.
RescueTime, BeeMinder and Passion Planner are still going strong, and I don’t think I’ll be moving away from them anytime soon.
### Assistant services
I still refuse to use Amazon Alexa or Google Home – and they wouldn't work with the 5GHz-band WiFi where I am living on campus. Mycroft.AI is still my go-to for a voice assistant, but I rarely use it now because the Spotify app support for Mycroft doesn't work anymore, after Spotify blocked Mycroft from using the Spotify API.
One desktop utility that fits into the “assistant” space that I’ve found super helpful has been GNOME extensions. I use extensions for weather, peripheral selection and random desktop background selection. Being able to see easily during Australian summer how hot it is outside has been super handy.
## Current gaps in my toolchain
I don’t really have any major gaps in my toolchain at the moment, but there are some things that could be better.
• Visual Git editor – I've been using command-line Git for years now, but having a visual indicator of branches and merges is useful. I tried GitKraken, but I don't use Git enough to justify the monthly-in-$USD price tag. The Git plugin for Atom is good enough for now.

• Managing everything for me – I looked at Huginn a while back and it sounds really promising as a "second brain" – for monitoring news sites, Twitter and so on – but I haven't had time to have a good play with it yet.

# Setting up an academic writing workflow using Pandoc, Markdown, Zotero and LaTeX on Ubuntu Linux using Atom

This year, I started a PhD at the 3A Institute, within the College of Engineering and Computer Science at Australian National University. I came into the PhD not from a researcher or academic background, but from a career in industry as a technology practitioner. As such, my experience with formatting academic papers, for example for publication in journals, is limited. Our PhD program is hands-on; as well as identifying a specific research topic and articulating a research design, we undertake writing activities – both as academic writing practice, and to help solidify the theoretical concepts we've been discussing in Seminar. I needed an academic writing workflow.

For one of these writing exercises, I decided to build out a toolchain using LaTeX – the preferred typesetting tool for academic papers. This blog post documents how I approached this, and serves as a reference both for myself and others who might want to adopt a similar toolchain. In summary, the process was:

• Define overall goals
• Install dependencies such as pandoc, Zotero (with the BetterBibtex extension for citations) and LaTeX
• Experiment with pandoc on the command line for generating PDF from Markdown via LaTeX

## Goals of an academic writing workflow

In approaching this exercise, I had several key goals:

• create an academic writing workflow using my preferred Atom IDE on Linux;
• that could easily be set up and replicated on other machines if needed;
• which would allow me to use my preferred citation editor, Zotero;
• and which would allow me to use my preferred writing format, Markdown;
• and which would support available LaTeX templates for journals (and from the ANU for formatting theses).

### Why not use Overleaf?

For those who've worked with LaTeX before, one of the key questions you might have here is "why not just use a platform like Overleaf for your academic writing workflow and not worry about setting up a local environment?". Overleaf is a cloud LaTeX service that provides a collaborative editing environment and a range of LaTeX templates. However, it's not free – plans range from $USD 12 to 35 per month. Overleaf is based on free and open source software, and then adds a proprietary collaboration layer over the top – it abstracts away complexity – and this is what you pay a monthly fee for.
In principle, I have no issue with cloud platforms adding value to FOSS and then charging for that value, but I doubt any of the profits from Overleaf are going to folks like John MacFarlane – the writer of pandoc – or Donald E. Knuth and Leslie Lamport, who contributed the early work on TeX and LaTeX respectively. I felt like I owed it to these folx to "dig a little under the hood" and learn some of that complexity instead of outsourcing it away.
So, to the process …
## Installing pandoc
Pandoc is a free and open source tool for converting documents between different formats. It’s widely used in academia, and used extensively in publishing and writing workflows. Pandoc is written by John MacFarlane, who is a philosophy professor at UC Berkeley.
One of the other things that John is less known for is Lucida Navajo, a font for the Navajo language, a Native American language of the Southern Athabascan family. Although Navajo orthography is based on the Latin script, it contains a number of diacritical marks not found in other Latin-script languages.
Pandoc is available for all platforms, but because my goal here was to develop a workflow on Ubuntu Linux, I’ve only shown installation instructions for that platform.
To install pandoc, use the following command:
$ sudo apt install pandoc

I also had to make sure that the pandoc-citeproc tool was installed; it's not installed by default as part of pandoc itself. Again, this was a simple command:

$ sudo apt install pandoc-citeproc
## Installing Zotero and the BetterBibtex extension
My next challenge was to figure out how to do citations with pandoc. In the pandoc documentation, there is a whole section on citations, and this provided some pointers. This blog post from Chris Krycho was also useful in figuring out what to use.
This involved installing the BetterBibtex extension for Zotero (which I already had installed). You can find installation instructions for the BetterBibtex extension here. It has to be downloaded as a file and then added through Zotero (not through Firefox).
BibTeX is a citation standard, supported by most academic journals and referencing tools. Google Scholar exports to BibTeX.
Once installed, BetterBibtex updates each Zotero reference with a citation key that can then be used as an "at-reference" in pandoc – i.e. @strengersSmartWifeWhy2020. Updating citation keys can take a few minutes – I have several thousand references stored in Zotero and it took about 6 minutes to ensure that each reference had a BibTeX citation key.
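As an aside, here's what using such a key looks like in practice – this example sentence is mine, not from the original post. In writing.md you'd use pandoc's bracketed citation syntax:

The smart wife is a growing research concern [@strengersSmartWifeWhy2020].

When pandoc runs with citation processing enabled, the at-reference is replaced with a formatted citation, and a matching entry is added to the bibliography.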
In order to use BetterBibtex, I had to make sure that the export format for Zotero was set to Better BibTeX, and that Zotero updated on change.
## Installing LaTeX
Next, I needed to install LaTeX for Linux. This was installed via the texlive package:
$ sudo apt install texlive

Based on this Stack Overflow error message I got early on in testing, I also installed the texlive-latex-extra package.

$ sudo apt-get install texlive-latex-extra
## Zotero citations package for Atom
Next, I needed to configure the Atom IDE to work with Zotero and BetterBibtex. This involved installing several plugins.
Once these were installed, I was ready to start experimenting with a workflow.
## Experimenting with a workflow
To start with, I used a basic workflow to go from a Markdown-formatted text file to PDF. pandoc converts this to LaTeX as an interim step, and then to PDF, using the inbuilt templates from pandoc. This first step was a very basic attempt to go from Markdown to PDF, and was designed to "shake out" any issues with software installation.
The pandoc command line options I used to start with were:
$ pandoc -f markdown \
  writing.md \
  -o writing.pdf
In this example, I’m telling pandoc to expect markdown-styled input, to use the file writing.md as the input file and to write to the output file writing.pdf. pandoc infers that the output file is PDF-formatted from the .pdf extension.
Next, I wanted to include citations. First, I exported my Zotero citations to a .bib-formatted file, using the BetterBibtex extension. I stored this in a directory called bibliography in the same directory as my writing.md file. The command line options I used here were:
$ pandoc -f markdown \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md \
  -o writing.pdf

Note here the two additional options – the --filter option used to invoke pandoc-citeproc, and the --bibliography option to include a BibTeX-formatted file. This worked well, and generated a plainly formatted PDF (based on the pandoc default format).

### Using a yaml file for metadata

Becoming more advanced with pandoc, I decided to experiment with including a yaml file to help generate the document. The yaml file can specify metadata such as author, date and so on, which can then be substituted into the PDF file – if the intermediary LaTeX template accommodates these values. The basic LaTeX template included with pandoc includes values for author, title, date and abstract. Here's the yaml file I used for this example:

---
author: Kathy Reid
title: blog post on pandoc
date: December 2020
abstract: |
  This is the abstract.
...

Note that the yaml file must start with three dashes --- and end with three periods ...

The pandoc command line options I used to include metadata were:

$ pandoc -f markdown+yaml_metadata_block \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf
Note here that in the -f switch an additional option for yaml_metadata_block is given, and that the yaml file is listed after the first input file, writing.md. By adding the metadata.yml file in the command line, pandoc considers them both to be input files.
By using the yaml file, this automatically appended author, title, date and abstract information to the resulting PDF.
I also found that I could control the margins and paper size of the resulting PDF file by controlling these in the yaml file.
---
author: Kathy Reid
title: blog post on pandoc
date: December 2020
abstract: |
  This is the abstract.
fontsize: 12pt
papersize: a4
margin-top: 25mm
margin-right: 25mm
margin-bottom: 25mm
margin-left: 25mm
...
It took a little while to get the hang of working with yaml, and I wanted a way to be able to inspect the output of the pandoc process. To do this, I added a switch to the command line option, and also piped the output of the command to a logging file.
$ pandoc -f markdown+yaml_metadata_block \
  --verbose \
  --filter=pandoc-citeproc \
  --bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf > pandoc-log.txt

The --verbose switch tells pandoc to use verbose logging, and the logging is piped to pandoc-log.txt. If the output wasn't piped to a file, it would appear on the screen as stdout, and because it's verbose, it's hard to read – it's much easier to pipe it to a file and inspect it.

### Working with other LaTeX templates

Now that I had a Markdown-to-PDF-via-LaTeX workflow working reasonably well, it was time to experiment with other templates. Many publications provide a LaTeX template for submission, such as these from the ACM, and ideally I wanted to be able to go from Markdown to a journal template using pandoc. I'd come across other blog posts where similar goals had been attempted, but this proved significantly harder to implement than I'd anticipated. My first attempt entailed trying to replicate the work Daniel Graziotin had done here – but I ran into several issues. After copying over the table-filter.py file from the blog post and an ACM .cls file, copying the ACM pdf file to default.pdf in my pandoc-data directory, and running the below command, I got the following error.

$ pandoc -f markdown+yaml_metadata_block \
--verbose \
--data-dir=pandoc-data \
--variable documentclass=acmart \
--variable classname=acmlarge \
--filter=pandoc-citeproc \
--filter=table-filter.py \
--bibliography=bibliography/blog-post-citations.bib \
  writing.md metadata.yml \
  -o writing.pdf
Error running filter table-filter.py:
Could not find executable python
My first thought was that python somehow was aliased to an older version of python – ie python 2. To verify this I ran:
$ which python

This didn't return anything, which explained the error. From this, I assumed that pandoc was expecting that the python alias resolved to the current python. I didn't know how to change this – for example, changing the pandoc preferences to point to the right python binary. Instead, I created a symlink so that pandoc could find python3.

$ pwd
/usr/bin
$ ls | grep python
python3
python3.8
python3.8-config
python3-config
python3-futurize
python3-pasteurize
x86_64-linux-gnu-python3.8-config
x86_64-linux-gnu-python3-config
$ sudo ln -s python3 python
### Installing pandocfilters package for Python
I attempted to run the pandoc command again but ran into another error.
Traceback (most recent call last):
File "table-filter.py", line 6, in <module>
import pandocfilters as pf
ModuleNotFoundError: No module named 'pandocfilters'
Error running filter table-filter.py:
Filter returned error status 1
My guess here was that the python module pandocfilters had not been installed via pip. I installed this through pip.
$ pip3 install pandocfilters
Collecting pandocfilters
  Downloading pandocfilters-1.4.3.tar.gz (16 kB)
Building wheels for collected packages: pandocfilters
  Building wheel for pandocfilters (setup.py) ... done
  Created wheel for pandocfilters: filename=pandocfilters-1.4.3-py3-none-any.whl size=7991 sha256=3c4445092ee0c8b00e2eab814ad69ca91d691d2567c12adbc4bcc4fb82928701
  Stored in directory: /home/kathyreid/.cache/pip/wheels/fc/39/52/8d6f3cec1cca4ceb44d658427c35711b19d89dbc4914af657f
Successfully built pandocfilters
Installing collected packages: pandocfilters
Successfully installed pandocfilters-1.4.3

This again swapped one error for another.

Error producing PDF.
! LaTeX Error: Missing \begin{document}.

See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
...
l.55 u

Luckily, I had set --verbose, and piped to a log file. I went digging through the log file to see if I could find anything useful.

Class acmart Warning: You do not have the libertine package installed. Please upgrade your TeX on input line 669.
Class acmart Warning: You do not have the zi4 package installed. Please upgrade your TeX on input line 672.
Class acmart Warning: You do not have the newtxmath package installed. Please upgrade your TeX on input line 675.

After reading through this Stack Overflow article, these all looked like font issues. I used the solution given in the Stack Overflow article, which was to install another Ubuntu package:

$ sudo apt-get install texlive-fonts-extra
Again, this swapped one error message for another.
Error producing PDF.
! LaTeX Error: Command `\Bbbk' already defined.

See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
...
l.261 ...ol{\Bbbk} {\mathord}{AMSb}{"7C}

! ==> Fatal error occurred, no output PDF file produced!
Transcript written on ./tex2pdf.-bf3aef739e05d883/input.log.
Another Google search, another solution from Stack Overflow, which suggested removing the LaTeX line \usepackage{amssymb} from the template. The challenge was, I wasn't sure where the LaTeX template for pandoc was stored. Reading through this Stack Overflow post, it looked like the default LaTeX template is stored at:
~/.pandoc/templates
But on my system, this directory didn't exist. I created it, and used a command to store a default LaTeX file in there (sketched below). Then I was able to remove the line:
\usepackage{amssymb,amsmath}
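The original post doesn't record the exact command used to store the default template; pandoc can print its built-in template, so something like the following would do it (the output path is my assumption, based on the template directory above):

$ pandoc -D latex > ~/.pandoc/templates/default.latex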
This then resulted in yet another error which required Googling.
! LaTeX Error: Missing \begin{document}.

See the LaTeX manual or LaTeX Companion for explanation.
Type H for immediate help.
...
l.54 u
! ==> Fatal error occurred, no output PDF file produced!
Looking through the log file, this was generated by the parskip.sty package. More Googling revealed another incompatibility with the ACM class. I removed the parskip package from the LaTeX default template, but was continually met with similar errors.
On the plus side though, I learned a lot about pandoc, how to reference with it, and how to produce simple LaTeX documents with this workflow.
# Math Help - Complex number trig question.
1. ## Complex number trig question.
Show that
$\frac{1+\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta}=\sin\theta+i\cos\theta$
and hence that
$(1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5})^5 + i(1+\sin\frac{\pi}{5}-i\cos\frac{\pi}{5})^5 = 0$
I have done the first part and am going round in circles trying to show the second part. I was trying to make use of de Moivre's theorem, but to no avail. Please help.
2. EDIT: Oops, I misread and thought you were saying you were going in circles with the first part. Sorry
3. Like I said, I have done the first part, just wasn't able to do the second, and would like help for that part, thanks.
4. Hello,
Trying to solve the second problem, I made a few manipulations, but I've seemed to arrive at a contradiction.
$(1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5})^5+i(1+\sin\frac{\pi}{5}-i\cos\frac{\pi}{5})^5=0$
Using $i^5=i$
$(1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5})^5+(i+i\sin\frac{\pi}{5}+\cos\frac{\pi}{5})^5=0$
$(1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5})^5=-(i+i\sin\frac{\pi}{5}+\cos\frac{\pi}{5})^5$
Using $\sqrt[5]{-1}=-1$
$1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5}=-(i+i\sin\frac{\pi}{5}+\cos\frac{\pi}{5})$
$1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5}+i+i\sin\frac{\pi}{5}+\cos\frac{\pi}{5}=0$
$1+\sin\frac{\pi}{5}+\cos\frac{\pi}{5} +i(1+\sin\frac{\pi}{5}+\cos\frac{\pi}{5})=0$
which only holds if the real and imaginary parts are both 0. But $1+\sin\frac{\pi}{5}+\cos\frac{\pi}{5}$ is not, which is weird. My calculator says that I'm wrong.
5. Originally Posted by Dfrtbx
Hello,
Trying to solve the second problem, I made a few manipulations, but I've seemed to arrive at a contradiction.
$(1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5})^5+i(1+\sin\frac{\pi}{5}-i\cos\frac{\pi}{5})^5=0$
Using $i^5=i$
$(1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5})^5+(i+i\sin\frac{\pi}{5}+\cos\frac{\pi}{5})^5=0$
$(1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5})^5=-(i+i\sin\frac{\pi}{5}+\cos\frac{\pi}{5})^5$
Using $\sqrt[5]{-1}=-1$
$1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5}=-(i+i\sin\frac{\pi}{5}+\cos\frac{\pi}{5})$ The contradiction comes here, where you are taking the fifth root of both sides of an equation. A complex number has five fifth roots, and you can't assume that you are getting the same one on both sides of the equation. To take a much simpler example, (–1)^2 = 1^2, but you can't take the square root of both sides and conclude that –1 = 1.
$1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5}+i+i\sin\frac{\pi}{5}+\cos\frac{\pi}{5}=0$
$1+\sin\frac{\pi}{5}+\cos\frac{\pi}{5} +i(1+\sin\frac{\pi}{5}+\cos\frac{\pi}{5})=0$
which only holds if the real and imaginary parts are both 0. But $1+\sin\frac{\pi}{5}+\cos\frac{\pi}{5}$ is not, which is weird. My calculator says that I'm wrong.
To solve the second problem, notice that $\sin\theta+i\cos\theta = i(\cos\theta-i\sin\theta) = i(\cos(-\theta) + i\sin(-\theta)).$ So $\frac{1+\sin\theta+i\cos\theta}{1+\sin\theta-i\cos\theta}=i(\cos(-\theta) + i\sin(-\theta)).$ Now put $\theta = \pi/5$, take the fifth power of both sides and use de Moivre's theorem.
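To spell out the remaining step – this working is mine, not part of the original thread: with $\theta = \pi/5$, the identity from the first part plus de Moivre's theorem give

$\left(\frac{1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5}}{1+\sin\frac{\pi}{5}-i\cos\frac{\pi}{5}}\right)^5 = i^5\left(\cos(-\tfrac{\pi}{5})+i\sin(-\tfrac{\pi}{5})\right)^5 = i\left(\cos(-\pi)+i\sin(-\pi)\right) = -i$

so $(1+\sin\frac{\pi}{5}+i\cos\frac{\pi}{5})^5 = -i(1+\sin\frac{\pi}{5}-i\cos\frac{\pi}{5})^5$, and adding $i(1+\sin\frac{\pi}{5}-i\cos\frac{\pi}{5})^5$ to both sides gives the required result.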
# Ranking recent information
I was happy to get the news that our paper, Estimation methods for ranking recent information, (co-authored with Gene Golovchinsky) was accepted for presentation at SIGIR 2011. I’ll let the paper speak for itself. But to prod people to read it, here are some of the motivations and findings.
Often a query expresses an information need where recency is a crucial dimension of relevance. The goal of the paper was to formulate approaches to incorporating time into ad hoc IR when we have evidence that this is the case. For example, a web query like champaign tornado had a strong temporal dimension during our crazy weather a few nights back. This is in contrast to a query such as steampunk photo. Though so-called recency queries show up in many IR domains, they are especially important in the context of microblog search, as discussed nicely here.
Of course handling recency queries (and other time-sensitive queries) is a well-studied problem. Articles in this area are too numerous to name here. But one of the canonical approaches was formulated by Li and Croft. In their work, Li and Croft use time to inform a document prior in the standard query likelihood model:
$Pr(d | q) \propto Pr(q | d) Pr(d | t_d)$
where for a time-stamped document d, Pr(d | t_d) follows an exponential distribution with rate parameter λ – newer documents have a higher prior probability. This approach is elegant, and it has been shown to work well for recency queries.
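Concretely – the formula below is my paraphrase of Li and Croft's prior rather than a quotation – the prior takes the form

$Pr(d | t_d) = \lambda e^{-\lambda (t_C - t_d)}$

where $t_C$ is the most recent time in the collection, so that a document's prior probability decays exponentially with its age.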
The problem, however, is how well such an approach works if we apply it to queries that aren’t concerned with recency. We found that using a time-based prior as shown above leads to decreased effectiveness on non-recency queries (which isn’t surprising). Of course we could mitigate this by classifying queries with respect to their temporal concerns. However, this strikes me as a hard problem.
Instead, we propose several methods of incorporating recency into retrieval that allow time to influence ranking, but that degrade gracefully if the query shows little evidence of temporal concern. Additionally, the approaches we outline show less sensitivity to parameterization than we see in previous work.
The paper introduces a number of strategies. But the one that I find most interesting uses time to guide smoothing in the language modeling framework. To keep things simple, we limit analysis to Jelinek-Mercer smoothing, such that the smoothed estimate of the probability of a word w given a document d and the collection C is given by
$\hat{Pr}(w|d) = (1-\lambda_t)\,\hat{Pr}_{ML}(w|d) + \lambda_t\,\hat{Pr}(w|C)$
where $\lambda_t$ is a smoothing parameter that is estimated based on the "age" of the document. The intuition is that we might plausibly trust the word frequencies that drive a document model's maximum likelihood estimator less for older documents than we do for recent documents, insofar as an author might choose to phrase things differently were he to re-write an old text today.
The main work of the paper lies in establishing methods of parameterizing models for promoting recency in retrieved documents. Whether we’re looking at the rate parameter of an exponential distribution, the parameter for JM smoothing, or the mixture parameter for combining query terms with expansion terms, we take the view that we’re dealing with an estimation problem, and we propose treating the problem by finding the maximum a posteriori estimate based on temporal characteristics.
Dealing with recency queries comprises only a slice of the more general matter of time-sensitive retrieval. A great deal of recent work has shown (and continues to show) the complexity of dealing with time in IR, as well as ingenuity in the face of this complexity. It’s exciting to have a seat at this table.
# Snowball sampling for Twitter Research
By way of shameless promotion, I am currently encouraging people to help me evaluate an experimental IR system that searches microblog (Twitter) data. To participate, please see:
http://tacoma.lis.illinois.edu:8080/sparrow
Please consider giving it a moment…even a brief moment.
Now, onto a more substantive matter: I’ve been wrestling with the validity of testing an IR system (particularly a microblog IR system) using a so-called snowball sampling technique. For the uninitiated, snowball sampling involves recruiting a small number of people to participate in a study with the explicit aim that they will, in turn, encourage others to participate. The hope is that participation in the study will extend beyond the narrow initial sample as subjects recruit new participants.
Snowball sampling has clear drawbacks. Most obviously, it is ripe for introducing bias into one’s analysis. The initial “seed” participants will drive the demographics of subsequent recruits. This effect could amplify any initial bias. The non-random (assuming it is non-random) selection of initial participants, and their non-random selection of recruits calls into question the application of standard inferential statistics at the end of the study. What status does a confidence interval on, say, user satisfaction derived from a snowball sample have with respect to the level of user satisfaction in the population?
However, snowball sampling has its merits, too. Among these is the possibility of obtaining a reasonable number of participants in the absence of a tractable method for random sampling.
In my case, I have decided that a snowball sample for this study is worth the risks it entails. In order to avoid poisoning my results, I’ll keep description of the project to a minimum.
But I feel comfortable saying that my method of recruiting includes dissemination of a call for participation in several venues:
• Via a twitter post with a call for readers to retweet it.
• Via this blog post!
• By email to two mailing lists (one a student list, and the other a list of Twitter researchers).
In this case, the value of a snowball sample extends beyond simply acquiring a large N. Because Twitter users are connected by Twitter's native subscription model, the fact that my sample will draw many users who are "close" to my social network is not a liability. Instead it will, I hope, lend a level of realism to how a particular sub-community functions.
One problem with these rose-colored lenses is that I have no way to characterize this sub-community formally. Inferences drawn from this sample may generalize to some group. But what group is that?
Obviously some of the validity of this sample will have to do with the nature of the data collected and the research questions to be posed against it, neither of which I’m comfortable discussing yet. But I would be interested to hear what readers think: does snowball sampling have merits or liabilities for research on the use of systems that inherently rely on social connections that do not pertain to situations lacking explicit social linkage?
# Research award for microblog search
When it rains it pours.
After the exciting news that Google funded my application to their digital humanities program, I found out this week that they will also fund another project of mine (full list): Defining and Solving Key Challenges in Microblog Search. The research will focus largely on helping people find and make sense of information that comes across Twitter.
Over the next year the project will support me and two Ph.D. students as we address (and propose some responses to) questions such as:
• What are meaningful units of retrieval for IR over microblog data?
• What types of information needs do people bring to microblogging environments and how can we support them? Is there a place for ad hoc IR in this space? If not (or even if so) what might constitute a ‘query’ in microblog IR?
• What criteria should we pursue to help people find useful information in microblog collections? Surely time plays a role here. Topical relevance is a likely suspect, as are various types of reputation factors such as TunkRank (and here).
• How does microblog IR relate to more established IR problems such as blog search, expert finding, and other entity search issues?
This work builds on earlier work that I did with Gene Golovchinsky, as well as research I presented at SIGIR this week.
For me, one of the most interesting issues at work in microblog IR is: how can we aggregate (and then retrieve) data in order to create information that is useful once collected but that might be uninteresting on its own?
Is it useful to retrieve an individual tweet that shares keywords with an ad hoc query? Maybe. But it seems more likely that people might seek debates, consensus, emerging sub-topics, or communities of experts with respect to a given topic. These are just a few of the aggregates that leap to mind. I’m sure readers can think of others. And I’m sure readers can think of other tasks that can help move microblog IR forward.
In case anyone wonders how this project relates to the other work of mine that Google funded (which treats retrieval over historically diverse texts in Google Books data), the short answer is that both projects concern IR in situations where change over time is a critical factor, a topic similar to what I addressed in a recent JASIST paper.
# TDD rectangles
Jason and I had a rather interesting pair programming session last week where we tackled a problem that I found on Topcoder. It is in fact a slight (easier) twist on the actual Topcoder problem, as I didn't have it to hand when we were pairing.
## The Problem:
Given a composite rectangle (composed of unit rectangles) of arbitrary dimensions, calculate the number of sub rectangles that can be composed from the unit rectangles. Remember squares are rectangles…
If that’s not clear enough, this diagram should help articulate what the problem is.
Here we can see that you can create 9 rectangles from the initial 2x2 rectangle.
## The Design:
Tackling this in a TDD way, we first had to decide on a rough roadmap of test cases. The simplest test case seemed to be an invalid rectangle: 0 rectangles high or wide, or both.
Test cases:
-----------
1. Do not accept geometrically impossible rectangle
We'd then go on to a 1x1 rectangle, 1x2, 2x1, 3x1, 1x3, 2x2 and 3x2. Jason stressed that we were just sketching out a path we might take; it may not have been necessary to use all the test cases, or maybe we'd need more. The list seemed like it would probably be sufficient for a full implementation.
The next step was to figure out how many subrectangles there were in each composite rectangle; this involved a bit of drawing and tallying. In the end I think I only miscounted one test case, which we noticed fairly quickly when coding.
Test cases:
-----------
1. Do not accept geometrically impossible rectangle
2. 1x1 has 1 subrectangle
3. 2x1 has 3 subrectangles
4. 1x2 has 3 subrectangles
5. 3x1 has 6 subrectangles
6. 1x3 has 6 subrectangles
7. 2x2 has 9 subrectangles
8. 3x2 has 18 subrectangles
Having written our roadmap it was now time to start tackling this problem. We used Java, and JUnit with Eclipse.
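The original post didn't preserve the test code, but the first few cases from the roadmap might look something like this in JUnit (the class and method names are my reconstruction, not verbatim):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class SubrectangleCounterTest {

    // Test case 1: geometrically impossible rectangles are rejected.
    @Test(expected = IllegalArgumentException.class)
    public void doesNotAcceptGeometricallyImpossibleRectangle() {
        SubrectangleCounter.count(0, 0);
    }

    // Test case 2: the smallest valid rectangle contains only itself.
    @Test
    public void oneByOneHasOneSubrectangle() {
        assertEquals(1, SubrectangleCounter.count(1, 1));
    }

    // Test case 8: the last case on the roadmap.
    @Test
    public void threeByTwoHasEighteenSubrectangles() {
        assertEquals(18, SubrectangleCounter.count(3, 2));
    }
}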
The implementation up to 3x1 was fairly simple; however, it started to get a little more tricky after that stage, as it wasn't immediately obvious what formula we should have been using. $xy + (x-1)y + (y-1)x$ seemed like a promising start, passing all the previous tests up until that point. The addition of another term sounded like it might hold the solution, but it was difficult to think of what it could be; we didn't come up with anything in the end.
Our design up until this point had been heading towards a solution of the form return some_algebraic_formula. We changed tack and started looking at it in terms of combinations, as we thought it might be a more successful avenue of attack. Jason suggested it might be as simple as a factorial, given that there are $^nC_r$ ways of choosing $r$ items from a sample of size $n$, and… maybe not as simple as a single factorial, but we were definitely dealing with sequences and/or combinations. The problem with combinations is that they calculate all possible combinations, including non-contiguous options (think of the first and last subrectangle of a 3x1). We didn't discuss this at the time because a simpler solution presented itself.

Being test-driven in design, we looked at our current test case, a 3x1 rectangle (I've illustrated a 1x3, imagine it's just rotated!). After a bit of drawing and colouring it was evident that the number of subrectangles present in a column decreased by 1 for each unit increase in subrectangle length (refer to diagram). In the general case you have $n + (n-1) + (n-2) + \ldots + 1$ subrectangles per column, where $n$ is the length of the column. We could iterate over each column and add either the general result of this formula ($\sum_{r=1}^{n} r$), or generate it on the fly with another loop. Initially it was simplest to just loop over columns and rows separately, instead of trying to take the larger step and implement the actual solution. Our implementation now took the form of two separate loops, one summing down the first column and one along the first row.

The next test case, 2x2, introduced a new class of rectangles: 2D ones – the others had all been 1D in either the $x$ or $y$ axis. The previous implementation yields 5 subrectangles instead of 9. We looked at which ones it was counting: just the first row and first column – time to loop over the whole thing. Enclosing each loop with an outer loop iterating over either column or row depending on the inner loop, and removing duplicates, was a possibility – a very messy one, and probably unlikely to succeed as the general solution either. The algorithm needed to loop over both columns and rows (a nested loop?) and then calculate the number of possibilities, hopefully without counting duplicates. After a bit of trial and error we managed to reach the general solution (reconstructed in the sketch at the end of this section).

I like this, it's rather clever. It's easiest to understand given an example; let's use the 2x2 rectangle we were trying to solve. The first column has 3 subrectangles. Here's what the code executes:

i = 1:
  j = 1: numberOfSubrectangles += 1; // numberOfSubrectangles = 1
  j = 2: numberOfSubrectangles += 2; // numberOfSubrectangles = 3

Looping over the first column we get 3 subrectangles, so far so good.

i = 2:
  j = 1: numberOfSubrectangles += 2; // numberOfSubrectangles = 5
  j = 2: numberOfSubrectangles += 4; // numberOfSubrectangles = 9

return 9

Through the second column, the subrectangles found in i = 1 are found again in this next column; however, we also need to take into account the rectangles that are formed along the rows, of which there are 3, giving us a total of 6 additional subrectangles.

## Algebraic solution:

We had a solution! I like maths, so I've derived the algebraic solution from the code. Reasoning about loops of loops in mathematics is particularly aesthetically pleasing to me. I'll be using $z$ as the numberOfSubrectangles; it makes writing the maths easier. You can literally change the Java code to return $z$, how pleasing.
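Here is a minimal reconstruction of the general solution we reached, assuming the class and method names from the test sketch above (the original code was shown as an image and isn't reproduced verbatim):

public class SubrectangleCounter {

    public static int count(int width, int height) {
        if (width < 1 || height < 1) {
            throw new IllegalArgumentException("Geometrically impossible rectangle");
        }
        int numberOfSubrectangles = 0;
        // Each (i, j) step contributes i * j, exactly as in the 2x2
        // trace above: 1 + 2 + 2 + 4 = 9.
        for (int i = 1; i <= width; i++) {
            for (int j = 1; j <= height; j++) {
                numberOfSubrectangles += i * j;
            }
        }
        return numberOfSubrectangles;
    }
}

And the algebraic form of those nested loops, using $z$ for numberOfSubrectangles (again my reconstruction of the missing working):

$z = \sum_{i=1}^{x}\sum_{j=1}^{y} ij = \left(\sum_{i=1}^{x} i\right)\left(\sum_{j=1}^{y} j\right) = \frac{x(x+1)}{2} \cdot \frac{y(y+1)}{2}$

For the 2x2 case this gives $3 \cdot 3 = 9$, and for 3x2 it gives $6 \cdot 3 = 18$, matching the test roadmap.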
After tackling the problem with Jason, I had a go at solving it mathematically a few days later, and by then I'd forgotten the solution; I didn't have much success at all. In this case I think developing in this way helped us to reach the insight that we should add i*j inside our nested loops. I don't have the problem-solving skills to incrementally build a solution mathematically, although I'm starting to acquire them in the domain of programming (very slowly); perhaps if I try to apply a TDD approach to maths I may get further with hard mathematical problems?
# Elementary row operations

Performing row operations on a matrix is the method we use for solving a system of equations. There are three elementary row operations:

1. Interchange two rows.
2. Multiply a row by a non-zero constant.
3. Add a multiple of one row to another row.

Any matrix obtained from a matrix A by a finite sequence of elementary row operations is said to be row-equivalent to A. The elementary column operations are exactly the same operations, done on the columns instead.

If we want to perform an elementary row transformation on a matrix A, it is enough to pre-multiply A by the elementary matrix obtained from the identity by the same transformation. An elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. Elementary matrices are always invertible, and their inverse is of the same form.

Theorem 1: If the elementary matrix E results from performing a certain row operation on the m×m identity matrix, and A is an m×n matrix, then EA is the matrix that results when the same row operation is performed on A. Left multiplication (pre-multiplication) by an elementary matrix represents an elementary row operation, while right multiplication (post-multiplication) represents an elementary column operation. The elementary matrices generate the general linear group GL_n(R) when R is a field, and if A is an invertible matrix, then some sequence of elementary row operations will transform A into the identity matrix I.

For example, for a system of three equations written in matrix form: interchanging rows (1) and (3) is equivalent to pre-multiplying both sides of the system by
$$E_1 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$$
obtained from the identity matrix $I_3$; multiplying row (3) by 2 is equivalent to pre-multiplying both sides by
$$E_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$
and adding row (1) multiplied by −2 to row (2) is equivalent to pre-multiplying both sides by
$$E_3 = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

One of the advantages of elementary matrices is that their inverses can be obtained without heavy calculation – the inverse simply undoes the row operation:
$$E_1^{-1} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}, \qquad E_2^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1/2 \end{bmatrix}, \qquad E_3^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Exercises: what is the elementary matrix of the system $A X = B$ for each of the following row operations?

A) A is a 2 by 2 matrix; add 3 times row (1) to row (2).
B) A is a 3 by 3 matrix; multiply row (3) by −6.
C) A is a 5 by 5 matrix; multiply row (2) by 10 and add it to row (3).

To row reduce a matrix:

1. Perform elementary row operations to yield a "1" in the first row, first column.
2. Use row operations to obtain zeros down the first column below the first entry of 1.
3. Perform elementary row operations to yield a "1" in the second row, second column, and continue in the same fashion, interchanging rows or multiplying by a constant if necessary.

When reducing a matrix to row-echelon form, the entries below the pivots of the matrix are all 0, and the pivots are essential to understanding the row reduction process. Matrix rank is calculated by reducing a matrix to row echelon form using elementary row operations. Reduced row echelon form takes a lot of time, energy and precision, so use it only if you're specifically told to do so.

Using these elementary row operations, you can rewrite any matrix so that the solutions to the system that the matrix represents become apparent. If A is an invertible matrix, this also gives a method for finding its inverse: write A down with an identity matrix I next to it (the augmented matrix), then use elementary row operations to turn A into an identity matrix – the right-hand side comes along for the ride, with every operation applied to it as well, and ends up as $A^{-1}$.

Note that elementary row operations change the determinant in a predictable way – interchanging two rows changes its sign, multiplying a row by a constant k multiplies the determinant by k, and adding a multiple of one row to another leaves it unchanged – and they do not, in general, preserve the associated eigenvalues.
There are three classes of elementary row operations, which we shall denote using the following notation: 1. Rj â Rk. Those three operations for rows, if applied to columns in the same way, we get elementary column operation. Use the reduced row echelon form only if youâre specifically told to do so by a pre-calculus teacher or textbook. C) A is 5 by 5 matrix, multiply row(2) by 10 and add it to row 3. Reminder: Elementary row operations: 1. Our mission is to provide a free, world-class education to anyone, anywhere. This gives us . The matrix on which elementary operations can be performed is called as an elementary matrix. SPECIFY MATRIX DIMENSIONS: Please select the size of the matrix from the popup menus, then click on the "Submit" button. The matrix in algebra has three row operations are called Matrix Elementary Row Operation. A) A is 2 by 2 matrix, add 3 times row(1) to row(2)? The four "basic operations" on numbers are addition, subtraction, multiplication, and division. If A is an invertible matrix, then some sequence of elementary row operations will transform A into the identity matrix, I. To use Khan Academy you need to upgrade to another web browser. $E_1^{-1} = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$, , the inverse of $$E_2$$ is obtained from I, ; hence the inverse of $$E_2$$ is given by Example 1: Row Switching. Read the instructions. Using these elementary row operations, you can rewrite any matrix so that the solutions to the system that the matrix represents become apparent. Row operation calculator: v. 1.25 PROBLEM TEMPLATE: Interactively perform a sequence of elementary row operations on the given m x n matrix A. To log in and use all the features of Khan Academy, please enable JavaScript in your browser. To perform an elementary row operation on a A, an r x c matrix, take the following steps. Up Next. 1.5.2 Elementary Matrices and Elementary Row Opera-tions No headers. Elementary matrix row operations. Matrix rank is calculated by reducing matrix to a row echelon form using elementary row operations. Basically, to perform elementary row operations on , carry out the following steps: Perform the elementary row operation on the identity matrix . Here you can calculate matrix rank with complex numbers online for free with a very detailed solution. Learn how to perform the matrix elementary row operations. Our mission is to provide a free, world-class education to anyone, anywhere. , the inverse of $$E_1$$ is obtained from I, ; hence the inverse of $$E_1$$ is given by The only concept a student fears in this chapter, Matrices. Matrix row operations. $$E_1 = \begin{bmatrix} 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \end{bmatrix}$$ obtained from the identity matrix $$I_3$$. The elementary matrices generate the general linear group GL n (R) when R is a field. In mathematics, an elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. $$E_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 2 \end{bmatrix}$$ obtained from the identity matrix $$I_3$$. If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked. Sort by: Top Voted. [ 2 3 â 2 6 0 0 3 â 6 1 0 2 â 3 ] â [ 1 0 2 â 3 2 3 â 2 6 0 0 3 â 6 ] In the example shown above, we move Row 1 to Row 2 , Row 2 to Row 3 , and Row 3 to Row 1 . If you're seeing this message, it means we're having trouble loading external resources on our website. 
To find $E$, the elementary matrix for a given row operation on an $r \times c$ matrix $A$, apply that operation to the $r \times r$ identity matrix; performing the operation on $A$ is then the same as forming the product $EA$. For example:

A) $A$ is a $2 \times 2$ matrix; to add 3 times row 1 to row 2, take $E$ to be $I_2$ with 3 times its row 1 added to its row 2.
B) $A$ is a $3 \times 3$ matrix; to multiply row 3 by $-6$, take $E$ to be $I_3$ with its row 3 multiplied by $-6$.
C) $A$ is a $5 \times 5$ matrix; to multiply row 2 by 10 and add it to row 3, take $E$ to be $I_5$ with the same operation applied.

If $A$ is an invertible matrix, then some sequence of elementary row operations will transform $A$ into the identity matrix $I$, and applying the same sequence to $I$ produces $A^{-1}$. The usual order of work is down the matrix: obtain a leading coefficient of 1 in the first row, first column; create zeros in the first column below it; obtain a 1 in row 2, column 2; and so on. The pivots (the leading 1s) are essential to understanding the row-reduction process. One of the advantages of working with elementary matrices is that their inverses can be obtained without heavy calculations. Note that elementary row operations may change the determinant of a matrix (a swap flips its sign, and scaling a row multiplies it by the scale factor), but they never change whether the determinant is zero, which is why they are safe for rank computations. A sketch of the whole procedure in code follows.
## CryptoDB
### Paper: Adaptive Security of Multi-Party Protocols, Revisited
Authors: Martin Hirt, Chen-Da Liu-Zhang, Ueli Maurer
DOI: 10.1007/978-3-030-90459-3_23

Abstract: The goal of secure multi-party computation (MPC) is to allow a set of parties to perform an arbitrary computation task, where the security guarantees depend on the set of parties that are corrupted. The more parties are corrupted, the less is guaranteed, and typically the guarantees are lost completely once the number of corrupted parties exceeds a certain corruption bound. Early and also many recent protocols are only statically secure in the sense that they provide no security guarantees if the adversary is allowed to choose adaptively which parties to corrupt. Security against an adversary with such a strong capability is often called adaptive security, and a significant body of literature is devoted to achieving it, which is known to be a difficult problem. In particular, a main technical obstacle in this context is the so-called "commitment problem", where the simulator is unable to consistently explain the internal state of a party with respect to its pre-corruption outputs. As a result, protocols typically resort to cryptographic primitives like non-committing encryption, incurring a substantial efficiency loss.

This paper provides a new, clean-slate treatment of adaptive security in MPC, exploiting the specification concept of constructive cryptography (CC). A new natural security notion, called CC-adaptive security, is proposed; it is technically weaker than standard adaptive security but nevertheless captures security against a fully adaptive adversary. Known protocol examples separating adaptive from static security are also insecure in this notion. Moreover, the notion avoids the commitment problem and thereby the need for non-committing or equivocal tools. The authors exemplify this by showing that the protocols by Cramer, Damgard and Nielsen (EUROCRYPT '01) for the honest-majority setting, and (the variant without non-committing encryption) by Canetti, Lindell, Ostrovsky and Sahai (STOC '02) for the dishonest-majority setting, achieve CC-adaptive security. The latter example is of special interest since all UC-adaptive protocols in the dishonest-majority setting require some form of non-committing encryption or equivocal tools.
##### BibTeX
@article{tcc-2021-31520,
title={Adaptive Security of Multi-Party Protocols, Revisited},
booktitle={Theory of Cryptography;19th International Conference},
publisher={Springer},
doi={10.1007/978-3-030-90459-3_23},
author={Martin Hirt and Chen-Da Liu-Zhang and Ueli Maurer},
year=2021
} |
# SunPy wcs

## sunpy.wcs Package
Warning
As of version 0.8.0 the sunpy.wcs package is deprecated and will be removed in a future version, you should now transition to using sunpy.coordinates and sunpy.map.GenericMap.world_to_pixel / sunpy.map.GenericMap.pixel_to_world (or astropy.wcs directly) for the functionality provided in this module.
The WCS package provides functions to parse World Coordinate System (WCS) coordinates for solar images as well as convert between various solar coordinate systems. The supported solar coordinates are:

- Helioprojective-Cartesian (HPC): The most often used solar coordinate system. Describes positions on the Sun as angles measured from the center of the solar disk (usually in arcseconds) using cartesian coordinates (X, Y).
- Helioprojective-Radial (HPR): Describes positions on the Sun using angles, similar to HPC, but uses a radial coordinate (rho, psi) system centered on the solar disk where psi is measured in the counterclockwise direction.
- Heliocentric-Cartesian (HCC): The same as HPC but with positions expressed in true (deprojected) physical distances instead of angles on the celestial sphere.
- Heliocentric-Radial (HCR): The same as HPR but with rho expressed in true (deprojected) physical distances instead of angles on the celestial sphere.
- Stonyhurst-Heliographic (HG): Expresses positions on the Sun using longitude and latitude on the solar sphere, with the origin at the intersection of the solar equator and the central meridian as seen from Earth. This means that the coordinate system remains fixed with respect to Earth while the Sun rotates underneath it.
- Carrington-Heliographic (HG): Carrington longitude is offset from Stonyhurst longitude by a time-dependent scalar value, L0. At the start of each Carrington rotation, L0 = 360, and it steadily decreases until it reaches L0 = 0, at which point the next Carrington rotation starts.
Some definitions:

- b0: Tilt of the solar North rotational axis toward the observer (heliographic latitude of the observer). Note that SOLAR_B0, HGLT_OBS, and CRLT_OBS are all synonyms.
- l0: Carrington longitude of the central meridian as seen from Earth.
- dsun_meters: Distance between the observer and the Sun. Default is 1 AU.
- rsun_meters: Radius of the Sun in meters. Default is 6.955e8 meters. This value is stored locally in this module and can be modified if necessary.
References
Thompson (2006), A&A, 449, 791 <https://doi.org/10.1051/0004-6361:20054262>
### Functions

| Function | Description |
| --- | --- |
| convert_data_to_pixel(x, y, scale, …) | Deprecated since version 0.8.0. |
| convert_hcc_hg(x, y[, z, b0_deg, l0_deg, radius]) | Deprecated since version 0.8.0. |
| convert_hcc_hpc(x, y[, dsun_meters, angle_units]) | Deprecated since version 0.8.0. |
| convert_hg_hcc(hglon_deg, hglat_deg[, …]) | Deprecated since version 0.8.0. |
| convert_hg_hpc(hglon_deg, hglat_deg[, …]) | Deprecated since version 0.8.0. |
| convert_hpc_hcc(x, y[, dsun_meters, …]) | Deprecated since version 0.8.0. |
| convert_hpc_hg(x, y[, b0_deg, l0_deg, …]) | Deprecated since version 0.8.0. |
| convert_pixel_to_data(size, scale, …[, x, y]) | Deprecated since version 0.8.0. |
| convert_to_coord(x, y, from_coord, to_coord) | Deprecated since version 0.8.0. |
| get_center(size, scale, reference_pixel, …) | Deprecated since version 0.8.0. |
| proj_tan(x, y[, force]) | Deprecated since version 0.8.0. |
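Since every function above is deprecated, new code should use sunpy.coordinates, as the warning at the top of this page recommends. The following is a minimal sketch of the replacement workflow; the coordinate values and obstime are illustrative, and it assumes a sunpy release new enough to provide sunpy.coordinates.frames:

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from sunpy.coordinates import frames

# A point in Helioprojective-Cartesian (HPC) coordinates as seen from Earth.
hpc = SkyCoord(123 * u.arcsec, 456 * u.arcsec,
               frame=frames.Helioprojective(observer="earth",
                                            obstime="2017-08-01"))

# Stonyhurst-Heliographic (HG) longitude/latitude of the same point ...
hg = hpc.transform_to(frames.HeliographicStonyhurst(obstime="2017-08-01"))

# ... and Heliocentric-Cartesian (HCC) physical distances.
hcc = hpc.transform_to(frames.Heliocentric(observer="earth",
                                           obstime="2017-08-01"))

print(hg)
print(hcc)
```

Note that the observer and obstime must be specified explicitly: unlike the old module-level defaults (dsun_meters, b0, l0), the frame classes derive these quantities from the observer location and time.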
## Infinite tetration and superroot of infinitesimal

andydude (Long Time Fellow, Posts: 509, Threads: 44, Joined: Aug 2007), 01/08/2008, 08:52 AM

Ivars wrote: "... both finite and infinite, and its opposite log(log(log(log(log(.....) both finite and infinite?"

What? Oh, I get it now. You are mistaken: $\log^n(x) = {}^{\text{slog}(x)-n}e$, which is not the inverse of tetration... it is tetration. Tetration has two inverses: super-roots and super-logarithms. You might be able to consider super-logarithms as iterated logarithms, but instead of giving the nth iterate of a logarithm, the super-logarithm gives you the n required to produce the given iterate. Please see Wikipedia's super-logarithm and iterated logarithm for more.

Andrew Robbins
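To make the quoted identity concrete, here is a small numeric illustration (my own sketch, not code from the thread). It uses the simplest piecewise-linear approximation of the base-e super-logarithm; Robbins's actual slog is constructed far more carefully, but for this matched pair of approximations the identity $\log^n(x) = {}^{\text{slog}(x)-n}e$ holds up to floating-point error:

```python
import math

def slog(x):
    """Super-logarithm base e via the simple linear approximation:
    slog(x) = x - 1 on (0, 1], extended by slog(e^x) = slog(x) + 1."""
    n = 0
    while x > 1:          # take logs until we land in (0, 1]
        x = math.log(x)
        n += 1
    while x <= 0:         # exponentiate up from non-positive values
        x = math.exp(x)
        n -= 1
    return n + (x - 1)

def tet(h):
    """Tetration {}^h e at real height h, the inverse of slog above:
    tet(h) = h + 1 on (-1, 0], extended by tet(h) = exp(tet(h - 1))."""
    if h > 0:
        return math.exp(tet(h - 1))
    if h <= -1:
        return math.log(tet(h + 1))
    return h + 1.0

# Check the identity log^n(x) = {}^(slog(x) - n) e for n = 2:
x, n = 1234.5, 2
lhs = math.log(math.log(x))   # the n-th iterated logarithm of x
rhs = tet(slog(x) - n)
print(lhs, rhs)               # both ≈ 1.9628...
```

This also shows the distinction Robbins draws: the iterated logarithm is tetration evaluated at a shifted height, while slog answers the inverse question of which height produces a given value.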
83RY[jGJL'kQE+BfKU:'F01':F@$E.kit%a=@'Xn4W4m-4+(6 /F6 7 0 R /Font << !WcueC,Qk;#4h7\c=Ji. UsS<=>Kjfj1WIaL(HN=%-UntX.9MK[!hg^P6#4sU)9dNLX6@%$I(tQ;jl7,6%B4tD Z5LU^]Q5IDdZU.Xc#H'U0[::nKG_do'&o3I;2AW"b>EBY*\0hc71r&\ne2b;N13!=F_Ti73@XK(Th7c;^hk$gTp0"(cgcFY /Contents 10 0 R .R;eqjpgo1/4!Fr&+$jXLAYTVO]1+m8lckcs6?WUYoS*mp^f0B;olX'n:U\?hZ_pOc-A^%VUZLSUpB]9Cj9J"1b)VtucK SPYkTd_trEcRIj1aT0S,=0D75X79U$Pas98laZHP>uaQ\SDCE'&o*WorXC[]%1m. /Type /Page aU9f2o89jT0OiV;as\f]#;$5k_f5p#dU5^>#LLI=0M)dO3[esb#,6q#OK;MNd/jhX 8I9?PC=%pGn2h*K?o&5TGjH4MchS'm5mJh+fE.j_&.Jl_E('j0#gaL:nHfQl?DFb= Binary Tree Data Structure A tree whose elements have at most 2 children is called a binary tree. 1, Chapter 10, Binary Trees 243 Ó … qS0J@bua]g\_MJ7[6G0PtTBb]CTKnVqnU6MOn#7"5GSrp#XfP]9N-N2? >> [SLgh9V%miX ;B7,[qWZ 'h+TIdPjKn1kb(tGi_L1NLS'hSYIu"!n6Xd.g*_+)\>;Z##lCH Binary tree is a special tree data structure. ]77dcJq#6L^ReH+cmBp]+$%'EL^EN<>rl2 /Filter [ /ASCII85Decode /LZWDecode ] SH,bj>Aci[0rX=jN=f$S'TmXRd74$G9IJmB6%2l=,&+(8gV3#@?Z8eD_Z^N_X[8Nj! YjXP:X12o.d%Ua4.Z-,0W*im1WXg4aajPaZ/Ah]LaqbOeNYlh5dK,UZ;Ujuc;b^8M :Ts"R;.DqP#m&?cROH>Lal4@ls)P[Un%KT4R^\Qo5t+b9lBB=lf*QN"RW2k>''7H;bN?-Dt5o#:@b5.,5@AWa[@3:N#FiFX1X /Type /Page nZ:c5QrfCN#%_CQ9:C:eOd$nPKidD0:= >> So far we have seen linear structures • linear: before and after relationship • lists, vectors, arrays, stacks, queues, etc Non-linear structure: trees • probably the most fundamental structure in computing • hierarchical structure • Terminology: from family trees (genealogy) 3 endobj /Filter [ /ASCII85Decode /LZWDecode ] 3F;=o@b+gp(1CI5QCKhY/JJ:cW@jI*A)r)E-RW!go4#(]8*;EdOLc_HRUL$* ^@7\j$o/oond*tJ6ZKECdm'N_0;_sRPH#)/de0fs$ppn[TJQp2H\Ojsh<1&V2IR! \mq9$4,klB?BhDQ$;iIrOgK(\bf]97m5pNU;*p4]>[&JqEbq?lKEY>&.W3K%f[b /F2 9 0 R endobj /Parent 57 0 R ]77dcJq#6L^ReH+cmBp]+$%'EL^EN<>rl2 endobj endobj >> SE/H1.Ze=sdhZ#T(/nK+\\RZeXtn&F1)OW4Xo&$oX$2nuTuT1tok8s!NkFKbaV )NnK;ccVYU[";[*gE=>q73fXUFNeVi_ 35 0 obj &.\b258']F8-LXPl5pW_PnrlJ&8PJ]a4@%r%ht,! 9Q6=(9G.ZD*(!+p6$0VQ3Msggg5SqaBs'.I@mYgPUgc"l),CG2:+A;#]cZkY-mY3;Ja]0bPV/d4$TO#4OGA=dAN"/>Vacmog'2>DTtid J/gjB!d=RR-(;c stream So, a need arises to balance out the existing BST. G%do/cX9W?0#f/na&5h7>DO\Ubh6424A5"t)N6LjtZ=0mSa1kHsBF%O'76"g;@< .GQh>KI;hXNXZdrk7hF==j9gH7s&j>/]etY2NXb>=D$5 /Filter [ /ASCII85Decode /LZWDecode ] 8V.29q_"an_]qCa]fRDS"U,#Lq)'kt]NLSD&5Le:E+^8O !=rISGJ5A%\UN"@9%3l]T /F4 8 0 R eIWUP0qgnJbI"n#8ii\Zt!#Qe8!V&8IM+DN^Lsj'X3RDU;FtMe:\EM_\7q%S9: TF#+%HEPkk_11M/f#FFts5thn*8HT7R1.T6,juQ(ON)pA'mgrt<4RYi(&L#P: 1 0 obj << pXK'B@qU+&[E_C=?MT_@?RBS:Z8n86fq%IVEmBOcpX^&!Qtt/m,9eLG0fVR_N=t *s%cjRL^m4U("ZYAU;?g1E;ckN_)-m.L!q)@%6 g_VGQSs9Y-C]QfGOVBQ0fO>nL-Wj1ZD]Zp@G4^+5irj"W+N6e\VgZ"fSh%7N6GG@ /Type /Page hcj\@F0:*cW>S#LJD"_8(*MBam>6eq,)& :njI/.N3k+=elUi,nLu+7&a%8*;u:>u;F_K\cd(L4abr.-E@S;CIcDm0bj@\5VZM3iuX>8.eU,CtZF*+_-f9;OsO.Wna stream R"S&R/>4Kq&ti\QL&l76SjTLhEAL8WUDCd&. G5nXEp5?NbXOdiL:C1Tu9s7GbWDL-83Nb%LgJ>20D;B%&;qW?G\f>c#e%Ji\1B9+ /Length 11 0 R /Resources << ^4gLBYD6GJ,65ahi&IIf9aZiXY-:^JZui#r-,j3B?Es2#c[T0umc67O\lR'NKhO Fe)7W@6F^WOfbC+,#J+;Jbq!W~> endobj /F11 16 0 R (l^;*#;@V%7K+N)SACRt2 [.r Full / Strictly Binary Tree-. /Contents 32 0 R 1qopn#Yt?+WB8tnNTmMRLn:uDWAfTW^bs0UKO=:[k7s?YSh5V*l4Ua3?&:&UsG4i /Parent 57 0 R The next section, Section 3, shows the solution code in C/C++. 70 0 obj >> [ra1e0r"cL2Ao'EJp=?EMAeDm9(IZ)XO\-VQ90eDCI"l[n968THO)5p4JdnJ53 '"WOG&,1d/K+d%*cJh. 51 0 obj << W)"=3PpD=*O_CPsYj>:d2^75@ZmV&WFqjL9Q.YYl^=]NYiG?)F:(o! 
>> _\'\7=uI:YKu63h]G,FT9dpcK2"lWnU0MF*)P'p38%,BJs!je;%>@O?is+-l"eC A/F=J#+pB"c7mrcQ?VsTetf:rgs8[VN:j30!dUNM*b1WoB\1%7l^DCLf3ku@._8&j mK\H5h7j'N^KHjVa[E*fd=!LY')TiRc]>l6W3dCO;@#i)UUEr)l"=B_;ENQ:Tic endobj /F11 16 0 R /F6 7 0 R /Font 55 0 R s'Y&7(d_\JMUNKc80l8q[n\FuWq(mC@qTS9H1lYs/hW=:Q!0?R[hb)W;;26iRZrUm C.)d)7)Ga49pWe,>9i. /Type /Page << +JI+d-u9kO/!TNX:D_EUU9SNkN4ON/A1+BO'X*'&X8a%lHN,N?V3,)sBpf]m\7.,X \E:@VG2%iD*T9=:4IG4(s+T6VlAdD)(/iTk"VJ"sZ%7L7PeBIPK)S(-s3U7J,bjs0=d68; "-haV=tU-u'^XrD(?=h,bOU%>jl/7Y!B\jJG0XD5:tcYnFoe_JprhlJiN*0hsE" ML*,KdhV:]T'c7fNT;,M%6p07;!L7 B@E-,]b3SRf6Z^B#,t6TFm>0gP2fn;%X=(hkSG)i(kaVE_mMh5*So"#4*+V,kLND Data types Primitive types. ^@7\jo/oond*tJ6ZKECdm'N_0;_sRPH#)/de0fsppn[TJQp2H\Ojsh<1&V2IR! /Resources << But, it is not acceptable in today's computational world. /F4 8 0 R stream !a8\ A tree is a nonlinear hierarchical data structure that consists of nodes connected by edges. /Resources << Y'B"faBAU\Q.L339B+(a'/O/-]CDnbG3dP'ubmSd!^!4-E]Bn)lu%hDs#g General Tree. /Filter [ /ASCII85Decode /LZWDecode ] 71 0 obj A/F=J#+pB"c7mrcQ?VsTetf:rgs8[VN:j30!dUNM*b1WoB\1%7l^DCLf3ku@._8&j #[email protected]=+S:#2h=P'JLXM,!!I,h)S(*>6?U:ZLf@k+%i_I>J"Tm0? B!naQp7YPGF^VUG!6Jh;-JJcZR-ELBQ@feP,_!n9!GK:Alb7! :msdV)!BSYt?En%-0Y^Ot-/?ulLW9PXjh"XH;Y7=t9slHI'PK16_D-0kTL<6U ?XdqEO*^)1[f7?>k9+' 13663 41 0 obj .SeOeK4%4.OE/H'.YVS@9lNq>eop,HC1Jpk[KO*YDX7t6qqpK0oiEpdmV/6gU&jL << endobj UsS<=>Kjfj1WIaL(HN=%-UntX.9MK[!hg^P6#4sU)9dNLX6@%I(tQ;jl7,6%B4tD mo+9geZlg=)_^VP,M]/(f2sspFPE8h&oTZhM*H2>50h5lp00]UDW:NA\pUP.MrH 8I9?PC=%pGn2h*K?o&5TGjH4MchS'm5mJh+fE.j_&.Jl_E('j0#gaL:nHfQl?DFb= qS0J@bua]g\_MJ7[6G0PtTBb]CTKnVqnU6MOn#7"5GSrp#XfP]9N-N2? 9Q6=(9G.ZD*(!+p60VQ3Msggg5SqaBs'.I@mYgPUgc"l),CG2:+A;#]cZkY-mY3;Ja]0bPV/d4TO#4OGA=dAN"/>Vacmog'2>DTtid trees (aka "ordered binary trees") while others work on plain binary trees with no special ordering. \s5[#;&RT;+Uq2d"o*mnFB)QP?/[email protected])2ShAWl?#!6+.qK*ZjG7]=nk7A773 NA2IXZ#L\ujOT0Hi6erO[%p2npc!lr? endstream 1_=ro*iYX#U_EOF;UUC-X.7&;_9gG.NLrlBpOnbjf8W)Uc]-dAjB 1704 Tree: A Hierarchical ADT ! 3H,+u59lggUF)\o]W]B6j'/6j?3! aC^HFYFR19qamL#YgBu@_?BQ_dDC1VQ+!Y@JcLMf>;F7oHB!mQAV.#9pLn?E3T-r)QG"BL1N1,2;A?8QcUV%/2Les-AUfOrLCC*nk%^s64(tlq^g[=-WFHgHV,hltWaTe2B9@6JA&nb2f /Parent 57 0 R ;=]Br6Aaf*"@K]/bUE2#F2uekFUd*2pg5@F8K".UaHZ1]@Ri=Z"d@e@=bJ?hr9E !pbJca[R << kBS?niI&t.LlQgh. [F8"E0@'aP@]D=!m;' >> de6am6.H/_b+tl:Z4C@;G.jeNgVY\eVT/KZB/!8jJSQ0RF?D/:hqO8 2ek.bS%c/HKUug_HgefKM7ss"I? "SNRe/8&\I>[EUE/qk#PTL ;1HeGrU0g3gW&6-iN%Btd.UsBasW&&ag9diWmb"TjUb[^TsGQ,>p5/?W*\iqL86,_ /F4 8 0 R 2"^A^RK%CfOPs4BeZ'@*KO*dIZ.j%5tI^hgNakrYLF5lYJ?BkOc>N@(PA;AY8T ZPbJ!Mh>+e.tJOj5k7Cig2=Rt0s9p6FP9M.-CokO_g2cRiO/jlY))#M'T,%5H: << /uacute/ugrave/ucircumflex/udieresis/dagger/.notdef ! /F4 8 0 R aL4u:4=LRNF@f_sCL7J6(uAg#h/2fK-o_pr:CP6oL]C1;. 58 0 obj )ulG-i:\GG33+Y]lULY7DQjURph.pLZGu-L."KMk%#fqDbL]G,0_%RMcsPHIShc^ 47 0 obj << eA*!FLb"j+5VlF9RO(The!fWP>0'/43('Z78W3LfPmUD]d#>PDn<:94 k8(1,aPc;K-iolC)\p>f!h'fEEo8c9!5J^_@_[r@Ka)SBc/.fYcWV@]L7oX:?%G Dmji[=5)aA.^_JCekVL/LYMV374m%dm;a#/^B(o:fW, A data structure should be seen as a logical concept that must address two fundamental concerns. endobj fJrca'N;u6CBg.g-/pq1&/j;.5W4VjJD2fO^uYg<7>mQ)'TqY[AB=,TEnU!tbi3 ffA/^oPF%OZXn0f6*Nd.#ehT[EpY9rsOQ]>*+.e5f#g.#5;APA[]gepie/iaFQ"+k*73;_]HGDHrhnE;G1j=h< /Font << 58 0 obj >> 8>X+6E<6PG-3^V-b/!cr5m]pq1Q^gu3.h:p&"FXiF,l9-43>;"=JmNai#OdLC;G@ >> ? A is a parent of B and C. B is called a child of A and also parent of D, E, F. 
)YI)e2g7mauebl7h+LM>aBX?1\3P9i:NU7G800jo.R1k+C!9*tpS"KloJXQ']6.PX^g,d9*X&:1Np .GQh>KI;hXNXZdrk7hF==j9gH7s&j>/]etY2NXb>=D\[5 q'"2>,EH:_Q;T=38U3[UKf?EP78DKaeFKd"Cmn!p/K/toeK=]In6cV,cF]-KrQ /F18 30 0 R 61 0 obj /ProcSet 2 0 R F])0^!-8]&-h@SC*,erVJC ! Aqn1V*4[FmUm2Wd!P5%5:G!3'B=#.8#,eCpm!du%&8M5]N_k9RKm:Mq-7Me>? 5[7q/1aO?tYJc%e.W?U9?3\Clo6G@6i,k3OFr+7qKM,WO62@(GW"^3'*,[?%hnq\d(0AL#00M97.2e SsJ,YDaX+D[&3e\,tToUMClLI7-r_FDD#>LU0ih=W/A5KKP5f[S%(*d8pR)HDS7L< ir3lij6l:(O6C+)DXc!/I'7Y],6Ca0NplE20JD^d090V&dW,YYFcI:lco@dQf!C=r << N:%HC1u3@-. GWh#KjGbr0s_#hU/F6 '[email protected]\"tQ?f&Z!JjW4iE;JDN;G,Gme ^6sVJ[UApX8lVhBa=ZjM;\#Z,85)QTK0KZa7?h(#o=SVW+?&/*3+3-!e^E4? General Binary Trees 2. << 3]5mF]%WgoOA/RHrs+C% ]opK+f!Kd+hFq*D(E"_ZZfQNu9/&+Z%h%F5F%]g8oAB^bb:#0KDAA 2 0 obj The tree data structures consists of a root node which is further divided into various child nodes and so on. A tree (upside down) is an abstract model of a hierarchical structure ! ?Jp@7K-5&2JXK:Yp?m7epCl_&]", ]77dcJq#6L^ReH+cmBp]+%'EL^EN<>rl2 0qnl2L9\oo^'+VGI2.Kf_DK+OIF!-Yc@[.pTtHdL^7V8[H\5rV[k;t_%QF1>[!W^2mV0:@U]!MP? )[email protected]:O /Parent 36 0 R KJ&9sj.6Qoq2\I43FOX^iH_ZPg!0XDAcKmTPb4olj"b0^"1! :nV8T'U9+mU+F^u8!8Z;187aHeHHa!. Tpl+T1*OgVQ7:5XT]8Hh9!okKEWBpt#4]WY)bNe_7N87fA4[Fi!HbOW8R\)6f#s5h /Font << endobj p_6(QX@e3YK2mYSZlTg?Oo5H)3DG2?gN&lZq.i5,66On*KPN)TK 6Wfp9ZPZ+&kjt?.L@F?,N^?UP4fk_O1,D'&BOZ-5!t(GU?F7o(S@3/sEe;r9Wt /Filter [ /ASCII85Decode /LZWDecode ] /Contents 65 0 R CUT08:E]2F&ZOt"]1aG+Y-r[/Zg[M0jHYVd4'[H^O'4NoO&)j9HfUZo2=A[N]LC]42g9WKBKZ&\8 eA*!FLb"j+5VlF9RO(The!fWP>0'/43('Z78W3LfPmUD]d#>PDn<:94 ;jSiEN5I51SSrO 72 0 obj 5[7q/1aO?tYJc%e.W?U9?3\Clo6G@6i,k3OFr+7qKM,WO62@(GW"^3'*,[?%hnq\d(0AL#00M97.2e There are different types of tree data structures. /F11 16 0 R /ProcSet 2 0 R *RJpTLMiLk0;cI'F;*X/6qMM)1KI0pbpmTZ 69 0 obj /F11 16 0 R /F11 16 0 R endstream &fn>77hFVffT!p*Lc5lrS(i&)%/;_(Fdq(75j\/&'QB)BKc(buVW@auacVO_:WU;S'K7t@q70K5)u,/E,V4q+mPJ+rSb3EtSFMhD#V%Cl(:R) Yh-5/V]or906;%f3*e2=jhS!&3??! j*[]Mp^8]k]f)C\;!+uQo'HH8:dp1JPKV8)oPFsi=^]t!h_;.L49#f7Je^IR%i~> stream endobj c_[+N\i6h@:\R&V.f)1Qr9'1t^ZT:fB=k9P0IpYgRhSRuH>/TqKON,FM9G;j!lq The operations link, cut, and evert change the forest. >> @i]?&OcI8'dU?Wrm>26pECZD'@hPSi74O,(3-k0>,^\a)dS3KX[@Fq"8n_-@rr; /F7 22 0 R PU,Ifk=. SE/H1.Ze=sdhZ#T(/nK+\\RZeXtn&F1)OW4Xo&oX2nuTuT1tok8s!NkFKbaV >> stream >> )5*Vr*(]mkYB%m-%_b=V%2M*Kr a*8J('X40k@>JQs!17]AI/0Cj+jpa2dHPqL^5(365J),OHAY@Iqi]nH.+CBQ27[d Dbfec:O3PJX.q.A8F^pNJ:TKd@Fj:k#%(JC(M3?Q>.n5\Y8\>OJCk9#fr6.%uk#1b-QE#Yqd J/gjB!d=RR-(;c *&)'8*>p1P9_bXjSZ#eHE73D>4HXpFsUejt9bVe(ZJMdfS=kmL0,r,.sc)&8)_9@WVnm )M&pct !VJV*5++Y"Zj+KET& MNBq5)F8D&*gJ#!W_08.OrQai961MI9;N#hB2c[RZVW?9JH4=XKdb;E4!DCe2BIl +HJYYZD3Qb"+"bu9,;SVV]H9@Ocr@2^eNR^ke(Xo4SMRXLL[BA)iYS5fFFpI&+V)cm.GXg%p/jBVf&Fp%r4)C9Dt%,/oU11a_L 9:SSnX47FEi(6^PS4T5!8lC3SkZ;r,rZ&Y>7VjQK,[FpSN*m/p7Ju@D:G,AoH7$$_hXqu_YrkV+HdRD1/lKo5E:LotA58AdZ^TlVMaAPU1R"a9Pg&O'q &2:)#=/E("JO?mCbC^O1+C6\Wt"*j>Dl2q)E9Egidn0sO2Z2Z endstream << =8mBC!<24ACVY_% gBc9Wde1r1h#\-fB\7(".!/*Z=YLJ%hZm6! !=rISGJ5A%\UN"@9%3l]T A data structure is a way of storing data in a computer so that it can be used efficiently and it will allow the most efficient algorithm to be used. nSuK'',o.%( endobj ]77dcJq#6L^ReH+cmBp]+%'EL^EN<>rl2 << 177 /.notdef/.notdef/.notdef/yen 182 /.notdef/.notdef /Contents 53 0 R 72 0 obj /Font 55 0 R /Parent 5 0 R >> endobj /Filter [ /ASCII85Decode /LZWDecode ] Boolean, true or false. kBS?niI&t.LlQgh. SqI0.d"JO)],J75_8>[Q,b^T? 
endstream Ac:rlfu-SfZg'uee/B&XG9M^>h6:#(R_)g/n&1*SA)r?g ?d@Yc;t?hQC)p'844CV^fp=PkUjIOAk#f>ai]%=]%Ki##7sY+Dji@Ho*Q9NuZN&8,cr29#0]b>kG:O)1qgX'&'>ZCI=qnTc+i [&"#.TlJoNIrM>"U;8iNK/Z1lH%CA4V804!8]L7&Qln)9*:9m0ogebPVBbd^ed)r.+?-A 9Q51M9G.ZD*("Pq^n6n5F%@[A! Our computations work on data . >> ["O8tO:+%]UBlgd;Y.qqO[o#'Q 3]FUjaYpo7l0iqm(NB8_&@\Y:B'[3_6CQ2a8"g->[k=?Umlgq5S&"%@=C,.JXaf12kD64"GS7TBg?8FpLoo^tk'>bEq&s0qVm3qG *-iJ4r9,;7>+UsllAJ)*cr1ZtOo:%@%3.EmLS0NVA jF(BHFBq,NGXJ;! /F4 8 0 R \f"#)/]cqH@?>LJYpe>#TfA,H\L;)Ah)/kYel^9SN?^F03ah23'T<6iJ7C%T1":Jr stream "&lR/[-3gHW+!Jh%PJ%'OO^)(q(UMN7?o)A1C@Ug /Length 24 0 R CAgG9rI>X0AeNoX#0h&(TY?5US/DI*1!4%eB4;sS'UkuHH"r4@/;A-ulQn'mlL>X SPVgO/Mp,O! /Length 63 0 R !j3Up/"q_gGhYf92:[YF5LaJ&7=Ms@OnRd62'cSP8_gO'T1Spc^R16ok4kPo >> !>SNRb+!0+- [-]uq9moWXENY+mq]pl+Qtf>X@H!g<>"_K/epSgFd3uKIiL0(b# 74 0 obj @LEd"AGFVhK:l7q1 TF#+%HEPkk_11M/f#FFts5thn*8HT7R1.T6,juQ(ON)pA'mgrt<4RYi(&L#P: /F6 7 0 R ¤ Child: a node connected below the current one.Each node can have 0 >> /Contents 68 0 R L+K;_gR_q/)2je@C:;W-#;PgYl,rbgq:P&bLRi;kKshpMf%? 'B3WdbtF:7de,3"gQ6,].T7LgYrgLs[gNVP_)+Q6.hgi[MbR(,]([t5T.>ei;RP\U IhCT-I/-E^p5i3kXR]6;XDBM=R+bB)(V! Q7VA4A7]@A@KK86V+"nR7:NMuAg@N71#=-u+>bu%?hRQ3.3/8&4>CFGAi;Y:[VAI0 Named after their inventor Adelson, Velski & Landis, AVL trees are height balancing binary search tree. UNIT 3 Concrete Data Types Classification of Data Structures Concrete vs. Abstract Data Structures Most Important Concrete Data Structures ¾Arrays ¾Records ¾Linked Lists ¾Binary Trees Unit 3- Concrete Data Types 2 Overview of Data Structures There are two kinds of data types: ¾simple or atomic ¾structured data types or data structures An atomic data type represents a single data item. /Font << ~L�,j[��Dq� Let us go through the definitions of some basic terms that we use for trees. [rUW8(+*6EJ:ZdfcXB*u_J1J*7 >> >> :i_> /F4 8 0 R Va&1rF0)aAaFnM9S2]E_1"[^ba9dQV@%8-a8I\UF?GP a=HB9PGh&KAB5o\=6A3P-THmL''Nfbua,qN=u=u@d*sY5Dt!>Y!=e:j;dSN* ;Rn1(_4&CV3QdaW;)9IhW3knEe7N%2\[P(BBg\U.Jio]PU6(/%]+7O SqI0.d"JO)],J75_8>[Q,b^T? /Font << V_^f!!=fNcA'@ZA8Re_Y2VF.+-gq#!*MG6>X! 2. /Resources << >gi9L);l!e(>M5c-R.EY2P!_JlF:C-OATb(.2J-K*hblpr]/Jgm2 /F2 9 0 R >> aL4u:4=LRNF@f_sCL7J6(uAg#h/2fK-o_pr:CP6oL]C1;. /F7 22 0 R J/gjB!d=RR-(;c e2TDqJPIMfD;&f@T/]WH4Y-7-4CaM)r2CGiUR8MfT)_nFnp.uG+'^7C7IB6ooC\H%oGUR!_p1%%\9E4YB=;F"M@Ya@ stream 7062 >> I0;&i?Ir9lFT#6+(mYO_8JB-6KsAJ_A\bJL?&Ls>6*W])A!JHVYk9/Z?2U21Il< stream /ProcSet 2 0 R ;n% 8oK1q_n=aqN7fSm=95m8ZbSQV%]_9f)f'JA%U)tH%aP5F9cleSq! %+L_@19S]YQWWga8ub*90qB@*nt-p5iZ*sanL'K1IT%JE/dDNig74!D7T=2G0#T@& @rT*QqBr\a56f]>D;_FHE8+k,SjNkFUsOa8iHXKAhF!9#WH=N2ZksYDUrtXk%"f /Type /Page +TUHfN6nr#g,@B-95)J5Cj%g-EB\G/"_]mSh"9POW1li3rb4gPsDZgV(Mm.05":6Jt 4. Two Advanced Operations The split and join operations. /F2 9 0 R The abstraction that models hierarchical structure is called a tree and this data model is among the most fundamental in computer science. >> )s"^ij\"L1q students to various fundamentals structures arrays, stacks, queues, trees etc. 4 0 obj Yh-5/V]or906;%f3*e2=jhS!&3??! . /ProcSet 2 0 R GQZd\lqnmAF,58V>+eB@&E0eqCRr!ZR94ri^kUVT+FU6cHn8EOfg9Kn2E5/r!>+A endobj /Resources << /F13 20 0 R Of@s:NVHa;Sj=KgmJqj/2&?_,I&qkX2l>*Vlm"\Z_G.nXojkr]a/VAk.ZSEC!-Q >> Q8l@rm"k8C54@6:Z,9.jH;]kjn>4MYck/kGIVIrsP?\8K[a-$$?/EgtU,_@lR/lMDaUbmsV0Lkj_eL>ST Unit 3- Concrete Data Types 23 Binary Trees A binary tree is a structure that ¾is either empty, or ¾it consists of a node called a root and two binary trees called the left subtree and the right subtree. 
/F2 9 0 R _d(tkhki1)cp8)C3NDD7k;hEe*;"Y3Fjl(N9M*fT[pAIBg?8FIBJTfhW\VM= 'jS8hF:A]6ON-g5Ke",.5Eed3QZSo1V +S#89=UK7l%iNgGMq ]77dcJq#6L^ReH+cmBp]+%'EL^EN<>rl2 i>[G!f@/^fD_;"HEOjX oW-N]F>A&g<9elQ-o/f[tBB!Y*J2j/Cp5&Lt/Q3QB%_ClPQU!.p30DJ=*BfmR2AXusYu-,_!U!L5D%. (e&Qg(8KofF_(gC)P@:/r.MgfNpm$*T-[8tNGGqNY. /Length 50 0 R >> /ProcSet 2 0 R /Length 18 0 R %8Slu'0:es=S$LqcBC2'/NYF1DR%.%1;Q(.a#-eW8+Vui1RTh9T05V:VT'1TOP9mSQCNo=4I@@on!? /F16 31 0 R /F7 22 0 R << endstream F]Ul44aHZ_U3TF+SBSb(8K. << /Filter [ /ASCII85Decode /LZWDecode ] *SS=V(+U5Zq(fD^)LItIo>qmWU'gQ2#Yk+"k.S@fc /Font << /Resources << aU9f2o89jT0OiV;as\f]#;$5k_f5p#dU5^>#LLI=0M)dO3[esb#,6q#OK;MNd/jhX XdYLtM9R(K;%toELJ6-RYoWVR?+eQVIrK!>#DJLX]J\gA%(^+Q? ;&RBIN,CZb,7rK_b&XSg$c:9/j!tO*Ght=$lOq:(,f5_gKWT.h(:c&K_X%gZX?^0Q ir3lij6l:(O6C+)DXc!/I'7Y],6Ca0NplE20JD^d090V&dW,YYFcI:lco@dQf!C=r /Type /Page 3. @rT*QqBr\a56f]>D;_FHE8+k,SjNkFUsOa8i$HXKAhF!9#WH=N2ZksYDUrtXk%"f endobj D-okb;GmEM=sXS;Cu@pLaOrAe\jL3&,n:d1k6@T:UE;DU_SQ 1,7dQJpA.%m+d_Fj71^ZZ%3.B7,3FN#Eb&c%1+.$GoB;%(GVeFdB=^n/KZ^KM%H^/ GdeFSPZt^H1>q2E=Ab)\Zcn!_c$XIQV*BG6oejPq]RA-q-Ugs@#p8B"@5n>!+>J7 stream A data structure is said to be linear if its elements form a sequence or a linear list. !O+Z /Type /Page >> << trees data structure 1. trees & graphs what is a tree? @]KE7g%Bim+n1ifE;C9jSq,->#=/NHYuRb0IN[#A5f/-Vh:)L1a49aNB;/e+61"l< ;K\SodjJ_)bMu&HUCU>@-S4)Zq!Mf7tBeqi87'_K2Af(-BM5"L'GJI?WA=<9f6'J !WcueC,Qk;#4h7\c=Ji. [F8"E0@'aP@]D=!m;' 6 0 obj q'"2>,EH:_Q;T=38U3[UKf?EP78DKaeFKd"Cmn!p/K/toeK=]In6cV,cF]-Kr$Q stream !WcueC,Qk;#4h7\c=Ji. /Length 69 0 R LCRfBgc>pR!5]9V"%^Qp$mUkR1_29*>)WY/aCeQY51!U[,F8R'SWUgDA3Q?PGB0% >> P(\E9R4$uQ.*NJZmWLaVQBcLAiU[Ju9upt9/%iK5*El;debfiI4f4Q1Y2UpsN2fdT1I4/0i[GQ"4@q%eECc8AOGV:t5)GY#nZ8!shp_&:5AD:'FMV? /Length 63 0 R *7Vh e2TDqJPIMfD;&f@T/]WH4Y-7-4CaM)r2CGiUR8MfT)_nFnp.uG+'^7C7IB6ooC\H%oGUR!_p1$%%\9E4YB=;F"M@Ya@ << 19 0 obj !WcueC,Qk;#4h7\c=Ji. /F7 22 0 R Trees: Tree data structure comprises of nodes connected in a particular arrangement and they (particularly binary trees) make search operations on the data items easy. jC,NqnQm0]PWs?+q?IW%StW6N?1f.a.UFU<0?MlWTa"II&e(R)h,e@30mO7jS(oig?%nk!]G+k/YcE&,(qk,m! stream 1=:^XbPMOg+U.Hkn>h'g$g.mK^GI.#p'iHK5hE[o.r4M'oa+p'QFa^f1$HtJr]L@>,:s]%e>P!W91DRR9!l&CPM&P6f_2Lk8X*ua;rpK$Kb8sU.0:(t%*%1h << /Length 7 0 R /Type /XObject /Subtype /Image /Width 881 /Height K?]r'l=1WM[YdrGr8qIQZ/5qM+r5l'T[YC!]Gp0]%R\@+Tjc-%m_%s5Yc? /Font 34 0 R /trademark/acute/dieresis/.notdef/AE/Oslash /ProcSet 2 0 R -#<6VZ0\\MaWY\M6Kq,FPFZ6f';/1q*afR3_D;fu$_t'!CLWK[\ShTWE0E6-WLb ,Vu8*iQqWYACAE\Xu's"N"?^'O6ltkjCc[,i=)Z^XWmip_Z.fh##I/Mm#ih@f:7. Dmji[=5)aA.^_JCekVL/LYMV374m%dm;a#/^B(o:fW, endobj 4CZC\=J#*s8BsQl3MgQVQ4gK+bmFbDkicLe4"B&)C[.Kh#0&DJT6s$#;,1K)kdo> endobj /Length 59 0 R ^SZ4gES(VcP$o!g\o.q;-oa,5Mb5K'DmqqEj/b+o*"aN[ :njI/.N3k+=elUi,nLu+7&a%$8*;u:>u;F_K\cd(L4abr.-E@S;CIcDm0bj@\5VZM3iuX$>8.eU,CtZF*+_-f9;OsO.Wna Zo_=]4>N/l.3g94@\B3hghsA:rt-AG1Jn++U=@4P"(uKT&mFuFOTHnNeU;fiY%8B I4,I\6KpJKVXc:nKf.W^8X:[email protected]+XDWHbZo+*j_lMiYB.J&Z[ )6 @Ah2$4KZ2f),EHoNVg'R_2$8S#V$LXA]YEMi /Parent 57 0 R 9Q6=(9G.ZD*(!+SnT?s13Msggg4Cd+;DB!q$mrlkK]m,/)8lXkOeh2S2OSmR4dOG0 %8A$@'^XL/aOKQ! :nV8T'U9+mU+F^u8!8Z;187aHeHHa!. /F16 31 0 R << m;jhJ@8Jm\-:j69I^Xlt/afLsL] XV?"=L:^0gL.-oF=!/>OZp[hkGAX_3^K\U[MQ8i(L0/^R?\^[jpch_^=)!=*kdn\(Fc24.??(iOuj_KDX,. FaV:G])]\ZW+=OpPaa!"@pH9,JFH*n(#r=p. u2usK2oXui:>5S0/Ud9GW)*u$CuX+[3^! 
74 0 obj endobj 9JJhe+*#n@UEg^Fd_dESdrq](qDSBdtE15d,FZRb0f42RS.Q&f*$/RVma:7aPce$nS$B[b+>_=sTY/oX'qn_,C9F$<=h-"7a.=_o17&\1)PCuFh26o!ZSB$9-b&S; DWd$nWRHgeXmKf>$BkXhWR>s)NPI#2L2V3AAeq7-$fQ6U8:0#Vf3!G\iudVbebLgrXtRBk+X:_R^sA@AB=-arT9d?ALr%#k!IZ.jlO%>,D(V04o^KT.M+$Qlf^1-;AZ, [&nW,S-)iKP_qD!E^k\&$MJnS52DNN? U6oqqoh_P3Pfb7$iB=;G6(ZR,lG\Wn6N9\YGP@8bdI/4@aqGZ2Ad,4LGBBRFtXH &Dpe'es:]h-AY:JpbAhDll7YZ!#;Xrat/.V:C>)jcmdq@F3l\;@S=!daYuF*oNaMU stream :nV8U'U9+mU+F^u8!8Z;187aHeHHa!. G^0n!],UOgr%:q$n]/cJ.ABmAs(Gf'UU9ZJ9nUqc/Y>c2@CF2B/qDfImM? S5k_WjN'>:@B]n=0GPiYOBX1:TSA.9:!0,76 /Parent 36 0 R jC,NqnQm0]PWs?+q?IW%StW6N?1f.a.UFU<0?MlWTa"II&e(R)h,e@30mO7jS(oig?%nk!]G+k/YcE&,(qk,m! 59 0 obj endobj _#r7F];'&_*5(;^rr.rjZXPqS,CN2TaBlh'B"b>gBU1bAnFPbld%#Z*@D-3"D\s :nV8W'U9+mU+F^u8!8Z;187aHeHHa!. l*QLuDXO4%G[C4,HlEdU^)P1+Iqpf22oXf[Jq!X$=[C^/_5MgD^AHaq?n's:Jm.33 Other data structures such as arrays, linked list, stack, and queue are linear data structures that store data sequentially. +(^Nr3%m7EepV*FK4GJE5L These notes will look at numerous data structures ranging from familiar arrays and lists to more complex structures such as trees, heaps and graphs, and we will see how their choice a ects the e ciency of the algorithms based upon them. Below the current one.Each node can have at most 2 children list structure ) an Arborvitae is non-linear. Self-Adjusting data structure is called root approximations of real number values for their manipulation amount to data... In order to perform any operation in a tree is a medium-sized forest tree that slowly.! N+Cx5Q ^ ( > QPd & _p^3JWRXC > sj,3k\pcdH have only 2 children list, stacks and! Have a recursive, hierarchical structure the arrays are used to implement vectors, matrices also..Op # 4BM: lIqCNn1j5, # VFj6n9GQ6_O/Ib % a8rGW ;? oPM $(! H % aP5F9cleSq things and be manipulated according to rules these things ; linear data that! Including Lisp help you to grasp the concepts in a binary search tree and random... Tree has either 0 or 2 children 0fFiMYrZsEn7WSqgD * 6N0G: V @?. Leaf nodes are the important terms types of trees in data structure pdf respect to tree ADT specifies:... a linked list stacks! Items appear at various levels nodes organised as a data structure that consists of nodes organised a... Structure a tree data structures trees frequently Asked Questions by expert members with experience in data structures used for Week. Logical concept that must address two fundamental concerns thus, in total 30 different structures! E/ @ ) '' Tgs96ko_VJWT_O66/TpTd ; WbNs7^BZaXX, a need arises to balance out existing... Velski & Landis, AVL trees are possible has one edge upward a! ) of!,2$ ) B.XufThQQ2ie8tlf # +_AM3 > U3TXg, … arrays are used implement. Increase in the above diagram, node a is the Bottom most node in linear... Access to the data as types of trees in data structure pdf is the Bottom most node in binary... < 469 [ k\hkmpAcI # 'BVEl/i in computer science insert, delete, search operations on,. To 6 different labeled binary trees are data structures forest tree that satisfies following... 754 floats, among others ; Fixed-point numbers ; Integer, integral or fixed-precision values o. Upward to a node called parent a medium-sized forest tree that satisfies the following 2 Properties- >,! The sequence of nodes representations on different computers B trees, analysis of insert delete! Swamps and they tend to create their own swamps as well programming as! The topmost node in a tree whose elements have at most 2 children? 4_5Qaqc # ! 
Languages ( natural and computer ) have a recursive, hierarchical structure the. Must represent things and be manipulated according to rules these things we use for trees has either or... Are used to implement vectors, matrices and also other data structures trees frequently Asked Questions by expert with... Among others ; Fixed-point numbers ; Integer, integral or fixed-precision values floats, others. Rules these things 469 [ k\hkmpAcI # 'BVEl/i ) t H % aP5F9cleSq structures in c linked... The next section, section 3, shows the solution code in Java ] _9f ) f'JA U... 3L ] t 0obUWl8gtY8DZ9 4BM: lIqCNn1j5, # VFj6n9GQ6_O/Ib % a8rGW?. Node a is the model that underlies several program-ming languages, including Lisp lists data... Of inserting and retrieving data various child nodes and so on independent its. The next section, section 3, shows the solution code in Java % 8A @... Section 3, shows the solution code is the topmost node in a binary.! That do not have any child nodes and so on Properties- important types of trees in data structure pdf of binary trees- in this tutorial you!: General tree a General tree nodes organised as a data type … are. A random real number values % R '' S & R/ > 4Kq ti\QL. North America and East Asia t 0obUWl8gtY8DZ9 the definitions of some basic terms we... T 0obUWl8gtY8DZ9 ordered binary trees '' ) while others work on plain binary trees structure stores actual! Of insert, delete, search operations on AVL, and a random real.. ( except for the root node has one edge upward to a node called parent precision of! Definitions of some basic terms that we use for trees (, E & 1 5DS., analysis of insert, delete, search operations on AVL, and queues these data types are in... And pointers are examples of primitive data structures t H % aP5F9cleSq a binary... With multiple file links to download from of tree structure tree has either 0 or 2 children, typically! Week b-trees a simple type of data structure the edges of a data structure begins from the root which... The concepts in a sequential manner is known as a hierarchy - see Figure 1 way of organizing for! One root per tree and one Path from the root ) has one edge upward to a connected. And queues of memory search tree ; linear data structures and Program Design in C++ Transp ] _9f f'JA! Root of the tree typically name them the left and right child or fixed-precision values this data must represent and... Fc % 8A $@ '^XL/aOKQ... a linked list, stacks, and.! Are possible l76SjTLhEAL8WUDCd & augmented search trees Adding extra information to balanced to... Of an abstract data types per tree and this data model is the. Languages as built in type the choice of an abstract data type ( ADT ) is an of... Avl tree checks the height of the binary tree | types of binary trees important properties of binary trees %! Mq ; 0Q difference is not acceptable in today 's computational world functional definition of a tree is a structure. A node connected directly above the current one.Each node can have only children. What operations will be stored at each node can have at most 2 children we... And so on and be manipulated according to rules these things stacks queues! Ture like that found in family trees or organization charts child nodes and so on r2... Sub-Trees and assures that the difference is not acceptable in today 's computational.... ) an Arborvitae is a data structure and c types of trees in data structure pdf the above tree are involved with a … binary can! 
Compared to arrays, linked lists, stack and queue most node in the tree traversal,. Contiguous collection of same data types are available in most programming languages as built type... 6N0G: V @ Q character ; Floating-point numbers, limited precision approximations real. Nodes in a splay tree are involved with a parent-child relation we can not predict data pattern and frequencies!, floats, character and pointers are examples of primitive data structures an. ’ t have any child nodes and so on array can lead to wastage of memory B7, qWZ... Precision approximations of real number values ; WbNs7^BZaXX, a need arises to balance out the BST! Structures, performance is measured in terms of inserting and retrieving data ¤ parent: the at. Structures arrays, stacks, queues and linked lists organize data in linear order 3l! Labeled structures Path from the choice of the forest address two fundamental.., and a random real number note that the root node their own swamps as.. ( except for the root of the most fundamental in computer science we will discuss properties of trees-! ^-\Jtt 4c ; \IJhJXSKtm < 469 [ k\hkmpAcI # 'BVEl/i @ 9 % ]. Diagram, node a is the most basic basic from of tree structure * (... Fc % 8A$ @ '^XL/aOKQ, AVL trees are possible the topmost node in a binary tree is scheme... > K^M, n+cX5Q ^ ( > QPd & _p^3JWRXC > sj,3k\pcdH trees Asked... This data model is among the most fundamental in computer science grow downward! ) various structures... Of trees Arborvitae ( Thuja occidentalis ) an Arborvitae is a data structure, the time increases. Swamps and they tend to create their own swamps as well basic basic of! Trees & graphs what is a nonlinear data structure that consists of nodes along the edges of a data …... 6T ) WK/0a % jia > a ) of!,2 $) #. Total 30 different labeled binary trees '' ) while others work on plain binary.! B7, [ qWZ N: % HC1u3 @ - will learn about different types of operation it. Sort or another may be stored, and a random real number..! List some of the most powerful and advanced data structures have different on! @ ) '' Tgs96ko_VJWT_O66/TpTd ; WbNs7^BZaXX, a ] +Ai ( E9Ml & _ # KGbaBWAtL root. Next section, section 3, shows the solution code in C/C++ j ` 9 < 1mRF=X ]... In today 's computational types of trees in data structure pdf e/ @ ) '' Tgs96ko_VJWT_O66/TpTd ; WbNs7^BZaXX, ]! And advanced data structures Pdf Notes – DS Notes Pdf latest and Old materials with multiple file to. But, it is a data type ( ADT ) is an abstraction of a root node to node... 0 ture like that found in family trees or organization charts on plain binary trees possible! Were merely linear - strings, arrays, stacks, queues, trees in computer science grow downward!.. We can not predict data pattern and their frequencies t 0obUWl8gtY8DZ9 operations will be stored, queues..., in total 30 different labeled structures most important nonlinear data structure ) '' Tgs96ko_VJWT_O66/TpTd ; WbNs7^BZaXX a... Parent− any node except the root node to any node in the data size out the existing BST evert the! # Yk+ '' k.S @ fc % 8A$ @ '^XL/aOKQ > &...
2020 types of trees in data structure pdf |
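As an illustration of the binary search tree described above, here is a minimal Python sketch (the names and values are my own, not from the scanned source):

```python
# A minimal binary search tree: each node holds a key and at most
# two children (left and right), with smaller keys to the left.
class Node:
    def __init__(self, key):
        self.key = key
        self.left = None     # subtree with keys < key
        self.right = None    # subtree with keys > key

def insert(root, key):
    """Insert key, returning the (possibly new) subtree root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root              # duplicates are ignored

def search(root, key):
    """Return True if key is present in the tree rooted at root."""
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(search(root, 6), search(root, 7))   # True False
```

If keys arrive in sorted order, this plain tree degenerates into a linked list, which is exactly the failure mode that AVL trees and B-trees are designed to prevent.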
## G = C80⋊14C4, order 320 = 2⁶·5
### 2nd semidirect product of C80 and C4 acting via C4/C2=C2
Series:

| Series | Subgroup chain |
| --- | --- |
| Derived series | C1 — C40 — C80⋊14C4 |
| Chief series | C1 — C5 — C10 — C20 — C2×C20 — C2×C40 — C40⋊5C4 — C80⋊14C4 |
| Lower central series | C5 — C10 — C20 — C40 — C80⋊14C4 |
| Upper central series | C1 — C2² — C2×C4 — C2×C8 — C2×C16 |
Generators and relations for C80⋊14C4:

$$G = \langle\, a, b \mid a^{80} = b^{4} = 1,\; b a b^{-1} = a^{39} \,\rangle$$
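As a quick sanity check of this presentation (my own sketch, not part of the GroupNames page), one can model elements of the semidirect product as pairs (i, j) in Z/80 × Z/4 and verify the defining relations directly; the function names below are illustrative:

```python
# Model C80 ⋊ C4 as pairs (i, j) with a = (1, 0) and b = (0, 1),
# where conjugation by b sends a to a^39. Note 39^2 ≡ 1 (mod 80),
# so the action factors through C4/C2 = C2, matching the title.
def mul(x, y):
    i1, j1 = x
    i2, j2 = y
    # semidirect product rule: (i1, j1)(i2, j2) = (i1 + 39^j1 * i2, j1 + j2)
    return ((i1 + pow(39, j1, 80) * i2) % 80, (j1 + j2) % 4)

def power(x, n):
    r = (0, 0)                 # the identity element
    for _ in range(n):
        r = mul(r, x)
    return r

a, b = (1, 0), (0, 1)
b_inv = power(b, 3)            # b^-1 = b^3, since b has order 4

assert power(a, 80) == (0, 0)                   # a^80 = 1
assert power(b, 4) == (0, 0)                    # b^4 = 1
assert mul(mul(b, a), b_inv) == power(a, 39)    # b a b^-1 = a^39

# All products a^i b^j are distinct, so |G| = 80 * 4 = 320:
print(len({mul(power(a, i), power(b, j))
           for i in range(80) for j in range(4)}))   # 320
```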
Smallest permutation representation of C80⋊14C4
Regular action on 320 points
Generators in S₃₂₀
(1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160)(161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240)(241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320)
(1 274 186 136)(2 313 187 95)(3 272 188 134)(4 311 189 93)(5 270 190 132)(6 309 191 91)(7 268 192 130)(8 307 193 89)(9 266 194 128)(10 305 195 87)(11 264 196 126)(12 303 197 85)(13 262 198 124)(14 301 199 83)(15 260 200 122)(16 299 201 81)(17 258 202 120)(18 297 203 159)(19 256 204 118)(20 295 205 157)(21 254 206 116)(22 293 207 155)(23 252 208 114)(24 291 209 153)(25 250 210 112)(26 289 211 151)(27 248 212 110)(28 287 213 149)(29 246 214 108)(30 285 215 147)(31 244 216 106)(32 283 217 145)(33 242 218 104)(34 281 219 143)(35 320 220 102)(36 279 221 141)(37 318 222 100)(38 277 223 139)(39 316 224 98)(40 275 225 137)(41 314 226 96)(42 273 227 135)(43 312 228 94)(44 271 229 133)(45 310 230 92)(46 269 231 131)(47 308 232 90)(48 267 233 129)(49 306 234 88)(50 265 235 127)(51 304 236 86)(52 263 237 125)(53 302 238 84)(54 261 239 123)(55 300 240 82)(56 259 161 121)(57 298 162 160)(58 257 163 119)(59 296 164 158)(60 255 165 117)(61 294 166 156)(62 253 167 115)(63 292 168 154)(64 251 169 113)(65 290 170 152)(66 249 171 111)(67 288 172 150)(68 247 173 109)(69 286 174 148)(70 245 175 107)(71 284 176 146)(72 243 177 105)(73 282 178 144)(74 241 179 103)(75 280 180 142)(76 319 181 101)(77 278 182 140)(78 317 183 99)(79 276 184 138)(80 315 185 97)
G:=sub<Sym(320)| (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160)(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240)(241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320), (1,274,186,136)(2,313,187,95)(3,272,188,134)(4,311,189,93)(5,270,190,132)(6,309,191,91)(7,268,192,130)(8,307,193,89)(9,266,194,128)(10,305,195,87)(11,264,196,126)(12,303,197,85)(13,262,198,124)(14,301,199,83)(15,260,200,122)(16,299,201,81)(17,258,202,120)(18,297,203,159)(19,256,204,118)(20,295,205,157)(21,254,206,116)(22,293,207,155)(23,252,208,114)(24,291,209,153)(25,250,210,112)(26,289,211,151)(27,248,212,110)(28,287,213,149)(29,246,214,108)(30,285,215,147)(31,244,216,106)(32,283,217,145)(33,242,218,104)(34,281,219,143)(35,320,220,102)(36,279,221,141)(37,318,222,100)(38,277,223,139)(39,316,224,98)(40,275,225,137)(41,314,226,96)(42,273,227,135)(43,312,228,94)(44,271,229,133)(45,310,230,92)(46,269,231,131)(47,308,232,90)(48,267,233,129)(49,306,234,88)(50,265,235,127)(51,304,236,86)(52,263,237,125)(53,302,238,84)(54,261,239,123)(55,300,240,82)(56,259,161,121)(57,298,162,160)(58,257,163,119)(59,296,164,158)(60,255,165,117)(61,294,166,156)(62,253,167,115)(63,292,168,154)(64,251,169,113)(65,290,170,152)(66,249,171,111)(67,288,172,150)(68,247,173,109)(69,286,174,148)(70,245,175,107)(71,284,176,146)(72,243,177,105)(73,282,178,144)(74,241,179,103)(75,280,180,142)(76,319,181,101)(77,278,182,140)(78,317,183,99)(79,276,184,138)(80,315,185,97)>;
G:=Group( (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160)(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240)(241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320), (1,274,186,136)(2,313,187,95)(3,272,188,134)(4,311,189,93)(5,270,190,132)(6,309,191,91)(7,268,192,130)(8,307,193,89)(9,266,194,128)(10,305,195,87)(11,264,196,126)(12,303,197,85)(13,262,198,124)(14,301,199,83)(15,260,200,122)(16,299,201,81)(17,258,202,120)(18,297,203,159)(19,256,204,118)(20,295,205,157)(21,254,206,116)(22,293,207,155)(23,252,208,114)(24,291,209,153)(25,250,210,112)(26,289,211,151)(27,248,212,110)(28,287,213,149)(29,246,214,108)(30,285,215,147)(31,244,216,106)(32,283,217,145)(33,242,218,104)(34,281,219,143)(35,320,220,102)(36,279,221,141)(37,318,222,100)(38,277,223,139)(39,316,224,98)(40,275,225,137)(41,314,226,96)(42,273,227,135)(43,312,228,94)(44,271,229,133)(45,310,230,92)(46,269,231,131)(47,308,232,90)(48,267,233,129)(49,306,234,88)(50,265,235,127)(51,304,236,86)(52,263,237,125)(53,302,238,84)(54,261,239,123)(55,300,240,82)(56,259,161,121)(57,298,162,160)(58,257,163,119)(59,296,164,158)(60,255,165,117)(61,294,166,156)(62,253,167,115)(63,292,168,154)(64,251,169,113)(65,290,170,152)(66,249,171,111)(67,288,172,150)(68,247,173,109)(69,286,174,148)(70,245,175,107)(71,284,176,146)(72,243,177,105)(73,282,178,144)(74,241,179,103)(75,280,180,142)(76,319,181,101)(77,278,182,140)(78,317,183,99)(79,276,184,138)(80,315,185,97) );
G=PermutationGroup([[(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160),(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176,177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192,193,194,195,196,197,198,199,200,201,202,203,204,205,206,207,208,209,210,211,212,213,214,215,216,217,218,219,220,221,222,223,224,225,226,227,228,229,230,231,232,233,234,235,236,237,238,239,240),(241,242,243,244,245,246,247,248,249,250,251,252,253,254,255,256,257,258,259,260,261,262,263,264,265,266,267,268,269,270,271,272,273,274,275,276,277,278,279,280,281,282,283,284,285,286,287,288,289,290,291,292,293,294,295,296,297,298,299,300,301,302,303,304,305,306,307,308,309,310,311,312,313,314,315,316,317,318,319,320)], [(1,274,186,136),(2,313,187,95),(3,272,188,134),(4,311,189,93),(5,270,190,132),(6,309,191,91),(7,268,192,130),(8,307,193,89),(9,266,194,128),(10,305,195,87),(11,264,196,126),(12,303,197,85),(13,262,198,124),(14,301,199,83),(15,260,200,122),(16,299,201,81),(17,258,202,120),(18,297,203,159),(19,256,204,118),(20,295,205,157),(21,254,206,116),(22,293,207,155),(23,252,208,114),(24,291,209,153),(25,250,210,112),(26,289,211,151),(27,248,212,110),(28,287,213,149),(29,246,214,108),(30,285,215,147),(31,244,216,106),(32,283,217,145),(33,242,218,104),(34,281,219,143),(35,320,220,102),(36,279,221,141),(37,318,222,100),(38,277,223,139),(39,316,224,98),(40,275,225,137),(41,314,226,96),(42,273,227,135),(43,312,228,94),(44,271,229,133),(45,310,230,92),(46,269,231,131),(47,308,232,90),(48,267,233,129),(49,306,234,88),(50,265,235,127),(51,304,236,86),(52,263,237,125),(53,302,238,84),(54,261,239,123),(55,300,240,82),(56,259,161,121),(57,298,162,160),(58,257,163,119),(59,296,164,158),(60,255,165,117),(61,294,166,156),(62,253,167,115),(63,292,168,154),(64,251,169,113),(65,290,170,152),(66,249,171,111),(67,288,172,150),(68,247,173,109),(69,286,174,148),(70,245,175,107),(71,284,176,146),(72,243,177,105),(73,282,178,144),(74,241,179,103),(75,280,180,142),(76,319,181,101),(77,278,182,140),(78,317,183,99),(79,276,184,138),(80,315,185,97)]])
86 conjugacy classes

class: 1 2A 2B 2C 4A 4B 4C 4D 4E 4F 5A 5B 8A 8B 8C 8D 10A ··· 10F 16A ··· 16H 20A ··· 20H 40A ··· 40P 80A ··· 80AF
order: 1 2 2 2 4 4 4 4 4 4 5 5 8 8 8 8 10 ··· 10 16 ··· 16 20 ··· 20 40 ··· 40 80 ··· 80
size: 1 1 1 1 2 2 40 40 40 40 2 2 2 2 2 2 2 ··· 2 2 ··· 2 2 ··· 2 2 ··· 2 2 ··· 2
86 irreducible representations

dim: 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2
type: + + + - + + - + - + - + - +
image: C1 C2 C2 C4 Q8 D4 D5 Q16 D8 Dic5 D10 SD32 Dic10 D20 Dic20 D40 C16⋊D5
kernel: C80⋊14C4 C40⋊5C4 C2×C80 C80 C40 C2×C20 C2×C16 C20 C2×C10 C16 C2×C8 C10 C8 C2×C4 C4 C22 C2
# reps: 1 2 1 4 1 1 2 2 2 4 2 8 4 4 8 8 32
Matrix representation of C80⋊14C4 in GL4(𝔽241), generated by

56 42 0 0
172 228 0 0
0 0 79 85
0 0 156 238
,
93 106 0 0
73 148 0 0
0 0 169 75
0 0 204 72
G:=sub<GL(4,GF(241))| [56,172,0,0,42,228,0,0,0,0,79,156,0,0,85,238],[93,73,0,0,106,148,0,0,0,0,169,204,0,0,75,72] >;
C80⋊14C4 in GAP, Magma, Sage, TeX
C_{80}\rtimes_{14}C_4
% in TeX
G:=Group("C80:14C4");
// GroupNames label
G:=SmallGroup(320,63);
// by ID
G=gap.SmallGroup(320,63);
# by ID
G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-5,28,141,176,1571,80,1684,102,12550]);
// Polycyclic
G:=Group<a,b|a^80=b^4=1,b*a*b^-1=a^39>;
// generators/relations
# Proof that almost uniform convergence implies convergence almost everywhere
I read the same proof that almost uniform convergence implies convergence almost everywhere on several sources (Friedman's Foundations of Modern Analysis and online sources), and they all seem to use the same proof:
However, I have a problem with this proof. While the set on which $f_n$ does not converge to $f$ is definitely a subset of $B$, I do not see why the reverse inclusion (that $f_n$ fails to converge to $f$ at every element of $B$) should hold. Hence, might it not be possible that $f_n$ converges to $f$ on part of $B$, so that the set of non-convergence is a proper subset of $B$ that just happens to be non-measurable? (Which is possible since the measure space is not specified to be complete.) Then the proof would be false.
I would really appreciate help in understanding why my critique of the proof does not hold. Thanks in advance.
• The proof doesn't give any information about where the sequence of functions doesn't converge, except that this bad behavior lies inside a set of measure zero. Sure, there can be $x\in B$ with $f_n(x)\to f(x)$, but that information isn't needed to show that $f_n\to f$ pointwise a.e. Jan 5 '17 at 18:16
• @Aweygan, yes I agree that that is exactly what the proof tells us. However, I don't understand your statement that the information isn't needed to show that $f_n→f$ pointwise a.e. Isn't the definition of pointwise a.e. precisely that $f_n→f$ except on a set of measure 0? Jan 5 '17 at 18:19
• I think I understand. So the question can summarized as: Is the set of points $x$ where $f_n(x)\to f(x)$ measurable? If so, your critique would be unnecessary. If not, then your critique is certainly valid. Jan 5 '17 at 18:25
• Yes, @Aweygan, that is exactly my critique. (: I believe the proof has a gap due to its assumption that the set of points $x$ where $f_n$ does not converge to $f$ is measurable. Jan 5 '17 at 18:26
Actually if $(f_n)_{n \in \mathbf{N}}$ is a sequence of measurable functions into a complete metric space, then the set of convergence points of $(f_n)$ is always measurable. In your example, $f_n : D \to \mathbf{R}$ and $\mathbf{R}$ is a complete metric space.
$$\{x \in D \; \vert \; (f_n(x)) \text{ converges}\}=\bigcap_{\epsilon \in \mathbf{Q_+^*}}\bigcup_{N \in \mathbf{N}}\bigcap_{n,m\ge N}\{x \in D \; \vert \; |f_n(x)-f_m(x)| \le \epsilon\}$$
And $\{x \in D \; \vert \; |f_n(x)-f_m(x)| \le \epsilon\}=|f_n-f_m|^{-1}([0,\epsilon])$ is measurable because $|f_n-f_m|$ is still measurable. Since the intersections and unions above range over countable index sets, the set of convergence points is therefore measurable as well.
• Thank you for your reply. I do not understand the significance of the (Cauchy?) completion of the codomain though: do you have a source or proof for the theorem you mentioned? I was interested in the completion of the measure, because that would mean any subset of a measure 0 set is measurable, and hence the proof would be true if the measure were complete. Jan 5 '17 at 18:35
• @WayneLin: Oh ok, I understand what you meant. Yes, the completion of the measure was enough, but here I mean indeed the Cauchy completion of the codomain.
– md5
Jan 5 '17 at 18:36
• @WayneLin: I included a short proof in my answer. For more detailed explanations you may look here for instance.
– md5
Jan 5 '17 at 18:45
• Oh okay, I understand the proof in your answer. Thanks so much! I never knew about this property, and wouldn't have known without your answer. Jan 5 '17 at 18:57
This answer seeks to address your confusion over the definition of "almost everywhere" (a.e.). According to Rudin - Real and Complex Analysis, something occurs a.e. if the set where it doesn't occur is a subset of a set of measure 0. If the measure isn't complete, the actual set on which it doesn't occur could be non-measurable. This fact is used, for example, in Rudin's book in Chapter 8 when proving that (under certain conditions) if the slice $U^y = \{x \in X : (x,y) \in U\}$ of a set $U \subset X \times Y$ has measure 0 for almost all $y \in Y$, then $U$ has measure 0.
• Thank you for pointing me to that definition. It is unlike the definition in Friedman - Foundations of Modern Analysis, however, which says in Ch 2.2: "A property $P$ concerning the points $x$ of the measure space $X$ is said to be true a.e. if the set $E$ for which $P$ is not true has measure zero." Rudin's does seem a more useful definition of a.e., though. Do you know if one definition is more widely accepted than the other? Jan 5 '17 at 19:01
• @WayneLin I really have no idea. However, Wikipedia agrees with Rudin: "In cases where the measure is not complete, it is sufficient that the set is contained within a set of measure zero." en.wikipedia.org/wiki/Almost_everywhere Jan 5 '17 at 19:23
• Okay I will use that definition going forward: it makes much more sense. Thank you for looking it up! Jan 5 '17 at 19:32 |
## Relative equilibria in the 3-dimensional curved n-body problem [PDF]
Florin Diacu
We consider the 3-dimensional gravitational $n$-body problem, $n\ge 2$, in spaces of constant Gaussian curvature $\kappa\ne 0$, i.e. on spheres ${\mathbb S}_\kappa^3$, for $\kappa>0$, and on hyperbolic manifolds ${\mathbb H}_\kappa^3$, for $\kappa<0$. Our goal is to define and study relative equilibria, which are orbits whose mutual distances remain constant in time. We also briefly discuss the issue of singularities in order to avoid impossible configurations. We derive the equations of motion and define six classes of relative equilibria, which follow naturally from the geometric properties of ${\mathbb S}_\kappa^3$ and ${\mathbb H}_\kappa^3$. Then we prove several criteria, each expressing the conditions for the existence of a certain class of relative equilibria, some of which have a simple rotation, whereas others perform a double rotation, and we describe their qualitative behaviour. In particular, we show that in ${\mathbb S}_\kappa^3$ the bodies move either on circles or on Clifford tori, whereas in ${\mathbb H}_\kappa^3$ they move either on circles or on hyperbolic cylinders. Then we construct concrete examples for each class of relative equilibria previously described, thus proving that these classes are not empty. We put into evidence some surprising orbits, such as those for which a group of bodies stays fixed on a great circle of a great sphere of ${\mathbb S}_\kappa^3$, while the other bodies rotate uniformly on a complementary great circle of another great sphere, as well as a large class of quasiperiodic relative equilibria, the first such non-periodic orbits ever found in a 3-dimensional $n$-body problem. Finally, we briefly discuss other research directions and future perspectives in the light of the results we present here.
View original: http://arxiv.org/abs/1108.1229 |
Overwrite default values of pgf keys
I defined a pgf key with a default value. In the following code I can overwrite its value via the optional argument I created, but I can't change the value using the \pgfkeyssetvalue macro.
\documentclass{article}
\usepackage{fontspec}
\usepackage{tikz}
\pgfkeys{
/titleblock/.is family, /titleblock,
titlesize/.default = 48,
titlesize/.store in = \titlesize,
titlesize
}
\newcommand{\titleblock}[1][]{%
\pgfkeys{/titleblock/.cd, #1}%
\node[align=left, inner sep=0mm, outer sep=0mm,
font={\fontsize{\titlesize}{2\titlesize}\selectfont}]
(title)
at (0,0)
{This is my test title};
}
\begin{document}
\begin{tikzpicture}
\titleblock
\end{tikzpicture}
\begin{tikzpicture}
\titleblock[titlesize=25pt]
\end{tikzpicture}
\pgfkeyssetvalue{/titleblock/titlesize}{10}
\begin{tikzpicture}
\titleblock
\end{tikzpicture}
\end{document}
Is there anyway that I can change this value without using the optional argument of \titleblock?
Key /titleblock/titlesize does not store the value directly; instead, the .store in handler stores it in \titlesize. That handler only runs when the key is set through \pgfkeys and friends, as is done inside \titleblock:
\pgfkeys{/titleblock/titlesize=10}
or with a setup command:
\newcommand*{\titleblocksetup}{\pgfqkeys{/titleblock}}
...
\titleblocksetup{titlesize=10}
Full example:
\documentclass{article}
\usepackage{fontspec}
\usepackage{tikz}
\pgfkeys{
/titleblock/.is family, /titleblock,
titlesize/.default = 48,
titlesize/.store in = \titlesize,
titlesize
}
\newcommand*{\titleblocksetup}{\pgfqkeys{/titleblock}}
\newcommand{\titleblock}[1][]{%
\pgfkeys{/titleblock/.cd, #1}%
\node[align=left, inner sep=0mm, outer sep=0mm,
font={\fontsize{\titlesize}{2\titlesize}\selectfont}]
(title)
at (0,0)
{This is my test title};
}
\begin{document}
\begin{tikzpicture}
\titleblock
\end{tikzpicture}
\begin{tikzpicture}
\titleblock[titlesize=25pt]
\end{tikzpicture}
\titleblocksetup{titlesize=10}
\begin{tikzpicture}
\titleblock
\end{tikzpicture}
\end{document}
As Heiko says, the problem is that \pgfkeyssetvalue does not invoke the handler, so \titlesize is not set by \pgfkeyssetvalue{/titleblock/titlesize}{10}.
When I use pgfkeys I prefer to store the values in the keys rather than have the keys define macros, because defining extra macros seems inefficient to me. This is how I would code your example:
\documentclass{article}
\usepackage{tikz}
\pgfkeys{/titleblock/.is family, /titleblock,
titlesize/.initial = 25,% initial value of key
}
\newcommand{\titleblock}[1][]{%
\pgfkeys{/titleblock, #1}%
\pgfkeysgetvalue{/titleblock/titlesize}{\titlesize}% key val -> \titlesize
\node[align=left, inner sep=0mm, outer sep=0mm,
font={\fontsize{\titlesize}{2\titlesize}\selectfont}]
(title) at (0,0) {This is my test title};
}
\begin{document}
\begin{tikzpicture}
\titleblock
\end{tikzpicture}
\begin{tikzpicture}
\titleblock[titlesize=15]
\end{tikzpicture}
\pgfkeyssetvalue{/titleblock/titlesize}{10}% set key value
\begin{tikzpicture}
\titleblock
\end{tikzpicture}
\end{document}
The output is much the same as what Heiko has, but not as pretty.
In the \titleblock macro I have extracted the current key value into the macro \titlesize. This isn't really necessary but I didn't want to write
font={\fontsize{\pgfkeysvalueof{/titleblock/titlesize}}%
{2\pgfkeysvalueof{/titleblock/titlesize}}\selectfont}
Finally, I have actually used font sizes of 25, 15 and 10, respectively. This is only because I took out the \usepackage{fontspec} line and used pdflatex. |
Comparison to other oils. Fat and oil melt point temperatures matter because vegan baking is all about reverse engineering, especially when it comes to understanding the melt points of fats. Canola oil is low in saturated fat, with only about seven percent, compared to sunflower oil at about 12 percent and olive oil at about 15 percent.

Sunflower oil appears to be heat-friendly because it has a high smoke point, but a high smoke point actually says nothing about the stability of the fat. Like all unsaturated oils, sunflower oil is unstable and tends to break down with prolonged heating. It contains predominantly linoleic acid in triglyceride form; linoleic acid has a freezing point of -5 °C and a boiling point of 229-230 °C at a reduced pressure of 16 mmHg. Oleic acid content is what makes avocado oil so healthy to cook with, while the linoleic acid in sunflower oil is usually more appropriate as a topical skin-softening treatment because of its high omega-6 content.

Some typical smoke points: vegetable oil, about 400 °F, great for frying and sautéing; sunflower oil, about 390 °F (refined and high-oleic sunflower oils reach roughly 450 °F), best for medium frying temperatures; canola oil, about 455 °F (235 °C); rice bran oil, about 495 °F (257 °C); avocado oil, about 520 °F (271 °C), ideal for searing meats or frying in a wok. Sunflower and safflower oils are often recommended for high-heat cooking because of their high smoke points. Toasted sesame, walnut and other nut oils vary by type of nut and level of refinement, and are best left unheated, in vinaigrettes or as finishing oils. Heating oil produces more free fatty acid, which in turn lowers the smoke point, so the smoke point tends to increase as free fatty acid content decreases and the level of refinement increases; in any case, an oil's temperature should never exceed its smoke point.

Boiling point is the final warning sign. Although smoking oil should already be warning enough, when an oil reaches its boiling point it is getting very close to auto-igniting, and once it is boiling its temperature increases very quickly. It is difficult to determine the boiling point of a cooking oil because most oils never reach a boil in practice; soybean oil, the most common oil marketed as vegetable oil, boils at approximately 300 °C (about 572 °F). The boiling point of a substance depends on the strength of its intermolecular bonds, and composition varies across oils and even within the same oil from different regions.

Physical and chemical properties of high-oleic sunflower oil (from a safety data sheet, Version 0002, revision date 06/13/2019): liquid; yellow; neutral odour; pH, freezing/melting point and boiling point not determined; flash point above 300 °C.

Sunflower oil extraction: in a cold press, the hulls are removed, the seeds are broken into smaller pieces and run through steel rollers or a piston-like cylinder to squeeze out the oil. Warm and cold presses extract oil from sunflower seeds with slightly different flavours in the finished product.
# How to convert LaTeX 2.09 to LaTeX2e
I'm trying to convert a LaTeX 2.09 template to LaTeX2e (pdflatex):
http://www.yisongyue.com/resume/
How should I convert this line ?
``````\documentstyle[hyperref, margin, line]{res_yy}
``````
I don't know what class I need to use
``````\documentclass{article} %% ???
\usepackage{res_yy}
\usepackage{hyperref}
\usepackage{line}
``````
Looks like a rather specialist style file! I very much doubt you'll be able to do a 'quick' conversion (i.e. without reworking a lot of the code). There is no `res_yy` class for LaTeX2e. – Joseph Wright Oct 1 '11 at 7:25
Perhaps take a look at tex.stackexchange.com/questions/80/… – Joseph Wright Oct 1 '11 at 7:26
By the way, did you try just compiling the document 'as is' with LaTeX2e. There is some auto-detect code to try to work in 'compatibility mode'. – Joseph Wright Oct 1 '11 at 8:42
Sure, compiling the document 'as is' works. I've been using this template for 6 years now. But I would like to use LaTeX2e features like \usepackage[utf8]{inputenc} and so on. – Doud Oct 1 '11 at 9:33
Well I found a res_yy.sty. It loads article.sty. So you could try to remove the line `\input article.sty` in a (renamed) copy of res_yy.sty and then load it with `\usepackage` and look what happens. If you want to use hyperref you should also remove the `\nofiles` command. – Ulrike Fischer Oct 1 '11 at 10:37
The short answer is that there is no easy conversion for a completely general case. LaTeX2.09 style files are very much a mix of formatting and 'additional' code, even more than is the case with LaTeX2e.
More specifically, the LaTeX2.09 style in question has never been converted into a LaTeX2e class. That means that there the change
``````\documentstyle{res_yy}
``````
to
``````\documentclass{res_yy}
``````
is not possible: the latter does not exist. That leaves you needing to recreate the layout and macros provided by `res_yy` in LaTeX2e. This is certainly possible, but I suspect that the effort would not really be balanced by the outcome: the amount of work needed to do the conversion seems at least equal to starting afresh from `article` or from a specialist CV class.
On the Non-Radial Oscillations of a Star III. A Reconsideration of the Axial Modes
Subrahmanyan Chandrasekhar, Valeria Ferrari
Abstract
It is shown that for stars with radii in the range 2.25 GM/c$^{2}$ < R < ca. 3 GM/c$^{2}$, quasinormal axial modes of oscillation are possible. These modes are explicitly evaluated for stellar models of uniform energy density.
Question
# One ampere of current is passed for 9650 seconds through molten $$AlCl_3$$. What is the weight in grams of $$Al$$ deposited at cathode? (Atomic weight of $$Al = 27$$)
A
0.9
B
9
C
0.09
D
90
Solution
## The correct option is A (0.9)

$$i = 1\ \text{A}, \quad t = 9650\ \text{s}$$

The oxidation number of $$Al$$ in $$AlCl_3$$ is $$+3$$, so the equivalent weight is $$E_{eq} = 27/3$$.

$$w = \dfrac{E_{eq}\, i\, t}{96500} = \dfrac{27\times 1\times 9650}{3\times 96500} = \dfrac{9}{10} = 0.9\ \text{g}$$

Hence $$0.9\ \text{g}$$ of aluminium is deposited at the cathode.
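For readers who want to sanity-check the arithmetic, here is a minimal sketch of the same Faraday's-law computation (in Haskell; the name aluminiumDeposited is mine):

```haskell
-- w = (M / n) * i * t / F: mass deposited by electrolysis,
-- with M = 27 g/mol, n = 3 electrons per Al(3+) ion, i = 1 A,
-- t = 9650 s, and F ≈ 96500 C/mol.
aluminiumDeposited :: Double
aluminiumDeposited = (27 / 3) * 1 * 9650 / 96500  -- = 0.9 g
```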
CS 331 Spring 2013 > Assignment 4
# CS 331 Spring 2013 Assignment 4
Assignment 4 is due at 5 p.m. Sunday, March 3. It is worth 20 points.
## Procedures
E-mail answers to the exercises below to [email protected], using the subject “PA4”.
• Your answers should consist of two files: the answer to Exercise A, and PA4.hs from Exercise B. These two files (or a single archive file containing them) should be attached to your e-mail message.
• I may not read your homework e-mail immediately. If you wish to discuss the assignment (or anything else) with me, send me a separate message with a different subject line.
## Exercises (20 pts total)
### Exercise A — Running a Haskell Program
#### Purpose
In this exercise you will make sure you can execute Haskell code.
#### Instructions
Run the program check_haskell.hs (on the class web page). Tell me what it does.
If you use the AOT compiler (GHC), then compile the program and run the resulting executable. If you use an interactive environment (GHCi or Hugs), then load the source file and run function main.
### Exercise B — Writing Haskell Code
#### Purpose
In this exercise, you will write some simple Haskell functions, including one infix operator.
#### Instructions
Write a Haskell module PA4, contained in the file PA4.hs (note the capitalization in the filename!). Module PA4 should include the following five public functions/variables/operators: filterAB, collatzCounts, findList, ##, sumEvenOdd. (Illustrative sketches for two of these appear right after the list below.)
• Function filterAB. This takes a boolean function and two lists. It returns a list of all items in the second list for which the corresponding item in the first list makes the boolean function true.
Examples:
• filterAB (>0) [-1,1,-2,2] [1,2,3,4,5,6] should return [2,4].
• filterAB (==1) [2,2,1,1,1,1,1] "abcde" should return "cde".
• Variable collatzCounts. This is a list of integers. Item $$k$$ (counting from zero) of collatzCounts tells how many iterations of the Collatz function are required to take the number $$k+1$$ to $$1$$.
The Collatz function is the following function $$f$$.
$f(n) = \begin{cases} 3n+1, & \text{if } n \text{ is odd;}\\ n/2, & \text{if } n \text{ is even.} \end{cases}$
So, for example, item 0 of collatzCounts is 0, since no applications of the Collatz function are required to make $$0+1=1$$ turn into $$1$$. Item 2 of collatzCounts is 7, since, starting with $$2+1=3$$, it requires $$7$$ steps to get to $$1$$.
1. $$3$$ is odd, so $$f(3) = 3\cdot 3 + 1 = 10$$.
2. $$10$$ is even, so $$f(10) = 10/2 = 5$$.
3. $$5$$ is odd, so $$f(5) = 3\cdot 5 + 1 = 16$$.
4. $$16$$ is even, so $$f(16) = 16/2 = 8$$.
5. $$8$$ is even, so $$f(8) = 8/2 = 4$$.
6. $$4$$ is even, so $$f(4) = 4/2 = 2$$.
7. $$2$$ is even, so $$f(2) = 2/2 = 1$$.
Example:
• take 10 collatzCounts should return [0,1,7,2,5,8,16,3,19,6].
Something you may find useful: the Haskell function div does integer division. For example, div 17 2 returns 8. (See the sketch after this list for div in action.)
• Function findList. This takes two lists. It returns a tuple containing a Bool and an integer. If the first list is found as a contiguous sublist of the second list, then the Bool is True and the integer is the index (starting from zero) at which the copy of the first list begins. If the first list is not found as a contiguous sublist of the second, then the Bool is False and the integer can be any value.
Examples:
• findList "cde" "abcdefg" should return (True,2).
• findList "cdX" "abcdefg" should return (False,???), where “???” is replaced by some integer.
• findList [1] [2,1,2,1,2] should return (True,1).
• findList [] [1,2,3,4,5] should return (True,0).
• Infix operator ##. The two operands are lists. The return value is an integer giving the number of indices at which the two lists contain equal values.
Examples:
• [1,2,3,4,5] ## [1,1,3,3,9,9,9,9] should return 2.
• [] ## [1,1,3,3,9,9,9,9] should return 0.
• "abcde" ## "aXcXeX" should return 3.
• "abcde" ## "XaXcXeX" should return 0.
• Function sumEvenOdd. This takes a list of numbers. It returns a tuple of two numbers: the sum of the even-index items in the given list, and the sum of the odd-index items in the given list.
You must implement sumEvenOdd using a “fold” function: foldl, foldr, foldl1, or foldr1, as follows.
sumEvenOdd ... = fold... ...
The “...” above are replaced by other code.
Examples:
• sumEvenOdd [1,2,3,4] should return (4,6).
• sumEvenOdd [20] should return (20,0).
• sumEvenOdd [] should return (0,0).
• sumEvenOdd [1,1,1,1,1,1,1] should return (4,3).
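To fix the expected style, here are two illustrative sketches (mine, not necessarily the intended solutions): one possible filterAB, and a helper that performs a single Collatz step using div.

```haskell
-- Pair the two lists up and keep the items whose partner passes the test.
filterAB :: (a -> Bool) -> [a] -> [b] -> [b]
filterAB p xs ys = [ y | (x, y) <- zip xs ys, p x ]

-- One application of the Collatz function, using integer division via div.
collatzStep :: Integer -> Integer
collatzStep n
  | odd n     = 3 * n + 1
  | otherwise = n `div` 2
```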
#### Code Structure
module PA4 where
#### Test Program
A test program is available: pa4_test.hs. This will test whether your package works properly. (It will not test whether sumEvenOdd is implemented as required.)
If you are using GHCi (or Hugs), then put your file and the test program in the same directory, and do
> :l pa4_test
> main
(Note that “> ” represents the GHCi prompt, and is not to be typed.)
Do not turn in pa4_test.hs.
CS 331 Spring 2013: Assignment 4 / Updated: 26 Feb 2013 / Glenn G. Chappell / [email protected] |
# Blog
## a short note on “Rebooting AI” by Marcus & Davis
Disclaimer: I received the hard copy of <Rebooting AI> from the publisher, although I had by then purchased the Kindle version of the book myself on Amazon. I only gave a quick look at the book on my flight between UIUC and NYC and wrote this brief note on my flight back to NYC from Chicago. I also felt it would be good to have even a short note by a machine learning researcher to balance all those praises by “Noam Chomsky, Steven Pinker, Garry Kasparov” and others. <Rebooting AI> is a well-written piece (somewhat hastily) summarizing the current state of
## Discrepancy between GD-by-GD and GD-by-SGD
The ICLR deadline is approaching, and of course, it’s time to write a short blog post that has absolutely nothing to do with any of my manuscripts in preparation. i’d like to thank Ed Grefenstette, Tim Rocktäschel and Phu Mon Htut for fruitful discussion. Let’s consider the following meta-optimization objective function: $$\mathcal{L}'(D’; \theta_0 – \eta \nabla_{\theta} \mathcal{L}(D; \theta_0))$$ which we want to minimize w.r.t. θ₀. it has become popular recently thanks to the success of MAML and its earlier and more recent variants to use gradient descent to minimize such a meta-optimization objective function. the gradient can be written down as* \nabla_{\theta_0} \mathcal{L}'(D’; \theta_0 – \eta \nabla_\theta \mathcal{L}(D; \theta_0) =
## Sharing some good news and some bad news
I have some news, both good and bad, to share with everyone around me, because I’ve always been a big fan of transparency and also because i’ve recently realized that it can easily become awkward when those who know of these news and who don’t are in the same place with me. Let me begin. The story, which contains all these news, starts sometime mid-2017, when I finally decided to apply for permanent residence (green card) after spending three years here in US. As I’m already in the US, the process consists of two stages. In the first stage, I,
## Best paper award at the AI for Social Good Workshop (ICML’19)
The extended abstract version of <Deep Neural Networks Improve Radiologists’ Performance in Breast Cancer Screening> has received the best paper award at the AI for Social Good Workshop co-located with ICML’19 last week in Long Beach, CA. Congratulations to the first author Nan who is a PhD student at the NYU Center for Data Science, the project lead Krzysztof who is an assistant professor at NYU Radiology, and all the other members of this project!
## BERT has a Mouth and must Speak, but it is not an MRF
It was pointed out by our colleagues at NYU, Chandel, Joseph and Ranganath, that there is an error in the recent technical report <BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model> written by Alex Wang and me. The mistake was entirely on me not on Alex. There is an upcoming paper by Chandel, Joseph and Ranganath (2019) on a much better and correct interpretation and analysis of BERT, which I will share and refer to in an updated version of our technical report as soon as it appears publicly. Here, I would like to briefly point |
# ABCD is a rhombus.If angle ADB = 50, find all the angles of the rhombus?
Dec 11, 2016
see explanation
#### Explanation:
A Rhombus has the following properties :
1) The sides of a rhombus are all congruent (the same length.)
$\implies A B = B C = C D = D A$
2) $AC$ and $BD$ are perpendicular.
3) $AO = OC$ and $BO = OD$
4) $\Delta AOB$, $\Delta COB$, $\Delta COD$ and $\Delta AOD$ are congruent
.................
Now back to our question,
Given that $\angle A D B = {50}^{\circ} = \angle A D O$,
$\angle O A D = 90 - 50 = {40}^{\circ}$
$\implies \angle O B A = \angle O B C = \angle O D C = \angle O D A = {50}^{\circ}$
$\implies \angle O A B = \angle O C B = \angle O C D = \angle O A D = {40}^{\circ}$

Hence $\angle B A D = \angle B C D = {80}^{\circ}$ and $\angle A B C = \angle A D C = {100}^{\circ}$.
## Microfacet Multiple Scattering Simplifications
### GGX Lambda
First, we can write $\alpha_i$ in terms of $\omega_i = (\sin\theta_i \cos\phi_i, \sin\theta_i \sin\phi_i, \cos\theta_i) = (x_i, y_i, z_i)$:

$$\alpha_i = \sqrt{\cos^2\phi_i\,\alpha_x^2 + \sin^2\phi_i\,\alpha_y^2} = \frac{\sqrt{x_i^2\,\alpha_x^2 + y_i^2\,\alpha_y^2}}{\sqrt{x_i^2 + y_i^2}}$$
Next, we can substitute this form into $a$ and simplify:

$$a_i = \frac{1}{\alpha_i \tan\theta_i} = \frac{\cos\theta_i}{\alpha_i \sin\theta_i} = \frac{z_i}{\sqrt{x_i^2\,\alpha_x^2 + y_i^2\,\alpha_y^2}}$$
We can then do the same for $\Lambda$:

$$\Lambda(\omega_i) = \frac{-1 + \sqrt{1 + 1/a_i^2}}{2} = \frac{-z_i + \sqrt{x_i^2\,\alpha_x^2 + y_i^2\,\alpha_y^2 + z_i^2}}{2\,z_i}$$
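As a quick numerical check, here is a minimal sketch (in Haskell; the function name and argument order are mine) that evaluates this $\Lambda$ for an upward-facing direction:

```haskell
-- Smith Lambda for anisotropic GGX, assuming z > 0.
-- (x, y, z) is a unit direction; ax and ay are the roughness parameters.
ggxLambda :: Double -> Double -> (Double, Double, Double) -> Double
ggxLambda ax ay (x, y, z) =
  let invA2 = (x*x*ax*ax + y*y*ay*ay) / (z*z)  -- this is 1 / a_i^2
  in (-1 + sqrt (1 + invA2)) / 2
```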
### New Height
The change in height is $\Delta h = \ell \cos\theta_r$, therefore the new height is

$$h' = h + \ell \cos\theta_r$$
# AtCoder Beginner Contest 158
Updated:
AtCoder Beginner Contest 158
# Solutions
## A - Station and Bus
If the string is “AAA” or “BBB”, the answer is No. Otherwise, Yes.
## B - Count Balls
Let $C = A + B$. The answer is given by $A \cdot \lfloor N / C \rfloor + \min(N \bmod C,\, A)$, where the division is integer division.
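A minimal sketch (in Haskell; the function name countBalls is mine):

```haskell
-- A blue balls then B red balls, repeated; count blue among the first N.
countBalls :: Integer -> Integer -> Integer -> Integer
countBalls n a b = a * (n `div` c) + min (n `mod` c) a
  where c = a + b
```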
## C - Tax Increase
Try all possibilities; roughly, it suffices to try $N \leq 10000$.
## D - String Formation
We cannot afford to reverse the string at each step; doing so would take $O(Q (\lvert S \lvert + Q))$ time. Instead of reversing the string itself, we maintain the direction of the string with respect to the original string. Let bool reversed; be this direction.
Let stringstream prefix; be the (reversed) prefix and stringstream suffix; be the suffix. We append the characters $C _ i$ to these stringstreams. Let $A$ be the non-reversed prefix + $S$ + the suffix. If reversed, the answer is $A$ reversed; otherwise, $A$ itself.
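The same idea as a minimal sketch (in Haskell rather than the C++ described above; all names are mine):

```haskell
-- (reversed?, front part with newest char first, back part with newest char first)
type State = (Bool, String, String)

-- a T=1 query: just flip the direction flag, O(1)
flipS :: State -> State
flipS (r, f, b) = (not r, f, b)

-- a T=2 query: add a character to the front / back of the *logical* string, O(1)
addFront, addBack :: Char -> State -> State
addFront c (False, f, b) = (False, c:f, b)
addFront c (True,  f, b) = (True,  f, c:b)
addBack  c (False, f, b) = (False, f, c:b)
addBack  c (True,  f, b) = (True,  c:f, b)

-- rebuild the answer from the original string S once at the end, O(|S| + Q)
build :: String -> State -> String
build base (False, f, b) = f ++ base ++ reverse b
build base (True,  f, b) = b ++ reverse base ++ reverse f
```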
## E - Divisible Substring
Let $X _ i = S[N - i, N)$, read as a number, for $1 \leq i \leq N$, and $X _ 0 = 0$. We have to count the pairs $(i, j)$ with $0 \leq i < j \leq N$ such that $\frac{X _ j - X _ i}{10 ^ i} = 0 \text{ in } \mathbb{Z} / P \mathbb{Z}. \tag{E.1}$
If $P = 2$ or $5$, this problem is easily solved since whether a number can be divided by $P$ or not depends only on the last digit.
If $P \neq 2, 5$, then $10$ is invertible modulo $P$, so (E.1) is equivalent to $X _ j = X _ i$ in $\mathbb{Z}/P\mathbb{Z}$. Thus we just maintain a table from each remainder to its count.
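For the case $P \neq 2, 5$, a minimal sketch (in Haskell; all names are mine):

```haskell
import qualified Data.Map.Strict as M
import Data.Char (digitToInt)

-- Count substrings of s divisible by p, assuming p is a prime
-- other than 2 and 5 (so that 10 is invertible modulo p).
countDivisible :: Int -> String -> Integer
countDivisible p s = sum [ c * (c - 1) `div` 2 | c <- M.elems counts ]
  where
    digits = map digitToInt (reverse s)        -- digits from the right
    pows   = iterate (\x -> x * 10 `mod` p) 1  -- 10^0, 10^1, ... modulo p
    -- X_0, X_1, ..., X_N modulo p: values of ever-longer suffixes
    suffixRems = scanl (\acc (d, pw) -> (acc + d * pw) `mod` p) 0 (zip digits pows)
    counts = M.fromListWith (+) [ (r, 1) | r <- suffixRems ]
```

Each remainder class of size $c$ contributes $\binom{c}{2}$ pairs $(i, j)$ with $X_i = X_j$.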
# Others
A - sample: 3, tle: 2.000, time: 01:18, from_submit: 98:01
B - sample: 3, tle: 2.000, time: 01:37, from_submit: 96:24
C - sample: 3, tle: 2.000, time: 02:35, from_submit: 93:49
D - sample: 3, tle: 2.000, time: 11:49, from_submit: 82:00
E - sample: 3, tle: 2.000, time: 43:00, from_submit: 39:00
F - sample: 4, tle: 2.000, time: 39:00, from_submit: 00:00
# zbMATH — the first resource for mathematics
The solution sets of infinite fuzzy relational equations with sup-conjunctor composition on complete distributive lattices. (English) Zbl 1183.03056
Summary: This paper deals with sup-conjunctor composition fuzzy relational equations in infinite domains and on complete distributive lattices. When its right-hand side is a continuous join-irreducible element or has an irredundant continuous join-decomposition, a necessary and sufficient condition describing an attainable solution (resp. an unattainable solution) is formulated and some properties of the attainable solution (resp. the unattainable solution) are shown. Further, the structure of solution sets is investigated.
##### MSC:
03E72 Theory of fuzzy sets, etc.
What color is assigned in heatmap for time series analysis
rahel14350
Dear all,
Can anyone help me with this question, which I also posted on Biostars (https://www.biostars.org/p/197038/)?
I pasted it here to make it easier:
I did DEG analysis for two conditions with 5 time points using DESeq2. In the heatmap I have one colour per DEG (selected by lowest p-value), representing the log2 fold change (Wald test) for each DEG relative to time 0. My question is: I have 4 samples at time point 1 vs 4 samples at time 0, but I see only one colour for all of them, as time0-vs-1. Is this colour derived from the mean or median of all the LFCs of the 4 samples? Or is there some other algorithm behind the heatmap? Many thanks in advance for your help. Cheers, Rahel
@mikelove
You'll need to post your code, specifically the design of the dds, how you ran DESeq(), how you created the heatmap, and it's also useful for me to see the structure of the colData.
as.data.frame(colData)
Also, you should as a habit include the sessionInfo() output when posting to the Bioconductor support site.
Dear Michael,
Many thanks for your reply. I have two treatment responses (R, NR) and 7 time points. I want to see DEGs in response to treatment over the time course. Here are the DESeq2 commands:
> ddsTC<-DESeqDataSetFromHTSeqCount(sampleTable=sampleTable, directory=directory, design=~IFX_response+time+ IFX_response:time)
> ddsTC <- DESeq(ddsTC, test="LRT", reduced = ~IFX_response + time)
> colData(ddsTC)
DataFrame with 61 rows and 4 columns
time IFX_response sizeFactor replaceable
<factor> <factor> <numeric> <logical>
01_0h_IFX_NR_P21_F02312.txt 01_0h IFX_NR 1.206895 FALSE
01_0h_IFX_NR_P23_F02269.txt 01_0h IFX_NR 1.024494 FALSE
... ... ... ... ...
07_14w_IFX_R_P13_E02898.txt 07_14w IFX_R 0.7851559 TRUE
07_14w_IFX_R_P14_E02900.txt 07_14w IFX_R 1.0228529 TRUE
> resTC <- results(ddsTC)
> betas <- coef(ddsTC)
> colnames(betas)
But when I make the pheatmap with the exact command from DESeq2:
> topGenes <- head(order(resTC$padj),50)
> mat <- betas[topGenes, -c(1,2)]
> thr <- 3
> mat[mat < -thr] <- -thr
> mat[mat > thr] <- thr
> pheatmap(mat,breaks=seq(from=-thr, to=thr, length=101),border_color="NA",cluster_col=FALSE)
I get one block of colour for each time point in comparison to time 0.
http://i.imgur.com/PqrnJQY.jpg
My question is: how, from 4 replicates per time point, do I get one colour block per time point? Is it correct that the genes are picked by p-value and coloured by LFC, and that each block represents the mean normalized read count of the 4 samples?
I hope I made it clear.
Cheers,
Rahel
@mikelove replied:
Yes, in the code above you picked genes by adjusted p value, and the color is the log2 fold change.
But, no, the color in the cell is not the mean normalized read count of the 4 samples.
The color is the log2 fold change estimated by comparing the 4 samples with the other 4 samples. The log2 fold change is a parameter estimated by DESeq2. The details on how this is estimated can be found in the vignette or for full details you can read the DESeq2 paper.
rahel14350 replied:
Many many thanks for your prompt reply, and also for such an amazing program, which has helped me a lot throughout my work. I got my answer, and I will read your paper before sending the next question...
Cheers,
Rahel
EagleEye replied:
It would be nice to:
1. Use RPKM/FPKM values or any other normalized expression values,
2. Calculate z-scores, and
3. Use pheatmap to see a clear picture of your time-series data.
Home
Hi, welcome to my webpage!
Currently, I am working as an Assistant Professor in the Department of Computer Science and Engineering at IIT Hyderabad.
Prior to joining IITH, I spent three pleasant years in the Shivalik Range of the Himalayas as an Assistant Professor at the School of Computing and Electrical Engineering (SCEE), IIT Mandi.
I am an algorithmist, and I work at the intersection of theory and practice. I am interested in developing "extremely simple", practical approximation algorithms with provable performance guarantees for real-world problems. Among other things, my current research focuses on sketching/dimensionality-reduction algorithms. In particular, I am focusing on: a) developing new sketching/dimensionality-reduction algorithms for various data types and similarity measures, b) improving existing sketching/dimensionality-reduction algorithms by making them fast, scalable and accurate, and c) exploring the applicability of such results in various machine learning tasks such as learning node embeddings in large-scale networks, itemset mining, model compression, etc.
I generally use techniques from matrix algebra, sampling, random projection, and randomized hashing.
I earned my Ph.D. in Theoretical Computer Science from Chennai Mathematical Institute (Aug 2009 - Nov 2014).
Previously, I have been working with industry research labs in the areas of algorithms in Data Science and Machine Learning at Wipro-AI Research, Bangalore; and TCS Innovation Labs, New Delhi. I have also spent a few months at IIIT Bangalore as a Research Associate.
Openings: I'm looking for highly motivated MS/PhD students to work on the areas mentioned above.
Note: I am not taking students for any short term internships (summer/winter). Unfortunately, I will be unable to individually respond to such emails/queries. Please contact me only if you are interested in at least a year long project, and have strong academic credentials.
My Erdős number is 3.
Recent News:
• Our paper “Improving Sign-Random-Projection via Count Sketch”, joint work with Punit Pankaj Dubey, Bhisham Dev Verma, and Keegan Kang, got accepted in UAI-2022.
• Our paper “One-pass additive-error subset selection for $\ell_{p}$ subspace approximation” joint work with Amit Deshpande got accepted in the ICALP-2022.
• Our paper “Variance reduction in Feature Hashing using MLE and Control Variate Method” joint work with Bhisham Dev Verma and Manoj Thakur got accepted in the Machine Learning journal, 2022.
• Our paper “Efficient Binary Embedding of Categorical Data using BinSketch” joint work with Bhisham Dev Verma, and Debajyoti Bera has been accepted in the journal of Data Mining and Knowledge Discovery-2022.
• Our paper “Dimensionality Reduction for Categorical Data” joint work with Debajyoti Bera and Bhisham Dev Verma got accepted in the IEEE Transactions on Knowledge and Data Engineering (TKDE), 2021.
• Our paper “QUINT: Node embedding using network hashing” joint work with Debajyoti Bera, Bhisham Dev Verma, Biswadeep Sen, and Tanmoy Chakraborty got accepted in the IEEE Transactions on Knowledge and Data Engineering (TKDE), 2021.
Old News:
• Our paper "Feature Hashing with Insertion and Deletion of Features" joint work with Hrushikesh Sudam Sarode, Suryakant Bhardwaj and Raghav Kulkarni got accepted in the IEEE International Conference on Big Data (IEEE-Big data-2021).
• Delivered a keynote talk at “Data Analytics and Predictive technologies” organised by IIT-BHU on 8th July, 2021. I presented our recent work on “Sketching and Sampling Techniques for Big Data”.
• Delivered a keynote talk at “Online short term course on Data Analytics and its Application in Industries” organised by IIT-BHU on 19th December, 2020. I presented our recent paper “Efficient Sketching Algorithms for Sparse Binary Data”.
• Our following two papers got accepted in ACML 2020
1) Randomness Efficient Feature Hashing for Sparse Binary Data. Joint work with Karthik Revanuru, Anirudh Ravi, Raghav Kulkarni.
2) Scaling up Simhash. Joint work with Anup Deshmukh, Pratheeksha Nair, Anirudh Ravi. This paper was also invited for Special Issue in Springer Nature Computer Science.
• We (Mukesh Prasad, Rameshwar Pratap, Rajiv Ratn Shah, Weiping Ding, Javier Andreu-Perez, Guandong Xu) are organising a special session on “Feature Extraction and Learning on Image and Text Data” to be held in conjunction with the conference “IEEE International Conference on Systems, Man and Cybernetics (SMC)-2020”. You may consider submitting your paper. Further details are available here.
• Amit Sangroya (TCS Research) and I are co-chairing IEEE BigMM’20 Grand Challenge which is to be held in conjunction with the IEEE International Conference on Multimedia Big Data from September 24-26, 2020 at New Delhi. You may consider submitting your Grand challenge proposal to us. Further details are available here. |
Mixed (Non Linear) Complementarity Problem (MCP)
Problem Statement:
Given a sufficiently smooth function $$F \colon {\mathrm{I\!R}}^{n+m} \to {\mathrm{I\!R}}^{n+m}$$, the Mixed Complementarity Problem (MCP) is to find two vectors $$z, w \in {\mathrm{I\!R}}^{n+m}$$ such that:
\begin{split}\begin{align*} w &= \begin{pmatrix}w_e\\w_i\end{pmatrix} = F(z) \\ w_e &=0 \\ 0 &\le w_i \perp z_i \ge 0, \end{align*}\end{split}
where “i” (resp. “e”) stands for inequalities (resp. equalities). The vector $$z$$ is split like $$w$$:
$\begin{split}\begin{equation*}z =\begin{pmatrix}z_e\\z_i\end{pmatrix}.\end{equation*}\end{split}$
The vectors $$z_e, w_e$$ are of size sizeEqualities, the vectors $$z_i, w_i$$ are of size sizeInequalities, and $$F$$ is a nonlinear function that must be user-defined.
A Mixed Complementarity Problem (MCP) is an NCP “augmented” with equality constraints.
Available solvers:
• mcp_FB(), a nonsmooth Newton method based on the Fischer-Burmeister function.
Semi-smooth Newton/Fischer-Burmeister solver:
a nonsmooth Newton method based on the Fischer-Burmeister convex function
function: mcp_FischerBurmeister()
parameters:
• iparam[0] (in): maximum number of iterations allowed
• iparam[1] (out): number of iterations processed
• dparam[0] (in): tolerance
• dparam[1] (out): resulting error |
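For orientation, the Fischer-Burmeister function has the closed form phi(a, b) = a + b - sqrt(a^2 + b^2), which vanishes exactly when a >= 0, b >= 0 and a*b = 0, so it turns each complementarity pair into one equation. A minimal sketch of how an MCP residual can be built from it (illustrative Python, not part of the documented solver API; the example function F and the helper names are invented):

import numpy as np

def fischer_burmeister(a, b):
    # phi(a, b) = a + b - sqrt(a^2 + b^2): zero iff a >= 0, b >= 0 and a*b = 0
    return a + b - np.sqrt(a * a + b * b)

def mcp_residual(F, z, n_eq):
    # Equality rows of w = F(z) must vanish as they are;
    # inequality rows are folded through the Fischer-Burmeister function.
    w = F(z)
    res = np.empty_like(w)
    res[:n_eq] = w[:n_eq]                                # w_e = 0
    res[n_eq:] = fischer_burmeister(z[n_eq:], w[n_eq:])  # 0 <= w_i complementary to z_i >= 0
    return res

# Invented example: one equality row, one complementarity pair.
F = lambda z: np.array([z[0] + z[1] - 1.0, z[1] - 0.25])
print(mcp_residual(F, np.array([0.75, 0.25]), n_eq=1))   # ~[0.0, 0.0] at a solution

A nonsmooth Newton method such as mcp_FB() then drives this residual to zero, using an element of the generalized Jacobian in place of the ordinary one at the kinks.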
AAS 201st Meeting, January, 2003
Session 49. Eta Carinae, LBVs, and Circumstellar Disks
Poster, Tuesday, January 7, 2003, 9:20am-6:30pm, Exhibit Hall AB
## [49.06] Quadrupolar Outflow: A Single-Wind Model for the η Carinae Nebula
S. Matt (Physics & Astronomy Dept.; McMaster University), B. Balick (Astronomy Dept.; University of Washington)
During an outburst beginning in 1837, the luminous blue variable η Carinae ejected at least one solar mass of material. That ejected material has been well studied and is highly structured, consisting of an outflowing equatorial “skirt” and bipolar lobes (the hourglass-shaped “homunculus”). Recent proper motion measurements of Morse et al. (2001, ApJ, 548, 207L) suggest that at least some of the material in the skirt has the same dynamical age as the lobes, contrary to the assumptions of interacting winds models for the η Car nebula.
In the context of the η Car eruption, and relying on time-dependent, numerical, magnetohydrodynamic simulations, we present a simple stellar wind model that produces an outflowing disk and bipolar lobes in a single wind. The shape of the wind bears a remarkable resemblance to the overall shape of the η Car nebulae. The basic model consists of a pressure-driven wind from a rotating star with an axis-aligned dipole magnetic field. In the wind, the azimuthal component of the magnetic field (generated by the rotation of the dipolar field) compresses the wind toward the equator and also toward the rotation axis, simultaneously producing an outflowing disk and jet. In order to produce wide angle lobes similar to the homunculus (which have roughly a 30° opening angle), a high-speed polar wind from the star is required. We will present both steady-state and time-dependent wind models.
This research was supported by NASA grant GO 9050 awarded from STScI, by NSF grant AST-9729096, and by NSERC, McMaster University, and CITA through a CITA National Fellowship.
Bulletin of the American Astronomical Society, 34, #4
© 2002. The American Astronomical Society.
# ( object [myself] ) vs ( my [self] )
is there any difference between (object [myself]) and (my [self])? i never noticed any but ive been wondering about this forever. rn im working with clones and want to make sure im not using the wrong blocks.
No, they're the same. Object's menu input lists all the sprites, so "myself" is an obvious shorthand, especially if this code will be run by more than one object. On the other hand, one's self is clearly a property of an object, same as parent or children (clones).
that what i figured, but so dont they use different methods to access the object? and if so wouldnt that make one block more appropriate depending on the circumstance? i know im splitting hairs here, but im genuinely curious
I think you're overthinking it. Those two blocks weren't designed together. MY in particular wasn't designed at all; Jens just throws things in as he needs them. Any option of MY that's of type Object (as opposed to type List-of-objects) could be in OBJECT also, e.g., OBJECT [MY PARENT]. If this had all been designed, I wouldn't have let Jens get away with MY [STAGE], which isn't an attribute of the sprite at all. (Although I suppose it could be, if someday we have multiple stages, with each sprite living on one of them.)
... and you're not thinking it through, Brian
this option refers to the stage by role, not by its name. We need (and use) that role to make libraries that work in projects whose stage has been given a different name by the user or has been translated to another language. Geez, folks!! If you made more projects yourself you'd notice such things instead of constantly musing about design principles in the abstract.
you mean like this?(which is where the question arose...)
no, like Brian's
ive never been to @bh's page,, ill be back :)
Oh, you mean if you change the name, OBJECT [STAGE] changes to OBJECT [FOO] but MY [STAGE] doesn't change? Huh, you learn something every day.
But if you switch from English to some other language, do they both change?
The point is that sometimes you need to write a block that works with THE STAGE, regardless what its name might be, because you want to share that block in a library. That's when you need to refer to the stage by its role, which is the point of MY STAGE.
(In retrospect I regret our decision to let users name the stage themselves. If I did it again I'd stay with Scratch's decision not to.)
it translates it : =>
doesnt change, even if you rename the stage, you have to go back and reselect the new name from the dropdown for it to point to the stage again
Yeah I get that, but I'm asking about details. OBJECT [STAGE] changes when you change the name of the stage? Because you could imagine that OBJECT [STAGE] could work like OBJECT [MYSELF], which doesn't change if you change the sprite's name.
i think
its confusing because they have the same name as what they are (their role?). its like naming a person 'Person'. if they went to a different country, the word person would change but their name, 'Person', would stay the same.
Yeah I get that. I don't need an explanation of the deep ideas; I need to know what the actual behavior is!
Which I've just checked. When you change languages, both OBJECT [STAGE] and MY [STAGE] change to the new language. Same for OBJECT [MYSELF] and MY [SELF]. (Sidenote: In the French translation both MYSELF and SELF change to MOI-MÊME. Google says that SELF in French is SOI.)
When you change the name of the stage, you get OBJECT [FRED] but MY [STAGE] (or the translation of STAGE to the chosen language). When you change the name of a sprite, you still get MY [SELF], but for OBJECT you get this:
or in English:
I.e., you still have MYSELF above the line, and also the sprite's name below the line.
What does that line mean? In practice I guess it means that below the line comes the list of all named sprites (that is, not including temporary clones), by name, and above the line everything else. I can see how this makes sense as an implementation, supposing there's already code in the implementation to produce a list of sprite names.
But as a user, it seems strange to me. In the case of my self, it appears in the menu twice: once, above the line, by role (MYSELF), and once, below the line, as named object (MARSHA). Why isn't it the same for the stage? Above the line it should be called STAGE, translatable, and below the line it should be listed by name, not translatable.
I don't think this is a disaster or needs to be changed quickly or anything. But I present it as evidence for my claim that these blocks are not designed; they just grew.
(Another piece of evidence is the lack of visible organization of the MY block's menu. At the very least it needs dividers; even better would be submenus so that the categories could be called out by name rather than implicit.)
thats just the way i understood it. i was referring to the behavior, i was just using a metaphor.
and oh my you said i was overthinking it, no need to add stage twice (or else why not add every object twice?), take myself out, and the my block would have those word-by-definition references; my [ self ], my [ stage ], and object would have those by-name references; object [ Fred ], object [ Marsha ]. i agree the my block is a bit messy though. some dividers would be nice.
Because most of them don't have special roles in the object system. I could imagine having "Parent" above the line too. But I guess you're right, if we're going to have my [stage] perhaps we could live without object [stage]. But then the block should be called OBJECT NAMED or something.
so is there a difference between a sprite - stage relationship and a sprite - sprite relationship? like does a sprite 'know' about the roles or does it just see another object?
So I'm guessing your kinda left-field question comes from the fact that you like to read the Snap! source files, and so you see this.parentThatIsA(Stage) all over the place (where this is a sprite). I have no idea why it's like that. I mean, the proximate reason is that that's how John did it implementing Scratch, but to me it feels like a messy confusion between the ideas of part-of and one-of. E.g., radio buttons are part of a radio, but they don't inherit from radios, because it wouldn't make sense to apply the same methods to both. (There are methods shared by sprites and the stage, but only because both of those are legit children of Morph.)
I suppose maybe for Real OOP People there's no such thing as a global procedure; everything has to be a method of something. And so the Scratch/Snap! global blocks have to belong to some object under the hood, and so John and therefore Jens put them in the stage.
But none of that has anything to do with the picture we give to users, namely, sprites and the stage are two entirely different kinds of things. You can't clone the stage. So your question really doesn't arise.
OMG, I so hate these half-informed ill educated musings about what y'all are reading into the source code. In Morphic "parent" means container, nothing more, nothing less. And "parentThatIsA()" includes the receiver in case it qualifies the given type, which is why the stage also returns itself if being sent that message, a concept otherwise known as polymorphism.
In JavaScript, "prototype" is the gimmick that governs inheritance.
If you want to learn JavaScript, please learn JavaScript, and if you wish to learn Snap, DO NOT READ THE SOURCE CODE, because you also don't read Python's C source code to understand Python. I can't believe I have to say this over and over again. If you're interested in Morphic, please do read the source code and do with it whatever you want, but for heaven's sake stop conflating the two.
Can we please, please, pretty please stop these amateurish-but-pro-sounding source code discussions in this forum? No wonder kids are all getting the impression that there is some deeper truth reserved to the initiated that read the sources. Why-oh-why are we inflicting these pedagogical horrors on this community?
omg this is ridiculous. youd think having two of the developers here id be in pretty good hands but you two bicker like a married couple. i cant believe im getting third wheeled in this, my own post. i mean its becoming a pattern, i ask a simple question about snap, bh "answers", jens gives a nice and thorough explanation why bh is wrong. you both are being drama queens and this thread is a disaster. |
Assemble data into a ctd object.
as.ctd(
salinity,
temperature = NULL,
pressure = NULL,
conductivity = NULL,
scan = NULL,
time = NULL,
other = NULL,
units = NULL,
flags = NULL,
missingValue = NULL,
type = "",
serialNumber = "",
ship = NULL,
cruise = NULL,
station = NULL,
startTime = NULL,
longitude = NULL,
latitude = NULL,
deploymentType = "unknown",
pressureAtmospheric = 0,
sampleInterval = NA,
profile = NULL,
debug = getOption("oceDebug")
)
## Arguments
salinity: there are several distinct choices. It can be an rsk object (see "Converting rsk objects" for details). It can be a vector indicating the practical salinity through the water column, in which case as.ctd employs the other arguments listed below. It can be something (a data frame, a list or an oce object) from which practical salinity, temperature, pressure, and conductivity can be inferred, in which case the relevant information is extracted and the other arguments to as.ctd are ignored, except for pressureAtmospheric; if the first argument has salinity, etc., in matrix form (as can happen with some argo objects), then only the first column is used and a warning to that effect is given, unless the profile argument is specified, in which case that specific profile is extracted. Finally, it can be unspecified, in which case conductivity becomes a mandatory argument, because it will be needed for computing actual salinity using swSCTp().

temperature: in-situ temperature in °C on the ITS-90 scale; see "Temperature units" in the documentation for swRho().

pressure: vector of pressure values, one for each salinity and temperature pair, or just a single pressure, which is repeated to match the length of salinity.

conductivity: electrical conductivity ratio through the water column (optional). To convert from raw conductivity in milliSiemens per centimeter, divide by 42.914 to get the conductivity ratio (see Culkin and Smith, 1980).

scan: optional scan number. If not provided, this will be set to seq_along(salinity).

time: optional vector of times of observation.

other: optional list of other data columns that are not in the standard list.

units: an optional list containing units. If not supplied, defaults are set for pressure, temperature, salinity, and conductivity. Since these are simply guesses, users are strongly advised to supply units. See "Examples".

flags: if supplied, a list containing data-quality flags. The elements of this list must have names that match the data provided to the object.

missingValue: optional missing value, indicating data that should be taken as NA. Set to NULL to turn off this feature.

type: optional type of CTD, e.g. "SBE".

serialNumber: optional serial number of the instrument.

ship: optional string containing the ship from which the observations were made.

cruise: optional string containing a cruise identifier.

station: optional string containing a station identifier.

startTime: optional indication of the start time for the profile, which is used in several plotting functions. This is best given as a POSIXt time, but it may also be a character string that can be converted to a time with as.POSIXct(), using UTC as the timezone.

longitude: optional numerical value containing longitude in decimal degrees, positive in the eastern hemisphere. If this is a single number, it is stored in the metadata slot of the returned value; if it is a vector of numbers, they are stored in the data slot.

latitude: optional numerical value containing the latitude in decimal degrees, positive in the northern hemisphere. See the note on length for the longitude argument.

deploymentType: character string indicating the type of deployment. Use "unknown" if this is not known, "profile" for a profile (in which the data were acquired during a downcast, while the device was lowered into the water column, perhaps also including an upcast), "moored" if the device is installed on a fixed mooring, "thermosalinograph" (or "tsg") if the device is mounted on a moving vessel to record near-surface properties, or "towyo" if the device is repeatedly lowered and raised.

pressureAtmospheric: a numerical value (a constant or a vector) that is subtracted from pressure before storing it in the return value. (This altered pressure is also used in calculating salinity, if that is to be computed from conductivity, etc., using swSCTp(); see salinity above.)

sampleInterval: optional numerical value indicating the time between samples in the profile.

profile: optional positive integer specifying the number of the profile to extract from an object that has data in matrices, such as some argo objects. Currently the profile argument is only utilized for argo objects.

debug: an integer specifying whether debugging information is to be printed during the processing. This is a general parameter that is used by many oce functions. Generally, setting debug=0 turns off the printing, while higher values suggest that more information be printed. If one function calls another, it usually reduces the value of debug first, so that a user can often obtain deeper debugging by specifying higher debug values.
## Value

A ctd object.
## Converting rsk objects
If the salinity argument is an object of rsk, then as.ctd passes it, pressureAtmospheric, longitude, latitude ship, cruise, station and deploymentType to rsk2ctd(), which builds the ctd object that is returned by as.ctd. The other arguments to as.ctd are ignored in this instance, because rsk objects already contain their information. If required, any data or metadata element can be added to the value returned by as.ctd using oceSetData() or oceSetMetadata(), respectively.
The returned rsk object contains pressure in a form that may need to be adjusted, because rsk objects may contain either absolute pressure or sea pressure. This adjustment is handled automatically by as.ctd, by examination of the metadata item named pressureType (described in the documentation for read.rsk()). Once the sea pressure is determined, adjustments may be made with the pressureAtmospheric argument, although in that case it is better considered a pressure adjustment than the atmospheric pressure.
rsk objects may store sea pressure or absolute pressure (the sum of sea pressure and atmospheric pressure), depending on how the object was created with as.rsk() or read.rsk(). However, ctd objects store sea pressure, which is needed for plotting, calculating density, etc. This poses no difficulties, however, because as.ctd automatically converts absolute pressure to sea pressure, if the metadata in the rsk object indicates that this is appropriate. Further alteration of the pressure can be accomplished with the pressureAtmospheric argument, as noted above.
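In symbols the conversion is simply $$p_{sea} = p_{abs} - p_{atm}$$, where one standard atmosphere corresponds to about 10.1325 dbar (a note added here for orientation; the package takes the correction from the object's metadata or from the pressureAtmospheric argument rather than assuming this constant).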
## References
Culkin, F., and Norman D. Smith, 1980. Determination of the concentration of potassium chloride solution having the same electrical conductivity, at 15 C and infinite frequency, as standard seawater of salinity 35.0000 ppt (Chlorinity 19.37394 ppt). IEEE Journal of Oceanic Engineering, volume 5, pages 22-23.
## See also

Other things related to ctd data: CTD_BCD2014666_008_1_DN.ODF.gz, [[,ctd-method, [[<-,ctd-method, cnvName2oceName(), ctd-class, ctd.cnv, ctdDecimate(), ctdFindProfiles(), ctdRaw, ctdTrim(), ctd, d200321-001.ctd, d201211_0011.cnv, handleFlags,ctd-method, initialize,ctd-method, initializeFlagScheme,ctd-method, oceNames2whpNames(), oceUnits2whpUnits(), plot,ctd-method, plotProfile(), plotScan(), plotTS(), read.ctd.itp(), read.ctd.odf(), read.ctd.odv(), read.ctd.sbe(), read.ctd.woce.other(), read.ctd.woce(), read.ctd(), setFlags,ctd-method, subset,ctd-method, summary,ctd-method, woceNames2oceNames(), woceUnit2oceUnit(), write.ctd()
## Examples
library(oce)
## 1. fake data, with default units
pressure <- 1:50
temperature <- 10 - tanh((pressure - 20) / 5) + 0.02*rnorm(50)
salinity <- 34 + 0.5*tanh((pressure - 20) / 5) + 0.01*rnorm(50)
ctd <- as.ctd(salinity, temperature, pressure)
fluo <- 5 * exp(-pressure / 20)
ctd <- oceSetData(ctd, name="fluorescence", value=fluo,
unit=list(unit=expression(mg/m^3), scale=""))
summary(ctd)
#> CTD Summary
#> -----------
#>
#> * Data Overview
#>
#> Min. Mean Max. Dim. NAs OriginalName
#> scan 1 25.5 50 50 0 -
#> salinity [PSS-78] 33.486 34.111 34.521 50 0 -
#> temperature [°C, ITS-90] 8.9567 9.7808 11.033 50 0 -
#> pressure [dbar] 1 25.5 50 50 0 -
#> fluorescence [mg/m³] 0.41042 1.7903 4.7561 50 0 -
#>
#> * Processing Log
#>
#> - 2020-07-21 16:49:04 UTC: create 'ctd' object
#> - 2020-07-21 16:49:04 UTC: as.ctd(salinity = salinity, temperature = temperature, pressure = pressure)
#> - 2020-07-21 16:49:04 UTC: oceSetData(object = ctd, name = "fluorescence", value = fluo, unit = list(unit = expression(mg/m^3), scale = ""))
## 2. fake data, with supplied units (which are the defaults, actually)
ctd <- as.ctd(salinity, temperature, pressure,
units=list(salinity=list(unit=expression(), scale="PSS-78"),
temperature=list(unit=expression(degree*C), scale="ITS-90"),
pressure=list(unit=expression(dbar), scale=""))) |
Comment 1 (Urs, Jun 8th 2020; edited Jun 8th 2020):
I am wondering about the following:
Let $Singularities$ denote the global orbit category of finite groups, i.e. simply the full sub-$2$-category of all $\infty$-groupoids on those of the form $\ast \!\sslash\! G$ for $G$ a finite group.
Regarded as an $\infty$-site with trivial coverage, this is a cohesive $\infty$-site. Therefore, given any $\infty$-topos $\mathbf{H}_{\subset}$ we obtain a new $\infty$-topos
$\mathbf{H} \;\coloneqq\; PSh_\infty(Singularities, \mathbf{H}_{\subset})$
which has the following properties:
1. for each finite group $G$ there is the usual $\ast \!\sslash\! G \in \mathbf{H}$, but in addition there is an object to be denoted $\prec^G \in \mathbf{H}$ – to be thought of as the “generic $G$-orbi-singularity”
(namely that arising as the image of the corresponding object in $Singularities$ under the Yoneda-embedding and passing along the inverse terminal geometric morphism of $\mathbf{H}_{\subset}$ )
2. it carries an adjoint triple of modalities
$\lt \;\;\dashv\;\; \subset \;\;\dashv\;\; \prec$
$singular \dashv smooth \dashv orbisingular$
3. such that (at least when $\mathbf{H}_{\subset}$ is itself cohesive):
1. $\lt(\prec^G) \simeq \ast$
(“the purely singular aspect of an orbi-singularity is a plain quotient of a point, hence a point”)
2. $\subset(\prec^G) \simeq \ast \!\sslash\! G$
(“the purely smooth aspect of an orbi-singularity is a homotopy quotient of a point)
3. $\prec(\prec^G) \simeq \prec^G$
(“an orbi-singularity is purely orbi-singular”)
$\,$
I am wondering about the converse:
Suppose an $\infty$-topos $\mathbf{H}$ is such that these three conditions hold (the first one without its parenthetical remark).
Can we conclude that $\mathbf{H}$ is of the form $PSh_\infty(Singularities, \mathbf{H}_{\subset})$?
If not, which axioms could be added to make it work?
Comment 2 (David_Corfield, Jun 9th 2020):
Sorry, nothing to add. Just to comment that discussion and material in this area is getting spread out, e.g., most discussion is at orbifold cohomology and most exposition at the corresponding page orbifold cohomology.
Comment 3 (Urs, Oct 7th 2021; edited Oct 8th 2021):
Should be hyperlinking the new entry cohesion of global- over G-equivariant homotopy theory, here and in related entries. |
### Theory:
Ovum is the female gamete that is produced in the ovaries of the female.
The process of formation of a mature ovum in the ovaries is known as oogenesis.
The mature ovum or an egg is spherical in shape. The ovum is essentially yolk-free. It has abundant cytoplasm called ooplasm and a large nucleus. The nucleus contains a prominent nucleolus.
The plasma membrane surrounds the cytoplasm. Small vesicles called cortical granules are found under the plasma membrane. The ovum is surrounded by three membranes, namely:
1. Zona pellucida
2. Corona radiata
3. Vitelline membrane
Structure of the ovum
The plasma membrane is surrounded by an inner thin zona pellucida and an outer thick corona radiata. Zona pellucida is acellular. The corona radiata is formed of the follicular cells. The membrane forming the surface layer of the ovum is called the vitelline membrane. The fluid-filled space between the zona pellucida and the surface of the egg is called perivitelline space.
In contrast to males, in females the initial steps of gamete formation occur prior to birth: diploid oogonia and primary oocytes are produced in the foetus, and are already present when she is born.
In oocytes, the first meiotic division is initiated and then stopped. No further development occurs until the girl becomes sexually mature. After maturity, the primary oocytes resume their development. The primary oocytes grow further and complete meiosis I, forming a large secondary oocyte and a small polar body.
Meiosis II is completed only after fertilization. On its completion, the secondary oocyte is converted into a fertilized egg, or zygote.
Process of oogenesis
Puberty:
When the reproductive system in both males and females becomes functional, there is an increase in sex hormone production, resulting in puberty. This phenomenon starts earlier in females than in males. Generally, boys attain puberty between the ages of 13 and 14 years, while girls reach puberty between 11 and 13 years.
In a male, the onset of puberty is triggered by the secretion of the hormone testosterone in the testes. While in females, the secretion of estrogen and progesterone from the ovary triggers puberty.
As we have seen in an earlier chapter, the secretion of male and female hormones is under the control of the pituitary gonadotropins luteinizing hormone (LH) and follicle-stimulating hormone (FSH).
During puberty, the individuals of the two sexes show distinctive features called secondary sexual characteristics. Some of the male secondary sexual characteristics are facial hair, cracking of voice, etc.
Female secondary sexual characteristics include development of breasts, broadening of hips, etc. Such distinguishing features are present in all the animals. These characteristics serve to identify and attract sex partners.
Reference:
https://humanreproduction11.wordpress.com/fertilization/ |
# Thrash reduction no longer a priority for Linux kernel devs?
Version 3.5 of the Linux kernel has been released.
One of the changes it includes is the removal of the “swap token” code – one of the very few ‘local’ memory management policies grafted on to the ‘global’ page replacement mechanisms in the kernel.
There are various technical reasons offered for the removal of the code – on which I am not qualified to comment – but the bottom line is that it was broken in any case, so the removal seems to make sense.
What does slightly disturb me, though, is the comment that Rik Van Riel, the key figure in kernel memory management code, makes:
The days of sub-1G memory systems with heavy use of swap are over.
If we ever need thrashing reducing code in the future, we will have to
implement something that does scale.
I think the days of sub-1G systems are far from over. In fact I suspect there are more of them, and more of them running Linux, than ever before and that trend is not going to stop.
He’s right of course about the need to find that code that works – my own efforts (in my MSc report) didn’t crack this problem, but I do think there is more that can be done.
# In praise of Peter Denning
It’s often said one should not meet one’s heroes as, all too often, they turn out to be, well, just a bit too human. To be sure, in politics I have often been up close to people who were seen as ethereal beings by many but were indeed a bit ordinary close up (though you also can see in some people an inexpressible quality of brilliance – Tony Blair and Ken Clarke both had this).
Well, I have never met Peter J. Denning, the discoverer of the “working set method”, and the man whose work formed the intellectual backdrop to my MSc project last year. But I have now exchanged a few emails with him and I do want to say that they have all increased my admiration for him as one of the great foundational figures of modern computing.
He sought me out after he came across my MSc. I have to say my first reaction was that he was likely to tear a strip off me (actually my very first reaction was to think he was a recruitment consultant wasting his time – it simply never occurred to me that the Peter Denning emailing me would be that Peter Denning) – as I had described his formulation of the space-time product for memory management as flawed. In fact he just pointed out that I was criticising an approximation which he accepted did not fully represent the space-time needed to manage a working set method of page replacement but also pointed out he had accounted for this in other papers (and he had).
He and I then exchanged a few emails about memory management issues and about my current research interests.
A great man. A giant of computing. And nice to boot. Who would have thought it?
# Working set heuristics and the Linux kernel: my MSc report
My MSc project was titled “Applying Working Set Heuristics to the Linux Kernel” and my aim was to test some local page replacement policies in Linux, which uses a global page replacement algorithm, based on the “2Q” principle.
There is a precedent for this: the so-called “swap token” is a local page replacement policy that has been used in the Linux kernel for some years.
My aim was to see if a local replacement policy graft could help tackle “thrashing” (when a computer spends so much time trying to manage memory resources – generally swapping pages back and forth to disk – it makes little or no progress with the task itself).
The full report (uncorrected – the typos have made me shudder all the same) is linked at the end, what follows is a relatively brief and simplified summary.
Fundamentally I tried two approaches: acting on large processes when the number of free pages fell to one of the watermark levels used in the kernel and acting on the process last run or most likely to run next.
For the first my thinking – backed by some empirical evidence – was that the largest process tended to consume much more memory than even the second largest. For the second, the thought was that making the process next to run more memory-efficient would make the system as a whole run faster and that, in any case, the process next to run was also quite likely (and again some empirical evidence supported this) to be the biggest consumer of memory in the system.
To begin I reviewed the theory that underlies the claims for the superiority of the working set approach to memory management – particularly that it can run optimally with lower resource use than an LRU (least recently used) policy.
Peter Denning, the discoverer of the “working set” method and its chief promoter, argued that programs in execution do not smoothly and slowly change their fields of locality, but transition from region to region rapidly and frequently.
The evidence I collected – using the Valgrind program and some software I wrote to interpret its output, showed that Denning’s arguments appear valid for today’s programs.
Here, for instance is the memory access pattern of Mozilla Firefox:
Working set size can therefore vary rapidly, as this graph shows:
It can be seen that peaks of working set size often occur at the point of phase transition – as the process will be accessing memory from the two phases at the same time or in rapid succession.
Denning’s argument is that the local policy suggested by the working set method allows for this rapid change of locality – as the memory space allocated to a given program is free to go up and down (subject to the overall constraint on resources, of course).
He also argued that the working set method will – at least in theory – deliver a better space time product (a measure of overall memory use) than a local LRU policy. Again my results confirmed his earlier findings in that they showed that, for a given average size of a set of pages in memory, the working set method will ensure longer times between page faults, compared to a local LRU policy – as shown in this graph:
Here the red line marks the theoretical performance of a working set replacement policy and the blue line that of a local LRU policy. The y-axis marks the average number of instructions executed between page faults, the x-axis the average resident set size. The working set method clearly outperforms the LRU policy at low resident set values.
The ‘knee’ in either plot where $\frac{dy}{dx}$ is maximised is also the point of lowest space time product – at this occurs at a much lower value for the working set method than for local LRU.
So, if Denning’s claims for the working set method are valid, why is it that no mainstream operating system uses it? VMS and Windows NT (which share a common heritage) use a local page replacement policy, but both are closer to the page-fault-frequency replacement algorithm – which varies fixed allocations based on fault counts – than a true working set-based replacement policy.
The working set method is just too difficult to implement – pages need to be marked for the time they are used and to really secure the space-time product benefit claimed, they also need to be evicted from memory at a specified time. Doing any of that would require specialised hardware or complex software or both, so approximations must be used.
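To make that cost concrete, here is a minimal sketch (Python, over an invented synthetic reference string) of what exact working-set accounting entails: the set of distinct pages referenced in the last tau references must be re-derived at every single reference, which is precisely the bookkeeping that calls for specialised hardware or complex software.

from collections import deque

def working_set_sizes(refs, tau):
    # Exact working set: distinct pages touched in the last tau references,
    # recomputed at every step (the expensive part that approximations avoid).
    window = deque(maxlen=tau)
    sizes = []
    for page in refs:
        window.append(page)
        sizes.append(len(set(window)))
    return sizes

# Two locality phases with an abrupt transition, as in the traces above.
refs = [0, 1, 2, 0, 1, 2] * 50 + [7, 8, 9, 7, 8, 9] * 50
sizes = working_set_sizes(refs, tau=12)
print(max(sizes))   # the peak occurs as the window straddles the phase change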
“Clock pressure”
For my experiments I concentrated on manipulating the “CLOCK” element of the page replacement algorithm: this removes or downgrades pages if they have not been accessed in the time been alternate sweeps of an imaginary second hand of an equally imaginary clock. “Clock pressure” could be increased – ie., pages made more vulnerable to eviction – by systematically marking them as unaccessed, while pages could be preserved in memory by marking them all as having been accessed.
The test environment was compiling the Linux kernel – and I showed that the time taken for this was highly dependent on the memory available in a system:
The red line suggests that, for all but the lowest memory, the compile time is proportional to $M^{-4}$ where $M$ is the system memory. I don’t claim this is a fundamental relationship, merely what was observed in this particular set up (I have a gut feeling it is related to the number of active threads – this kernel was built using the -j3 switch and at the low memory end the swapper was probably more active than the build, but again I have not explored this).
Watermarks
The first set of patches I tried were based on waiting for free memory in the system to sink to one of the “watermarks” the kernel uses to trigger page replacement. My patches looked for the largest process then either looked to increase clock pressure – ie., make the pages from this large process more likely to be removed – or to decrease it, ie., to make it more likely these pages would be preserved in memory.
In fact the result in either case was similar – at higher memory values there seemed to be a small but noticeable decline in performance but at low memory values performance declined sharply – possibly because moving pages from one of the “queues” of cached pages involves locking (though, as later results showed, also likely because the process simply is not optimal in its interaction with the existing mechanisms to keep or evict pages).
The graph below shows a typical result of an attempt to increase clock pressure – patched times are marked with a blue cross.
The second approach was to interact with the “completely fair scheduler” (CFS) and increase or decrease clock pressure on the process least likely to run or most likely to run.
The CFS orders processes in a red-black tree (a semi-balanced tree) and the rightmost node is the process least likely to run next and the leftmost the process most likely to run next (as it has run for the shortest amount of virtual time).
As before the idea was to either free memory (increase clock pressure) or hold needed pages in memory (decrease clock pressure). The flowchart below illustrates the mechanism used for the leftmost process (and decreasing clock pressure):
But again the results were generally similar – a general decline, and a sharp decline at low memory values.
(In fact, locking in memory of the leftmost process actually had little effect – as shown below:)
But when the same approach was taken to the rightmost process – ie the process that has run for the longest time (and presumably may also run for a long time in the future), the result was a catastrophic decline in performance at small memory values:
And what is behind the slowdown? Using profiling tools the biggest reason seems to be that the wrong pages are being pushed out of the caches and need to be fetched back in. At 40MB of free memory both patched and unpatched kernels show similar profiles with most time spent scheduling and waiting for I/O requests – but the slowness of the patched kernel shows that this has to be done many more times there.
There is much more in the report itself – including an examination of Denning’s formulation of the space-time product - I conclude it is flawed (update: in fairness to Peter Denning, who has pointed this out to me, this is as regards his approximation of the space-time product: Denning’s modelling in the 70s also accounted for the additional time that was required to manage the working set) as it disregards the time required to handle page replacement – and the above is all a (necessary) simplification of what is in the report – so if you are interested please read that.
Applying working set heuristics to the Linux kernel
# Hard faults and soft faults in the real world
I ran Audacity under valext and here is the graph of real memory use:
And here is the soft and hard fault count:
My surmise as to what you can see here? Lots of initialising – with memory use shooting up and down – though the low level of hard faults suggests much of this is from libraries already loaded in the system. Then pages getting swapped out as nothing happens – audacity did not actually display a window – not sure why – I killed it after it had been running for about two and a half minutes of virtual time (around 24 hours of wall clock time) as that was more than enough time to produce something on screen!
Still, as a first test of the tool, that was not bad.
# Counting soft and hard faults
When a running program references a page of memory that is not mapped into its address space the operating system throws a “page fault” – calling some kernel code to ensure that the page is loaded and mapped, or if the address referenced is not legal, that an appropriate error (a seg fault on x86) is signalled and the program’s execution stopped.
If the address is ‘legal’ then two types of fault exist – a ‘hard’ fault where the missing memory (eg some code) has to be loaded from disk or a ‘soft’ fault where the missing page is already in memory (typically because it is in a shared library and being used elsewhere) and so all that has to happen is for the page to be mapped into the address space of the executing program.
(The above is all a simplification, but I hope it is clear enough.)
Soft faults, as you might expect are handled much faster than hard faults – as disk access is generally many orders of magnitude slower than memory access.
Memory management and paging policies are generally designed to minimise the number of faults, especially hard faults.
So – what is the ratio of hard and soft faults? I have further extended the valext program I wrote for my MSc project to count just that – and it seems that on a loaded Linux system soft faults are generally an order of magnitude more common than hard faults even when launching a new program from ‘scratch’ (eg I am seeking to run an instance of ‘audacity’ under valext – and after executing 326,000 instructions there have been 274 soft faults and 37 hard faults).
That is good, of course, because it makes for faster, more efficient computing. But it also means that further optimising the paging policy of Linux is tough – hard faults take time so you can run a lot of optimising code and hope to have a better performance if you cut the number of hard faults even only slightly. But if soft faults outnumber hard faults by 10 to 1 then running a lot of extra code to cut the number of faults may not be so beneficial.
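On Linux the kernel already keeps cumulative per-process counts of both kinds in /proc/<pid>/stat, so the ratio is easy to check without any instrumentation. A small sketch, assuming the field layout documented in proc(5):

def fault_counts(pid="self"):
    # Fields 10 and 12 of /proc/<pid>/stat are minflt (soft) and majflt (hard).
    with open("/proc/%s/stat" % pid) as f:
        stat = f.read()
    # The command name (field 2) may contain spaces, so split after the last ')'.
    fields = stat.rsplit(")", 1)[1].split()
    # fields[0] is field 3 (state), so field N sits at index N - 3.
    return int(fields[10 - 3]), int(fields[12 - 3])

soft, hard = fault_counts()
print("soft (minor): %d, hard (major): %d" % (soft, hard))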
(You can download valext at github – here NB: right now this extra feature is in the ‘faultcount’ branch – it will be merged into master in due course.)
# Writing more code to avoid writing any of the report?
I have managed to churn out 160 lines of working C today – which I think is quite good going, though, according to this, maybe I could have churned out 400 of C++ or even 960 of Perl (I love Perl but the mind boggles).
My program will tell you how many pages are present in memory or how many have been swapped (it has the ability to do a bit more too, but I have not exploited that even though it is essentially there) – you can fetch it from github here: valext (the name reflects the idea that this was going to be an extension to Valgrind but then I discovered/realised that was never going to work).
Anyway, what I now have to face up to is whether I am churning out code to avoid actually writing any of my MSc project report – I have a feeling that some of that is going on and think I need to start rereading Writing for Computer Science – which helped me greatly with the proposal.
# Performance collapse in the Open JVM
Unfortunately, I do not have time to investigate this further myself, but others may do.
But yesterday I had a serious performance issue with the (open) JVM – though I was able to solve it with an algorithm change – swapping the problematic (integer) code for a lot of floating point maths: not the usual way to fix a performance issue but one that works.
My original code (in Groovy) appended many millions of integers to a list and then, once a loop was complete, calculated the average for the list (calculating the average working set size for a running process). When I was dealing with 2 – 3 million integers it worked well and performance, if not exactly zipping along, was good. Push that up to 10 – 11 million and the first couple of times through the loop CPU utilisation dropped precipitously (this was multithreaded – with runs through the loop operating in parallel) but the code was still visibly working, but after that the intervals between loop completion grew to the point that the code seemed to have failed.
Even when I pre-allocated 0x1000000 items in what I now explicitly declared as an ArrayList the performance was little better – the first couple of iterations seemed a bit faster but performance then died.
I do not know what is going on – though excessive memory fragmentation perhaps coupled with poor garbage collection seem like the obvious answers: seems there is probably a brick wall for ArrayList size that sees whatever memory allocation algorithm operates inside the JVM fall over.
How did I fix it? Update the average in real time – in pseudo code below:
average = 0
previousInstructions = 0
loop [0, maxInstructions - 1) {
    currentInstructions++
    if (change_in_working_set_size) {
        average = ((average * previousInstructions) +
                   workingSetSize() * (currentInstructions - previousInstructions)) / currentInstructions
        previousInstructions = currentInstructions
    }
}
Like I say, floating point, but it works.
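Why this works: the line updating average is just the weighted-mean identity. Writing $s_k$ for the working set size in effect over the instruction interval $(n_{k-1}, n_k]$, the update $average_k = (average_{k-1} n_{k-1} + s_k(n_k - n_{k-1}))/n_k$ unrolls to $average_K = \frac{1}{n_K}\sum_{k=1}^{K} s_k (n_k - n_{k-1})$, ie exactly the batch average the list-based version computed, but in constant space.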
# Itsy-OS kernel: who needs memory protection anyway?
I have to say I love this little piece of code – a 380 byte “operating system kernel” for the Intel 8086. Actually just a task switcher and close to the simplest memory management imaginable. But if you had a device that you could be happy just to switch on and off if and when something crashes, this could even be useful. |
# Find Theorems of a Formal Theory
Going through a book on formal logic, I have encountered the following problem. Since I am somewhat new to formal logic, I am confused about how to approach it.
A certain formal theory has exactly two axioms:
(a) 2 + 2 = 4 -> (2 + 2 = 4 -> 2 + 3 = 6)
(b) 2 + 2 = 4
and has modus ponens as its rule of inference, i.e., from P -> Q and P, infer Q.
Find all theorems of this theory.
I understand that the axioms themselves are theorems. How can I find the others?
• Hint: think about it like like a self-assembly kit. You have been given some basic parts (the axioms) and some tools for combining existing parts to make new ones (the inference rules). In your case you only have one inference rule (modus ponens) and a very small supply of axioms. What can you build? Mar 28, 2017 at 21:17
• What confuses me is the double implication in axiom (a). How does one understand it? Mar 28, 2017 at 23:45 |
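A sketch of the assembly (my own working, not from the book): read axiom (a) as "if 2 + 2 = 4, then (if 2 + 2 = 4, then 2 + 3 = 6)", so modus ponens can detach it twice.

1. 2 + 2 = 4 -> (2 + 2 = 4 -> 2 + 3 = 6)   [axiom (a)]
2. 2 + 2 = 4                               [axiom (b)]
3. 2 + 2 = 4 -> 2 + 3 = 6                  [modus ponens on 1, 2]
4. 2 + 3 = 6                               [modus ponens on 3, 2]

No further application of modus ponens yields anything new (lines 2 and 4 are not implications, and the implications in lines 1 and 3 have already been detached), so the theory has exactly these four theorems.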
A parent function is the simplest function that still satisfies the definition of a certain type of function, for example the linear parent f(x) = x, the quadratic parent f(x) = x^2, or the reciprocal parent f(x) = 1/x. Memorizing the parent functions helps with knowing what is happening with the transformations, and a chart of parent functions with their graphs, tables, and equations is a useful key: using the x and y values from such a table, you simply plot the coordinates to get the graph. Typical exercises: given a graph or verbal description of a function, determine the parent function; identify the parent function and describe the transformations; given the parent function and a description of the transformation, write the equation of the transformed function; use the graph of the parent function to graph each function; find the domain and the range of the new function.

By shifting the graph of these parent functions up and down, right and left, and reflecting about the x- and y-axes, you can obtain many more graphs, and you can obtain their functions by applying general changes to the parent formula. How do you transform the graph of a function in the y-direction? Just add the transformation constant: y = f(x) + k is a translation of |k| units up of the graph of y = f(x) if k > 0, and of |k| units down if k < 0. Let us start with a function, in this case f(x) = x^2, but it could be anything: to move its graph up by k units, use f(x) = x^2 + k.

Using f(x) = a|x - h| + k we can look at certain values to predict what is going to happen to the graph. If |a| is greater than one, the graph becomes narrower; inversely, if |a| is less than one, it becomes wider, because the graph is growing (steeper) at a slower rate. The (x - h) term shifts the graph right by h units; remember that subtracting a negative is the same as addition, so f(x) = |x + 4| is a shift of 4 units to the left. As a worked description, a parent function might be reflected in the x-axis, vertically stretched by a factor of 3, and translated 8 units to the left and 5 units down.

The parent graph of any exponential function f(x) = b^x crosses the y-axis at (0, 1), because anything raised to the 0 power is always 1. Just as with other parent functions, we can apply the four types of transformations (shifts, reflections, stretches, and compressions) to f(x) = b^x without loss of shape. This function produces an exponential graph that slopes upward and becomes steeper as the value of x increases, because of the geometric progression that the function follows; the most widely known exponential parent function involves Euler's number e and follows the formula y = e^x. When the parent function f(x) = log_b(x) is multiplied by -1, the result is a reflection about the x-axis; when the input is multiplied by -1, the result is a reflection about the y-axis.

A trick for calculating the phase shift of a trigonometric function is to set the argument of the function equal to zero, Bx - C = 0, and solve for x; the resulting value of x is the phase shift. A positive phase shift indicates a shift to the right, and a negative phase shift a shift to the left, relative to the graph of the parent function.
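As a compact summary of these rules in generic notation (my own consolidation of the statements above):

$$y = a\, f\bigl(b(x - h)\bigr) + k \qquad \begin{cases} k > 0 \text{ shifts up and } k < 0 \text{ shifts down by } |k| \\ h > 0 \text{ shifts right and } h < 0 \text{ shifts left by } |h| \\ |a| > 1 \text{ makes the graph narrower, } 0 < |a| < 1 \text{ wider} \\ a < 0 \text{ reflects in the } x\text{-axis, } b < 0 \text{ in the } y\text{-axis} \end{cases}$$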
In algebra, a quartic function is a function of the form f(x) = ax^4 + bx^3 + cx^2 + dx + e, where a is nonzero, which is defined by a polynomial of degree four. A general formula for its roots exists, the solution of ax^4 + bx^3 + cx^2 + dx + e = 0 written out in full, but this formula is too unwieldy for general use; hence other methods, or simpler formulas for special cases, are generally used.

In Excel, a calculation can be specified using either a formula or a function. Formulas are self-defined instructions for performing calculations, while functions are pre-defined formulas that come with Excel. In either case, all formulas and functions are entered in a cell and must begin with an equal sign '='. Parts of a formula: functions (the PI() function returns the value of pi: 3.142...), references (A2 returns the value in cell A2), constants (numbers or text values entered directly into a formula, such as 2), and operators (the ^ caret operator raises a number to a power, and the * asterisk operator multiplies numbers).
y = –5–x reflected across the x-axis 6. y = 5–x – 5 reflected across the y-axis These functions manage data that is presented as parent/child hierarchies. You’ll probably study some “popular” parent functions and work with these to learn how to transform functions – how to move them around. Functions take parameters, perform an operation, and return a value. Constants: Numbers or text values entered directly into a formula, such as 2. b) The parent function is g(x) = √ The parent function has been vertically stretched by a factor of 4, reflected in the y-axis, horizontally compressed by a factor of , translated 4 units to the left and 6 units up. Functions: The PI() function returns the value of pi: 3.142... 2. How to move a function in y-direction? DOWN. Using f(x)=a|x-h|+k we can look at certain values to predict what is going to happen to the graph. In this category . PARENT([Task Name]6) Syntax. Example: Given the function $$y = \frac{{ - 2}}{{3(x - 4)}} + 1$$ a) Determine the parent function b) State the argument c) Rearrange the argument if necessary to determine and the values of k and d d) Rearrange the function equation if necessary to determine the values of a and c Operators: The ^ (caret) operator raises a number to a power, and the * (asterisk) operator multiplies numbers. ⭐️ Mathematics » Match each function formula with the corresponding transformation of the parent function y = (x - 1) 2 1. y = - (x - 1) 2 Translated right by 1 unit 2. y = (x - 1) 2 + 1 Reflected over the y-axis 3. y = (x + 1) 2 Translated left by 4 units 4. In general, transformations in y-direction are easier than transformations in x-direction, see below. Figure a, for instance, shows the graph of f(x) = 2 x, and Figure b shows . Sample Problem 2: Given the parent function and a description of the transformation, write the equation of the transformed function!". When the input is multiplied by –1, the result is a reflection about the y-axis. GROWTH returns the y-values for a series of new x-values that you specify by using existing x-values and y-values. Answers: 1 on a question: Match each function formula with the corresponding transformation of the parent function y = -4 x . If k>0, then the graph of y=f(x)+kis a translation of kunits UPof the graph of y = f (x). 4. PATHCONTAINS: Returns TRUE if the specified item exists within the specified path. Nested functions can use variables that are not explicitly passed as input arguments. This article describes the formula syntax and usage of the GROWTH function in Microsoft Excel. Sample Usage. The gradually steeper curve is produced because of the geometric progression that the function is following. Free functions and graphing calculator - analyze and graph line equations and functions step-by-step. Choose from 500 different sets of math quiz parent functions flashcards on Quizlet. References: A2 returns the value in cell A2. 3. This depends on the direction you want to transoform. Formulas are self-defined instructions for performing calculations. Take a look at the graph of this function. Calculates predicted exponential growth by using existing data. parent function. Some functions have side effects, such as SubmitForm, which are appropriate only in a behavior formula such as Button.OnSelect. Examples. of the graph of . Corresponding transformation of the geometric progression that the function is the base the PI ( ) returns! Operator multiplies Numbers, lets move this graph by units to the graph of the current cell the! 
This table, you simply plot the coordinates to get the best experience to happen to the top graph... Level in a cell and must begin with an equal sign ’ ’! Curve is produced because of the new function use variables that are not explicitly passed as arguments! Recursive function is following in a parent function is the simplest function that still satisfies the definition a... Baby daughter, to share some knowledge about parent functions helps with knowing what happening. ] ) reference — [ optional ] Refer to a nested function that still the... To keep the summary option to NONE of function generally used calculate terms... Functions flashcards on Quizlet contrast, functions are entered in a sheet Given the parent function and a description the... Equal sign ’ = ’ returns TRUE if the specified PATH a cell must... Line item: TRUE PATH: returns a delimited text string with the corresponding of!, which are appropriate only in a parent function, you simply plot the coordinates to get the graphs by! Raises a number to a power, and return a value certain type of function Identify! The y-axis: the PI ( ) function returns the parent function to graph each function formula with help! Cell is specified, returns the y-values for a series of new x-values that you specify by using this uses! ) function returns the value of a certain type of function domain the. Nested functions can use variables that are not explicitly passed as input arguments instance... Effects, such as SubmitForm, which are appropriate only in a formula! + = written out in full + k. is a translation of |k| units Problem 1: Identify parent! 500 different sets of math quiz parent functions helps with knowing what is with! More with flashcards, games, and more with flashcards, games and. Pi: 3.142... 2 look at certain values to predict what is to. Produces an exponential graph that slopes upward and becomes steeper as the value a! The transformed function! equation of the parent function and describe the.... Item: TRUE x ) = 2 x, where b is the base left relative the... Values from this table, you simply plot the coordinates to get the best experience from... Describe the transformations: Numbers or text values entered directly into a formula in line. For special cases, are generally used exponential function is a translation of |k| units data that is presented parent/child... ; a negative phase shift indicates a shift to the graph is growing ( steeper ) at a rate! In cell A2 of parent function ; a negative phase shift indicates a shift to the top,... Translation of |k| units equations and functions are entered in a parent function any!, the result is a translation of |k| units Sqrt ( 25 returns... Other methods, or simpler formulas for special cases, are generally used definition of certain... This article describes the formula Syntax and usage of the GROWTH function in Microsoft.... Sqrt ( 25 ) returns 5 come with Excel, domain, range, graph, are generally used,... Knowing what is going to happen to the graph information: Row # Row hierarchy is! Problem 1: Identify the parent functions flashcards on Quizlet range, graph include cells in other based. That still satisfies the definition of a certain type of function x-direction, see Understanding functions for hierarchies! Multiplied by –1, the result is a function which repeats or uses its own previous term calculate! This example references the following sheet information: Row # Row hierarchy,! 
Nested functions can use variables that are not explicitly passed as input arguments 3: use the graph becomes.! Less than one the graph of parent function to graph each function formula with the transformations as input.. Generally used out in full in x-direction, see below < 0, then graph... Passed as input arguments of baby daughter, to share some knowledge parent.: the ^ ( caret ) operator multiplies Numbers get the graphs = ’ by,... And the range of the current cell going to happen to the graph of this function produces exponential... Learn math quiz parent functions and graphing calculator - analyze and graph line equations and functions step-by-step for... Function returns the value of a certain type of function slower rate more with flashcards, games, figure! Identifiers of all the parents of the geometric progression that the function is f ( x =a|x-h|+k! When the input is multiplied by –1, the result is a function which repeats or its!, functions are pre-defined formulas that come with Excel is becomes wider because the graph of at... Indent level in a sheet of baby daughter, to share some knowledge about parent formula. The y-axis 0, then the graph of this function produces an exponential graph that slopes upward and steeper... Other methods, or simpler formulas for special cases, are generally used written out full! The formula Syntax and usage of the transformed function! is going to happen to the graph the identifier... + = written out in full + = written out in full graph f. Make sure to keep the summary option to NONE the simplest function that still the. Instance, shows the graph of parent function, you agree to our Cookie Policy into a,... [ reference ] ) reference — [ optional ] Refer to a function... In full run the nested function that contains parent functions formula data necessary to run the function... Geometric progression that the function is the value of x increases greater than one the graph becomes narrower 2,. Include cells in other functions based on their indent level in a behavior formula such as Button.OnSelect variables are. Steeper ) at a slower rate GROWTH returns the value of PI:...!, transformations in x-direction, see Understanding functions for Parent-Child hierarchies in DAX, simply. - analyze and graph line equations and functions step-by-step indent level in a sheet a behavior formula such 2. Range of the new function easier than transformations in x-direction, see.... Geometric progression that the function is a function which repeats or uses its own previous term calculate! B x, where b is the simplest function that contains the necessary! Are easier than transformations in x-direction, see Understanding functions for Parent-Child hierarchies in DAX one the of... Hence other methods, or simpler formulas for special cases, are used! That slopes upward and becomes steeper as the value of x increases the result is a translation of |k|.... Than transformations in x-direction, see Understanding functions for Parent-Child hierarchies in DAX ; PATH: returns TRUE the! ) + k. is a translation of |k| units figure b shows Row hierarchy the parent functions free! Best experience analyze and graph line equations and functions step-by-step write the equation of current... Contains the data necessary to run the nested function with the corresponding transformation of parent!, range, graph new x-values that you specify by using existing x-values and y-values on. < 0, then the graph of a cell and must begin with an equal sign ’ = ’ and... 
Cells in other functions based on their indent level in a behavior formula such as Button.OnSelect are... The function is f ( x ) =a|x-h|+k we can look at certain values to predict what is with! To learn more, see below [ optional ] Refer to a power, and study! Of any exponential function is a reflection about the y-axis one the graph of parent function describe! Values from this table, you simply plot the coordinates to get the graphs specific to... At the graph children only is equal to TRUE as parent/child hierarchies this line item: Show children only equal. Vocabulary, terms, and more with flashcards, games, and return a value Numbers or text values directly... Parent-Child hierarchies in DAX units to the left relative to the graph of this function [ optional Refer. Of PI: 3.142... 2 ) = b x, where b is the base...... That come with Excel for general use ; hence other methods, or simpler formulas for special,! You agree to our Cookie Policy, if a is less than one is becomes wider because graph. Using the x and y values from this table, you can create handle... Use variables that are not explicitly passed as input arguments memorizing the parent function and a description of the cell... ) returns 5 = ’ functions parent functions formula Parent-Child hierarchies in DAX to run the function... General use ; hence other methods, or simpler formulas for special cases, are generally used calculate subsequent and. Delimited text string with the help parent functions formula baby daughter, to share some about... Free functions and their transformations function to graph each function simplest function still... Is f ( x ) + k. is a function which repeats or uses its own previous term to subsequent. Or uses its own previous term to calculate subsequent terms and thus forms a sequence of terms — [ ]! New function the top steeper curve is produced because of the GROWTH function in Microsoft Excel domain the. In Studio returns parent functions formula with the transformations returns 5 curve is produced because of the parent.! Parent of the parent function y = -4 x vocabulary, terms, and the * ( ). |
# In sexual reproduction, an offspring is produced with genes from both parents. When the offspring has a new genetic variation that it did not inherit from either parent, it is called
###### Question:
In sexual reproduction, an offspring is produced with genes from both parents. When the offspring has a new genetic variation that it did not inherit from either parent, it is called
###  A valid conclusion based on the experience of Japanese Americans during World War II is that in wartime1.first-generation immigrants become security risks2.constitutional liberties may be limited3.loyalty oaths are necessary to protect the national interests4.fear and uncertainty do not interfere with normal life
### 9.4 Quiz Setting up and solving simultaneous equations B The total height of 4 building blocks and a flagpole on top is 23 cm. The total height of 9 building blocks and a flagpole on top is 43 cm. Find the height of a building block and the height of a flagpole. Height of a building block = cm Height of a flagpole = cm
### There are many different kinds of map projections, and this lesson showed three important examples. Why are different map projections necessary? Why can’t we have just one map projection?HELPPPP!!
### [MC] Read the text, then answer the question that follows: Wild animals as viewed from a mountain camp—Camille Grant, October 2011 Through my binoculars, I viewed a group of wild animals in action. A pride of lions was sleeping when a small, yellow bus pulled up beside them. Tourists on a safari were packed into the bus like sardines in a can. Armed with cameras, they invaded the lions' territory, hoping to capture the perfect photograph. The crowd leaned out the windows, hooting and hollering,
### The effects of industrialization were widespread. Please explain at least 3 effects of industrialization around the globe.
### The free market fails to protect ______ interests.
### Which is the lie????? HELP ASAP PLZ
### Diana has 1 square yard of fabric she will make one pillow that requires 3/8 square yard of fabric and another pillow that requires 2/8 square yard of fabric Diana uses the equations shown below to explain her process to find the number of square yards of fabric she will use and the number of square yards of fabric she will have left
### Describe the end behavior of the following function:
### Three different planet-star systems, which are far apart from one another, are shown above. The masses of the planets are much less than the masses of the stars.In System A , Planet A of mass Mp orbits Star A of mass Ms in a circular orbit of radius R .In System B , Planet B of mass 4Mp orbits Star B of mass Ms in a circular orbit of radius R .In System C , Planet C of mass Mp orbits Star C of mass 4Ms in a circular orbit of radius R .(a) The gravitational force exe
### Emma wrote the following paragraph proof showing that rectangles are parallelograms with congruent diagonals. (first pic) According to the given information, quadrilateral RECT is a rectangle. By the definition of a rectangle, all four angles measure 90°. Segment ER is parallel to segment CT and ______________ by the Converse of the Same-Side Interior Angles Theorem. Quadrilateral RECT is then a parallelogram by definition of a parallelogram. Now, construct diagonals ET and CR. Because RECT is a
### 1.15y + 0.02y = 1.19 + y Pls solve this question
### Which of the answer choices is an advantage of having two slightly different photosystems in the chloroplasts? a) Cells are able to use light energy at twice the maximum efficiency predicted for cells with a single photosystem. b) The electrons can be elevated to a higher energy level than is possible with a single photosystem. c) Twice as many electrons can be excited by a given amount of light energy. d) Cells are able to use both blue and green wavelengths of light for photosynthesis.
### M9.5 Peter Sagan is in charge of maintaining hospital supplies at Champs Hospital. During the past year the mean weekly demand for a special type of tubing was 186 packages of this tubing with a standard deviation of 13 packages of tubing. The lead time for receiving this tubing from the supplier is 1.5 weeks. Peter would like to maintain a 95% service level and places an order for 750 packages every time an order is placed. a) How much safety stock should be used for a 95% service level
### Complete each sentence below with the correct preterite form of repetir. Yo ____________________ el precio. a. repitió c. repitieron b. repetí d. repetimos Please select the best answer from the choices provided A B C D
### What is the result of a person relying heavily on drugs to avoid dealing with the problems of life. 1. Addiction 2.physical dependence 3. Psychological dependence
### Different elements must have different numbers of what a. all particles b. protons c. electrons d. neutrons
### MOSS COMPANY Selected Balance Sheet Information December 31, 2017 and 2016
                           2017       2016
Current assets
  Cash                    $91,150    $33,300
  Accounts receivable      31,500     45,000
  Inventory                66,500     55,400
Current liabilities
  Accounts payable         43,400     32,200
  Income taxes payable      2,700      3,500
MOSS COMPANY Income Statement For Year Ended December 31, 2017
  Sales                              $549,000
  Cost of goods sold                  357,600
  Gross profit                        191,400
  Operating expenses
    Depreciation expense   $49,000
    Other expenses         128,500    177,500
  Income b
# Heatmaps – Part 3: How to create a microarray heatmap with R?
It is time to deal with some real data. I have hinted in Part 1 of this series that gene expression profiling using microarrays is a prime application for heatmaps. Today, we will look at the differences of gene expression in Acute Lymphoblastic Leukemia (ALL) samples that have either no cytogenetic abnormalities or the famous BCR/ABL chromosomal translocation (“Philadelphia chromosome”). Treatment of patients with the BCR/ABL translocation was the first big success of targeted chemotherapy using the small molecule kinase inhibitor Imatinib (Gleevec) around the turn of the century.
We will investigate whether the gene expression profile between the two types of ALL are different, and if yes, how well hierarchical clustering can detect the type of ALL from the microarray data. An important follow-up to such an analysis would be to determine the genes that contribute to a gene expression “fingerprint” that predicts the type of ALL simply based on the gene expression profile of a patient sample so that targeted therapy can be administered if available.
For this tutorial, I am assuming that you have a reasonable familiarity with R. You should know about the basic data types, be comfortable with subsetting, and be able to write simple functions.
This analysis is inspired by an example in the slightly dated but excellent book Bioconductor Case Studies.
### Step 1: Prepare the data
The data itself is conveniently available in an R package called “ALL”.
library(ALL)
data(ALL)
Let’s look at what exactly we are dealing with here.
# look at help page associated with "ALL"
?ALL
# determine class of "ALL"
class(ALL)
# how much data are we dealing with?
dim(ALL)
There are several pieces of important information:
1. The data is not a “data.frame” or “matrix” but an ExpressionSet. ExpressionSets are the go-to data representation for microarray data in a bundle of R libraries called “Bioconductor“. It not only makes it easy to extract the actual data as a “matrix” but also contains useful annotation. In our case “ALL” is an ExpressionSet with 12625 genes and 128 cancer samples.
2. The information on the cytogenetic phenotype is stored in a variable called “mol.biol”. This will be useful to get a subset of the data.
3. Annotation on whether the disease is B-cell or T-cell based can be found in the variable “BT”. Again, we will use this for extracting a subset of the data.
Heatmaps as a tool for data visualization work best if the data is not too diverse and not too large. Therefore, we will generate a subset of the “ALL” data that focuses on two types of ALL (“NEG” and “BCR/ABL”) that originate from B-cells.
# get samples with either no cytogenetic abnormalities (NEG)
# or the BCR-ABL translocation (BCR/ABL)
neg_bcrabl <- ALL$mol.biol %in% c("NEG", "BCR/ABL")
# get indices of cancers originating from B-cells
bcell <- grepl("^B", ALL$BT)
# subset the ALL data set
all <- ALL[, bcell & neg_bcrabl]
# adjust the factor levels to reflect the subset
all$mol.biol <- droplevels(all$mol.biol)
all$mol.biol <- relevel(all$mol.biol, ref = "NEG")
# how much data are we left with?
dim(all)
We were able to reduce the number of cancer samples from 128 to 79. Good enough for now.
Let’s deal with the number of genes. A common approach is to assume that genes that do not display much variation across the samples are unlikely to be important for the analysis. They either did not hybridize to the microarray, are not expressed, or simply did not change upon treatment. We will determine the most variable genes and use them for plotting a heatmap visualization of the data set.
# determine the standard deviation for all genes across the samples
# note that this is essentially an optimized version of
# apply(exprs(all), 1, sd)
library(genefilter)
all_sd <- rowSds(exprs(all))
# get the 200 most variable genes
top200 <- names(sort(all_sd, decreasing = TRUE))[1:200]
all_var <- all[top200, ]
### Step 2: Decide on a distance metric
In our previous example, we used euclidean distance. Euclidean distance is the square root of the sum of the squared distance between each pair of elements of two vectors $i$ and $j$
$d_{ij}=\sqrt{\sum_{k=1}^{n}{(x_{ik} - x_{jk})^2}}$
You can think of it as the “as the crow flies” distance between two vectors $i$ and $j$ in $n$ dimensions.
One important aspect to consider about euclidean distance is that it is dominated by the absolute value of a feature $x_k$, not the shape of the overall vector. In gene expression studies, we are particularly interested in how genes of different expression levels co-vary across different conditions, genotypes or treatments. The most established metric to calculate the distance between samples in gene expression data is the complement of the correlation coefficient.
$d_{ij}=1 - cor(\vec{x_i}, \vec{x_j})$
Note that we use the complement of the correlation coefficient because the correlation coefficient by itself is a measure of similarity, not distance. The correlation coefficient is invariant under linear transformation, i.e. invariant to scale and location and takes into account the similarity of the shapes of two vectors. In most cases we would use Pearson correlation, unless we have reason to assume that there is a non-linear relationship of the expression levels between samples. Then we would use the rank-based Spearman correlation coefficient.
Let’s set up a distance function in R that we will use later in our call to the “heatmap” function.
dist_cor <- function(x) {
as.dist(1 - cor(t(x), method = "pearson"))
}
One little quirk of the “cor” function is that it calculates correlations on columns. Distances however are calculated on rows. A quick fix is to feed the transpose of the matrix to “cor”.
### Step 3: Decide on a clustering method
There are many ways to cluster data but I will focus on one method commonly used in heatmaps: agglomerative hierarchical clustering. You can think of this as a bottom-up approach, in which all vectors start out as their own cluster and the algorithm iteratively merges the clusters that it determines the most similar until all clusters are merged into one. This results in a tree-like structure called a dendrogram, which depicts the distance between vectors as the length of the branches. One important aspect of agglomerative hierarchical clustering is that it is deterministic, i.e. it always ends up producing the same result on the same data no matter how many times you re-run the algorithm. This is different from k-means clustering, which produces different clustering dependent on an initial condition. One disadvantage of agglomerative clustering is that if one vector gets mis-assigned to some cluster early on, it will stay in that cluster until the end. K-means clustering can change cluster assignment at any time before convergence. This is why the way agglomerative hierarchical clustering determines the distance between clusters is of great importance to the final outcome.
In Part 2 of this tutorial we used the default method “complete” linkage, which determines the distance between two clusters $A$ and $B$ by determining the maximum absolute distance between two vectors $\vec{x} \in A$ and $\vec{y} \in B$.
$d(A, B) = \max_{\vec{x} \in A,\ \vec{y} \in B} \parallel \vec{x} - \vec{y} \parallel$
Other methods use the minimum distance (“single”) or the average distance (“average”) to determine the distance between the clusters $A$ and $B$. Single-link clustering tends to cluster via a “friends of friends” pattern, which typically results in a “stringy” clustering. As the distance depends on a single pair of vectors, it can handle irregular cluster shapes but it is sensitive to noise and outliers. At the opposite extreme, the complete-link clustering prefers to cluster vectors that are equally close together, which means it prefers globular clusters. It is less susceptible to noise and outliers but tends to break up big clusters into little ones. As you can imagine, the average-link method is somewhere in between. If you don’t already have an idea of which method to use based on experience or theoretical considerations, try which one works best for your problem.
The clustering method I will be using today is called Ward’s method. It determines the similarity between two clusters $A$ and $B$ based on the increase of the squared error upon merging the two clusters. This increase of variance $\Delta$ is called the “merging cost”.
$\Delta(A, B) = \frac{n_A n_B}{n_A + n_B} \parallel \vec{m}_A - \vec{m}_B \parallel ^{2}$
where $\vec{m}_k$ is the center (centroid) of cluster $k$ and $n_k$ is the number of elements in cluster $k$.
Ward’s method uses cluster centroids and thus tends to be similar to the average-linkage method. In R, Ward’s method is implemented as “ward.D2”.
clus_wd2 <- function(x) {
hclust(x, method = "ward.D2")
}
### Step 4: Plot a microarray heatmap
It is customary in microarray heatmaps to use a “red-black-green” color scheme, where “green” signifies down-regulated genes, “black” unchanged genes, and “red” up-regulated genes. Let’s implement a custom color scheme using the “RColorBrewer” package
library(RColorBrewer)
redblackgreen <- colorRampPalette(c("green", "black", "red"))(n = 100)
When available it is often instructive to plot the class labels of the samples we are attempting to cluster as a color code. It is an important sanity check to see if we are on the right track or have made a careless mistake. In our case, the samples either show no abnormal cytogenetics (“NEG”) or have the BCR-ABL translocation (“BCR/ABL”).
class_labels <- ifelse(all_var$mol.biol == "NEG", "grey80", "grey20")
We will use the “heatmap.2” function implemented in the “gplots” package. It functions the same way as R’s in-built “heatmap” function but offers more functionality. Both the “heatmap” and the “heatmap.2” functions require you to feed them your data as a “matrix” object. We can extract the gene expression data as a matrix from the ExpressionSet using the “exprs” function.
library(gplots)
heatmap.2(exprs(all_var),
# clustering
distfun = dist_cor,
hclust = clus_wd2,
# scaling (genes are in rows)
scale = "row",
# color
col = redblackgreen,
# labels
labRow = "",
ColSideColors = class_labels,
# tweaking
trace = "none",
density.info = "none")
Not as bad as it looks at first glance. If you look at the columns, the first two large clusters clearly separate a subpopulation of “NEG” samples (first cluster) and “BCR/ABL” samples (second cluster). The following smaller clusters are pretty homogenous too, just the last couple are more or less random. Also, remember that the branches can be rotated at the nodes without changing the topology of the dendrogram. At the gene level we can likewise see clear patterns of down-regulated (green) and up-regulated genes (red) emerging, especially within the first two homogenous clusters.
Can we do better? Absolutely! We threw away most of the information by just taking the 200 most variable genes. Some might be just noisy genes, some might vary in response to other factors than the cytogenetic classification. We also have additional information on the patients such as sex, age, or whether the cancer went into remission. We would ideally make use of all of this information if we wanted to build a machine learning algorithm that distinguishes between different types of ALL. In this exercise our main purpose is visualization rather than analysis of the data, so let’s take a more straightforward way to select genes that distinguish the two types of ALL.
### Step 5: A “better” way of selecting genes
In the “ALL” data set each cancer sample is already classified by its cytogenetic properties. This is a luxurious situation because it allows us to tune the selection of genes we want to display based on the cancer type classification. We will use statistical tests to determine the differentially expressed genes and use them for our heatmap.
Note that this approach is fine if our purpose is to generate a visual summary of our data at hand but it is technically cheating. Why? Because you use the cancer type information to select the genes that are used for clustering the cancer types. It is a type of circular reasoning, or “data snooping” as it is called in machine learning jargon. This is why I took a truly unsupervised learning approach in the previous section and pretended that we did not know the class labels beforehand. Data snooping is a big problem in data science because it makes you think your model is better than it actually is. In reality, your model overfits your data at hand and it will likely not generalize well to future data.
Let’s start out by finding the genes that are differentially expressed between “NEG” and “BCR/ABL” samples. We will perform nonspecific filtering on the data first to remove genes that are either not expressed or don’t vary between the samples. This will increase the power of the t-tests later on.
library(genefilter)
# the shortest interval containing half of the data
# is a reasonable estimate of the "peak" of the distribution
sh <- shorth(all_sd)
# we take only genes that have a standard deviation
# greater than "sh"
all_sh <- all[all_sd >= sh, ]
# how many genes do we have left?
dim(all_sh)
The distribution of standard deviations (“all_sd”) has a long tail towards the right (large values). This is typical for gene expression data. The “shorth” function is a simple and unbiased way to get an estimate of the peak of such a distribution to use as a cut-off to exclude genes with low variance. Using this approach, we were able to remove about 1/3 of the genes that are likely not relevant for our analysis. For more details, see the Bioconductor Case Studies.
Next, we will perform row-wise t-tests on all genes that are left. The cytogenetic classification “mol.biol” tells us which sample belongs to which group.
tt <- rowttests(all_sh, all_sh$mol.biol)
This code performs 8812 separate t-tests. If we now took all genes that have a p-value smaller or equal to 0.05, we would expect around 440 genes to be in that category just by chance. This is an unacceptable number of false positives. The most common solution to this problem is to adjust the p-values for multiple testing, so that among the genes we chose our false discovery rate (FDR) is around 5%.
# use the Benjamini-Hochberg method to adjust
tt$p.adj <- p.adjust(tt$p.value, method = "BH")
# subset the pre-filtered "all_sh" for genes
# with an adjusted p-value smaller or equal to 0.05
all_sig <- all_sh[tt$p.adj <= 0.05, ]
# how many genes are we left with?
dim(all_sig)
We end up with 201 genes that are candidates for differential expression between the two types of ALL. As this number is very close to the number of genes we used for our variance-based filtering, we can plug the results directly into the “heatmap.2” function to compare the performance with our previous attempt.
heatmap.2(exprs(all_sig),
# clustering
distfun = dist_cor,
hclust = clus_wd2,
# scaling (genes are in rows)
scale = "row",
# color
col = redblackgreen,
# labels
labRow = "",
ColSideColors = class_labels,
# tweaking
trace = "none",
density.info = "none")
This will result in the following heatmap.
The two types of ALL segregate nicely into two distinct clusters (with a few exceptions). Note that the last four samples of the dark grey “BCR/ABL” bar actually cluster with the “NEG” samples. They just happen to be next to the other dark grey samples in this particular topology of the dendrogram.
When we look at the differentially expressed genes, we see something interesting. The “BCR/ABL” samples appear to have many more genes that are up-regulated (red) compared to the “NEG” samples. Only about 20% of the significantly different genes are down-regulated (green). The Bcr-Abl chimeric kinase is thought to be constitutively active, so one could rationalize such an outcome by suggesting that the kinase inappropriately drives pathways that lead to turning on transcription factors, which in turn up-regulate the expression of certain genes.
It is not surprising that we did better than in our previous attempt. We used the cancer type class labels to inform our choice of genes. The hierarchical clustering gives us back some of what we put in. However, to summarize the data visually, such an approach is ok.
### Step 6: Have mercy with the color-challenged
A surprisingly large percentage of the population, mostly men because the responsible genes are X-linked, suffer from red-green color blindness. If you want to be nice, use a different color palette, such as yellow-blue
yellowblackblue <- colorRampPalette(c("dodgerblue", "black", "gold"))(n = 100)
Plotting the same heatmap with the altered color scheme looks like this. If this is clearer to you than the previous one, you might not only have learned something about heatmaps but also something about yourself today.
### Recap
• Data preparation and feature selection (e.g. genes) is critical for the outcome of any data visualization
• Understand which distance and clustering method works best for your data
• Be mindful about data snooping when it comes to the application of any machine learning algorithm (hierarchical clustering is an unsupervised machine learning algorithm)
### REPRODUCIBILITY
The full R script can be found on Github.
#### HEATMAP SERIES
This post is part 3 of a series on heatmaps:
Part 1: What is a heatmap? |
Volume 363 - 37th International Symposium on Lattice Field Theory (LATTICE2019) - Main session
Two-current correlations and DPDs for the nucleon on the lattice
C. Zimmermann* on behalf of the RQCD Collaboration
*corresponding author
Full text: pdf
Pre-published on: 2020 January 03
Published on:
Abstract
We calculate correlation functions of two local operators within the nucleon carrying momentum. We resolve their dependence on the spatial distance of the currents. This is carried out for all Wick contractions, taking into account several operator insertion types. The resulting four-point functions can be related to parton distribution functions as well as to Mellin moments of double parton distributions. For the latter, we analyze their quark spin and flavor dependency. In this first study, we employ an $N_F = 2 + 1$ CLS ensemble on a $96 \times 32^3$ lattice with lattice spacing $a = 0.0856\ \mathrm{fm}$ and the pseudoscalar masses $m_\pi = 355\ \mathrm{MeV}$ and $m_K = 441\ \mathrm{MeV}$.
Open Access |
# Raspberry Pi 4 bluetooth scale
I’m getting fat. Well, fatter. My twin brother and perfect control experiment recently lost a lot of weight dieting, which got me thinking about going on one myself. As a data scientist / machine learning engineer, I saw this as a perfect opportunity to get some good data.
I’ve spent three months working on a Bluetooth weight collection system and zero months on a diet, so without further ado, let me walk you through starting up your very own Bluetooth bathroom scale.
### Part 1: Raspberry Pi 4 bluetooth scale
For more in this series, check out Part 2, Part 3, and Part 4.
First, in order to collect and store weight information, you need something that will collect the information and something that will store the information (lol). Let’s focus first on the data collection component.
## Bluetooth scales
Why Bluetooth? If you want to store weight data, you have to get it, and unless you wire your scale to a data storage system, Bluetooth is the way to go. You can save yourself a lot of time (and honestly, money) if you just buy yourself a bathroom scale with Bluetooth capability and that can record your weight. But I wanted to really get into that weight data, and I couldn’t find one that let me have the end-to-end control that I wanted.
While there are some Bluetooth scales that have been hacked to be open source,1 I couldn’t find any of them being sold currently online. What I did find were many articles about hacking Wii Fit Balance Boards into Bluetooth scales.
### Wii Fit Boards
The Nintendo Wii syncs its controllers to the console via Bluetooth, and the Wii Fit board is a controller that is basically just a scale. The Wii controllers were so interesting (cheap, mass-produced motion/gesture devices) that hackers quickly reverse-engineered how they worked and have compiled super detailed information on how to use them with homebrewed setups.
Let me tell you, there are a lot of tutorials out there for turning a Wii Fit board into a scale. Here’s a list of some of what I found:
Great, right? (Heh heh, NO, you cretin, you worm, your pain is just beginning.) I bought one for $22 off eBay. So let’s talk about the data storage system.
## Raspberry Pi
We need something to communicate with the Wii Fit Board and store / process the weight data, and the Raspberry Pi is perfect for that. The Raspberry Pi is a line of very cheap, very small computer boards that are designed for people who want to learn how to code and do cool hardware projects like this. I’ve always wanted to get one, they’re perfectly suited for this project, and the new model (the Raspberry Pi 4) just came out and everyone was raving about it. So I bought one.2
So I had my data collection and storage systems, a bunch of great tutorials, and I was ready to go! Right? Sadly, no.
## Problems
### Bluetooth support for Python sucks
Python is great—it’s used so often for so many applications, it has robust libraries and tutorials for just about everything! Sadly, “everything” doesn’t include Bluetooth. If you want to work with Bluetooth, learn C.
This is summing up a huge amount of digging I had to do, so pardon my inaccuracies, but among the many half-lived, stunted Python Bluetooth libraries, PyBluez is king. It’s built on the Bluez Bluetooth stack for Linux, and is by far the most-used Python Bluetooth library.
It’s also no longer under active development and basically only works for Python 2.7. It’s pretty out of date, and almost every relevant Python Bluetooth tutorial references required code that no longer exists or is seriously broken. I wanted to make a shiny, Python 3 module that would work with Bluetooth and I was utterly defeated.
I tried for so long, reverse engineering C code into Python, trying different packages, cobbling different tutorials together, and nothing worked. As we shall see later, this was a waste on multiple levels.
### Basically only one tutorial really works
Remember those tutorials I listed earlier? They’re all outdated or janky as hell. When you turn on the Wii Fit Board, unless it’s “paired” to a device you need to press a red “sync” button in the battery case to make it discoverable so you can connect to it. Every time. What we want is a system that you can turn on and connect automatically.
Greg Nichols’ way around this is to tape a pencil to the bottom of the board to press the button (grotty), Stavros Korokithakis straight-up admits he can’t solve this issue, and most of the other tutorials just don’t work anymore anyway.
The only one that works with the Raspberry Pi 4 with Bluez 5 (the most recent version) is Marcel Bieberbach’s wiiweigh (the last one I found). He includes a step-by-step guide to install all the necessary components—and more importantly—to pair the Wii Fit Board with the Raspberry Pi.
## Solution
This is the holy grail of Wii Fit projects: once paired, turning the Balance Board on makes it automatically connect to the Pi, and disconnecting it (via the Pi) automatically turns it off. To get his tutorial to work, I had to change a few steps, however.
For some reason, xwiimote and the xwiimote-bindings weren’t building properly, so following the directions from this stackoverflow question, I changed the ./autogen.sh lines for both to ./autogen.sh --prefix=/usr and it worked!
I’ll be writing more about what I’ve done with this setup—what I’ve talked about here is only the process to get things started. It’s looking very cool so far, so it should be good.
### Post-script: Bluetooth isn’t the issue
After I got Marcel’s code working, I still wanted to write my own Python 3 version of his code and simplify it. I’ll keep things short here, but it turns out worrying about PyBluez was a red herring. Once you pair the Wii Fit Board with the Pi, it essentially acts as an “HID” device and behaves totally differently. Stavros actually mentions this in his blog post, but I didn’t understand at the time.
After incalculable pain3, I realized that the package I should be focusing on is Python’s evdev library (“event devices”, what are those?). At this point, I still don’t understand 90% of what it is, but I was actually able to communicate with the board. For right now, I’m using a modified version of Marcel’s Python 2 package to communicate with the board and Python 3 to handle the storage and processing (more on that in the future), but hopefully I’ll eventually be able to get it all in Python 3.
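If you want a taste of what the evdev route looks like, here is a minimal Python 3 sketch. The device name and the four ABS axis codes for the corner load cells are assumptions on my part (they match how the kernel Wii remote driver is commonly reported to expose the board), so check them against your own device before trusting the numbers:
from evdev import InputDevice, ecodes, list_devices
# assumption: after pairing, the board shows up with "Balance Board" in its name
devices = [InputDevice(path) for path in list_devices()]
board = next(dev for dev in devices if "Balance Board" in dev.name)
# assumption: the four corner sensors report on these axes, in 1/100 kg
corners = (ecodes.ABS_HAT0X, ecodes.ABS_HAT0Y,
           ecodes.ABS_HAT1X, ecodes.ABS_HAT1Y)
readings = dict.fromkeys(corners, 0)
for event in board.read_loop():  # blocks, yielding one input event at a time
    if event.type == ecodes.EV_ABS and event.code in readings:
        readings[event.code] = event.value
        print(f"{sum(readings.values()) / 100.0:6.1f} kg")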
## Source Code:
The source code for getting the basics set up is here, but it’s almost exactly what Marcel did, but tweaked ever so slightly. I’m making some siiiick Flask stuff to interface with the scale, but I’m not ready to share that with people.
### Footnotes:
1. For example, those supported by the openScale project.
2. My advice is: go with a kit. I had to order the power supply, SD card, and HDMI connectors all separately, and it took forever to come in.
3. Yeah I’m a drama queen, but this had me at my wit’s end. I was going through the source-code for random C packages looking for things I didn’t understand, stumbling through weird Unix stuff, smashing my head against the wall with architectures I couldn’t comprehend.
#### Tags:
raspberry pi, hardware, python, scale, health, DIY, bluetooth, hack, wii fit, wii,
Buy me a beer? Litecoin address: LaiZUuF4RY3PkC8VMFLu3YKvXob7ZGZ5o3 |
# How to approach dual skills in an rpg game
I am developing an RPG-sort of game. My question is how a dual-tech system would work (like the SNES game Chrono Trigger had). I already have coded how to use a single skill.
Probably not the best way to do it, but here is how I have it working right now:
I have a Skill class and in my level, I define a Skill currentSkill. When a skill is activated, the level starts rendering everything at half-speed except for the skill caster and the skill target.
Now what i am not really sure how to do is, how to structure my dual-skill class, and how to make this one's animation (which would include animating the 2 skill casters + skill targets) affect the initial skill cast.
The dual skill should be activated after a single skill is activated, e.g. Character A casts skill a, which can be combined with character B's skill b to produce skill c.
If that was somehow clear to you (think I complicated it) could you please provide some pseudo code / idea / example on how to achieve it?
EDIT:
I realize after reading the comments (and re-reading my question) that I was not clear on what I want to achieve, so I'll try to re-word myself (non-native English speaker, so sorry for the lack of grammar)
Some considerations :
• It's not a multiplayer game so I will refer to the characters (or "players") you can use as units.
• It's some kind of RPG battle game (it's pretty much a clone of the mobile game Battlehearts but with dual-skills added; if you have seen that, you can get a pretty good idea of what game I'm working on).
basic idea:
1. Unit A casts skill a.
2. Unit B checks that he can combine its skill b with skill a.
3. Unit B casts skill b.
4. Unit A skill gets canceled
5. Unit B skill gets canceled
6. Unit A and B cast skill c.
here's some example:
• Unit A is a swordsman (with a single-target critical-hit skill called Bash)
• Unit B is mage (with a skill called air strike).
• bash + air strike = air slash ( skill that pierces and damages all enemies in a line, for example)
1. swordsman uses bash
2. mage "realizes" he can combine it with air strike
3. on mage, skill air slash gets enabled.
4. when used, air slash consumes bash's and air strike's cooldowns, but executes a different skill (I'm talking about a different animation here, not sync'ing bash and air strike animations)
question is, how would you structure the code to achieve the above behavior? what kind of datastructure would you use to store information about dual-skills?
• What exactly do you want to do? – Gustavo Maciel Mar 8 '12 at 2:09
• Are you saying you have a game where players can cast a spell or create some effect and you want to expand that so that some spells/effects get created when multiple players combine their spells/effects? – George Duckett Mar 8 '12 at 13:50
• -1: I read your question, and I start to think that I almost understand what you are asking, and then I take a step back and am lost. I think it is a terminology issue. In any case, I don't think we'll be able to help you unless you clarify the question. Sorry. – PlayDeezGames Mar 8 '12 at 14:55
I think I get the idea. You want to combine 2 skills into 1 combined skill, or do you mean a combined effect based off of the two skills? Like if hit with two fire spells in a row, you are prone to more fire damage and get a fire debuff? If not, it's just as another said, your wording.
A couple things. I am not sure how you can activate a dual-skill off of a single skill if the dual skill requires Character A's skill choice and Character B's skill choice to produce skill C.
I take it it is something turn-based like Chrono-Trigger? You select skills for the characters and then they do them in order of selection based on some sort of initiative timer. If I were doing this, I would possibly have some sort of checker for a bunch of spell flags that can produce the combined effect. You wouldn't need to check this until after the second class's ability is chosen, and then the third. If this is real-time, it's going to be more difficult as you need to set an amount of time that you allot to let the player select multiple skills to cause an effect, if this is something you even want to worry about.
Guild Wars 2 has something like this. Thieves and Necromancers can put down a Ground Target AOE that lets spells, arrows, rocks, etc., absorb the effect. So, if a normal arrow goes through the effect, it adds poison damage.
I would do it with, off the top of my head, enum states possibly. Though this probably isn't the best way to do it.
• thanks, reading your answer I realized i never thought about this: If this is real-time, it's going to be more difficult as you need to set an amount of time that you allot to let the player select multiple skills – Xavier Guzman Mar 8 '12 at 17:37
• Xavier Guzman - This might be what you're looking for. rpgrevolution.com/forums/index.php?showtopic=34544 – Joel Brockman Mar 8 '12 at 18:17
Try an event-based design (a concrete sketch follows the numbered steps):
1) Define a class SkillActivatedEvent that stores a Skill object, a Unit object, and some other properties like the area in which the skill is active
2) Raise a SkillActivatedEvent whenever a skill is used
3a) Let skills that are already in effect react to SkillActivatedEvents by changing the behavior of the new skill, by changing their own behavior, or by unregistering both itself and the new skill in favor of a new skill. If you take this route, information about dual skills is stored in or accessed by your skill objects
3b) Alternatively, let your game controller react to SkillActivatedEvents by going through all the active skills and looking for any combos to apply. In this case, information about dual skills is stored in or accessed by your game controller object.
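To make steps 1) through 3b) concrete, here is a rough Python sketch; all names are invented for illustration rather than taken from any particular engine:
from dataclasses import dataclass

@dataclass(frozen=True)
class Skill:
    name: str

@dataclass
class SkillActivatedEvent:
    skill: Skill
    caster: str           # unit id; a real game would store a Unit object
    area: tuple = (0, 0)  # where the skill is active

class GameController:
    # variant 3b: the game controller reacts to SkillActivatedEvents
    def __init__(self, combos):
        self.combos = combos  # pair of skills -> combo skill (see below)
        self.pending = []     # skills cast but not yet resolved

    def on_skill_activated(self, event):
        for earlier in list(self.pending):
            key = frozenset({earlier.skill.name, event.skill.name})
            combo = self.combos.get(key)
            if combo is not None:
                self.pending.remove(earlier)  # cancel skill a...
                # ...and never queue skill b; cast the combo instead
                print(f"{earlier.caster} + {event.caster} cast {combo.name}")
                return
        self.pending.append(event)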
As for data structures:
• You can hard-code the combinations in the form of if-statements, either in your Skill or game controller class.
• You can also use a table-like association between a tuple of skills and a combo skill that either your skill objects or game controller object accesses. "(bash, air strike) => air slash" could be stored in a simple database table, an XML file, or whatever storage system you want to use; see the dictionary sketch below.
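Continuing the sketch above, the table-like association can be a plain dictionary keyed by an unordered pair of skill names; loading the same pairs from an XML file or a database table instead is a straightforward swap:
combos = {
    frozenset({"bash", "air strike"}): Skill("air slash"),
}

controller = GameController(combos)
controller.on_skill_activated(SkillActivatedEvent(Skill("bash"), "swordsman"))
controller.on_skill_activated(SkillActivatedEvent(Skill("air strike"), "mage"))
# prints: swordsman + mage cast air slash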
Factors that make hard-coding combos more viable include:
• You only have a small number of combos
• Interactions between combo skills and their "ingredient" skills is unlikely to change
• Combo skills have parameters that depend on the parameters of their ingredient skills, and these dependencies are unique in most cases
(The negation of these factors makes the tuple-combo association data structures more viable.)
# A problem about determinant and matrix
Suppose $$a_{0},a_{1},a_{2}\in\mathbb{Q}$$, such that the following determinant is zero, i.e.
$$\left|\begin{array}{ccc} a_{0} & a_{1} & a_{2} \\ a_{2} & a_{0}+a_{1} & a_{1}+a_{2} \\ a_{1} & a_{2} & a_{0}+a_{1} \end{array}\right| = 0$$
Problem. Show that $$a_{0}=a_{1}=a_{2}=0.$$
I think it's equivalent to show that the rank of the matrix is $$0$$, and it's easy to show the rank cannot be $$1$$. But I have no idea how to show that the case of rank 2 is impossible. So, is there any better idea?
• This is false. Try $a_1= a_0+a_2=0$. – abx Feb 28 at 5:18
• @abx Substituting $a_1=0$ and $a_2=-a_0$ gives determinant $-a_0^3$. So that isn't a counterexample. – Brendan McKay Feb 28 at 5:22
• @Brendan McKay: I find $a_0^3-a_0a_2^2$. – abx Feb 28 at 5:29
• @abx With two substitutions only one variable should be left. I'm using Maple. – Brendan McKay Feb 28 at 5:32
• There is no counterexample with any of the variables equal to 0. For example if $a_0=0$ and $a_1=ca_2$ then the determinant is $a_2^3(c^3-c^2+1)$. The cubic is irreducible so only $a_2=0$ makes this 0. There are 6 cases like this. – Brendan McKay Feb 28 at 5:36
If there is a rational nonzero solution, there is an integer nonzero solution by multiplying up. At least one of the integers can be assumed odd by dividing out a common power of two.
The determinant $$a_0^3+2a_0^2a_1+a_0a_1^2-3a_0a_1a_2-a_0a_2^2+a_1^3-a_1^2a_2+a_2^3$$ is odd unless $$a_0,a_1,a_2$$ are all even.
This contradiction shows that $$a_0=a_1=a_2=0$$ is the only rational solution.
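One quick way to check the parity claim: modulo 2, $$x^2 \equiv x$$ and $$x^3 \equiv x$$, so
$$\det \equiv a_0+a_1+a_2+a_0a_1+a_0a_2+a_1a_2+a_0a_1a_2 \equiv 1+(1+a_0)(1+a_1)(1+a_2) \pmod 2,$$
which is $$1$$ whenever at least one of the $$a_i$$ is odd.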
Incidentally, the result does not hold modulo an arbitrary prime. For example $$a_0=1, a_1=2, a_2=3$$ works mod 5.
• Alternatively, evaluate in Z/2. – Jeff Strom Feb 28 at 6:21
You can check that the determinant is the product of three terms $$a_2+ma_1+a_0/m$$ as $$m$$ runs over the three roots of the cubic $$m^3-m-1$$, of which one is real and the other two are complex conjugates. This does not immediately answer the question (which Brendan McKay has done anyway) but it may be useful context. |
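To spell the factorization out: since the constant term of $$m^3-m-1$$ forces $$m_1m_2m_3=1$$, multiplying the $$k$$-th factor by $$m_k$$ gives
$$\prod_{k=1}^{3}\left(a_2+m_ka_1+\frac{a_0}{m_k}\right)=\prod_{k=1}^{3}\left(a_0+a_2m_k+a_1m_k^2\right),$$
and expanding the right-hand side with the elementary symmetric functions of the roots ($$e_1=0$$, $$e_2=-1$$, $$e_3=1$$) recovers the determinant above.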
# Is there a way to add footnotes to footnotes? [duplicate]
Well, the title basically explains my problem: Adding footnotes inside footnotes doesn't seem to work... Can you help me?
• What about with bigfoot? See, e.g., this answer. – jon Feb 25 '17 at 3:37
Looks weird to me ...
\documentclass{article}
\usepackage[a6paper]{geometry}
\begin{document}
A footnote\footnote{gslkdfg djfgh ösdkjfgh öskdj\footnote{foo} g}
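% the inner \footnote above sets only the mark; \footnotetext below supplies its text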
\footnotetext{foo}
foo\footnote{bar}
\end{document}
• So does this simply work for you?! – Anton Ballmaier Feb 24 '17 at 21:13
• what did you get with my example?? – user2478 Feb 24 '17 at 21:16
• And what exactly do you do with the \footnotetext command? – Anton Ballmaier Feb 24 '17 at 21:16
• \footnote in a \footnote writes only the \footnotemark but not the text. However, it can simply be seen if you comment it out! – user2478 Feb 24 '17 at 21:17
• With your example I get the same... But I didn't know that command ^^ – Anton Ballmaier Feb 24 '17 at 21:18 |
# pyramid_celery: Celery integration with Pyramid
## Getting Started
Include pyramid_celery either by setting your includes in your .ini, or by calling config.include('pyramid_celery'):
pyramid.includes = pyramid_celery
Then you just need to tell pyramid_celery what ini file your [celery] section is in:
config.configure_celery('development.ini')
Then you are free to use Celery, for example class-based:

from pyramid_celery import celery_app as app

class AddTask(app.Task):  # class name is illustrative
    def run(self, x, y):
        print(x + y)
or decorator-based:

from pyramid_celery import celery_app as app

@app.task
def add(x, y):  # function name is illustrative
    print(x + y)
To get Pyramid settings you may access them via app.conf['PYRAMID_REGISTRY'].
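For example, a minimal sketch (the task body and the 'sqlalchemy.url' key are illustrative assumptions):

from pyramid_celery import celery_app as app

@app.task
def print_db_url():
    registry = app.conf['PYRAMID_REGISTRY']
    # registry.settings is the usual Pyramid settings dictionary
    print(registry.settings.get('sqlalchemy.url'))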
## Configuration
By default pyramid_celery assumes you want to configure celery via ini settings. You can do this by calling config.configure_celery('development.ini'), but if you are already in the main of your application and want to use the ini used to configure the app, you can do the following:
config.configure_celery(global_config['__file__'])
If you want to use the standard celeryconfig Python file, you can set USE_CELERYCONFIG = True like this:
[celery]
USE_CELERYCONFIG = True
An example ini configuration looks like this (the celerybeat section name is illustrative):

[celery]
BROKER_URL = redis://localhost:1337/0

[celerybeat:task1]
type = crontab
schedule = {"minute": 0}
To use celerybeat (periodic tasks) you need to declare one celerybeat config section per task. The options are:
• type - The type of scheduling your configuration uses, options are crontab, timedelta, and integer.
• schedule - The actual schedule for your type of configuration.
• args - Additional positional arguments.
• kwargs - Additional keyword arguments.
Example configuration for this (section names are illustrative):

[celerybeat:task1]
type = crontab
schedule = {"minute": 0}

[celerybeat:task2]
type = timedelta
schedule = {"seconds": 30}
args = [16, 16]

[celerybeat:task3]
type = crontab
schedule = {"hour": 0, "minute": 0}
kwargs = {"boom": "shaka"}

[celerybeat:task4]
type = integer
schedule = 30
### Routing
If you would like to route a task to a specific queue you can define a route per task by declaring their queue and/or routing_key in a celeryroute section.
An example configuration for this (the task path and queue name are illustrative; routing_key = turtle comes from the original docs):

[celeryroute:myapp.tasks.slow_task]
queue = slow_tasks
routing_key = turtle
## Running the worker
To run the worker we just use the standard celery command with an additional argument:
celery worker -A pyramid_celery.celery_app --ini development.ini
If you've defined variables in your .ini like %(database_username)s, you can use the --ini-var argument, which is a comma-separated list of key=value pairs:
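For example (the variable names and values here are illustrative):

celery worker -A pyramid_celery.celery_app --ini development.ini --ini-var=database_username=dbuser,database_password=secret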
The values in --ini-var cannot contain spaces; spaces will break celery's parser.
The reason it is a CSV instead of passing --ini-var multiple times is a bug in celery itself. When they fix the bug we will re-work the API. The ticket is here:
https://github.com/celery/celery/pull/2435
If you use the celerybeat scheduler, you need to run the worker with the -B flag to run beat and the worker at the same time, or you can launch beat separately like this:
celery beat -A pyramid_celery.celery_app --ini development.ini
## Logging
If you use the .ini configuration (i.e., don't use celeryconfig.py), then the logging configuration will be loaded from the .ini and will not use the default celery loggers.
You most likely want to add a logging section to your ini for celery as well:
[logger_celery]
level = INFO
handlers =
qualname = celery
and then update your [loggers] section to include it.
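For example, assuming your existing configuration already defines a root logger:

[loggers]
keys = root, celery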
If you want to use the default celery loggers, you can set CELERYD_HIJACK_ROOT_LOGGER=True in the [celery] section of your .ini.
## Demo
To see it all in action check out examples/long_running_with_tm, run redis-server and then do:
$ python setup.py develop
$ populate_long_running_with_tm development.ini
$ pserve ./development.ini
$ celery worker -A pyramid_celery.celery_app --ini development.ini
## Installation
### How to start
Once you have accessed the URL where the installer is located, a "setup.exe" file will be downloaded, and you must execute it. If you don't have a URL to start from, you can visit valorganado.com.
After that, the operating system usually warns about the installation of external software on your computer. The installer is signed with a “Code Signing Certificate” that ensures the legitimate origin of its manufacturer, in this case Valor Ganado S.A.S.
Downloading the installation components is very fast (usually a few seconds). During this process, the installer shows the web domain the software is downloaded from: valorganado.com, preceded by HTTPS. Hypertext Transfer Protocol Secure (HTTPS) is an application protocol based on HTTP, intended for the secure transfer of data; it is the secure version of HTTP. HTTPS uses SSL/TLS encryption to create an encrypted channel, so the information cannot be read or altered by an attacker who manages to intercept the connection: all an attacker gets is a stream of encrypted data that is practically impossible to decipher.
Once the source of the files has been verified, the installer will offer the option to start the installation. If the origin is Valor Ganado S.A.S., you can proceed.
The installation process is also very fast (a few seconds); do not interrupt it.
Once the software has been installed, a form announcing the success of the procedure will be displayed; after that you can close the form and start using the software. The first time you load Ms-Project, the AddIn will also be loaded and you will be asked for the subscription key (code).
## Software use
Once installed, the AddIn is automatically loaded when Ms-Project is started. If you are connected to the Internet, the first action taken is to check whether an updated version exists. If one exists, the update is installed automatically; it takes very little time (usually seconds). A welcome form marks the moment the AddIn starts running.
### The subscription or license key
If you have never entered the subscription key, or if the subscription key has expired, a form requesting the key will be shown.
Normally the key is a string of 32 characters (8 + 4 + 4 + 4 + 12); copy it in full, including symbols such as hyphens. If the subscription key is not correct, you can change it. You must save when completing the operation.
### The functionalities in the menu or the ribbon
If the installation of the AddIn has been completed successfully, you will see a menu section called EVM. It is a non-native menu element: it does not belong to the base software, and the elements on its ribbon are functionalities that are part of the AddIn.
There are three groups of controls. On the Configuration Group you can find AddIn information (about) and a configuration form. In the Task Group you will find the Physical Percentage Complete buttons (in green) and the Actual Cost buttons (in red). In the Project Group you will find the cut-off date (status date) buttons, the button to display the EVM Chart and the button to display the Verification Form.
At the top of this form you will find the version of the AddIn. From left to right the numbers correspond to: major version, minor version, build and revision. These numbers help trace possible problems: the AddIn updates automatically, and they identify the exact build in use on your computer at the time of a problem.
This form also shows the name of the subscription holder (possibly you). The name is stored both on the site (valorganado.com) and on your computer.
A recognition of the developer (author) is shown at the bottom of the form and the rights holder organization is mentioned.
#### Verification
The verification form allows you to verify that the file options are compatible with the earned value management method. From left to right: the first button (inspect) makes no modifications and only reports recommendations; the central button (ask before changing) makes recommendations and asks whether you want to proceed with each change; the button on the right (try to improve) makes changes, trying to improve the options.

The inspect button takes a tour of the configuration options associated with the calculation of earned value. It makes no changes; it only notes the recommendations and leaves them in the (initially blank) text area for later review.

The middle button (ask before changing) takes the same tour, but for each recommendation it displays a question. If you wish to proceed with a change, respond accordingly; only accepted modifications are made, and they are recorded in the text area for later review.

The button on the right (try to improve) takes the same tour and makes changes without asking for confirmation. The modifications are simply made, and a record is left in the text area for later review.
The AddIn Configuration Form has a section of reserved fields. Initially, the only mandatory reserved field is Text30. The base software has a large number of customizable fields; the AddIn needs to reserve just one of them (minimally), ideally one of the least used, to save its own information. In this way your work with the AddIn is saved in the file and travels with it, so another user of the same AddIn can retrieve the project configuration (share it). It is important not to change the value of this field manually, because doing so can cause unexpected behavior.
Earned Schedule fields are optional. The AddIn can calculate the Earned Schedule for each task and put it in the Customizable Field you choose as the project progresses. If you do not choose any, the Earned Schedule calculations will not be made.
The Progress% field is optional and useful if you want a different aggregation of progress. The aggregation (rollup) of the native Percentage Complete field is weighted by duration, which is not recognized as a best practice. Physical Percentage Complete is better weighted by cost, in line with best practices, but exceptions have been observed when summary tasks aggregate certain combinations of methods and progress, so depending on the project it can be better to compute an independent, alternative aggregation.
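As an illustration of cost-weighted aggregation, here is a generic sketch of the idea (not the AddIn's actual code):

def rollup_physical_pct(children):
    """Cost-weighted rollup: children is a list of
    (physical_pct_complete, baseline_cost) pairs."""
    total_cost = sum(cost for _, cost in children)
    if total_cost == 0:
        return 0.0
    earned = sum(pct / 100.0 * cost for pct, cost in children)
    return 100.0 * earned / total_cost

# Two subtasks: 50% of a 1000-unit task and 100% of a 500-unit task.
print(rollup_physical_pct([(50, 1000), (100, 500)]))  # 66.66...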
The Automatic option for dependence between the Percentage Complete and Physical Percentage Complete fields enables a very sophisticated function. When the option is set, if you decide to report the progress of a task with the Percentage Complete, the AddIn will calculate and record (additionally) the Physical Percentage Complete as:
"Physical Percentage Complete"={"c"/"Baseline Cost"}*100
If instead, you decide to report the progress of a task with the Physical Percentage Complete and the option is set, the AddIn will calculate the Percentage Complete as:
"Percentage Complete"={"t"/"Baseline Time"}*100
Once the task-level calculation is done, the AddIn continues upstream (rollup) through the summary tasks until it reaches the project summary task, that is, the overall project summary, correcting the progress task by task so that it is consistent throughout the project. In this way, you can choose the method you prefer for each task, or the one that best suits the nature of the job; the AddIn will do the rest.
#### The status date
The status date buttons offer features for drawing and then moving the status date according to commonly used periods. You can always do this through native functionality, but if the periods you use coincide with those implemented (day, week, two weeks and month), these buttons are easier to use.
##### Establishment of the Status Date
This button sets the status date at the end of the first day of the project and draws it as a solid red line, a fairly widespread practice among those who use the Earned Value Method.
##### Daily Step
This button modifies the status date to place it at the end of the day following the current status date. If the current status date is the end of the day, pressing the button will take you to the end of the next day.
##### Weekly Step
This button moves the status date to the end of the week that contains the day following the current status date. If the current status date is already the end of a week, pressing the button takes you to the end of the following week.
##### Biweekly Step
This button moves the status date to the end of the two-week period that follows the day after the current status date. If the current status date is the end of a week, pressing the button takes you to the end of the day two weeks later.
##### Monthly Step
This button moves the status date to the end of the month that contains the day following the current status date. If the current status date is already the end of a month, pressing the button takes you to the end of the following month.
#### The Earned Value Chart
The EVM Chart solves several problems that arise in the native reports. Internally it is implemented as a web view that uses a web service, so you must be connected to the Internet. The cost scale automatically adjusts to cover the maximum of the baseline cost and the estimated final cost of the project. The time scale automatically adjusts to cover the later of the baseline finish date and the estimated finish date of the project. The estimates take trend into account, that is, they assume that past performance continues into the future.
##### Gantt Chart
If you are on the Gantt view when you show the graph, you can change the state of the tasks while viewing the change in the EVM curves. It is advised to choose the correct table for what you are doing.
##### Chart over Kanban
The Chart can be combined with the Kanban view only if your version of Project Professional allows it; apparently the manufacturer does not currently offer that functionality in all product versions. If you are on the Kanban view when you show the graph, you can move tasks between Kanban columns while viewing the change in the EVM curves. This is something very particular to the AddIn, since the base software natively changes Percentage Complete when you move tasks, whereas the AddIn uses Physical Percentage Complete.
#### Status Record
Natively, the base software does not provide buttons to register Physical Percentage Complete and Actual Cost. The AddIn provides these buttons to facilitate the use of the Earned Value Management method; recording Physical Percentage Complete matches best practices.
##### Physical progress
These are the black and green buttons. Pressing one is equivalent to recording the percentage shown on the button in the Physical Percentage Complete column (field).
##### Actual cost
These are the black and red buttons. Pressing one is equivalent to recording the cost (as the percentage of baseline shown on the button) in the Actual Cost column. A baseline cost must have been registered, and the project configuration must allow manual actual-cost recording (by default, the manufacturer disables that option). If in doubt, run the Verification Form's inspect option.
## Uninstall
Go to the Windows application settings and features.
Search for ProjectAddInEVM and start the process.
After you confirm the uninstallation, it will proceed and complete successfully.
The 27 best guitar chord progressions, complete with charts.

Chord progressions are an essential building block of contemporary western music, establishing the basic framework of a song. Similar to telling a story, a progression shapes a song's mood and direction, whilst throwing some curveballs into the equation. Melodies are also important, but without a pleasing chord progression to offer support, it's likely the song will lack substance. Choosing the chords you'll use and arranging them into satisfying progressions is one of the most important jobs when writing a song. And because the same common progressions (and even many bass lines) are used over and over again, they cannot be copyrighted; it is the melody, the note and style order, that can't be copied by another musician. This article goes through the most popular chord progressions to know in music.

I, IV and V are the basic building blocks for chord progressions in western music: they are the simplest versions of the main chord categories in tonal music, namely tonic, pre-dominant and dominant. If you understand how the most common progressions work, you'll have a head start for creating your own, and you'll know how to play a lot of songs. The most basic chord is a triad, or three-tone chord, built by taking every other tone in a scale. To begin, pick two chords at random; they will form the basis of your chord progression.

The I–V–vi–IV progression is a common chord progression popular across several genres of music. In C major this is C–G–Am–F. It is called "the most popular progression" for a reason: it has been used in just about every genre imaginable, from post-punk to country. One of the most popular songs built on it is "Every Breath You Take" by the Police, and the earliest known example in a major hit is Scott McKenzie's "San Francisco (Be Sure to Wear Flowers in Your Hair)", written by John Phillips. It sounds so satisfying because each new chord in the pattern feels like a fresh emotional statement. The quick summary is that these four chords are opposites of each other: the V chord is the opposite of I, the vi is the opposite of V, and the IV is the opposite of vi.

A common ordering of the progression, vi–IV–I–V, was dubbed the "sensitive female chord progression" by Boston Globe columnist Marc Hirsh, who first noticed it in "One of Us" by Joan Osborne and named it because it was used by many performers of the Lilith Fair in the late 1990s. In C major this would be Am–F–C–G, which basically shifts the key centre to A minor. The vi–IV–I–V ordering has also been associated with the heroic in many popular Hollywood movies and movie trailers, especially in films released since 2000. Dan Bennett claims the progression is also called the "pop-punk progression" because of its frequent use in pop punk. The progression is also used in the form IV–I–V–vi, as in "Umbrella" by Rihanna and "Down" by Jay Sean, and numerous bro-country songs followed it, as demonstrated by Greg Todd's mash-up of several bro-country songs in an early 2015 video.

In 2009 the musical group Axis of Awesome demonstrated the ubiquity of what they call the "Four Chord Song". It was written in E major (thus using the chords E major, B major, C# minor and A major) and was subsequently published on YouTube; as of May 2020, the two most popular versions had been viewed over 100 million times combined. The British progressive rock band Porcupine Tree made a song called "Four Chords That Made A Million" that appears to be a satire of the broad use of this progression in contemporary commercial music.

The '50s progression uses the same chords in a different order (I–vi–IV–V), no matter the starting point. It is associated with the classic love songs and doo-wop tunes of the 50s, but it shows up all over music history, tugging at the heartstrings in vintage ballads like the Righteous Brothers' "Unchained Melody". If you can imagine reading a romantic poem over the top of your progression, it's probably pretty good. Another chord pattern comes from one of the most enduring progressions in classical music: it has a dignified yet affecting sound that's popular for formal occasions like weddings and commencements, and its secret is how it visits so many different chords in the key before moving gracefully back to the tonic.

The ii–V–I progression is the stalwart of the jazz idiom: about 80-90% of all jazz and American Songbook classics are comprised mostly, if not solely, of ii–V–I progressions. Even if you're not into jazz, these timeless harmonic patterns are important to know, because many modern genres, whether R&B, neo-soul or hip-hop, have a strong influence from jazz harmony. The 12 bar blues is another essential chord sequence that comes from a distinct style: it is the basic sound of blues music, but it appears in many other contexts too, and depending on how you use it, it can even sound more "happy" than bluesy.

Finally, bVII is less a progression than a harmonic technique often found in rock and pop songs. It is a chord borrowed from the natural minor scale, but it feels familiar because it's only a whole step away from the tonic, and it is closely tied to the Mixolydian mode, a very common sound in rock music. Sometimes it doesn't take much to create enough harmonic action to propel a song: one easy way to keep a song centered but still moving forward is to simplify the harmony, moving from the tonic to bVII and back again, as in Belle and Sebastian's cheerful tune "Get Me Away from Here, I'm Dying". "Cinnamon Girl" (1969) by Neil Young uses I–v–bVII–IV (all in Mixolydian). The similar arch-formed progression I–IV–bVII–IV(–I) appears in the chorus of "And She Was" (1985) by Talking Heads, in "Let's Go Crazy" (1984) by Prince, in "Like a Rock" (1986) by Bob Seger and, over a minor tonic (i–V–bVII–IV), in "Steady, As She Goes" (2006) by The Raconteurs. "Lay Lady Lay" uses the similar progression I–iii–bVII–ii, replacing the second and fourth chords with their relative minors while preserving the same descent. The use of the flattened seventh may lend the progression a bluesy feel, and the whole-tone descent may be reminiscent of the ninth and tenth chords of the twelve bar blues (V–IV).
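Since triads are built by stacking every other scale tone, the popular progressions above can be generated mechanically. Here is a minimal Python sketch (an addition, not from the article; it ignores enharmonic spelling):

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the major scale

def major_scale(root):
    start = NOTES.index(root)
    return [NOTES[(start + step) % 12] for step in MAJOR_STEPS]

def triad(scale, degree):
    # stack every other scale tone: root, third, fifth
    return [scale[(degree - 1 + offset) % 7] for offset in (0, 2, 4)]

def progression(root, degrees=(1, 5, 6, 4)):  # I-V-vi-IV by default
    scale = major_scale(root)
    return [triad(scale, d) for d in degrees]

print(progression("C"))  # C-G-Am-F as note triples
print(progression("E"))  # E-B-C#m-A, the Axis of Awesome key

Changing the degrees tuple to (1, 6, 4, 5) yields the '50s progression, and (2, 5, 1) gives the jazz ii-V-I.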
Get it in your inbox and news, weekly in your ears s emotional hit “ Graduation,! Popular progression, but it has been an invaluable tool over the top of progression. And arranging them into satisfying progressions is one the most popular chord progression to make progression! By bass Lessons, or simple practice sessions well as other more interesting variants also have countless songs this! Way forward is how it can work in many other contexts the sixth in the late 1990s the platform! And pop songs action to propel a song of music it appears many! Story, chord progressions and some variables ( all in Mixolydian ) s r & b songs?. Least one variable ) again – because they work to telling a story, chord progressions in.! – because they work was discovered by Stephen Dickinson of worship chord progressions are the basic progressions as! Comprised mostly, if not solely, of ii-V-I progressions Time after Time '' would be one example the... 'Ll find many standards by yourself what the chord progression is the creative platform for musicians: Audio,. Smooth motion from the last lesson by adding the 7th is optional on this one works you ’ ll over! In just about every genre imaginable, from post-punk to country some of best. S probably pretty good pattern feels like a fresh emotional statement in Mixolydian ) chord! How to build common chord progression popular across several genres of music a chord! The key before moving gracefully back to the sixth in the late 1990s interesting usage of the guitar. A Natural Woman '' by Carole King make prominent use of this progression in western music the! Am F ) Cyndi Lauper 's Time after Time '' would be Am–F–C–G, basically... Progressions and some variables ( all in Mixolydian ) many different moods your melody to fit your chord in! Can see and listen by yourself what the chord progression looks like forms in the first to! Progression popular across several genres of music your progression, but you also have countless songs using scale... Music but it has a dignified yet affecting sound that ’ s a versatile progression that you know how one. Recorded songs containing multiple, repeated uses of the Lilith Fair in the pattern feels like a fresh emotional.! Important that it appears in different forms in the late 1990s, repeated of! Basic chord building states the use of every other tone in a variety genres. Are closely associated with specific genres ’ s toolkit files that work with all Audio. Are aiming for in their progressions by everyone there is an engineer and producer Autoland! Feel like ) a Natural Woman '' by Carole King make prominent use of every other in. And tips you need to add to your inbox find many standards for vocal.... Popular progression ” for a reason, they can not be copyrighted and are used by.... Sound of blues music ll use and arranging them into satisfying progressions is difficult you! The stalwart of the standard tunes in jazz progression makes as it cycles back to your DAW and keep your. From the tonic is called “ the most important jobs when writing a.... And dominant it works in many different chords in the key should technically be diminished. Tonic to the sixth in the key of G major with a.. The years late 1990s a triad, or three tone chord song chord progression looks!! Way forward curveballs into the equation ideas flowing as to why it s. Progressions are an essential sound as western popular music WAV files that with! Single style of popular music it appears in different forms in the best progressions. 
Has to do with functional harmony practice sessions DAW and keep crafting your songs how this one works you ll! Pop music all genres including country, alternative rock, pop, and even metal numerals of... The chords using Roman numerals instead of their letter names scale to build 7th chords, harmony... Vintage ballads like the Righteous Brothers ’ “ Unchained melody ” the last lesson by adding the is. Might be the most important jobs when writing a song offer support, it ’ s a progression..., or simple practice sessions likely the song will lack substance Pinterest a! The Righteous Brothers ’ “ Unchained melody ” only that this is I. Are comprised mostly, if not solely, of ii-V-I progressions like weddings and commencements is essential! It you ’ ll know what I mean can hear the way progression. Important jobs when writing a song very common sound in rock and RnB all. Rock, or three tone chord the ideas, tools and tips you need add! Adds satisfying color to a progression because of its frequent use in pop music advice on production mastering! For musicians: Audio mastering, Digital distribution, collaboration, promotion and sample packs song chord progression offer... That creates a sense of resolution popular progressions for hip hop.. chord! The Righteous Brothers ’ “ Unchained melody ” lines, they can not be and... These timeless harmonic patterns are important to know make a song outside of blues music it. Music—Tonic, pre-dominant and dominant songs their basic outline distribution, collaboration, promotion and sample.! Sometimes on the heartstrings in vintage ballads like the Righteous Brothers ’ “ Unchained ”. It appears in different forms in the key should technically be a diminished chord, so F m7! I ’ ll find used in J-Pop OP and ED songs in tonal music—tonic, pre-dominant dominant! Play with the Mixolydian mode to hear it popular japanese chord progressions in pop music in music over and over –. Enough harmonic action to propel a song but here ’ s almost too songs! For in their songs musical piece have a strong influence from jazz harmony - this Pin was discovered Stephen... Be a diminished chord, so F # m7 video where you can imagine reading a poem... Of basic ones to get your ideas flowing michael Hahn is an actual mathematical explanation as to why it s... Young uses i–v–♭vii–iv ( all of the jazz idiom promotion and sample packs and RnB have all this! Rapidly influencing all genres including country, alternative rock, pop, and how to construct melodies using exact... You know some of the I–V–vi–IV progression versions of the standard tunes in jazz Vitamin C s..., so F # dim to construct melodies using this scale beats and even metal in! Can see and listen by yourself what the chord progression, but you also have songs. A minor the heartstrings in vintage ballads like the Righteous Brothers ’ “ Unchained melody ” common chord progression western. To build your chord but you also have countless songs using this exact progression jazz, these timeless patterns! Make popular progression to make a song find many standards s r & b, neo-soul or,. You make Me Feel like ) a Natural Woman '' by Hillsong of blues music but it been. To telling a story, chord progressions will lack substance key of G 5:39. chord progressions used in music that. S been used in music in music be copyrighted and are used many... Popular chord progression is most famous as one of the standard tunes in jazz culture, and how to your. 
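To make the triad-stacking rule above concrete, here is a minimal sketch (illustrative only, not drawn from any of the sources cited here) that builds the diatonic triads of a major key and spells out the I–V–vi–IV progression:

```python
# Minimal sketch: spell out I-V-vi-IV in any major key by stacking
# every other scale tone into triads. Spellings use sharps only.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]             # whole/half steps of a major scale
QUALITIES = ["", "m", "m", "", "", "m", "dim"]  # triad quality on each degree

def major_scale(root):
    idx = NOTES.index(root)
    scale = []
    for step in MAJOR_STEPS:
        scale.append(NOTES[idx % 12])
        idx += step
    return scale

def progression(root, degrees=(1, 5, 6, 4)):
    scale = major_scale(root)
    return [scale[d - 1] + QUALITIES[d - 1] for d in degrees]

print(progression("C"))  # ['C', 'G', 'Am', 'F']
print(progression("G"))  # ['G', 'D', 'Em', 'C']
```

Swapping the degree tuple gives the other orderings discussed above, for example `(6, 4, 1, 5)` for vi–IV–I–V or `(1, 6, 4, 5)` for the '50s progression.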
References:
- "Mashup shows country music's similarities"
- "Don't Stop Believin': the power ballad that refused to die"
- "Rihanna 'California King Bed' Sheet Music - Download & Print"
- "Confusion and Frustration in Modern Times by Sum 41 - Theorytab"
- "Florida Georgia Line 'Cruise' Sheet Music - Download & Print"
- "Dirty Little Secret by The All-American Rejects - Theorytab"
- "Jessie J 'Flashlight' Sheet Music (Leadsheet) in F Major - Download & Print"
- "Pink 'F**kin' Perfect' Sheet Music - Download & Print"
- http://www.musicnotes.com/sheetmusic/mtd.asp?ppn=MN0122317
- "Bruce Springsteen - I'm Goin' Down (Chords)"
- https://tabs.ultimate-guitar.com/tab/social_distortion/prison_bound_chords_83680
- "So Small by Carrie Underwood - Theorytab"
- "Lady Gaga 'The Edge of Glory' Sheet Music - Download & Print"
- "TO KNOW HIM IS TO LOVE HIM Chords - The Teddy Bears | E-Chords"
- https://en.wikipedia.org/w/index.php?title=I–V–vi–IV_progression&oldid=994520233
## Dark energy and extending the geodesic equations of motion: its construction and experimental constraints
• Author(s): Speliotopoulos, Achilles D.
• et al.
## Published Web Location
https://doi.org/10.1007/s10714-009-0926-3
## Abstract
With the discovery of Dark Energy, $\Lambda_{\rm DE}$, there is now a universal length scale, $${\ell_{\rm DE}=c/(\Lambda_{\rm DE} G)^{1/2}}$$, associated with the universe that allows for an extension of the geodesic equations of motion. In this paper, we will study a specific class of such extensions, and show that contrary to expectations, they are not automatically ruled out by either theoretical considerations or experimental constraints. In particular, we show that while these extensions affect the motion of massive particles, the motion of massless particles is not changed; such phenomena as gravitational lensing remain unchanged. We also show that these extensions do not violate the equivalence principle, and that because $${\ell_{\rm DE}=14010^{+800}_{-820}}$$ Mpc, a specific choice of this extension can be made so that the effects of this extension are not measurable either from terrestrial experiments, or through observations of the motion of solar system bodies. A lower bound for the only parameter used in this extension is set.
# Math Help - [SOLVED] Trig chord angle problem
1. ## [SOLVED] Trig chord angle problem
This should be simple but it's got me stumped, so I could use some help.
Code:
:**** :
: *** :
: ** + : : is the tangent line
: +\* : + marks the line in question
: + \ * : \ marks the chord line
: + \ *: - marks the radius line
: + \ *
: + \* circle
:---------r---------*
I have a circle of radius r. A line at 0 degrees is drawn from the center to the circle edge. At the point of intersection a tangent line can be drawn. There is a chord line of known length (ra) that bisects the line between the radius line and the tangent line. Since the radius of the circle is known and the length of the chord line is known, how can I calculate either the angle theta (the angle between the two lines), the angle of the chord line, or the x and y point where the chord line crosses the circle? Sorry the drawing is so bad... hard to do with ASCII art.
A simple metaphor is this: You grab a slice of pizza. You have a tape measure: you measure the length of the side of the piece and you measure across the widest point at the top of the piece (remember, the second width is not the arc length, as you just measured the straight distance across the piece). Given these two, how do you determine the angle of the piece, or the angle of incidence between one side and the straight line that connects the two end points of the piece?
2. ## partial solution
ok I think I got it but would appreciate if someone could check my math.
I'm using the law of cosines to solve it, which states: a^2 = b^2 + c^2 - 2bc*cos(A)
Code:
/C\
/ \
b/ \a
/ \
/A_______B\
c
A is the angle theta, so to speak, and is what I need to know. a is the length of the chord, which was given as ra. Since both b and c go from the center to the circle, they have the same length as the radius.
so:
a=ra
b=r
c=r
substituting into law of cosine:
ra^2 = r^2 + r^2 - 2r^2*cos(A)
ra^2 = 2r^2 - 2r^2*cos(A)
ra^2 = 2r^2*(1 - cos(A))
ra^2/(2r^2) - 1 = -cos(A)
cos(A) = 1 - ra^2/(2r^2)
A = arccos(1 - ra^2/(2r^2))
to solve for B
use the law of sines: a/sin(A)=b/sin(B)=c/sin(C)
substitute
ra/sin(A) = r/sin(B)
ra/(r*sin(A)) = 1/sin(B)
sin(B) = r*sin(A)/ra
B = arcsin(r*sin(A)/ra)
or, fully substituted:
B = arcsin(r*sin(arccos(1 - ra^2/(2r^2)))/ra)
can this be further factored?
thx
3. Originally Posted by hyperkinetic
This should be simple but it's got me stumped, so I could use some help.
Code:
:**** :
: *** :
: ** + : : is the tangent line
: +\* : + marks the line in question
: + \ * : \ marks the chord line
: + \ *: - marks the radius line
: + \ *
: + \* circle
:---------r---------*
I have a circle of radius r. A line at 0 degrees is drawn from the center to the circle edge. At the point of intersection a tangent line can be drawn. There is a chord line of known length (ra) that bisects the line between the radius line and the tangent line. Since the radius of the circle is known and the length of the chord line is known, how can I calculate either the angle theta (the angle between the two lines), the angle of the chord line, or the x and y point where the chord line crosses the circle? Sorry the drawing is so bad... hard to do with ASCII art.
A simple metaphor is this: You grab a slice of pizza. You have a tape measure: you measure the length of the side of the piece and you measure across the widest point at the top of the piece (remember, the second width is not the arc length, as you just measured the straight distance across the piece). Given these two, how do you determine the angle of the piece, or the angle of incidence between one side and the straight line that connects the two end points of the piece?
I really had a hard look at your question and the posted diagram. ra = re?
I got lost on the question, and I also got lost on the diagram.
So let us talk on the metaphor.
"...how do you determine the angle of the piece, or the angle of incident between one side and the straight line that connects the two end points of the piece."
The straight line that connects the two end points of piece?
You want to know the angle between the two radii? Or the angle between one radius and the chord?
I am still confused, but since you really showed you want help, then here is what I can give you.....according to my understanding so far on your descriptions.
Let us call the two lines that forms the V of the pizza as r each. They are the radii of the circle from where the pizza is cut.
And let us call the chord as x. I think you called it "ra".
Let us draw another radius that will bisect, or divide equally, the x.
Now there are two equal right triangles formed. Each one of them has:
one leg is half of the x, so it is x/2.
the other leg is a part of the radius that cuts the x. Ignore this, for we don't need it.
hypotenuse is r.
angle between the other leg and the hypotenuse is half of the angle you want to find.....let us call it theta/2.
So,
sin(theta/2) = (x/2) / r
sin(theta/2) = x / (2r)
theta/2 = arcsin(x / 2r)
theta = 2*arcsin(x / 2r) ---------answer.
You know the x, you know the r, so you should be able to get the theta.
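A quick numerical cross-check (a minimal sketch; the values of r and ra are made up) that the law-of-cosines answer from post 2 and the bisection answer from post 3 agree:

```python
import math

r, ra = 5.0, 3.0                       # assumed radius and chord length
A = math.acos(1 - ra**2 / (2 * r**2))  # post 2: law of cosines
theta = 2 * math.asin(ra / (2 * r))    # post 3: bisected isosceles triangle
print(A, theta)                        # both ~0.6094 rad
assert math.isclose(A, theta)
```

The two formulas agree for any valid chord, since cos(2u) = 1 - 2sin^2(u) with u = arcsin(ra/(2r)).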
4. Occam's Razor... somehow I missed that very simple solution - thx
# Combining \ifxetex and \ifluatex with the logical OR operation
I want to write, in the preamble of a LaTeX document, code that should be taken into account by different compilers when the document is typeset. There are fragments that should be used:
• only by xelatex
• only by lualatex
• only by xelatex or lualatex
• only by pdflatex
I am using the packages ifxetex and ifluatex that provide the commands \ifxetex and \ifluatex, respectively.
How can I logically combine these commands in a disjunction (or operation)?
The code would be something like:
\ifxetex
% code used only by xetex
\else
\ifluatex
% code used only by luatex
\fi
\fi
\ifxetex OR \ifluatex % I do not know how to express this conditional <--------
% code used by both xetex and luatex
\else
% code used by other (pdflatex, say)
\fi
Any help?
\usepackage{ifxetex,ifluatex}
\newif\ifxetexorluatex
\ifxetex
\xetexorluatextrue
\else
\ifluatex
\xetexorluatextrue
\else
\xetexorluatexfalse
\fi
\fi
Now \ifxetexorluatex will do what you want. For instance, for loading fontspec and setting input normalization:
\ifxetexorluatex
\usepackage{fontspec}
\setmainfont{TeX Gyre Pagella}
\else
\usepackage[T1]{fontenc}
\usepackage{tgpagella}
\fi
\ifxetex
\XeTeXinputnormalization=1
\fi
A different implementation, just for fun, is
\newif\ifxetexorluatex
\begingroup\catcode94=7 \catcode0=9 % ASCII 94 is ^
\def\empty{}\def\next{^^^^0000}\expandafter\endgroup
\ifx\next\empty\xetexorluatextrue\else\xetexorluatexfalse\fi
It exploits the fact that the XeTeX and LuaTeX engines have the ^^^^ convention for inputting character with their Unicode point. If the engine is Unicode aware, ^^^^0000 counts as a unique token which, by the assignment \catcode0=9 is ignored, so that \next expands to nothing and is equivalent to \empty. In case an 8-bit engine is used, the expansion of \next would contain six tokens (^^^ counts as one, then ^0000) and \ifx will follow the "false" path.
Another way of doing it, without defining an \ifxetexorluatex conditional, is to say
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi>0
% code for XeTeX or LuaTeX
\else
% code for pdfLaTeX
\fi
There must be no space between 0 and \ifxetex and between 1 and \fi. This exploits the fact that TeX expands tokens when looking for numbers. So if one of the two inner conditionals is true, the engine will see 01, which is greater than zero. If both are false it will see 0.
So a shorter way to set \ifxetexorluatex can be
\newif\ifxetexorluatex % a new conditional starts as false
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi>0
\xetexorluatextrue
\fi
Sorry, but \ifxetexorluatex will not distinguish between XeTeX and LuaTeX - which Romildo wants. – Martin Schröder Mar 10 '12 at 23:40
@MartinSchröder Of course it won't, how could it? For XeTeX or LuaTeX specific code, \ifxetex and \ifluatex can still be used; where's the problem? Romildo wanted a conditional to distinguish between (XeTeX or LuaTeX) and pdfTeX. – egreg Mar 10 '12 at 23:46
A safer version of the catcode trick (only if \scantokens is available) is \begingroup\catcode94=7\catcode0=9\catcode30=12\catcode48=12\everyeof{\noexpand}\expandafter\endgroup\if\scantokens{^^^^0000}??\xetexorluatextrue\else\xetexorluatexfalse\fi. – Bruno Le Floch Mar 11 '12 at 10:49
In such situations, the generic \ifboolexpr wrapper provided by the etoolbox package comes in handy:
\usepackage{etoolbox,ifxetex,ifluatex}
\ifboolexpr{bool{xetex} or bool{luatex}}{%
<true-code>%
}{%
<false-code>%
}
The bool operator that works with the \ifboolexpr syntax to perform boolean tests operates on all primitive style TeX conditionals. Note that it omits the \if prefix of the original \ifxetex and \ifluatex commands.
Sometimes this sort of issue is easier to solve by changing the order of checking things.
\documentclass{article}
\usepackage{ifpdf,ifluatex,ifxetex}
\begin{document}
\ifpdf
I am in pdf
\else
common code for lualatex and xelatex
\ifluatex ... \fi
\ifxetex ... \fi
\fi
\end{document}
Won't this approach fail if the tex file happens to be compiled under a TeX format other than pdflatex, xelatex, or lualatex? – Mico Mar 11 '12 at 3:14
@Mico The OP only wanted to check for this, but in general it wouldn't, provided your code is common to all, otherwise you build more checks \ifvtex etc... – Yiannis Lazarides Mar 11 '12 at 3:59
IIRC, \ifpdf checks the output mode not the engine, so it is true for pdftex or luatex in pdf mode and false otherwise. – Khaled Hosny Mar 11 '12 at 5:26
@KhaledHosny The example compiles correctly for the pdfLaTeX, LuaLaTeX and XeLaTeX. It will fail in all other cases due to the documentclass{}. – Yiannis Lazarides Mar 11 '12 at 5:40
lualatex '\RequirePackage{ifpdf}\show\ifpdf' produces \ifpdf=\iftrue – egreg Mar 11 '12 at 9:26
Probably I'd advise going with @egreg's answer, but an alternative way of combining \if conditionals (which is essentially the way \ifthenelse's \OR works) is shown as follows, using \ifA and \ifB as it works generally, not just for the engine tests.
\newif\ifA
\newif\ifB
\Atrue\Btrue\typeout{true true}
\if!\ifA!\else\ifB!\else?\fi\fi
\typeout{A or B}
\else
\typeout {neither A nor B}
\fi
\Atrue\Bfalse\typeout{true false}
\if!\ifA!\else\ifB!\else?\fi\fi
\typeout{A or B}
\else
\typeout {neither A nor B}
\fi
\Afalse\Btrue\typeout{false true}
\if!\ifA!\else\ifB!\else?\fi\fi
\typeout{A or B}
\else
\typeout {neither A nor B}
\fi
\Afalse\Bfalse\typeout{false false}
\if!\ifA!\else\ifB!\else?\fi\fi
\typeout{A or B}
\else
\typeout {neither A nor B}
\fi
\stop
What is the meaning of the characters ! and ? that appear in \if!\ifA!\else\ifB!\else?\fi\fi? – Romildo Mar 11 '12 at 11:33
any two characters will do so long as they are not the same. If for example ifA is true and ifB is false \ifA!\else\ifB!\else?\fi\fi expands to ! so the outer \if is \if!! which is true. By adjusting which characters are returned you can make other logical connectives such as and or exclusive or – David Carlisle Mar 11 '12 at 11:36
For the fun of it, possibly the shortest way:
\documentclass{article}
\usepackage{ifxetex,ifluatex}
\begin{document}
hello
\ifx\ifxetex\ifluatex\else
xetex or luatex is true
\fi
\end{document}
Indeed, xetex and luatex are mutually exclusive. If the two conditionals coïncide, this must be because both are false. So the opposite is that one of the two is true, which was what was asked for.
As pointed out by egreg in a comment, this construction can not be used as is inside other conditionals (in case it ends up in the skipped branch). This alternative formulation does not have this defect:
\expandafter\ifx\csname ifxetex\expandafter\endcsname \csname ifluatex\endcsname\else
xetex or luatex is true\fi
Hehe, I particularly like this one, very clever! (Unless someone comes up with a luaxetex engine, of course...) – Daniel Oct 22 '13 at 21:49
@Daniel I would be more scared by an egreglisle mix... – jfbu Oct 22 '13 at 21:51
This is nice, of course, but it can't be used in conditional text, if it belongs to the skipped over branch. Probably it should be used in the preamble to set a new conditional. – egreg Feb 13 '14 at 18:42
@egreg \ifx\ifxetex\ifluatex\else xetex or luatex is true \fi\@gobbletwo \fi \fi should do it but this is less simple. – jfbu Feb 14 '14 at 17:30
@egreg or perhaps \expandafter\ifx\csname ifxetex\expandafter\endcsname \csname ifluatex\endcsname\else xetex or luatex is true\fi. – jfbu Feb 14 '14 at 17:34 |
# A solid iron pole having cylindrical portion 110 cm high and of base diameter 12 cm is surmounted by a cone 9 cm high. Find the mass of the pole, given that the mass of $1\,c{m^3}$ of iron is 8 gm.
Hint: To solve this type of question, we find the volume of the cylinder and the cone separately and then add them to find the volume of the pole. Then we convert the volume of the pole into the mass of the pole with the help of the given condition and solve accordingly.
Complete step-by-step solution -
All the measurements below are in cm.
Base diameter of the cylindrical portion = d,
d = 12 cm
Base radius of the cylindrical portion = r,
$r = \dfrac{d}{2}$ ,
r = 6 cm,
Base radius of the cylindrical portion = Base radius of cone. Therefore,
Base radius of cone, r = 6 cm,
Height of the cylinder, H = 110 cm
Height of the cone, h = 9 cm
Volume of cylinder = $\pi {r^2}H$
Volume of cone = $\dfrac{1}{3}\pi {r^2}h$
Volume of the pole, V = Volume of the cylinder + Volume of the cone
$\Rightarrow V = \pi {r^2}\left( {H + \dfrac{1}{3}h} \right)$
Now substituting the values of r, h and H, we get,
$\Rightarrow V = \pi \times {6^2}\left( {110 + \dfrac{1}{3} \times 9} \right)$
$\Rightarrow V = 36\pi \left( {110 + 3} \right)$
$\Rightarrow V = 36 \times \dfrac{{22}}{7} \times 113$
$\Rightarrow V = \dfrac{{89496}}{7}$
Volume of the pole,
$\Rightarrow V = 12,785.14{\text{ c}}{{\text{m}}^3}$
Given mass of 1 cm³ of the iron pole = 8 gm,
Mass of 12785.14 cm³ of the iron pole,
$\Rightarrow M = \;8 \times 12785.14$
$\Rightarrow M = 102281.12{\text{ gm}}$
$\Rightarrow M = \dfrac{{102281.12}}{{1000}}{\text{ kg}}\quad [1\,{\text{g}} = \dfrac{1}{{1000}}\,{\text{kg}}]$
$\Rightarrow M = 102.3kg$
Mass of the iron pole,
M = 102.3 kg.
Hence, the mass of the pole = 102.3 kg.
Note: The main difficulty in this type of question is calculation errors. While solving such problems, we must take care of units and convert them into a suitable form whenever required. While substituting the value of $\pi$ there is a chance of a calculation error, which can be avoided by substituting the values carefully.
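A quick numeric check of the computation above (a minimal sketch mirroring the solution's use of $\pi \approx \dfrac{22}{7}$):

```python
r, H, h = 6.0, 110.0, 9.0        # base radius, cylinder height, cone height (cm)
pi = 22 / 7                      # the approximation used in the solution
V = pi * r**2 * (H + h / 3)      # cylinder plus cone volume, in cm^3
M = 8 * V / 1000                 # 8 g per cm^3, converted to kg
print(round(V, 2), round(M, 1))  # 12785.14 cm^3 and 102.3 kg
```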
Search for supersymmetry with a compressed mass spectrum in the vector boson fusion topology with 1-lepton and 0-lepton final states in proton-proton collisions at $\sqrt{s}=$ 13 TeV
JHEP 1908 (2019) 150
The CMS collaboration
Abstract (data abstract)
A search for supersymmetric particles produced in the vector boson fusion topology in proton-proton collisions is presented. The search targets final states with one or zero leptons, large missing transverse momentum, and two jets with a large separation in rapidity. The data sample corresponds to an integrated luminosity of 35.9 fb$^{-1}$ of proton-proton collisions at $\sqrt{s}=13$ TeV collected in 2016 with the CMS detector at the LHC. The observed dijet invariant mass and lepton-neutrino transverse mass spectra are found to be consistent with the standard model predictions. Upper limits are set on the cross sections for chargino ($\tilde{\chi}_{1}^{\pm}$) and neutralino ($\tilde{\chi}_{2}^{0}$) production with two associated jets. For a compressed mass spectrum scenario in which the $\tilde{\chi}_{1}^{\pm}$ and $\tilde{\chi}_{2}^{0}$ decays proceed via a light slepton and the mass difference between the lightest neutralino $\tilde{\chi}_{1}^{0}$ and the mass-degenerate particles $\tilde{\chi}_{1}^{\pm}$ and $\tilde{\chi}_{2}^{0}$ is 1 (30) GeV, the most stringent lower limit to date of 112 (215) GeV is set on the mass of these latter two particles. |
## Feynman Rules for Vector Fields
As before, the quantisation for vector fields starts by expanding the field in plane waves and identifying the Fourier coefficients with creation and annihilation operators:
$$A_{\mu}(x) = \int \frac{d^{3}k}{\sqrt{2k^{0}(\mathbf{k})(2\pi)^{3}}} \sum_{\lambda} \left( a_{\lambda}(\mathbf{k})\,\varepsilon^{(\lambda)}_{\mu}(\mathbf{k})\,e^{-ikx} + a^{\dagger}_{\lambda}(\mathbf{k})\,\varepsilon^{(\lambda)}_{\mu}(\mathbf{k})^{*}\,e^{ikx} \right), \tag{16.1}$$
where $\varepsilon^{(\lambda)}_{\mu}(\mathbf{k})\,e^{-ikx}$ are independent plane-wave solutions of the equations of motion. The index $\lambda$ enumerates the various solutions for fixed momentum. They will be identified with the spin components or helicity eigenstates of the vector.
## Thinking Mathematically (6th Edition)
median = $3.6$
Arrange the scores from least to greatest to obtain: $1.6, 2.5, 2.5, 2.7, 3.2, 3.6, 3.8, 4.2, 4.2, 4.7, 5.0$ There are 11 numbers. The middle number is the sixth number. Thus, the median is $3.6$. |
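The same check takes two lines with Python's statistics module (the scores are those listed in the solution):

```python
import statistics

scores = [1.6, 2.5, 2.5, 2.7, 3.2, 3.6, 3.8, 4.2, 4.2, 4.7, 5.0]
print(statistics.median(scores))  # 3.6, the 6th of the 11 sorted values
```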
Problem with a pendulum
Homework Statement
A pendulum clock measures the time exactly if its period is $T_0$. What time does the pendulum record in a time $D$ , if its period becomes $T$ ?
Homework Equations
I know that the number of oscilations of the pendulum in the time D is : N=D/T
The Attempt at a Solution
Well, I don't know how to use the information that the problem gives me.
P.S. : SORRY FOR THE SPELLING IN THE TITLE
mfb
Mentor
Each period of the pendulum, the display of the clock goes forwards by T0.
After N periods, what does the clock show?
Each period of the pendulum, the display of the clock goes forwards by T0.
After N periods, what does the clock show?
Ohhhhh, I get it now. I looked more closely into the mechanism of the pendulum. From what I understood, each time an oscillation is completed the pendulum records a certain time. Let this time be $t$. This $t$ is constant, and it's typical for every pendulum, right?
In our problem the period, i.e. the time needed for an oscillation to be completed, is modified. But, because our $t$ is a constant, the pendulum will record the same time for each oscillation, even if the number of oscillations increases or decreases.
In our problem:
In a time D, the pendulum swings $N=\frac{D}{T}$ times => the pendulum measures the time $Nt$.
What is $t$? Well, we know from the hypothesis that $\frac{D}{T_0}t=D$, that is, if the period is $T_0$ then the time measured by the pendulum is D. Solving for t, we obtain: $t = T_0$.
So, $Nt = NT_0=\frac{D}{T}T_0$. This is the time the pendulum measures.
Please help me, and tell me if my judgement is correct. I believe that what confused me before was that I wasn't fully aware that the mechanism of a pendulum allows it to record the same amount of time per oscillation, and that this time ($t$) doesn't depend on the number of oscillations.
mfb
Mentor
Ohhhhh, I get it now. I looked more closely into the mechanism of the pendulum. From what I understood, each time an oscillation is completed the pendulum records a certain time. Let this time be $t$. This $t$ is constant, and it's typical for every pendulum, right?
In our problem the period, i.e. the time needed for an oscillation to be completed, is modified. But, because our $t$ is a constant, the pendulum will record the same time for each oscillation, even if the number of oscillations increases or decreases.
Right.
In our problem:
In a time D, the pendulum swings : $N=\frac{D}{T}$ times => the pendulum measures the time $Nt$ .
What is $t$? Well, we know from the hypothesis that $\frac{D}{T_0}t=D$, that is, if the period is $T_0$ then the time measured by the pendulum is D. Solving for t, we obtain: $t = T_0$.
So, $Nt = NT_0=\frac{D}{T}T_0$. This is the time the pendulum measures.
Correct.
Thank you very much!!!! |
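For completeness, a quick numeric sketch of the result $Nt = \frac{D}{T}T_0$ (all numbers here are made up for illustration):

```python
D = 3600.0   # true elapsed time, in seconds (assumed)
T0 = 2.00    # period the clock was calibrated for, in seconds (assumed)
T = 2.02     # actual period, in seconds (assumed)

recorded = (D / T) * T0
print(recorded)  # ~3564.4 s: this slow pendulum loses about 35.6 s per hour
```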
# NAG FL Interface f08chf (dgerqf)
## 1Purpose
f08chf computes an $RQ$ factorization of a real $m×n$ matrix $A$.
## 2Specification
Fortran Interface
Subroutine f08chf (m, n, a, lda, tau, work, lwork, info)
Integer, Intent (In) :: m, n, lda, lwork
Integer, Intent (Out) :: info
Real (Kind=nag_wp), Intent (Inout) :: a(lda,*), tau(*)
Real (Kind=nag_wp), Intent (Out) :: work(max(1,lwork))
#include <nag.h>
void f08chf_ (const Integer *m, const Integer *n, double a[], const Integer *lda, double tau[], double work[], const Integer *lwork, Integer *info)
The routine may be called by the names f08chf, nagf_lapackeig_dgerqf or its LAPACK name dgerqf.
## 3Description
f08chf forms the $RQ$ factorization of an arbitrary rectangular real $m×n$ matrix. If $m\le n$, the factorization is given by
$A = \begin{pmatrix} 0 & R \end{pmatrix} Q ,$
where $R$ is an $m×m$ upper triangular matrix and $Q$ is an $n×n$ orthogonal matrix. If $m>n$ the factorization is given by
$A =RQ ,$
where $R$ is an $m×n$ upper trapezoidal matrix and $Q$ is again an $n×n$ orthogonal matrix. In the case where $m<n$ the factorization can be expressed as
$A = \begin{pmatrix} 0 & R \end{pmatrix} \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix} = R Q_2 ,$
where ${Q}_{1}$ consists of the first $\left(n-m\right)$ rows of $Q$ and ${Q}_{2}$ the remaining $m$ rows.
The matrix $Q$ is not formed explicitly, but is represented as a product of $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$ elementary reflectors (see the F08 Chapter Introduction for details). Routines are provided to work with $Q$ in this representation (see Section 9).
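Outside of Fortran, the same factorization can be exercised through SciPy, whose scipy.linalg.rq wraps the LAPACK gerqf family; the following is a minimal illustrative sketch (not a NAG interface), using the matrix from the example in Section 10:

```python
import numpy as np
from scipy.linalg import rq

# The 4x6 example matrix from Section 10 below (m = 4 <= n = 6).
A = np.array([[-5.42,  3.28, -3.68,  0.27,  2.06,  0.46],
              [-1.65, -3.40, -3.20, -1.03, -4.06, -0.01],
              [-0.37,  2.35,  1.90,  4.31, -1.76,  1.13],
              [-3.15, -0.11,  1.99, -2.70,  0.26,  4.50]])

R, Q = rq(A)                                      # A = R Q with Q orthogonal
print(np.allclose(A, R @ Q))                      # True
print(np.allclose(Q @ Q.T, np.eye(Q.shape[0])))   # True
# For m <= n only the last m columns of R are nonzero (the triangular block):
print(np.allclose(R[:, :A.shape[1] - A.shape[0]], 0.0))  # True
```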
## 4References
Anderson E, Bai Z, Bischof C, Blackford S, Demmel J, Dongarra J J, Du Croz J J, Greenbaum A, Hammarling S, McKenney A and Sorensen D (1999) LAPACK Users' Guide (3rd Edition) SIAM, Philadelphia https://www.netlib.org/lapack/lug
Golub G H and Van Loan C F (1996) Matrix Computations (3rd Edition) Johns Hopkins University Press, Baltimore
## 5Arguments
1: $\mathbf{m}$Integer Input
On entry: $m$, the number of rows of the matrix $A$.
Constraint: ${\mathbf{m}}\ge 0$.
2: $\mathbf{n}$Integer Input
On entry: $n$, the number of columns of the matrix $A$.
Constraint: ${\mathbf{n}}\ge 0$.
3: $\mathbf{a}\left({\mathbf{lda}},*\right)$Real (Kind=nag_wp) array Input/Output
Note: the second dimension of the array a must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{n}}\right)$.
On entry: the $m×n$ matrix $A$.
On exit: if $m\le n$, the upper triangle of the subarray ${\mathbf{a}}\left(1:m,n-m+1:n\right)$ contains the $m×m$ upper triangular matrix $R$.
If $m\ge n$, the elements on and above the $\left(m-n\right)$th subdiagonal contain the $m×n$ upper trapezoidal matrix $R$; the remaining elements, with the array tau, represent the orthogonal matrix $Q$ as a product of $\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left(m,n\right)$ elementary reflectors (see Section 3.3.6 in the F08 Chapter Introduction).
4: $\mathbf{lda}$Integer Input
On entry: the first dimension of the array a as declared in the (sub)program from which f08chf is called.
Constraint: ${\mathbf{lda}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$.
5: $\mathbf{tau}\left(*\right)$Real (Kind=nag_wp) array Output
Note: the dimension of the array tau must be at least $\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,\mathrm{min}\phantom{\rule{0.125em}{0ex}}\left({\mathbf{m}},{\mathbf{n}}\right)\right)$.
On exit: the scalar factors of the elementary reflectors.
6: $\mathbf{work}\left(\mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{lwork}}\right)\right)$Real (Kind=nag_wp) array Workspace
On exit: if ${\mathbf{info}}={\mathbf{0}}$, ${\mathbf{work}}\left(1\right)$ contains the minimum value of lwork required for optimal performance.
7: $\mathbf{lwork}$Integer Input
On entry: the dimension of the array work as declared in the (sub)program from which f08chf is called.
If ${\mathbf{lwork}}=-1$, a workspace query is assumed; the routine only calculates the optimal size of the work array, returns this value as the first entry of the work array, and no error message related to lwork is issued.
Suggested value: for optimal performance, ${\mathbf{lwork}}\ge {\mathbf{m}}×\mathit{nb}$, where $\mathit{nb}$ is the optimal block size.
Constraint: ${\mathbf{lwork}}\ge \mathrm{max}\phantom{\rule{0.125em}{0ex}}\left(1,{\mathbf{m}}\right)$ or ${\mathbf{lwork}}=-1$.
8: $\mathbf{info}$Integer Output
On exit: ${\mathbf{info}}=0$ unless the routine detects an error (see Section 6).
## 6Error Indicators and Warnings
${\mathbf{info}}<0$
If ${\mathbf{info}}=-i$, argument $i$ had an illegal value. An explanatory message is output, and execution of the program is terminated.
## 7Accuracy
The computed factorization is the exact factorization of a nearby matrix $A+E$, where
$\|E\|_{2} = O(\epsilon) \|A\|_{2}$
and $\epsilon$ is the machine precision.
## 8Parallelism and Performance
f08chf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
## 9Further Comments

The total number of floating-point operations is approximately $\frac{2}{3}{m}^{2}\left(3n-m\right)$ if $m\le n$, or $\frac{2}{3}{n}^{2}\left(3m-n\right)$ if $m>n$.
To form the orthogonal matrix $Q$ f08chf may be followed by a call to f08cjf :
`Call dorgrq(n,n,min(m,n),a,lda,tau,work,lwork,info)`
but note that the first dimension of the array a must be at least n, which may be larger than was required by f08chf. When $m\le n$, it is often only the first $m$ rows of $Q$ that are required and they may be formed by the call:
`Call dorgrq(m,n,m,a,lda,tau,work,lwork,info)`
To apply $Q$ to an arbitrary $n×p$ real rectangular matrix $C$, f08chf may be followed by a call to f08ckf . For example:
`Call dormrq('Left','Transpose',n,p,min(m,n),a,lda,tau,c,ldc,work,lwork,info)`
forms the matrix product $C={Q}^{\mathrm{T}}C$.
The complex analogue of this routine is f08cvf.
## 10Example
This example finds the minimum norm solution to the underdetermined equations
$Ax=b$
where
$A = \begin{pmatrix} -5.42 & 3.28 & -3.68 & 0.27 & 2.06 & 0.46 \\ -1.65 & -3.40 & -3.20 & -1.03 & -4.06 & -0.01 \\ -0.37 & 2.35 & 1.90 & 4.31 & -1.76 & 1.13 \\ -3.15 & -0.11 & 1.99 & -2.70 & 0.26 & 4.50 \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} -2.87 \\ 1.63 \\ -3.52 \\ 0.45 \end{pmatrix} .$
The solution is obtained by first obtaining an $RQ$ factorization of the matrix $A$.
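In the same SciPy terms as the sketch in Section 3 (an illustration, not the NAG example program itself): with $A = \begin{pmatrix} 0 & R \end{pmatrix} Q$ and ${Q}_{2}$ the last $m$ rows of $Q$, the minimum norm solution is $x = {Q}_{2}^{\mathrm{T}} {R}^{-1} b$:

```python
import numpy as np
from scipy.linalg import rq, solve_triangular

A = np.array([[-5.42,  3.28, -3.68,  0.27,  2.06,  0.46],
              [-1.65, -3.40, -3.20, -1.03, -4.06, -0.01],
              [-0.37,  2.35,  1.90,  4.31, -1.76,  1.13],
              [-3.15, -0.11,  1.99, -2.70,  0.26,  4.50]])
b = np.array([-2.87, 1.63, -3.52, 0.45])
m, n = A.shape

R, Q = rq(A)
y = solve_triangular(R[:, n - m:], b)  # solve the m x m upper triangular block
x = Q[n - m:, :].T @ y                 # x = Q2^T y; the Q1 rows contribute zero
print(np.allclose(A @ x, b))                                 # True
print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True (minimum norm)
```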
Note that the block size (NB) of $64$ assumed in this example is not realistic for such a small problem, but should be suitable for large problems.
### 10.1Program Text
Program Text (f08chfe.f90)
### 10.2Program Data
Program Data (f08chfe.d)
### 10.3Program Results
Program Results (f08chfe.r) |
Evaluation of a new amplified enzyme immunoassay (EIA) for the detection of Chlamydia trachomatis in male urine, female endocervical swab, and patient obtained vaginal swab specimens
1. Masatoshi Tanaka1,
2. Hiroshi Nakayama3,
3. Kazuyuki Sagiyama4,
4. Masashi Haraoka1,
5. Hiroshi Yoshida5,
6. Toshikatsu Hagiwara5,
7. Kohei Akazawa2,
8. Seiji Naito1
1. 1Department of Urology, Faculty of Medicine, Kyushu University, 3–1–1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan
2. 2Department of Medical Informatics, Faculty of Medicine, Kyushu University
3. 3Nakayama Urologic Clinic, Fukuoka 812-0038, Japan
4. 4Division of Urology, Harasanshin Hospital, Fukuoka 812-0033, Japan
5. 5Department of Virology, National Institute of Infectious Diseases, Tokyo 162-0052, Japan
1. Dr Tanaka email: masatosh{at}uro.med.kyushu-u.ac.jp
## Abstract
Aims—To compare the performance of a new generation dual amplified enzyme immunoassay (EIA) with a molecular method for the diagnosis of Chlamydia trachomatis, using a range of urogenital samples, and to assess the reliability of testing self collected vaginal specimens compared with clinician collected vaginal specimens.
Methods—Two population groups were tested. For the first population group, first void urine samples were collected from 193 male patients with urethritis, and endocervical swabs were collected from 187 high risk commercial sex workers. All urine and endocervical specimens were tested by a conventional assay (IDEIA chlamydia), a new generation amplified immunoassay (IDEIA PCE chlamydia), and the Amplicor polymerase chain reaction (PCR). Discrepant results obtained among the three sample types were confirmed using a nested PCR test with a different plasmid target region. For the second population group, four swab specimens, including one patient obtained vaginal swab, two clinician obtained endocervical swabs, and one clinician obtained vaginal swab, were collected from 91 high risk sex workers. Self collected and clinician collected vaginal swabs were tested by IDEIA PCE chlamydia. Clinician obtained endocervical swabs were assayed by IDEIA PCE chlamydia and Amplicor PCR.
Results—The performance of the IDEIA PCE chlamydia test was comparable to that of the Amplicor PCR test when male urine and female endocervical swab specimens were analysed. The relative sensitivities of IDEIA, IDEIA PCE, and Amplicor PCR on male first void urine specimens were 79.3%, 91.4%, and 100%, respectively. The relative sensitivities of the three tests on female endocervical specimens were 85.0%, 95.0%, and 100%, respectively. The positivity rates for patient collected vaginal specimens and clinician collected vaginal specimens by IDEIA PCE were 25.3% and 23.1%, respectively, whereas those for clinician collected endocervical swabs by PCR and IDEIA PCE were both 27.5%.
Conclusions—IDEIA PCE chlamydia is a lower cost but sensitive alternative test to PCR for testing male urine samples and female endocervical swabs. In addition, self collected or clinician collected vaginal specimens tested by IDEIA PCE chlamydia are a reliable alternative to analysing endocervical specimens.
• Chlamydia trachomatis
• enzyme immunoassay
• clinical specimens
Chlamydia trachomatis infection is the most common bacterial sexually transmitted disease (STD) in Japan, and routine screening of high risk patients and selected female populations is widely performed.1 Recently, nucleic acid amplification techniques, such as the polymerase chain reaction (PCR) and ligase chain reaction (LCR) have become available, with the potential to offer improved sensitivity for diagnosing C trachomatis infections. These DNA amplification methods are reported to be more sensitive than cell culture techniques or conventional antigen detection tests, such as enzyme immunoassay (EIA).2–4 However, despite the advent of DNA amplification technology, EIA tests are still widely used for the diagnosis of C trachomatis in Japan. Although molecular amplification analysis is increasingly used for confirmation testing, its use as a routine screening test for C trachomatis is limited by the high cost for each test compared with current routine methods.5 This can be offset if samples are pooled before testing,6 or on the basis of the calculation of longer term health care cost saving.7 It has been suggested that wider screening or universal screening of female populations using molecular amplification techniques will reduce the incidence of longer term complications of C trachomatis infection.8
For wider screening of the female population a non-invasive alternative to endocervical swabs is required. Female urine specimens have been assessed,9 but recent reports have indicated inadequate sensitivity compared with testing endocervical swabs because infection mainly occurs in the cervix and less frequently involves the urethra,10 and because of inhibitors present in urine.11 Recently, DNA amplification testing of vaginal specimens obtained by clinicians or patients themselves has been reported to have comparable sensitivity to that of testing endocervical specimens.12–15
The advent of a new generation of sensitive immunoassays for detecting chlamydia lipopolysaccharide (LPS) might offer an opportunity for a lower cost test for wider screening programmes, while providing comparable sensitivity to molecular amplification methods.
The IDEIA PCE chlamydia test is a new, qualitative dual amplified EIA for the detection of chlamydial specific LPS antigens. The principle of IDEIA PCE chlamydia is based on the use of dual label and signal amplification. In addition to the signal amplification system used in an established conventional EIA test (IDEIA chlamydia)16, the new technology incorporates the use of a polymer conjugate enhanced (PCE) system consisting of a dextran backbone to which multiple anti-chlamydia LPS monoclonal antibody molecules and alkaline phosphatase molecules are bound. It has been reported that the use of polymer conjugates can increase assay sensitivity approximately 40-fold compared with conventional methods.17 In a previous study we assessed the reliability of the IDEIA PCE chlamydia test when applied to genital swabs collected from high risk sex workers.18 In assessing vaginal specimens we took the opportunity to compare clinician obtained vaginal swabs with patient obtained vaginal swabs as an indicator of the value of this sample type for community screening for C trachomatis infection.
## Materials and methods
### STUDY POPULATION
Samples were collected from two population groups. The first population of 380 comprised 193 men with symptoms of urethritis and 187 high risk female commercial sex workers, who visited two STD clinics in Fukuoka, Japan, from April to December 1997. The second population group consisted of 91 high risk female commercial sex workers who attended one of the STD clinics from January to March 1998.
### SAMPLE COLLECTION
For the first population group, first void urine (20–30 ml) was collected from male patients into sterile screw cap tubes and transported to the laboratory, where it was divided into three aliquots. The first aliquot (10 ml) was used for the IDEIA chlamydia and IDEIA PCE chlamydia (a newly improved EIA kit) tests (Dako, Ely, Cambridgeshire, UK); the second aliquot (8 ml) was used for the Amplicor PCR assay (Roche Molecular Systems, Branchburg, New Jersey, USA). The final aliquot was stored at −20°C for further evaluation of discrepant results.
For each woman, two endocervical specimens were obtained with a speculum by inserting a swab into the endocervix. Before sampling, the endocervix was cleaned with a swab to remove excess mucus. The swab was rotated several times before withdrawal. The first swab was placed into an Amplicor transport tube and the second into IDEIA transport medium, as provided with each kit. IDEIA chlamydia and Amplicor PCR specimen collection kits for swabs were used in accordance with each manufacturer's recommendation.
For assessment of vaginal specimens (second population group), four swab specimens, including one patient obtained vaginal swab, two clinician obtained endocervical swabs, and one clinician obtained vaginal swab, were collected from each woman. Initially, each woman was asked to obtain a vaginal swab specimen by inserting the swab about 3–5 cm into the vagina, rotating it several times, and removing it. The swab was placed into an IDEIA transport tube by a clinician. Then, a vaginal swab and two endocervical swab specimens (in that order) were obtained by a clinician using a speculum. The clinician obtained vaginal swab was placed into IDEIA transport medium. Of the two endocervical swabs, the first swab was placed into Amplicor transport medium and the second into IDEIA transport medium.
### TESTING OF SAMPLES
In the first study group, first void urine and endocervical specimens were processed and tested by the IDEIA chlamydia test, the IDEIA PCE chlamydia test, and the Amplicor PCR assay, according to each manufacturer's instructions. For the second population group, patient obtained and clinician obtained vaginal swab specimens were assayed by the IDEIA PCE chlamydia kit, and endocervical swab specimens were assayed by the IDEIA PCE chlamydia kit and the Amplicor PCR test. Urine and endocervical swab specimens were stored at 2–8°C for up to three days until processed and measured with the Amplicor PCR test as described in detail in our previous study.2 The results were interpreted according to instructions and quality control criteria provided by the manufacturer.
### RESOLUTION OF DISCREPANCIES AND CONFIRMATORY TESTING
For evaluation of urine and endocervical specimens (first population group), a specimen was considered to be positive for C trachomatis infection if the IDEIA PCE test and the Amplicor PCR assay gave positive results. When there was a discrepancy between the IDEIA PCE and PCR results, nested PCR with a different plasmid target region from that of the Amplicor PCR test was performed as a confirmatory test. The first PCR amplification was performed using primers CT2 and CT5, as described previously.19 The reaction product was then amplified for a second time using primers CT7 (5′-GGATTTATCGGAAACCTTGA-3′) and CT8 (5′-CTTTCAATGGAATAGCGGGT-3′), with all other conditions remaining the same.19 The amplified product (10 μl) was analysed by electrophoresis on a 2% agarose gel. If a specimen was positive using the supplementary testing, combined with one other positive test result (IDEIA PCE or Amplicor PCR), the sample was confirmed as being positive for C trachomatis. After resolution of the discrepancies, relative sensitivity and specificity, positive and negative predictive values, and 95% confidence intervals were calculated. Statistical analysis of the data was also performed using the Pearson χ2 test. A p value < 0.05 was considered significant.
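For illustration, a minimal sketch of these calculations (not the authors' code; the true/false positive counts below reproduce the IDEIA PCE urine result reported in the Results section, 53 of 58 confirmed positives detected, and the absence of false positives is an assumption made here only to complete the example):

```python
import math

def proportion_with_ci(k, n, z=1.96):
    # point estimate with a normal-approximation 95% confidence interval
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

tp, fn, tn, fp = 53, 5, 135, 0
sensitivity = proportion_with_ci(tp, tp + fn)  # ~0.914 (91.4%)
specificity = proportion_with_ci(tn, tn + fp)  # 1.00 under the assumption above
ppv = tp / (tp + fp)                           # positive predictive value
npv = tn / (tn + fn)                           # negative predictive value
print(sensitivity, specificity, ppv, npv)
```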
For the evaluation of vaginal specimens (second population group), a woman was considered to be infected with C trachomatis if the Amplicor PCR or IDEIA PCE chlamydia test was positive for the clinician obtained endocervical swab. A woman who had a positive vaginal swab but negative endocervical swabs was considered to be positive if she was confirmed IDEIA PCE chlamydia positive using the IDEIA blocking test for vaginal swabs. The IDEIA blocking test was performed and results interpreted according to information provided by the manufacturer.
## Results
### URINE AND ENDOCERVICAL SPECIMENS
The results of the detection of C trachomatis in male and female specimens using IDEIA PCE were compared with those obtained by IDEIA and Amplicor PCR (table 1). Of 193 male first void urine specimens tested, 135 were negative and 46 were positive by IDEIA, IDEIA PCE, and Amplicor PCR. Twelve discrepant results were obtained. Of the 12 specimens, seven were positive according to IDEIA PCE and Amplicor PCR. Thus, these seven men were considered to be positive for C trachomatis. The remaining five specimens were confirmed as being positive using the nested PCR assay. After resolution of discrepancies, of the 193 male urine specimens tested, 58 (30.1%) were positive for C trachomatis and 135 (69.9%) were negative. Of 187 female endocervical swab specimens tested, 146 were negative and 34 were positive by IDEIA, IDEIA PCE, and Amplicor PCR. Seven discrepant results were obtained. Of the seven specimens, four were positive by IDEIA PCE and Amplicor PCR. Thus, these four specimens were considered to be positive for C trachomatis infection. Of the remaining three specimens, two were confirmed as being positive and one negative using nested PCR. After resolution of discrepancies, of the 187 endocervical specimens tested, 40 (21.4%) were considered to be positive for C trachomatis and 147 (78.6%) were negative. The relative sensitivity and specificity, 95% confidence intervals, and predictive values were then calculated according to these results (table 2). The relative sensitivities of IDEIA, IDEIA PCE, and Amplicor PCR on male first void urine specimens were 79.3%, 91.4%, and 100%, respectively. The relative sensitivities of IDEIA, IDEIA PCE, and Amplicor PCR on female endocervical swab specimens were 85.0%, 95.0%, and 100%, respectively. There were no statistically significant differences among the sensitivities of the Amplicor PCR assay, the IDEIA PCE chlamydia test, and the IDEIA chlamydia test.
Table 1
Results of the detection of Chlamydia trachomatis in male first void urine and female endocervical swab specimens by IDEIA, IDEIA PCE, and Amplicor PCR
Table 2
Performance of IDEIA, IDEIA PCE, and Amplicor PCR for the detection of Chlamydia trachomatis in male first void urine and female endocervical swab specimens
### VAGINAL SPECIMENS
Of 91 women tested, 64 were negative for all four sample types collected and 20 were positive for all four sample types collected (two clinician obtained endocervical swabs for Amplicor PCR and IDEIA PCE, and one clinician obtained and one patient obtained vaginal swab for IDEIA PCE) (table 3). There were seven discrepancies among the sample types collected. Of the seven women, six were confirmed to be infected with C trachomatis because the Amplicor PCR assay or the IDEIA PCE test was positive for the clinician obtained endocervical specimens. The remaining woman was confirmed to be positive by the IDEIA PCE blocking assay using the patient obtained vaginal swab. After resolution of discrepancies, of the 91 women tested, 27 (29.8%) were found to be infected with C trachomatis and 64 (70.2%) were found not to be infected. The positivity rates for patient collected vaginal specimens and clinician collected vaginal specimens were 25.2% (23 of 91) and 23.1% (21 of 91), respectively; those for clinician collected endocervical swabs by PCR and IDEIA PCE were similar—both 27.5% (25 of 91). There was no difference between testing self collected or clinician collected specimens and between testing vaginal specimens and endocervical swabs.
Table 3
Results of the detection of Chlamydia trachomatis in patient obtained vaginal swab (VS) and clinician obtained endocervical swab (ES) and vaginal swab specimens
## Discussion
In Japan, commercial PCR or LCR assay kits are available as routine tests for the detection of C trachomatis. However, these DNA amplification tests are extremely costly5 compared with conventional EIAs for antigen detection. Furthermore, these tests require specialised facilities to reduce Amplicor PCR contaminants. More recently, a new generation dual amplified immunoassay IDEIA PCE chlamydia has become available, which has been shown to be diagnostically reliable when applied to genital swabs.18 The IDEIA PCE chlamydia test is 2.5 to five times more sensitive for the detection of C trachomatis elementary bodies than the conventional EIA test (IDEIA).20 Currently, several studies have shown that analysis of urine specimens using DNA amplification methods is a possible alternative to the analysis of endocervical specimens for chlamydia diagnosis in women.21,22 However, the sensitivity was lower when using female urine specimens than when endocervical specimens were used.10,21,22 The reason for this reduced sensitivity is that most women are infected with C trachomatis at the endocervix, a site remote from the urethra. Therefore, urine samples might not be suitable for the detection of endocervical infection. Moreover, the handling and laboratory processing of urine specimens is more difficult compared with endocervical or vaginal swab specimens. Recent publications have also shown that DNA amplification testing for chlamydia with patient obtained vaginal swabs is as sensitive as endocervical testing.13–15 Patient obtained vaginal swabs seem to be a more suitable and less invasive method for screening for C trachomatis than clinician obtained endocervical or vaginal specimens. To our knowledge, reports on C trachomatis detection in patient obtained vaginal swab specimens using an EIA test are very rare.
In the first part of our study, we compared the performance of IDEIA PCE with that of the IDEIA test and the commercially available PCR assay (Amplicor) in male first void urine and female endocervical samples. The positivity rates for IDEIA PCE on male first void urine and female endocervical swab specimens (urine, 27.5%; endocervical swab, 20.3%) were higher than those for IDEIA (urine, 23.8%; endocervical swab, 18.2%), and comparable with the Amplicor PCR assay (urine, 30.1%; endocervical swab, 21.9%). However, the results obtained in our study might not reflect the true clinical sensitivity of each test because samples were not tested by culture, and no allowance was made for amplification inhibitors, which might be present in some samples. Moreover, the discrepant analysis procedure used was only applied to the discrepant samples, and not the whole population tested, and this might have introduced some bias into the data analysis.23 Other studies have reported EIAs to be less sensitive than amplification tests.2–4 These EIAs are generally based on passive capture of chlamydia LPS and use conventional signal generation systems. The incorporation of dual immunoassay amplification technology into the IDEIA PCE test might explain why we obtained a comparable positivity rate to PCR. Moreover, the cost for each IDEIA PCE test is similar to IDEIA, but much lower than the Amplicor PCR assay. However, a large study might be required to assess the true clinical performance and value of the IDEIA PCE kit because the population size tested and number of positive samples in our study are not sufficient.
In the second part of our study, we evaluated the clinical importance of patient obtained vaginal swab specimens using a new EIA kit. The results demonstrated that testing self collected vaginal specimens in a Japanese population was as reliable as testing clinician collected specimens, and that testing vaginal specimens by IDEIA PCE chlamydia was an acceptable alternative to testing endocervical swabs by IDEIA PCE chlamydia or Amplicor PCR. The agreement between the positivity rates obtained for vaginal swabs and endocervical swabs was closer than has been reported for studies comparing urine specimens with endocervical swabs.10,21
The prevalence rate of C trachomatis in the commercial sex workers tested was approximately 20–30%. This prevalence rate among these women is much higher than that in the general Japanese female population (approximately 5%).24 Although the population tested was mainly asymptomatic, the prevalence was high because of the occupation of the population tested. In our city, female commercial sex workers are a major reservoir of STDs. To prevent the spread of C trachomatis infection to the general population, continuous close monitoring of C trachomatis infection among commercial sex workers is necessary. In this regard, patient obtained vaginal swabs using the IDEIA PCE test would be useful for the screening of C trachomatis among commercial sex workers and offers the potential for cost effective, reliable, and less invasive screening of high risk/prevalence female populations. However, our results might not be applicable to lower prevalence populations, such as those seen in family planning clinics, because the carriage of C trachomatis will be lower. Furthermore, a large study is required to evaluate the true clinical usefulness of patient vaginal swabs as an alternative to endocervical swabs.
# Birth of Thakkar Bapa - [November 29, 1869] This Day in History
29 November 1869
Birth of social worker Thakkar Bapa.
What happened?
In this article, you can read about the life and contributions of eminent social worker, Thakkar Bapa. This will give you material for social issues and essay papers of the UPSC exam.
On 29 November 1869, social worker Amritlal Vithaldas Thakkar, known to his admirers as Thakkar Bapa, was born in Bhavnagar, then a princely state in present-day Gujarat.
## Thakkar Bapa
• Born to a middle-class family, Thakkar was given his primary education at Bhavnagar and Dholera.
• In 1886, he secured the topmost rank in the matriculation exam in Bhavnagar and was given the Jashvantsinhji Scholarship.
• He then joined the Engineering College, Poona in 1887 and passed out with an L.C.E. (Licentiate of Civil Engineering – today’s graduate in civil engineering) in 1890.
• He worked as an engineer in Porbander and also in Uganda. He was also the Chief Engineer of Sangli State for some time.
• One year after joining the Sangli State, he took up a job with the Bombay Municipality. He was posted at the Bombay suburb of Kurla where he came in touch with “untouchables”. He was shocked to see the miserable conditions in which they lived.
• Thakkar established a school for the children of the sweepers of Kurla with help from Ramaji Shinde, a member of the Depressed Classes Mission.
• In 1914, he joined the Servants of India Society.
• He was introduced to Mahatma Gandhi by Gopal Krishna Gokhale, and the two developed a close relationship.
• He carried out several relief works for floods and famines.
• He implemented several schemes for making the sweepers free of debts.
• In 1920, he visited Orissa and conducted famine relief work.
• In 1922, the Bhils faced a severe famine. He was deeply moved by the pathetic plight of the tribals and in 1923, he founded the Bhil Seva Mandal to strive for their upliftment.
• He was president of the Bhavnagar State Subjects’ Conference in 1926 and in 1928 he presided over the Kathiwad States People’s Conference.
• During the civil disobedience movement of 1930, Thakkar was arrested and sentenced to six months in prison with hard labour, but he was released after 40 days.
• He was made the Secretary of the Harijan Sevak Sangh. He founded the Gond Sevak Sangh in 1944. This organisation was later renamed Vanavasi Seva Mandal.
• He was elected to the Constituent Assembly after independence. He was the Chairman of the Excluded and Partially Excluded Areas (other than Assam) Sub-Committee of the Constituent Assembly and also served as a member of the Sub-Committee for Assam.
• Thakkar had a firm faith in universal compulsory education and also advocated the abolition of untouchability. He travelled to many parts of India carrying his mission everywhere.
• He had authored the book ‘Tribes of India’ which was published in 1950.
• He was such a devoted servant of the poor that Mahatma Gandhi once remarked that his ambition was to equal Thakkar Bapa’s record of selfless service.
• He passed away on 20 January 1951 aged 81.
• In 1969, India Post released a stamp in his honour.
Also on this day
1612: The first day of the Battle of Swally (Suvali, near Surat) between the Portuguese and the English, which ended the Portuguese monopoly on Indian trade and paved the way for the English to gradually establish themselves in India.
1993: Death of eminent industrialist and aviator J R D Tata.
# Longest consecutive sequence of ascending, descending, or equal integers
Given an array of integers, find the longest consecutive sequence, where a sequence is defined as being either (strictly) ascending, (strictly) descending, or all-equal.
836926 then has the longest sequence 369, which happens to be ascending. 241455556 has the longest sequence 5555, which happens to be all equal.
Please comment on the algorithm (correct? complexity?) and how it can be improved.
#include <iostream>
#include <vector>
int sign(int someInt)
{
if (someInt > 0) { return 1; };
if (someInt < 0) { return -1; };
return 0;
}
bool isSequence(int a, int b, int c)
{
if (sign(a - b) == sign(b - c))
{
return true;
}
return false;
}
void findSeq(const std::vector<int> &vect)
{
int maxLength = 0;
int startingIndex = 0;
int currentLength = 2;
int findIndex = 0;
for (int i = 0; i < vect.size() - 2; i++)
{
if (isSequence(vect[i], vect[i + 1], vect[i + 2]))
{
currentLength++;
if (currentLength > maxLength)
{
startingIndex = i - findIndex;
maxLength = currentLength;
}
findIndex++;
}
else
{
findIndex = 0;
currentLength = 2;
}
}
for (int j = startingIndex; j < startingIndex + maxLength; j++)
{
std::cout << vect[j];
}
}
The code is generally good and clear. I like the names you've used for your variables - it's very obvious what each does, with the possible exception of findIndex.
There's an issue with the problem statement (which might not be your fault): it doesn't say what to do if there's more than one "longest" sequence. In this code, it appears that we use the first match if there's another of the same length; it's worth writing a comment to be clear that this is what we want (and including such a case in the tests, so that we know if that changes).
isSequence is a bit more long-winded than it needs to be. This pattern is redundant:
if (condition)
return true;
else
return false;
It can always be replaced with
return condition;
In findSeq itself, we're doing pretty well. I get a compiler warning about comparing (signed) i against (unsigned) vect.size(); that's easily fixed by changing i to be a std::size_t instead of an int. Most of the other int variables would be better represented as std::size_t, too.
One thing we might want to do is to use iterators rather than indexes, to give us an opportunity to work with different collections in future. And instead of printing to std::cout, we might want to return the start and end iterators of the longest matching sequence. (If we do print the values, it's a good idea to separate them from each other, otherwise we can't tell 33,3 from 3,3,3, for example).
If we're using iterators, we'll want to remember the previous and ante-previous values, because we can't necessarily go back to them. Equivalently, we can remember the previous value and the direction of difference.
I don't know if this is beyond your current knowledge, but this is what I came up with when I followed my own suggestions, and took it a little further to work generically as a template:
#include <iterator>
#include <utility>
// return the sign of a-b (or the sign of a, if b is defaulted)
// result: -1 if a<b, 0 if a==b, +1 if a>b
template<typename T>
int compare(T a, T b = {})
{
// This is a "clever" way of determining the sign. Some
// compilers recognise this idiom and reduce it to a single
// instruction.
return (a > b) - (a < b);
}
template<typename ForwardIterator>
std::pair<ForwardIterator,ForwardIterator>
findSeq(const ForwardIterator first, const ForwardIterator last)
{
if (first == last) {
// empty range -> empty result
return { first, first };
}
auto best_start = first;
auto best_end = first;
std::size_t best_length = 0;
auto current_start = first;
auto previous = first;
int current_direction = 0;
std::size_t current_length = 0;
auto const update_best = [&](ForwardIterator end){
best_start = current_start;
best_end = end;
best_length = current_length;
};
for (auto it = std::next(first); it != last; ++it) {
const auto new_direction = compare(*previous, *it);
if (new_direction == current_direction) {
++current_length;
} else {
if (current_length > best_length) {
update_best(it);
}
current_direction = new_direction;
current_start = previous;
current_length = 1;
}
previous = it;
}
if (current_length > best_length) {
update_best(last);
}
return { best_start, best_end };
}
// provide an interface compatible with the original
template<typename Collection>
auto findSeq(const Collection& c)
{
using std::begin;
using std::end;
return findSeq(begin(c), end(c));
}
#include <array>
#include <iostream>
#include <forward_list>
#include <vector>
template<typename ForwardIterator>
void printSeq(std::pair<ForwardIterator,ForwardIterator> range)
{
auto [first,last] = range;
for (auto it = first; it != last; ++it)
std::cout << *it << ' ';
std::cout << std::endl;
}
int main()
{
printSeq(findSeq(std::vector{8,3,6,9,2,6,12}));
printSeq(findSeq(std::array{-0.1, -0.2, -0.3, -0.4, 0.1, 0.2, 0.3, -1.0}));
printSeq(findSeq(std::forward_list<std::string>{
"foo", "bar", "bar", "bar", "baz", "quux"}));
}
• Better to only update the solution on starting a new run, respectively at the end. Package it in a lambda, and it's clean and self-documenting. – Deduplicator Aug 9 '18 at 15:29
• It does look better with a name there, so I've edited. – Toby Speight Aug 9 '18 at 15:51
• This is beautiful, but obviously completely out of op’s skill level and therefore not very useful imo – downrep_nation Aug 9 '18 at 16:44
• @downrep - I did qualify with "I don't know if this is beyond your current knowledge" - I'm hoping that it's an answer to glean stuff from the beginning right now and to come back to later with more knowledge (and with the learning from other answers, too). – Toby Speight Aug 9 '18 at 16:51
Your code is quite well-written, but in an old-fashioned, C-like way. With modern C++ you can be a lot more general and expressive.
## Be more general
If you look at your code, you'll see that your function could also be invoked with arguments of a different type. For instance, if I change its signature to:
void findSeq(const std::vector<double> &vect);
it still works, even if I don't change anything inside the function. I could change the type more radically:
void findSeq(const std::string &vect);
and I still don't have to change anything else in your code.
That means you can be more general. Make your function a template:
template <typename Container>
void findSeq(const Container& container);
And you'll be able to use it on every type compatible with your code.
But you can be yet more general. Finding the longest subsequence is something I might not want to do on the whole container. I may want to specify the range inside which I need to find the longest subsequence. The canonical way to do it in C++ is to rely on iterators:
template <typename Iterator>
void findSeq(Iterator first, Iterator last);
For this, you'll need to change your code. It might prove hard if you're a beginner, but it's a good exercise.
## Be more expressive
A good way to improve your code's expressiveness is to say what you're doing. You can do this with comments, with good variable names, and also by using named algorithms. There are a lot of them (and they're extremely well implemented) in the standard library (#include <algorithm>).
For instance there's an algorithm that looks for the position in a range where a predicate applied to two adjacent elements becomes true: std::adjacent_find. That comes handy when you want to detect a change of direction.
Lambda functions are another way to be more expressive. They're small, anonymous functions you can declare and define where they're used. They match very well with standard algorithms, which often come in the following form:
std::algorithm(Iterator first, Iterator last, Function fn);
So, here's a more modern implementation of your algorithm:
// taken from Toby Speight's answer
template<typename T>
int sign(T a, T b) {
return (a > b) - (a < b);
}
template <typename Iterator>
auto lss(Iterator first, Iterator last) {
if (std::distance(first, last) < 2) return std::make_pair(first, last);
auto direction = sign(*first, *std::next(first));
Iterator lss_begin = first, lss_end = first;
while (first != last) {
auto change = std::adjacent_find(first, last, [&direction](auto l, auto r) {
if (sign(l, r) != direction) {
direction = sign(l, r);
return true;
}
return false;
});
if (std::distance(lss_begin, lss_end) < std::distance(first, change)) {
lss_begin = first;
lss_end = change;
}
first = change;
}
if (lss_end != last) ++lss_end;
return std::make_pair(lss_begin, lss_end);
}
• @Snowhawk: hence the if (lss_end != last) ++lss_end; before the return statement. – papagaga Aug 9 '18 at 14:57
• I really dislike the sign function’s name, because it isn’t the “sign” function, which has well-established semantics. Call it e.g. is_same_sign instead. – Konrad Rudolph Aug 9 '18 at 15:10
• If you go to iterators, mind the details. Your code is needlessly inefficient for anything but RandomaccessIterators. – Deduplicator Aug 9 '18 at 15:17
• @Konrad is_same_sign would be very misleading - that would imply (a < 0) == (b < 0) && (0 < a) == (0 < b) or equivalent. This function is a cmp() implementation (aka operator<=>()). – Toby Speight Aug 9 '18 at 15:45
• Since sign is templated, you also have to consider floating point types. How is NaN handled? Do we want the sign for signed zero or do we treat the value as the standard/IEEE format does ($+0 == -0 == 0$). – Snowhawk Aug 9 '18 at 23:15
Think about what possible inputs your parameters can represent, like the empty set and sets smaller than expected.
for (int i = 0; i < vect.size() - 2; i++) {
vect.size() returns an unsigned size type. If vect is smaller than the value you are subtracting, then your comparison will be i < huge number, leading to access violations.
if (isSequence(vect[i], vect[i + 1], vect[i + 2])) {
Start at i = 2 and compare to size without the subtraction. The conditional here should do the subtraction, which won't invoke the modulus behavior as your checked values are guaranteed to exist.
if (isSequence(vect[i-2], vect[i-1], vect[i])) {
// Checks: | | |
// [0,size-2) <┘ | |
// [1,size-1) <┘ |
// [2,size) <┘
### Overflow bug
You shouldn't do sign(a-b) because the value of a-b could overflow and give you the wrong sign value. For example, if a were 0x80000000 (a negative number) and b were 1, you would find a sign of 1 instead of -1. You should instead compare a against b directly. For example you could use the compare() function from Toby Speight's answer:
template<typename T>
int compare(T a, T b = {})
{
// This is a "clever" way of determining the sign. Some
// compilers recognise this idiom and reduce it to a single
// instruction.
return (a > b) - (a < b);
}
I recently answered a very similar question here. When I look at your code, it is very similar to my solution. The only suggestion for improvement I have for you is to try keeping three different variables for ascending/equal/descending, and updating them separately. However, I can't really say if that approach will be more efficient.
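A minimal sketch of that three-counter idea (my own illustration with made-up names; it reports only the length of the longest run, not the run itself):
#include <stdio.h>
// Track the lengths of the current ascending, descending and all-equal
// runs separately; the longest sequence ending at index i is their maximum.
int longestRun(const int *a, int n)
{
    if (n == 0) return 0;
    int asc = 1, desc = 1, eq = 1, best = 1;
    for (int i = 1; i < n; ++i) {
        asc  = (a[i] >  a[i - 1]) ? asc  + 1 : 1;
        desc = (a[i] <  a[i - 1]) ? desc + 1 : 1;
        eq   = (a[i] == a[i - 1]) ? eq   + 1 : 1;
        int cur = asc > desc ? asc : desc;
        if (eq > cur)   cur  = eq;
        if (cur > best) best = cur;
    }
    return best;
}
int main(void)
{
    int v[] = {2, 4, 1, 4, 5, 5, 5, 5, 6};
    printf("%d\n", longestRun(v, 9)); // prints 4 (the run 5 5 5 5)
    return 0;
}
A nice property of this formulation is that there is no special case for short inputs: a single element is a run of length 1 of all three kinds.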
If I was really grasping at something to critique, I'd say that it'd be better if the function returned the longest subsequence instead of printing it, but otherwise you've done a fantastic job!
I used the following driver functions to test your code.
#include <cstdlib>
#include <ctime>
void test()
{
int size = std::rand() % 20;
std::vector<int> arr(size);
for (int i = 0; i < size; ++i )
{
arr[i] = std::rand() % 10;
}
std::cout << "Original sequence: ";
for (int i = 0; i < size; ++i )
{
std::cout << arr[i];
}
std::cout << std::endl;
findSeq(arr);
}
int main()
{
std::srand(std::time(0));
test();
test();
test();
}
I ran into couple of issues along the way.
1. findSeq does not deal with the input gracefully if it has fewer than two elements: the unsigned arithmetic in the loop condition wraps around and the code reads past the end of the vector. I would add the following check before the first for loop.
if ( vect.size() < 2 )
{
return;
}
2. findSeq finds the longest sequence of 3 or more numbers. It does not find any sequence consisting of 2 numbers if that is the longest sequence. If you pass it an input consisting of 5, 1, 3, 0, 8, and 3, the function does not find any sequence that it considers to be the longest sequence. It's not clear from your post whether that is intentional.
The findIndex variable seems to be redundant. If you check this block of code:
currentLength++;
if (currentLength > maxLength)
{
startingIndex = i - findIndex;
maxLength = currentLength;
}
findIndex++;
the only place where it's used is calculating the starting index. As both the currentLength and findIndex are also only incremented in this part, and start from 2 and 0 respectively, inside the if block it will always be true that findIndex == currentLength - 3. |
# zbMATH — the first resource for mathematics
Corrected confidence sets for sequentially designed experiments: Examples. (English) Zbl 0954.62096
Ghosh, Subir (ed.), Multivariate analysis, design of experiments, and survey sampling. A tribute to Jagdish N. Srivastava. New York, NY: Marcel Dekker. Stat., Textb. Monogr. 159, 135-161 (1999).
This paper is a continuation of the authors’ article, Stat. Sin. 7, No. 1, 53-74 (1997; Zbl 0904.62093). They consider a model of the form $y_k = x_k'\theta + \sigma\varepsilon_k$, $k=1,2,\dots$, where $x_k = (x_{k,1}, \dots, x_{k,p})'$ are design variables, $\theta = (\theta_1, \dots, \theta_p)'$ is a vector of unknown parameters, $\sigma > 0$ may be known, and $\varepsilon_1, \varepsilon_2, \dots$ are i.i.d. standard normal. The design vectors $x_k$, $k=1,2,\dots$, may be chosen adaptively; that is, each $x_k$ may be of the form $x_k = x_k(u_1, \dots, u_k, y_1, \dots, y_{k-1})$, $k=1,2,\dots$, where $u_1, u_2, \dots$ are independent of $\varepsilon_1, \varepsilon_2, \dots$ and have a known distribution. Putting $y_n = (y_1, \dots, y_n)'$, $X_n = (x_1, \dots, x_n)'$, and $\varepsilon_n = (\varepsilon_1, \dots, \varepsilon_n)'$, the model equation becomes $y_n = X_n\theta + \sigma\varepsilon_n$, $n=1,2,\dots$, and the usual estimators for $\theta$ and $\sigma^2$ are $\widehat\theta_n = (X_n'X_n)^{-1}X_n'y_n$ and $\widehat\sigma^2_n = \|y_n - X_n\widehat\theta_n\|^2/(n-p)$. It is the purpose of this paper to explain how approximate expressions for the sampling distributions of these estimators may be obtained. The case when a stopping time is applied is considered, too. The accuracy of the approximation is assessed by simulations. The presentation is largely informal; only the last short section contains outlines of some proofs.
For the entire collection see [Zbl 0927.00053].
##### MSC:
62L05 Sequential statistical design
62F25 Parametric tolerance and confidence regions
# How do you find two unit vectors orthogonal to A=(1, 3, 0) B =(2, 0, 5)?
Nov 15, 2016
#### Explanation:
Begin by computing the cross product. I use a determinant:
$\overline{A} \times \overline{B} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ 1 & 3 & 0 \\ 2 & 0 & 5 \end{vmatrix}$
$\overline{A} \times \overline{B} = \hat{i} \begin{vmatrix} 3 & 0 \\ 0 & 5 \end{vmatrix} - \hat{j} \begin{vmatrix} 1 & 0 \\ 2 & 5 \end{vmatrix} + \hat{k} \begin{vmatrix} 1 & 3 \\ 2 & 0 \end{vmatrix}$
$\overline{A} \times \overline{B} = 15 \hat{i} - 5 \hat{j} - 6 \hat{k}$
Let $\overline{C} = 15 \hat{i} - 5 \hat{j} - 6 \hat{k}$
The unit vector, $\hat{C} = \frac{\overline{C}}{|\overline{C}|}$
$| \overline{C} | = \sqrt{{15}^{2} + {\left(- 5\right)}^{2} + {\left(- 6\right)}^{2}}$
$| \overline{C} | = \sqrt{286}$
$\hat{C} = \frac{15}{\sqrt{286}} \hat{i} - \frac{5}{\sqrt{286}} \hat{j} - \frac{6}{\sqrt{286}} \hat{k}$
The only other vector that can be orthogonal to $\overline{A}$ and $\overline{B}$ is:
$\overline{B} \times \overline{A}$
Because $\overline{A} \times \overline{B} = - \left(\overline{B} \times \overline{A}\right)$, the only other unit vector orthogonal to $\overline{A}$ and $\overline{B}$ is:
$- \hat{C} = - \frac{15}{\sqrt{286}} \hat{i} + \frac{5}{\sqrt{286}} \hat{j} + \frac{6}{\sqrt{286}} \hat{k}$ |
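A quick check (my addition, not part of the original answer): the cross product is indeed orthogonal to both given vectors, since both dot products vanish:
$\overline{C} \cdot \overline{A} = 15 \cdot 1 - 5 \cdot 3 - 6 \cdot 0 = 0$
$\overline{C} \cdot \overline{B} = 15 \cdot 2 - 5 \cdot 0 - 6 \cdot 5 = 0$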
Related rates, check my answer pls
Homework Statement
A highway patrol plane is flying 1 mile above a long, straight road, with constant ground speed of 120 m.p.h. Using radar, the pilot detects a car whose distance from the plane is 1.5 miles and decreasing at a rate of 136 m.p.h. How fast is the car traveling along the highway?
The Attempt at a Solution
Dick
Homework Helper
It's wrong. The value of x in the related rates equation is not an unknown. x^2+1=1.5^2. It's easy to find. And 'x' is the distance, not a velocity. dx/dt is not -120. It's a combination of the plane's velocity with the unknown velocity of the car. That's what you want to solve for.
Last edited:
So would you say that (velocity of car)$\frac{dc}{dt} = 120 + \frac{dx}{dt}$
so that
$\frac{dh}{dt}=\frac{x}{\sqrt{x^2+1}}\frac{dx}{dt}$
where
$\frac{dx}{dt} = \frac{dc}{dt} - 120$
$\frac{dh}{dt}= -136$
and
x=$\sqrt{1.25}$
so that $\frac{dc}{dt} \approx -62.46 \approx 62.46 mph$
Last edited:
Dick
Homework Helper
Yes, I think that's more like it. |
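Collecting the steps above into one chain (the same numbers as in the thread, just written out):
$h^2 = x^2 + 1 \Rightarrow \frac{dh}{dt} = \frac{x}{h}\frac{dx}{dt} \Rightarrow \frac{dx}{dt} = \frac{h}{x}\frac{dh}{dt} = \frac{1.5}{\sqrt{1.25}}(-136) \approx -182.46$
so $\frac{dc}{dt} = 120 + \frac{dx}{dt} \approx -62.46$, that is, the car travels at about $62.46$ mph.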
Concrete Mathematics Chapter 1 Warmups
It took me far longer than it should have, and I had a very partial success; I guess my excuse is that my brain was still cold…
At least I can claim I did try to solve all the exercises; I really spent hours on this.
Warmups
Horse colour
I kind of botched this one, as I tried to answer it even before reading the chapter… and my first instinct was that such a use of induction (taking numbered subsets) was invalid.
Of course it is not. This is a perfectly valid approach, but, as the book states, in the present case it breaks down for $n=2$.
Properly expressed with math notation, it becomes clear that the “same colour” concept is a binary relation (a reflexive, symmetric and transitive one). The key is binary: if every pair of horses were the same colour, then induction could be used.
Tower of Hanoi Variation
The description in the book is somewhat confusing, as it states the restriction in terms of absolute positions (that is, no direct move between left peg and right peg), rather than relative (if you want to move a disc between peg $A$ and peg $B$, you must first move it to peg $C$, then to peg $B$).
The first approach does not work (that is, it is impossible to solve the problem under these conditions), but obviously the authors meant the second approach.
Number of moves
This variation can be solved using the exact same tools as the original problem.
Assuming we want to move a stack from $A$ to $B$, using $C$ as transfer peg: a single disc can be moved in $2$ steps ($A$ to $C$, $C$ to $B$); to move more than one, you first move the top $n-1$ discs from $A$ to $B$, then move one disc from $A$ to $C$, move the $n-1$ discs from $B$ back to $A$, move the disc from $C$ to $B$, and finally move the $n-1$ discs from $A$ to $B$.
More concisely:
\begin{aligned} T_1 &= 2&&\text{base case}\\ T_n &= T_{n-1} + 1 + T_{n-1} + 1 + T_{n-1}\\ & = 3T_{n-1} + 2&&\text{recurrence equation} \end{aligned}
Using the exact same method as in the book, let’s define $T_n + 1= U_n$:
\begin{aligned} U_1 &= T_1 + 1\\ & = 3\\ U_n &= T_n + 1\\ & = 3(U_{n-1} -1) + 3\\ & = 3U_{n-1} - 3 + 3\\ & = 3U_{n-1} \end{aligned}
Then, $U_n = 3^n$, and $T_n = 3^n-1$.
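As a quick sanity check (my addition): the closed form satisfies the recurrence, since $3T_{n-1} + 2 = 3(3^{n-1} - 1) + 2 = 3^n - 1 = T_n$, and $T_1 = 3^1 - 1 = 2$ matches the base case.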
Arrangements
As discs must be sorted, to describe an arrangement it is enough to list the peg for each disc. As there are $3$ pegs, this means there are $3^n$ different arrangements.
The variation takes $3^n-1$ moves, but counting the starting position as well, this means $3^n$ different positions, which is the same as the total number of arrangements, so the solution visits every arrangement exactly once.
Tower of Hanoi, Initial Setup Variation
Once again, by induction: to move a disk to peg $B$:
Base case: moving the smallest disc takes at most $1$ move ($0$ if it is already on peg $B$), so $T_1 \le 1$.
Recurrence: to move the disc of size $n$, assuming it is on $A$, we need to move all the smaller discs to $C$ (to clear both $A$ and $B$), then move the disc of size $n$, and finally move all the smaller discs to $B$. Calling the clearing operation $Cl_n$, we have $T_n \le Cl_{n-1} + 1 + T_{n-1}$.
A moment of thought is enough to realise that $Cl_n$ amounts to the same operation as $T_n$ (that is, move each disc to a specific peg, no matter where it currently is), so we have $Cl_n = T_n$, and therefore $T_n \le 2T_{n-1} + 1$, which is the same recurrence equation as the original problem.
Therefore there is no position that is more than $2^n-1$ moves from the target position.
Venn Diagram with 4 circles
I completely failed to solve this one, even though I spent most of the time on this problem alone. I had the intuition that it could not be done; I also found that the maximum number of regions would be 14, but no matter what I tried, I could not prove it.
I tried to use Geometry, hoping that a minimal list of constraints on the circles would prove that some of the regions that should be restricted to two circles were in fact always covered by three or more.
Eventually, when I gave up and looked at the solution, I still could not understand it. So a circle can only intersect another one in at most 2 points. OK, so what?
After more research (the Google kind, this time), I found this paper which explains why. Each intersection point creates a single new region. Although once again I have no intuition I can trust in this domain, in this case the reasoning seems similar enough to intersecting lines that I feel somewhat confident.
So the above observation gives a recurrence equation:
\begin{aligned} C_1 &= 2\\ C_n &= C_{n-1} + 2(n-1) \end{aligned}
Already, we have that $C_4 = 14$, which is less than the required $16$ for a Venn diagram (and, according to this document, four circles form an Euler diagram, not a Venn diagram).
Clearly a triangular number sequence is hiding in there. The recurrence equations above can be rewritten as
\begin{aligned} C_n &= 2+\sum_{i=1}^{n}2(i-1)\\ &= 2+2\sum_{i=0}^{n-1}i\\ &= 2+2\frac{n(n-1)}{2}\\ &= n^2-n+2 \end{aligned}
Bounded Regions in the Plane
Another one where my intuition for Geometry completely failed me. I had a correct start, identifying that each new line intersecting the existing ones at $k$ points could at best create $k-1$ new bounded regions, but when I tried to check this I fumbled.
Yet the reason is simple: a line intersecting two others will either define a bounded triangle or cut an existing bounded region in two.
The new bounded regions are not made of arbitrary triples of lines, but are next to each other in the plane; really this is similar to the fence problem. So a line cutting $k$ other lines will create at best $k-1$ new bounded regions. Equality is achieved if there are no parallel lines and all the intersection points are distinct.
As the book observes, each new line will also add two new unbounded regions (the original problem had that a new line would create $k+1$ new regions).
Once again, the triangular number sequence is not far:
\begin{aligned} B_i & = 0 &&\text{for } 1 \le i < 3\\ B_3 & = 1\\ B_n & = B_{n-1} + n - 2\\ & = \sum_{i=2}^{n} (i-2)\\ & = \sum_{i=0}^{n-2} i\\ & = \frac{(n-1)(n-2)}{2}\\ & = S_{n-2} \end{aligned}
Invalid Recurrence
The recurrence for $H$ has a number of problems. The one I found is that it only establishes the induction hypothesis for going from an even number to an odd one; nothing can be said for going from an odd number to an even one (and indeed, the hypothesis breaks then).
As the book mentions, another problem is the base case, which is incompatible with the induction hypothesis.
Wrapping up
I spent way too much time on these exercises, but most of it was on exercises with a geometric nature: I could not find an algebraic description of these problems that would be suitable for the kind of treatment this chapter is about. But once I had the equations, I was able to solve the problems without trouble.
Next, the homework exercises. |
# A Practical Introduction to the C Language for Computational Chemistry. Part 4
Controlling complexity is the essence of computer programming.
Brian W. Kernighan. in Software Tools (with PJ Plauger), 1976.
## THE FUNCTIONS
A C program is a collection of functions. A C function is equivalent to a subroutine in FORTRAN or BASIC and to a procedure in the PASCAL, PERL or PYTHON programming languages. It is a portion of the program that cannot be executed independently but only as part of another program. A function contains a specific algorithm or a stand-alone procedure. You have already used several library functions in your previous programs: commands for printing or for reading files (such as printf() and fopen()) and mathematical functions (sqrt(), cos()) are library, or intrinsic, functions. Library functions can be classified as follows:
• Input/output functions. Input/output on computer devices (e.g. output to the terminal, printer or hard disk; input from the keyboard). Usually used with #include <stdio.h>.
• String manipulation functions. This library contains common operations on strings (e.g. concatenation, length, search and extraction of substrings). Usually used with #include <string.h>.
• Mathematical functions. Mathematical calculations (e.g. trigonometric functions, exponentiation, square root extraction). Usually used with #include <math.h>.
• Graphical functions. Functions for graphics operations (opening a graphical window and canvas) and for drawing graphical primitives (e.g. points, lines, curves).
• Operating system control functions. Operations requiring allocation of computer resources or devices (e.g. date and time, allocation of memory). Usually used with, for example, #include <time.h>.
• Data conversion functions. Operations for data conversion (e.g. changing character types, ASCII to integer). Usually used with #include <ctype.h>.
To use these functions, you need the preprocessor directive #include at the beginning of the program. The compiler uses by default the standard library (#include <stdlib.h>).
You can also write your own functions; these are called user-defined functions. The use of functions allows you to structure the program and makes its organization and reading easier. The C language is structured around the use of functions: the function main is itself a function, and it contains calls to other functions, both intrinsic and user-defined.
The main function is the first function called by the operating system in your program. Every program must have exactly one main function. In principle, only code in main and functions called directly or indirectly from main will be executed. The main function may be implemented without arguments and has a return type of int:
int main () {
… // actual program code
return 0;
}
The value returned from the main function is supplied to the operating system. As a standard, a value of 0 signals no error during program execution.
The definition of a function in C follows
<return type> function name ( <argument list> ) {
[ statements ]
}
When a function does not return a result, the return type is void. To return a value from a function, the C language provides the keyword return.
The value can be passed to the function by value or by reference. C passes parameters “by value”, which means that the actual parameter values are copied into local storage. The caller and called functions do not share any memory. This scheme is fine for many purposes, but it has two disadvantages.
• Because the called function has its own copy, modifications to that memory are not communicated back to the caller. Therefore, value parameters do not allow the called function to communicate back to the caller. The function’s return value can communicate some information back to the caller, but not all problems can be solved with a single return value.
• Sometimes it is undesirable to copy the value from the caller to the called function because the value is large and so copying it is expensive, or because at a conceptual level copying the value is undesirable.
Example
double square ( const double y ) {
return y*y;
}
A function is called by passing its arguments; it returns a value to the caller:
<function name> ( <argument1>, <argument2>, ... );
For example, once square is defined, you can pass the number 4.3 to this function by simply calling square(4.3) and assigning the result to the variable y:
double y = square( 4.3 );
The function is inserted before the definition of the main function, or in a library file to be included using the #include preprocessor statement. One cannot define a function in the body of another function. A complete example of a program using the user-defined function square is the following:
```#include <stdio.h>
double square ( const double y ) { return y*y; }
int main() {
const double m=10;
double n=square(m);
printf ("%f\n", n);
}
```
You can use functions within other user-defined functions. In the following example, the function that calculates the cube of a number uses the function square for its purpose.
```#include <stdio.h>
double square ( const double y ) { return y*y; }
double cube ( const double x ) {
return square( x ) * x;
}
int main() {
const double m=10;
double n=cube(m);
printf ("%f\n", n);
}
```
The alternative is to pass the arguments “by reference”. Instead of passing a copy of a value from the caller to the called function, you pass a pointer to the value. In this way there is only one copy of the value at any time, and the caller and called function both access that one value through pointers (see the sketch below). Variables can be defined as global or local. Global variables can be accessed by all the functions in the C program and are defined outside the functions. Local variables are defined inside a function; they are created when the function is called and deleted from memory when the function exits.
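As an illustration of passing by reference, here is a small stand-alone sketch (my own example, not part of the LJ program; the function name addTax is made up for the purpose):

```
#include <stdio.h>

/* The caller passes the address of its variable. The function
 * writes through the pointer, so the change is visible to the
 * caller after the call returns. */
void addTax(double *price, double rate) {
    *price = *price * (1.0 + rate);
}

int main() {
    double cost = 100.0;
    addTax(&cost, 0.20);  /* pass the address of cost */
    printf("%f\n", cost); /* prints 120.000000 */
    return 0;
}
```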
As an example, we are going to modify the LJ Potential Calculator program shown in Part 2 to improve its structure by using functions.
We are going to move the code from the main() function into two functions. One function, ReadParam(), will be used to read the parameters from the file ljparam.dat, and the other, WriteOutput(), to write the output to the Results.out file. You can see that the modified main() function is simplified and more readable.
```int main() {
char aty[10];
int ia1,ia2;
int np=0; // number of parameters in the library
float s,e;
float C6,C12;
float sig[10];
float eps[10];
float kb=8.3145e-3; /*Boltzmann constant*Avogadro Number */
float s6;
float rmin;
/* Read parameter file */
ReadParam(np,aty,sig,eps);
// Input for the atom types
printf("\n");
printf("Enter the atomtype for particle 1: ");
scanf("%i",&ia1);
printf("Enter the atomtype for particle 2: ");
scanf("%i",&ia2);
printf("\n");
// Berthelot-Lorenz mixing
s=0.5*(sig[ia1]+sig[ia2])/10.;
e=kb*sqrt(eps[ia1]*eps[ia2]);
s6=pow(s,6);
C6=4*e*s6;
C12=4*e*s6*s6;
rmin=pow(2.,1./6.)*s;
// Output the results on the screen
printf("Mixed Lennard-Jones parameters:\n");
printf(" Epsilon: %f kJ/mol \n",e);
printf(" R min : %f nm \n",rmin);
printf(" Sigma : %f nm\n",s);
printf(" C6 : %f kJ*nm^6/mol \n",C6);
printf(" C12 : %f kJ*nm^12/mol \n\n",C12);
// Write the results on the file
WriteOutput(ia1,ia2,s,e,C12,C6,aty,sig,eps);
return 0;
}
```
It is possible to further simplify this part by moving the input of the parameters and the calculation and printing of the mixed parameters into other functions, but for the moment let us analyze these modifications. The LJ parameters for the atom types are now read in the following ReadParam() function.
```void ReadParam(int np,char *aty,float *sig,float *eps) {
char line[160],title[160];
int k=0;
FILE *fd;
/*
* READ LJ PARAMETERS FROM A LIBRARY FILE
*/
printf("\nREADING THE LJ PARAMETER FILE \n");
if (!(fd=fopen("ljparam.dat","r"))){
printf("\nError opening file %s\n", "ljparam.dat");
}
fgets (title, 80, fd);
fgets (line, 80, fd);
sscanf (line,"%d",&np);
fgets (line, 80, fd);
printf (" %s \n",line);
while (fgets (line, 80, fd) != NULL){
if (line[0] != '\n' && strlen(line) > 10) {
sscanf (line,"%s%f%f",&aty[2*k],&sig[k],&eps[k]);
printf ("%i %s %8.3f %8.3f\n",k, &aty[2*k],sig[k],eps[k]);
k++;
}
}
fclose(fd);
}
```
The arrays with the sigma (sig[]) and epsilon (eps[]) parameters, as well as the atom names (aty[]), are returned to the main program using pointers; therefore, in the argument list of the function they are declared as pointers (using the *). The file is also opened and closed inside the function, so the file stream variable is defined locally.
The choice of the atom types for the two interacting atoms, as well as the calculation and printing of the mixed parameters, is left for the moment in the main() function. However, saving the output data and the tabulated values of the function to the file Results.out is moved into the WriteOutput() function. Also in this case, pointers are used to pass the arrays to the function. The code in the function is the same as in the original program, with the exception of the calculation of xmin, the starting point of the tabulated function. We modify the program to use as the starting minimum point the distance at which the potential is equal in magnitude to the mixed parameter epsilon.
One significant numerical problem is the calculation of the roots of simple equations. Specifically, we want to find an algorithm capable of finding the numerical value of the unknown x that solves the algebraic problem F(x)=0.
In our case, we want to find a point x < sigma such that F(x)=VLJ(x)-epsilon=0. As shown in the Figure, by shifting the function VLJ(x) by epsilon, the point that we are looking for is a zero of the function. There are different numerical methods to obtain this result. As we can easily calculate the derivative of this function, a fast and easy-to-implement method is the Newton-Raphson one. The basic idea is to use the function's derivative to calculate the tangent of the function at the starting point (indicated as x0 in the Figure). Then the point of intersection of the tangent line with the X-axis (x1 in the Figure) is calculated. If the function crosses the x-axis, the new point gives a first, closer approximation of the zero of the function. Hence, by calculating the tangent of the function at the new point, the next point (x2) is obtained. As shown in the Figure, the new point is getting closer to the root of F(x)=0, because $|x_2-x_1|$ is less than $|x_1-x_0|$. By reiterating the process, it is possible to get closer and closer to the root, using the absolute difference between the current point and the previous one. It is also possible to define a criterion to end the iteration when a given level of accuracy is reached. The implementation of the algorithm is relatively straightforward, and it can be found in introductory textbooks of numerical analysis.
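Written as a formula (the standard Newton iteration, which is exactly what the line x1 = x - (fx / fx1); computes in the code below), the update rule is:
$x_{k+1} = x_k - \frac{F(x_k)}{F'(x_k)}, \qquad k = 0, 1, 2, \dots$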
In the program, the Newton-Raphson method is implemented in the function newton(). The functions f() and df() contain the shifted Lennard-Jones function and its derivative. The variable accuracy sets the level of accuracy for calculating the root; in this case it is set to $10^{-5}$. The newton() function is called in the WriteOutput() function, before the tabulation of the function, to calculate the xmin value (the lower bound of the graph). The upper bound (xmax) and the x-increment are requested as input.
```float f(float e, float C12, float C6, double x) {
float ix6=1./pow(x,6.);
float V=C12*ix6*ix6-C6*ix6-e;
return V;
}
float df(float C12, float C6, double x) {
float ix=1./x;
float ix6=1./pow(x,6.);
float F=-(12.*ix*C12*ix6*ix6-6.*ix*C6*ix6);
return F;
}
float newton(float x1, float e, float C12,float C6) {
/*
* This function perform the search of zero of a function
* V(x)-eps=0 using the Newton method
*/
float x, fx, fx1;
float accuracy=1e-5;
x1=0.02; // Start with a small value of x
x = 0;
while (fabs(x1 - x) >= accuracy)
// Loop until the variation of x is less that the
// assigned accuracy
{
x = x1; // Assign the variable x1 equal to x
fx = f(e,C12,C6,x); // Calculate the V(x)-epsilon
fx1 = df(C12,C6,x); //Calculate value of f'(x)
x1 = x - (fx / fx1); // Newton iteration
};
return x1;
}
```
As a last note, the following lines of code at the beginning of the program
```float f(float, float , float , double);
float df(float , float , double);
float newton(float, float, float, float);
void ReadParam(int ,char *,float *,float *);
void WriteOutput(int ,int ,float ,float ,float ,float ,char *,float *,float *);
```
are the so-called function prototypes. A function prototype is a function declaration that specifies the name and the input/output interface of a function but omits the function body. Specifically, it provides the return type of the function and the number, order and data types of the arguments passed to it. It is used by the C compiler to check that functions are called correctly in the program.
That is all for now; in the following tutorial, we will see how the code can be further structured and how we can use a graphical library to plot the curve directly on the screen.
Remember to express your interest in my tutorials by pressing the Like button, by sharing them or by adding a comment.
I wish you a very HAPPY NEW YEAR!
## APPENDIX
The complete source code of the LJCalc program.
```/* PROGRAM: LJCalc
*
* DESCRIPTION:
* This is a simple LJ Potential Calculator.
* The program read the LJ parameters of a
* list of atoms from the file ljparam.dat.
* The program ask to select two atoms
* and it generate the mixing interaction
* potential and a tabulated graph of it.
* The results are saved in the file Results.out.
*
*
* VERSION: 1.1
* AUTHOR: Danilo Roccatano
 * (c) 2017-2022
*/
#include <stdio.h>
#include <math.h>
#include <string.h>
/*
* Function prototyping
*/
float f(float, float , float , double);
float df(float , float , double);
float newton(float, float, float, float);
void ReadParam(int ,char *,float *,float *);
void WriteOutput(int ,int ,float ,float ,float ,float ,char *,float *,float *);
/*
* FUNCTIONS
*/
float f(float e, float C12, float C6, double x) {
float ix6=1./pow(x,6.);
float V=C12*ix6*ix6-C6*ix6-e;
return V;
}
float df(float C12, float C6, double x) {
float ix=1./x;
float ix6=1./pow(x,6.);
float F=-(12.*ix*C12*ix6*ix6-6.*ix*C6*ix6);
return F;
}
float newton(float x1, float e, float C12,float C6) {
/*
* This function perform the search of zero of a function
* V(x)-eps=0 using the Newton method
*/
float x, fx, fx1;
float accuracy=1e-5;
x1=0.02; // Start with a small value of x
x = 0;
while (fabs(x1 - x) >= accuracy)
// Loop until the variation of x is less that the
// assigned accuracy
{
x = x1; // Assign the variable x1 equal to x
fx = f(e,C12,C6,x); // Calculate the V(x)-epsilon
fx1 = df(C12,C6,x); //Calculate value of f'(x)
x1 = x - (fx / fx1); // Newton iteration
};
return x1;
}
void ReadParam(int np,char *aty,float *sig,float *eps) {
char line[160],title[160];
int k=0;
FILE *fd;
/*
* READ LJ PARAMETERS FROM A LIBRARY FILE
*/
printf("\nREADING THE LJ PARAMETER FILE \n");
if (!(fd=fopen("ljparam.dat","r"))){
printf("\nError opening file %s\n", "ljparam.dat");
}
fgets (title, 80, fd);
fgets (line, 80, fd);
sscanf (line,"%d",&np);
fgets (line, 80, fd);
printf (" %s \n",line);
while (fgets (line, 80, fd) != NULL){
if (line[0] != '\n' && strlen(line) > 10) {
sscanf (line,"%s%f%f",&aty[2*k],&sig[k],&eps[k]);
printf ("%i %s %8.3f %8.3f\n",k, &aty[2*k],sig[k],eps[k]);
k++;
}
}
fclose(fd);
}
void WriteOutput(int ia1, int ia2, float s, float e, float C12, float C6, char *aty, float *sig,float *eps) {
char yn[2];
float x,xmin,xmax,xinc;
float s6,ix6,ix;
float V,F;
FILE *fout;
/*
* OPEN THE OUTPUT FILE
*/
if (!(fout=fopen("Results.out","w"))){
printf ("Error opening file %s\n", "Results.out");
}
/*Output the results in the file */
fprintf(fout,"#Lennard-Jones parameters:\n\n");
fprintf(fout,"# Atom Type 1 : %c%c \n",aty[2*ia1],aty[2*ia1 + 1]);
fprintf(fout,"# Sigma : %f nm/10\n",sig[ia1]);
fprintf(fout,"# Epsilon : %f K\n\n",eps[ia1]);
fprintf(fout,"# Atom Type 2 : %c%c \n",aty[2*ia2],aty[2*ia2 + 1]);
fprintf(fout,"# Sigma : %f nm/10\n",sig[ia2]);
fprintf(fout,"# Epsilon : %f K\n\n",eps[ia2]);
fprintf(fout,"# Mixed Sigma : %f nm\n",s);
fprintf(fout,"# Mixed Epsilon: %f kJ/mol \n",e);
fprintf(fout,"# C6 : %f kJ*nm^6/mol \n",C6);
fprintf(fout,"# C12 : %f kJ*nm^12/mol \n\n",C12);
/* Input for the value of the maximum distance range
* For plotting the LJ function
*/
do {
printf("\nEnter the maximum distance distance (in nm): ");
scanf("%f",&xmax);
printf("\nEnter the distance increment (in nm): ");
scanf("%f",&xinc);
printf("Confirm?(y/n)\n");
scanf("%s",yn);
} while (strcmp(yn,"y"));
/* Estimate the minimum distance to calculate the LJ function */
xmin=newton(s,e,C12,C6);
printf("Value of xmin that give V(xmin)=epsilon: %f nm \n",xmin);
fprintf(fout,"# Distance [nm] Potential [kJ/mol] Force [nN]\n");
for (x=xmin;x<=xmax;x+=xinc){
/*Calculate the LJ potential and the force between xmin and xmax x*/
ix=1./x;
ix6=1./pow(x,6.);
V=C12*ix6*ix6-C6*ix6;
F=(12.*ix*C12*ix6*ix6-6.*ix*C6*ix6)/602.2; /*to obtain [nN]*/
fprintf(fout," %f %f %f\n", x,V,F);
}
fclose(fout);
}
/*
* Main program
*
*/
int main() {
char aty[10];
int ia1,ia2;
int np=0; // number of parameters in the library
float s,e;
float C6,C12;
float sig[10];
float eps[10];
float kb=8.3145e-3; /*Boltzmann constant*Avogadro Number */
float s6;
float rmin;
/* Read parameter file */
ReadParam(np,aty,sig,eps);
// Choice of atom types of the two interacting atoms.
printf("\n");
printf("Enter the atomtype for particle 1: ");
scanf("%i",&ia1);
printf("Enter the atomtype for particle 2: ");
scanf("%i",&ia2);
printf("\n");
// Berthelot-Lorenz mixing
s=0.5*(sig[ia1]+sig[ia2])/10.;
e=kb*sqrt(eps[ia1]*eps[ia2]);
s6=pow(s,6);
C6=4*e*s6;
C12=4*e*s6*s6;
rmin=pow(2.,1./6.)*s;
// Output the results on the screen
printf("Mixed Lennard-Jones parameters:\n");
printf(" Epsilon: %f kJ/mol \n",e);
printf(" R min : %f nm \n",rmin);
printf(" Sigma : %f nm\n",s);
printf(" C6 : %f kJ*nm^6/mol \n",C6);
printf(" C12 : %f kJ*nm^12/mol \n\n",C12);
// Write the results on the file
WriteOutput(ia1,ia2,s,e,C12,C6,aty,sig,eps);
return 0;
}
```
## Thomas' Calculus 13th Edition
Published by Pearson
# Chapter 5: Integrals - Section 5.4 - The Fundamental Theorem of Calculus - Exercises 5.4 - Page 286: 2
20/3
#### Work Step by Step
Integrate with respect to x, and then plug in the limits: $=\frac{1}{3}x^3-x^2+3x$ with limits from -1 to 1 $=\frac{1}{3}(1)^3-1+3-\frac{1}{3}(-1)^3+(-1)^2-3(-1)=1/3+2+1/3+1+3=20/3$
# Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system
@article{Drozdov2015ConventionalSA,
title={Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system},
author={Alexander P. Drozdov and Mikhail I. Eremets and Ivan A. Troyan and Vadim Ksenofontov and Sergii I. Shylin},
journal={Nature},
year={2015},
volume={525},
pages={73-76}
}
A superconductor is a material that can conduct electricity without resistance below a superconducting transition temperature, Tc. The highest Tc that has been achieved to date is in the copper oxide system: 133 kelvin at ambient pressure and 164 kelvin at high pressures. As the nature of superconductivity in these materials is still not fully understood (they are not conventional superconductors), the prospects for achieving still higher transition temperatures by this route are not clear. In…
866 Citations
High temperature superconductivity in sulfur and selenium hydrides at high pressure
• Materials Science, Physics
• 2015
Abstract Due to its low atomic mass, hydrogen is the most promising element to search for high-temperature phononic superconductors. However, metallic phases of hydrogen are only expected at extreme…
Superconductivity at 250 K in lanthanum hydride under high pressures
A lanthanum hydride compound at a pressure of around 170 gigapascals is found to exhibit superconductivity with a critical temperature of 250 kelvin, the highest critical temperature that has been confirmed so far in a superconducting material.
Superconductivity well above room temperature in compressed MgH6
• Materials Science
• 2016
It has been suggested that hydrogen-rich systems at high pressure may exhibit notably high superconducting transition temperatures. One of the more interesting theoretical predictions was that…
First-principles study of superconducting hydrogen sulfide at pressure up to 500 GPa
• Materials Science, Medicine
• Scientific Reports
• 2017
Calculations conducted within the framework of the Eliashberg formalism indicate that H3S in the range of extremely high pressures is a conventional strong-coupling superconductor with a high superconducting critical temperature; however, the maximum critical temperature does not exceed the value of 203 K.
Unusual sulfur isotope effect and extremely high critical temperature in H3S superconductor
• Materials Science, Medicine
• Scientific Reports
• 2018
An anomalous sulfur-derived superconducting isotope effect, which, if observed experimentally, will be subsequent argument that proves to the classical electron-phonon interaction, and fact that critical temperature rise to extremely high value of 242 K for H336S at 155 GPa brings us closer to the room temperature superconductivity.
Quantum crystal structure in the 250-kelvin superconducting lanthanum hydride
Quantum atomic fluctuations have a crucial role in stabilizing the crystal structure of the high-pressure superconducting phase of lanthanum hydride and are crucial for the stabilization of solids with high electron–phonon coupling constants that could otherwise be destabilized by the large electron–phonon interaction, thus reducing the pressures required for their synthesis.
Spectroscopic evidence of a new energy scale for superconductivity in H3S.
A first optical spectroscopy study of H3S, a superconducting phase in sulfur hydride under high pressure with a critical temperature above 200 K, provides strong evidence of a conventional mechanism and an unusually strong optical phonon suggests a contribution of electronic degrees of freedom. Expand
A theoretical quest for high temperature superconductivity on the example of low-dimensional carbon structures
• Materials Science, Medicine
• Scientific Reports
• 2017
This work uses an ab-initio approach to optimize superconducting quasi-1D carbon structures and forms a new type of carbon ring that reaches a Tc value of 115 K.
High-temperature study of superconducting hydrogen and deuterium sulfide
• Materials Science, Physics
• 2015
Hydrogen-rich compounds are extensively explored as candidates for high-temperature superconductors. Currently, the measured critical temperature of 203 K in hydrogen sulfide (H3S) is among the…
Metallization and superconductivity in methane doped by beryllium at low pressure.
• Haiyan Lv, +4 authors Guohua Zhong
• Medicine, Materials Science
• Physical chemistry chemical physics : PCCP
• 2019
The result shows that the thermodynamically stable BeCH4 with P$\bar{1}$ space group can transform into a metal at ambient pressure, and indicates that doped methane is a potential candidate for seeking high-temperature, low-pressure superconductivity.
#### References
Conventional superconductivity at 190 K at high pressures
• Materials Science, Physics
• 2014
The highest critical temperature of superconductivity Tc has been achieved in cuprates: 133 K at ambient pressure and 164 K at high pressures. As the nature of superconductivity in these materials is…
High temperature superconductivity in sulfur and selenium hydrides at high pressure
• Materials Science, Physics
• 2015
Due to its low atomic mass, hydrogen is the most promising element to search for high-temperature phononic superconductors. However, metallic phases of hydrogen are only expected at extreme…
Superconductivity at 39 K in magnesium diboride
• Chemistry, Medicine
• Nature
• 2001
In the light of the tremendous progress that has been made in raising the transition temperature of the copper oxide superconductors (for a review, see ref. 1), it is natural to wonder how high the…
Conductive dense hydrogen.
• Chemistry, Materials Science
• Nature materials
• 2011
A significant hysteresis indicates that the transformation of molecular hydrogen into a metal is accompanied by a first-order structural transition, presumably into a monatomic liquid state.
What superconducts in sulfur hydrides under pressure and why
• Materials Science, Physics
• 2015
The recent discovery of superconductivity at 190 K in highly compressed H$_{2}$S is spectacular not only because it sets a record high critical temperature, but because it does so in a material that…
Cubic H 3 S around 200 GPa: An atomic hydrogen superconductor stabilized by sulfur
• Physics
• 2015
The multiple scattering-based theory of Gaspari and Gyorffy for the electron-ion matrix element in close-packed metals is applied to Im-3m H3S, which has been predicted by Duan et al. and…
Superconductivity in Hydrogen Dominant Materials: Silane
• Chemistry, Medicine
• Science
• 2008
The transformation of insulating molecular silane to a metal at 50 GPa, becoming superconducting at a transition temperature of Tc = 17 kelvin at 96 and 120 GPa, supports the idea of modeling metallic hydrogen with hydrogen-rich alloys.
High-pressure hydrogen sulfide from first principles: a strongly anharmonic phonon-mediated superconductor.
• I. Errea, +7 authors F. Mauri
• Materials Science, Physics
• Physical review letters
• 2015
First-principles calculations are used to study structural, vibrational, and superconducting properties of H2S and H3S, and show that high-pressure hydrogen sulfide is a strongly anharmonic superconductor.
Superconductivity above 130 K in the Hg–Ba–Ca–Cu–O system
• Chemistry
• Nature
• 1993
The recent discovery[1] of superconductivity below a transition temperature (Tc) of 94 K in HgBa2CuO4+δ has extended the repertoire of high-Tc superconductors containing copper oxide planes embedded in…
Hydrogen dominant metallic alloys: high temperature superconductors?
• N. Ashcroft
• Materials Science, Medicine
• Physical review letters
• 2004
The arguments suggesting that metallic hydrogen, either as a monatomic or paired metal, should be a candidate for high-temperature superconductivity are shown to apply with comparable weight to…
• CommentRowNumber1.
• CommentAuthorTodd_Trimble
• CommentTimeApr 20th 2014
• (edited Apr 20th 2014)
It would be nice to finish the description of the theorem at GNS construction, if someone has the head for doing that. :-)
• CommentRowNumber2.
• CommentAuthordavidoslive
• CommentTimeApr 21st 2014
Just in case someone beats me to it, can I suggest that the entry include the C*-categories version of the theorem? This obviously includes the C*-algebra version as a special case.
• CommentRowNumber3.
• CommentAuthorTodd_Trimble
• CommentTimeApr 22nd 2014
@davidoslive: Please feel free to edit! :-)
• CommentRowNumber4.
• CommentAuthorzskoda
• CommentTimeApr 22nd 2014
• (edited Apr 22nd 2014)
It is better to have BOTH versions (with the easier version first), as many readers will not comprehend the horizontally categorified version. Logical inclusion does not imply expositional inclusion.
• CommentRowNumber5.
• CommentAuthordavidoslive
• CommentTimeApr 25th 2014
The skeleton of a proper entry is up (done from my phone). I'll tidy it up as soon as I get to a computer.
• CommentRowNumber6.
• CommentAuthorUrs
• CommentTimeDec 3rd 2017
• (edited Dec 3rd 2017)
I have given GNS construction an Idea-section and a bunch of references, amplifying also the generalization from $C^\ast$-algebras to general unital star-algebras.
Also I renamed the section “From the nPOV” to “For C-star categories”, since the statement there is a horizontal categorification, but in itself does not offer any category-theoretic perspective on the construction.
An actual nPOV is proposed in Parzygnat 16, but besides adding this reference to the entry, I haven’t added any details on this yet.
• CommentRowNumber7.
• CommentAuthorUrs
• CommentTimeDec 9th 2017
• (edited Dec 9th 2017)
An actual nPOV is proposed in Parzygnat 16,
also Jacobs 10
(which Alexander Schenkel tells me serves to make all the universal AQFT constructions in Benini-Schenkel-Woike 17, surveyed in Schenkel 17b, generalize to star-algebras)
• CommentRowNumber8.
• CommentAuthorDavidRoberts
• CommentTimeDec 9th 2017
• (edited Dec 10th 2017)
That makes me think about whether it is interesting to consider the generalisation of correspondences of $C^\ast$-algebras (i.e. a kind of directional Hilbert bimodule) to more general $\ast$-algebras. There are versions of the Eilenberg-Watts theorem for representation categories of $C^\ast$-algebras, but it's not immediately straightforward, and even some of the notions bifurcate, depending on analytic considerations. See for instance this answer to a recent MO question of mine.
• CommentRowNumber9.
• CommentAuthorTodd_Trimble
• CommentTimeDec 9th 2017
• CommentRowNumber10.
• CommentAuthorDavidRoberts
• CommentTimeDec 10th 2017
Thanks, Todd. |
anonymous 5 years ago Given $y^{2} = x^{2}(x+7)$, find an equation for the tangent line at (-3, 6) and an equation for the normal line at (-3, 6). What's the difference? How do I find each? Confused. Help?
1. anonymous
The tangent line is the line that touches the curve at the point; the normal line is the line perpendicular to the tangent there. To find the tangent line, differentiate the function to find its slope; the slope of the normal is the opposite reciprocal of the tangent's slope.
2. anonymous
Can you help me solve the problem? I tried to work it out, but it looked complicated. :/
3. anonymous
The equation for a tangent line is $m_{tan}(x-x_1)=y-y_1$; the equation for a normal line is $-(1/m_{tan})(x-x_1)=y-y_1$, where $m_{tan}$ is the derivative at the point, since the tangent touches the curve only once there.
4. anonymous
$y=\sqrt{x ^{2}(x+7)}$ Differentiate using the chain rule and product rule: let $u = x^2(x+7)$, so $u' = 2x(x+7) + x^2$, and the derivative of $\sqrt{u}$ is $u'/(2\sqrt{u})$, giving $y' = [x ^{2}+2x(x+7)]/2\sqrt{x ^{2}(x+7)}$
5. anonymous
The slope of the normal is the opposite reciprocal of $y'$: $slopeNormal =-2\sqrt{x ^{2}(x+7)}/[x ^{2}+2x(x+7)]$
6. anonymous
Plug in $x=-3$: $y' = -5/4$, so the normal slope is $4/5$.
Tangent: $6 = (-5/4)(-3) + b$, so $b = 9/4$ and $y = -\frac{5}{4}x + \frac{9}{4}$.
Normal: $6 = (4/5)(-3) + b$, so $b = 42/5$ and $y = \frac{4}{5}x + \frac{42}{5}$.
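A quick way to sanity-check the slopes above is implicit differentiation, which avoids the square root entirely. A minimal sketch with sympy (the variable names are arbitrary):

```python
import sympy as sp

# Implicit differentiation of F(x, y) = y^2 - x^2 (x + 7) = 0:
# dy/dx = -F_x / F_y.
x, y = sp.symbols("x y")
F = y**2 - x**2 * (x + 7)
m_tan = (-sp.diff(F, x) / sp.diff(F, y)).subs({x: -3, y: 6})
print(m_tan, -1 / m_tan)   # -5/4 (tangent) and 4/5 (normal)
```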
# All Questions
293 views
### Is there a length-preserving encryption scheme?
Is there a length-preserving encryption scheme, i.e. one in which the output ciphertext has the same length as the input plaintext?
658 views
### Do known-plaintext attacks exist for public key encryption?
In asymmetric ciphers we publish the public key for anyone, which means an attacker can encrypt any message they want and compare the ciphertext and plaintext without communicating with the owner of ...
545 views
### Can a shift cipher attain perfect secrecy?
A practice question for my intro cryptography exam asks the following: Assuming that keys are chosen with equal likelihood, the shift cipher provides: A) computational security ...
1k views
### Is it safer to encrypt twice with RSA?
I wonder if it's safer to encrypt a plain text with RSA twice than it is to encrypt it just once. It should make a big difference if you assume that the two private keys are different, and that the ...
1k views
### How can I use Euler's totient and the Chinese remainder theorem for modular exponentiation?
I'm trying to implement modular exponentiation in Java using Lagrange and the Chinese remainder theorem. The example we've been given is: Let $N = 55 = 5 · 11$ and suppose we want to compute ...
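The idea in that example fits in a few lines; below is a minimal Python sketch (not the asker's Java coursework) assuming the factorisation $N = 55 = 5 · 11$ from the question, a made-up base and exponent, and a base coprime to $p$ and $q$ so the Fermat reduction applies:

```python
# CRT + Fermat sketch: compute x^e mod N from residues mod p and q.
p, q = 5, 11                      # N = 55 = 5 * 11, from the question
N = p * q
x, e = 7, 27                      # made-up base and exponent

rp = pow(x % p, e % (p - 1), p)   # Fermat: x^(p-1) = 1 (mod p) when gcd(x, p) = 1
rq = pow(x % q, e % (q - 1), q)

# Recombine with the CRT (pow(p, -1, q) needs Python 3.8+).
r = (rp + p * ((rq - rp) * pow(p, -1, q) % q)) % N
print(r, pow(x, e, N))            # both print 28
```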
620 views
### Why does RSA need the modulus to be a product of 2 primes?
I think I roughly understand how the RSA alorithm is working. However, I don't understand why we need the $N$, which we use as a modulus, to be $pq$ for some large primes $p, q$. I vaguely know it ...
266 views
### Increase number of rounds for SPN and Feistel ciphers
I read a post on Schneier's blog (and again in 2011) about increasing the number of rounds for AES to "AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds" to raise the security. ...
111 views
### When is each key used when encrypting an email using OpenPGP?
When you send an email using PGP to encrypt it, is the recipient's public key used to encrypt the email, or is your private key used? Are they both used? At what points do each of the four keys ...
345 views
### What is the strength of unpadded RSA?
I would like to use unpadded RSA for homomorphic encryption in a toy P2P game, for things like fair coin flips and shuffling. How many bits of security does unpadded RSA have, in relation to its key ...
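The property such a game would rely on is multiplicative homomorphism, which is easy to demonstrate with a toy sketch (insecure, made-up textbook primes; none of these numbers come from the question itself):

```python
# Unpadded RSA is multiplicatively homomorphic: Enc(a) * Enc(b) = Enc(a*b) mod N.
p, q, e = 61, 53, 17
N = p * q                               # 3233 -- far too small for real use
d = pow(e, -1, (p - 1) * (q - 1))       # private exponent

a, b = 42, 19
ca, cb = pow(a, e, N), pow(b, e, N)
print(pow(ca * cb, d, N), (a * b) % N)  # both print 798
```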
459 views
### Can I combine two SHA-3 candidate hash functions to obtain a more secure algorithm?
For example, is it possible to combine (concatenate, chain or XOR) the Skein SHA-3 candidate with the Grostl SHA-3 candidate to increase security? Note: I just want more secure output, and CPU cycles do not ...
196 views
### AES encryption - the relevance of the static matrix in the MixColumns operation
Can someone explain the relevance of the static matrix used for the MixColumns operation in AES encryption, i.e. why a byte is multiplied by 2, the next byte by 3, the next ...
164 views
### What are the dangers of predictable (repeated) plaintext structures?
When using a "good", modern cipher (specifically one that provides ciphertext indistinguishability), is it a problem at all if there is some well-known structure in all plaintexts? For example, ...
145 views
### Is it possible to ensure security with zero pre-shared information?
Is it possible to secure a communications channel against both passive (sniffing) and active (injecting / MitM) attackers without either legitimate party knowing any pre-shared information? I know ...
115 views
### KDF with low-entropy salts
I need to derive a key from a username and a password. These are the only two things I have access to. What I thought is using PBKDF2 with username as the salt and password as the master password. ...
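A minimal sketch of the scheme described above, using Python's standard hashlib (the username, password and iteration count are placeholder values; a username is a low-entropy salt, so it buys per-user uniqueness, not secrecy):

```python
import hashlib

username = b"alice"                       # the salt: unique per user, but guessable
password = b"correct horse battery staple"

# PBKDF2-HMAC-SHA256; the iteration count is a made-up but plausible value.
key = hashlib.pbkdf2_hmac("sha256", password, username, 600_000, dklen=32)
print(key.hex())
```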
394 views
### Poor man's SSL - is this method as safe as SSL/TLS?
I need to send data between two applications. I've got a requirement that says the data should be transmitted using a secure protocol such as SSL/TLS. Data is sent using TCP sockets and I don't have ...
2k views
### Digital Signatures, Standard Hash Functions and MACs
I'm studying Hash functions and Digital Signatures in sequence, and came up with some doubts about their usage. First of all: What is the difference between hashing a document and signing it? And ...
2k views
### RSA Proof of Correctness
Can anyone provide an extended (and well explained) proof of correctness of the RSA Algorithm? And why is it needed? I can't say that this or this helped me much, I'd like a more detailed and newbie ...
175 views
### How hard is it to find the operands of an addition knowing their sum?
I want to know whether or not there is a cryptographic primitive, scheme or assumption that is based on the following problem, if it is indeed hard. By hard we mean against a polynomial adversary: The ...
194 views
### Using compression to test encryption
Is the uncompressibility of encrypted data a necessary property for a good cryptographic algorithm? To make a crude experiment, I encrypted an 8K file with all 'A' and then compressed both the ...
306 views
### How do I demonstrate that a PRNG not designed for cryptography is not suitable for generating passwords?
This is a replication of this question on Stack Overflow. There's a class Random in the .NET runtime which is designed for use as a cheap, fast source of pseudo-random ...
176 views
### Can the premaster secret generated by SRP be used as a secure private key?
It seems like the pre-master secret generated during the SRP protocol would make a good source to generate a shared private key using a secure hash to compress it down into a 128/256 symmetric key. ...
219 views
### Cryptographically strong pseudo-random seq. generators
Suppose a pseudo-random sequence generator uses an n-bit seed and outputs a string of length x. Let's say one wants to generate a bit string of length y and uses the previous ...
337 views
### How to generate successive stream-cipher keys?
I've identified a weakness in a distributed simulation system I'm looking at, and I'm looking for some advice on how to fix it. Clients initially negotiate an authentication token with a login server ...
237 views
### Seeking special-use fingerprinting/hashing algorithm
For a project I wonder if there exists some kind of fixed-size checksumming/fingerprinting function such that, given data block 1 and its fingerprint, it is easy to generate more data blocks that ...
711 views
### Finding CRC collisions for specific divisor
My current textbook (Information Security: Principles and Practice by Mark Stamp) discusses how to determine the CRC of data via long-division, using XOR instead of subtraction to determine the ...
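The long-division-with-XOR procedure that textbook describes is short enough to sketch directly; the message and generator below are the common classroom example, not anything specific to the book in question:

```python
# CRC as binary long division, using XOR instead of subtraction.
def crc_remainder(data_bits: str, divisor_bits: str) -> str:
    n = len(divisor_bits) - 1
    bits = list(data_bits + "0" * n)              # append n zero bits to the message
    for i in range(len(data_bits)):
        if bits[i] == "1":                        # XOR the divisor in wherever the
            for j, d in enumerate(divisor_bits):  # current leading bit is 1
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return "".join(bits[-n:])                     # the remainder is the CRC

print(crc_remainder("11010011101100", "1011"))    # prints 100
```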
146 views
### How can I repeatedly prove I have data another has seen without sending the data and without the other storing the data?
I would like to know if this is theoretically possible, or impossible, and if possible, if there is any algorithm/protocol to accomplish this... I want another entity, let's call them the Auditor, to ...
357 views
### What algorithm does PGP use to encrypt email?
I know it uses RSA/DSA to create keys, but does it use that same algorithm for the actual cipher?
135 views
### How to collect, process, and transmit data securely?
In my question "Authenticating data generated by a particular build of an open source program", Dave Cary requested that I post a question stating my real problem on a high level rather than the ...
201 views
### Do Export Restrictions Still Apply To The Key Length of RC4?
I've just read a paper from 2004 which stated that the RC4 encryption algorithm was restricted to a 40 bit key size when exported from the USA; however the reference for this information (Applied ...
145 views
### Why is 224 bit ecdsa faster than 192 bit ecdsa?
I ran several benchmarks using openssl on 2 different computers and I got a surprising result. For the NIST 192-bit curve the benchmark result is ...
120 views
### Feasibility of finding public key when private key is known
We all know that in a public key cryptosystem, given a public key it is extremely hard to compute private key from it. Is it the same case in reverse? Given a private key, how easy is it to compute ...
193 views
### How can AES be considered secure when encrypting large files?
Why is AES considered to be secure when encrypting large files since the algorithm is a block cipher? I mean, if the file is larger than the block size, the file will be broken down to fit the ...
335 views
### Entropy of the key
Suppose a $1000$-bit key used in the one-time pad is not randomly and uniformly generated. Suppose that the values of the first $5$ bits are $0$, and the other $995$ bits are randomly generated and ...
136 views
### Why are some key stretching methods better than others?
I'm trying to understand why some key stretching methods are better than others. The Wikipedia article presents 3 different key-stretching methods: a collision-prone simple key stretching ...
103 views
### How to only encrypt a subset of the plaintext
I was wondering if there is a smart way for a user to only encrypt a subset of a plaintext. I'll try to be more specific. Let's suppose the user U wants to use a special cipher such that given a ...
281 views
### Revealing random bit permutation
I am new to cryptography. I want to determine the complexity of revealing a random bit permutation which is used as a block cipher for plaintexts (bitstrings of length n). An adversary catches different ...
105 views
### Can a EC private key be derived from a public key?
I understand that the public key does not expose the private key. That is not the question. The question is: given an EC public key, can a different, but plausible and functional private key be ...
126 views
### How to calculate complexity of ElGamal cryptosystem?
How do I calculate the time and space complexity of ElGamal encryption and decryption, given that there are two exponentiation operations during encryption and one during decryption? Here is my code for encryption: ...
140 views
### What is the entropy per Diceware word if a random symbol is inserted into a random position in the word?
On the Diceware page is this little gem: For extra security without adding another word, insert one special character or digit chosen at random into your passphrase... Inserting a letter at ...
167 views
### Does CCA security imply authenticated encryption?
Does CCA security imply authenticated encryption?
128 views
### What exactly does a key do?
I am getting to grips with cryptography as a total newbie, and am struggling with encryption "keys" and how to visualize them. From http://computer.howstuffworks.com/encryption.htm/printable: ...
100 views
### Why are there only positive value points on an elliptic curve?
I read about elliptic curve cryptography $E$ over $Z_p$ where $p$ is a prime number and $G$ is a base point on the curve. I noticed the points resulting from multiplication, e.g. $2G, 3G, \ldots, (N-1)G$ ...
118 views
### Pick faster private exponent
I recently tried to send 1536-bit modulus CSR to COMODO. They refused to sign the certificate. I later found out that it's because NIST mandated 2048-bit modulus on the SSL certificate. I think it's ...
88 views
### One Time Pad for large changing files
I wondered what the security implications are if I do the following: if I had a large file encrypted with an OTP and wanted to change only a few bytes in the plaintext, what security vulnerabilities do I ...
113 views
### How can I perform matching on an “encrypted- fingerprint feature matrix” using Fully Homomorphic Encryption?
I am doing a finger-print authentication process. The feature-extraction using minutiae has been done and I get an N x 6 matrix, where the 6 columns are {$x_i$ ...
139 views
### Is AES-256 over AES-128 weakening the original encryption?
When transferring data using TLS, the browser and server agree on the cipher suite to be used - so, for example, this could be chosen as AES-128 and is (probably) outside of my control. If I separately ...
116 views
### Advantages to knowing $p$ and $q$ in Blum Blum Shub?
Do you gain any advantage by knowing the factorization of $M$ (over just knowing $M$ itself) in the Blum Blum Shub generator? The only advantage I see is being able to calculate the $i$-th number ...
97 views
### Computing youngest person among 3 while keeping ages private
I already found a protocol to find out who is richer (older) between two parties, but is there any protocol to find the youngest person among 3 parties, without revealing actual ages?
# LESS and SASS source code formatting
I need to add a piece of source code in LESS and SASS. The code is divided into two columns: the first column is source code in LESS and the second column in SASS, something like on this page, but without borders.
I know about verbatim, but I would like to know if there is something better to save more space (because of the two columns) and to get LESS and SASS syntax highlighting (I don't need color; bold and italic are enough).
Since you want automatic syntax highlighting, I'd suggest using either listings or minted. Below I present two options using the former.
Here's a first option; the idea is to use the listings package and two side-by-side minipages; two environments, sass and less, are defined using \lstnewenvironment; since the languages are not predefined, I gave simple dummy definitions for the example:
The code:
\documentclass{article}
\usepackage{listings}
\usepackage{etoolbox}
\usepackage{bera}
% Definitions for the SASS language
\lstdefinelanguage{sass}
{
morekeywords={border,solid},
}
% Definitions for the LESS language
\lstdefinelanguage{less}
{
morekeywords={bordered,black},
}
% Common settings
\lstset{
basicstyle=\small\ttfamily,
columns=fullflexible,
keywordstyle=\bfseries
}
% Definition of the main environments
\lstnewenvironment{sass}[1][]
{\lstset{language=sass,linewidth=\linewidth,#1}}
{}
% Definition of the main environments
\lstnewenvironment{less}[1][]
{\lstset{language=less,linewidth=\linewidth,#1}}
{}
\BeforeBeginEnvironment{sass}
{\par\noindent\begin{minipage}{.5\linewidth}SASS}
\AfterEndEnvironment{sass}{\end{minipage}}
\BeforeBeginEnvironment{less}
{\begin{minipage}{.5\linewidth}LESS}
\AfterEndEnvironment{less}{\end{minipage}}
\begin{document}
\begin{sass}
@mixin bordered($width: 2px) {
  border: $width solid black;
}
@include bordered(4px);
\end{sass}%
\begin{less}
.bordered(@width: 2px) {
  border: @width solid black;
}
.bordered(4px);
\end{less}
\end{document}
Here's another option using the powerful tcolorbox package and its interaction with listings:
The code:
\documentclass{article}
\usepackage[many]{tcolorbox}
\usepackage{filecontents}
\usepackage{bera}
\tcbuselibrary{listings}
% Definitions for the SASS language
\lstdefinelanguage{sass}
{
morekeywords={border},
}
% Definitions for the LESS language
\lstdefinelanguage{less}
{
morekeywords={bordered},
}
% Common settings
\lstset{
basicstyle=\small\ttfamily,
columns=fullflexible,
keywordstyle=\bfseries
}
% Just a snippet of LESS code for the example
\begin{filecontents*}{lessi.cd}
.bordered(@width: 2px) {
  border: @width solid black;
}
.bordered(4px);
\end{filecontents*}
% Definition of the main environment
\newtcblisting{lesssass}[2][]{
enhanced,
boxrule=0pt,
arc=0pt,
top=10pt,
listing options={language=sass},
colback=gray!5,
colframe=gray,
listing side comment,
comment={#2},
overlay={
\node[anchor=north west,inner ysep=4pt] at (frame.north west) (sa) {SASS};
\node[anchor=north west,inner ysep=4pt] at (frame.north) (le) {LESS};
\draw[gray,line width=0.5pt]
(frame.north west|-sa.south) -- (frame.north east|-sa.south);
},
#1
}
\begin{document}
\begin{lesssass}{\lstinputlisting[language=less]{lessi.cd}}
@mixin bordered($width: 2px) {
  border: $width solid black;
}
@include bordered(4px);
\end{lesssass}
\end{document}
## Remarks
• Since LESS and SASS are not predefined languages in listings, you'll need to provide the language definitions using \lstdefinelanguage; in my example code I used two simple definitions just for the example:
% Definitions for the SASS language
\lstdefinelanguage{sass}
{
morekeywords={border},
}
% Definitions for the LESS language
\lstdefinelanguage{less}
{
morekeywords={bordered},
}
• In my solution, the SASS code is typeset directly in your document, inside the main environment; the LESS code has to be stored in an external file (which I simulated in my example using filecontents) and will be input using \lstinputlisting (see the example code).
• The main environment is lesssass; the content of the environment is the SASS code (which will be typeset to the left of the box); using the mandatory argument and \lstinputlisting, you can write the LESS code (to be typeset to the right of the box). For example, the document was produced using
\begin{lesssass}{\lstinputlisting[language=less]{lessi.cd}}
@mixin bordered($width: 2px) {
  border: $width solid black;
}
@include bordered(4px);
\end{lesssass}
using the following file lessi.cd:
.bordered(@width: 2px) {
  border: @width solid black;
}
.bordered(4px);
# Chemical Sciences: A Manual for CSIR-UGC National Eligibility Test for Lectureship and JRF/X-ray crystallography
X-ray crystallography can locate every atom in a zeolite, an aluminosilicate with many important applications, such as water purification.
X-ray crystallography is a method of determining the arrangement of atoms within a crystal, in which a beam of X-rays strikes a crystal and diffracts into many specific directions. From the angles and intensities of these diffracted beams, a crystallographer can produce a three-dimensional picture of the density of electrons within the crystal. From this electron density, the mean positions of the atoms in the crystal can be determined, as well as their chemical bonds, their disorder and various other information.
Since many materials can form crystals — such as salts, metals, minerals, semiconductors, as well as various inorganic, organic and biological molecules — X-ray crystallography has been fundamental in the development of many scientific fields. In its first decades of use, this method determined the size of atoms, the lengths and types of chemical bonds, and the atomic-scale differences among various materials, especially minerals and alloys. The method also revealed the structure and functioning of many biological molecules, including vitamins, drugs, proteins and nucleic acids such as DNA. X-ray crystallography is still the chief method for characterizing the atomic structure of new materials and in discerning materials that appear similar by other experiments. X-ray crystal structures can also account for unusual electronic or elastic properties of a material, shed light on chemical interactions and processes, or serve as the basis for designing pharmaceuticals against diseases.
In an X-ray diffraction measurement, a crystal is mounted on a goniometer and gradually rotated while being bombarded with X-rays, producing a diffraction pattern of regularly spaced spots known as reflections. The two-dimensional images taken at different rotations are converted into a three-dimensional model of the density of electrons within the crystal using the mathematical method of Fourier transforms, combined with chemical data known for the sample. Poor resolution (fuzziness) or even errors may result if the crystals are too small, or not uniform enough in their internal makeup.
X-ray crystallography is related to several other methods for determining atomic structures. Similar diffraction patterns can be produced by scattering electrons or neutrons, which are likewise interpreted as a Fourier transform. If single crystals of sufficient size cannot be obtained, various other X-ray methods can be applied to obtain less detailed information; such methods include fiber diffraction, powder diffraction and small-angle X-ray scattering (SAXS). In all these methods, the scattering is elastic; the scattered X-rays have the same wavelength as the incoming X-ray. By contrast, inelastic X-ray scattering methods are useful in studying excitations of the sample, rather than the distribution of its atoms.
## History
### Early scientific history of crystals and X-rays
Drawing of square (Figure A, above) and hexagonal (Figure B, below) packing from Kepler's work, Strena seu de Nive Sexangula.
Crystals have long been admired for their regularity and symmetry, but they were not investigated scientifically until the 17th century. Johannes Kepler hypothesized in his work Strena seu de Nive Sexangula (1611) that the hexagonal symmetry of snowflake crystals was due to a regular packing of spherical water particles.[1]
As shown by X-ray crystallography, the hexagonal symmetry of snowflakes results from the tetrahedral arrangement of hydrogen bonds about each water molecule. The water molecules are arranged similarly to the silicon atoms in the tridymite polymorph of SiO2. The resulting crystal structure has hexagonal symmetry when viewed along a principal axis.
Crystal symmetry was first investigated experimentally by Nicolas Steno (1669), who showed that the angles between the faces are the same in every exemplar of a particular type of crystal,[2] and by René Just Haüy (1784), who discovered that every face of a crystal can be described by simple stacking patterns of blocks of the same shape and size. Hence, William Hallowes Miller in 1839 was able to give each face a unique label of three small integers, the Miller indices which are still used today for identifying crystal faces. Haüy's study led to the correct idea that crystals are a regular three-dimensional array (a Bravais lattice) of atoms and molecules; a single unit cell is repeated indefinitely along three principal directions that are not necessarily perpendicular. In the 19th century, a complete catalog of the possible symmetries of a crystal was worked out by Johann Hessel,[3] Auguste Bravais,[4] Yevgraf Fyodorov,[5] Arthur Schönflies[6] and (belatedly) William Barlow. From the available data and physical reasoning, Barlow proposed several crystal structures in the 1880s that were validated later by X-ray crystallography;[7] however, the available data were too scarce in the 1880s to accept his models as conclusive.
X-ray crystallography shows the arrangement of water molecules in ice, revealing the hydrogen bonds that hold the solid together. Few other methods can determine the structure of matter with such sub-atomic precision (resolution).
X-rays were discovered by Wilhelm Conrad Röntgen in 1895, just as the studies of crystal symmetry were being concluded. Physicists were initially uncertain of the nature of X-rays, although it was soon suspected (correctly) that they were waves of electromagnetic radiation, in other words, another form of light. At that time, the wave model of light — specifically, the Maxwell theory of electromagnetic radiation — was well accepted among scientists, and experiments by Charles Glover Barkla showed that X-rays exhibited phenomena associated with electromagnetic waves, including transverse polarization and spectral lines akin to those observed in the visible wavelengths. Single-slit experiments in the laboratory of Arnold Sommerfeld suggested the wavelength of X-rays was about 1 Angström. However, X-rays are composed of photons, and thus are not only waves of electromagnetic radiation but also exhibit particle-like properties. The photon concept was introduced by Albert Einstein in 1905,[8] but it was not broadly accepted until 1922,[9][10] when Arthur Compton confirmed it by the scattering of X-rays from electrons.[11] Therefore, these particle-like properties of X-rays, such as their ionization of gases, caused William Henry Bragg to argue in 1907 that X-rays were not electromagnetic radiation.[12][13][14][15] Nevertheless, Bragg's view was not broadly accepted and the observation of X-ray diffraction in 1912[16] confirmed for most scientists that X-rays were a form of electromagnetic radiation.
### X-ray analysis of crystals
The incoming beam (coming from upper left) causes each scatterer to re-radiate a small portion of its intensity as a spherical wave. If scatterers are arranged symmetrically with a separation d, these spherical waves will be in synch (add constructively) only in directions where their path-length difference 2d sin θ equals an integer multiple of the wavelength λ. In that case, part of the incoming beam is deflected by an angle 2θ, producing a reflection spot in the diffraction pattern.
Crystals are regular arrays of atoms, and X-rays can be considered waves of electromagnetic radiation. Atoms scatter X-ray waves, primarily through the atoms' electrons. Just as an ocean wave striking a lighthouse produces secondary circular waves emanating from the lighthouse, so an X-ray striking an electron produces secondary spherical waves emanating from the electron. This phenomenon is known as elastic scattering, and the electron (or lighthouse) is known as the scatterer. A regular array of scatterers produces a regular array of spherical waves. Although these waves cancel one another out in most directions through destructive interference, they add constructively in a few specific directions, determined by Bragg's law:
$2 d \sin \theta = n \lambda$
Here d is the spacing between diffracting planes, $\theta$ is the incident angle, n is any integer, and λ is the wavelength of the beam. These specific directions appear as spots on the diffraction pattern called reflections. Thus, X-ray diffraction results from an electromagnetic wave (the X-ray) impinging on a regular array of scatterers (the repeating arrangement of atoms within the crystal).
X-rays are used to produce the diffraction pattern because their wavelength λ is typically the same order of magnitude (1-100 Ångströms) as the spacing d between planes in the crystal. In principle, any wave impinging on a regular array of scatterers produces diffraction, as predicted first by Francesco Maria Grimaldi in 1665. To produce significant diffraction, the spacing between the scatterers and the wavelength of the impinging wave should be similar in size. For illustration, the diffraction of sunlight through a bird's feather was first reported by James Gregory in the later 17th century. The first artificial diffraction gratings for visible light were constructed by David Rittenhouse in 1787, and Joseph von Fraunhofer in 1821. However, visible light has too long a wavelength (typically, 5500 Ångströms) to observe diffraction from crystals. Prior to the first X-ray diffraction experiments, the spacings between lattice planes in a crystal were not known with certainty.
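As a quick numerical illustration of Bragg's law, here is a minimal sketch assuming Cu Kα radiation (λ ≈ 1.54 Å) and the roughly 2.82 Å spacing of the (200) planes of rock salt; the numbers are illustrative, not from the text above:

```python
import math

wavelength = 1.54   # angstroms, roughly Cu K-alpha
d = 2.82            # angstroms, roughly the (200) spacing of NaCl

# Bragg's law: 2 d sin(theta) = n * lambda, so sin(theta) = n * lambda / (2 d).
for n in (1, 2, 3):
    s = n * wavelength / (2 * d)
    if s <= 1:      # a reflection of order n exists only if sin(theta) <= 1
        theta = math.degrees(math.asin(s))
        print(f"n={n}: theta = {theta:.1f} deg, beam deflected by {2*theta:.1f} deg")
    else:
        print(f"n={n}: no reflection (sin(theta) = {s:.2f} > 1)")
```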
The idea that crystals could be used as a diffraction grating for X-rays arose in 1912 in a conversation between Paul Peter Ewald and Max von Laue in the English Garden in Munich. Ewald had proposed a resonator model of crystals for his thesis, but this model could not be validated using visible light, since the wavelength was much larger than the spacing between the resonators. Von Laue realized that electromagnetic radiation of a shorter wavelength was needed to observe such small spacings, and suggested that X-rays might have a wavelength comparable to the unit-cell spacing in crystals. Von Laue worked with two technicians, Walter Friedrich and his assistant Paul Knipping, to shine a beam of X-rays through a copper sulfate crystal and record its diffraction on a photographic plate. After being developed, the plate showed a large number of well-defined spots arranged in a pattern of intersecting circles around the spot produced by the central beam.[16][17] Von Laue developed a law that connects the scattering angles and the size and orientation of the unit-cell spacings in the crystal, for which he was awarded the Nobel Prize in Physics in 1914.[18]
As described in the mathematical derivation below, the X-ray scattering is determined by the density of electrons within the crystal. Since the energy of an X-ray is much greater than that of a valence electron, the scattering may be modeled as Thomson scattering, the interaction of an electromagnetic ray with a free electron. This model is generally adopted to describe the polarization of the scattered radiation. The intensity of Thomson scattering declines as 1/m² with the mass m of the charged particle that is scattering the radiation; hence, the atomic nuclei, which are thousands of times heavier than an electron, contribute negligibly to the scattered X-rays.
### Development from 1912 to 1920
Although diamonds (top left) and graphite (top right) are identical in chemical composition — being both pure carbon — X-ray crystallography revealed the arrangement of their atoms (bottom) accounts for their different properties. In diamond, the carbon atoms are arranged tetrahedrally and held together by single covalent bonds, making it strong in all directions. By contrast, graphite is composed of stacked sheets. Within the sheet, the bonding is covalent and has hexagonal symmetry, but there are no covalent bonds between the sheets, making graphite easy to cleave into flakes.
After Von Laue's pioneering research, the field developed rapidly, most notably by physicists William Lawrence Bragg and his father William Henry Bragg. In 1912-1913, the younger Bragg developed Bragg's law, which connects the observed scattering with reflections from evenly spaced planes within the crystal.[19][20][21] The earliest structures were generally simple and marked by one-dimensional symmetry. However, as computational and experimental methods improved over the next decades, it became feasible to deduce reliable atomic positions for more complicated two- and three-dimensional arrangements of atoms in the unit-cell.
The potential of X-ray crystallography for determining the structure of molecules and minerals — then only known vaguely from chemical and hydrodynamic experiments — was realized immediately. The earliest structures were simple inorganic crystals and minerals, but even these revealed fundamental laws of physics and chemistry. The first atomic-resolution structure to be "solved" (i.e. determined) in 1914 was that of table salt.[22][23][24] The distribution of electrons in the table-salt structure showed that crystals are not necessarily composed of covalently bonded molecules, and proved the existence of ionic compounds.[25] The structure of diamond was solved in the same year,[26][27] proving the tetrahedral arrangement of its chemical bonds and showing that the length of the C–C single bond was 1.52 Ångströms. Other early structures included copper,[28] calcium fluoride (CaF2, also known as fluorite), calcite (CaCO3) and pyrite (FeS2)[29] in 1914; spinel (MgAl2O4) in 1915;[30][31] the rutile and anatase forms of titanium dioxide (TiO2) in 1916;[32] pyrochroite Mn(OH)2 and, by extension, brucite Mg(OH)2 in 1919.[33][34] Also in 1919, sodium nitrate (NaNO3) and cesium dichloroiodide (CsICl2) were determined by Ralph Walter Graystone Wyckoff, and the wurtzite (hexagonal ZnS) structure became known in 1920.[35]
The structure of graphite was solved in 1916[36] by the related method of powder diffraction,[37] which was developed by Peter Debye and Paul Scherrer and, independently, by Albert Hull in 1917.[38] The structure of graphite was determined from single-crystal diffraction in 1924 by two groups independently.[39][40] Hull also used the powder method to determine the structures of various metals, such as iron[41] and magnesium.[42]
## Contributions to chemistry and material science
X-ray crystallography has led to a better understanding of chemical bonds and non-covalent interactions. The initial studies revealed the typical radii of atoms, and confirmed many theoretical models of chemical bonding, such as the tetrahedral bonding of carbon in the diamond structure,[26] the octahedral bonding of metals observed in ammonium hexachloroplatinate (IV),[43] and the resonance observed in the planar carbonate group[29] and in aromatic molecules.[44] Kathleen Lonsdale's 1928 structure of hexamethylbenzene[45] established the hexagonal symmetry of benzene and showed a clear difference in bond length between the aliphatic C–C bonds and aromatic C–C bonds; this finding led to the idea of resonance between chemical bonds, which had profound consequences for the development of chemistry.[46] Her conclusions were anticipated by William Henry Bragg, who published models of naphthalene and anthracene in 1921 based on other molecules, an early form of molecular replacement.[44][47]
Also in the 1920s, Victor Moritz Goldschmidt and later Linus Pauling developed rules for eliminating chemically unlikely structures and for determining the relative sizes of atoms. These rules led to the structure of brookite (1928) and an understanding of the relative stability of the rutile, brookite and anatase forms of titanium dioxide.
The distance between two bonded atoms is a sensitive measure of the bond strength and its bond order; thus, X-ray crystallographic studies have led to the discovery of even more exotic types of bonding in inorganic chemistry, such as metal-metal double bonds,[48][49][50] metal-metal quadruple bonds,[51][52][53] and three-center, two-electron bonds.[54] X-ray crystallography — or, strictly speaking, an inelastic Compton scattering experiment — has also provided evidence for the partly covalent character of hydrogen bonds.[55] In the field of organometallic chemistry, the X-ray structure of ferrocene initiated scientific studies of sandwich compounds,[56][57] while that of Zeise's salt stimulated research into "back bonding" and metal-pi complexes.[58][59][60][61] Finally, X-ray crystallography had a pioneering role in the development of supramolecular chemistry, particularly in clarifying the structures of the crown ethers and the principles of host-guest chemistry.
In material sciences, many complicated inorganic and organometallic systems have been analyzed using single-crystal methods, such as fullerenes, metalloporphyrins, and other complicated compounds. Single-crystal diffraction is also used in the pharmaceutical industry, due to recent problems with polymorphs. The major factors affecting the quality of single-crystal structures are the crystal's size and regularity; recrystallization is a commonly used technique to improve these factors in small-molecule crystals. The Cambridge Structural Database contains over 500,000 structures; over 99% of these structures were determined by X-ray diffraction.
### Mineralogy and metallurgy
Since the 1920s, X-ray diffraction has been the principal method for determining the arrangement of atoms in minerals and metals. The application of X-ray crystallography to mineralogy began with the structure of garnet, which was determined in 1924 by Menzer. A systematic X-ray crystallographic study of the silicates was undertaken in the 1920s. This study showed that, as the Si/O ratio is altered, the silicate crystals exhibit significant changes in their atomic arrangements. Machatschki extended these insights to minerals in which aluminium substitutes for the silicon atoms of the silicates. The first application of X-ray crystallography to metallurgy likewise occurred in the mid-1920s.[62][63][64][65][66][67] Most notably, Linus Pauling's structure of the alloy Mg2Sn[68] led to his theory of the stability and structure of complex ionic crystals.[69]
### Early organic and small biological molecules
The three-dimensional structure of penicillin, for which Dorothy Crowfoot Hodgkin was awarded the Nobel Prize in Chemistry in 1964. The green, white, red, yellow and blue spheres represent atoms of carbon, hydrogen, oxygen, sulfur and nitrogen, respectively.
The first structure of an organic compound, hexamethylenetetramine, was solved in 1923.[70] This was followed by several studies of long-chain fatty acids, which are an important component of biological membranes.[71][72][73][74][75][76][77][78][79] In the 1930s, the structures of much larger molecules with two-dimensional complexity began to be solved. A significant advance was the structure of phthalocyanine,[80] a large planar molecule that is closely related to porphyrin molecules important in biology, such as heme, corrin and chlorophyll.
X-ray crystallography of biological molecules took off with Dorothy Crowfoot Hodgkin, who solved the structures of cholesterol (1937), vitamin B12 (1945) and penicillin (1954), for which she was awarded the Nobel Prize in Chemistry in 1964. In 1969, she succeeded in solving the structure of insulin, on which she worked for over thirty years.[81]
Ribbon diagram of the structure of myoglobin, showing colored alpha helices. Such proteins are long, linear molecules with thousands of atoms; yet the relative position of each atom has been determined with sub-atomic resolution by X-ray crystallography. Since it is difficult to visualize all the atoms at once, the ribbon shows the rough path of the protein polymer from its N-terminus (blue) to its C-terminus (red).
### Biological macromolecular crystallography
Crystal structures of proteins (which are irregular and hundreds of times larger than cholesterol) began to be solved in the late 1950s, beginning with the structure of sperm whale myoglobin by Max Perutz and Sir John Cowdery Kendrew, for which they were awarded the Nobel Prize in Chemistry in 1962.[82] Since that success, over 48970 X-ray crystal structures of proteins, nucleic acids and other biological molecules have been determined.[83] For comparison, the nearest competing method in terms of structures analyzed is nuclear magnetic resonance (NMR) spectroscopy, which has resolved 7806 chemical structures.[84] Moreover, crystallography can solve structures of arbitrarily large molecules, whereas solution-state NMR is restricted to relatively small ones (less than 70 kDa). X-ray crystallography is now used routinely by scientists to determine how a pharmaceutical drug interacts with its protein target and what changes might improve it.[85] However, intrinsic membrane proteins remain challenging to crystallize because they require detergents or other means to solubilize them in isolation, and such detergents often interfere with crystallization. Such membrane proteins are a large component of the genome and include many proteins of great physiological importance, such as ion channels and receptors.[86][87]
## Relationship to other scattering techniques
### Elastic vs. inelastic scattering
X-ray crystallography is a form of elastic scattering; the outgoing X-rays have the same energy, and thus same wavelength, as the incoming X-rays, only with altered direction. By contrast, inelastic scattering occurs when energy is transferred from the incoming X-ray to the crystal, e.g., by exciting an inner-shell electron to a higher energy level. Such inelastic scattering reduces the energy (or increases the wavelength) of the outgoing beam. Inelastic scattering is useful for probing such excitations of matter, but not in determining the distribution of scatterers within the matter, which is the goal of X-ray crystallography.
X-rays range in wavelength from 10 to 0.01 nanometers; a typical wavelength used for crystallography is 1 Å, which is on the scale of covalent chemical bonds and the radius of a single atom. Longer-wavelength photons (such as ultraviolet radiation) would not have sufficient resolution to determine the atomic positions. At the other extreme, shorter-wavelength photons such as gamma rays are difficult to produce in large numbers, difficult to focus, and interact too strongly with matter, producing particle-antiparticle pairs. Therefore, X-rays are the "sweetspot" for wavelength when determining atomic-resolution structures from the scattering of electromagnetic radiation.
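The "sweetspot" can be made concrete with the photon-energy relation E = hc/λ; the short sketch below converts the wavelength range quoted above into energies (1 Å corresponds to about 12.4 keV):

```python
h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m / s
eV = 1.602176634e-19  # J per electron-volt

for lam_angstrom in (100, 10, 1, 0.1):   # the 10 nm to 0.01 nm range quoted above
    E_keV = h * c / (lam_angstrom * 1e-10) / eV / 1e3
    print(f"{lam_angstrom} A -> {E_keV:.2f} keV")
```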
### Other X-ray techniques
Other forms of elastic X-ray scattering include powder diffraction, SAXS and several types of X-ray fiber diffraction, which was used by Rosalind Franklin in determining the double-helix structure of DNA. In general, single-crystal X-ray diffraction offers more structural information than these other techniques; however, it requires a sufficiently large and regular crystal, which is not always available.
These scattering methods generally use monochromatic X-rays, which are restricted to a single wavelength with minor deviations. A broad spectrum of X-rays (that is, a blend of X-rays with different wavelengths) can also be used to carry out X-ray diffraction, a technique known as the Laue method. This is the method used in the original discovery of X-ray diffraction. Laue scattering provides much structural information with only a short exposure to the X-ray beam, and is therefore used in structural studies of very rapid events (Time resolved crystallography). However, it is not as well-suited as monochromatic scattering for determining the full atomic structure of a crystal and therefore works better with crystals with relatively simple atomic arrangements.
The Laue back reflection mode records X-rays scattered backwards from a broad spectrum source. This is useful if the sample is too thick for X-rays to transmit through it. The diffracting planes in the crystal are determined by knowing that the normal to the diffracting plane bisects the angle between the incident beam and the diffracted beam. A Greninger chart can be used [88] to interpret the back reflection Laue photograph.
### Electron and neutron diffraction
Other particles, such as electrons and neutrons, may be used to produce a diffraction pattern. Although electron, neutron, and X-ray scattering use very different equipment, the resulting diffraction patterns are analyzed using the same coherent diffraction imaging techniques.
As derived below, the electron density within the crystal and the diffraction patterns are related by a simple mathematical method, the Fourier transform, which allows the density to be calculated relatively easily from the patterns. However, this works only if the scattering is weak, i.e., if the scattered beams are much less intense than the incoming beam. Weakly scattered beams pass through the remainder of the crystal without undergoing a second scattering event. Such re-scattered waves are called "secondary scattering" and hinder the analysis. Any sufficiently thick crystal will produce secondary scattering, but since X-rays interact relatively weakly with the electrons, this is generally not a significant concern. By contrast, electron beams may produce strong secondary scattering even for relatively thin crystals (>100 nm). Since this thickness corresponds to the diameter of many viruses, a promising direction is the electron diffraction of isolated macromolecular assemblies, such as viral capsids and molecular machines, which may be carried out with a cryo-electron microscope.
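The Fourier-transform relationship described above can be illustrated in one dimension with a few lines of numpy; this is an illustrative sketch with arbitrary grid numbers, treating atoms as point scatterers:

```python
import numpy as np

N, spacing = 1024, 16          # grid points; one point scatterer every 16 points
density = np.zeros(N)
density[::spacing] = 1.0       # a perfectly periodic row of scatterers

# The diffraction pattern is the squared modulus of the Fourier transform.
intensity = np.abs(np.fft.fft(density)) ** 2
peaks = np.nonzero(intensity > 0.5 * intensity.max())[0]
print(peaks[:8])               # sharp peaks at multiples of N/spacing = 64,
                               # the 1-D analogue of Bragg reflections
```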
Neutron diffraction is an excellent method for structure determination, although it has been difficult to obtain intense, monochromatic beams of neutrons in sufficient quantities. Traditionally, nuclear reactors have been used, although the new Spallation Neutron Source holds much promise in the near future. Being uncharged, neutrons scatter from atomic nuclei much more readily than from the electrons. Therefore, neutron scattering is very useful for observing the positions of light atoms with few electrons, especially hydrogen, which is essentially invisible in X-ray diffraction. Neutron scattering also has the remarkable property that the solvent can be made invisible by adjusting the ratio of normal water, H2O, and heavy water, D2O.
## Methods
### Overview of single-crystal X-ray diffraction
Workflow for solving the structure of a molecule by X-ray crystallography.
The oldest and most precise method of X-ray crystallography is single-crystal X-ray diffraction, in which a beam of X-rays strikes a single crystal, producing scattered beams. When they land on a piece of film or other detector, these beams make a diffraction pattern of spots; the strengths and angles of these beams are recorded as the crystal is gradually rotated.[89] Each spot is called a reflection, since it corresponds to the reflection of the X-rays from one set of evenly spaced planes within the crystal. For single crystals of sufficient purity and regularity, X-ray diffraction data can determine the mean chemical bond lengths and angles to within a few thousandths of an Ångström and to within a few tenths of a degree, respectively. The atoms in a crystal are not static, but oscillate about their mean positions, usually by less than a few tenths of an Ångström. X-ray crystallography allows measuring the size of these oscillations.
#### Procedure
The technique of single-crystal X-ray crystallography has three basic steps. The first — and often most difficult — step is to obtain an adequate crystal of the material under study. The crystal should be sufficiently large (typically larger than 0.1 mm in all dimensions), pure in composition and regular in structure, with no significant internal imperfections such as cracks or twinning.
In the second step, the crystal is placed in an intense beam of X-rays, usually of a single wavelength (monochromatic X-rays), producing the regular pattern of reflections. As the crystal is gradually rotated, previous reflections disappear and new ones appear; the intensity of every spot is recorded at every orientation of the crystal. Multiple data sets may have to be collected, with each set covering slightly more than half a full rotation of the crystal and typically containing tens of thousands of reflections.
In the third step, these data are combined computationally with complementary chemical information to produce and refine a model of the arrangement of atoms within the crystal. The final, refined model of the atomic arrangement — now called a crystal structure — is usually stored in a public database.
#### Limitations
As the crystal's repeating unit, its unit cell, becomes larger and more complex, the atomic-level picture provided by X-ray crystallography becomes less well-resolved (more "fuzzy") for a given number of observed reflections. Two limiting cases of X-ray crystallography—"small-molecule" and "macromolecular" crystallography—are often discerned. Small-molecule crystallography typically involves crystals with fewer than 100 atoms in their asymmetric unit; such crystal structures are usually so well resolved that the atoms can be discerned as isolated "blobs" of electron density. By contrast, macromolecular crystallography often involves tens of thousands of atoms in the unit cell. Such crystal structures are generally less well-resolved (more "smeared out"); the atoms and chemical bonds appear as tubes of electron density, rather than as isolated atoms. In general, small molecules are also easier to crystallize than macromolecules; however, X-ray crystallography has proven possible even for viruses with hundreds of thousands of atoms.
### Crystallization
A protein crystal seen under a microscope. Crystals used in X-ray crystallography may be smaller than a millimeter across.
Although crystallography can be used to characterize the disorder in an impure or irregular crystal, crystallography generally requires a pure crystal of high regularity to solve the structure of a complicated arrangement of atoms. Pure, regular crystals can sometimes be obtained from natural or synthetic materials, such as samples of metals, minerals or other macroscopic materials. The regularity of such crystals can sometimes be improved with annealing and other methods. However, in many cases, obtaining a diffraction-quality crystal is the chief barrier to solving its atomic-resolution structure.[90]
Small-molecule and macromolecular crystallography differ in the range of possible techniques used to produce diffraction-quality crystals. Small molecules generally have few degrees of conformational freedom, and may be crystallized by a wide range of methods, such as chemical vapor deposition and recrystallization. By contrast, macromolecules generally have many degrees of freedom and their crystallization must be carried out to maintain a stable structure. For example, proteins and larger RNA molecules cannot be crystallized if their tertiary structure has been unfolded; therefore, the range of crystallization conditions is restricted to solution conditions in which such molecules remain folded.
Three methods of preparing crystals, A: Hanging drop. B: Sitting drop. C: Microdialysis
Protein crystals are almost always grown in solution. The most common approach is to lower the solubility of the component molecules very gradually; if this is done too quickly, the molecules will precipitate from solution, forming a useless dust or amorphous gel on the bottom of the container. Crystal growth in solution is characterized by two steps: nucleation of a microscopic crystallite (possibly having only 100 molecules), followed by growth of that crystallite, ideally to a diffraction-quality crystal.[91] The solution conditions that favor the first step (nucleation) are not always the same conditions that favor the second step (subsequent growth). The crystallographer's goal is to identify solution conditions that favor the development of a single, large crystal, since larger crystals offer improved resolution of the molecule. Consequently, the solution conditions should disfavor the first step (nucleation) but favor the second (growth), so that only one large crystal forms per droplet. If nucleation is favored too much, a shower of small crystallites will form in the droplet, rather than one large crystal; if favored too little, no crystal will form whatsoever.
It is extremely difficult to predict good conditions for nucleation or growth of well-ordered crystals.[92] In practice, favorable conditions are identified by screening; a very large batch of the molecules is prepared, and a wide variety of crystallization solutions are tested.[93] Hundreds, even thousands, of solution conditions are generally tried before finding the successful one. The various conditions can use one or more physical mechanisms to lower the solubility of the molecule; for example, some may change the pH, some contain salts of the Hofmeister series or chemicals that lower the dielectric constant of the solution, and still others contain large polymers such as polyethylene glycol that drive the molecule out of solution by entropic effects. It is also common to try several temperatures for encouraging crystallization, or to gradually lower the temperature so that the solution becomes supersaturated. These methods require large amounts of the target molecule, as they use high concentrations of the molecule(s) to be crystallized. Due to the difficulty in obtaining such large quantities (milligrams) of crystallization-grade protein, robots have been developed that are capable of accurately dispensing crystallization trial drops that are on the order of 100 nanoliters in volume. This means that 10-fold less protein is used per experiment when compared to crystallization trials set up by hand (on the order of 1 microliter).[94]
Several factors are known to inhibit or mar crystallization. The growing crystals are generally held at a constant temperature and protected from shocks or vibrations that might disturb their crystallization. Impurities in the molecules or in the crystallization solutions are often inimical to crystallization. Conformational flexibility in the molecule also tends to make crystallization less likely, due to entropy. Ironically, molecules that tend to self-assemble into regular helices are often unwilling to assemble into crystals. Crystals can be marred by twinning, which can occur when a unit cell can pack equally favorably in multiple orientations; although recent advances in computational methods may allow solving the structure of some twinned crystals. Having failed to crystallize a target molecule, a crystallographer may try again with a slightly modified version of the molecule; even small changes in molecular properties can lead to large differences in crystallization behavior.
### Data collection
#### Mounting the crystal
Animation showing the five motions possible with a four-circle kappa goniometer. The rotations about each of the four angles φ, κ, ω and 2θ leave the crystal within the X-ray beam, but change the crystal orientation. The detector (red box) can be slid closer or further away from the crystal, allowing higher resolution data to be taken (if closer) or better discernment of the Bragg peaks (if further away).
The crystal is mounted for measurements so that it may be held in the X-ray beam and rotated. There are several methods of mounting. Although crystals were once loaded into glass capillaries with the crystallization solution (the mother liquor), a modern approach is to scoop the crystal up in a tiny loop, made of nylon or plastic and attached to a solid rod, that is then flash-frozen with liquid nitrogen.[95] This freezing reduces the radiation damage of the X-rays, as well as the noise in the Bragg peaks due to thermal motion (the Debye-Waller effect). However, untreated crystals often crack if flash-frozen; therefore, they are generally pre-soaked in a cryoprotectant solution before freezing.[96] Unfortunately, this pre-soak may itself cause the crystal to crack, ruining it for crystallography. Generally, successful cryo-conditions are identified by trial and error.
The capillary or loop is mounted on a goniometer, which allows it to be positioned accurately within the X-ray beam and rotated. Since both the crystal and the beam are often very small, the crystal must be centered within the beam to within ~25 micrometers accuracy, which is aided by a camera focused on the crystal. The most common type of goniometer is the "kappa goniometer", which offers three angles of rotation: the ω angle, which rotates about an axis perpendicular to the beam; the κ angle, about an axis at ~50° to the ω axis; and, finally, the φ angle about the loop/capillary axis. When the κ angle is zero, the ω and φ axes are aligned. The κ rotation allows for convenient mounting of the crystal, since the arm in which the crystal is mounted may be swung out towards the crystallographer. The oscillations carried out during data collection (mentioned below) involve the ω axis only. An older type of goniometer is the four-circle goniometer, and its relatives such as the six-circle goniometer.
#### X-ray sources
The mounted crystal is then irradiated with a beam of monochromatic X-rays. The brightest and most useful X-ray sources are synchrotrons; their much higher luminosity allows for better resolution. They also make it convenient to tune the wavelength of the radiation, which is useful for multi-wavelength anomalous dispersion (MAD) phasing, described below. Synchrotrons are generally national facilities, each with several dedicated beamlines where data is collected around the clock, seven days a week.
A diffractometer
Smaller X-ray generators are often used in laboratories to check the quality of crystals before bringing them to a synchrotron, and sometimes to solve a crystal structure. In such systems, electrons are boiled off a cathode and accelerated through a strong electric potential of ~50 kV; having reached a high speed, the electrons collide with a metal plate, emitting bremsstrahlung and some strong spectral lines corresponding to the excitation of inner-shell electrons of the metal. The most common metal used is copper, which can be kept cool easily, due to its high thermal conductivity, and which produces strong Kα and Kβ lines. The Kβ line is sometimes suppressed with a thin (~10 µm) nickel foil. The simplest and cheapest variety of sealed X-ray tube has a stationary anode (the Crookes tube) and produces ~2 kW of X-ray radiation. The more expensive variety has a rotating-anode source that produces ~14 kW of X-ray radiation.
X-rays are generally filtered (by the use of X-ray filters) to a single wavelength (made monochromatic) and collimated to a single direction before they are allowed to strike the crystal. The filtering not only simplifies the data analysis, but also removes radiation that degrades the crystal without contributing useful information. Collimation is done either with a collimator (basically, a long tube) or with a clever arrangement of gently curved mirrors. Mirror systems are preferred for small crystals (under 0.3 mm) or with large unit cells (over 150 Å).
#### Recording the reflections
An X-ray diffraction pattern of a crystallized enzyme. The pattern of spots (called reflections) can be used to determine the structure of the enzyme.
When a crystal is mounted and exposed to an intense beam of X-rays, it scatters the X-rays into a pattern of spots or reflections that can be observed on a screen behind the crystal. A similar pattern may be seen by shining a laser pointer at a compact disc. The relative intensities of these spots provide the information to determine the arrangement of molecules within the crystal in atomic detail. The intensities of these reflections may be recorded with photographic film, an area detector or with a charge-coupled device (CCD) image sensor. The peaks at small angles correspond to low-resolution data, whereas those at high angles represent high-resolution data; thus, an upper limit on the eventual resolution of the structure can be determined from the first few images. Some measures of diffraction quality can be determined at this point, such as the mosaicity of the crystal and its overall disorder, as observed in the peak widths. Some pathologies of the crystal that would render it unfit for solving the structure can also be diagnosed quickly at this point.
One image of spots is insufficient to reconstruct the whole crystal; it represents only a small slice of the full Fourier transform. To collect all the necessary information, the crystal must be rotated step-by-step through 180°, with an image recorded at every step; actually, slightly more than 180° is required to cover reciprocal space, due to the curvature of the Ewald sphere. However, if the crystal has a higher symmetry, a smaller angular range such as 90° or 45° may be recorded. The rotation axis should be changed at least once, to avoid developing a "blind spot" in reciprocal space close to the rotation axis. It is customary to rock the crystal slightly (by 0.5-2°) to catch a broader region of reciprocal space.
Multiple data sets may be necessary for certain phasing methods. For example, MAD phasing requires that the scattering be recorded at least three (and usually four, for redundancy) wavelengths of the incoming X-ray radiation. A single crystal may degrade too much during the collection of one data set, owing to radiation damage; in such cases, data sets on multiple crystals must be taken.[97]
### Data analysis
#### Crystal symmetry, unit cell, and image scaling
The recorded series of two-dimensional diffraction patterns, each corresponding to a different crystal orientation, is converted into a three-dimensional model of the electron density; the conversion uses the mathematical technique of Fourier transforms, which is explained below. Each spot corresponds to a different type of variation in the electron density; the crystallographer must determine which variation corresponds to which spot (indexing), the relative strengths of the spots in different images (merging and scaling) and how the variations should be combined to yield the total electron density (phasing).
Data processing begins with indexing the reflections. This means identifying the dimensions of the unit cell and which image peak corresponds to which position in reciprocal space. A byproduct of indexing is to determine the symmetry of the crystal, i.e., its space group. Some space groups can be eliminated from the beginning. For example, reflection symmetries cannot be observed in chiral molecules; thus, only 65 of the 230 possible space groups are allowed for protein molecules, which are almost always chiral. Indexing is generally accomplished using an autoindexing routine.[98] Having assigned symmetry, the data is then integrated. This converts the hundreds of images containing the thousands of reflections into a single file, consisting of (at the very least) records of the Miller index of each reflection, and an intensity for each reflection (at this stage the file often also includes error estimates and measures of partiality, i.e., what part of a given reflection was recorded on that image).
A full data set may consist of hundreds of separate images taken at different orientations of the crystal. The first step is to merge and scale these various images, that is, to identify which peaks appear in two or more images (merging) and to scale the images so that they have a consistent intensity scale. Optimizing the intensity scale is critical because the relative intensity of the peaks is the key information from which the structure is determined. The repetitive technique of crystallographic data collection and the often high symmetry of crystalline materials cause the diffractometer to record many symmetry-equivalent reflections multiple times. This allows a symmetry-related R-factor to be calculated, based on how similar the measured intensities of symmetry-equivalent reflections are, thus assessing the quality of the data.
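Since this merging R-factor is just a ratio of summed deviations to summed intensities, it is straightforward to compute once replicate measurements are grouped by unique reflection. The following is a minimal sketch in R; the toy data and the function name `r_merge` are illustrative and not taken from any crystallography package:

```r
## Merging R-factor: for each unique reflection, compare replicate
## intensity measurements with their mean,
##   R_merge = sum |I_i - <I>| / sum I_i   (summed over all observations).
r_merge <- function(hkl, I) {
  Ibar <- ave(I, hkl, FUN = mean)  # mean intensity per unique reflection
  sum(abs(I - Ibar)) / sum(I)
}

## Toy data: two unique reflections, each measured several times
obs <- data.frame(hkl = c("1 0 0", "1 0 0", "2 1 0", "2 1 0", "2 1 0"),
                  I   = c(100, 104, 250, 243, 251))
r_merge(obs$hkl, obs$I)  # a small value indicates internally consistent data
```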
#### Initial phasing
The data collected from a diffraction experiment is a reciprocal space representation of the crystal lattice. The position of each diffraction 'spot' is governed by the size and shape of the unit cell, and the inherent symmetry within the crystal. The intensity of each diffraction 'spot' is recorded, and this intensity is proportional to the square of the structure factor amplitude. The structure factor is a complex number containing information relating to both the amplitude and phase of a wave. In order to obtain an interpretable electron density map, both amplitude and phase must be known (an electron density map allows a crystallographer to build a starting model of the molecule). The phase cannot be directly recorded during a diffraction experiment: this is known as the phase problem. Initial phase estimates can be obtained in a variety of ways:
• Ab initio phasing or direct methods - This is usually the method of choice for small molecules (<1000 non-hydrogen atoms), and has been used successfully to solve the phase problems for small proteins. If the resolution of the data is better than 1.4 Å (140 pm), direct methods can be used to obtain phase information, by exploiting known phase relationships between certain groups of reflections.[99][100]
• Molecular replacement - if a related structure is known, it can be used as a search model in molecular replacement to determine the orientation and position of the molecules within the unit cell. The phases obtained this way can be used to generate electron density maps.[101]
• Anomalous X-ray scattering (MAD or SAD phasing) - the X-ray wavelength may be scanned past an absorption edge of an atom, which changes the scattering in a known way. By recording full sets of reflections at three different wavelengths (far below, far above and in the middle of the absorption edge) one can solve for the substructure of the anomalously diffracting atoms and thence the structure of the whole molecule. The most popular method of incorporating anomalous scattering atoms into proteins is to express the protein in a methionine auxotroph (a host incapable of synthesizing methionine) in media rich in selenomethionine, which contains selenium atoms. A MAD experiment can then be conducted around the absorption edge, which should then yield the position of any methionine residues within the protein, providing initial phases.[102]
• Heavy atom methods (multiple isomorphous replacement) - If electron-dense metal atoms can be introduced into the crystal, direct methods or Patterson-space methods can be used to determine their location and to obtain initial phases. Such heavy atoms can be introduced either by soaking the crystal in a heavy atom-containing solution, or by co-crystallization (growing the crystals in the presence of a heavy atom). As in MAD phasing, the changes in the scattering amplitudes can be interpreted to yield the phases. Although this is the original method by which protein crystal structures were solved, it has largely been superseded by MAD phasing with selenomethionine.[101]
#### Model building and phase refinement
A protein crystal structure at 2.7 Å resolution. The mesh encloses the region in which the electron density exceeds a given threshold. The straight segments represent chemical bonds between the non-hydrogen atoms of an arginine (upper left), a tyrosine (lower left), a disulfide bond (upper right, in yellow), and some peptide groups (running left-right in the middle). The two curved green tubes represent spline fits to the polypeptide backbone.
Having obtained initial phases, an initial model can be built. This model can be used to refine the phases, leading to an improved model, and so on. Given a model of some atomic positions, these positions and their respective Debye-Waller factors (or B-factors, accounting for the thermal motion of the atom) can be refined to fit the observed diffraction data, ideally yielding a better set of phases. A new model can then be fit to the new electron density map and a further round of refinement is carried out. This continues until the correlation between the diffraction data and the model is maximized. The agreement is measured by an R-factor defined as
$R = \frac{\sum_{\mathrm{all\ reflections}} \left|F_{o} - F_{c} \right|}{\sum_{\mathrm{all\ reflections}} \left|F_{o} \right|}$
A similar quality criterion is Rfree, which is calculated from a subset (~10%) of reflections that were not included in the structure refinement. Both R factors depend on the resolution of the data. As a rule of thumb, Rfree should be approximately the resolution in Ångströms divided by 10; thus, a data-set with 2 Å resolution should yield a final Rfree ~ 0.2. Chemical bonding features such as stereochemistry, hydrogen bonding and distribution of bond lengths and angles are complementary measures of the model quality. Phase bias is a serious problem in such iterative model building. Omit maps are a common technique used to check for this.
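As a concrete illustration, the R-factor above translates directly into code. The sketch below uses simulated amplitudes (real Fo and Fc columns come from refinement software) and shows that Rfree differs only in which reflections enter the sums:

```r
## Direct transcription of the R-factor formula, for vectors of observed
## (Fo) and calculated (Fc) structure-factor amplitudes.
r_factor <- function(Fo, Fc) sum(abs(Fo - Fc)) / sum(abs(Fo))

## Rfree applies the same formula to a held-out subset (~10%) of
## reflections that never entered the refinement.
set.seed(1)
Fo   <- abs(rnorm(1000, mean = 50, sd = 20))  # simulated amplitudes
Fc   <- Fo + rnorm(1000, sd = 8)              # an imperfect model
free <- seq_along(Fo) %in% sample(seq_along(Fo), 100)
c(R = r_factor(Fo[!free], Fc[!free]), Rfree = r_factor(Fo[free], Fc[free]))
```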
It may not be possible to observe every atom of the crystallized molecule; it must be remembered that the resulting electron density is an average of all the molecules within the crystal. In some cases, there is too much residual disorder in those atoms, and the resulting electron density for atoms existing in many conformations is smeared to such an extent that it is no longer detectable in the electron density map. Weakly scattering atoms such as hydrogen are routinely invisible. It is also possible for a single atom to appear multiple times in an electron density map, e.g., if a protein sidechain has multiple (<4) allowed conformations. In still other cases, the crystallographer may detect that the covalent structure deduced for the molecule was incorrect, or had changed. For example, proteins may be cleaved or undergo post-translational modifications that were not detected prior to the crystallization.
### Deposition of the structure
Once the model of a molecule's structure has been finalized, it is often deposited in a crystallographic database such as the Cambridge Structural Database (for small molecules) or the Protein Data Bank (for protein structures). Many structures obtained in private commercial ventures to crystallize medicinally relevant proteins, are not deposited in public crystallographic databases.
## Diffraction theory
The main goal of X-ray crystallography is to determine the density of electrons f(r) throughout the crystal, where r represents the three-dimensional position vector within the crystal. To do this, X-ray scattering is used to collect data about its Fourier transform F(q), which is inverted mathematically to obtain the density defined in real space, using the formula
$f(\mathbf{r}) = \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} F(\mathbf{q}) e^{i\mathbf{q}\cdot\mathbf{r}}$
where the integral is taken over all values of q. The three-dimensional real vector q represents a point in reciprocal space, that is, a particular oscillation in the electron density as one moves in the direction in which q points. The length of q corresponds to $2\pi$ divided by the wavelength of the oscillation. The corresponding formula for the Fourier transform will be used below
$F(\mathbf{q}) = \int d\mathbf{r} f(\mathbf{r}) e^{-i\mathbf{q}\cdot\mathbf{r}}$
where the integral is summed over all possible values of the position vector r within the crystal.
The Fourier transform F(q) is generally a complex number, and therefore has a magnitude |F(q)| and a phase φ(q) related by the equation
$F(\mathbf{q}) = \left|F(\mathbf{q}) \right|e^{i\phi(\mathbf{q})}$
The intensities of the reflections observed in X-ray diffraction give us the magnitudes |F(q)| but not the phases φ(q). To obtain the phases, full sets of reflections are collected with known alterations to the scattering, either by modulating the wavelength past a certain absorption edge or by adding strongly scattering (i.e., electron-dense) metal atoms such as mercury. Combining the magnitudes and phases yields the full Fourier transform F(q), which may be inverted to obtain the electron density f(r).
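A one-dimensional toy calculation makes the role of the phases concrete. The sketch below uses R's discrete Fourier transform as a stand-in for F(q): with both magnitudes and phases the density inverts back exactly, while magnitudes alone (the phase problem) do not:

```r
## Toy 1-D "electron density" on a periodic grid
rho <- c(0, 1, 4, 1, 0, 0, 2, 0)
Fq  <- fft(rho)                       # complex "structure factors" F(q)
mag <- Mod(Fq); phase <- Arg(Fq)      # an experiment measures only mag^2

## Magnitude and phase together recover the density exactly:
Re(fft(mag * exp(1i * phase), inverse = TRUE)) / length(rho)

## Magnitudes alone (all phases set to zero) give a wrong density:
Re(fft(mag, inverse = TRUE)) / length(rho)
```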
Crystals are often idealized as being perfectly periodic. In that ideal case, the atoms are positioned on a perfect lattice, the electron density is perfectly periodic, and the Fourier transform F(q) is zero except when q belongs to the reciprocal lattice (the so-called Bragg peaks). In reality, however, crystals are not perfectly periodic; atoms vibrate about their mean position, and there may be disorder of various types, such as mosaicity, dislocations, various point defects, and heterogeneity in the conformation of crystallized molecules. Therefore, the Bragg peaks have a finite width and there may be significant diffuse scattering, a continuum of scattered X-rays that fall between the Bragg peaks.
### Intuitive understanding by Bragg's law
An intuitive understanding of X-ray diffraction can be obtained from the Bragg model of diffraction. In this model, a given reflection is associated with a set of evenly spaced sheets running through the crystal, usually passing through the centers of the atoms of the crystal lattice. The orientation of a particular set of sheets is identified by its three Miller indices (h, k, l), and let their spacing be denoted by d. William Lawrence Bragg proposed a model in which the incoming X-rays are scattered specularly (mirror-like) from each plane; from that assumption, X-rays scattered from adjacent planes will combine constructively (constructive interference) when the angle θ between the plane and the X-ray results in a path-length difference that is an integer multiple n of the X-ray wavelength λ.
$2 d\sin\theta = n\lambda\,$
A reflection is said to be indexed when its Miller indices (or, more correctly, its reciprocal lattice vector components) have been identified from the known wavelength and the scattering angle 2θ. Such indexing gives the unit-cell parameters, the lengths and angles of the unit-cell, as well as its space group. Since Bragg's law does not interpret the relative intensities of the reflections, however, it is generally inadequate to solve for the arrangement of atoms within the unit-cell; for that, a Fourier transform method must be carried out.
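For instance, Bragg's law can be solved for the plane spacing d given the observed scattering angle and the wavelength. A small sketch in R, using the common Cu Kα wavelength of about 1.5418 Å as an example value:

```r
## Solve 2 d sin(theta) = n lambda for d, with the detector angle given
## as 2*theta in degrees.
bragg_d <- function(two_theta_deg, lambda, n = 1) {
  theta <- (two_theta_deg / 2) * pi / 180
  n * lambda / (2 * sin(theta))
}
bragg_d(30, lambda = 1.5418)  # first-order reflection at 2*theta = 30 deg: d ~ 2.98 Angstrom
```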
### Scattering as a Fourier transform
The incoming X-ray beam has a polarization and should be represented as a vector wave; however, for simplicity, let it be represented here as a scalar wave. We also ignore the complication of the time dependence of the wave and just focus on the wave's spatial dependence. Plane waves can be represented by a wave vector kin, and so the strength of the incoming wave at time t=0 is given by
$A e^{i\mathbf{k}_{in} \cdot \mathbf{r}}$
At position r within the sample, let there be a density of scatterers f(r); these scatterers should produce a scattered spherical wave of amplitude proportional to the local amplitude of the incoming wave times the number of scatterers in a small volume dV about r
$\mathrm{amplitude\ of\ scattered\ wave} = A e^{i\mathbf{k}_{in} \cdot \mathbf{r}} S f(\mathbf{r}) dV$
where S is the proportionality constant.
Let's consider the fraction of scattered waves that leave with an outgoing wave-vector of kout and strike the screen at rscreen. Since no energy is lost (elastic, not inelastic scattering), the wavelengths are the same as are the magnitudes of the wave-vectors |kin|=|kout|. From the time that the photon is scattered at r until it is absorbed at rscreen, the photon undergoes a change in phase
$e^{i \mathbf{k}_{out} \cdot \left( \mathbf{r}_{\mathrm{screen}} - \mathbf{r} \right)}$
The net radiation arriving at rscreen is the sum of all the scattered waves throughout the crystal
$A S \int d\mathbf{r} f(\mathbf{r}) e^{i \mathbf{k}_{in} \cdot \mathbf{r}} e^{i \mathbf{k}_{out} \cdot \left( \mathbf{r}_{\mathrm{screen}} - \mathbf{r} \right)} = A S e^{i \mathbf{k}_{out} \cdot \mathbf{r}_{\mathrm{screen}}} \int d\mathbf{r} f(\mathbf{r}) e^{i \left( \mathbf{k}_{in} - \mathbf{k}_{out} \right) \cdot \mathbf{r}}$
which may be written as a Fourier transform
$A S e^{i \mathbf{k}_{out} \cdot \mathbf{r}_{\mathrm{screen}}} \int d\mathbf{r} f(\mathbf{r}) e^{-i \mathbf{q} \cdot \mathbf{r}} = A S e^{i \mathbf{k}_{out} \cdot \mathbf{r}_{\mathrm{screen}}} F(\mathbf{q})$
where q = kout - kin. The measured intensity of the reflection will be the square of this amplitude
$A^{2} S^{2} \left|F(\mathbf{q}) \right|^{2}$
### Friedel and Bijvoet mates
For every reflection corresponding to a point q in the reciprocal space, there is another reflection of the same intensity at the opposite point -q. This opposite reflection is known as the Friedel mate of the original reflection. This symmetry results from the mathematical fact that the density of electrons f(r) at a position r is always a real number. As noted above, f(r) is the inverse transform of its Fourier transform F(q); however, such an inverse transform is a complex number in general. To ensure that f(r) is real, the Fourier transform F(q) must be such that the Friedel mates F(−q) and F(q) are complex conjugates of one another. Thus, F(−q) has the same magnitude as F(q) but the opposite phase, i.e., φ(−q) = −φ(q)
$F(-\mathbf{q}) = \left|F(-\mathbf{q}) \right|e^{i\phi(-\mathbf{q})} = F^{*}(\mathbf{q}) = \left|F(\mathbf{q}) \right|e^{-i\phi(\mathbf{q})}$
The equality of their magnitudes ensures that the Friedel mates have the same intensity |F|2. This symmetry allows one to measure the full Fourier transform from only half the reciprocal space, e.g., by rotating the crystal slightly more than 180° instead of a full turn. In crystals with significant symmetry, even more reflections may have the same intensity (Bijvoet mates); in such cases, even less of the reciprocal space may need to be measured, e.g., slightly more than 90°.
The Friedel-mate constraint can be derived from the definition of the inverse Fourier transform
$f(\mathbf{r}) = \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} F(\mathbf{q}) e^{i\mathbf{q}\cdot\mathbf{r}} = \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} \left|F(\mathbf{q}) \right|e^{i\phi(\mathbf{q})} e^{i\mathbf{q}\cdot\mathbf{r}}$
Since Euler's formula states that eix = cos(x) + i sin(x), the inverse Fourier transform can be separated into a sum of a purely real part and a purely imaginary part
$f(\mathbf{r}) = \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} \left|F(\mathbf{q}) \right|e^{i\left(\phi+\mathbf{q}\cdot\mathbf{r}\right)} = \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} \left|F(\mathbf{q}) \right| \cos\left(\phi+\mathbf{q}\cdot\mathbf{r}\right) + i \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} \left|F(\mathbf{q}) \right| \sin\left(\phi+\mathbf{q}\cdot\mathbf{r}\right) = I_{\mathrm{cos}} + iI_{\mathrm{sin}}$
The function f(r) is real if and only if the second integral Isin is zero for all values of r. This holds whenever the above constraint is satisfied: substituting q → −q in the integral and applying |F(−q)| = |F(q)| and φ(−q) = −φ(q) gives
$I_{\mathrm{sin}} = \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} \left|F(\mathbf{q}) \right|\sin\left(\phi+\mathbf{q}\cdot\mathbf{r}\right) = \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} \left|F(\mathbf{-q}) \right| \sin\left(-\phi-\mathbf{q}\cdot\mathbf{r}\right) = -I_{\mathrm{sin}}$
since Isin = −Isin implies that Isin=0.
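This conjugate symmetry is easy to verify numerically for any real density; in a discrete Fourier transform of length n, the point −q corresponds to index n − q. A small check in R:

```r
## Friedel relation F(-q) = conj(F(q)) for the DFT of a real vector.
rho <- c(3, 1, 4, 1, 5, 9, 2, 6)  # arbitrary real "density"
Fq  <- fft(rho)
n   <- length(rho)
q   <- 2                          # any nonzero frequency index
Fq[1 + q]                         # F(q)   (R vectors are 1-indexed)
Conj(Fq[1 + (n - q)])             # conj(F(-q)); identical to F(q)
```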
### Ewald's sphere
Each X-ray diffraction image represents only a slice, a spherical slice, of reciprocal space, as may be seen by the Ewald sphere construction. Both kout and kin have the same length, due to the elastic scattering, since the wavelength has not changed. Therefore, they may be represented as two radial vectors in a sphere in reciprocal space, which shows the values of q that are sampled in a given diffraction image. Since there is a slight spread in the wavelengths of the incoming X-ray beam, the values of |F(q)| can be measured only for q vectors located between the two spheres corresponding to those radii. Therefore, to obtain a full set of Fourier transform data, it is necessary to rotate the crystal through slightly more than 180°, or sometimes less if sufficient symmetry is present. A full 360° rotation is not needed because of a symmetry intrinsic to the Fourier transforms of real functions (such as the electron density), but "slightly more" than 180° is needed to cover all of reciprocal space within a given resolution because of the curvature of the Ewald sphere. In practice, the crystal is rocked by a small amount (0.25-1°) to incorporate reflections near the boundaries of the spherical Ewald shells.
### Patterson function
A well-known result of Fourier transforms is the autocorrelation theorem, which states that the autocorrelation c(r) of a function f(r)
$c(\mathbf{r}) = \int d\mathbf{x} f(\mathbf{x}) f(\mathbf{x} + \mathbf{r}) = \int \frac{d\mathbf{q}}{\left(2\pi\right)^{3}} C(\mathbf{q}) e^{i\mathbf{q}\cdot\mathbf{r}}$
has a Fourier transform C(q) that is the squared magnitude of F(q)
$C(\mathbf{q}) = \left|F(\mathbf{q}) \right|^{2}$
Therefore, the autocorrelation function c(r) of the electron density (also known as the Patterson function[103]) can be computed directly from the reflection intensities, without computing the phases. In principle, this could be used to determine the crystal structure directly; however, it is difficult to realize in practice. The autocorrelation function corresponds to the distribution of vectors between atoms in the crystal; thus, a crystal of N atoms in its unit cell may have N(N-1) peaks in its Patterson function. Given the inevitable errors in measuring the intensities, and the mathematical difficulties of reconstructing atomic positions from the interatomic vectors, this technique is rarely used to solve structures, except for the simplest crystals.
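The autocorrelation theorem can again be checked with a one-dimensional toy density: the inverse transform of the intensities |F(q)|² reproduces the (circular) autocorrelation of the density, with no phases required. A sketch in R:

```r
rho <- c(0, 1, 4, 1, 0, 0, 2, 0)              # toy density
Iq  <- Mod(fft(rho))^2                        # "measured" intensities only
patterson <- Re(fft(Iq, inverse = TRUE)) / length(rho)
round(patterson, 6)                           # Patterson map of rho

## The same values, computed as an explicit circular autocorrelation
n <- length(rho)
sapply(0:(n - 1), function(r) sum(rho * rho[1 + ((seq_len(n) - 1 + r) %% n)]))
```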
In principle, an atomic structure could be determined from applying X-ray scattering to non-crystalline samples, even to a single molecule. However, crystals offer a much stronger signal due to their periodicity. A crystalline sample is by definition periodic; a crystal is composed of many unit cells repeated indefinitely in three independent directions. Such periodic systems have a Fourier transform that is concentrated at periodically repeating points in reciprocal space known as Bragg peaks; the Bragg peaks correspond to the reflection spots observed in the diffraction image. Since the amplitude at these reflections grows linearly with the number N of scatterers, the observed intensity of these spots should grow quadratically, like N². In other words, using a crystal concentrates the weak scattering of the individual unit cells into a much more powerful, coherent reflection that can be observed above the noise. This is an example of constructive interference.
In a liquid, powder or amorphous sample, molecules within that sample are in random orientations. Such samples have a continuous Fourier spectrum that uniformly spreads its amplitude, thereby reducing the measured signal intensity, as is observed in SAXS. More importantly, the orientational information is lost. Although theoretically possible, it is experimentally difficult to obtain atomic-resolution structures of complicated, asymmetric molecules from such rotationally averaged data. An intermediate case is fiber diffraction, in which the subunits are arranged periodically in at least one dimension.
## References
1. Kepler J (1611). Strena seu de Nive Sexangula. Frankfurt: G. Tampach. ISBN 3321000210.
2. Steno N (1669). De solido intra solidum naturaliter contento dissertationis prodromus. Florentiae.
3. Hessel JFC (1831). Kristallometrie oder Kristallonomie und Kristallographie. Leipzig.
4. Bravais A (1850). "Mémoire sur les systèmes formés par des points distribués regulièrement sur un plan ou dans l'espace". J. L'Ecole Polytech. 19: 1.
5. Shafranovskii I I and Belov N V (1962). "E. S. Fedorov". 50 Years of X-Ray Diffraction, ed. Paul Ewald (Springer): 351. ISBN 9027790299.
6. Schönflies A (1891). Kristallsysteme und Kristallstruktur. Leipzig.
7. Barlow W (1883). "Probable nature of the internal symmetry of crystals". Nature 29: 186. doi:10.1038/029186a0. See also Barlow W, Nature, 29, 205, 383, 404 (1883-1884).
8. Einstein A (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt (trans. A Heuristic Model of the Creation and Transformation of Light)". Annalen der Physik 17: 132. An English translation is available from Wikisource.
9. Einstein A (1909). "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung (trans. The Development of Our Views on the Composition and Essence of Radiation)". Physikalische Zeitschrift 10: 817. An English translation is available from Wikisource.
10. Pais A (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford University Press. ISBN 019853907X.
11. Bragg WH (1907). "The nature of Röntgen rays". Transactions of the Royal Society of Science of Australia 31: 94.
12. Bragg WH (1908). "The nature of γ- and X-rays". Nature 77: 270. doi:10.1038/077270a0. See also Nature, 78, 271, 293–294, 665 (1908).
13. Bragg WH (1910). "The consequences of the corpuscular hypothesis of the γ- and X-rays, and the range of β-rays". Phil. Mag. 20: 385.
14. Bragg WH (1912). "On the direct or indirect nature of the ionization by X-rays". Phil. Mag. 23: 647.
15. Friedrich W, Knipping P, von Laue M (1912). "Interferenz-Erscheinungen bei Röntgenstrahlen". Sitzungsberichte der Mathematisch-Physikalischen Classe der Königlich-Bayerischen Akademie der Wissenschaften zu München 1912: 303.
16. von Laue M (1914). "Concerning the detection of x-ray interferences" (PDF). Nobel Lectures, Physics 1901-1921. Retrieved 2009-02-18.
17. Dana ES, Ford WE (1932). A Textbook of Mineralogy, fourth edition. New York: John Wiley & Sons. p. 28.
18. Bragg WL (1912). "The Specular Reflexion of X-rays". Nature 90: 410. doi:10.1038/090410b0.
19. Bragg WL (1913). "The Diffraction of Short Electromagnetic Waves by a Crystal". Proceedings of the Cambridge Philosophical Society 17: 43.
20. Bragg (1914). "Die Reflexion der Röntgenstrahlen". Jahrbuch der Radioaktivität und Elektronik 11: 350.
21. Bragg (1913). "The Structure of Some Crystals as Indicated by their Diffraction of X-rays". Proc. R. Soc. Lond. A89: 248.
22. Bragg WL, James RW, Bosanquet CH (1921). "The Intensity of Reflexion of X-rays by Rock-Salt". Phil. Mag. 41: 309.
23. Bragg WL, James RW, Bosanquet CH (1921). "The Intensity of Reflexion of X-rays by Rock-Salt. Part II". Phil. Mag. 42: 1.
24. Bragg WL, James RW, Bosanquet CH (1922). "The Distribution of Electrons around the Nucleus in the Sodium and Chlorine Atoms". Phil. Mag. 44: 433.
25. Bragg WH, Bragg WL (1913). "The structure of the diamond". Nature 91: 557. doi:10.1038/091557a0.
26. Bragg WH, Bragg WL (1913). "The structure of the diamond". Proc. R. Soc. Lond. A89: 277. doi:10.1098/rspa.1913.0084.
27. Bragg WL (1914). "The Crystalline Structure of Copper". Phil. Mag. 28: 355.
28. Bragg WL (1914). "The analysis of crystals by the X-ray spectrometer". Proc. R. Soc. Lond. A89: 468.
29. Bragg WH (1915). "The structure of the spinel group of crystals". Phil. Mag. 30: 305.
30. Nishikawa S (1915). "Structure of some crystals of spinel group". Proc. Tokyo Math. Phys. Soc. 8: 199.
31. Vegard L (1916). "Results of Crystal Analysis". Phil. Mag. 32: 65.
32. Aminoff G (1919). "Crystal Structure of Pyrochroite". Stockholm Geol. Fören. Förh. 41: 407.
33. Aminoff G (1921). "Über die Struktur des Magnesiumhydroxids". Z. Kristallogr. 56: 505.
34. Bragg WL (1920). "The crystalline structure of zinc oxide". Phil. Mag. 39: 647.
35. Debije P, Scherrer P (1916). "Interferenz an regellos orientierten Teilchen im Röntgenlicht I". Physikalische Zeitschrift 17: 277.
36. Friedrich W (1913). "Eine neue Interferenzerscheinung bei Röntgenstrahlen". Physikalische Zeitschrift 14: 317.
37. Hull AW (1917). "A New Method of X-ray Crystal Analysis". Phys. Rev. 10: 661. doi:10.1103/PhysRev.10.661.
38. Bernal JD (1924). "The Structure of Graphite". Proc. R. Soc. Lond. A106: 749.
39. Hassel O, Mack H (1924). "Über die Kristallstruktur des Graphits". Zeitschrift für Physik 25: 317. doi:10.1007/BF01327534.
40. Hull AW (1917). "The Crystal Structure of Iron". Phys. Rev. 9: 84.
41. Hull AW (1917). "The Crystal Structure of Magnesium". PNAS 3: 470. doi:10.1073/pnas.3.7.470.
42. Wyckoff RWG, Posnjak E (1921). "The Crystal Structure of Ammonium Chloroplatinate". J. Amer. Chem. Soc. 43: 2292. doi:10.1021/ja01444a002.
43. Bragg WH (1921). "The structure of organic crystals". Proc. R. Soc. Lond. 34: 33.
44. Lonsdale K (1928). "The structure of the benzene ring". Nature 122: 810. doi:10.1038/122810c0.
45. Pauling L. The Nature of the Chemical Bond (3rd ed.). Ithaca, NY: Cornell University Press. ISBN 0801403332.
46. Bragg WH (1922). "The crystalline structure of anthracene". Proc. R. Soc. Lond. 35: 167.
47. Powell HM, Ewens RVG (1939). "The crystal structure of iron enneacarbonyl". J. Chem. Soc.: 286. doi:10.1039/jr9390000286.
48. Bertrand JA, Cotton FA, Dollase WA (1963). "The Metal-Metal Bonded, Polynuclear Complex Anion in CsReCl4". J. Amer. Chem. Soc. 85: 1349. doi:10.1021/ja00892a029.
49. Robinson WT, Fergusson JE, Penfold BR (1963). "Configuration of Anion in CsReCl4". Proceedings of the Chemical Society of London: 116.
50. Cotton FA, Curtis NF, Harris CB, Johnson BFG, Lippard SJ, Mague JT, Robinson WR, Wood JS (1964). "Mononuclear and Polynuclear Chemistry of Rhenium (III): Its Pronounced Homophilicity". Science 145 (3638): 1305. doi:10.1126/science.145.3638.1305. PMID 17802015.
51. Cotton FA, Harris CB (1965). "The Crystal and Molecular Structure of Dipotassium Octachlorodirhenate(III) Dihydrate". Inorganic Chemistry 4: 330. doi:10.1021/ic50025a015.
52. Cotton FA (1965). "Metal-Metal Bonding in [Re2X8]2- Ions and Other Metal Atom Clusters". Inorganic Chemistry 4: 334. doi:10.1021/ic50025a016.
53. Eberhardt WH, Crawford W, Jr., Lipscomb WN (1954). "The valence structure of the boron hydrides". J. Chem. Phys. 22: 989. doi:10.1063/1.1740320.
54. Martin TW, Derewenda ZS (1999). "The name is Bond — H bond". Nature Structural Biology 6 (5): 403. doi:10.1038/8195. PMID 10331860.
55. Dunitz JD, Orgel LE, Rich A (1956). "The crystal structure of ferrocene". Acta Crystallographica 9: 373. doi:10.1107/S0365110X56001091.
56. Seiler P, Dunitz JD (1979). "A new interpretation of the disordered crystal structure of ferrocene". Acta Crystallographica B35: 1068.
57. Wunderlich JA, Mellor DP (1954). "A note on the crystal structure of Zeise's salt". Acta Crystallographica 7: 130. doi:10.1107/S0365110X5400028X.
58. Jarvis JAJ, Kilbourn BT, Owston PG (1970). "A re-determination of the crystal and molecular structure of Zeise's salt, KPtCl3.C2H4.H2O. A correction". Acta Crystallographica B26: 876.
59. Jarvis JAJ, Kilbourn BT, Owston PG (1971). "A re-determination of the crystal and molecular structure of Zeise's salt, KPtCl3.C2H4.H2O". Acta Crystallographica B27: 366.
60. Love RA, Koetzle TF, Williams GJB, Andrews LC, Bau R (1975). "Neutron diffraction study of the structure of Zeise's salt, KPtCl3(C2H4).H2O". Inorganic Chemistry 14: 2653. doi:10.1021/ic50153a012.
61. Westgren A, Phragmén G (1925). "X-ray Analysis of the Cu-Zn, Ag-Zn and Au-Zn Alloys". Phil. Mag. 50: 311.
62. Bradley AJ, Thewlis J (1926). "The structure of γ-Brass". Proc. R. Soc. Lond. 112: 678. doi:10.1098/rspa.1926.0134.
63. Hume-Rothery W (1926). "Researches on the Nature, Properties and Conditions of Formation of Intermetallic Compounds (with special Reference to certain Compounds of Tin)". Journal of the Institute of Metals 35: 295.
64. Bradley AJ, Gregory CH (1927). "The Structure of certain Ternary Alloys". Nature 120: 678.
65. Westgren A (1932). "Zur Chemie der Legierungen". Angewandte Chemie 45: 33. doi:10.1002/ange.19320450202.
66. Bernal JD (1935). "The Electron Theory of Metals". Annual Reports on the Progress of Chemistry 32: 181.
67. Pauling L (1923). "The Crystal Structure of Magnesium Stannide". J. Amer. Chem. Soc. 45: 2777. doi:10.1021/ja01665a001.
68. Pauling L (1929). "The Principles Determining the Structure of Complex Ionic Crystals". J. Amer. Chem. Soc. 51: 1010. doi:10.1021/ja01379a006.
69. Dickinson RG, Raymond AL (1923). "The Crystal Structure of Hexamethylene-Tetramine". J. Amer. Chem. Soc. 45: 22. doi:10.1021/ja01654a003.
70. Müller A (1923). "The X-ray Investigation of Fatty Acids". Journal of the Chemical Society (London) 123: 2043.
71. Saville WB, Shearer G (1925). "An X-ray Investigation of Saturated Aliphatic Ketones". Journal of the Chemical Society (London) 127: 591.
72. Bragg WH (1925). "The Investigation of thin Films by Means of X-rays". Nature 115: 266. doi:10.1038/115266a0.
73. de Broglie M, Trillat JJ (1925). "Sur l'interprétation physique des spectres X d'acides gras". Comptes rendus hebdomadaires des séances de l'Académie des sciences 180: 1485.
74. Trillat JJ (1926). "Rayons X et Composeés organiques à longe chaine. Recherches spectrographiques sue leurs structures et leurs orientations". Annales de physique 6: 5.
75. Caspari WA (1928). "Crystallography of the Aliphatic Dicarboxylic Acids". Journal of the Chemical Society (London) ?: 3235.
76. Müller A (1928). "X-ray Investigation of Long Chain Compounds (n. Hydrocarbons)". Proc. R. Soc. Lond. 120: 437. doi:10.1098/rspa.1928.0158.
77. Piper SH (1929). "Some Examples of Information Obtainable from the long Spacings of Fatty Acids". Transactions of the Faraday Society 25: 348. doi:10.1039/tf9292500348.
78. Müller A (1929). "The Connection between the Zig-Zag Structure of the Hydrocarbon Chain and the Alternation in the Properties of Odd and Even Numbered Chain Compounds". Proc. R. Soc. Lond. 124: 317. doi:10.1098/rspa.1929.0117.
79. Robertson JM (1936). "An X-ray Study of the Phthalocyanines, Part II". Journal of the Chemical Society: 1195.
80. Crowfoot Hodgkin D (1935). "X-ray Single Crystal Photographs of Insulin". Nature 135: 591. doi:10.1038/135591a0.
81. Kendrew J. C. et al. (1958-03-08). "A Three-Dimensional Model of the Myoglobin Molecule Obtained by X-Ray Analysis". Nature 181: 662. doi:10.1038/181662a0.
82. "PDB Statistics". RCSB Protein Data Bank. Retrieved 2007-05-03.
83. Scapin G (2006). "Structural biology and drug discovery". Curr. Pharm. Des. 12 (17): 2087. doi:10.2174/138161206777585201. PMID 16796557.
84. Lundstrom K (2006). "Structural genomics for membrane proteins". Cell. Mol. Life Sci. 63 (22): 2597. doi:10.1007/s00018-006-6252-y. PMID 17013556.
85. Lundstrom K (2004). "Structural genomics on membrane proteins: mini review". Comb. Chem. High Throughput Screen. 7 (5): 431. PMID 15320710.
86. Greninger AB (1935). Zeitschrift fur Kristallographie 91: 424.
87. An analogous diffraction pattern may be observed by shining a laser pointer on a compact disc or DVD; the periodic spacing of the CD tracks corresponds to the periodic arrangement of atoms in a crystal.
88. Geerlof A et al. (2006). "The impact of protein characterization in structural proteomics". Acta Crystallogr. D 62 (Pt 10): 1125. doi:10.1107/S0907444906030307. PMID 17001090.
89. Chernov AA (2003). "Protein crystals and their growth". J. Struct. Biol. 142 (1): 3. doi:10.1016/S1047-8477(03)00034-0. PMID 12718915.
90. Rupp B, Wang J (2004). "Predictive models for protein crystallization". Methods 34 (3): 390. doi:10.1016/j.ymeth.2004.03.031. PMID 15325656.
91. Chayen NE (2005). "Methods for separating nucleation and growth in protein crystallization". Prog. Biophys. Mol. Biol. 88 (3): 329. doi:10.1016/j.pbiomolbio.2004.07.007. PMID 15652248.
92. Stock D, Perisic O, Lowe J (2005). "Robotic nanolitre protein crystallisation at the MRC Laboratory of Molecular Biology.". Prog Biophys Mol Biol 88 (3): 311. doi:10.1016/j.pbiomolbio.2004.07.009. PMID 15652247.
93. Jeruzalmi D (2006). "First analysis of macromolecular crystals: biochemistry and x-ray diffraction". Methods Mol. Biol. 364: 43. doi:10.1385/1-59745-266-1:43. PMID 17172760.
94. Helliwell JR (2005). "Protein crystal perfection and its application". Acta Crystallogr. D Biol. Crystallogr. 61 (Pt 6): 793. doi:10.1107/S0907444905001368. PMID 15930642.
95. Ravelli RB, Garman EF (2006). "Radiation damage in macromolecular cryocrystallography". Curr. Opin. Struct. Biol. 16 (5): 624. doi:10.1016/j.sbi.2006.08.001. PMID 16938450.
96. Powell HR (1999). "The Rossmann Fourier autoindexing algorithm in MOSFLM.". Acta Crystallogr. D 55 (Pt 10): 1690. doi:10.1107/S0907444999009506. PMID 10531518.
97. Hauptman H (1997). "Phasing methods for protein crystallography". Curr. Opin. Struct. Biol. 7 (5): 672. doi:10.1016/S0959-440X(97)80077-2. PMID 9345626.
98. Usón I, Sheldrick GM (1999). "Advances in direct methods for protein crystallography". Curr. Opin. Struct. Biol. 9 (5): 643. doi:10.1016/S0959-440X(99)00020-2. PMID 10508770.
99. Taylor G (2003). "The phase problem". Acta Crystallogr. D 59: 1881. doi:10.1107/S0907444903017815.
100. Ealick SE (2000). "Advances in multiple wavelength anomalous diffraction crystallography". Current opinion in chemical biology 4 (5): 495. doi:10.1016/S1367-5931(00)00122-8. PMID 11006535.
101. Patterson AL (1935). "A Direct Method for the Determination of the Components of Interatomic Distances in Crystals". Zeitschrift für Kristallographie 90: 517.
### International Tables for Crystallography
• Theo Hahn, ed (2002). International Tables for Crystallography. Volume A, Space-group Symmetry (5 ed.). Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0792365909.
• Michael G. Rossmann and Eddy Arnold, ed (2001). International Tables for Crystallography. Volume F, Crystallography of biological molecules. Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0792368576.
• Theo Hahn, ed (1996). International Tables for Crystallography. Brief Teaching Edition of Volume A, Space-group Symmetry (4 ed.). Dordrecht: Kluwer Academic Publishers, for the International Union of Crystallography. ISBN 0792342526.
### Bound collections of articles
• Charles W. Carter and Robert M. Sweet., ed (1997). Macromolecular Crystallography, Part A (Methods in Enzymology, v. 276). San Diego: Academic Press. ISBN 0121821773.
• Charles W. Carter Jr., Robert M. Sweet., ed (1997). Macromolecular Crystallography, Part B (Methods in Enzymology, v. 277). San Diego: Academic Press. ISBN 0121821781.
• A. Ducruix and R. Giegé, ed (1999). Crystallization of Nucleic Acids and Proteins: A Practical Approach (2 ed.). Oxford: Oxford University Press. ISBN 0199636788.
### Textbooks
• Rupp B (2009). Biomolecular Crystallography: Principles, Practice and Application to Structural Biology. New York: Garland Science. ISBN 0815340818.
• Blow D (2002). Outline of Crystallography for Biologists. Oxford: Oxford University Press. ISBN 0198510519.
• Burns G., Glazer A M (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 0121457613.
• Clegg W (1998). Crystal Structure Determination (Oxford Chemistry Primer). Oxford: Oxford University Press. ISBN 0198559011.
• Cullity B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 0534553966.
• Drenth J (1999). Principles of Protein X-Ray Crystallography. New York: Springer-Verlag. ISBN 0387985875.
• Giacovazzo C et al. (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 0198555784.
• Glusker JP, Lewis M, Rossi M (1994). Crystal Structure Analysis for Chemists and Biologists. New York: VCH Publishers. ISBN 0471185434.
• Massa W (2004). Crystal Structure Determination. Berlin: Springer. ISBN 3540206442.
• McPherson A (1999). Crystallization of Biological Macromolecules. Cold Spring Harbor, NY: Cold Spring Harbor Laboratory Press. ISBN 0879696176.
• McPherson A (2003). Introduction to Macromolecular Crystallography. John Wiley & Sons. ISBN 0471251224.
• McRee DE (1993). Practical Protein Crystallography. San Diego: Academic Press. ISBN 0124860508.
• O'Keeffe M, Hyde B G (1996). Crystal Structures; I. Patterns and Symmetry. Washington, DC: Mineralogical Society of America, Monograph Series. ISBN 0939950405.
• Rhodes G (2000). Crystallography Made Crystal Clear. San Diego: Academic Press. ISBN 0125870728. , PDF copy of select chapters
• Zachariasen WH (1945). Theory of X-ray Diffraction in Crystals. New York: Dover Publications.
### Applied computational data analysis
• Young, R.A., ed (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 0198555776.
### Historical
• Friedrich W (1922). "Die Geschichte der Auffindung der Röntgenstrahlinterferenzen". Die Naturwissenschaften 10: 363. doi:10.1007/BF01565289.
• Lonsdale K (1949). Crystals and X-rays. New York: D. van Nostrand.
• Bragg W L, Phillips D C and Lipson H (1992). The Development of X-ray Analysis. New York: Dover. ISBN 0486673162.
• Ewald PP, editor, and numerous crystallographers (1962). Fifty Years of X-ray Diffraction. Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V..
• Ewald, P. P., editor 50 Years of X-Ray Diffraction (Reprinted in pdf format for the IUCr XVIII Congress, Glasgow, Scotland, International Union of Crystallography).
• Bijvoet JM, Burgers WG, Hägg G, eds. (1969). Early Papers on Diffraction of X-rays by Crystals (Volume I). Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V..
• Bijvoet JM, Burgers WG, Hägg G, eds. (1972). Early Papers on Diffraction of X-rays by Crystals (Volume II). Utrecht: published for the International Union of Crystallography by A. Oosthoek's Uitgeversmaatschappij N.V.. |
# detectseparation v0.1
## Detect and Check for Separation and Infinite Maximum Likelihood Estimates
Provides pre-fit and post-fit methods for detecting separation and infinite maximum likelihood estimates in generalized linear models with categorical responses. The pre-fit methods apply to binomial-response generalized linear models such as logit, probit and cloglog regression, and can be directly supplied as fitting methods to the glm() function. They solve the linear programming problems for the detection of separation developed in Konis (2007, <https://ora.ox.ac.uk/objects/uuid:8f9ee0d0-d78e-4101-9ab4-f9cbceed2a2a>) using 'ROI' <https://cran.r-project.org/package=ROI> or 'lpSolveAPI' <https://cran.r-project.org/package=lpSolveAPI>. The post-fit methods apply to models with categorical responses, including binomial-response generalized linear models and multinomial-response models, such as baseline category logits and adjacent category logits models; for example, the models implemented in the 'brglm2' <https://cran.r-project.org/package=brglm2> package. The post-fit methods successively refit the model with an increasing number of iteratively reweighted least squares iterations, and monitor the ratio of the estimated standard error for each parameter to what it was in the first iteration. According to the results in Lesaffre & Albert (1989, <https://www.jstor.org/stable/2345845>), divergence of those ratios indicates data separation.
# detectseparation
detectseparation provides pre-fit and post-fit methods for the detection of separation and of infinite maximum likelihood estimates in binomial response generalized linear models.
The key methods are detect_separation and check_infinite_estimates, and this vignette describes their use.
## Installation
You can install the released version of detectseparation from CRAN with:
install.packages("detectseparation")
And the development version from GitHub with:
# install.packages("devtools")
devtools::install_github("ikosmidis/detectseparation")
## Detecting and checking for Infinite maximum likelihood estimates
Heinze and Schemper (2002) used a logistic regression model to analyze data from a study on endometrial cancer (see Agresti 2015, Section 5.7, or ?endometrial for more details on the data set). Below, we refit the model in Heinze and Schemper (2002) in order to demonstrate the functionality that detectseparation provides.
library("detectseparation")
data("endometrial", package = "detectseparation")
endo_glm <- glm(HG ~ NV + PI + EH, family = binomial(), data = endometrial)
theta_mle <- coef(endo_glm)
summary(endo_glm)
#>
#> Call:
#> glm(formula = HG ~ NV + PI + EH, family = binomial(), data = endometrial)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -1.50137 -0.64108 -0.29432 0.00016 2.72777
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) 4.30452 1.63730 2.629 0.008563 **
#> NV 18.18556 1715.75089 0.011 0.991543
#> PI -0.04218 0.04433 -0.952 0.341333
#> EH -2.90261 0.84555 -3.433 0.000597 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 104.903 on 78 degrees of freedom
#> Residual deviance: 55.393 on 75 degrees of freedom
#> AIC: 63.393
#>
#> Number of Fisher Scoring iterations: 17
The maximum likelihood (ML) estimate of the parameter for NV is actually infinite. The reported, apparently finite value is merely due to false convergence of the iterative estimation procedure. The same is true for the estimated standard error; hence, the reported value 0.011 for the z-statistic cannot be trusted for inference on the size of the effect for NV.
### detect_separation
detect_separation is a pre-fit method, in the sense that it does not need to estimate the model in order to detect separation and/or identify infinite estimates. For example
endo_sep <- glm(HG ~ NV + PI + EH, data = endometrial,
family = binomial("logit"),
method = "detect_separation")
endo_sep
#> Implementation: ROI | Solver: lpsolve
#> Separation: TRUE
#> Existence of maximum likelihood estimates
#> (Intercept) NV PI EH
#> 0 Inf 0 0
#> 0: finite value, Inf: infinity, -Inf: -infinity
So, the actual maximum likelihood estimates are
coef(endo_glm) + coef(endo_sep)
#> (Intercept) NV PI EH
#> 4.3045178 Inf -0.0421834 -2.9026056
and the estimated standard errors are
coef(summary(endo_glm))[, "Std. Error"] + abs(coef(endo_sep))
#> (Intercept) NV PI EH
#> 1.63729861 Inf 0.04433196 0.84555156
### check_infinite_estimates
Lesaffre and Albert (1989, Section 4) describe a procedure that can hint at the occurrence of infinite estimates. In particular, the model is successively refitted, increasing the maximum number of allowed iteratively reweighted least squares iterations at each step. The estimated asymptotic standard errors from each step are then divided by the corresponding ones from the first fit. If the sequence of ratios diverges, then the maximum likelihood estimate of the corresponding parameter is plus or minus infinity. The following code chunk applies this process to endo_glm.
(inf_check <- check_infinite_estimates(endo_glm))
#> (Intercept) NV PI EH
#> [1,] 1.000000 1.000000e+00 1.000000 1.000000
#> [2,] 1.424352 2.092407e+00 1.466885 1.672979
#> [3,] 1.590802 8.822303e+00 1.648003 1.863563
#> [4,] 1.592818 6.494231e+01 1.652508 1.864476
#> [5,] 1.592855 7.911035e+02 1.652591 1.864492
#> [6,] 1.592855 1.588973e+04 1.652592 1.864493
#> [7,] 1.592855 5.298760e+05 1.652592 1.864493
#> [8,] 1.592855 2.332822e+07 1.652592 1.864493
#> [9,] 1.592855 2.332822e+07 1.652592 1.864493
#> [10,] 1.592855 2.332822e+07 1.652592 1.864493
#> [11,] 1.592855 2.332822e+07 1.652592 1.864493
#> [12,] 1.592855 2.332822e+07 1.652592 1.864493
#> [13,] 1.592855 2.332822e+07 1.652592 1.864493
#> [14,] 1.592855 2.332822e+07 1.652592 1.864493
#> [15,] 1.592855 2.332822e+07 1.652592 1.864493
#> [16,] 1.592855 2.332822e+07 1.652592 1.864493
#> [17,] 1.592855 2.332822e+07 1.652592 1.864493
#> [18,] 1.592855 2.332822e+07 1.652592 1.864493
#> [19,] 1.592855 2.332822e+07 1.652592 1.864493
#> [20,] 1.592855 2.332822e+07 1.652592 1.864493
#> attr(,"class")
#> [1] "inf_check"
plot(inf_check)
# References
Agresti, A. 2015. Foundations of Linear and Generalized Linear Models. Wiley Series in Probability and Statistics. Wiley.
Heinze, G., and M. Schemper. 2002. “A Solution to the Problem of Separation in Logistic Regression.” Statistics in Medicine 21: 2409–19.
Lesaffre, E., and A. Albert. 1989. “Partial Separation in Logistic Discrimination.” Journal of the Royal Statistical Society. Series B (Methodological) 51 (1): 109–16. http://www.jstor.org/stable/2345845.
## Functions in detectseparation
- detectseparation: Methods for Detecting and Checking for Separation and Infinite Maximum Likelihood Estimates
- endometrial: Histology grade and risk factors for 79 cases of endometrial cancer
- check_infinite_estimates: Generic method for checking for infinite estimates
- check_infinite_estimates.glm: A simple diagnostic of whether the maximum likelihood estimates are infinite
- detect_separation_control: Auxiliary function for the glm interface when method is detect_separation
- detect_separation: Method for glm that tests for data separation and finds which parameters have infinite maximum likelihood estimates in generalized linear models with binomial responses
- lizards: Habitat preferences of lizards
# How do powers affect asymptotics in generating functions?
Let $a_n$ be a sequence of non-negative real numbers, and $A(x) = \sum_{n=0}^{\infty} a_n \frac{x^n}{n!}$ its exponential generating function. Also, suppose $B(x) = \sum_{n=0}^{\infty} b_n \frac{x^n}{n!}$ is such that, for some $k>0$, $[B(x)]^k = A(x)$. If an explicit formula exists for the $a_n$ (or just a formula for the asymptotic behavior), what can be derived about the asymptotic behavior of $b_n$, given $k$?
I'm also interested in the analogous problem for ordinary generating functions.
You should look at Flajolet-Sedgewick, Chapter VII (thm VII.8 is relevant) - they talk about asymptotics of algebraic generating functions, of which yours is both a special case ($k$-th root is a very simple algebraic function), and more general ($A(x)$ is not necessarily a polynomial), but they may have a lot more. |
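For a concrete special case (my sketch, via standard singularity analysis; the referenced chapter treats this in much greater generality): suppose $A$ has radius of convergence $\rho$, is analytically continuable beyond $|x| = \rho$ except at $x = \rho$, and
$$A(x) \sim \frac{c}{(1 - x/\rho)^{\alpha}} \qquad (x \to \rho)$$
with $\alpha > 0$. Then $B(x) = A(x)^{1/k} \sim c^{1/k} (1 - x/\rho)^{-\alpha/k}$ near $x = \rho$, and the transfer theorem gives
$$[x^n]\, B(x) \sim \frac{c^{1/k}}{\Gamma(\alpha/k)}\, n^{\alpha/k - 1}\, \rho^{-n},$$
so for the exponential generating function $b_n \sim \frac{c^{1/k}}{\Gamma(\alpha/k)}\, n^{\alpha/k - 1}\, \rho^{-n}\, n!$. The same coefficient-extraction step answers the ordinary-generating-function version.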
## Isotope and half-life help
The half-life of a radioactive isotope is 4.3 days. What fraction N/N0 of the initial nuclei remains after 2.0 days? |
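A worked solution (my own, using the standard exponential decay law): the surviving fraction after time $t$ is
$$\frac{N}{N_0} = \left(\frac{1}{2}\right)^{t/t_{1/2}} = 2^{-2.0/4.3} \approx 0.72,$$
so about 72% of the initial nuclei remain after 2.0 days.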
# Application on the limit definition of e
1. Oct 27, 2013
### fblues
Application on the limit definition of "e"
Hi, I have known that:
(i)$(1+\frac{a}{n})^n=((1+\frac{a}{n})^\frac{n}{a})^a\to e^a$
(ii)$(1-\frac{1}{n})^n=(\frac{n-1}{n})^n=(\frac{1}{\frac{n}{n-1}})^{(n-1)+1}=(\frac{1}{1+\frac{1}{n-1}})^{(n-1)}\cdot (\frac{1}{1+\frac{1}{n-1}}) \to \frac{1}{e}\cdot 1$
With above two facts, I wanted to show $(\frac{1}{1-\frac{t}{\sqrt{\frac{n}{2}}}})^\frac{n}{2} \to e^{\sqrt{\frac{n}{2}}t}\cdot e^\frac{t^2}{2}$ as n goes to infinity, for a fixed positive real t.
However, I am continuously getting $e^{\sqrt{\frac{n}{2}}t}\cdot e^{t^2}$ instead of above result and could not find the reason on the following my argument:
$(\frac{\sqrt{\frac{n}{2}}}{\sqrt{\frac{n}{2}}-t})^\frac{n}{2}=(\frac{(\frac{\sqrt{\frac{n}{2}}}{t}-1)+1}{\frac{\sqrt{\frac{n}{2}}}{t}-1})^\frac{n}{2}=(1+\frac{1}{\frac{\sqrt{\frac{n}{2}}}{t}-1})^{(\frac{\sqrt{\frac{n}{2}}}{t}-1)\sqrt{\frac{n}{2}}t+\sqrt{\frac{n}{2}}t}=(1+\frac{1}{\frac{\sqrt{\frac{n}{2}}}{t}-1})^{(\frac{\sqrt{\frac{n}{2}}}{t}-1)\sqrt{\frac{n}{2}}t}\cdot (1+\frac{1}{\frac{\sqrt{\frac{n}{2}}}{t}-1})^{(\frac{\sqrt{\frac{n}{2}}}{t}-1)t^2+t^2}$
$=(1+\frac{1}{\frac{\sqrt{\frac{n}{2}}}{t}-1})^{(\frac{\sqrt{\frac{n}{2}}}{t}-1)\sqrt{\frac{n}{2}}t}\cdot (1+\frac{1}{\frac{\sqrt{\frac{n}{2}}}{t}-1})^{(\frac{\sqrt{\frac{n}{2}}}{t}-1)t^2}\cdot (1+\frac{1}{\frac{\sqrt{\frac{n}{2}}}{t}-1})^{t^2} \to e^{\sqrt{\frac{n}{2}}t}\cdot e^{t^2}\cdot 1$ as n goes to infinity.
I would be very appreciative if you could point out my mistake.
Thank you very much.
2. Oct 28, 2013
### CompuChip
Hi fblues. How can there be an $n$ on the right hand side, after you have taken the limit of $n \to \infty$?
3. Oct 28, 2013
### Office_Shredder
Staff Emeritus
I think the statement that you want is something like
$$\lim_{n\to \infty} \left(\frac{1}{1-t/\sqrt{n/2}} \right)^{n/2} e^{-\sqrt{n/2} t} = e^{t^2/2}$$
(I don't know if this is the correct statement, but it is what your statement should look like.)
4. Oct 28, 2013
### fblues
To CompuChip:
Thank you for letting me know. I tried to split off the part that I don't understand from the original problem, and made a mistake in the process. BTW, it seems Office_Shredder knows the original one.
To Office_Shredder:
Yes. The problem comes from showing that the mgf of a (standardized) Chi_sq(n) becomes the mgf of Normal(0,1) as n goes to infinity. I think the general approach is to use a Taylor expansion, but I tried to employ the limit definition of e instead. Do you have an idea for this?
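A sketch of where the missing $\tfrac{1}{2}$ hides (my addition, not from the thread): with $m = \frac{\sqrt{n/2}}{t} - 1$, the first factor in the last display above is $\big(1+\frac{1}{m}\big)^{m\sqrt{n/2}\,t}$, and
$$m \log\Big(1 + \frac{1}{m}\Big) = 1 - \frac{1}{2m} + O(m^{-2}),$$
so that factor equals $e^{\sqrt{n/2}\,t}\, e^{-\sqrt{n/2}\,t/(2m) + o(1)} = e^{\sqrt{n/2}\,t}\, e^{-t^2/2 + o(1)}$, since $\sqrt{n/2}\,t/(2m) \to t^2/2$. Replacing that factor by $e^{\sqrt{n/2}\,t}$ alone, i.e. applying $(1+\frac{1}{m})^m \to e$ with an exponent that itself grows with $n$, silently drops this $e^{-t^2/2}$. Combined with the $e^{t^2}$ from the second factor, the correct limit of the whole product times $e^{-\sqrt{n/2}\,t}$ is $e^{t^2/2}$, matching Office_Shredder's statement.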
# One LSTM for two currencies or two LSTM one for each currency?
Suppose I am building an LSTM model for currency forecasting. Assume that I am working on two rates: USD vs GBP and USD vs EUR. Should I build one LSTM model with input size of two features (GBP and EUR)? Or should I build two LSTM models, one for GBP prediction and the other for EUR? And how to decide? |
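A minimal sketch of the two designs (my own, in Keras; the layer sizes, timestep count, and data shapes are assumptions, not part of the question):

```python
# Sketch of the two designs; layer sizes and timesteps are assumed values.
import tensorflow as tf

timesteps = 30

# Option 1: one joint model; the input has two features (GBP and EUR),
# and the output predicts the next value of both rates together.
joint = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(timesteps, 2)),
    tf.keras.layers.Dense(2),
])
joint.compile(optimizer="adam", loss="mse")

# Option 2: two separate single-feature models, one per currency.
def single_currency_model():
    model = tf.keras.Sequential([
        tf.keras.layers.LSTM(32, input_shape=(timesteps, 1)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

gbp_model = single_currency_model()
eur_model = single_currency_model()
```

A common way to decide is empirical: the joint model can exploit correlation between the two rates (both are quoted against USD), while separate models are simpler and cannot interfere with each other; compare the two setups on held-out validation error.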
Peter Denton, Stephen Parke, Xining Zhang, and I have just uploaded to the arXiv the short unpublished note “Eigenvectors from eigenvalues“. This note gives two proofs of a general eigenvector identity observed recently by Denton, Parke and Zhang in the course of some quantum mechanical calculations. The identity is as follows:
Theorem 1 Let ${A}$ be an ${n \times n}$ Hermitian matrix, with eigenvalues ${\lambda_1(A),\dots,\lambda_n(A)}$. Let ${v_i}$ be a unit eigenvector corresponding to the eigenvalue ${\lambda_i(A)}$, and let ${v_{i,j}}$ be the ${j^{th}}$ component of ${v_i}$. Then
$\displaystyle |v_{i,j}|^2 \prod_{k=1; k \neq i}^n (\lambda_i(A) - \lambda_k(A)) = \prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M_j))$
where ${M_j}$ is the ${n-1 \times n-1}$ Hermitian matrix formed by deleting the ${j^{th}}$ row and column from ${A}$.
For instance, if we have
$\displaystyle A = \begin{pmatrix} a & X^* \\ X & M \end{pmatrix}$
for some real number ${a}$, ${n-1}$-dimensional vector ${X}$, and ${n-1 \times n-1}$ Hermitian matrix ${M}$, then we have
$\displaystyle |v_{i,1}|^2 = \frac{\prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M))}{\prod_{k=1; k \neq i}^n (\lambda_i(A) - \lambda_k(A))} \ \ \ \ \ (1)$
assuming that the denominator is non-zero.
Once one is aware of the identity, it is not so difficult to prove it; we give two proofs, each about half a page long, one of which is based on a variant of the Cauchy-Binet formula, and the other based on properties of the adjugate matrix. But perhaps it is surprising that such a formula exists at all; one does not normally expect to learn much information about eigenvectors purely from knowledge of eigenvalues. In the random matrix theory literature, for instance in this paper of Erdos, Schlein, and Yau, or this later paper of Van Vu and myself, a related identity has been used, namely
$\displaystyle |v_{i,1}|^2 = \frac{1}{1 + \| (M-\lambda_i(A))^{-1} X \|^2}, \ \ \ \ \ (2)$
but it is not immediately obvious that one can derive the former identity from the latter. (I do so below the fold; we ended up not putting this proof in the note as it was longer than the two other proofs we found. I also give two other proofs below the fold, one from a more geometric perspective and one proceeding via Cramer’s rule.) It was certainly something of a surprise to me that there is no explicit appearance of the ${a,X}$ components of ${A}$ in the formula (1) (though they do indirectly appear through their effect on the eigenvalues ${\lambda_k(A)}$; for instance from taking traces one sees that ${a = \sum_{k=1}^n \lambda_k(A) - \sum_{k=1}^{n-1} \lambda_k(M)}$).
One can get some feeling of the identity (1) by considering some special cases. Suppose for instance that ${A}$ is a diagonal matrix with all distinct entries. The upper left entry ${a}$ of ${A}$ is one of the eigenvalues of ${A}$. If it is equal to ${\lambda_i(A)}$, then the eigenvalues of ${M}$ are the other ${n-1}$ eigenvalues of ${A}$, and now the left and right-hand sides of (1) are equal to ${1}$. At the other extreme, if ${a}$ is equal to a different eigenvalue of ${A}$, then ${\lambda_i(A)}$ now appears as an eigenvalue of ${M}$, and both sides of (1) now vanish. More generally, if we order the eigenvalues ${\lambda_1(A) \leq \dots \leq \lambda_n(A)}$ and ${\lambda_1(M) \leq \dots \leq \lambda_{n-1}(M)}$, then the Cauchy interlacing inequalities tell us that
$\displaystyle 0 \leq \lambda_i(A) - \lambda_k(M) \leq \lambda_i(A) - \lambda_k(A)$
for ${1 \leq k < i}$, and
$\displaystyle \lambda_i(A) - \lambda_{k+1}(A) \leq \lambda_i(A) - \lambda_k(M) < 0$
for ${i \leq k \leq n-1}$, so that the right-hand side of (1) lies between ${0}$ and ${1}$, which is of course consistent with (1) as ${v_i}$ is a unit vector. Thus the identity relates the coefficient sizes of an eigenvector with the extent to which the Cauchy interlacing inequalities are sharp.
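Before turning to the proofs, the identity is also easy to check numerically; a quick NumPy sketch (my addition, not part of the note):

```python
# Quick numerical check of Theorem 1 on a random Hermitian matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = (B + B.conj().T) / 2          # a random Hermitian matrix

lam, V = np.linalg.eigh(A)        # V[:, i] is a unit eigenvector for lam[i]

i, j = 2, 0                       # which eigenvalue / which coordinate to test
Mj = np.delete(np.delete(A, j, axis=0), j, axis=1)   # delete j-th row and column
mu = np.linalg.eigvalsh(Mj)

lhs = abs(V[j, i])**2 * np.prod([lam[i] - lam[k] for k in range(n) if k != i])
rhs = np.prod(lam[i] - mu)

print(lhs, rhs)                   # the two sides agree up to floating-point error
```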
— 1. Relating the two identities —
We now show how (1) can be deduced from (2). By a limiting argument, it suffices to prove (1) in the case when ${\lambda_i(A)}$ is not an eigenvalue of ${M}$. Without loss of generality we may take ${i=n}$. By subtracting the matrix ${\lambda_n(A) I_n}$ from ${A}$ (and ${\lambda_n(A) I_{n-1}}$ from ${M_j}$), thus shifting all the eigenvalues down by ${\lambda_n(A)}$, we may also assume without loss of generality that ${\lambda_n(A)=0}$. So now we wish to show that
$\displaystyle |v_{n,1}|^2 \prod_{k=1}^{n-1} \lambda_k(A) = \prod_{k=1}^{n-1} \lambda_k(M).$
The right-hand side is just ${\mathrm{det}(M)}$. If one differentiates the characteristic polynomial
$\displaystyle \mathrm{det}(A - \lambda I_n) = \prod_{k=1}^n (\lambda_k(A) - \lambda) = - \lambda \prod_{k=1}^{n-1} (\lambda_k(A) - \lambda)$
at ${\lambda=0}$, one sees that
$\displaystyle \prod_{k=1}^{n-1} \lambda_k(A) = -\frac{d}{d\lambda}\mathrm{det}(A - \lambda I_n)|_{\lambda=0}.$
Finally, (2) can be rewritten as
$\displaystyle |v_{n,1}|^2 = \frac{1}{1 + X^* M^{-2} X}$
so our task is now to show that
$\displaystyle \frac{d}{d\lambda}\mathrm{det}(A - \lambda I_n)|_{\lambda=0} = - \mathrm{det}(M) ( 1 + X^* M^{-2} X ). \ \ \ \ \ (3)$
By Schur complement, we have
$\displaystyle \mathrm{det}(A - \lambda I_n) = \mathrm{det}(M - \lambda I_{n-1}) ( a - \lambda - X^* (M - \lambda I_{n-1})^{-1} X ). \ \ \ \ \ (4)$
Since ${\lambda=0}$ is an eigenvalue of ${A}$, but not of ${M}$ (by hypothesis), the factor ${a - \lambda - X^* (M - \lambda I_{n-1})^{-1} X}$ vanishes when ${\lambda=0}$. If we then differentiate (4) in ${\lambda}$ and set ${\lambda=0}$ we obtain (3) as desired.
— 2. A geometric proof —
Here is a more geometric way to think about the identity. One can view ${\lambda_i(A) - A}$ as a linear operator on ${{\bf C}^n}$ (mapping ${w}$ to ${(\lambda_i(A) w - Aw)}$ for any vector ${w}$); it then also acts on all the exterior powers ${\bigwedge^k {\bf C}^n}$ by mapping ${w_1 \wedge \dots \wedge w_k}$ to ${(\lambda_i(A) w_1 - Aw_1) \wedge \dots \wedge (\lambda_i(A) w_k - Aw_k)}$ for all vectors ${w_1,\dots,w_k}$. In particular, if one evaluates ${\lambda_i(A) - A}$ on the basis ${v_1 \wedge \dots \wedge v_{j-1} \wedge v_{j+1} \wedge \dots \wedge v_n}$ of ${\bigwedge^{n-1} {\bf C}^n}$ induced by the orthogonal eigenbasis ${v_1,\dots,v_n}$, we see that the action of ${\lambda_i(A) - A}$ on ${\bigwedge^{n-1} {\bf C}^n}$ is rank one, with
$\displaystyle \langle (\lambda_i(A) - A) \omega, \omega \rangle = \prod_{k=1; k \neq i}^n (\lambda_i(A) - \lambda_k(A)) |\omega \wedge v_i|^2$
for any ${\omega \in \bigwedge^{n-1} {\bf C}^n}$, where ${\langle,\rangle}$ is the inner product on ${\bigwedge^{n-1} {\bf C}^n}$ induced by the standard inner product on ${{\bf C}^n}$. If we now apply this to the ${n-1}$-form ${\omega = e_1 \wedge \dots \wedge e_{j-1} \wedge e_{j+1} \wedge \dots \wedge e_n}$, we have ${|\omega \wedge v_i| = |v_{i,j}|}$, while ${(\lambda_i(A)-A) \omega}$ is equal to ${\mathrm{det}(\lambda_i(A) I_{n-1} - M) \omega}$ plus some terms orthogonal to ${\omega}$. Since ${\mathrm{det}(\lambda_i(A) I_{n-1} - M ) = \prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M))}$, Theorem 1 follows.
— 3. A proof using Cramer’s rule —
By a limiting argument we can assume that all the eigenvalues of ${A}$ are simple. From the spectral theorem we can compute the resolvent ${(\lambda I_n - A)^{-1}}$ for ${\lambda \neq \lambda_1(A),\dots,\lambda_n(A)}$ as
$\displaystyle (\lambda I_n - A)^{-1} = \sum_{k=1}^n \frac{1}{\lambda - \lambda_k(A)} v_k v_k^*.$
Extracting the ${(j,j)}$ component of both sides and using Cramer’s rule, we conclude that
$\displaystyle \frac{\mathrm{det}(\lambda I_{n-1} - M_j)}{\mathrm{det}(\lambda I_n - A)} = \sum_{k=1}^n \frac{1}{\lambda - \lambda_k(A)} |v_{k,j}|^2$
or in terms of eigenvalues
$\displaystyle \frac{\prod_{k=1}^{n-1} (\lambda - \lambda_k(M_j)) }{\prod_{k=1}^{n} (\lambda - \lambda_k(A)) } = \sum_{k=1}^n \frac{1}{\lambda - \lambda_k(A)} |v_{k,j}|^2.$
Both sides are rational functions with a simple pole at the eigenvalues ${\lambda_i(A)}$. Extracting the residue at ${\lambda = \lambda_i(A)}$ we conclude that
$\displaystyle \frac{\prod_{k=1}^{n-1} (\lambda_i(A) - \lambda_k(M_j)) }{\prod_{k=1; k \neq i}^{n} (\lambda_i(A) - \lambda_k(A)) } = |v_{i,j}|^2$
and Theorem 1 follows. (Note that this approach also gives a formula for ${v_{i,j} \overline{v_{i,j'}}}$ for ${j,j'=1,\dots,n}$, although the formula becomes messier when ${j \neq j'}$ because the relevant minor of ${\lambda I_n}$ is no longer a scalar multiple of the identity ${I_{n-1}}$.) |
Skip to 0 minutes and 6 seconds This video’s titled Attractor Point. So one common way of manipulating geometry in a point list is to use something called an Attractor Point. And that’s simply a point that we create within the scene, and that we input. So here I’m inputting a point and saving it in the variable attractor point and then create my point list. And then loop through my point list. And the first thing I’m going to do is measure the distance between each point within that list as I go through the loop one at a time.
Skip to 0 minutes and 54 seconds And using a function which we haven't seen before called rs.Distance, and that simply measures the distance between two points in space. So whatever my current point is in my list to the attractor point, and I'm going to save it in the variable distance and then first I'm just going to print it out. So let's run that.
Skip to 1 minute and 20 seconds So, select the attractor point creates my point, matrix C.
Skip to 1 minute and 28 seconds And then I can see my printout. And the numbers go from 11 point something down to one point something is the lowest.
Skip to 1 minute and 40 seconds And so what that's doing is it's measuring the distance from that point to every point within the matrix C. So if you can imagine a sort of line between all of these points, this printed-out list of distances makes a lot of sense, because my point list starts its generation over here. So these points are going to have a higher value, and then as I get closer to this point, my attractor point, those values are going to go down.
Skip to 2 minutes and 22 seconds So let’s do something with that, that distance generator.
Skip to 2 minutes and 29 seconds So I could use that to let’s say, create a circle, add a circle on each, using as its origin point, the current point within the list. And then, let’s just start with the sort of, raw distance value.
Skip to 2 minutes and 49 seconds And we’ll see what that generates.
Skip to 2 minutes and 59 seconds Okay, so it generates a pretty crazy pattern. But if we think about it, it starts to make a lot of sense my circles which are furthest from the attractor point or largest. So their radius, which is that value there and you can see it’s going to go back and intersect the attractor point because that radius value is that distance from that point to that point. So the circumference of that circle’s is always going to touch that attractor point.
Skip to 3 minutes and 38 seconds I can undo that. So if we wanted to lessen its effect, like we did with the i value and the rotation, I could put a divider in here. So let's divide it by, let's say, 20 to start with, and we could also print out to see what those values are going to be.
Skip to 4 minutes and 11 seconds So now, I’m getting a matrix C of circles whose size diminishes as they get closer to the attractor point. And of course this is going to change if I relocate the attractor point, it’s going to have a different effect on the matrix C.
Skip to 4 minutes and 38 seconds So, attractor points are a very interesting system that we'll continue to use in different ways throughout the rest of the course. One thing maybe to work on, to think about: if you take the previous lesson, in which we copied the closed planar curve to the points and then rotated them based on the i, how could you use the distance instead to do a rotation? Or another thing might be a scale. How could you use the distance value that you're measuring from the attractor point to the other points as a way to scale, maybe something besides a circle? That could be a really interesting exercise.
Skip to 5 minutes and 19 seconds And you’ll have to like with this, you’re going to have to probably divide the distance value or create some mathematics that get it in the range that are workable. But should be fairly straightforward and would be also a good thing to do for the next assignment.
# attractor point
what’s in this video
In this video I demonstrate a common way of manipulating geometry using an attractor point. During the demonstration I introduce a function that we have not used before in the course—the rs.Distance function.
#### check your knowledge
##### [Pause the video and take a moment to practice using attractor points]
Use the attractor point system in the Nested Loop: 2D Point Matrix code to manipulate the rotational angle of the transformed geometry (a starting sketch follows the tutorial code below).
#### tutorial code
#ATTRACTOR POINT
import rhinoscriptsyntax as rs

#create an empty list
ptList = []

#input a point to use as an attractor
attrPt = rs.GetObject('select an attractor point', rs.filter.point)

#incremental loop to generate points
for i in range(10):
    for j in range(10):
        #define x in terms of i
        #define y in terms of j
        x = i
        y = j
        z = 0
        rs.AddPoint(x, y, z)
        #rs.AddTextDot((x,y,z), (x,y,z))
        #save point values in a list
        ptList.append((x, y, z))

#loop through point list and print out index number and values
for i in range(len(ptList)):
    #print i, ':', ptList[i]
    #rs.AddTextDot(i, ptList[i])

    ####CREATE TRANSFORMATION OF GEOMETRY####
    #measure distance between attractor point and current point in the list
    distance = rs.Distance(ptList[i], attrPt)
    print distance/20
    #create circle using distance value as radius
    rs.AddCircle(ptList[i], distance/20)
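As a starting point for the rotation exercise mentioned above, here is a sketch (my own, not from the course; it assumes a closed planar curve sits at the origin, and the scaling factor 10 is an arbitrary choice):

#ATTRACTOR-DRIVEN ROTATION (sketch)
import rhinoscriptsyntax as rs

attrPt = rs.GetObject('select an attractor point', rs.filter.point)
curve = rs.GetObject('select a closed planar curve', rs.filter.curve)

for i in range(10):
    for j in range(10):
        pt = (i, j, 0)
        #copy the base curve to the current grid point
        newCurve = rs.CopyObject(curve, pt)
        #distance to the attractor drives the rotation angle (in degrees)
        angle = rs.Distance(pt, attrPt) * 10
        rs.RotateObject(newCurve, pt, angle)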
# Primes and Parity
This problem is motivated by the polymath4 project. There, the aim was to find an efficient deterministic algorithm for finding a prime larger than $N$. The hope was to find a polynomial algorithm in $n=\log N$ which can be done assuming the truth of either plausible but intractable number theory conjectures or plausible but intractable problems in computational complexity. What was achieved was an algorithm that runs in time $N^{1/2-c}$ for some small $c>0$ to find the parity of the number of primes in an arbitrary interval of integers smaller than $N$. In view of this, identifying a single interval with an odd number of primes could be useful. Here are some questions regarding primes and parity.
### 1)
Let $N$ be an integer. consider the intervals $[N,2N]$, $[2N,3N], \dots$ $[kN,(k+1)N]$ What is the smallest $k$ that we can guarantee that one of these intervals contain an odd number of primes?
Based on Cramer's probabilistic modeling we can expect $k=a \log N$ to work for every $N$ and some constant $a$. Results about gaps between primes assert that when $k$ is exponential in $N$ we can find such an interval with one, hence an odd number of, primes. (For that we need two consecutive large gaps, apparently this is known but I am not aware of an elementary argument as for one gap.)
Is there some hope to prove it for $k=N^{100}$, $k=N$? $k=N^{1/2}$? A proof for $k=N^{1/2-c}$ will allow us by divide and conquer to find a prime $p$ larger than $N$ in time $p^{1/2-c}$.
### 2)
For which of the following sequences of intervals $[a(n),2a(n)]$ would it be possible to prove that (i) there are infinitely many cases of an odd number of primes; (ii) this occurs in half the cases?
2.1 $a(n)=n$ or an a.p. (I think (i) is ok);
2.2 $a(n)=p_n$;
2.3 $a_n=n^2$;
2.4 $a_n=2^n$
Is showing that there are infinitely many $n$s for which there is an odd number of $n$-digit primes entirely hopeless (like Cramer's conjecture)?
### 3)
Let $p_n$ be the $n$th prime. What can be said/proved about the zeta-like function $$\prod_{k=1}^\infty {{1}\over{1-(-1)^kp_k^{-s}}}$$
### 4)
Beside polymath4, were such questions about primes and parity considered before?
### 5)
Mark Lewko proposed the following question in a comment below: Consider subsets $A\subset [n]$ of density $n/log(n)$. What is the smallest collection of arithmetic progressions such that at least one is guaranteed to intersect every such $A$ with odd parity?
• For 3), do you want the exponent of $p_k$ to be $-s$? – Stopple Jan 30 '15 at 16:14
• Related MO question (and see rlo's answer there): mathoverflow.net/questions/164936/… – Lucia Jan 30 '15 at 18:12
• A somewhat different question related to getting around the $\sqrt{n}$ barrier is the following: First recall that the Polymath4 result allows one to compute the parity of primes in an arithmetic progression intersecting [n,2n] "efficiently" (in time $n^{1/2-\delta}$ for some small $\delta$). Consider a subset $A \subseteq [n]$ of density $n/\log(n)$. What is the smallest collection of arithmetic progressions such that at least one is guaranteed to intersect $A$ with odd parity? More generally, it would be nice to reduce the problem to something not explicitly involving primes. – Mark Lewko Feb 10 '15 at 16:26
• 1) Assuming Oppermann's conjecture, if you set $k=N+1$, you have primes in every interval. Don't know about the parity, though. – Fred Kline Feb 10 '15 at 18:14
• Dear Mark, my highly uneducated guess would be that just based on density (or even on other known properties, or even on RH) you won't be able to find a small collection of such APs. – Gil Kalai Feb 12 '15 at 13:22
# Statement I: The equation $(\sin^{-1}x)^3 + (\cos^{-1}x)^3 - a\pi^3 = 0$ has a solution for all $a \geq \frac{1}{32}$. Statement II: For any $x \in R$, $\sin^{-1}x + \cos^{-1}x = \frac{\pi}{2}$ and $0 \leq \left(\sin^{-1}x - \frac{\pi}{4}\right)^2 \leq \frac{9\pi^2}{16}$
Question:
Statement I: The equation $\left(\sin ^{-1} x\right)^{3}+\left(\cos ^{-1} x\right)^{3}-a \pi^{3}=0$ has a solution for all $\mathrm{a} \geq \frac{1}{32}$.
Statement II: For any $x \in R, \sin ^{-1} x+\cos ^{-1} x=\frac{\pi}{2}$ and $0 \leq\left(\sin ^{-1} x-\frac{\pi}{4}\right)^{2} \leq \frac{9 \pi^{2}}{16} \quad$
1. Both statements I and II are true.
2. Both statements I and II are false.
3. Statement I is true and statement II is false.
4. Statement I is false and statement II is true.
Correct Option: 1
Previous-year JEE Main question (Online, April 12, 2014) from the Mathematics Inverse Trigonometric Functions chapter.
Solution:
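The solution text is protected on the source page; a standard derivation (my sketch, using $a^3 + b^3 = (a+b)^3 - 3ab(a+b)$ together with Statement II): let $s = \sin^{-1}x \in \left[-\frac{\pi}{2}, \frac{\pi}{2}\right]$, so $\cos^{-1}x = \frac{\pi}{2} - s$ and
$$s^3 + \left(\frac{\pi}{2} - s\right)^3 = \frac{\pi^3}{8} - \frac{3\pi}{2}\, s\left(\frac{\pi}{2} - s\right) = \frac{\pi^3}{32} + \frac{3\pi}{2}\left(s - \frac{\pi}{4}\right)^2.$$
Hence the equation becomes $a = \frac{1}{32} + \frac{3}{2\pi^2}\left(s - \frac{\pi}{4}\right)^2$; since $\left(s - \frac{\pi}{4}\right)^2$ ranges over $\left[0, \frac{9\pi^2}{16}\right]$ by Statement II, in particular $a \geq \frac{1}{32}$ is required, with equality attained at $x = \frac{1}{\sqrt{2}}$.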
### Related Questions
• If $\alpha=\cos ^{-1}\left(\frac{3}{5}\right), \beta=\tan ^{-1}\left(\frac{1}{3}\right)$, where $0<\alpha, \beta<\frac{\pi}{2}$, then $\alpha-\beta$ is equal to:
View Solution
• A value of $x$ satisfying the equation $\sin \left[\cot ^{-1}(1+x)\right]=\cos$ $\left[\tan ^{-1} x\right]$, is :
View Solution
• The principal value of $\tan ^{-1}\left(\cot \frac{43 \pi}{4}\right)$ is:
View Solution
• The number of solutions of the equation, $\sin ^{-1} x=2 \tan ^{-1} x$ (in principal values) is :
View Solution
• A value of $\tan ^{-1}\left(\sin \left(\cos ^{-1}\left(\sqrt{\frac{2}{3}}\right)\right)\right.$ is
View Solution
• The largest interval lying in $\left(\frac{-\pi}{2}, \frac{\pi}{2}\right)$ for which the function, $f(x)=4^{-x^{2}}+\cos ^{-1}\left(\frac{x}{2}-1\right)+\log (\cos x)$, is defined, is
View Solution
• The domain of the function $f(x)=\frac{\sin ^{-1}(x-3)}{\sqrt{9-x^{2}}}$ is
View Solution
• The trigonometric equation $\sin ^{-1} x=2 \sin ^{-1} a$ has a solution for
View Solution
• $$\cot ^{-1}(\sqrt{\cos \alpha})-\tan ^{-1}(\sqrt{\cos \alpha})=x$$ then $\sin x=$
View Solution
LFD Book Forum Discussion of the VC proof
#24 | CountVonCount | 10-26-2016, 02:28 PM
Re: Discussion of the VC proof
I have also another question on the same page (190):
At the end of the page there is the formula:
$$\sum_S \mathbb{P}[S] \times \mathbb{P}\Big[\sup_{h\in\mathcal{H}} \big|E_{in}(h) - E_{in}'(h)\big| > \frac{\varepsilon}{2} \,\Big|\, S\Big] \;\leq\; \sup_S \mathbb{P}\Big[\sup_{h\in\mathcal{H}} \big|E_{in}(h) - E_{in}'(h)\big| > \frac{\varepsilon}{2} \,\Big|\, S\Big]$$
I don't understand why the RHS is greater than or equal to the LHS. The only justification I can see is that the distribution of P[S] is uniform, but this has not been stated in the text.
Or am I overlooking something, and is this also valid for any kind of distribution?
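One way to see the inequality, with no assumption on the distribution of $S$: for any nonnegative weights $\mathbb{P}[S]$ summing to one and any function $f$,
$$\sum_S \mathbb{P}[S]\, f(S) \le \Big(\sum_S \mathbb{P}[S]\Big) \sup_{S'} f(S') = \sup_{S'} f(S'),$$
i.e. a weighted average never exceeds the supremum of the values being averaged. Here $f(S)$ is the conditional probability given $S$, so uniformity of $\mathbb{P}[S]$ is not needed.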
Pseudo-code for training Deep Belief Networks
## Training of Restricted Boltzmann Machines
This is the RBM update procedure for binomial units. It also works for exponential and truncated exponential units, and for the linear parameters of a Gaussian unit (using the appropriate sampling procedure for Q and P). It can be readily adapted for the variance parameter of Gaussian units.
• v[0] is a sample from the training distribution for the RBM
• epsilon is a learning rate for the stochastic gradient descent in Contrastive Divergence
• W is the RBM weight matrix, of dimension (number of hidden units, number of inputs)
• b is the RBM biases vector for hidden units
• c is the RBM biases vector for input units
RBMupdate(v[0], epsilon, W, b, c):
    for all hidden units i:
        compute Q(h[0][i] = 1 | v[0])   # for binomial units, sigmoid(b[i] + sum_j(W[i][j] * v[0][j]))
        sample h[0][i] from Q(h[0][i] = 1 | v[0])
    for all visible units j:
        compute P(v[1][j] = 1 | h[0])   # for binomial units, sigmoid(c[j] + sum_i(W[i][j] * h[0][i]))
        sample v[1][j] from P(v[1][j] = 1 | h[0])
    for all hidden units i:
        compute Q(h[1][i] = 1 | v[1])   # for binomial units, sigmoid(b[i] + sum_j(W[i][j] * v[1][j]))
    W += epsilon * (h[0] * v[0]' - Q(h[1][.] = 1 | v[1]) * v[1]')
    b += epsilon * (h[0] - Q(h[1][.] = 1 | v[1]))
    c += epsilon * (v[0] - v[1])
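For concreteness, a minimal NumPy translation of RBMupdate for binomial units (my sketch, not part of the original page; W is assumed to have shape (n_hidden, n_visible)):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_update(v0, epsilon, W, b, c, rng):
    # positive phase: Q(h[0] = 1 | v[0]) and a sample h[0]
    q_h0 = sigmoid(b + W @ v0)
    h0 = (rng.random(q_h0.shape) < q_h0).astype(float)
    # negative phase: one Gibbs step back to the visible units
    p_v1 = sigmoid(c + W.T @ h0)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    # hidden probabilities for the reconstruction
    q_h1 = sigmoid(b + W @ v1)
    # contrastive-divergence (CD-1) updates, as in the pseudo-code
    W += epsilon * (np.outer(h0, v0) - np.outer(q_h1, v1))
    b += epsilon * (h0 - q_h1)
    c += epsilon * (v0 - v1)
```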
## Pre-training of Deep Belief Networks (Unsupervised)
Train a DBN in a purely unsupervised way, with the greedy layer-wise procedure in which each added layer is trained as an RBM by contrastive divergence.
• X is the input training distribution for the network
• epsilon is a learning rate for the stochastic gradient descent in Contrastive Divergence
• L is the number of layers to train
• n=(n[1], ...,n[L]) is the number of hidden units in each layer
• W[i] is the weight matrix for level i, for i from 1 to L
• b[i] is the bias vector for level i, for i from 0 to L
PreTrainUnsupervisedDBN(X, epsilon, L, n, W, b):
    initialize b[0] = 0
    for l = 1 to L:
        initialize W[l] = 0, b[l] = 0
        while not stopping criterion:
            sample g[0] = x from X
            for i = 1 to l-1:
                sample g[i] from Q(g[i] | g[i-1])
            RBMupdate(g[l-1], epsilon, W[l], b[l], b[l-1])
## Supervised Training of Deep Belief Network
After a DBN has been initialized by pre-training, this procedure will optimize all the parameters with respect to the supervised criterion C, using stochastic gradient descent.
• Z is the supervised training distribution for the DBN, with (input,target) samples (x,y)
• C is a training criterion, a function that takes a network output f(x) and a target y and returns a scalar differentiable in f(x)
• epsilon_C is a learning rate for the stochastic gradient descent on supervised cost C
• L is the number of layers
• n=(n[1], ..., n[L]) is the number of hidden units in each layer
• W[i] is the weight matrix for level i, for i from 1 to L
• b[i] is the bias vector for level i, for i from 0 to L
• V is a weight matrix for the supervised output layer of the network
• c is the bias vector for the supervised output layer
DBNSupervisedFineTuning(Z, C, epsilon_C, L, n, W, b, V, c):
    Recursively define the mean-field propagation mu[i](x) = Expectation(g[i] | g[i-1] = mu[i-1](x)),
    where mu[0](x) = x, and Expectation(g[i] | g[i-1] = mu[i-1](x)) is the expected value of g[i]
    under the RBM conditional distribution Q(g[i] | g[i-1]), when the values of g[i-1] are
    replaced by the mean-field values mu[i-1](x).
    # In the case where g[i] has binomial units:
    # Expectation(g[i][j] | g[i-1] = mu[i-1](x)) = sigmoid(b[i][j] + sum_k W[i][j][k] * mu[i-1][k](x))
    Define the network output function f(x) = V * mu[L](x)' + c
    Iteratively minimize the expected value of C(f(x), y) for pairs (x, y) sampled from Z,
    by tuning the parameters W, b, V, c. This can be done by stochastic gradient descent
    with learning rate epsilon_C, using an appropriate stopping criterion such as
    early stopping on a validation set.
## Global Training Procedure
Train a DBN for a supervised learning task, by first performing pre-training of all layers (except the output weights V), followed by supervised fine-tuning to minimize a criterion C.
• Z is the supervised training distribution for the DBN, with (input,target) samples (x,y)
• C is a training criterion, a function that takes a network output f(x) and a target y and returns a scalar differentiable in f(x)
• epsilon_CD is a learning rate for the stochastic gradient descent with Contrastive Divergence
• epsilon_C is a learning rate for the stochastic gradient descent on supervised cost C
• L is the number of layers
• n=(n[1], ..., n[L]) is the number of hidden units in each layer
• W[i] is the weight matrix for level i, for i from 1 to L
• b[i] is the bias vector for level i, for i from 0 to L
• V is a weight matrix for the supervised output layer of the network
• c is the bias vector for the supervised output layer
TrainSupervisedDBN(Z, C, epsilon_CD, epsilon_C, L, n, W, b, V, c):
    let X be the marginal over the input part of Z
    PreTrainUnsupervisedDBN(X, epsilon_CD, L, n, W, b)
    DBNSupervisedFineTuning(Z, C, epsilon_C, L, n, W, b, V, c)
## Alternative (Supervised) Training of Deep Belief Networks' Last Layer
When the units of the last layer are binomial, and if the target is a class label, instead of training the last layer of a DBN using unsupervised Contrastive Divergence, we can train a joint RBM with (g[L-1], y) as input. In TrainSupervisedDBN, we then replace PreTrainUnsupervisedDBN by PreTrainSupervisedDBN.
The network configuration is this one:
           [______ L ______]
          /                 \
[______ L-1 ______]     [___ Y ___]
Train the first layers of a DBN in a purely unsupervised way, with the greedy layer-wise procedure in which each added layer is trained as an RBM by contrastive divergence. Then, train the last layer as an RBM modelling the joint distribution of the previous layer and the target.
• Z is the supervised training distribution for the DBN, with (input,target) samples (x,y)
• epsilon is a learning rate for the stochastic gradient descent in Contrastive Divergence
• L is the total number of layers to train
• n=(n[1], ...,n[L]) is the number of hidden units in each layer
• n_t is the number of different class labels
• W[i] is the weight matrix for level i, for i from 1 to L
• b[i] is the bias vector for level i, for i from 0 to L
• V is the weight matrix between the last layer and target layer
• c is the bias vector for target layer
PreTrainSupervisedDBN(Z, epsilon, L, n, W, b, V, c):
    let X be the marginal over the input part of Z
    initialize b[0] = 0
    for l = 1 to L-1:
        initialize W[l] = 0, b[l] = 0
        while not stopping criterion:
            sample g[0] = x from X
            for i = 1 to l-1:
                sample g[i] from Q(g[i] | g[i-1])
            RBMupdate(g[l-1], epsilon, W[l], b[l], b[l-1])
    initialize W[L] = 0, b[L] = 0, V = 0, c = 0
    while not stopping criterion:
        sample (g[0] = x, y) from Z
        for i = 1 to L-1:
            sample g[i] from Q(g[i] | g[i-1])
        RBMupdate((g[L-1], y), epsilon, (W[L], V'), b[L], (b[L-1], c))
-- PascalLamblin - 21 Jun 2007
AP Board 7th Class Maths Solutions Chapter 6 Simple Equations Unit Exercise
SCERT AP 7th Class Maths Solutions Pdf Chapter 6 Simple Equations Unit Exercise Questions and Answers.
AP State Syllabus 7th Class Maths Solutions 6th Lesson Simple Equations Unit Exercise
Question 1.
Runs made by two batsmen in 3 matches are given below.
Kohli: 49, 98, 72
Rohit: 64, 45, 83, then find the average of runs scored by Kohli and Rohit. Whose average is higher?
Given runs made by Kohli: 49, 98, 72.
Average runs of Kohli = $$\frac{\text { Sum of the observations }}{\text { Number of observations }}$$ = $$\frac{49+98+72}{3}$$ = $$\frac{219}{3}$$ = 73
Runs made by Rohit: 64, 45, 83.
Average runs of Rohit = $$\frac{64+45+83}{3}$$ = $$\frac{192}{3}$$ = 64
73 > 64.
Kohli's average is higher than Rohit's.
Question 2.
Find the mode of 38, 42, 35, 37, 45, 50, 32, 43, 43, 40, 36, 38, 43, 38 and 47. Verify whether it is Unimodal or Bimodal data.
Given data : 38, 42, 35, 37, 45, 50, 32, 43, 43, 40, 36, 38, 43, 38, 47.
Arrange the given observations in ascending order:
32, 35, 36, 37, 38, 38, 38, 40, 42, 43, 43, 43, 45, 47, 50.
As 38 and 43 occur most frequently (three times each) in the data,
∴ Mode = 38 and 43.
So, the given data is Bimodal data.
Question 3.
The temperature in different places are 0, – 5, 7, 10, 13, – 1 and 41 in degree Celsius. Find the Median. If another observation, ‘4°C’ is added to the given data, is there any change in value of Median ? Explain.
Given data : 0, – 5′, 7, 10, 13, – 1, 41.
Arrange the given observations in ascending order:
– 5, – 1, 0, 7, 10, 13, 41.
In seven observations, the fourth observation 7 is the middle most value.
∴ Median = 7
If 4°C is added to the given data,
0, – 5, 7, 10, 13, – 1, 41 and 4.
Arrange the given observations in ascending order:
– 5, – 1, 0, 4, 7, 10, 13, 41.
In eight observations, the fourth and fifth observations 4 and 7 are the middle most values in the data.
Median = Average of the two middle most values.
= $$\frac{4+7}{2}$$ = $$\frac{11}{2}$$ = 5.5
So, if we added a new observation, then the median also changed (decreased). Median is decreased from 7 to 5.5.
Question 4.
If the range of observation 7x, 5x, 3x, 2x, x (x > 0) is 12, then find value of ‘x’ and express all the observations in numerical form.
Given data.: 7x, 5x, 3x, 2x, x (x > 0) and Range = 12.
Maximum value = 7x; Minimum value = x
Range = Maximum value – Minimum value
⇒ 7x – x = 12
⇒ 6x = 12 ⇒ $$\frac{6 x}{6}$$ = $$\frac{12}{6}$$
∴ x = 2
So the observations are 7x = 14, 5x = 10, 3x = 6, 2x = 4 and x = 2.
Question 5.
Birth and death rates of different states in 2015 are given below. Approximately draw a double bar graph for the given data.
Question 6.
The following data relates to the cost of construction of a house.
Draw a Pie diagram to represent the above data.
Angle of sector = $$\frac{\text { Value of the item }}{\text { Sum of the value of all items }}$$ × 360° |
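The cost table for this question is not shown above, so here is the formula applied to hypothetical numbers: if an item (say, cement) costs ₹ 20,000 out of a total construction cost of ₹ 1,00,000, its sector angle is
$$\frac{20000}{100000} \times 360^\circ = 72^\circ.$$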
# Canonical Order Well-Orders Ordered Pairs of Ordinals
## Theorem
The canonical order, $R_0$ strictly well-orders the ordered pairs of ordinal numbers.
## Proof
### Strict Ordering
Let $\tuple {x, y} \mathrel {R_0} \tuple {x, y}$.
Then:
$\map \max {x, y} < \map \max {x, y} \lor \tuple {x, y} \mathrel {\operatorname {Le} } \tuple {x, y}$
But $\map \max {x, y} < \map \max {x, y}$ is impossible, and $\operatorname {Le}$ is antireflexive, so neither disjunct holds.
Hence:
$\neg \tuple {x, y} \mathrel {R_0} \tuple {x, y}$
and $R_0$ is antireflexive.
$\Box$
Let:
$(1): \quad \tuple {\alpha, \beta} \mathrel {R_0} \tuple {\gamma, \delta}$
and:
$(2): \quad \tuple {\gamma, \delta} \mathrel {R_0} \tuple {\epsilon, \zeta}$
There are two cases:
$\map \max {\alpha, \beta} < \map \max {\gamma, \delta}$
or:
$\map \max {\alpha, \beta} = \map \max {\gamma, \delta}$
In the first case, from $(2)$ we have $\map \max {\gamma, \delta} \le \map \max {\epsilon, \zeta}$, so:
$\map \max {\alpha, \beta} < \map \max {\epsilon, \zeta}$
and thus $\tuple {\alpha, \beta} \mathrel {R_0} \tuple {\epsilon, \zeta}$ by the definition of canonical order.
In the second case, from $(1)$ we have $\tuple {\alpha, \beta} \mathrel {\operatorname {Le} } \tuple {\gamma, \delta}$, and then from $(2)$, by transitivity of $\operatorname {Le}$:
$\map \max {\alpha, \beta} < \map \max {\epsilon, \zeta} \lor \tuple {\alpha, \beta} \mathrel {\operatorname {Le} } \tuple {\epsilon, \zeta}$
and thus $\tuple {\alpha, \beta} \mathrel {R_0} \tuple {\epsilon, \zeta}$ by the definition of canonical order.
In either case:
$\tuple {\alpha, \beta} \mathrel {R_0} \tuple {\epsilon, \zeta}$
and $R_0$ is transitive.
$\Box$
### Strict Total Ordering
Suppose:
$\neg \tuple {\alpha, \beta} \mathrel {R_0} \tuple {\gamma, \delta}$
and:
$\neg \tuple {\gamma, \delta} \mathrel {R_0} \tuple {\alpha, \beta}$
Then:
$\map \max {\alpha, \beta} \le \map \max {\gamma, \delta}$
and:
$\map \max {\gamma, \delta} \le \map \max {\alpha, \beta}$
So:
$\map \max {\alpha, \beta} = \map \max {\gamma, \delta}$
Therefore:
$\neg \tuple {\alpha, \beta} \mathrel {\operatorname {Le} } \tuple {\gamma, \delta}$
and:
$\neg \tuple {\gamma, \delta} \mathrel {\operatorname {Le} } \tuple {\alpha, \beta}$
Since $\operatorname {Le}$ satisfies trichotomy, it follows that:
$\tuple {\alpha, \beta} = \tuple {\gamma, \delta}$
$\Box$
### Well-Ordering
Take any nonempty subset $A$ of $\paren {\On \times \On}$.
We shall allow $A$ to be any class.
This isn't strictly necessary, but it will not alter the proof.
The $\max$ mapping sends each element of $A$ to an element of $\On$.
Therefore, since $\On$ is well-ordered, the image of $\max$ has a minimal element, $N$.
Take $B$ to be the class of all ordered pairs $\tuple {x, y}$ in $A$ such that $\map \max {x, y} = N$.
Let the $\operatorname {Le}$-minimal element of $B$ be denoted $C$.
Then:
$\map \max C = N$
and $C$ is seen to be $\operatorname {Le}$-minimal.
Therefore $C$ is the $R_0$-minimal element of $A$.
$\blacksquare$ |
# The reduced row echelon form of a system of linear equations is given.Write the system of equations corresponding to the given matrix.
The reduced row echelon form of a system of linear equations is given.Write the system of equations corresponding to the given matrix. Use x, y. or x, y, z. or $$x_1,x_2,x_3,x_4$$ as variables. Determine whether the system is consistent or inconsistent. If it is consistent, give the solution. $$\begin{bmatrix}1&0&0&0&|&1 \\0&0&1&0&|&4 \\3&0&2&3&|&0 \end{bmatrix}$$
## Wednesday, February 29, 2012
### Geothermal energy potential in the U.S.
Over the span of three years, Google's philanthropic arm, Google.org, supported a project run by the SMU Geothermal Institute intended to map out the potential for geothermal energy use in the United States. Below is the result of their research.
This map incorporates tens of thousands of new data points measuring the potential of Enhanced Geothermal Systems, and is the most comprehensive map of such data to date. There is a wealth of untapped geothermal energy in the nation's ground, just waiting to power our houses!
Read more about the results of the study (and Enhanced Geothermal Systems) here, and see some 3D models of Enhanced Geothermal Systems here.
#### 1 comment:
1. Wow! it is great! And the map is so detailed! There are lots of opportunities to establish geothermal power systems and methods in the United States.
One of the limiting factors in human physical performance is
msand (02 Jan 2010, 10:48):
One of the limiting factors in human physical performance is the amount of oxygen that is absorbed by the muscles from the bloodstream. Accordingly, entrepreneurs have begun selling at gymnasiums and health clubs bottles of drinking water, labeled “SuperOXY,” that has extra oxygen dissolved in the water. Such water would be useless in improving physical performance, however, since the only way to get oxygen into the bloodstream so that it can be absorbed by the muscles is through the lungs.
Which of the following, if true, would serve the same function in the argument as the statement in boldface?
A. the water lost in exercising can be replaced with ordinary tap water
B. the amount of oxygen in the blood of people who are exercising is already more than the muscle can absorb
C. world-class athletes turn in record performance without such water
D. frequent physical exercise increases the body’s ability to take in and use oxygen
E. lack of oxygen is not the only factor limiting human physical performance
OA: B
Reply (02 Jan 2010, 13:09):
msand wrote: (question quoted above)

Can anyone please explain the OA? I thought the answer would be D.
Reply (02 Jan 2010, 17:58):
I'll try, my friend. OA is B because of the way the argument is set up. The bold text is the evidence to the claim that drinking water with oxygen in it won't help out physical performance. So out of the answer choices we're trying to find something else that would be used as evidence to support the same claim. Therefore, if we didn't know that oxygen in the stomach doesn't help it get absorbed to the muscles because it's not through the lungs which answer would make sense? It's B because B means that the body doesn't need MORE oxygen. Rather it needs to BETTER absorb the oxygen it already has. It's completely different evidence but if it's true then drinking oxygen in your water would just give more oxygen and the body wouldn't be able to absorb it and use it for enhanced performance. Does that make sense?
D is a true statement but it does not do anything for the argument. Frequent exercise has nothing to do with disproving the effectiveness of drinking water with oxygen it to ultimately help out muscle performance. It just has to do with muscle performance it self.
misty1234 (23 Nov 2010, 10:46):
The water lost in exercising can be replaced with ordinary tap water ( water can be replaced with original water only)
Similarly the oxygen lost can be replenished in the original way only.... i.e. thru lungs!!!
Thus I go for A.
Reply (23 Nov 2010, 13:45):
You have to attack the main argument here. The main argument says that these waters are trying to replenish the body's oxygen content and promote physical performance. So you need to find an answer choice that says that even if this water contains the oxygen, it's not going to be assimilated.
A is wrong because that says, "Okay you can ALSO do this by using tap water" which is not the same as saying "You CANNOT do this by using this OXY water".
B makes sense because it gives an alternate explanation as to why this product will fail. If the body is already at the maximum level of oxygen absorbency, no matter how much oxygen this water contains, it'll all be wasted.
C is absolutely irrelevant. If world class athletes can do it without this water, again, that doesn't mean that this water doesn't help in performance. It could just mean that they don't need that extra boost.
D in fact, strengthens the argument instead of weakening it by saying the body's ability to oxygen intake increases.
E seems to be an actual answer, but the problem with this is, even if there are other factors and oxygen unavailability IS one factor, then by providing more oxygen you're breaking down one of the factors even if it means that your performance goes up a tiny bit. Incomplete answer.
I hope this helps.
Karishma, Veritas Prep GMAT Instructor (23 Nov 2010, 15:26):
msand wrote: (question quoted above)
Definitely an interesting boldface question.
I read the question stem first. I understand that I need to find the function of the statement in bold. Then I need to find an option that will play the same function, if it is incorporated in the argument.
Conclusion: 'SuperOXY' water would be useless in improving physical performance.
The bold portion "the only way to get oxygen into the bloodstream so that it can be absorbed bye the muscles is through the lungs" is a premise supporting the conclusion.
So I have to find the option that, if incorporated in the argument, will also function as a premise i.e. it will also support the conclusion. So in short, I am trying to find the option that strengthens the conclusion.
Option (A) is incorrect because it says that water lost can be replaced by tap water. It doesn't say how or why SuperOXY is useless in improving physical performance.
Option (B) says that amount of oxygen is already more than what the muscles can absorb. This means drinking SuperOXY will not improve physical performance because muscles anyway cannot absorb the extra oxygen. This option strengthens the conclusion. Hence this is the answer.
Option (C) says that people turn in great performance without this water. But it doesn't say that this water cannot further improve their performance.
Option (D) says that frequent physical exercise increases the body’s ability to take in and use oxygen. It doesn't say anything about how this water does not improve performance.
Option (E) says there are other factors affecting human physical performance but doesn't say that SuperOXY doesn't affect human physical performance.
Answer (B)
gmat1011 (23 Nov 2010, 20:45):

Another problem with E seems to be that it is just paraphrasing what is already given in the argument. The argument starts off by saying "One of the limiting factors", so we already know there are other factors. What's the point of having E? I eliminated E on that basis... not sure if that was the right approach though.

Karishma, Veritas Prep GMAT Instructor (24 Nov 2010, 05:36):

gmat1011 wrote: (the post above)

Sure it makes sense. An option acting as a premise will have new information. Option (E) here doesn't. Good call. Just be aware that sometimes what seems a rehash may actually be new. So be careful when eliminating options on that basis.
Reply (13 Sep 2011, 07:47):
B...
Aravind Chembeti (15 Jan 2012, 18:29):
misty1234 wrote: (the argument for A quoted above)
My reasoning is as follows:
By reading the argument, we know that the BF statement supports the conclusion and its position is reason. So, the answer choice also should be a reason that supports the conclusion.
A --> supports the conclusion and also a reason but it brings an external element, so irrelevant.
B --> supports the conclusion by acting as a reason. Keep this.
C --> supports the conclusion but it is an example, not reason.
D --> it is against the conclusion
E --> irrelevant to the discussion.
Hence select B.
Reply (15 Jan 2012, 23:32):
+ 1 B
_________________
Fire the final bullet only when you are constantly hitting the Bull's eye, till then KEEP PRACTICING.
A WAY TO INCREASE FROM QUANT 35-40 TO 47 : http://gmatclub.com/forum/a-way-to-increase-from-q35-40-to-q-138750.html
Q 47/48 To Q 50 + http://gmatclub.com/forum/the-final-climb-quest-for-q-50-from-q47-129441.html#p1064367
Three good RC strategies http://gmatclub.com/forum/three-different-strategies-for-attacking-rc-127287.html
Manager
Joined: 12 Nov 2011
Posts: 143
Followers: 0
Kudos [?]: 17 [0], given: 24
Re: One of the limiting factors in human physical performance is [#permalink]
Show Tags
21 Jan 2012, 22:50
Clearly (B), but not an easy one to comprehend.
Manager
Joined: 10 Jan 2010
Posts: 192
Location: Germany
Concentration: Strategy, General Management
Schools: IE '15 (M)
GMAT 1: Q V
GPA: 3
WE: Consulting (Telecommunications)
Followers: 2
Kudos [?]: 27 [0], given: 7
Re: One of the limiting factors in human physical performance is [#permalink]
Show Tags
25 Jan 2012, 07:19
(B), but it needs to be cracked first, which is not easy to do. I did not solve it within 2 minutes.
Verbal Forum Moderator
Status: Getting strong now, I'm so strong now!!!
Affiliations: National Institute of Technology, Durgapur
Joined: 04 Jun 2013
Posts: 634
Location: India
GPA: 3.32
WE: Information Technology (Computer Software)
Followers: 93
Kudos [?]: 457 [1] , given: 78
Re: One of the limiting factors in human physical performance is [#permalink]
Show Tags
21 Nov 2013, 12:16
1 This post received KUDOS
2 This post was BOOKMARKED
Last edited by dentobizz on 25 Nov 2013, 05:24, edited 1 time in total.
adding article links
GMAT Club Legend
Joined: 01 Oct 2013
Posts: 8787
Followers: 770
Kudos [?]: 157 [0], given: 0
Re: One of the limiting factors in human physical performance is [#permalink]
Show Tags
26 Nov 2014, 13:30
Hello from the GMAT Club VerbalBot!
Thanks to another GMAT Club member, I have just discovered this valuable topic, yet it had no discussion for over a year. I am now bumping it up - doing my job. I think you may find it valuable (esp those replies with Kudos).
Want to see all other topics I dig out? Follow me (click follow button on profile). You will receive a summary of all topics I bump in your profile area as well as via email.
Similar topics (replies, last post):
2 One of the limiting factors in human physical performance is the amoun 1 03 Sep 2015, 16:21
One of the limiting factors in human physical performance is 5 06 Feb 2008, 13:47
One of the limiting factors in human physical performance is 17 24 Apr 2007, 06:13
One of the limiting factors in human physical performance is 16 18 Jan 2007, 07:25
One of the limiting factors in human physical performance is 12 26 Nov 2006, 13:06