Distances with non-Euclidean metric
Hello,
when measuring the length of geodesic shortest paths or, more generally, the length of a parametric curve in space, what we usually do is sum the lengths of infinitesimal arcs of that curve, assuming a Euclidean norm.
Why this choice?
I have not found any mention in the literature of the possibility of using other norms, such as the L1-norm.
Why not allow measuring the length of infinitesimal arcs of a curve in $$\mathbb{R}^2$$ by instead doing:
$$ds = \left| \frac{\partial \mathbf{p}}{\partial x} \right| dx + \left| \frac{\partial \mathbf{p}}{\partial y} \right| dy$$
Thanks...
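For concreteness, the two notions of length can be compared numerically. Here is a minimal Python sketch (mine, not from the original post) for a quarter circle, using the fact that the L1 ("taxicab") length of a curve $t \mapsto (x(t), y(t))$ is $\int (|x'(t)| + |y'(t)|)\,dt$:

```python
import numpy as np

# Quarter circle p(t) = (cos t, sin t), t in [0, pi/2], as a fine polyline.
t = np.linspace(0.0, np.pi / 2, 200_001)
x, y = np.cos(t), np.sin(t)
dx, dy = np.diff(x), np.diff(y)

# Euclidean (L2) length: sum of sqrt(dx^2 + dy^2) over the segments.
l2_length = np.sum(np.hypot(dx, dy))

# Taxicab (L1) length: sum of |dx| + |dy| over the segments.
l1_length = np.sum(np.abs(dx) + np.abs(dy))

print(l2_length)  # -> approximately pi/2 ~ 1.5708
print(l1_length)  # -> approximately 2.0
```

Note that the L1 length depends on the orientation of the coordinate axes, which is one practical reason the rotation-invariant Euclidean norm is the default choice.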
# 10th International Conference on Hard and Electromagnetic Probes of High-Energy Nuclear Collisions
May 31, 2020 to June 5, 2020
Online
US/Central timezone
## Probing the partonic degree of freedom in high multiplicity p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
Jun 2, 2020, 7:30 AM
1h 20m
Online
Poster Presentation New Theoretical Developments
### Speaker
Wenbin Zhao (Peking University)
### Description
The collective flow and the possible formation of the Quark-Gluon Plasma (QGP) in small colliding systems are hot research topics in the heavy-ion community. Recently, the ALICE, ATLAS and CMS collaborations have measured the elliptic flow and the related number-of-constituent-quark (NCQ) scaling of identified hadrons in p+Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, which are important observables for probing the partonic degrees of freedom in the created small system.
In this talk, we focus on coalescence model calculations of the NCQ scaling at intermediate $p_T$ in high multiplicity p+Pb collisions, including thermal-thermal, thermal-jet and jet-jet parton recombinations, using thermal partons from hydrodynamics and jet partons after energy loss in the Linear Boltzmann Transport (LBT) model. The coalescence calculation is smoothly connected with the hydrodynamic calculation at low $p_T$ and with jet fragmentation at high $p_T$. Within this combined framework, we obtain a good description of the spectra and elliptic flow over the $p_T$ range from 0 to 6 GeV, and reproduce the approximate NCQ scaling at intermediate $p_T$ measured in experiment. We also switch off the coalescence process of partons and find that without it, one cannot describe the differential elliptic flow and the related NCQ scaling at intermediate $p_T$. These comparison calculations demonstrate the importance of the partonic degrees of freedom and indicate the possible formation of QGP in high multiplicity p+Pb collisions.
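As background, the NCQ scaling referred to above can be illustrated with a toy model (a sketch of the generic scaling relation, not the coalescence calculation of this talk): in a naive coalescence picture a hadron made of $n$ constituent quarks satisfies $v_2^h(p_T) \approx n\, v_2^q(p_T/n)$, so $v_2/n$ plotted against $p_T/n$ collapses onto the single quark curve.

```python
import numpy as np

# Toy parton elliptic flow: rises with pT, then saturates (illustrative shape).
def v2_quark(pt):
    return 0.1 * pt / (1.0 + pt)

pt = np.linspace(0.5, 3.0, 6)
v2_meson = 2 * v2_quark(pt / 2)   # n = 2 constituent quarks
v2_baryon = 3 * v2_quark(pt / 3)  # n = 3 constituent quarks

# NCQ scaling: v2/n vs pT/n collapses onto the quark curve for both species.
meson_scaled = v2_meson / 2
baryon_scaled = v2_baryon / 3
assert np.allclose(meson_scaled, v2_quark(pt / 2))
assert np.allclose(baryon_scaled, v2_quark(pt / 3))
```

Observing (approximate) collapse of this kind in data is what motivates interpreting the scaling as evidence for partonic degrees of freedom.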
Contribution type Contributed Talk New Theoretical Developments
### Primary authors
Wenbin Zhao (Peking University), Prof. Che-Ming Ko, Guang-You Qin (Central China Normal University), Prof. Yu-Xin Liu (Peking University)
GMAT Problem Solving (PS)
Announcements

| Kudos | Topic | Author | Replies | Views | Last post |
| --- | --- | --- | --- | --- | --- |
| 151 | 150 Hardest and easiest questions for PS | Bunuel | 7 | 24931 | 11 Sep 2016, 03:25 |
| 822 | GMAT PS Question Directory by Topic & Difficulty | bb | 0 | 300305 | 22 Feb 2012, 11:27 |

Topics

| Kudos | Topic | Author | Replies | Views | Last post |
| --- | --- | --- | --- | --- | --- |
| 1 | if integer N has p factors ; how many factors will 2N have ? | stonecold | 1 | 177 | 07 Aug 2016, 23:07 |
| 1 | In a sample of patients at an animal hospital, 30 percent are cats and | Bunuel | 4 | 177 | 29 Aug 2016, 19:58 |
| 2 | The factorial operation ! applied to a positive integer n denotes the | stonecold | 1 | 177 | 24 Apr 2016, 18:42 |
| 2 | 2) In the figure above, triangle ABC is inscribed in the circle with c | mikemcgarry | 2 | 177 | 25 Sep 2016, 17:17 |
|  | In a certain company, the ratio of male to female employees is 7:8... | EBITDA | 1 | 178 | 24 Jul 2016, 02:15 |
|  | Jeremy bought 2Q steaks for W dollars. Jerome buys R steaks for a 50% | Bunuel | 3 | 178 | 05 Jul 2016, 03:16 |
| 2 | There are 30 socks in a drawer. 60% of the socks are red and the... | EBITDA | 1 | 178 | 16 Aug 2016, 08:03 |
|  | If there are 30 red and blue marbles in a jar, and the ratio of red to | Bunuel | 1 | 178 | 24 Jul 2016, 09:40 |
|  | If 35 percent of 400 is 20 percent of x, then x = | Bunuel | 3 | 178 | 29 Aug 2016, 03:19 |
| 1 | An investment yielded an interest payment of $350 each month when the | Bunuel | 1 | 178 | 30 Jun 2016, 10:02 |
| 4 | The incomplete table above shows a distribution of scores for a class | Bunuel | 4 | 178 | 28 Aug 2016, 22:00 |
| 1 | If −y ≥ x, and −x < −5, then which of the following must be true? | Bunuel | 2 | 179 | 02 Sep 2016, 07:14 |
| 4 | Tickets to a certain concert sell for $20 each. The first 10 people to | Bunuel | 2 | 180 | 11 Sep 2016, 09:10 |
|  | Judy ran along the fence that surrounds her house. The fence forms a | Bunuel | 3 | 180 | 21 Sep 2016, 20:17 |
|  | If ABCD is a square with its side length is 10ft and points P, Q, M ar | MathRevolution | 4 | 180 | 11 Sep 2016, 18:25 |
|  | Machine–A produces 40% of the total output and Machine-B produces 60% | mihir0710 | 2 | 180 | 03 Jul 2016, 12:36 |
|  | How many different 4-digit numbers | chetan2u | 2 | 180 | 13 Mar 2016, 07:41 |
|  | A farmer with 1,350 acres of land had planted his fields with corn, su | Bunuel | 2 | 181 | 22 Aug 2016, 07:35 |
| 2 | For all even integers n, h(n) is defined to be the sum of the even | mystiquethinker | 3 | 181 | 31 Aug 2016, 07:48 |
| 1 | Between 100 and 200... | aayushagrawal | 1 | 181 | 08 Jun 2016, 12:49 |
|  | Which of the following is equal to the cube of a non-integer? | Bunuel | 2 | 181 | 14 Aug 2016, 10:19 |
| 1 | The probability that a man speaks a true statement is 3/4. | chetan2u | 2 | 181 | 19 Mar 2016, 09:45 |
|  | In triangle ABC above, if ∠BAD and ∠ADB have measures of 4n°, ∠ACB ha | Bunuel | 1 | 181 | 01 Aug 2016, 06:01 |
|  | 120% of 5/8 = | Bunuel | 1 | 182 | 30 Mar 2016, 04:21 |
| 1 | Which of the following is closest to (-3/4)^199? | MathRevolution | 4 | 182 | 12 Sep 2016, 17:38 |
|  | What is the scope including 1/21+1/12+1/13+......+1/30? | MathRevolution | 2 | 182 | 11 Jul 2016, 18:21 |
| 1 | What is CD in the figure above? | Bunuel | 2 | 182 | 06 Sep 2016, 14:02 |
|  | In certain year in Country C, x sets of twins and y sets of triplets | Bunuel | 1 | 182 | 24 Jul 2016, 09:42 |
| 1 | John has 5 friends who want to ride in his new car that can accommodat | sarthaksabharwal | 1 | 182 | 04 Sep 2016, 10:36 |
| 1 | A used cars salesman receives an annual bonus if he meets a certain | Bunuel | 2 | 182 | 07 Sep 2016, 18:56 |
| 1 | One-fifth of the students at a nursery school are 4 years old or older | Mbawarrior01 | 1 | 182 | 10 Jun 2016, 11:18 |
|  | Fifty applicants for a job were given scores from 1 to 5 on their int | Bunuel | 1 | 182 | 28 Jul 2016, 06:09 |
| 2 | If arc ABC in the circle above is 80°, then what is the value of x°+y° | Bunuel | 3 | 182 | 11 Sep 2016, 09:32 |
| 1 | When x, y are positive integers, xy=xy/(x+y). If a, b, and c are posi | MathRevolution | 3 | 183 | 13 Jun 2016, 01:05 |
| 1 | A certain factory produces buttons and buckles at a uniform weight. If | Bunuel | 1 | 183 | 05 Jul 2016, 20:43 |
|  | A rectangular room has the rectangular shaped rug shown as above figur | MathRevolution | 2 | 183 | 04 May 2016, 21:49 |
| 2 | Volume of a right circular cylinder is 60 l. If radius of cylinder is | bimalr9 | 1 | 183 | 14 Jun 2016, 09:25 |
| 2 | Anthony covers a certain distance on a bike. Had he moved 3mph faster- | susheelh | 1 | 183 | 20 Aug 2016, 01:36 |
| 1 | Five people, Ada, Ben, Cathy, Dan, and Eliza, are lining up for a | DrAB | 6 | 183 | 07 Sep 2016, 21:01 |
| 1 | It is known that no more than 7 children will be attending a party. | Bunuel | 3 | 184 | 24 Jul 2016, 10:58 |
|  | If -3x+4y=28 and 3x-2y=8, what is the product of x and y? | Bunuel | 2 | 184 | 14 Jul 2016, 04:47 |
| 5 | Region R is defined as the region in the first quadrant satisfying the | GMATantidote | 4 | 184 | 25 Sep 2016, 08:50 |
| 1 | Which of the following cannot be the sum of 3 different prime numbers? | Bunuel | 1 | 184 | 11 Apr 2016, 08:37 |
| 1 | The visitors of a modern art museum who watched a certain Picasso pain | Bunuel | 1 | 184 | 08 Sep 2016, 11:44 |
| 1 | Of the following answer choices, which is the closest approximation of | Bunuel | 2 | 184 | 02 Sep 2016, 07:58 |
|  | If x is an integer and y=4x+3, which of the following cannot be a divi | krodin | 1 | 185 | 09 Apr 2016, 09:10 |
|  | If x < 0 and y is not equal to 0, which of the following must be true? | akshayk | 2 | 185 | 24 Aug 2016, 05:05 |
|  | In a certain conservative mutual fund, 70 percent of the money is inv | Bunuel | 1 | 185 | 30 Jul 2016, 04:24 |
| 4 | In a recent survey at a local deli, it was observed that 3 out of 5 | jayshah0621 | 1 | 185 | 24 Jun 2016, 13:57 |
| 3 | If a + c is the median of a, b, and c, which of the following COULD be | Bunuel | 1 | 185 | 24 Jul 2016, 09:55 |
d - Maple Help
difforms
d
exterior differentiation
Calling Sequence d(expr) d(expr, forms)
Parameters
expr - expression or list of expressions
forms - (optional) list of 1-forms, i.e. wdegree=one
Description
• The function d computes the exterior derivative of an expression. If the expression is a list, then d is applied to each element in the list.
• When d is called with an expression and a list of 1-forms, any name of type scalar in the expression will be expanded in these 1-forms. It is assumed that the 1-forms are independent. For each 1-form, a new scalar is created to be the component for that 1-form.
• The command with(difforms,d) allows the use of the abbreviated form of this command.
Examples
> $\mathrm{with}\left(\mathrm{difforms}\right):$
> $\mathrm{defform}\left(f=0,\mathrm{w1}=1,\mathrm{w2}=1,\mathrm{w3}=1,v=1,x=0,y=0,z=0\right)$
> $d\left({x}^{2}y\right)$
${2}{}{x}{}{y}{}{d}{}\left({x}\right){+}{{x}}^{{2}}{}{d}{}\left({y}\right)$ (1)
> $d\left(f\left(x,y,z\right)\right)$
$\left(\frac{{\partial }}{{\partial }{x}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{f}{}\left({x}{,}{y}{,}{z}\right)\right){}{d}{}\left({x}\right){+}\left(\frac{{\partial }}{{\partial }{y}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{f}{}\left({x}{,}{y}{,}{z}\right)\right){}{d}{}\left({y}\right){+}\left(\frac{{\partial }}{{\partial }{z}}\phantom{\rule[-0.0ex]{0.4em}{0.0ex}}{f}{}\left({x}{,}{y}{,}{z}\right)\right){}{d}{}\left({z}\right)$ (2)
> $d\left(f,\left[\mathrm{w1},\mathrm{w2},\mathrm{w3}\right]\right)$
${\mathrm{fw1}}{}{\mathrm{w1}}{+}{\mathrm{fw2}}{}{\mathrm{w2}}{+}{\mathrm{fw3}}{}{\mathrm{w3}}$ (3)
> $d\left(f,\left[d\left(x\right),d\left(y\left[1\right]\right),v\left[1\right]\right]\right)$
${\mathrm{fx}}{}{d}{}\left({x}\right){+}{{\mathrm{fy}}}_{{1}}{}{d}{}\left({{y}}_{{1}}\right){+}{{\mathrm{fv}}}_{{1}}{}{{v}}_{{1}}$ (4)
> $d\left(f\mathrm{w1}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}&^\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}\mathrm{w2}\right)$
${\mathrm{&^}}{}\left({d}{}\left({f}\right){,}{\mathrm{w1}}{,}{\mathrm{w2}}\right){+}{f}{}{d}{}\left({\mathrm{w1}}\right)\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{&^}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{\mathrm{w2}}{-}{f}{}{\mathrm{w1}}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{&^}\phantom{\rule[-0.0ex]{0.3em}{0.0ex}}{d}{}\left({\mathrm{w2}}\right)$ (5)
> $d\left(\left[xy,{y}^{2},\mathrm{w2}\right]\right)$
$\left[{y}{}{d}{}\left({x}\right){+}{x}{}{d}{}\left({y}\right){,}{2}{}{y}{}{d}{}\left({y}\right){,}{d}{}\left({\mathrm{w2}}\right)\right]$ (6)
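Example (1) can be cross-checked outside Maple. Below is a minimal Python sketch using sympy (an assumption of mine, not part of the Maple toolchain), treating the exterior derivative of a 0-form as its coordinate gradient, $df = f_x\,dx + f_y\,dy + f_z\,dz$:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 * y

# d of a 0-form: the coefficient of d(v) is the partial derivative of f
# with respect to v. Store the result as {coordinate: coefficient}.
df = {v: sp.diff(f, v) for v in (x, y, z)}

print(df)  # {x: 2*x*y, y: x**2, z: 0}
```

This reproduces Maple's output $d(x^2 y) = 2xy\,d(x) + x^2\,d(y)$ from example (1).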
#### Results in Journal Classical and Quantum Gravity: 14,151
Published: 15 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa924f
Abstract:
In this paper we expand upon our previous work Coley et al (2016 Class. Quantum Grav. 33 215010) by using the entire family of Bianchi type V stiff fluid solutions as seed solutions of the Stephani transformation. Among the new exact solutions generated, we observe a number of important physical phenomena. The most interesting phenomenon is exact solutions with intersecting spikes. Other interesting phenomena are solutions with saddle states and a close-to-FL epoch.
, , David Serrano-Blanco
Published: 8 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa924a
Abstract:
We analyse the behaviour of the MacDowell–Mansouri action with internal symmetry group under the De Donder–Weyl Hamiltonian formulation. The field equations, known in this formalism as the De Donder–Weyl equations, are obtained by means of the graded Poisson–Gerstenhaber bracket structure present within the De Donder–Weyl formulation. The decomposition of the internal algebra allows the symmetry breaking , which reduces the original action to the Palatini action without the topological term. We demonstrate that, in contrast to the Lagrangian approach, this symmetry breaking can be performed indistinctly in the polysymplectic formalism either before or after the variation of the De Donder–Weyl Hamiltonian has been done, recovering Einstein's equations via the Poisson–Gerstenhaber bracket.
Published: 25 February 2011
Classical and Quantum Gravity, Volume 28; https://doi.org/10.1088/0264-9381/28/6/065013
Published: 13 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa91f6
Abstract:
Generic resolution of singularities and geodesic completeness in the loop quantization of Bianchi-II spacetimes with arbitrary minimally coupled matter is investigated. Using the effective Hamiltonian approach, we examine two available quantizations: one based on the connection operator and second by treating extrinsic curvature as connection via gauge fixing. It turns out that for the connection based quantization, either the inverse triad modifications or imposition of weak energy condition is necessary to obtain a resolution of all strong singularities and geodesic completeness. In contrast, the extrinsic curvature based quantization generically resolves all strong curvature singularities and results in a geodesically complete effective spacetime without inverse triad modifications or energy conditions. In both the quantizations, weak curvature singularities can occur resulting from divergences in pressure and its derivatives at finite densities. These are harmless events beyond which geodesics can be extended. Our work generalizes previous results on the generic resolution of strong singularities in the loop quantization of isotropic, Bianchi-I and Kantowski–Sachs spacetimes.
Published: 9 October 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa91f4
Abstract:
The proper vertex amplitude is derived from the Engle-Pereira-Rovelli-Livine vertex by restricting to a single gravitational sector in order to achieve the correct semi-classical behaviour. We apply the proper vertex to calculate a cosmological transition amplitude that can be viewed as the Hartle-Hawking wavefunction. To perform this calculation we deduce the integral form of the proper vertex and use extended stationary phase methods to estimate the large-volume limit. We show that the resulting amplitude satisfies an operator constraint whose classical analogue is the Hamiltonian constraint of the Friedmann-Robertson-Walker cosmology. We find that the constraint dynamically selects the relevant family of coherent states and demonstrate a similar dynamic selection in standard quantum mechanics. We investigate the effects of dynamical selection on long-range correlations.
, Luisa G Jaime
Published: 20 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa91f5
Abstract:
We derive a class of non-static inhomogeneous dust solutions in gravity described by the Lemaître–Tolman–Bondi (LTB) metric. The field equations are fully integrated for all parameter subcases and compared with analogous subcases of LTB dust solutions of GR. Since the solutions do not admit regular symmetry centres, we have two possibilities: (i) a spherical dust cloud with angle deficit acting as the source of a vacuum Schwarzschild-like solution associated with a global monopole, or (ii) fully regular dust wormholes without angle deficit, whose rest frames are homeomorphic to the Schwarzschild–Kruskal manifold or to a 3d torus. The compatibility between the LTB metric and generic ansatzes furnishes an 'inverse procedure' to generate LTB solutions whose sources are found from the geometry. While the resulting fluids may have an elusive physical interpretation, they can be used as exact non-perturbative toy models in theoretical and cosmological applications of theories.
, Joydip Mitra, Soumitra Sengupta
Published: 15 December 2017
Classical and Quantum Gravity, Volume 35; https://doi.org/10.1088/1361-6382/aa91f7
Abstract:
Fermion localization in a braneworld model in presence of dilaton coupled higher curvature Gauss–Bonnet bulk gravity is discussed. It is shown that the lowest mode of left handed fermions can be naturally localized on the visible brane due to the dilaton coupled higher curvature term without the necessity of any external localizing bulk field.
James Healy, , , Manuela Campanelli
Published: 6 October 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa91b1
Abstract:
The RIT numerical relativity group is releasing a public catalog of black-hole-binary waveforms. The initial release of the catalog consists of 126 recent simulations that include precessing and non precessing systems with mass ratios $q=m_1/m_2$ in the range $1/6\leq q\leq1$. The catalog contains information about the initial data of the simulation, the waveforms extrapolated to infinity, as well as information about the peak luminosity and final remnant black hole properties. These waveforms can be used to independently interpret gravitational wave signals from laser interferometric detectors and the remnant properties to model the merger of binary black holes from initial configurations.
Published: 29 October 2004
Classical and Quantum Gravity, Volume 21, pp 5245-5251; https://doi.org/10.1088/0264-9381/21/22/015
Published: 29 October 2004
Classical and Quantum Gravity, Volume 21, pp 5233-5243; https://doi.org/10.1088/0264-9381/21/22/014
E Buffenoir, , K Noui,
Published: 29 October 2004
Classical and Quantum Gravity, Volume 21, pp 5203-5220; https://doi.org/10.1088/0264-9381/21/22/012
Ofer Aharony, Joseph Marsano, , Toby Wiseman
Published: 29 October 2004
Classical and Quantum Gravity, Volume 21, pp 5169-5191; https://doi.org/10.1088/0264-9381/21/22/010
Published: 7 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa9151
Abstract:
We study a large family of metric-affine theories with a projective symmetry, including non-minimally coupled matter fields which respect this invariance. The symmetry is straightforwardly realised by imposing that the connection only enters through the symmetric part of the Ricci tensor, even in the matter sector. We leave the connection completely free (including torsion), and obtain its general solution as the Levi-Civita connection of an auxiliary metric, showing that the torsion only appears as a projective mode. This result justifies the widely used condition of setting vanishing torsion in these theories as a simple gauge choice. We apply our results to some particular cases considered in the literature, including the so-called Eddington-inspired-Born–Infeld theories among others. We finally discuss the possibility of imposing a gauge fixing where the connection is metric compatible, and comment on the genuine character of the non-metricity in theories where the two metrics are not conformally related.
Marcus Khuri,
Published: 5 October 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa9154
Abstract:
We show that near-horizon geometries in the presence of a positive cosmological constant cannot exist with ring topology. In particular, de Sitter black rings with vanishing surface gravity do not exist. Our result relies on a known mathematical theorem which is a straightforward consequence of a type of energy condition for a modified Ricci tensor, similar to the curvature-dimension conditions for the m-Bakry-Émery-Ricci tensor.
, , Takahiro Miyamoto, , Koki Okutomi, Yoshinori Fujii, Hiroki Tanaka, Mark A Barton, Ryutaro Takahashi, , et al.
Published: 4 October 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa90e3
Abstract:
KAGRA is a 3-km cryogenic interferometric gravitational wave telescope located at an underground site in Japan. In order to achieve its target sensitivity, the relative positions of the mirrors of the interferometer must be finely adjusted with attached actuators. We have developed a model to simulate the length control loops of the KAGRA interferometer with realistic suspension responses and various noises for mirror actuation. Using our model, we have designed the actuation parameters to have sufficient force range to acquire lock as well as to control all the length degrees of freedom without introducing excess noise.
Published: 30 November 2017
Classical and Quantum Gravity, Volume 35; https://doi.org/10.1088/1361-6382/aa90e7
Abstract:
We prove uniqueness of the near-horizon geometries arising from degenerate Kerr black holes within the collection of nearby vacuum near-horizon geometries.
Published: 2 October 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa9039
Abstract:
The (generalized) Rainich conditions are algebraic conditions which are polynomial in the (mixed-component) stress-energy tensor. As such they are logically distinct from the usual classical energy conditions (NEC, WEC, SEC, DEC), and logically distinct from the usual Hawking-Ellis (Segre-Plebanski) classification of stress-energy tensors (type I, type II, type III, type IV). There will of course be significant inter-connections between these classification schemes, which we explore in the current article. Overall, we shall argue that it is best to view the (generalized) Rainich conditions as a refinement of the classical energy conditions and the usual Hawking-Ellis classification.
Chen-Yu Liu, , Chi-Yong Lin
Published: 13 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa903b
Abstract:
We examine the dynamics of a neutral particle around a Kerr–Newman black hole, and in particular focus on the effects of the charge of the spinning black hole on the motion of the particle. We first consider the innermost stable circular orbits (ISCO) on the equatorial plane. It is found that the presence of the charge of the black hole leads to an effective potential for the particle with stronger repulsive effects than in the Kerr case. As a result, the radius of the ISCO decreases as the charge Q of the black hole increases for a fixed value of the black hole's angular momentum a. We then consider a kick on the particle from its initial orbit out of the equatorial motion. The perturbed motion of the particle will eventually be bounded, or unbounded so that the particle escapes to spatial infinity; the particle may also be captured by the black hole. We thus analytically and numerically determine the parameter regions of the corresponding motions, in terms of the initial radius of the orbital motion and the strength of the kick. The comparison is made with the motion of a neutral particle around a Kerr black hole.
Published: 2 October 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa903c
Abstract:
McVittie spacetimes embed the Schwarzschild(-(anti) de Sitter) spacetime in an isotropic FLRW background universe. We study the global structure of McVittie spacetimes with spatially non-flat FLRW backgrounds. We extend the definition of such spacetimes, previously given only for the flat and open cases, to the closed case. We revisit this definition and show how it gives rise to a unique spacetime (given the FLRW background, the mass parameter M and the cosmological constant Λ) in the open and flat cases. In the closed case, an additional free function of the cosmic time arises. We derive some basic results on the metric, curvature and matter content of McVittie spacetimes and derive a representation of the line element that makes the study of their global properties possible. In the closed case (independently of the free function mentioned above), the spacetime is confined (at each instant of time) to a region bounded by a minimum and a maximum area radius, and is bounded either to the future or to the past by a scalar curvature singularity. This allowed region only exists when the background scale factor is above a certain minimum. In the open case, radial null geodesics originate in finite affine time in the past at a boundary formed by the union of the Big Bang of the FLRW background and a non-singular hypersurface of varying causal character. In the case of eternally expanding open universes, we show that black holes are ubiquitous: ingoing radial null geodesics extend in finite affine time to a hypersurface that forms the boundary of the region from which photons can escape to future null infinity. We revisit the spatially flat McVittie spacetimes, and show that the black hole interpretation holds also in the case of a vanishing cosmological constant, contrary to a previous claim of ours.
Published: 13 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa903f
Abstract:
We study a sufficient condition for proving the stability of a black hole when the master equation for linear perturbations takes the form of the Schrödinger equation. If the potential contains a small negative region, the S-deformation method is usually used to show the non-existence of an unstable mode. However, in some cases, it is hard to find an appropriate deformation function analytically because the only way found so far to find it is by trial-and-error. In this paper, we show that it is easy to find a regular deformation function by numerically solving the differential equation such that the deformed potential vanishes everywhere, when the spacetime is stable. Even if the spacetime is almost marginally stable, our method still works. We also discuss a simple toy model which can be solved analytically, and show that the condition for the non-existence of a bound state is the same as that for the existence of a regular solution of the differential equation in our method. From these results, we conjecture that our criterion is also a necessary condition.
Published: 20 February 2006
Classical and Quantum Gravity, Volume 23, pp 1721-1761; https://doi.org/10.1088/0264-9381/23/5/016
Abstract:
We compute the one loop fermion self-energy for massless Dirac + Einstein in the presence of a locally de Sitter background. We employ dimensional regularization and obtain a fully renormalized result by absorbing all divergences with BPHZ counterterms. An interesting technical aspect of this computation is the need for a noninvariant counterterm owing to the breaking of de Sitter invariance by our gauge condition. Our result can be used in the quantum-corrected Dirac equation to search for inflation-enhanced quantum effects from gravitons, analogous to those which have been found for massless, minimally coupled scalars.
Published: 15 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8fe2
Abstract:
We introduce the extended Freudenthal–Rosenfeld–Tits magic square based on six algebras: the reals , complexes , ternions , quaternions , sextonions and octonions . The sextonionic row/column of the magic square appeared previously and was shown to yield the non-reductive Lie algebras, , , , and , for and respectively. The fractional ranks are used to denote the semi-direct extension of the simple Lie algebra in question by a unique (up to equivalence) Heisenberg algebra. The ternionic row/column yields the non-reductive Lie algebras, , , , and , for and respectively. The fractional ranks here are used to denote the semi-direct extension of the semi-simple Lie algebra in question by a unique (up to equivalence) nilpotent 'Jordan' algebra. We present all possible real forms of the extended magic square. It is demonstrated that the algebras of the extended magic square appear quite naturally as the symmetries of supergravity Lagrangians. The sextonionic row (for appropriate choices of real forms) gives the non-compact global symmetries of the Lagrangian for the maximal , magic and magic non-supersymmetric theories, obtained by dimensionally reducing the parent theories on a circle, with the graviphoton left undualised. In particular, the extremal intermediate non-reductive Lie algebra (which is not a subalgebra of ) is the non-compact global symmetry algebra of , supergravity as obtained by dimensionally reducing , supergravity with symmetry on a circle. On the other hand, the ternionic row (for appropriate choices of real forms) gives the non-compact global symmetries of the Lagrangian for the maximal , magic and magic non-supersymmetric theories, as obtained by dimensionally reducing the parent theories on a circle. In particular, the Kantor–Koecher–Tits intermediate non-reductive Lie algebra is the non-compact global symmetry algebra of , supergravity as obtained by dimensionally reducing , supergravity with symmetry on a circle.
Published: 11 December 2017
Classical and Quantum Gravity, Volume 35; https://doi.org/10.1088/1361-6382/aa8fb3
Abstract:
We illustrate and examine diverse approaches to the quantum matter–gravity system which refer to the Born–Oppenheimer (BO) method. In particular we first examine a quantum geometrodynamical approach introduced by other authors in a manner analogous to that previously employed by us, so as to include back reaction and non-adiabatic contributions. On including such effects it is seen that the unitarity violating effects previously found disappear. A quantum loop space formulation (based on a hybrid quantisation, polymer for gravitation and canonical for matter) also refers to the BO method. It does not involve the classical limit for gravitation and has a highly peaked initial scalar field state. We point out that it does not resemble in any way our traditional BO approach. Instead it does resemble an alternative, canonically quantised, non BO approach which we have also previously discussed.
, Fabien Casse, Philippe Grandclement, Eric Gourgoulhon, Frederic Dauvergne
Published: 28 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8fb5
Abstract:
We present two-dimensional general relativistic hydrodynamics simulations of free-falling gas clouds onto rotating boson stars (BS). Those objects consist of a complex scalar field coupled to gravity. BS are interesting as black hole (BH) mimickers: they are very compact objects but without any event horizon. It is then expected that the physics around BS differs from that around BH. In this paper, we consider two BS configurations and study the trajectories and internal properties of infalling gas clouds, varying their initial positions. We follow the various disruption phases of the cloud until the formation, in some cases, of a gas torus in the inner region of the BS. We then discuss the cloud capture process by BS and the torus formation. We find that the characteristic time for torus formation increases when the initial distance between the cloud and the BS decreases.
Published: 27 September 2017
Classical and Quantum Gravity, Volume 35; https://doi.org/10.1088/1361-6382/aa8f7a
Abstract:
This paper presents the recent version of the Lunar Laser Ranging (LLR) analysis model at the Institut für Erdmessung (IfE), Leibniz Universität Hannover and highlights a few tests of Einstein's theory of gravitation using LLR data. Investigations related to a possible temporal variation of the gravitational constant, the equivalence principle, the PPN parameters β and γ as well as the geodetic precession were carried out. The LLR analysis model was updated by gravitational effects of the Sun and planets with the Moon as extended body. The higher-order gravitational interaction between Earth and Moon as well as effects of the solid Earth tides on the lunar motion were refined. The basis for the modeled lunar rotation is now a 2-layer core/mantle model according to the DE430 ephemeris. The validity of Einstein's theory was studied using this updated analysis model and an LLR data set from 1970 to January 2015. Within the estimated accuracies, no deviations from Einstein's theory are detected. A relative temporal variation of the gravitational constant is estimated as $\dot{G}/G_0$ = (7.1±7.6)×10^-14 yr^-1, the test of the equivalence principle gives Δ(m_g/m_i)_EM = (-3±5)×10^-14 and the Nordtvedt parameter η = (-0.2±1.1)×10^-4, the PPN parameters β and γ are determined as β-1 = (-4.5±5.6)×10^-5 and γ-1 = (-1.2±1.2)×10^-4, and the geodetic precession is confirmed within 0.09%. The results for selected relativistic parameters are obtained by introducing constraints from an LLR solution without estimating relativistic quantities. The station coordinates are constrained for the estimation of $\dot{G}/G_0$, β and γ; the initial value of the core rotation vector is constrained to a reasonable model value for the estimation of $\dot{G}/G_0$ and the geodetic precession. A constrained z-component of the initial lunar velocity is used for the estimation of the geodetic precession.
, , Behrouz Mirza
Published: 27 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8f7c
Abstract:
Lorentz gauge theory (LGT) is a feasible candidate for a theory of quantum gravity in which routine field theory calculations can be carried out perturbatively without encountering too many divergences. In LGT the spin of matter also gravitates. The spin-generated gravity is expected to be much stronger than that generated by mass and could be explored in current colliders. In this article the observable signals of the theory in an electron-positron collider are investigated. We specifically study pair annihilation into two gravitons, and LGT corrections to processes like $e^-+e^+\rightarrow \mu^-+\mu^+$ and $e^-+e^+\rightarrow e^-+e^+$.
, Bianca Dittrich
Published: 26 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8f24
Abstract:
One of the most pressing issues for loop quantum gravity and spin foams is the construction of the continuum limit. In this paper, we propose a systematic coarse-graining scheme for three-dimensional lattice gauge models including spin foams. This scheme is based on the concept of decorated tensor networks, which have been introduced recently. Here we develop an algorithm applicable to gauge theories with non-Abelian groups, which for the first time allows for the application of tensor network coarse-graining techniques to proper spin foams. The procedure deals efficiently with the large redundancy of degrees of freedom resulting from gauge symmetry. The algorithm is applied to 3D spin foams defined on a cubical lattice which, in contrast to a proper triangulation, allows for non-trivial simplicity constraints. This mimics the construction of spin foams for 4D gravity. For lattice gauge models based on a finite group we use the algorithm to obtain phase diagrams, encoding the continuum limit of a wide range of these models. We find phase transitions for various families of models carrying non-trivial simplicity constraints.
, Jason Koeller,
Published: 26 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8f2c
Abstract:
The quantum null energy condition (QNEC) is a conjectured bound on components $(T_{kk} = T_{ab} k^a k^b)$ of the stress tensor along a null vector $k^a$ at a point $p$ in terms of a second $k$-derivative of the von Neumann entropy $S$ on one side of a null congruence $N$ through $p$ generated by $k^a$. The conjecture has been established for super-renormalizable field theories at points $p$ that lie on a bifurcate Killing horizon with null tangent $k^a$ and for large-N holographic theories on flat space. While the Koeller-Leichenauer holographic argument clearly yields an inequality for general $(p,k^a)$, more conditions are generally required for this inequality to be a useful QNEC. For $d\le 3$, for an arbitrary background metric we show that the QNEC is naturally finite and independent of renormalization scheme when the expansion $\theta$ of $N$ at the point $p$ vanishes. This is consistent with the original QNEC conjecture which required $\theta$ and the shear $\sigma_{ab}$ to satisfy $\theta |_p= \dot{\theta}|_p =0$, $\sigma_{ab}|_p=0$. But for $d=4,5$ more conditions than even these are required. In particular, we also require the vanishing of additional derivatives and a dominant energy condition. In the above cases the holographic argument does indeed yield a finite QNEC, though for $d\ge6$ we argue these properties fail even for weakly isolated horizons (where all derivatives of $\theta, \sigma_{ab}$ vanish) that also satisfy a dominant energy condition. On the positive side, a corollary to our work is that, when coupled to Einstein-Hilbert gravity, $d \le 3$ holographic theories at large $N$ satisfy the generalized second law (GSL) of thermodynamics at leading order in Newton's constant $G$. This is the first GSL proof which does not require the quantum fields to be perturbations to a Killing horizon.
Richard H Price,
Published: 26 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8f29
Abstract:
The recent detection of gravitational waves has generated interest in alternatives to the black hole interpretation of sources. A subset of such alternatives involves a prediction of gravitational wave "echoes". We consider two aspects of possible echoes: First, general features of echoes coming from spacetime reflecting conditions. We find that the detailed nature of such echoes does not bear any clear relationship to quasi-normal frequencies. Second, we point out the pitfalls in the analysis of local "reflecting walls" near the horizon of rapidly rotating black holes.
David Langlois, Hongguang Liu, ,
Published: 26 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8f2f
Abstract:
Recently, Chamseddine and Mukhanov introduced a higher-derivative scalar-tensor theory which leads to a modified Friedmann equation allowing for bouncing solutions. As we note in the present work, this Friedmann equation turns out to reproduce exactly the loop quantum cosmology effective dynamics for a flat isotropic and homogeneous space-time. We generalize this result to obtain a class of scalar-tensor theories, belonging to the family of mimetic gravity, which all reproduce the loop quantum cosmology effective dynamics for flat, closed and open isotropic and homogeneous space-times.
Published: 26 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa89f4
Abstract:
Gravitoelectromagnetism (GEM) as a theory for gravity has been developed similar to the electromagnetic field theory. A weak field approximation of Einstein's theory of relativity is similar to GEM. This theory has been quantized. Traditional Bhabha scattering, electron–positron scattering, is based on quantized electrodynamics theory. Usually the amplitude is written in terms of a one-photon exchange process. With the development of quantized GEM theory, the scattering amplitude will have an additional component based on the exchange of one graviton at the lowest order of perturbation theory. An analysis will provide the relative importance of the two amplitudes for Bhabha scattering, and how it changes as the energy of the exchanged particles increases.
Steven Johnston
Published: 17 September 2015
Classical and Quantum Gravity, Volume 32; https://doi.org/10.1088/0264-9381/32/19/195020
Abstract:
The causal set approach to quantum gravity models spacetime as a discrete structure—a causal set. Recent research has led to causal set models for the retarded propagator for the Klein–Gordon equation and the d'Alembertian operator. These models can be compared to their continuum counterparts via a sprinkling process. It has been shown that the models agree exactly with the continuum quantities in the limit of an infinite sprinkling density—the continuum limit. This paper obtains the correction terms for these models for sprinkled causal sets with a finite sprinkling density. These correction terms are an important step towards testable differences between the continuum and discrete models that could provide evidence of spacetime discreteness.
, M Brüggen, C Gheller, S Hackstein, D Wittor, P M Hinz
Published: 7 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8e60
Abstract:
The origin of extragalactic magnetic fields is still poorly understood. Based on a dedicated suite of cosmological magneto-hydrodynamical simulations with the ENZO code we have performed a survey of different models that may have caused present-day magnetic fields in galaxies and galaxy clusters. The outcomes of these models differ in cluster outskirts, filaments, sheets and voids and we use these simulations to find observational signatures of magnetogenesis. With these simulations, we predict the signal of extragalactic magnetic fields in radio observations of synchrotron emission from the cosmic web, in Faraday rotation, in the propagation of ultra high energy cosmic rays, in the polarized signal from fast radio bursts at cosmological distance and in spectra of distant blazars. In general, primordial scenarios in which present-day magnetic fields originate from the amplification of weak (≤) uniform seed fields result in more homogeneous and relatively easier to observe magnetic fields than astrophysical scenarios, in which present-day fields are the product of feedback processes triggered by stars and active galaxies. In the near future the best evidence for the origin of cosmic magnetic fields will most likely come from a combination of synchrotron emission and Faraday rotation observed at the periphery of large-scale structures.
Published: 19 December 2017
Classical and Quantum Gravity, Volume 35; https://doi.org/10.1088/1361-6382/aa8e70
Abstract:
We analyze the stability of the Cauchy horizon associated with a globally naked, shell-focussing singularity arising from the complete gravitational collapse of a spherical dust cloud. In a previous work, we have studied the dynamics of spherical test scalar fields on such a background. In particular, we proved that such fields cannot develop any divergences which propagate along the Cauchy horizon. In the present work, we extend our analysis to the more general case of test fields without symmetries and to linearized gravitational perturbations with odd parity. To this purpose, we first consider test fields possessing a divergence-free stress-energy tensor satisfying the dominant energy condition, and we prove that a suitable energy norm is uniformly bounded in the domain of dependence of the initial slice. In particular, this result implies that free-falling observers co-moving with the dust particles measure a finite energy of the field, even as they cross the Cauchy horizon at points lying arbitrarily close to the central singularity. Next, for the case of Klein–Gordon fields, we derive point-wise bounds from our energy estimates which imply that the scalar field cannot diverge at the Cauchy horizon, except possibly at the central singular point. Finally, we analyze the behaviour of odd-parity, linear gravitational and dust perturbations of the collapsing spacetime. Similarly to the scalar field case, we prove that the relevant gauge-invariant combinations of the metric perturbations stay bounded away from the central singularity, implying that no divergences can propagate in the vacuum region. Our results are in accordance with previous numerical studies and analytic work in the self-similar case.
, Brian O’Reilly, Mario Diaz
Published: 13 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8e6b
Abstract:
Noise produced by light scattered from objects limited the sensitivity of the laser interferometer gravitational-wave observatory (LIGO) during the observation period O1. This scattering noise follows a well-defined model based on the motion of the object from which the light is scattered. A method based on the Hilbert–Huang transform was developed to identify the scattering surfaces. In this document, we present the efficiency of our method in identifying scattering objects in LIGO.
Rodrigo Eyheralde, Miguel Campiglia, Rodolfo Gambini,
Published: 15 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8e30
Abstract:
We study Hawking radiation on the quantum space-time of a collapsing null shell. We use the geometric optics approximation as in Hawking's original papers to treat the radiation. The quantum space-time is constructed by superposing the classical geometries associated with collapsing shells with uncertainty in their position and mass. We show that there are departures from thermality in the radiation even though we are not considering a back reaction. One recovers the usual profile for the Hawking radiation as a function of frequency in the limit where the space-time is classical. However, when quantum corrections are taken into account, the profile of the Hawking radiation as a function of time contains information about the initial state of the collapsing shell. More work will be needed to determine whether all the information can be recovered. The calculations show that non-trivial quantum effects can occur in regions of low curvature when horizons are involved, as is proposed in the firewall scenario, for instance.
Published: 13 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8e31
Abstract:
We apply a master equation approximation with dynamical coarse graining to a pair of detectors interacting with a scalar field. By solving the master equation numerically, we investigate the evolution of negativity between comoving detectors in de Sitter space. For a massless conformal scalar field with conformal vacuum, it is found that a pair of detectors can perceive entanglement beyond the Hubble horizon scale if the initial separation of detectors is sufficiently small. At the same time, violation of the Bell–Clauser–Horne–Shimony–Holt inequality on the super-horizon scale is also detected. For a massless minimal scalar field with Bunch–Davies vacuum, on the other hand, the entanglement decays within Hubble time scale, owing to the quantum noise caused by particle creations in de Sitter space, and the entanglement on the super-horizon scale cannot be detected.
Published: 21 September 2017
Classical and Quantum Gravity, Volume 35; https://doi.org/10.1088/1361-6382/aa8e2e
Abstract:
In this work we present a no-hair theorem which discards the existence of four-dimensional asymptotically flat, static and spherically symmetric or stationary axisymmetric, non-trivial black holes in the frame of f(R) gravity under the metric formalism. Here we show that our no-hair theorem can also discard asymptotically de Sitter stationary and axisymmetric non-trivial black holes. The novelty is that this no-hair theorem is built without resorting to the known mapping between f(R) gravity and scalar-tensor theory. An advantage is then that our no-hair theorem applies as well to metric f(R) models that cannot be mapped to scalar-tensor theory.
Karen Schulze-Koops, ,
Published: 18 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8d46
Abstract:
We study the propagation of light bundles in non-empty spacetime, as most of the Universe is filled by baryonic matter in the form of a (dilute) plasma. Here we restrict to the case of a cold (i.e., pressureless) and non-magnetised plasma. Then the influence of the medium on the light rays is encoded in the spacetime dependent plasma frequency. Our result for a general spacetime generalises the Sachs equations to the case of a cold plasma Universe. We find that the reciprocity law (Etherington theorem), the relation that connects area distance with luminosity distance, is modified. Einstein's field equation is not used, i.e., our results apply independently of whether or not the plasma is self-gravitating. As an example, our findings are applied to a homogeneous plasma in a Robertson-Walker spacetime. We find small modifications of the cosmological redshift of frequencies and of the Hubble law.
, , Frederic H Vincent, Philippe Grandclement, Eric Gourgoulhon
Published: 18 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8d39
Abstract:
The second-generation beam combiner at the VLT, GRAVITY, observes the stars orbiting the compact object located at the center of our galaxy, with an unprecedented astrometric accuracy of 10 $\mu$as. The nature of this compact source is still unknown since black holes are not the only candidates explaining the 4 million solar masses at the Galactic center. Boson stars are such an alternative model to black holes. This paper focuses on the study of trajectories of stars orbiting a boson star and a Kerr black hole. We put in light strong differences between orbits obtained in both metrics when considering stars with sufficiently close pericenters to the compact object, typically $\lesssim 30~M$. Discovery of closer stars to the Galactic center than the S2 star by the GRAVITY instrument would thus be a powerful tool to possibly constrain the nature of the central source.
Published: 27 May 2014
Classical and Quantum Gravity, Volume 31; https://doi.org/10.1088/0264-9381/31/12/125008
Abstract:
In a seminal paper, Kaminski et al for the first time extended the definition of spin foam models to arbitrary boundary graphs. This is a prerequisite in order to make contact with the canonical formulation of loop quantum gravity, whose Hilbert space contains all these graphs. This makes it finally possible to investigate the question whether any of the presently considered spin foam models yields a rigging map for any of the presently defined Hamiltonian constraint operators. We postulate a rigging map by summing over all abstract spin foams with arbitrary but given boundary graphs. The states induced on the boundary of these spin foams can then be identified with elements in the gauge invariant Hilbert space of the canonical theory. Of course, such a sum over all spin foams is potentially divergent and requires a regularization. Such a regularization can be obtained by introducing specific cut-offs and a weight for every single foam. Such a weight could for example be derived from a generalized formal group field theory allowing for arbitrary interaction terms. Since such a derivation is, however, technically involved, we forgo presenting a strict derivation and assume that there exists a weight satisfying certain natural axioms, most importantly a gluing property. These axioms are motivated by the requirement that spin foam amplitudes should define a rigging map (physical inner product) induced by the Hamiltonian constraint. In the analysis of the resulting object we are able to identify an elementary spin foam transfer matrix that allows one to generate any finite foam as a finite power of the transfer matrix. It transpires that the sum over spin foams, as written, does not define a projector on the physical Hilbert space. This statement is independent of the concrete spin foam model and Hamiltonian constraint. However, the transfer matrix potentially contains the necessary ingredients to construct a proper rigging map in terms of a modified transfer matrix.
, Usha Kulshreshtha, Daya Shankar Kulshreshtha
Published: 1 August 2014
Classical and Quantum Gravity, Volume 31; https://doi.org/10.1088/0264-9381/31/16/167001
Abstract:
We study boson shells and boson stars in a theory of a complex scalar field coupled to the gauge field and Einstein gravity with the potential . This could be considered either as a theory of a massive complex scalar field coupled to an electromagnetic field and gravity in a conical potential, or as a theory in the presence of a potential that is an overlap of a parabolic and conical potential. Our theory has a positive cosmological constant . Boson stars are found to come in two types, having either ball-like or shell-like charge density. We studied the properties of these solutions and also determined their domains of existence for some specific values of the parameters of the theory. Similar solutions have also been obtained by Kleihaus, Kunz, Laemmerzahl and List, in a V-shaped scalar potential.
James M Cordes
Published: 4 November 2013
Classical and Quantum Gravity, Volume 30; https://doi.org/10.1088/0264-9381/30/22/224002
, Luciano Rezzolla
Published: 19 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8d98
Abstract:
In grid-based codes that provide the combined solution of the Einstein equations and of relativistic hydrodynamics, the history of the fluid is not simple to track, especially when compared with particle-based codes. The use of tracers, namely massless particles that are advected with the flow, represents a simple and effective way to solve this problem. Yet, the use of tracers in numerical relativity is far from being settled and several issues, such as the impact of different placements in time and space of the tracers, or the relation between the placement and the description of the underlying fluid, have not yet been addressed. In this paper we present the first detailed discussion of the use of tracers in numerical-relativity simulations, focussing both on unbound material (such as that needed as input for nuclear-reaction networks calculating r-process nucleosynthesis in mergers of neutron stars) and on bound material (such as that in the core of the object produced from the merger of two neutron stars). In particular, when interested in unbound matter, we have evaluated different placement schemes that could be used to initially distribute the tracers and how well their predictions match those obtained when using information from the actual fluid flow. Countering our naive expectations, we found that the most effective method relies neither on the rest-mass density distribution nor on the unbound fluid, but simply distributes tracers uniformly in rest-mass density. This prescription leads to the closest matching with the information obtained from the hydrodynamical solution. When considering bound matter, we demonstrate that tracers can provide insight into the fine details of the fluid motion, as they can be used to track the evolution of fluid elements or to calculate the variation of quantities that are conserved along streamlines of adiabatic flows.
Published: 19 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8da8
Abstract:
Recently a consistent non-perturbative quantization of the Schwarzschild interior resulting in a bounce from black hole to white hole geometry has been obtained by loop quantizing the Kantowski-Sachs vacuum spacetime. As in other spacetimes where the singularity is dominated by the Weyl part of the spacetime curvature, the structure of the singularity is highly anisotropic in the Kantowski-Sachs vacuum spacetime. As a result the bounce turns out to be in general asymmetric creating a large mass difference between the parent black hole and the child white hole. In this manuscript, we investigate under what circumstances a symmetric bounce scenario can be constructed in the above quantization. Using the setting of Dirac observables and geometric clocks we obtain a symmetric bounce condition which can be satisfied by a slight modification in the construction of loops over which holonomies are considered in the quantization procedure. These modifications can be viewed as quantization ambiguities, and are demonstrated in three different flavors which all lead to a non-singular black to white hole transition with identical masses. Our results show that quantization ambiguities can mitigate or even qualitatively change some key features of physics of singularity resolution. Further, these results are potentially helpful in motivating and constructing symmetric black to white hole transition scenarios.
, , Guillaume Faye, Francesco Haardt,
Published: 19 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8da5
Abstract:
We highlight some subtleties that affect naive implementations of quadrupolar and octupolar gravitational waveforms from numerically-integrated trajectories of three-body systems. Some of those subtleties arise from the requirement that the source be contained in its "coordinate near zone" when applying the standard PN formulae for gravitational-wave emission, and from the need to use the non-linear Einstein equations to correctly derive the quadrupole emission formula. We show that some of these subtleties were occasionally overlooked in the literature, with consequences for published results. We also provide prescriptions that lead to correct and robust predictions for the waveforms computed from numerically-integrated orbits.
J Aasi, B P Abbott, R Abbott, T Abbott, M R Abernathy, T Accadia, , K Ackley, C Adams, T Adams, et al.
Published: 5 August 2014
Classical and Quantum Gravity, Volume 31; https://doi.org/10.1088/0264-9381/31/16/165014
Published: 14 December 2012
Classical and Quantum Gravity, Volume 30; https://doi.org/10.1088/0264-9381/30/1/013001
Published: 15 September 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8d06
Abstract:
The covariant Hamiltonian formulation for general relativity is studied in terms of self-dual variables on a manifold with an internal and lightlike boundary. At this inner boundary, new canonical variables appear: a spinor and a spinor-valued two-form that encode the entire intrinsic geometry of the null surface. At a two-dimensional cross-section of the boundary, quasi-local expressions for the generators of two-dimensional diffeomorphisms, time translations, and dilatations of the null normal are introduced and written in terms of the new boundary variables. In addition, a generalisation of the first law of black-hole thermodynamics for arbitrary null surfaces is found, and the relevance of the framework for non-perturbative quantum gravity is stressed and explained.
Published: 13 November 2017
Classical and Quantum Gravity, Volume 34; https://doi.org/10.1088/1361-6382/aa8cf9
Abstract:
On the path towards quantum gravity we find friction between temporal relations in quantum mechanics (QM) (where they are fixed and field-independent), and in general relativity (where they are field-dependent and dynamic). This paper aims to attenuate that friction, by encoding gravity in the timeless configuration space of spatial fields with dynamics given by a path integral. The framework demands that boundary conditions for this path integral be uniquely given, but unlike other approaches where they are prescribed—such as the no-boundary and the tunneling proposals—here I postulate basic principles to identify boundary conditions in a large class of theories. Uniqueness arises only if a reduced configuration space can be defined and if it has a profoundly asymmetric fundamental structure. These requirements place strong restrictions on the field and symmetry content of theories encompassed here; shape dynamics is one such theory. When these constraints are met, any emerging theory will have a Born rule given merely by a particular volume element built from the path integral in (reduced) configuration space. Also as in other boundary proposals, Time, including space-time, emerges as an effective concept; valid for certain curves in configuration space but not assumed from the start. When some such notion of time becomes available, conservation of (positive) probability currents ensues. I show that, in the appropriate limits, a Schrödinger equation dictates the evolution of weakly coupled source fields on a classical gravitational background. Due to the asymmetry of reduced configuration space, these probabilities and currents avoid a known difficulty of standard WKB approximations for Wheeler–DeWitt in minisuperspace: the selection of a unique Hamilton–Jacobi solution to serve as background. I illustrate these constructions with a simple example of a full quantum gravitational theory (i.e. not in minisuperspace) for which the formalism is applicable, and give a formula for calculating gravitational semi-classical relative probabilities in it.
Gaussian Integers and Pythagorean Triplets
1. Sep 22, 2007
ramsey2879
It is well known that 4n(n+1) + 1 is a square if n is an integer; but if n is a Gaussian integer, i.e., 4n(n+1) + 1 = A + Bi, then the norm A^2 + B^2 is always a square! The proof is quite easy, since A = u^2 - v^2 and B = 2uv.
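[Editor's note: a quick numerical check of this claim, not part of the original post. For Gaussian integers n = x + yi, write 4n(n+1) + 1 = A + Bi and verify that A^2 + B^2 is a perfect square; `is_perfect_square` is a helper name chosen here for illustration.]

```python
import math

def is_perfect_square(m):
    # m is a perfect square iff the integer square root squares back to m
    r = math.isqrt(m)
    return r * r == m

# Sweep a grid of Gaussian integers n = x + yi and check the norm claim.
for x in range(-10, 11):
    for y in range(-10, 11):
        n = complex(x, y)
        w = 4 * n * (n + 1) + 1          # equals (2n+1)^2
        A, B = round(w.real), round(w.imag)
        assert is_perfect_square(A * A + B * B)
```

The check succeeds for every n on the grid, since 4n(n+1)+1 = (2n+1)^2 and the norm of a square is the square of the norm.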
2. Sep 23, 2007
robert Ihnot
4n(n+1)+1 = (2n+1)^2, but it cannot equal A+Bi, because you are not considering the imaginary part, are you?
If, instead, we take the form Z=A+Bi, and look at (2Z+1)^2, what do we do now?
3. Sep 23, 2007
ramsey2879
If n=Z = 1+2i for example
$$4(1+2i)(2+2i) + 1 = -7+24i = (3+4i)^2 = (2Z+1)^2 = ((2A+1) + 2Bi)^2$$
$$-7 = (2A+1)^2 - (2B)^2 = u^2 - v^2$$
$$24 = 2(2A+1)(2B) = 2uv$$
So we have the x and y of the Pythagorean triple: (7*7 + 24*24 = 25*25)
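[Editor's note: the worked example above can be reproduced with exact integer arithmetic on (a, b) pairs standing for a + bi; `gmul` is a hypothetical helper name, not from the thread.]

```python
# Exact Gaussian-integer multiplication on (a, b) pairs representing a + bi.
def gmul(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

Z = (1, 2)                              # Z = 1 + 2i
two_Z_plus_1 = (2 * Z[0] + 1, 2 * Z[1]) # 2Z + 1 = 3 + 4i
W = gmul(two_Z_plus_1, two_Z_plus_1)    # (2Z + 1)^2
assert W == (-7, 24)                    # matches -7 + 24i above
x, y = abs(W[0]), abs(W[1])
assert x * x + y * y == 25 * 25         # the Pythagorean triple (7, 24, 25)
```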
Last edited: Sep 23, 2007
4. Sep 23, 2007
robert Ihnot
First let Z* = conjugate of Z, then if (2Z+1)=A+Bi, we have (2Z*+1)(2Z+1) =A^2 +B^2
This then results in (2a+1)^2 + (2b)^2 = A^2+B^2, which is what is expected. But it does not make (2Z+1)(2Z*+1) a square.
You say, quote: The proof is quite easy since A = u^2 - v^2 and B = 2uv.
You introduce the above, in the second sentence, which is what is required to show that A^2+B^2 =(u^2+v^2)^2. But it does not relate to (2Z+1)^2, as introduced in the first sentence.
Anyway, since with sentence 2 you have created the Pythagorean triples, we need only say for example that since 3^2+4^2 = 5^2, the Gaussian integer 3+4i has a square as its norm.
5. Sep 24, 2007
ramsey2879
You must multiply 4*Z*(Z+1) and add 1 to get a square that I am talking about.
However, any Gaussian integer squared is of the form A+Bi where A = u^2 - v^2 and B = 2uv, since I read that the norm of a product of two complex numbers is the product of their norms. So yes, it is possible to have two squares that do not sum to a square, but that is not possible for the A and B where A+Bi is the square of a Gaussian integer.
Last edited: Sep 24, 2007
6. Sep 24, 2007
ramsey2879
The last couple of posts are troubling at first glance. Of course the product of two conjugates equals A^2 + B^2, but here that is of the form A' = A^2 + B^2 and B' = 0, so the norm is (A^2+B^2)^2, which is what is to be expected. What my first post states is that 4Z(Z+1)+1 = (2Z+1)^2 = A + Bi where A = u^2 - v^2 and B = 2uv, which I showed in a later post to be true. The product of two conjugates is also of the form A' = u^2 - v^2 and B' = 2uv, since this is a trivial case where v = 0.
Last edited: Sep 24, 2007
7. Sep 24, 2007
robert Ihnot
O.K., you have (a+bi)^2 = a^2-b^2 +2abi. Thus N((a+bi)^2) = (a^2-b^2)^2+(2ab)^2 = (a^2+b^2)^2.
So then you are saying (2Z+1)^2 = A+Bi is such that (2Z*+1)^2(2Z+1)^2 = A^2 +B^2.
I guess I was confused about how you were writing that up.
8. Sep 24, 2007
ramsey2879
I guess we confused each other. I went back and corrected my last post since A = u^2-v^2 not u^2+v^2; but I don't think I ever said anything about the product of the squares of two conjugates except that by inference they too are of the form A+Bi where A = u^2-v^2 and B =2uv.
9. Sep 24, 2007
ramsey2879
It is easily shown that all Gaussian integers that are squares are of the form A+Bi where A = u^2-v^2 and B = -2uv. Therefore all Gaussian integers that are squares have a square norm. But not all Gaussian integers that have a square norm are squares, since 3 is not a Gaussian square but has a square norm, and 3*Z^2 has a square norm but likewise is not a square. Is it true that all Gaussian integers that have a square norm are either a Gaussian square or a product of a Gaussian square and an integer which is not a Gaussian square?
Edit: I forgot to consider the Gaussian units; "i" is not a Gaussian square, so I have to amend my question. Are the only Gaussian integers that have a square norm either a Gaussian square or the product of i and/or an integer with a Gaussian square?
Last edited: Sep 24, 2007
10. Sep 25, 2007
robert Ihnot
I think the first problem here was The Axiom of Symbolic Stability. It is well known that 4n(n+1) + 1 is a square if n is an integer, but if n is a Gaussian integer, i.e., 4n(n+1) + 1 = A + Bi, then the norm (A^2 + B^2) is always a square! The proof is quite easy since A = u^2 - v^2 and B = 2uv. I failed to recognize that A+Bi was the same as u+vi.
I also was considering, wrongly, a Gaussian integer to be only those that decomposed over the imaginary. I had not considered 3, for example. With the exception of 2, the only primes that will decompose are those congruent to 1 Mod 4. Thus the Pythagorean triples are built up from 5, 13, 17, etc. For example 5 = (1+2i)(1-2i) = (2+i)(2-i). Or products of primes == 1 Mod 4, such as: 65 = 8^2+1^2 = 7^2+4^2. (Which can be done in two distinct ways.) However, 3 can be present only in a squared form, such as: (15)^2 = 9^2+12^2.
Without going into the question of a Gaussian square, I think you are right. As for 2, (1+i) and (1-i) are not distinct primes, since (-i)(1+i) = (1-i), so they differ only by a unit.
Last edited: Sep 25, 2007 |
MU Information Technology (Semester 3)
Applied Mathematics 3
December 2013
Total marks: --
Total time: --
INSTRUCTIONS
(1) Assume appropriate data and state your reasons
(2) Marks are given to the right of every question
(3) Draw neat diagrams wherever necessary
1 (a) $Find \ L^{-1}\left\{\frac{e^{\frac{4-3}{s}}}{{\left(s+4\right)}^{\frac{5}{2}}}\right\}$
5 M
1 (b) Find the constants a, b, c, d and e if
$f\left(z\right)=\left(ax^4+bx^2y^2+cy^4+dx^2-2y^2\right)+i\left(4x^3y-exy^3+4xy\right)$ is analytic.
5 M
1 (c) Obtain half range Fourier cosine series for f(x)=sin x, x ∈ (0, π).
5 M
1 (d) If $\bar{r}$ and $r$ have their usual meanings and $\bar{a}$ is a constant vector, prove that
$\nabla\times\left[\frac{\bar{a}\times\bar{r}}{r^n}\right]=\frac{\left(2-n\right)}{r^n}\bar{a}+\frac{n\left(\bar{a}\cdot{}\bar{r}\right)\bar{r}}{r^{n+2}}$
5 M
2 (a) Find the analytic function $f(z)=u+iv$ if $3u+2v=y^2-x^2+16xy$.
6 M
2 (b) Find the z-transform of $\left\{a^{\left\vert{}k\right\vert{}}\right\}$ and hence find the z-transform of $\left\{{\left(\frac{1}{2}\right)}^{\left\vert{}k\right\vert{}}\right\}$
6 M
2 (c) Obtain the Fourier series expansion for $f\left(x\right)=\sqrt{1-\cos x},\ x\in{}\left(0,\ 2\pi{}\right)$ and hence deduce that $\sum_{n=1}^{\infty{}}\frac{1}{4n^2-1}=\frac{1}{2}.$
8 M
3 (a) $\left(i\right)\ \ L^{-1}\left\{\frac{s}{\left(2s+1\right)^2}\right\}$
$\left(ii\right)\ \ L^{-1}\left\{\log\frac{s^2+a^2}{\sqrt{s+b}}\right\}$
6 M
3 (b) Find the orthogonal trajectories of the family of curves $e^{-x}\cos y + xy = \alpha$, where $\alpha$ is a real constant, in the xy-plane.
6 M
3 (c) Show that $\bar{F}=\left(y\,e^{xy}\cos z\right)\bar{i}+\left(x\,e^{xy}\cos z\right)\bar{j}-\left(e^{xy}\sin z\right)\bar{k}$ is irrotational. Find the scalar potential for $\bar{F}$ and evaluate $\int_c \bar{F}\cdot d\bar{r}$ along the curve joining the points (0,0,0) and (-1,2,π).
8 M
4 (a) Evaluate by Green's theorem $\int_c e^{-x}\sin y\,dx+e^{-x}\cos y\,dy$ where c is the rectangle whose vertices are (0,0), (π,0), (π, π/2) and (0, π/2)
6 M
4 (b) Find the half range sine series for the function.
$f\left(x\right)=\frac{2kx}{l},\ \ 0\leq{}x\leq{}\frac{l}{2}$
$f\left(x\right)=\frac{2k}{l}\left(l-x\right),\ \frac{l}{2}\leq{}x\leq{}l$
6 M
4 (c) Find the inverse z-transform of $\frac{1}{\left(z-3\right)\left(z-2\right)}$
(i) |z|<2
(ii) 2<|z|<3
(iii) |z|>3.
8 M
5 (a) Solve using Laplace transform.
$\frac{d^2y}{dx^2}+4\frac{dy}{dx}+3y=e^{-x},\ y\left(0\right)=1,\ y^{'}\left(0\right)=1.$
6 M
5 (b) Express $f(x)=\frac{\pi}{2}e^{-x}\cos x$ for x>0 as a Fourier sine integral and show that
$\int_0^{\infty{}}\frac{w^3\sin{wx}}{w^4+4}\ dw=\frac{\pi{}}{2}\ e^{-x}\cos{x.}$
6 M
5 (c) Evaluate $\iint_s \bar{F}\cdot\bar{n}\,ds$, where $\bar{F}=x\,\bar{i}-y\,\bar{j}+\left(z^2-1\right)\bar{k}$ and S is the closed surface formed by z=0, z=1, x^2+y^2=4, using the Gauss divergence theorem.
8 M
6 (a) Find the inverse Laplace transform by using convolution theorem.
$L^{-1}\left\{\frac{s^2+2s+3}{\left(s^2+2s+5\right)\left(s^2+2s+2\right)}\right\}$
6 M
6 (b) Find the directional derivative of $\phi = 4e^{2x-y+z}$ at the point (1, 1, -1) in the direction towards the point (-3,5,6).
6 M
6 (c) Find the image of the circle $x^2+y^2=1$ under the transformation $w=\frac{5-4z}{4z-2}$
8 M
# Ideals of $\mathbb{Z}[\zeta_{p}]$ factorise uniquely
I am trying to show that the ideals of $\mathbb{Z}[\zeta_{p}]$ factorise uniquely.
In know that $\mathbb{Z}[\zeta_{p}]$ is not a UFD in general. I also know that, for Dedekind rings, non-zero proper ideals have unique factorisation as a product of non-zero prime ideals. I think I just need to show that $\mathbb{Z}[\zeta_{p}]$ is a Dedekind ring.
Is that the case? If so, how to I show it? If not, what do I need to show?
• "Just" show $\;\Bbb Z[\zeta_p]\;$ is the ring of algebraic integers of the number field $\;\Bbb Q(\zeta_p)\;$ . Then it automatically follows it is a Dedekind ring. Apr 1 '17 at 13:16
• How would I go about doing that? Apr 1 '17 at 14:32
• You may want to try this: math.uiuc.edu/~r-ash/Ant/AntChapter7.pdf It is not trivial and can be lengthy to show such a thing. Apr 1 '17 at 15:54
• For the proof of unique factorization of ideals in rings of integers, the main step is en.wikipedia.org/wiki/Primary_decomposition and the fact that if $\mathcal{P}$ is a prime ideal then $\mathcal{O}_K/\mathcal{P}$ is an integral domain with finitely many elements and hence a field. Apr 1 '17 at 17:00
• The simpler example is $w =i \sqrt{5}, \mathcal{O}_K= \mathbb{Z}[w]$ whose class group has two elements : $(1)$ and $(2,1+w)$, so its ideals are of the form $(a), a \in \mathcal{O}_K$ or $(a,a\frac{1+w}{2}), a \in \mathcal{O}_K$ Apr 1 '17 at 17:16 |
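As a concrete illustration of why one passes to ideals (a numerical sketch, not part of the original comments): in $\mathbb{Z}[\sqrt{-5}]$ the element 6 has two genuinely different factorizations into irreducibles, even though the ideal $(6)$ factors uniquely into prime ideals.

```python
# Represent a + b*sqrt(-5) as the pair (a, b).
def norm(x):
    a, b = x
    return a * a + 5 * b * b

def mul(x, y):
    # (a + b*s)(c + d*s) with s^2 = -5
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

# 6 = 2 * 3 = (1 + sqrt(-5)) * (1 - sqrt(-5))
assert mul((2, 0), (3, 0)) == (6, 0)
assert mul((1, 1), (1, -1)) == (6, 0)

# All four factors are irreducible: a proper factorization would require
# an element of norm 2 or 3, but a^2 + 5b^2 never takes those values.
assert all(norm((a, b)) not in (2, 3)
           for a in range(-4, 5) for b in range(-4, 5))
print("6 factors two different ways in Z[sqrt(-5)]")
```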
• #### Minimal contention-free matrices with application to multicasting
(2000)
Article
Open Access
In this paper, we show that the multicast problem in trees can be expressed in term of arranging rows and columns of boolean matrices. Given a $p \times q$ matrix $M$ with 0-1 entries, the {\em shadow} of $M$ is ... |
# Show that $\forall n \in \mathbb{N} \left ( \left [(2+i)^n + (2-i)^n \right ]\in \mathbb{R} \right )$
Show that $\forall n \in \mathbb{N} \left ( \left [(2+i)^n + (2-i)^n \right ]\in \mathbb{R} \right )$
My Trig is really rusty and weak so I don't understand the given answer:
$(2+i)^n + (2-i)^n$
$= \left ( \sqrt{5} \right )^n \left (\cos n\theta + i \sin n\theta \right ) + \left ( \sqrt{5} \right )^n \left (\cos (-n\theta) + i \sin (-n\theta) \right )$
$= \left ( \sqrt{5} \right )^n \left ( \cos n\theta + \cos (-n\theta) + i \sin n\theta + i \sin (-n\theta) \right )$
$= \left ( \sqrt{5} \right )^n 2\cos n\theta$
-
You have $z^n=|z|^n\exp(ni\arg\,z)=|z|^n(\cos(n\arg\,z)+i\sin(n\arg\,z))$ for starters... – J. M. May 14 '12 at 8:40
This gives a neat formula. Another way of proving this is to show that (if we call your expression $a_n$) it satisfies the equation $a_n=4a_{n-1}-5a_{n-2}$ and work from there. – Mark Bennet May 14 '12 at 8:45
Where did Mark get that recursion relation, you ask? Note that $(z-(2+i))(z-(2-i))=z^2-4z+5$... it's the same theory behind Fibonacci sequences. – J. M. May 14 '12 at 8:52
...and the high-brow route is Newton-Girard: $x^n+y^n$ for integer $n$ is always expressible as a combination of $x+y$ and $xy$; for your particular case, $x+y=4$ and $xy=5$ (notice a pattern?) – J. M. May 14 '12 at 8:55
The binomial expansion works, because the odd powers of $i$ are attached to odd powers of $b$ and $-b$ respectively, so will cancel. – Mark Bennet May 14 '12 at 9:15
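Both the closed form and the recursion $a_n=4a_{n-1}-5a_{n-2}$ suggested in the comments are easy to check numerically; a short Python sketch (illustrative, not from the original thread):

```python
# a_n = (2+i)^n + (2-i)^n should be a real integer satisfying
# a_n = 4*a_{n-1} - 5*a_{n-2} with a_0 = 2, a_1 = 4.
a = [2, 4]
for n in range(2, 12):
    a.append(4 * a[-1] - 5 * a[-2])

for n in range(12):
    z = (2 + 1j) ** n + (2 - 1j) ** n
    assert abs(z.imag) < 1e-9     # imaginary parts cancel
    assert round(z.real) == a[n]  # matches the recursion

print(a[:6])  # [2, 4, 6, 4, -14, -76]
```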
There are two ways to write a complex number: rectangular form, e.g., $x+iy$, and polar form, e.g., $re^{i\theta}$. The conversion between them uses trig functions: $$re^{i\theta}=r\cos\theta+ir\sin\theta\;.\tag{1}$$ Going in the other direction, $$x+iy=\sqrt{x^2+y^2}\,e^{i\theta}\;,$$ where $\theta$ is any angle such that $$\cos\theta=\frac{x}{\sqrt{x^2+y^2}}\;\text{ and }\sin\theta=\frac{y}{\sqrt{x^2+y^2}}\;.$$ The important thing for your argument is that $r=\sqrt{x^2+y^2}$.
The $r$ corresponding to $2+i$ is therefore $\sqrt{2^2+1^2}=\sqrt5$, and that corresponding to $2-i$ is $\sqrt{2^2+(-1)^2}=\sqrt5$ as well. The angles for $2+i$ is an angle $\theta$ whose cosine is $\frac2{\sqrt5}$ and whose sine is $\frac1{\sqrt5}$, while the angle for $2-i$ is an angle whose cosine is $\frac2{\sqrt5}$ and whose sine is $-\frac1{\sqrt5}$. It doesn’t matter exactly what they are; the important thing is that if we let the first be $\theta$, the second is $-\theta$, since $$\cos(-\theta)=\cos\theta\;\text{ and }\sin(-\theta)=-\sin\theta\;.$$
Substituting into $(1)$ gives you $$2+i=\sqrt5\cos\theta+i\sqrt5\sin\theta=\sqrt5(\cos\theta+i\sin\theta)=\sqrt5 e^{i\theta}$$ and $$2-i=\sqrt5\cos(-\theta)+i\sqrt5\sin(-\theta)=\sqrt5(\cos\theta-i\sin\theta)=\sqrt5 e^{-i\theta}\;.$$
Now use the fact that it’s easy to raise an exponential to a power:
\begin{align*} (2+i)^n+(2-i)^n&=(\sqrt5)^n\left(e^{i\theta}\right)^n+(\sqrt5)^n\left(e^{-i\theta}\right)^n\\ &=(\sqrt5)^n\left(e^{in\theta}+e^{-in\theta}\right)\\ &=(\sqrt5)^n\Big(\big(\cos n\theta+i\sin n\theta\big)+\big(\cos(-n\theta)+i\sin(-n\theta)\big)\Big)\\ &=(\sqrt5)^n\Big(\cos n\theta+i\sin n\theta+\cos n\theta-i\sin n\theta\Big)\\ &=(\sqrt5)^n 2\cos n\theta\;. \end{align*}
-
Thanks, it's the fact that $\cos\theta = \cos(-\theta)$ and $\sin(-\theta) = -\sin\theta$ that threw me. – Robert S. Barnes May 14 '12 at 9:21
@Robert: I wasn’t sure just where the problem was, so I took you at your word and went back to basics; I’m glad that it helped. – Brian M. Scott May 14 '12 at 9:23
If you believe that complex conjugation respects products (hence also powers), then the simple way is: $$\overline{x}=\overline{(2+i)^n+(2-i)^n}=(\overline{2+i})^n+(\overline{2-i})^n=(2-i)^n+(2+i)^n=x.$$ So $\overline{x}=x$, and hence $x$ is real.
The binomial formula gives an alternative route: $$x=(2+i)^n+(2-i)^n=\sum_{k=0}^n{n\choose k}2^ki^{n-k}+\sum_{k=0}^n{n\choose k}2^ki^{n-k}(-1)^{n-k}.$$ Here the terms where $n-k$ is odd cancel each other, so we get $$x=2\sum_{k=0,\ k\equiv n\pmod2}^n{n\choose k}2^ki^{n-k}.$$ Here everywhere $i^{n-k}$ is real, because $(n-k)$ is even in all the terms remaining in the sum.
-
+1 Even though it doesn't explain the given answer, that's really nice. – Robert S. Barnes May 14 '12 at 9:16
@Robert, sorry about that. I simply looked at the title, and didn't read your question to the end. I did see your own suggestion of using the binomial formula, so I added that. – Jyrki Lahtonen May 14 '12 at 9:21
Still a very nice answer. If I could mark it up more than once I would. :-) Glad to see that intuition on the binomial formula was right. – Robert S. Barnes May 14 '12 at 9:31
So your notation there $2|n-k$ means that the summation is only over even values of k? Is that a common notation? – Robert S. Barnes May 14 '12 at 9:35
@Robert, I mean that the summation is over only such values of $k$ that $n-k$ is even. A better way of expressing that would be $k\equiv n\pmod2$. Will edit. – Jyrki Lahtonen May 14 '12 at 9:47
Hint $\$ Scaling the equation by $\sqrt{5}^{\:-n}$ and using Euler's $\: e^{{\it i}\:\!x} = \cos(x) + {\it i}\: \sin(x),\$ it becomes
$$\smash[b]{\left(\frac{2+i}{\sqrt{5}}\right)^n + \left(\frac{2-i}{\sqrt{5}}\right)^n} =\: (e^{{\it i}\:\!\theta})^n + (e^{- {\it i}\:\!\theta})^n$$ But $$\smash[t]{ \left|\frac{2+i}{\sqrt{5}}\right| = 1\ \Rightarrow\ \exists\:\theta\!:\ e^{{\it i}\:\!\theta} = \frac{2+i}{\sqrt{5}} \ \Rightarrow\ e^{-{\it i}\:\!\theta} = \frac{1}{e^{i\:\!\theta}} = \frac{\sqrt{5}}{2+i} = \frac{2-i}{\sqrt 5}}$$
Remark $\$ This is an example of the method that I describe here, of transforming the equation into a simpler form that makes obvious the laws or identities needed to prove it. Indeed, in this form, the only nontrivial step in the proof becomes obvious, viz. for complex numbers on the unit circle, the inverse equals the conjugate: $\: \alpha \alpha' = 1\:\Rightarrow\: \alpha' = 1/\alpha.$ |
# irf - Instrument response functions
## Introduction
gammapy.irf handles instrument response functions (IRFs):
• Effective area (AEFF)
• Energy dispersion (EDISP)
• Point spread function (PSF)
Most of the formats defined at IACT IRFs are supported. Otherwise, at the moment, there is very little support for Fermi-LAT or other instruments.
Most users will not use gammapy.irf directly, but will instead use IRFs as part of their spectrum, image or cube analysis to compute exposure and effective EDISP and PSF for a given dataset.
Most (at some point maybe all) classes in gammapy.irf have a gammapy.utils.nddata.NDDataArray as a data attribute to support interpolation.
## Getting Started
See cta_1dc_introduction.html for an example of how to access IACT IRFs.
## PSF
The TablePSF and EnergyDependentTablePSF classes represent radially-symmetric PSFs where the PSF is given at a number of offsets.
The PSFKernel represents a PSF kernel.
## Energy Dispersion
The EnergyDispersion class represents an energy migration matrix (finite probabilities per pixel) with y=log(energy_reco).
The EnergyDispersion2D class represents a probability density with y=energy_reco/energy_true that can also have a FOV offset dependence.
## Using gammapy.irf
If you’d like to learn more about using gammapy.irf, read the following sub-pages:
## Reference/API
### gammapy.irf Package
Instrument response functions (IRFs).
#### Functions
• multi_gauss_psf_kernel(psf_parameters[, …]): Create multi-Gauss PSF kernel.
#### Classes
• Background2D(energy_lo, energy_hi, …[, …]): Background 2D.
• Background3D(energy_lo, energy_hi, …[, …]): Background 3D.
• BgRateTable(energy_lo, energy_hi, data): Background rate table.
• CTAIrf([aeff, edisp, psf, bkg, ref_sensi]): CTA instrument response function container.
• CTAPerf([aeff, edisp, psf, bkg, sens, rmf]): CTA instrument response function container.
• EffectiveAreaTable(energy_lo, energy_hi, data): Effective area table.
• EffectiveAreaTable2D(energy_lo, energy_hi, …): 2D effective area table.
• EnergyDependentMultiGaussPSF(energy_lo, …): Triple Gauss analytical PSF depending on energy and theta.
• EnergyDependentTablePSF(energy, rad[, …]): Energy-dependent radially-symmetric table PSF (gtpsf format).
• EnergyDispersion(e_true_lo, e_true_hi, …): Energy dispersion matrix.
• EnergyDispersion2D(e_true_lo, e_true_hi, …): Offset-dependent energy dispersion matrix.
• GaussPSF([sigma]): Extension of Gauss2D PDF by PSF-specific functionality.
• HESSMultiGaussPSF(source): Multi-Gauss PSF as represented in the HESS software.
• IRFStacker(list_aeff, list_livetime[, …]): Stack instrument response functions.
• PSF3D(energy_lo, energy_hi, offset, rad_lo, …): PSF with axes: energy, offset, rad.
• PSF3DChecker(psf[, d_norm, …]): Automated quality checks for gammapy.irf.PSF3D.
• PSFKing(energy_lo, energy_hi, offset, gamma, …): King profile analytical PSF depending on energy and offset.
• Psf68Table(energy_lo, energy_hi, data): Background rate table.
• SensitivityTable(energy_lo, energy_hi, data): Sensitivity table.
• TablePSF(rad, dp_domega[, spline_kwargs]): Radially-symmetric table PSF.
# Angular Momentum commutation relationships
It seems to be implied, but I can't find it explicitly: the order in which linear operators are applied makes a difference. That is, given linear operators A and B, AB is NOT necessarily the same as BA? I thought it was only with rotation operators that the order made a difference?
I noticed this while looking at text that showed [Lx,Ly] = i(h-bar)Lz, using only position and momentum operators...
<<mentor note: originally posted in homework forum, template removed>>
Last edited by a moderator:
Orodruin
Staff Emeritus
Homework Helper
Gold Member
In general, linear operators will not commute. Another common example is the position and momentum operators.
Thanks - of course, not a clever question when I am studying commutation relationships... But now I might see what was bothering me (I think): the text expands [Lx,Ly] in terms of position and momentum operators, and you get 8 terms like YPzZPx - the last 4 could cancel the first 4 out, but only if it was OK to change the order, like ZPxYPz (which is the first of the last 4, to complete the example). So are they OK in assuming that the operators don't commute - in order to prove that other operators don't commute?
Orodruin
Staff Emeritus
Homework Helper
Gold Member
So are they OK in assuming that the operators don't commute - in order to prove that other operators don't commute?
Yes. You can easily derive the commutation relations for ##P_i## with ##X_i## using the position basis representation ##P_i \to -i\partial_i## and ##X_i \to x^i## and their action on any wave function ##\psi(x)##.
ChrisVer
Gold Member
the order in which linear operators are applied makes a difference.
If they don't commute yes... if they commute, no...
If they don't commute, you have to be careful when changing the order -> new terms can be brought in.
For example if I have $x p_x$ and I want for some calculation to rewrite it in $p_x x$ (because it would come handy) I would have to use the relation:
$[x, p_x ] = x p_x - p_x x= i \hbar \Rightarrow x p_x = p_x x + i \hbar$ and that's with what you change $x p_x$.
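The commutator the thread started from, $[L_x, L_y] = i\hbar L_z$, can also be verified symbolically by letting the operators act on a generic wavefunction; an illustrative sympy sketch (assuming sympy is available):

```python
# Build L_i from position operators and p_j = -i*hbar * d/dx_j,
# then check [Lx, Ly] psi = i*hbar * Lz psi for a generic psi(x, y, z).
import sympy as sp

x, y, z = sp.symbols('x y z')
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x, y, z)

def p(var, f):
    return -sp.I * hbar * sp.diff(f, var)

def Lx(f): return y * p(z, f) - z * p(y, f)
def Ly(f): return z * p(x, f) - x * p(z, f)
def Lz(f): return x * p(y, f) - y * p(x, f)

commutator = Lx(Ly(psi)) - Ly(Lx(psi))
assert sp.expand(commutator - sp.I * hbar * Lz(psi)) == 0
print("[Lx, Ly] psi = i*hbar*Lz psi")
```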
Thanks all |
# Brilliant.org Bug Experiment Results
this note is purely to conduct research on whether there truly is a bug in the save and ping system
theory: if you put a comment in somebody's note you will receive pings from it whether you have saved it or not. we believe this might be a bug and will forward our findings to Brilliant Mathematics once done.
Note by Nscs 747
3 months, 2 weeks ago
experiment 1: i was mentioned
result: no ping
- 3 months, 2 weeks ago
- 3 months, 2 weeks ago
cleared
- 3 months, 2 weeks ago
experiment 2: i commented and then another person commented separately
result: ping received
- 3 months, 2 weeks ago
cleared
- 3 months, 2 weeks ago
experiment 3: single mention comment and separate comment put
result: 2 separate pings received
- 3 months, 2 weeks ago
cleared
- 3 months, 2 weeks ago
other findings: if a post is deleted then the replies are still there, but invisible and cannot be deleted -> so users still get pinged even if they deleted all their other posts. this is also basically experiment 4
IMPORTANT AND NEEDS TO BE FIXED: a possible fix is to make the system fully wipe the replies to the deleted comment, because even though to the user it appears cleared, the message is still marked there, which makes the user continue to receive pings
- 3 months, 1 week ago
testing space deleted and fresh one made.
- 3 months, 1 week ago
experiment 5: nscs, i will not use the @ now. but u can make a post and leave it here until i do another post. then u delete your post and i do another post. (i will do all posts without mentioning you) - Num Po
findings: after a comment has been put, even if it is deleted, the user will still receive pings
- 3 months, 1 week ago
experiment 6 (possible final)
num po-mention me as a reply
me - delete my comment
num po-post new comment
i will then check for ping
- 3 months, 1 week ago
POSSIBLE IMPROVEMENT SUGGESTION FOR BRILLIANT: when a note or comment is deleted, please terminate all comments there as well and stop users from still posting there, because otherwise it is annoying for other users to be pinged about something that has been deleted as others continue to comment there (they still have access to it)
- 3 months, 1 week ago
Thank you for your findings, I will forward them all to the engineers.
Staff - 3 months ago
i am curious: have you done some changes already?
if a comment is deleted now, then i can still see the answers to that post.
but some days ago that was not possible.
- 3 months ago
Not that I'm aware of, no. There might be other engineers playing around with this system but they have not updated me with anything substantial.
Staff - 3 months ago
Okay..........
- 3 months ago
ok ty. while we did these tests I had the impression that the system behaviour changed.
so I assumed a staff member had recognized our tests and adjusted the system.
is there a description what effect "unsubscribe" from a question should have?
- 3 months ago
I don't know the full details. Please forward your inquiries to [email protected]
Staff - 3 months ago
thanks
- 3 months ago
it seems u are in a hurry? i just saw your description for experiment 6.
- 3 months, 1 week ago
yes
- 3 months, 1 week ago
@Brilliant Mathematics please have a look at our results; i have mentioned some possible improvements
- 3 months, 1 week ago
I don't get it
- 3 months ago |
# PLC and DCS
hi all,
in almost all books it is stated that a PLC is better for discrete (ON/OFF) control and interlocking operations, although PID loops can also be implemented; similarly it is said that a DCS is better for PID control.
my question is: why are there restrictions keeping PLC and DCS from each other's tasks? for example, if one PID loop can be implemented on a PLC, why can't multiple? and why is a DCS not good for discrete operations?
thanks
#### Donald Pittendrigh
Hi All
This is one of life's ugly little generalisations; there are definitely many
PLCs capable of great numbers of PID controllers, and there are certainly enough DCSs around with the ability to handle high-speed digital interlocking. The area between DCS and PLC has greyed out considerably since these books were written. In fact, most texts in this area which are older than 6 - 12 months probably aren't written about the same generation of equipment on the market today; anything written today will be out of date in less than a year, and by then there will probably be no definable boundary between DCS and PLC anyway - presuming there still is one today.
Cheers
Donald Pittendrigh
#### William F. Hullsiek
The best explanation is ... market forces.
DCS - are oriented toward large scale plant / continuous process.
PLC - are oriented toward a cell or machine.
Both are good solutions - but you select the solution based on the project economics, i.e., cost/benefit / limits on capital.
Look at the $ per point, the $ per monitor point, and the $ per alarm. Put the point count into a scale going from 10-100, 100-300, 300-750, 750-1500, 1500-2500, and 2500-5000. Then look at the control loops in a similar fashion.
You wlll get a good economic picture of where the price/performance curve shifts from one technology to another.
Then you factor in the fear, uncertainty and doubt - this is known as the FUD mark-up.
FUD can add 50-100% to the cost of a system.
Lawyers, lawsuits, liability add the LLL mark-up to the economic equation.
In some markets - this actually can be exponential.
Have fun !
#### Jiri Baum
Mostly - because the books say so.
It's not really a restriction as such, it's just that products which concentrate on discrete control get called PLCs and products which concentrate on PIDs are called DCSs.
In reality, these days, most products can do both and sometimes it's more tradition than reality whether they get called PLCs or DCSs. Still, they're likely to be somewhat better in their traditional strengths.
Jiri
--
Jiri Baum <[email protected]> http://www.csse.monash.edu.au/~jirib
MAT LinuxPLC project --- http://mat.sf.net --- Machine Automation Tools
#### MPRABA
Invention of the PLC was the next logical step from the implementation of ladder logic consisting of relays. Hence, code for all PLCs was written around discrete logic and controls. Subsequently, PLCs were designed to take care of analog controls for applications where a minimum number of analog loops is involved.
But, PLCs are best suited for discrete controls in the sense that it 'acts fast', i.e the execution time for each loop is typically in milliseconds (Of course, a few DCS systems in the market are also competing in this regard). As far as analog controls are concerned, DCS is superior as it contains lot more function blocks, better interconnectivity, dedicated HMI etc.
Having said that, however, selection between the two involves good engineering judgement.
Regards,
#### Bob Peterson
The reason the books say this is that they are far behind the times. This was true 15 years ago, but with the advent of processors such as AB's PLC5, it is no longer true. There are still differences, and there are reasons to use a DCS over a PLC, but simple PID loops are not among them. OTOH, PLCs handle typical discrete control tasks in a much simpler and more efficient fashion.
I have done PLC systems with 50 or more PID loops, and it worked quite well.
The main reason DCS systems do not deal well with discrete control tasks is because the vast majority of such tasks do not fit well into the preconfigured control blocks they give you.
I think most DCS systems can now give you ladder logic, but why bother when you can have a perfectly good little PLC that actually does this stuff for a living?
Bob Peterson
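For readers wondering what a single "PID loop" actually involves: the controller re-evaluates a short calculation every scan cycle. A minimal discrete-time sketch in Python (illustrative pseudo-logic only, not vendor PLC code; the gains and scan time are made-up values):

```python
# Minimal discrete-time PID: output = Kp*e + Ki*integral(e) + Kd*de/dt,
# updated once per scan cycle of length dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

loop = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)  # 50 ms scan time
output = loop.update(setpoint=100.0, measurement=90.0)
```

Whether this runs in a PLC, a DCS controller, or a server, the arithmetic is the same; the historical split was about how many such loops the hardware could execute per scan and how the control blocks were configured.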
#### Jimmy Saldivias
Also Genetics. They were born from different mothers and later discovered they were related.
DCS was born in process industries controlling loops. i.e. you need a lot of processing power to solve 1 equation i.e. think of analog signals: how many bits you need for them?
PLC was born in factory controlling sequential activities. i.e. you need low processing power to solve "first do this, then do that" i.e. think of discrete signals: how many bits you need for them?
But then again, this is ancient history.
Silicon went down in price and up in sophistication. Today it is very cheap to
buy a chip with everything on it.
And you can place that chip in an electronic equipment and call it: PLC or
DCS or Hybrid system.
And you have a very complicated set of names and control philosophies which go to the same end.
As William said, this a place where you can have a lot of fun!
MBA Ing. Jimmy Saldivias
Gerente
TECSIM
Phone: 591-4-4523438
Fax: 591-4-4523413
#### SELVARAJAN
DCS systems are costly for digital systems. DCS systems have predefined control blocks related to PID loops, like CASCADE/THREE-element control, feedforward, etc.
PLCs have a simple PID control implementation, and a PLC costs less and is powerful in digital operations.
#### Rajesh
Hi
In the old days PLCs did not have high-end microprocessors, so they were not able to do mathematical calculations; as the name suggests, a PLC (Programmable Logic Controller) was used to control logic, not to do mathematical calculations, and only in open-loop control, so they were low cost.
On the other side, DCSs were used for closed-loop controls where there was a need for computation power, and so high-end processors. They were not doing any logic / discrete control.
But with the course of time PLCs started building in the features that were required for closed-loop control. Today you will find there cannot be a clear line in terms of applications that can only be implemented using a DCS or a PLC.
In some PLCs today you can run multiple PIDs or even some complex algorithms, and on the other side a DCS can do discrete control.
Regards
Rajesh
#### Paul Jager
In the past this was because:
PLC = simple little devices, manufacturing oriented - not enough software and hardware horsepower for any serious process control. DCS = the most costly digital control you'll ever install.
Here is where a good server-based system can technically slam-dunk both PLC and DCS solutions. Because IT servers are used, 100's or even 1000's of PID loops, no matter how complex, are essentially little or no cost. At the same time, access to Profibus, DeviceNet or Ethernet/TCP means hordes of cheap digital I/O are a couple of terminals away.
Software based systems can do both, in large or small applications with no penalty to the user. Old rules need not apply, in fact we are happy to chuck them out the control room, or plant floor window!
Paul Jager, CEO
www.mnrcan.com
[email protected]
(250)-724-1402 |
# Battlefield 3 (BF3) PC Crash Fix and Fixes to Freezes, Errors, Stuttering, Poor FPS, Lagging, Mouse Bugs, CTD, Launch Crash, Origins Crash
Make sure your drivers and DirectX are updated and you should be good. You can check the following links for driver updates: ATI users & nVidia users.
### Problem #5 Battlefield 3 (BF3) Graphics Fix – Cannot see main menu or HUD
A common problem when playing on multiple screens. For now, the solution is to unplug other monitors and play on single screen. If that doesn’t work, try playing in windowed mode.
### Problem #6 Battlefield 3 (BF3) Joystick Fix – Joystick not working for flight controls / not responding in-game
Do you have other controllers installed such as Xbox 360 controllers or generic controllers? Unplug them if you’re using the joystick and it should instantly work with your preferred flight controls. Let us know your experience in the comments section below.
### Problem #7 Battlefield 3 (BF3) Crash Fix – nVidia Optimus Crashes the Game
nVidia Optimus has been popular but sometimes inefficient hardware for budget laptops, so make sure you tweak or check your nVidia Optimus settings. Open the “nVidia Control Panel” from your start menu. On the lefthand side, under “Select a task…”, open the “3D Settings” tree, and select “Manage 3D Settings”. Now on the righthand panel, click the “Program Settings” tab. Under “1. Select a program…”, click “Add”, navigate to your Battlefield 3 install directory and select “bf3.exe”. Under “2. Select the preferred graphics…”, choose “High-performance NVIDIA processor”, then (VERY IMPORTANT) click “Apply” in the bottom right.
### Problem #8 Problem Battlefield 3 (BF3) Crash Fix – Error: “Battlefield 3 has stopped working”
If you’re getting this error, don’t be too stressed, a lot of players are getting this too. While the popular solution is to wait for an upcoming patch, first try updating your DirectX to the latest version. If that doesn’t work try our fixes in Problem #3. Most of the time, those should help you out.
Remember if you managed to fix a crash or error yourself, make sure you share them in the Livefyre comments below to help others.
### Problem #9 Battlefield 3 (BF3) Sound Fix – No or low sound in-game

### For Windows 7 and Vista users:
First of all, update your sound card drivers. If doing that doesn’t clear up the problem, make sure you have volume set to max for Battlefield 3 (BF3) in the volume mixer (click on the little speaker icon in the bottom-right of your screen in the system tray). If you’re in-game, just alt+tab out of the game and check if Battlefield 3 (BF3) is set to max in the volume mixer.
Another tip you could try is to go to Control Panel > Hardware and Sound > Sound > Communications Tab then select the “Do Nothing” radio button to permanently fix this issue.
### For Windows XP users:
Reduce hardware acceleration, or set your speakers to Stereo.
### Problem #10 Battlefield 3 (BF3) Mouse Fix – Reverse / weird mouse acceleration problem
Again, not all gaming mice are compatible with new game releases. Logitech, Microsoft, and Razer mice frequently experience these problems, especially in FPS games like Battlefield 3 (BF3), which emphasize precise mouse input.
To quickly fix this, try shutting off your mouse-specific software from Logitech, Microsoft, Razer, and the like. Also try a lower polling rate, sometimes this is switchable on the bottom-side of the gaming mouse.
### Problem #11 Battlefield 3 (BF3) Error: “ERR_LOGIN_DISPLAYTOS” or Cannot join Battlelog
This is a server-related problem due to the influx of new users. Unfortunately you cannot control this issue and the developers are working on it on the server-end.
### Problem #12 Battlefield 3 (BF3) Stuck at “Joining Server” or long load times on Win 7 64-bit systems
If you’re on a 64-bit (x64) system, this problem is not uncommon. Just start regedit, and browse to the following key:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\EA Games
Change the GDFBinary & InstallDir paths to C:\Program Files (x86)\Origin Games\Battlefield 3
Try to launch the game and join game servers, and it should work.
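If you prefer to script this registry change instead of clicking through regedit, here is a rough sketch using Python's built-in winreg module. The subkey name ("Battlefield 3" under "EA Games") is an assumption based on the description above, not something confirmed by the article; browse to the key in regedit first to verify it, and run Python as administrator.

```python
# Sketch of the Problem #12 fix via Python's winreg module (Windows only).
# The exact subkey name below is an assumption -- verify it in regedit first.
import sys

INSTALL_DIR = r"C:\Program Files (x86)\Origin Games\Battlefield 3"

def fixed_values(install_dir=INSTALL_DIR):
    """Value-name -> data pairs that the fix writes under the EA Games key."""
    return {"GDFBinary": install_dir, "InstallDir": install_dir}

if sys.platform == "win32":
    import winreg
    key_path = r"SOFTWARE\Wow6432Node\EA Games\Battlefield 3"  # assumed subkey
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0,
                        winreg.KEY_SET_VALUE) as key:
        for name, data in fixed_values().items():
            winreg.SetValueEx(key, name, 0, winreg.REG_SZ, data)
```

Back up the key first (File > Export in regedit) so you can restore the original strings if the fix doesn’t help.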
### Problem #14 Battlefield 3 (BF3) Graphics Fix – Stuttering screen or graphics
After you make sure that you are using the latest ATI & nVidia drivers (see Problem #3 for the links to these updates), you should try running the game on a single GPU by turning off SLI/Crossfire. You should also try running it on one CPU, by going to the Task Manager, right-clicking on the Battlefield 3 process and setting the affinity to one CPU. Doing this is also a quick crash fix solution for Battlefield 3.
Another known, but highly-risky solution is
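For the curious, the Task Manager affinity trick above amounts to handing Windows a processor bitmask with the chosen CPU bits set. A minimal sketch (the helper names are illustrative, not part of any official tool):

```python
# Processor-affinity sketch: pinning a process to one CPU means an affinity
# mask with exactly one bit set (CPU 0 -> 1, CPU 1 -> 2, CPUs 0+1 -> 3, ...).
import os

def affinity_mask(cpus):
    """Build the bitmask Windows' SetProcessAffinityMask expects from
    zero-based CPU indices."""
    return sum(1 << c for c in cpus)

def pin_current_process(cpus):
    """Pin the calling process to the given CPUs where the OS exposes it
    directly (Linux); on Windows you'd pass affinity_mask(cpus) to
    kernel32.SetProcessAffinityMask via ctypes instead."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, set(cpus))

print(affinity_mask([0]))  # single CPU 0 -> mask 1
```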
### 169 comments to Battlefield 3 (BF3) PC Crash Fix and Fixes to Freezes, Errors, Stuttering, Poor FPS, Lagging, Mouse Bugs, CTD, Launch Crash, Origins Crash
• gameplug
Feel free to comment guys, others might also be experiencing the same problems as yours and could help out! Cheers
• question
could frostbite or origin be the problem because of compatibility? i also have freezing problems, at the very bottom is my statement. please send me solutions.
THANKS
• Dimitris
hi there….i have the problem with the freezing the game and all the pc…at 5 or 10 first minutes its freezing and i have to close pc and start again…..i try now the repair instal….my pc is intel core 2 duo 2.4 geforce 9800gtx+ 4gb ram….i have the latest nvidia drivers but all the time freezing….3 days i ahve 50-60 freezings….sorry for my bad english….can u help me pls..??
• Dimitris
intel core 2 duo e8400 3,00ghz 3.00ghz
and windows 7 64bit operating system
• ajitmore
i don’t have a computer but i have a laptop. my laptop runs need for speed hot pursuit limited edition on the highest settings, but it does not run battlefield 3; the game just fails to start or load. any ideas…. ???? please.
• Coulrophobic
I have a problem not mentioned here with freezing during multiplayer games. System just locks up completely, no bsod or anything. I’ve managed to find only one solution but it seems to be for people with non-genuine versions of windows which counts me out.
• gameplug
@Coulrophobic Hi, did you try setting your graphics to the lowest levels when going multiplayer? Windowed mode may work too.
• USArmyMPVet
@Coulrophobic usually the locks ups are a result of not correctly updating to the latest version or suggested beta versions of either nvidia or ATI v-card drivers.
• Coulrophobic
@USArmyMPVet Hey I’ve been keeping on top of the latest driver’s throughout the beta and since release. I’m starting to think that it may be heat related as my video card runs around 90 degrees in BF3
@Gameplug With my set-up it runs on medium-high settings at 30-40fps on average. So although it doesn’t run the game overly well I don’t think it’s struggling too hard?
Core i7 920/2.67GHz
nVidia GTX275 running BF3 release drivers
6GB DDR3 RAMWD Caviar Black 1TB
• pand3m0nium
96 degrees Celsius will give you immediate freezes; between 80 and 96 will net you artifacts (coloured polygons), and below that you should be fine.
I don’t have an answer for you otherwise… Hope I helped a little.
• billy3d
Try to set system sound to stereo, if i leave mine at 5.1 or 7.1 it makes the game freeze
• Owen
Hi Coulrophobic
I think I have a similar problem to you. It will play multi-player for a random amount of time and then freeze completely, causing the audio to loop. The only way out is a hard re-boot. I have e-mailed EA and they have told me to see if it is a program conflicting with BF3 which I haven’t done yet. I have a GTS250 which I plan to upgrade at Xmas to a GTX560ti or a HD 6950 an i5 2500K (overclocked using Ai suite 2 to safe levels) and a P8Z68-LX . In problem reports and solutions it comes up as APPCRASH (not sure if this is what you have too?) Has anyone else had this problem or is checking for conflicting programs the only thing i can do to try and solve it?
• Art
I’m having this same problem as well. At first I thought it was my overclocking (CPU, RAM, GPU), so I turned off all the overclocking and it still froze like that. At first it was different because, for me, this only started after the Back to Karkand DLC: initially, the game would dive underground on the beach maps and the background sound of birds and waves would play, but you couldn’t hear anything else. Then they patched it after a few days, now it just freezes (as you described).
It wasn’t my overclocking because this has continued. So I did a driversweep clean sweep of the display drivers and cleaned out my startup sequence as well. Then I installed the 290b drivers for my GTX480 on Windows 7 x64. Total clean install. Ran really well on this. Then I tried overclocking again. Crashed. Other games run great when OC’d on my system, I spent a long time finding the most stable OCs and have tons of fans and aftermarket air coolers, so its not heat. Oh, well.
I’ve also been getting kicked a lot for having FRAPS and my GPU temp displayed by Punkbuster since the expansion pack. What an ass pain.
• zkei88
Hey Art,
I know this is about 2 months later from your last post here, but I need to ask you something regarding a problem you were having earlier. I found this post where you described being thrown under the map and BF3 crashing rite?
Well, I have the exact same problem. After about 10 minutes, the game freezes up, and I am put under the map looking upwards. I can hear nothing, and do nothing…except end the BF3.exe process and weep in my heart. This happens in every single server I join.
Did you ever find a solution to your problem? please let me know as I’m about to give up on this game…
regards,
He that cannot play the game he purchased…
• Steve
Under the Map Crash BF3 Battle Field 3. Game Disconnected.
It’s a connection issue. Use Free Ping by tools4ever to ping the servers every 5 seconds with a 2 second fail and a 1 second time out and you’ll see that when the crash happens, all of the servers are timed out, meaning you lost your connection.
I found that I have a Hub and another computer connected. This screws up my uncapped connection. If I remove the hub, I don’t get the crash. If I turn off the other computer, no crash. If the other computer is in sleep mode, it still crashes.
If I am capped, 15kb/s, my system hogs the bandwidth and I have a 100% ping record over a couple of hundred pings. That’s even with the other computer connected. Mind you I do get some serious lag sometimes but generally I have a ping time of around 30ms.
Free Ping will also reveal the most stable servers as a percentage of successful pings.
Hope this helps.
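For anyone who wants to replicate Steve's monitoring approach without extra software, here is a rough Python sketch; the helpers are illustrative stand-ins for what Free Ping does, not its actual behavior, and use the usual platform ping flags (-n on Windows, -c elsewhere):

```python
# Minimal ping-monitor sketch: ping a host repeatedly and report the
# success percentage, roughly what the Free Ping tool above displays.
import subprocess
import sys

def ping_once(host, timeout_s=2):
    """True if a single echo request to `host` succeeds."""
    count_flag = "-n" if sys.platform == "win32" else "-c"
    try:
        result = subprocess.run(["ping", count_flag, "1", host],
                                capture_output=True, timeout=timeout_s)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

def success_rate(results):
    """Percentage of successful pings, rounded to one decimal place."""
    if not results:
        return 0.0
    return round(100.0 * sum(results) / len(results), 1)
```

Collect `ping_once(...)` results in a list over time and feed it to `success_rate` to compare the stability of different game servers.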
• teh_sir
Hi,
I have bought and downloaded BF3 via Origin, and when I try to start up the game, it just operates for a few seconds and then nothing happens. In my task manager I can see that bf3.exe is running. Now I am trying to stop the task, but I cannot; again nothing happens. Furthermore I am not even able to turn off my computer anymore.
I have tried reinstalling/redownloading. Oh and sometimes when I try to run it again I get an administration permission prompt to run BF3, and up comes a window with a release date check? – Dunno if that matters.
• USArmyMPVet
@teh_sir not sure on this problem, but I found out during beta that IE is messed up for this game and switched to google chrome which works perfect. It might be worth a try.
• doremonhg
Damn this shitty game. This afternoon, i’ve just installed the game. At that moment it work fine, i still managed to make it to High with my 4670 HD. But now, the game get fucked up, can’t play in fullscreen because it flashing every fucking second. I managed to get it to work in windowed mode but it get stuttering like hell. You turn your head, you need to wait for about 2 second, a car explode, the same thing happens=.-’. What the fuck about the promise DICE just say??? I don’t really know if they were intended to fix the bug from the OB or they’re just fucked it up.
• i got a PROBLEM
sometimes the O.S. is messed up, had to wipe out my system.
• doremonhg
5670, my mistake:D
• doremonhg
i’ve tried to turn the graphic to the lowest possible, still the same. This is weird b/c in low, it stuttering for 2 second. I set it to Ultra, still 2 seconds8-}
• gameplug
@doremonhg have you tried updating your ATI drivers / catalyst? That might be the problem. Also try the CPU fix above, might work.
• doremonhg
Mine is 11.10 preview 3, which is the newest
• doremonhg
Comrades mission… missing comrades. Self-driving car, Self-shooting pistol. This is crazy. I get stuck at this mission=.-’
• USArmyMPVet
what worked for me was to look out the back window and then forward again; if it continues, restart the mission. Also changing video settings may help. There are actually 2 people in the front seats. I had this problem too. When I got to the start of the mission I could not see anyone, I just killed guys and continued with the mission. Where I got stuck was one of them guys showed up during the mission. He waited at a doorway to some stairs and you get an icon to go up 2 flights. If you just go up and stand for a bit left of the door, the invisible 3rd guy will eventually open the door and you can finish the Dima part
• Juha
Reading what you say is like watching my grandmother take a shit in my mouth. Do you want to know why stereotypes exist? You are one of the reasons.
• -Anon-
Jesus fuck, you’re sick
• downspot2
Well I have the same problem with that mission. But unlike you, even if i wait for ages, no any invisible guy comes to opes the door for me
• USArmyMPVet
@teh_sir just tell it to check the release date, it does that to me from time to time.
• USArmyMPVet
@teh_sir plz tell us what version of windows you are using.
• ehatesham
You just need to rename folder “Battlefield 3 tm” into just: Battlefield 3 !
Go inside-start exe and enjoy
• amar
Hi ehatesham. how to rename the folder battlefield 3 “tm” , i couldnt find inside the folder. only “bf3.exe”
I have problem on crashing while starting to launch the game from origin.
May i know what is the problem? Im running on i7, 2gb gt 650m , 8gb of ram .
Thx ! =)
• MarkoMaxxJovanovic
This fix will allow Windows XP users to play Battlefield 3. Currently BF3 can only be played by users who have DX10 or 11. With this fix, BF3 can be played by users that have DX9 on XP SP1,2 or 3. You must have DX9 in order to play this game! Enjoy
• SondreBaeSaltvik
Crash Fix – Error: “Battlefield 3 has stopped working” I followed the steps and it worked grate after that, tyvm!
• USArmyMPVet
Problem #12 Battlefield 3 (BF3) Stuck at “Joining Server” or long load times on Win 7 64-bit systems
If you’re on a 64-bit (x64) system, this problem is not uncommon. Just start regedit, and browse to the following key:
HKEY LOCAL MACHINE/SOFTWARE/WoW6432Node/EA Games
Change the GDFBinary & InstallDir paths to C:Program Files (x86)Origin GamesBattlefield 3
Try to launch the game and join game servers, and it should work.
ok this needed to be explained and the typos corrected. InstallDir should already be C:\Program Files (x86)\Origin Games\Battlefield 3, exactly like that, and the same goes for GDFBinary. To change GDFBinary, which points to a .dll, double left click on GDFBinary and just delete the end of the string back to C:\Program Files (x86)\Origin Games\Battlefield 3 from C:\Program Files (x86)\Origin Games\Battlefield 3\GDFBinary_en_US.dll. I posted the full-length path here just in case this fix doesn’t work and you didn’t remember to copy it to a text document before deleting the part of the string pointing to the .dll. I get very nervous when someone tells me to erase part of a string pointing to a specific file or a .dll, and this way you can just copy and paste from here and put it back under GDFBinary if the fix fails to work. I have done this but not tried it yet.
thankx mate. i was suffering from freezing despite a reasonable configuration. i got the solution in the problem no 7 fix. thankx once again.
• xLastGosu
hahaha screw you idiots and your terrible video cards, ive got gtx 560 ti but i still get CTDs with bf3 after playin for x amount of time usually around an hour i geuss.. i run on high settings, 4b RAM // i5 2500k // gtx 560 ti
i’ve got more serious problem…
in about 4 5 minutes playing multiplayer, my pc always freezes/hangs, and the only solution is to reset my pc. it is about my requirements or intrnet connection? however when playing sp it runs smoothly without problem.
• aaron
i installed bf3 and origin on my pc when i press on the game to play the browser opens on battlelog site and when i press to play like coop quickmatch or campain nothing happens . i have nvidia gt 440 could it be couse of the graphic card if not pls can some one tell me what . tyhe for the help
• Zach
Yeh i’ve got an issue not listed… I’ll be playing battlefield 3 and it will crash randomly at certain intervals… It’s not always in the same time frame as others. like one day it’ll crash at 20 minutes, the next it won’t crash til ive been playing for 45mins…
This happens in both single player and multiplayer…
I am running vista 32bit and my system far exceeds the recommended requirements… does anyone have a fix for this?
for the record my system is:
GeForce ATI HD6970 gfx card with 2gig ram
AMD Phenom x6 1055T @ 2.33ghz
4gig of ram (cross channeled)
• Wupti Pede
Hi all
Yes BF3 has some issues with stability.
Both my PC’s have the same problem with crash.
This is due to some stupid coding in the game.
Lets hope a update will solve the problem.
Br Wupti pede
• roiking
Same here. BF3 sucks. Spent money tinkering and trying to solve the problem for Electronic Art? Are we paying money just to solve this crash issue.
EA should have properly tested their program before even selling it to the public. Our anticipation and excitement to play the game becomes more like frustrations and waste of time.
• akiba
EA have always been a garbage technically , worst Company ever , they can’t do shit , and with all the things DICE did to make a great game, I would say the game is trash with all these issues
• Gary
I’ve had the game for 2 weeks now and have been able to play multiplayer for a total of about 5 minutes. Guess I’ll just have to play MW3 until they get this game fixed sometime next year (MW3, incidentally, worked perfectly out of the box in both single and multiplayer modes).
• i have a nvidia optimus graphic card on my laptop but still the game bf3 freezes during the game. i start the game it works for a little while and then freezes again. i did what was said in problem 7 but still no result. if someone has a solution please reply
thank u
• billy3d
Try setting sound to stereo instead of 5.1
• pat
ok it seems to be the nvidia driver and dice. come on dice and nvidia, tell something to people (sorry for bad spelling, French). game is so nice to play but it crashes every 5 min. 1090t @ 4200MHz, 3x gtx470 on water, raid 10, so i am not a noob with pc. nvidia driver is crap, dice wake up
• amit chauhan
i have nvidia 9400 GT 1gb
i Have Prob. with Battlefield 3 is working very slow pls how to fix it?
• Soldier BIH
Running a GTX460 – latest drivers and all and nothing. Game just locks up after about 15 mins. Tried everything, overclocking, under clocking, update all drivers, adjusted performance settings, run around the house 10 times and nope, same thing.
Whats interesting is that if you install driver 258 (something) the game runs perfectly well except some of the terrain isnt rendered properly. Its a workaround for now.
• Danardo
Stuttering is driving me mad, only happens when I play online, campaign seems to be fine, smooth etc. Am at a loss as I have updated just about every driver in the box, played with the settings including the AA which doesn’t make any difference and still no real improvement. Hardware is way above the recommended requirement, BF2 plays a treat so who knows? Hope this gets sorted out
• ehatesham
Finally Game Started
You just need to rename folder “Battlefield 3 tm” into just: Battlefield 3 !
Go inside-start exe and enjoy
• Danardo
Ha finally sorted it out, it wasn’t Origin or drivers although I’m sure they do make a difference, it was the auto overclocking feature on my Gigabyte motherboard. Turned that off and set the RAM timing back to standard, restarted and gave it a whirl, bingo no stutter or bugger all, so then I cranked the game settings to Ultra and still as smooth as silk. I guess everyone’s problem is different, but sometimes it seems that the old lessons have to be relearned as in turn everything back to standard and try it from there. This has been driving me crazy so I hope it helps someone
• Matt
I have had a problem since installing the game, when I launch into a map the FIRST time after starting up my PC the gpu gets an error and freezes with a black screen, to leave it I enter task manager and “end program”, however every time I go into game afterwards it works fine, not a problem. Just to make sure the post is read properly it is literally ONLY THE FIRST TIME I ENTER A MAP, every time after it runs like a dream!! Any help would be great?
I have the exact same problem, first game i get the black screen crash.
However unlike you, i also am getting the random freezes as well once i get the game going.
It was weird, i turned the graphics down to medium and holy crap, it stopped freezing. I actually played quite a few full games.
BUT then the new patch came, and now it worse than before i fixed it. Freezes EVERY game. So thats black screen crash on very 1st game 90% of the time, then crashto windows/freeze on every game so far attempted.
I wish i bought the ps3 version so i could at least play.
• Dimitris
Hey, I would really need help. I bought Battlefield 3 yesterday and I was so happy, although I spent most of my money… I installed it as fast as possible. I checked that the campaign works and it did, so I said let’s move on with the real thing… multiplayer. I started searching for a server, and when I clicked join server it started loading, BUT when it said Playing! Battlefield went white screen and that’s all… I couldn’t then play campaign or multiplayer! I need help plz! ( my PC is new )
• James
Sigh, so excited to play this and it crashes after about 2 mins…. All drivers etc are up to date, and I am running a 590gtx and 2600k i7 so surely it isnt my pc…. Any ideas please?
• i got a PROBLEM
Probably the computer programs causing the game to freeze. the O.S. is good?
i also had that kind of problem be4…pc freezes with looping sound…solved when use usb sound card…it has issues with realtek onboard sound card
• Gazza
I disabled realtek audio—game ran for about 30 secs longer then froze
tried dropping video settings right down but still fails
Have 4 gig ram so not sure what fault is —
was fine up until last patch yesterday
• Bowser
Hey, I have had non stop problems when trying to play this game and so i decided to uninstall and start from scratch again, but now it thinks my OS isn’t up to date even though it’s Vista 32bit and updated to current date! :S
• STEFANOS
For those of you who are experiencing the ‘Battlefield 3 has stopped working’ error ever since the last patch came out or even in general, here’s the fix, its only one command in the windows cmd
Windows 7/Windows Vista
In Windows 7/Vista go to Start menu. Go to Accessories. Locate the command prompt shortcut and hover mouse over it. Right Click on the shortcut then select Run as Administrator.
In the command prompt type this exactly:
bcdedit /set increaseuserva 2500
Then hit enter.
Make sure you get a message back confirming the change was made. To verify the entry is there you can type just bcdedit, hit enter, and you should see the entry now listed.
Then close the command prompt. You just told Vista to increase user virtual address (userva) space to 2500MB.
Changes take effect on reboot.
Now reboot the PC because Windows needs to set the userva at 2500 which only happens after startup.
If you skip any step it will not work. When you have rebooted you should be good to go. Run the game as normal with the original game shortcut.
TO UNDO THE CHANGES:
Open command prompt as administrator and type: bcdedit /deletevalue increaseuserva. That deletes the entry.
Reboot and you are back to normal.
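To avoid typos when entering the commands above, here is a small illustrative Python wrapper that builds them exactly as written (the function names are ours; bcdedit itself still requires an elevated prompt, and the change only takes effect after a reboot):

```python
# Sketch: build the exact bcdedit command lines from the fix above so there
# is no chance of a typo; only actually run them on Windows, as administrator.
import subprocess
import sys

def set_userva_cmd(megabytes=2500):
    """Command that raises user virtual address space to `megabytes` MB."""
    return ["bcdedit", "/set", "increaseuserva", str(megabytes)]

def undo_userva_cmd():
    """Command that deletes the entry and restores the default."""
    return ["bcdedit", "/deletevalue", "increaseuserva"]

if sys.platform == "win32":
    subprocess.run(set_userva_cmd(), check=True)  # reboot afterwards
```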
• Chris
didn’t do it. my problem is on win7 64bit.
game becomes unresponsive often when loading/switching maps at the end of some rounds. … all started after the patch. removing the increaseuserva .. and looking for other suggestions
• asdfghjklasdfghjklasdio;auihdaf;auwf;aehlaewihfu;alhfe
i own win 64 bit, will this still work?
• zkei88
For anyone who tried the original post and it didnt work, please try following my instructions from Reddit. (Believe me I had this problem, and after following this method, that is solved. Now i need to solve the crashing issues again)
Hope this helps! It is 90% going to work for you guys.
Tags: “Battlefield 3 has stopped responding” “Freeze at initialization” “Freeze while starting server” < Just to help people googling the issue in the future.
• Rich
Hey Stefanos… looks like you have the solution… just played a game without freezing!!
Hope it will stay like this, but for now: Tnx!
Rich
• Rich
Still working after a few hours of playing… good job!
• Nex
This appears to be the same fix posted in BattleLog somewhere .. if so, it only works with the 32bit Windows and not 64bit. If anyone knows a fix for 64bit W7 I’d love to hear
• shot
win 7 64 bit works great thanks
• Paulo
Not worked for me, still freezes.
• tony
Windows 7/Vista go to Start menu. Go to Accessories. Locate the command prompt shortcut and hover mouse over it. Right Click on the shortcut then select Run as Administrator.
In the command prompt type this exactly:
bcdedit /set increaseuserva 2500
Then hit enter.
Make sure you get a message back confirming the change was made. To verify the entry is there you can type just bcdedit, hit enter, and you should see the entry now listed.
Then close the command prompt((This worked for me in bf3 thx alot))
• Elango
guys i got a problem in battlefield 3. now i am in a mission (comrades) where the objective is to go to the second floor. there is a door to my destination point but the door is not opening.. give me suggestions to remove the bug..
• the comrades mission has bugs. restart the mission and be sure the two guys are driving the car. if two guys are not driving, look back and then front to get them to appear. if they dont, restart pc and restart game. if they still dont appear, change video settings. do not do the mission without those two guys. always let them go first. never let them get behind or else they disappear. theyll go up the stairs first to second floor. stand to the left of the door to be opened. one of the guys will open it. if they dont, then step in front of the door and then back to the left. he’ll open it. good luck.
• karthi
i have a problem with battlefield 3, The game is very slow and stuck up.
My system configuration:-
core 2 duo 2.53ghz
dg35ec motherboard
4gb ddr2 ram
500gb hard disk
dvd writer
xfx 1gb nvidia 9400gt
samsung 22inch lcd
• hi
i can’t update my game
when i click repair install it checking files then start to download update but it not start.
forever i cant update
how do i fix it?
• Andrew
Hey, I want to know how to fix the FPS. I also want to know how to fix the sound, it repeats over and over again. Please help.
• Grant
Hey guys if you have constant crashes on pc i know how to solve that this not a spam i really know how and it helped me for real.go to my youtube channel it’s called E430c where i have uploaded the video on how to fix it this tutorial is different than the others like reinstalling the game and all this crap,so please checkout my channel and don’t forget to subscribe please that really helps.
• drusky
Game crashes with a DirectX 11 error saying I need at least 512MB of memory
I am running
Nvidia 560 ti 1gig
16 gig ram
I7 2600K
Win 7 32bit.
The game runs great for 5 to 10 min then always freezes and closes
Sounded like a driver issue. Updated all drivers, didn’t help.
Then I updated my nvidia bios and it played twice as long but still gave same error. Please help.
• AnnoyingyOu!
New patch came with constant CTD or screen freezes.
Task Manager works (CTRL+SHIFT+ESC), but the game screen still has focus and won’t close; ALT+TAB refuses to work, so use the Task Manager to close it.
Win 7 x64
ATI 6970 Latest drivers
Amd 955BE
Amd 790 chipset all latest
no issues running Before December 6th
Thanks for the Fail DICE EA
• TipForge
Same problem, just started with latest patch. Game freezes, drops to website (Server Selection). Game screen still has focus. Alt-Tab works, as well as Task Manager (CTRL-ALT-DEL or CTRL-SHIFT-ESC).
Running Windows 7 x64; NVIDIA GTX70 with current drivers, DirectX 11 latest update. Quad Intel CPU at 2.66GHz with 8GB RAM. No problems until latest update.
• Bretto
Ive had this game for 2 weeks. Unable to play it though from the level “Guillotine”… I try loading it and it crashes.
Yep..Ive got the new patch but its being a right pain in the ass.
Any suggestions?
“Resume campaign” crashes..
Cheers,
• b bakken
I have the same problem. Have tried to reinstall the game and checked updates for everything, but it seems to be the one level. Have written customer support but it seems they don’t care cause I have yet to get a response. if anyone has any ideas please let me know as well
• Bretto
Any idea’s would be welcome
• Bretto
Forgot to mention..
1.2gb Grafix
8 Gig RAM
1.5 TB storage
@AnnoyingyOu! @TipForge
I have exactly the same problem …
The support does not answer my question, but i can say that no solution posted on the web works, i’ve tried everything i found
It seems to be a problem only with some 64-Bit OS.
If someone knows a solution i would be happy to hear it
Win7 x64
Palit GeForce GTX 260 Sonic 216SP
Dx10
AMD Phenom 8650 Triple-Core-Processor
ALL DRIVERS Up to date
• David1958
What really gets me, is that most of the problems associated with this game seem to be with the 64bit version of Windows 7. WHAT!? I don’t get it, since the game is raved as being developed for Win7 64bit! Even after updating all my drivers, and going thru 3 sound cards, I still get random BSOD’s, lockups, and crashes when playing this game. After paying almost $300 on a graphics card upgrade, along with $60 for the game, this is total BS!
• Jepato
Fuck this is annoying, i cant play at all before i get this freeze screen everybody is talking about. Can someone tell me how to fix this for once?!
• Rob
The only thing that has helped me is to close down nearly everything running in the background, including anti-virus. Even then it only works sometimes.
A real Pain of a Game!!!
• Dave
Hi All
The BCD fix worked for me. I was crashing every 20 seconds – 5 mins into playing.
Thanks Stefanos! You are a life saver )
Brand new GTX570, AMD Quad core, Vista 32bit.
• Jason
None of the above worked for me but I found my fix by switching off Intel HD Graphics (Gen2 Core i7), Intel Display Audio (HDMI) and Nvidia High Definition Audio (HDMI), all within Device Manager (Control Panel/Hardware and Sound/Device Manager/Display Adapters and Sound, video and game controllers). Leaving me with 1x Nvidia 580GTX to display and 1x Realtek HD audio to hear. It now makes complete sense that BF3 is unable to cope with 3 streams of sound in 3 different types of hardware simultaneously. I switched off the lame Intel HD graphics for good measure. Skype also no longer crashes with this adjustment. :)
• Jason
Switch off = Disable hardware
• Mathieu
Hi guys! I was running Battlefield 3 very good and now as soon as I enter the game, my caracter run very very slow. Do you know why this happen? I did no upgrade of installation from the time it was running good and the time it go very slow.
Oh yes one thing, I have closed the PC and reopen it in the morning.
Thanks!
• Ivan
hey i just recently bought bf3 and i can go into multiplayer but 7 minutes into the game of only multiplayer my computer freezes and i have to do a cold re-boot. Is anyone else having these problems?
System Specs:
Nvidia GTX 560ti 1GB DDR5
4GB RAM 1333
1TB Harddrive
Windows 7 Ultimate 64x
• ScooterDelta
Ok I got my battlefield 3 working, there was a long list of things that I tried and I am not sure which one actually did it, but one of them did. Here is what I did:
-> Changed audio quality in control panel/sound down to 24bit, 44100Hz
-> Update punkbuster manually
-> Disabled unused sound options
-> Update graphics card driver
-> Put the bcdedit /set increaseuserva 2500 option in CMD (admin)
I am not sure which of these did the trick but it worked, now my game no longer crashes. I am running:
-> Windows 7 (32bit)
-> ATi HD5770
-> 4GB DDR2-800 Ram
Hope this helps
• Trent
Okay… BF3 just keeps crashing to desktop with no error message after about 2 – 3 minutes in multiplayer and 20 minutes in singleplayer. I have tried repairing and reinstalling it but nothing has fixed it… Most recent patch was supposed to fix these crashes. If you guys can help me, much appreciated. Thanks
• Trent
Okay I tried updating drivers to 12.1 as well as Stefanos fix and now BF3 Causes my PC to crash to a blue screen… Thanks alot EA.
• Dan
SOLVED IT FOR ME
Type cmd in run and run it as administrator.
Now type sfc /scannow.
After the scan is complete restart the computer.
I found that removing the ” remove WAT crack from windows fixes this problem along with unninstalling realtek drivers after disabling it from bios…and then enabling it and letting windows install drivers for the audio..and disabling the unneeded audio in sound panel.
• Cerebrixx
1)Desactivate cloud saving for battlefield 3 in origin menu.
2)Delete Battefield 3 folder which is in your My Documents folder. (That folder save your preferences and savegames)
Try this u win7 64 bit users
Well a quite good idea, but it had NOT worked for me.
• Misfire_NZ
Hey Guys.
The problem I have been having for the last few months was that BF3 would load fine, but within 0-2 mins of gameplay it would freeze.
After trying for several months all different combinations aof fixes and nothing worked I decided to play around with it myself and finally managed to fix the problem.
All I had to do was turn the graphics card settings right up, eg I have a ATI Radeon HD4850 and i set the High Performance GPU Clock Settings to 800MHz, and the High Performance Memory Clock Settings to 1100MHz. Then I turned the in game Video settings down to low and the problem is gone.
Hope this helps
• J Middleton
Freezing all the time! Fixed as below:
Ran the bcdedit /set increaseuserva 2500 command and disabled the sound part of the Radeon Video card and freezing has stopped ……. for now anyway.
Just passing on my current fix. Cheers!
• Kornel
Hey Guys,
Just bought Battlefield 3 and I’m pretty mad. I’ve installed it yesterday (think that with a new patch, Origin was downloading something shortly after installation). When I went into Campaign, I saw first complications. When the first mission was loading, the loading screen was not on the fullscreen, but it was only a small rectangle. I was pretty surprised. Then the mission started, and it was fullscreen. I played 2-3mins and it froze.. I was surely astonished, because my PC is a lot better than recommended.. It crashed to desktop. I ran BF3 once again, and the same problem. I was very mad. Wrote an e-mail to EA Help and phoned EA Help Center.. Nothing. My specs:
Win 7 32-bit
AMD Phenom II X4 920 – 2.8GHz
ASUS GeForce GTX580
4GB RAM
DirectX 11
What can I do? I just wanna play my game!
• Requiemod
I have a problem with the game crashing after one round, i can successfully complete one game then i shall be Disconnected with the message “your connection to the EA servers was lost” Kinda annoying and well frankly is a game ruiner, it also appears as though i am not the only one with this problem
• trololord
when in multiplayer all i see is a black screen and hear noises of guns can you help me?
• trololord
• trololord
*running
• UFrAGit
Im having a problem which seems like it only happens when im flying and on a nice kill streak besides all the disconnection issues,Ill be flying swooping down for the kill next thing you know my game freezes and i need to Ctrl Alt delete to exit battlefield this is getting so damn annoying i dont even want to play anymore if there is a fix let us know pls and ty!
• UFrAGit
Oh and i don’t want to turn my graphics down that’s NOT the way to solve the problem Its and issues with ATI /AMD drivers im sure.
• Dudesghost
Wow,,, had lots of problems with that APPCRASH,BF3 and mostly since the latest supposed fix from Dice/EA. What really did it for me was to go into the device mngr and disable the HD audio on the video cards. Why would video card even have anything to do with sound? I also did the run> BCDedit / set something 2500…notice the space between the edit and the /. I did delete the BF3 folder in my documents but I believe that only messed up the keyboard bindings and remembering what gun/accesories I use. So which of them changes worked,,,not sure. But who cares I am able to play. Later and good luck,,,,thanks for the idea!
• Dudesghost
Thanks to Tony’s post on Sept 29. Also, I am sporting a decent i7 CPU, GTX-580′s, with Win 7 (64bit)
• Kornel
@trololord Well, I checked this and my GPU “meets the recommended blehbleh for BF3 on Ultra”
I also did the command “bcdedit/set increaseuserva 2500″ (without the space between bcdedit and /) and it fixed my problem. Is the space important? In Campaign there was no crash anymore, but yesterday in Multi I observed a big lag (after 40min of play) and a window “BF3 Application stopped working”.. Uhm that was annoying, but it happens rather not often, maybe in 40-60min interval.
• MooCowMilitia
Seems to be a complex set of reasons why the game crashes since some fixes only work for some and not others. After finally getting battlefield to work on my computer after hours of searching forums (i couldnt play a single mp round, and sp crashed within the first minute if not first second) I thought Id share what worked for me and the my current pc setup, in case it happens to work for someone else……
My pc:
i5 2500k
hd 6950 flashed to 6970
8gb ripjaws
asrock gen3 extreme 4 mb
What I tried:
*the above 3 tips
*disabling onboard sound since many people feel the problem lies with realtek drivers and punkbuster not liking them – did NOT work for me though
What DID work for me:
whether it was these alone or a combination of every other fix i tried………
1) click start, type ‘services.msc’ (without the ‘ ‘) into the search box
2) right click on services and select ‘run as administrator’
3)scroll down and right click on SSDP discovery and select STOP (this also disables UPnP (which others feel is another problem) you may have to do this each time computer is booted so check
* second you get into game turn down all graphics to medium (yes i know :-/, desparate times) as long as its not on auto works for some.
This has worked for me SO FAR…..I cant take credit if it works as Ive had to trawl through many forums for methods….
this was first pc ive ever built myself (and first pc in about 8 years) and BF3 was the first game I bought, so naturally very frustrated at not being able to play…good luck trying to get this game to work
• polo
Had lag problems and fuzzy graphics in multiplayer since karkand install and latest amd driver updates. tried all latest AMD drivers gone back to 11.11b. had other problem with live mail fuzzy, googled this problem and this came up with in your CCC under the gaming tag UNCHECK the morohical tick box top right.restart pc went back to bf3 went straight onto multiplayer and Lo and be hold no more lag and the graphics have gone back to normal not fuzzy any more now back on ultra running i7 o,c 3.4ghz 12gig 1600mhz ram AMD5850 no o/c will go back and try 11.12 driver.
• MooCowMilitia
sorry theres more than 3 tips above, i copied and pasted the above from what i posted in another forum
• i got a PROBLEM
Battlefield 3 freezes when i play online with multiplayer. I play for 15 minutes then it freezes. I fixed 2 problems by installing a gpu fan then adding more memory, i currently have over the memory requirements. I turn off all safety programs. Any one got a solution?
[Co-Op is fine, doesn't freeze]
[Gpu temp is 103 degrees when playing]
i appreciated the help, thanks!
• asdfghjklasdfghjklasdio;auihdaf;auwf;aehlaewihfu;alhfe
will this work on 64 bit?
• 2easy
hey ppls i have the same issue only after upgrading to a hd6950 from an nvidia 9600gt. the nvidia had no issues except poor performance, hd great performance but crashes in bf3.all other games work okay except red orchestra 2 crashes after a while so drivers are the most likly cause. a note for some though it may be a power supply issue also on the 12v rails not supplying enough current to the graffix card and cpu etc as these are high resource games. this is my next ‘try to fix bf3′ solution and it is starting to get frustrating. any other suggestions would be great
cheers
• Kornel
Can anyone tell me if there’s a need to make a space between bcdedit and slash? Should it be bcdedit/set increaseuserva 2500 or bcdedit /set increaseuserva 2500? I mean this command for win command. Made it without a space between bcdedit and slash, in campaign there weren’t crashes@freezes anymore, but in multi there’s a situation, that I play 15-25min and after this time the screen freezes and I can see the mouse icon. Anything I’m able to do is to turn BF3 off from Manager.. Please answer me if I did anything wrong..
• William
Alright having multiple problems with origin and battlefield 3…
Im gonna get right down to it. Whoever wishes to assist me in resolving my issues and get this PoS to work will recieve $350.00…… no I am NOT joking only those who actually believe they can resolve the issues I am having will be able to have a chance at the money. If truly interested email me at [email protected] and in the subject line put$350 and we will go from there
• death
people and gun keep disappering it works fine when u lower the screen resloution but i dont to lower the screen resloution plzzzzzzzzzzzzz help its been 2 months and i want to play this game
• Diesel
I tried all the above fixes and nothing worked. Campaign and co op do not work at all. Co op starts up but mouse and keyboard don’t work. Campaign gives a vague error message after i press continue. Multiplayer usually works fine, but will sometimes not take mouse and keyboard focus until I do some alt+tab and alt+enter combos to allow me to click it in windowed mode.
Win7 x64
i5-560 – quad core 2.8 GHz
8GB corsair
evga GTS 250 (pretty shitty, but gets the job done )
• Darkslayer34
Hello all,
hello all,Battle Field 3 not running in my computer,”app crash”..
what to do?
im from indonesia and thanks be 4
• having the same thing locking up from one round to the next please fixs this as soon as you can
• Stephanie
I just bought bf3 and when I put the installation DVD in my computer it doesn't work, it says there is no CD in my drive. I don't understand?! When I put in other CDs it works!!!!
• Socra
http://www.change.org/petitions/electronic-arts-entertainment-dice-software-developers-fix-bf3-pc-release
Please take the time to sign this petition. I want something to happen finally ! I wish we know where the problem come from, and most importantly, hope to finally answer our questions and find a fix !!
• [...] an unfortunate number were riddled with bugs. Rage, Skyrim, Batman: Arkham City, Dead Island, Battlefield 3 and many others caught negative press over buggy launches, prompting swift apologies and patches [...]
• fragzzone
Tried absolutely everything here with no luck, these are my specs: Win 7, 6gb ram, Intel i7, nVidia GTX 560 Ti, Creative SB X-Fi, Logitech G930, Seagate Momentus XT.
• MattOhYea
fix for nVidia GTX 560 Ti: install msi afterburner and use it to change the voltage of the card to 1050-1075. will fix freezing problem as this card was realsed with not enough voltage to run bf3(and some other dx11 games)
• MattOhYea
Another posible fix for Nvidia 560 Ti Lockup / Freeze:
1.Go to your card manufacturers website. (E.g. Gigabyte, MSI, Asus etc)
2.Select your card type (E.g. GTX 560Ti)
4.Flash it (Install, it’s usually an .exe file)
5.Reboot (Most will prompt you to do it automatically)
6.Frag away.
• IQNero
Hi, My game is stuck at loading level, I’ve heard many solutions like maximize from taskbar but even then its ‘stuck’
This problem just occurred because I have played many matches before without the forever loading part.
Anyone that knows what to do?
• FrCanwell
Hey, I have a gtx 560 ti Asus direct CU and I solved the problem after raised the voltage with msi afterburner. Stock voltage is really too poor for this game.
• InfectedMushroom
I have a single Asus GTX 280 and I only play multiplayer.
I’ve experienced mainly freezes and what helped me is to uncheck “Origin in-game activation” (located in your Origin settings). I’m playing for 3 days non stop after I did that, without any freeze or crash.
• Matthew
Had same problem had windows 7 32 bit keep freezing then I installed windows7 64 bit, fixed the problem
Asrock z68 extremem 7 gen 3
I7 2600k
16 gigs ddr3
1tb hd
64gig Ssd
Geforce gtx570
• Patrick
When I go into multiplayer I can stay on the server for about 30 seconds before the game crashes. I have tried running BF3 in compatibility mode for Vista SP2 32-bit as stated in the minimum specs on the case for the game, but it didn’t help at all. Also, when I try any of the aforementioned fixes, nothing works.
My computer specs:
Windows 7 32-bit SP1
i5 750 @ 2.67 GHz Quad-Core
4GB (2x2GB) Corsair XMS3
NVIDIA PNY 560Ti 1GB DDR5 256-bit @ 850 MHz
1TB Western Digital SATA HDD
• Jamshid
Hi i have HP pavilion DM4 core i5 with 6gb ram and 1696 MB Intel(R) HD graphics family card i have the same issue when i start the game a black screen flashing and i will receive an error please help me please ?
• Tummy22
For people who experience a complete PC freeze-up after ~10/15 minutes of playing:
I’ve had this problem a long time too, where the game would completely freeze up after ~10 minutes of playing. I might have a “solution” for you and this might sound like the most easy solution ever, but just give it a try. Before this I’ve tried almost everything to get rid of the freezes, but nothing really seemed to work.
Solution:
- When you’re gonna play BF3 for the first time since you’ve booted your PC – choose a server, join, play for about 1 minute and exit the game.
- Now just choose an other server to join and stay on this server for a longer period than 10 minutes to see if it freezes up.
I’ve always noticed one thing, which is that joining a server for the first time since the PC booted, it always took me longer to join compared to the server I join after the first match. Since I’m doing this “solution” I never experienced one single freeze… Worth a try.
• Krishanu
when i double clicked bf3.exe a message showed up
the procedure entry point GetCurretProcessorNumber could not be located in the dynamic link library KERNEL32.dll
my pc config
cpu amd athlon x2 2.93GHz
g.card Ati radeon 4650 1 GB
ram 2gb
please post if any1 have any solution……
• Sushant
Please also inform me when you will get the solution for your same error mentioned there..
• Ogi
When i start the game i start constantly spinning around in a circle. i am using a mouse and keyboard, i dont have a joystick, and have unplugged and plugged them both in — it still starts to constantly spin in a circle
• daragman
Hi everyone,
(possibly new) fix for sound loop freeze during bf3:
(not realtek audio related)
I’ve gone through, as far as i know, all possible fixes and nothing helped for my bf3 freeze with sound loop that i get randomly – sometimes after 5min, sometimes 2 hours.
i tried everything from turning down taxing stuff on cpu and gpu in win7 64bit and in-game. Everything back to stock – still crashing freezing. Even undeclocked my rig and loosened timings – still crash/freeze. Updated drivers, disabled onboard sound (even though i use audigy se on pci) – still no go. No heat issues, i checked and checked and checked. PSU = corsair 750TX, no prob there.
So i was considering buying a new soundcard and was checking my mobo manual if i could use the black pcie1 slot on my p5e mobo for cards other than the ‘onboard’ supremefx card that goes with it. (I’m using audigy SE by the way on pci slot 1).
So i came across the shared IRQ listing on my mobo and found that the black pcie1 and the pci slot 1 shared irq with my pcie16 slot (GPU).
My pci2 slot does not share the irq with those, so i thought, cant hurt to try.
I’ve switched my audigy SE to my second pci slot and haven’t had a crash yet.
To be honest I haven’t played many ours yet so i’m not 100% sure its fixed yet, but i thought others with similar problem might try it also and do some testing.
But i’m guessing (if it is fixed) that even though windows normally does a good job of irq sharing, it was too taxing on my rig – p5e mobo, e8400, 2×2 gskill 1066, 6950 2g dirt3, audigy SE.
Cause both cores on my e8400 are @100% during bf3 and somehow it couldnt cope the irq sharing with my 6950 and my audigy SE.
Anyway, i’ll keep ‘testing’ and if it fails again i’ll post again.
• Ratong
Underclock your processor in bios, it worked for me.
I had 3000Mhz now i run it at 2400Mhz and no more freeze/hang!
Can play for hours now.
Cant tell where to look from here.
But i think, Bois then power and look there(mine was set on auto so you might click and look whats under auto in your bios),
I can remember it looked like this.
15X 3000Mhz
14X 2800Mhz
13x 2600Mhz
12x 2400Mhz
ect….
Something like that, i set mine to 12x 2400Mhz and now i play without freeze.
Hope that helps some of you.
• Fraz
hi guys!!
em on mission 3… the game chat between montez and missfit stops and can’t do anythng there…. and shutter do not open no matter what i do… so need help about that…. i’ll appreciate it if someone help me in solving that issue..
thnx
• iwan setiyawan
• Spyider
Updated patched runs worse in sli mode turn it off runs perfect no crashes.
• Sumit
I have a rip version of BF3 there are 3 DVD in pack Successfull installed disc 1 but got error in 2 disc “not responding” and can’t read the disc.
Plz Help
• chaits
After reading all the comments I didnt see any solution for my problem. Why dont EA support guy help users like me. We buy and struggle hours together to fix ourself and give the hope.
Will uninstall it and will buy COD new version. Waste of buying BF games.
• WILLIAM
ITS YOUR RAM SPEED….IF YOUR RAM SPEED IS OVER 1600MHZ THE GAME CRASHES RANDOMLY TO THE DESKTOP WITH NO ERROR MESSAGE..
LOWER YOUR RAM SPEED TO 1600MHZ WITH STABLE TIMINGS LIKE 9-9-9-24
AND THE GAME STOPS CRASHING..
IT WORKS FOR ME AND MAY OR MAY NOT WORK FOR YOU BUT IF YOU HAVE A 790i ULTRA MB I KNOW THIS WILL WORK!!
• chaits
Solution to run BF3
You have a folder called Original in the installed path. Open it and you will see reg files and bf3.exe file. Double click and run those files and bf3.exe as well. Error comes just ignore it. and try to run the BF3.exe from Battlefield folder. Now your game should run. If not then
Try the above steps two or three times. If doesnt run then restart machine and try running those reg files and then BF3.exe file from Battlefield folder. Now atleast it should run. If not then go to below path and do this
You will see new folder getting created in registry.
To check in registry . windows+run and type regedit. hkeylocalmachine->software->wow6432node\eagames or easports.
Just delete the Battlefield folder getting created after running the above steps. Now go and run the file . Game will run. Each time I do these steps and game will run without and interupt in middle.
If you have hands on with troubleshooting in PC then try above things repeatedly it will work. Sometimes restart machine and try it will work.
• Davinci
hey guys I cant see the menu of the game when I stop and there is no tutorial shown on the start of the very beginning plz help
• Patheticme
Hey.
I used to be a clan medic in all of BF, but in BF3 the only thing that needs to be revived is me ><
THe game is sweet. i got top spec computer with ssd disks (NEW), but when i play i get screenfreeze (not crash) for 1-15sec (it varies, avrage screenfreeze is 2-3sec. This is getting me killed, preventing me to kill, and sometimes it happends several time every minutte. as of resoult i cannot do anything but pick a tank and stand way back and fire random to keep my stats alright. it realy sucks and i have almost quit playing beecause of it. Anyone got any bright ideas??
• hrgiii
where is that Origin Client ??
• Ozie
You guys are lucky,to even install the damn thing.Mine stops at 46%
disk read error,i just fucken bought it.Fucken EA i hope they burn to the ground.Anyone else with this problem?Any help would be much appreciated.
• mik
keep trying drink lots of beer there is a solution allways hang in there…. my BF3 perfect ATI6950 is best VC
• Pikkon38
I had this exact problem. but nothing I found on the internet helped. So finally I just uninstalled Battlefield 3, BF3 Web Plugin, and Origin, and reinstalled all on a separate hard drive. BAM problem fixed
• Saeed
Hi dears,i have a big problem help me plz,i installed bf3 and install crack and drivers but when i start bf3,game show(bf3 stoping working)and below show fault module name:atidxx32.dll)can you help me?plz my system detailrt is:
msi cx600
cpu: intel(r)core(tm)2 duo cpu 2.20 GHz
RAM:4
VGA:1
windows 7 32bit
what is my problem?can you help me plz?i,m wait for your answer
• wireless.p
I am a computer novice. I have been playing BF3 for some time now and the game freezes. I went to look at the processes and it seems bf3.exe *32 is a process that gets hung up and I can’t get rid of it without doing a hard boot on the machine. I am using an i7 and GTX 560. Can anyone help me with this?
• vasilis
Hello there, I have a problem with the battlefield 3 game: when i start the game to play, after several minutes or seconds the game stops working without any error or warning to let me know what the problem is. I tried the (Problem #3 Crash fix – Crash to desktop upon Launch / Startup) guide that you have to fix it, but nothing — I still got this problem. Is there any other way or tip to fix it… pls let me know.
• frank
Problem #12 Battlefield 3 (BF3) Stuck at “Joining Server” or long load times on Win 7 64-bit systems
That just fucked my bf3 completely and i did everything right.
• cardro
Hi please help, while playing bf3 online, the moment i zoom in with a scope the game freezes.
intel i5 650 @ 3.2GhZ
nvidia gforce gts 450 1G
6G ram
WINDOWS 7 64b
Log Name: Application
Source: Application Error
Date: 2012/11/03 08:52:34 PM
Event ID: 1000
Level: Error
Keywords: Classic
User: N/A
Computer: SEEKER
Description:
Faulting application name: bf3.exe, version: 1.4.0.0, time stamp: 0x500530ad
Faulting module name: nvwgf2um.dll, version: 9.18.13.1033, time stamp: 0x5081d490
Exception code: 0xc0000005
Fault offset: 0x006dec8e
Faulting process id: 0x2904
Faulting application start time: 0x01cdb9ee890032a8
Faulting application path: C:\Program Files (x86)\Electronic Arts\Battlefield 3\bf3.exe
Faulting module path: C:\Windows\system32\nvwgf2um.dll
Report Id: a0c43a54-25e7-11e2-8ebf-00012e31592c
An elevator has a weight limit of $630$ kg. It is carrying a group of people of whom the heaviest weighs $57$ kg and the lightest weighs $53$ kg. What is the maximum possible number of people in the group?
1. $12$
2. $13$
3. $11$
4. $14$
Given that, Weight limit of elevator $= 630\;\text{kg}$
• The lightest person's weight $= 53 \;\text{kg}$
• The heaviest person's weight $= 57 \;\text{kg}$
We can write, $53 + \underbrace{\ldots \ldots}_{\text{Weights of } n \text{ people}} + 57 \leq 630$
In order to have maximum people in the lift, all the remaining people should be of the lightest weight possible, which is $53 \;\text{kg}.$
Suppose there are $n$ people in the elevator besides the lightest and the heaviest.
Then, $53 + n(53) + 57 \leq 630$
$\Rightarrow 53n \leq 520$
$\Rightarrow n \leq \frac{520}{53}$
$\Rightarrow n \leq 9.811$
$\Rightarrow n_{\text{max}} = \left \lfloor 9.811 \right \rfloor$
$\Rightarrow n_{\text{max}} = 9$
$\therefore$ The maximum number of people in the group $= n_{\text{max}} + 2 = 9 + 2 = 11.$
Correct Answer $: \text{C}$
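The arithmetic above is easy to sanity-check programmatically — a quick sketch in R:

```r
# max number of additional 53-kg passengers after seating the
# 53-kg lightest and the 57-kg heaviest person
n <- floor((630 - 53 - 57) / 53)   # floor(520 / 53) = 9
n + 2                              # total people = 11
```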
# evaluating vector space models with word analogies
## Introduction
This post walks through corpus-based methods for evaluating the efficacy of vector space models in capturing semantic relations. Here we consider the standard evaluation tool for VSMs: the offset method for solving word analogies. While this method is not without its limitations/criticisms (see Linzen (2016) for a very nice discussion), our focus here is on an R-based work-flow.
The nuts/bolts of these types of evaluations can often be glossed over in the NLP literature; here we unpack methods and work through some reproducible examples. Ultimately, our goal is to understand how standard VSM parameters (eg, dimensionality & window size) affect model efficacy, specifically for personal and/or non-standard corpora.
## Corpus & model
The corpus used here for demonstration is derived from texts made available via Project Gutenberg. We have sampled the full PG corpus to create a more manageable sub-corpus of ~7K texts and 250M words. A simple description of its construction is available here.
library(tidyverse)
setwd('/home/jtimm/jt_work/GitHub/data_sets/project_gutenberg')
corpus <- readRDS('sample-pg-corpus.rds')
The type of vector space model (VSM) implemented is a GloVe model; the R package text2vec is utilized to construct this semantic space. Below, two text2vec data primitives are created: an iterator and a vectorized vocabulary. See this vignette for a more detailed account of the text2vec framework.
t2v_itokens <- text2vec::itoken(unlist(corpus),
preprocessor = tolower,
tokenizer = text2vec::space_tokenizer,
n_chunks = 1,
ids = names(corpus))
vocab1 <- text2vec::create_vocabulary(t2v_itokens, stopwords = tm::stopwords())
vocab2 <- text2vec::prune_vocabulary(vocab1, term_count_min = 50)
vectorizer <- text2vec::vocab_vectorizer(vocab2)
## Evaluation & analogy
We use two sets of analogy problems for VSM evaluation: the standard Google data set (Mikolov et al. 2013) and the BATS set (Gladkova, Drozd, and Matsuoka 2016). See this brief appendix for some additional details about the problem sets, including category types and examples. See this appendix for a simple code-through of building your own analogy problem sets – compatible structure-wise with the Google data set and the text2vec framework.
I have stored these files on Git Hub, but both are easily accessible online. Important to note, the Google file has not been modified in any way; the BATS file, on the other hand, has been re-structured in the image of the Google file.
questions_file <- paste0(analogy_dir, 'questions-words.txt')
questions_file2 <- paste0(analogy_dir, 'bats-questions-words.txt')
google_analogy_set <- text2vec::prepare_analogy_questions(
  questions_file_path = questions_file,
  vocab_terms = vocab2$term)

## INFO [18:19:44.078] 11779 full questions found out of 19544 total

bats_analogy_set <- text2vec::prepare_analogy_questions(
  questions_file_path = questions_file2,
  vocab_terms = vocab2$term)
## INFO [18:19:45.217] 39378 full questions found out of 56036 total
tests <- c(google_analogy_set, bats_analogy_set)
The long & short of the vector offset method as applied to analogy problems: per some analogy defined in (1) below:
(1) a:a* :: b:__
where a = Plato, a* = Greek, and b = Copernicus, we solve for b* as
(2) b* = a* - a + b
based on the assumption that:
(3) a* - a = b* - b
In other words, we assume that the vector offsets between two sets of words related semantically in similar ways will be consistent when plotted in 2d semantic space. Solving for b*, then, amounts to identifying the word whose vector representation is most similar (per cosine similarity) to a* - a + b (excluding a*, a, or b).
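As a toy illustration, the offset method can be sketched in a few lines of R. The tiny matrix below is hand-built for demonstration — the words and values are invented, not learned from any corpus; with a trained model, `m` would simply be the word-vector matrix (eg, `glove_vectors`):

```r
# hand-built toy "semantic space" -- rows are 2d word vectors
m <- rbind(plato      = c(1, 2),
           greek      = c(2, 3),
           copernicus = c(4, 1),
           polish     = c(5, 2))

# b* = a* - a + b
target <- m['greek', ] - m['plato', ] + m['copernicus', ]

# cosine similarity of every word vector to the target vector
cos_sim <- apply(m, 1, function(v)
  sum(v * target) / (sqrt(sum(v^2)) * sqrt(sum(target^2))))

# exclude a, a* & b; the most similar remaining term answers the analogy
candidates <- cos_sim[!names(cos_sim) %in% c('plato', 'greek', 'copernicus')]
best <- names(which.max(candidates))
best  # "polish"
```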
## Experimental set-up
### Parameters
So, to evaluate effects of window size and dimensionality on the efficacy of a GloVe model in solving analogies, we build a total of 50 GloVe models – ie, all combinations of window sizes 3:12 and model dimensions in (50, 100, 150, 200, 250).
p_windows <- c(3:12)
p_dimensions <- c(50, 100, 150, 200, 250)
ls <- length(p_windows) * length(p_dimensions)
z = 0
results <- list()
details <- vector()
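As a quick sanity check on the experimental grid (not part of the original flow), the full set of window/dimension combinations can be enumerated with `expand.grid`:

```r
# all window-size x dimensionality combinations evaluated below
params <- expand.grid(window = 3:12,
                      dimensions = c(50, 100, 150, 200, 250))
nrow(params)  # 50 models
```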
### Flow
The nasty for loop below can be translated into layman’s terms as: for window size j and dimensions k, (1) build GloVe model j-k via text2vec::GlobalVectors, and then (2) test accuracy of GloVe model j-k via text2vec::check_analogy_accuracy.
for(j in 1:length(p_windows)) {
tcm <- text2vec::create_tcm(it = t2v_itokens,
vectorizer = vectorizer,
skip_grams_window = p_windows[j])
for(k in 1:length(p_dimensions)) {
glove <- text2vec::GlobalVectors$new(rank = p_dimensions[k], x_max = 10)

wv_main <- glove$fit_transform(tcm,
                               n_iter = 10,
                               convergence_tol = 0.01)

glove_vectors <- wv_main + t(glove$components)

res <- text2vec::check_analogy_accuracy(
  questions_list = google_analogy_set,
  m_word_vectors = glove_vectors)

id <- paste0('-windows_', p_windows[j], '-dims_', p_dimensions[k])
z <- z + 1
results[[z]] <- res
details[z] <- id
}
}

### Output structure

Responses to the analogy test are summarized as a list of data frames – one for each of our 50 GloVe models.

names(results) <- details
answers <- results %>% bind_rows(.id = 'model')

Test components have been hashed (per text2vec) to speed up the “grading” process – here, we cross things back to actual text.

key <- vocab2 %>% mutate(id = row_number()) %>% select(id, term)

tests_df <- lapply(tests, data.frame) %>%
  bind_rows() %>%
  mutate(aid = row_number())

tests_df$X1 <- key$term[match(tests_df$X1, key$id)]
tests_df$X2 <- key$term[match(tests_df$X2, key$id)]
tests_df$X3 <- key$term[match(tests_df$X3, key$id)]
tests_df$X4 <- key$term[match(tests_df$X4, key$id)]

Then we join test & response data to create a single, readable data table.

predicted_actual <- answers %>%
  group_by(window, dimensions) %>%
  mutate(aid = row_number()) %>%
  ungroup() %>%
  left_join(key, by = c('predicted' = 'id')) %>%
  rename(predicted_term = term) %>%
  left_join(key, by = c('actual' = 'id')) %>%
  rename(actual_term = term) %>%
  left_join(tests_df %>% select(aid, X1:X3)) %>%
  na.omit %>%
  mutate(correct = ifelse(predicted == actual, 'Y', 'n'))

A sample of this table is presented below; incorrect answers are generally more interesting.

## Results: model parameters

Our performance metric for a given model, then, is the percentage of correct analogy responses, or analogy accuracy. Accuracy scores by dimensions, window size, and data set/analogy category are computed below.
mod_category_summary <- answers %>%
  mutate(correct = ifelse(predicted == actual, 'Y', 'N')) %>%
  mutate(dset = ifelse(grepl('gram', category), 'google', 'bats')) %>%
  group_by(dset, dimensions, window, category, correct) %>%
  summarize(n = n()) %>%
  ungroup() %>%
  spread(correct, n) %>%
  mutate(N = as.integer(N), Y = as.integer(Y))

### Vector dimensionality effect

mod_summary <- mod_category_summary %>%
  filter(!is.na(Y)) %>%
  group_by(dset, window, dimensions) %>%
  summarize(N = sum(N), Y = sum(Y)) %>%
  ungroup() %>%
  mutate(per = round(Y/(N+Y) * 100, 1))

The plot below illustrates the relationship between analogy accuracy and # of model dimensions as a function of window size, faceted by analogy set. Per the plot, GloVe model gains in analogy performance plateau at 150 dimensions for all window sizes; in several instances, accuracy decreases at dimensions > 150. Also – the BATS collection of analogies would appear to be a bit more challenging.

mod_summary %>%
  ggplot() +
  geom_line(aes(x = dimensions, y = per,
                color = factor(window),
                linetype = factor(window)),
            size = 1) +
  facet_wrap(~dset) +
  theme_minimal() +
  ggthemes::scale_color_stata() +
  theme(axis.text.x = element_text(angle = 90, hjust = 1)) +
  theme(legend.position = 'right') +
  ylab('Accuracy (%)') + xlab("Dimensions") +
  labs(title = 'Accuracy (%) versus # Dimensions')

### Window Size effect

The plot below illustrates the relationship between analogy accuracy and window size as a function of dimensionality. Here, model performance improves per step-increase in window size – accuracy seems to improve most substantially from window sizes 8 to 9. Also, some evidence of a leveling off at window sizes > 9 for higher-dimension models. The simplest and highest performing model for this particular corpus (in the aggregate), then, is the window size = 10, dimensions = 150 model.
mod_summary %>%
  ggplot() +
  geom_line(aes(x = window, y = per,
                color = factor(dimensions),
                linetype = factor(dimensions)),
            size = 1.25) +
  facet_wrap(~dset) +
  theme_minimal() +
  ggthemes::scale_color_few() +
  scale_x_continuous(breaks = c(3:12)) +
  theme(legend.position = 'right') +
  ylab('Accuracy (%)') + xlab("Window Size") +
  labs(title = 'Accuracy (%) versus Window Size')

## Results: analogy categories

Next, we disaggregate model efficacy by analogy category and window size, holding dimensionality constant at 150. Results for each set of analogy problems are visualized as tiled “heatmaps” below; dark green indicates higher accuracy within a particular category, dark brown lower accuracy. Folks have noted previously in the literature that smaller window sizes tend to be better at capturing relations more semantic (as opposed to more grammatical) in nature. Some evidence for that here.

### Google analogy set

mod_category_summary %>%
  filter(!grepl('_', category)) %>%
  filter(dimensions == 150) %>%
  mutate(per = round(Y/(N+Y) * 100, 1)) %>%
  filter(!is.na(per)) %>%
  group_by(category) %>%
  mutate(rank1 = rank(per)) %>%
  ungroup() %>%
  ggplot(aes(x = factor(window), y = category)) +
  geom_tile(aes(fill = rank1)) +
  geom_text(aes(fill = rank1, label = per), size = 3) +
  scale_fill_gradient2(low = scales::muted("#d8b365"),
                       mid = "#f5f5f5",
                       high = scales::muted('#5ab4ac'),
                       midpoint = 5) +
  theme(legend.position = 'none') +
  xlab('WINDOW SIZE') +
  ggtitle('Google analogies: accuracy by category')

### BATS analogy set

mod_category_summary %>%
  filter(grepl('_', category)) %>%
  filter(dimensions == 150) %>%
  mutate(per = round(Y/(N+Y) * 100, 1),
         category = gsub('^.*/', '', category)) %>%
  filter(!is.na(per)) %>%
  group_by(category) %>%
  mutate(rank1 = rank(per)) %>%
  ungroup() %>%
  ggplot(aes(x = factor(window), y = category)) +
  geom_tile(aes(fill = rank1)) +
  geom_text(aes(fill = rank1, label = per), size = 3) +
  scale_fill_gradient2(low = scales::muted("#d8b365"),
                       mid = "#f5f5f5",
                       high = scales::muted('#5ab4ac'),
                       midpoint = 5) +
  theme(legend.position = 'none') +
  xlab('WINDOW SIZE') +
  ggtitle('BATS analogies: accuracy by category')

## Visualizing vector offsets

### GloVe model in two dimensions

For demonstration purposes, we use a semantic space derived from the window size = 5 and dimensions = 100 GloVe model. This space is transformed from 100 GloVe dimensions to two dimensions via principal component analysis.

pca_2d <- prcomp(glove_vectors, scale = TRUE, center = TRUE) %>%
  pluck(5) %>%
  data.frame() %>%
  select(PC1, PC2)

A 30,000 foot view of this two-dimensional semantic space.

ggplot(data = pca_2d, aes(x = PC1, y = PC2)) +
  geom_point(size = .05, color = 'lightgray') +
  geom_text(data = pca_2d,
            aes(x = PC1, y = PC2, label = rownames(pca_2d)),
            size = 3, color = 'steelblue', check_overlap = TRUE) +
  xlim(-2.5, 2.5) + ylim(-2.5, 2.5) +
  theme_minimal()

### Copernicus & Plato

x1 = 'copernicus'; x2 = 'polish'; y1 = 'plato'; y2 = 'greek'

y <- pca_2d[rownames(pca_2d) %in% c(x1, x2, y1, y2), ]

Finally, a visual demonstration of the vector offset method at work in solving the Copernicus analogy problem, situated within the full semantic space for context.

ggplot(data = pca_2d, aes(x = PC1, y = PC2)) +
  geom_text(data = pca_2d,
            aes(x = PC1, y = PC2, label = rownames(pca_2d)),
            size = 3, color = 'gray', check_overlap = TRUE) +
  xlim(min(c(off_dims$x1, off_dims$x2)),
       max(c(off_dims$x1, off_dims$x2))) +
  ylim(min(c(off_dims$y1, off_dims$y2)),
       max(c(off_dims$y1, off_dims$y2))) +
geom_segment(data = off_dims[1:2,],
aes(x = x2, y = y2,
xend = x1, yend = y1),
color = '#df6358',
size = 1.25,
arrow = arrow(length = unit(0.025, "npc"))) +
geom_segment(data = off_dims[3:4,],
aes(x = x1, y = y1,
xend = x2, yend = y2),
color = 'steelblue',
size = 1.25,
#linetype = 4,
arrow = arrow(length = unit(0.025, "npc"))) +
ggrepel::geom_text_repel(
data = y,
aes(label = toupper(rownames(y))),
direction = "y",
hjust = 0,
size = 4.25,
color = 'black') +
theme_minimal() +
ggtitle(paste0(x1, ':', x2, ' :: ', y1, ':', y2))
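The arithmetic behind the offset method is easy to state outside of R as well. The sketch below is a toy Python illustration (the 2-D vectors and the `solve_analogy` helper are invented for the example; this is not the text2vec implementation):

```python
import numpy as np

def solve_analogy(vectors, a, b, c):
    """Return the word (excluding a, b, c) whose vector is closest,
    by cosine similarity, to vec(b) - vec(a) + vec(c)."""
    target = vectors[b] - vectors[a] + vectors[c]
    best, best_sim = None, -np.inf
    for word, vec in vectors.items():
        if word in (a, b, c):
            continue
        sim = np.dot(target, vec) / (np.linalg.norm(target) * np.linalg.norm(vec))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# A made-up 2-D space in which the "nationality" offset is constant:
toy = {
    'copernicus': np.array([1.0, 1.0]),
    'polish':     np.array([1.0, 2.0]),
    'plato':      np.array([3.0, 1.0]),
    'greek':      np.array([3.0, 2.0]),
}
print(solve_analogy(toy, 'copernicus', 'polish', 'plato'))  # -> greek
```

With real GloVe vectors the nearest neighbour is rarely this exact, which is why the evaluation above reports accuracy over many test items rather than exact recovery.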
## Summary & caveats
Mostly an excuse to gather some thoughts. I use GloVe models quite a bit for exploratory purposes. To better trust insights gained from exploration, it is generally nice to have an evaluative tool, however imperfect. And certainly to justify parameter selection for corpus-specific tasks. Hopefully a useful resource and guide. For some innovative and more thoughtful applications of VSMs, see Weston et al. (2019) and Tshitoyan et al. (2019).
## References
Gladkova, Anna, Aleksandr Drozd, and Satoshi Matsuoka. 2016. “Analogy-Based Detection of Morphological and Semantic Relations with Word Embeddings: What Works and What Doesn’t.” In Proceedings of the Naacl Student Research Workshop, 8–15.
Linzen, Tal. 2016. “Issues in Evaluating Semantic Spaces Using Word Analogies.” arXiv Preprint arXiv:1606.07736.
Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. “Efficient Estimation of Word Representations in Vector Space.” arXiv Preprint arXiv:1301.3781.
Tshitoyan, Vahe, John Dagdelen, Leigh Weston, Alexander Dunn, Ziqin Rong, Olga Kononova, Kristin A Persson, Gerbrand Ceder, and Anubhav Jain. 2019. “Unsupervised Word Embeddings Capture Latent Knowledge from Materials Science Literature.” Nature 571 (7763): 95–98.
Weston, Leigh, Vahe Tshitoyan, John Dagdelen, Olga Kononova, Amalie Trewartha, Kristin A Persson, Gerbrand Ceder, and Anubhav Jain. 2019. “Named Entity Recognition and Normalization Applied to Large-Scale Information Extraction from the Materials Science Literature.” Journal of Chemical Information and Modeling 59 (9): 3692–3702.
## Algebra 2 (1st Edition)
Published by McDougal Littell
# Chapter 10 Counting Methods and Probability - 10.3 Define and Use Probability - 10.3 Exercises - Skill Practice - Page 702: 22
#### Answer
$\frac{5}{2}$
#### Work Step by Step
The number of reds is $8$, and the number of non-reds is: $10+6+4=20$. Thus the odds against red are: $\frac{20}{8}=\frac{5}{2}$
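As a quick machine check of the same arithmetic (variable names are ours; the counts come from the exercise), in Python:

```python
from fractions import Fraction

reds = 8
non_reds = 10 + 6 + 4              # the non-red counts from the exercise: 20
odds_against_red = Fraction(non_reds, reds)  # Fraction reduces 20/8 automatically
print(odds_against_red)            # 5/2
```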
## B3/S12-ae34ceit
For discussion of other cellular automata.
### B3/S12-ae34ceit
A mix of 2x2, LowLife (B3/S13), maze, and explosions (not really, just takes a while to stabilise). Apgsearch doesn't work so well but you can try.
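For anyone wanting to experiment outside Golly, a bare-bones generation step for a totalistic Life-like rule can be sketched in Python. Note this is only an approximation of B3/S12-ae34ceit: the "-ae" and "ceit" qualifiers are isotropic (Hensel-notation) conditions on neighbour arrangements, which plain neighbour counts cannot express, so the sketch is just the totalistic skeleton, sanity-checked here on a plain B3/S23 blinker:

```python
from collections import Counter

def step(cells, birth, survive):
    """One generation of a *totalistic* Life-like CA on a set of live (x, y)
    cells. This ignores the '-ae' and 'ceit' arrangement qualifiers of
    B3/S12-ae34ceit, so it only approximates the rule discussed here."""
    neigh = Counter()
    for (x, y) in cells:
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    neigh[(x + dx, y + dy)] += 1
    # A cell is alive next generation if it survives or is born.
    return {c for c, n in neigh.items()
            if n in (survive if c in cells else birth)}

# Sanity check with plain B3/S23: a blinker returns after two steps.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker, {3}, {2, 3}), {3}, {2, 3}) == blinker)  # True
```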
p2:
x = 17, y = 6, rule = B3/S12-ae34ceit
2o3b2obo4bo$b2o3b2o5b2o$11b2o2bo$6b2o4bo2b2o$5b2obo4b2o$14bo!
p3:
x = 14, y = 6, rule = B3/S12-ae34ceit
2o6bo4bo$2o6bo4bo$6bo4bo$2b2o4bo4bo$2b2o4bo2bo$11bo!
p4:
x = 4, y = 5, rule = B3_S12-ae34ceit2b2o$o$ob2o2$2b2o! p6: x = 15, y = 5, rule = B3_S12-ae34ceit5bo3bo$2o2bobobobo2b2o$4bobobobo$2o2bobobobo2b2o$5bo3bo! Last edited by drc on June 2nd, 2016, 9:19 pm, edited 1 time in total. This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.) Current rule interest: B2ce3-ir4a5y/S2-c3-y drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA ### Re: B3/S12-ae34ceit Indestructable wall: x = 15, y = 323, rule = B3_S12-ae34ceit2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o9bobo$10bo3bo$2o9bobo2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o2$2o! Ms can be used to make oscs, like this p3: x = 8, y = 5, rule = B3_S12-ae34ceit2o4b2o$2bo2bo$3o2b3o$2bo2bo$2o4b2o! Or this p6: x = 11, y = 5, rule = B3_S12-ae34ceit2o7b2o$2bo5bo$3o5b3o$2bo5bo$2o7b2o! p7: x = 8, y = 8, rule = B3_S12-ae34ceit2o$2bo$3o$2bo$2o$4b3o$3bobobo$3bobobo!
p8:
x = 9, y = 6, rule = B3/S12-ae34ceit
7b2o$2o4bo$2bo3b3o$3o3bo$2bo4b2o$2o!
p9:
x = 12, y = 6, rule = B3/S12-ae34ceit
10b2o$2o7bo$2bo6b3o$3o6bo$2bo7b2o$2o!
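The patterns in this thread are given in RLE (run length encoded) form: in the body, `b` is a dead cell, `o` a live cell, a number repeats the following symbol, `$` ends a row, and `!` ends the pattern. A minimal decoder sketch in Python (illustrative only, not Golly's full parser — it expects the body with the `x = …, rule = …` header already stripped and ignores comments):

```python
import re

def decode_rle(rle):
    """Decode an RLE body (header already removed) into a set of live (x, y) cells."""
    cells, x, y = set(), 0, 0
    for count, tag in re.findall(r'(\d*)([bo$!])', rle):
        n = int(count) if count else 1
        if tag == 'b':          # run of dead cells
            x += n
        elif tag == 'o':        # run of live cells
            cells.update((x + i, y) for i in range(n))
            x += n
        elif tag == '$':        # end of row(s)
            y += n
            x = 0
        else:                   # '!' terminates the pattern
            break
    return cells

# The classic Life glider, for illustration:
print(sorted(decode_rle('bo$2bo$3o!')))
# [(0, 2), (1, 0), (1, 2), (2, 1), (2, 2)]
```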
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
drc
Posts: 1664
Joined: December 3rd, 2015, 4:11 pm
Location: creating useless things in OCA
### Re: B3/S12-ae34ceit
A c/2 orthogonal and a c/4 diagonal:
x = 11, y = 6, rule = B3/S12-ae34ceit
bo$o2bo6bo$o9bo$o7bo$b2o4b2o$2bo5bo!
Dirty puffer:
x = 6, y = 13, rule = B3/S12-ae34ceit
2bo$b2o$o$o$o2bo$bo3bo$5bo$bo3bo$o2bo$o$o$b2o$2bo!
Smaller, cleaner puffer:
x = 6, y = 6, rule = B3/S12-ae34ceit
2bo$b2o$o$o$o2bobo$bo3bo!
p15:
x = 9, y = 11, rule = B3/S12-ae34ceit
obobobo$obobobo$2bobo2b2o$2bo$obobob3o$obobo$2bobob3o$obo$obobo2b2o$2bobobo$2bobobo!
p20:
x = 8, y = 11, rule = B3/S12-ae34ceit
2bobobo$2bobobo$3obo$4bo$4o2$3o$3bobobo$2obobobo$3bo$3bo!
Sphenocorona
Posts: 477
Joined: April 9th, 2013, 11:03 pm
### Re: B3/S12-ae34ceit
This feels like it could be made into a c/2:
x = 7, y = 6, rule = B3/S12-ae34ceit
3bo$2b3o$b5o$2o3b2o$3bo$3bo!
Tubfuse:
x = 101, y = 101, rule = B3/S12-ae34ceit
bo$2o$2bo$3bo$4bo$5bo$6bo$7bo$8bo$9bo$10bo$11bo$12bo$13bo$14bo$15bo$16bo$17bo$18bo$19bo$20bo$21bo$22bo$23bo$24bo$25bo$26bo$27bo$28bo$29bo$30bo$31bo$32bo$33bo$34bo$35bo$36bo$37bo$38bo$39bo$40bo$41bo$42bo$43bo$44bo$45bo$46bo$47bo$48bo$49bo$50bo$51bo$52bo$53bo$54bo$55bo$56bo$57bo$58bo$59bo$60bo$61bo$62bo$63bo$64bo$65bo$66bo$67bo$68bo$69bo$70bo$71bo$72bo$73bo$74bo$75bo$76bo$77bo$78bo$79bo$80bo$81bo$82bo$83bo$84bo$85bo$86bo$87bo$88bo$89bo$90bo$91bo$92bo$93bo$94bo$95bo$96bo$97bo$98bo$99bo$100bo!
Another Tubfuse:
x = 102, y = 102, rule = B3/S12-ae34ceit
bo$2bo$3o$3bo$4bo$5bo$6bo$7bo$8bo$9bo$10bo$11bo$12bo$13bo$14bo$15bo$16bo$17bo$18bo$19bo$20bo$21bo$22bo$23bo$24bo$25bo$26bo$27bo$28bo$29bo$30bo$31bo$32bo$33bo$34bo$35bo$36bo$37bo$38bo$39bo$40bo$41bo$42bo$43bo$44bo$45bo$46bo$47bo$48bo$49bo$50bo$51bo$52bo$53bo$54bo$55bo$56bo$57bo$58bo$59bo$60bo$61bo$62bo$63bo$64bo$65bo$66bo$67bo$68bo$69bo$70bo$71bo$72bo$73bo$74bo$75bo$76bo$77bo$78bo$79bo$80bo$81bo$82bo$83bo$84bo$85bo$86bo$87bo$88bo$89bo$90bo$91bo$92bo$93bo$94bo$95bo$96bo$97bo$98bo$99bo$100bo$101bo!
How many of these things are there?
x = 103, y = 103, rule = B3/S12-ae34ceit
3bo$o2bo$3bo$4o$4bo$5bo$6bo$7bo$8bo$9bo$10bo$11bo$12bo$13bo$14bo$15bo$16bo$17bo$18bo$19bo$20bo$21bo$22bo$23bo$24bo$25bo$26bo$27bo$28bo$29bo$30bo$31bo$32bo$33bo$34bo$35bo$36bo$37bo$38bo$39bo$40bo$41bo$42bo$43bo$44bo$45bo$46bo$47bo$48bo$49bo$50bo$51bo$52bo$53bo$54bo$55bo$56bo$57bo$58bo$59bo$60bo$61bo$62bo$63bo$64bo$65bo$66bo$67bo$68bo$69bo$70bo$71bo$72bo$73bo$74bo$75bo$76bo$77bo$78bo$79bo$80bo$81bo$82bo$83bo$84bo$85bo$86bo$87bo$88bo$89bo$90bo$91bo$92bo$93bo$94bo$95bo$96bo$97bo$98bo$99bo$100bo$101bo$102bo!
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
drc
Posts: 1664
Joined: December 3rd, 2015, 4:11 pm
Location: creating useless things in OCA
### Re: B3/S12-ae34ceit
Here's another c/2 engine (p22, odd bilateral symmetry). On its own it evolves into a p660 puffer.
x = 27, y = 5, rule = B3/S12-ae34ceit
3bo9bo9bo$2b3o7b3o7b3o$2obob2o3b2obob2o3b2obob2o$3bo9bo9bo$3bo9bo9bo!
The two additional copies are included to reduce the startup time of the p660 puffer and reduce the settling time of the ash. Two copies (same spacing as above) form a clean p44 puffer:
x = 17, y = 5, rule = B3/S12-ae34ceit
3bo9bo$2b3o7b3o$2obob2o3b2obob2o$3bo9bo$3bo9bo!
Another combination which settles into a p330 puffer:
x = 34, y = 5, rule = B3/S12-ae34ceit
3bo26bo$2b3o24b3o$2obob2o20b2obob2o$3bo26bo$3bo26bo!
wildmyron
Posts: 885
Joined: August 9th, 2013, 12:45 am
### Re: B3/S12-ae34ceit
drc wrote:Indestructable wall: [wall pattern quoted from the earlier post]
You might want to run that for a bit longer... The wall gets destroyed.
Gamedziner
Posts: 584
Joined: May 30th, 2016, 8:47 pm
Location: Milky Way Galaxy: Planet Earth
### Re: B3/S12-ae34ceit
P18:
x = 11, y = 13, rule = B3/S12-ae34ceit
3bo3bo$3b2ob2o$4bobo$2bo5bo$2o2bobo2b2o$b3o3b3o2$b3o3b3o$2o2bobo2b2o$2bo5bo$4bobo$3b2ob2o$3bo3bo!
P8:
x = 8, y = 9, rule = B3_S12-ae34ceit3b2o$bo4bo$2b6o2$8o2$2b6o$bo4bo$3b2o!
P15:
x = 7, y = 6, rule = B3/S12-ae34ceit
4bo$b4o$obobo$obobobo$2bobobo$bo3bo!
Still drifting.
Bullet51
Posts: 495
Joined: July 21st, 2014, 4:35 am
### Re: B3/S12-ae34ceit
Gamedziner wrote: You might want to run that for a bit longer... The wall gets destroyed.
I knew that; I meant that if you made the wall go on for longer, it becomes indestructible.
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
drc
Posts: 1664
Joined: December 3rd, 2015, 4:11 pm
Location: creating useless things in OCA
### Re: B3/S12-ae34ceit
Small flipflop p72:
x = 9, y = 9, rule = B3/S12-ae34ceit
2bobobo$2bobobo$2o4b3o2$2o4b3o2$3o3b3o$2bobobo$2bobobo!
p16:
x = 17, y = 17, rule = B3/S12-ae34ceit
4bobobobobo$4bobobobobo$6bobobo$6bo3bo$2o13b2o2$4o9b4o$8bo$3o4bobo4b3o$8bo$4o9b4o2$2o13b2o$6bo3bo$6bobobo$4bobobobobo$4bobobobobo!
Sphenocorona
Posts: 477
Joined: April 9th, 2013, 11:03 pm
### Re: B3/S12-ae34ceit
Well, I hope that Catagolue accepts 1000 soup hauls. Just to clarify, a blinker lasts 10,731 generations:
x = 3, y = 1, rule = B3/S12-ae34ceit
3o!
A pretty durable eater, shown eating two ways in seven gens:
x = 4, y = 9, rule = B3/S12-ae34ceit
2o$2bo$2o2$2obo2$2o$2bo$2o!
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y drc Posts: 1664 Joined: December 3rd, 2015, 4:11 pm Location: creating useless things in OCA ### Re: B3/S12-ae34ceit p6 2c/3 orthogonal signal travels down wire: x = 439, y = 10, rule = B3_S12-ae34ceitb2o433b2o$o2bobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobo2bo$o2bobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobo2bo$b2o5b2obo3bo5b3o412b2o$9b3o3bo5b3o$9b3o3bo5b3o$b2o5b2obo3bo5b3o412b2o$o2bobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobo2bo$o2bobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobob
obobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobo2bo$b2o433b2o!
It's made from a replicator that travels along the wire pulling some other pattern on the wire that would otherwise just decay away. No idea if anything can be converted to it or extracted from it without destroying the wire.
Sphenocorona
Posts: 477
Joined: April 9th, 2013, 11:03 pm
### Re: B3/S12-ae34ceit
Gamedziner wrote:You might want to run that for a bit longer... The wall gets destroyed.
Yes, but not from the front - it only fails once the pattern reaches the ends, but otherwise it's indestructible. So if you drew the wall in a torus and/or made it infinitely long, it would work.
gamer54657 wrote:God save us all.
God save humanity.
M. I. Wright
Posts: 370
Joined: June 13th, 2015, 12:04 pm
### Re: B3/S12-ae34ceit
Throwing out the S4c actually makes it easier to search.
This post was brought to you by the letter D, for dishes that Andrew J. Wade won't do. (Also Daniel, which happens to be me.)
Current rule interest: B2ce3-ir4a5y/S2-c3-y
drc
Posts: 1664
Joined: December 3rd, 2015, 4:11 pm
Location: creating useless things in OCA
### Re: B3/S12-ae34ceit
I've modified Catagolue to accept hauls as long as they *either* have 10k soups or 250k objects. Presumably this would make searching B3/S12-ae34ceit much easier?
What do you do with ill crystallographers? Take them to the mono-clinic!
calcyman
Posts: 1832
Joined: June 1st, 2009, 4:32 pm
### Re: B3/S12-ae34ceit
p64 c/16 orthogonal signal:
x = 454, y = 18, rule = B3/S12-ae34ceit5bobo$4bo3bo$5b3o2$bo3b3o$obobo4bo$2bobo2b2obobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobob2o$obobobo3bobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobob2o$bo4b4o3b3o2bobo$5bo14b2o421b2obo$6bob2o10bo426bobobo$13b3o2bobo422b2ob2obobo$7b2obobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobo4bobobobo$7b2obobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobobob2o2bo3bo$443bo$445b6o2$447b2o! 
A p13: x = 13, y = 9, rule = B3/S12-ae34ceit2bobobobobo$2bobobobobo$2o4bobo$7b6o$3o2bo$7b6o$2o4bobo$2bobobobobo$2bobobobobo! And a p1161 121c/387 puffer: x = 11, y = 24, rule = B3/S12-ae34ceit5bo$4bobo$4bobo$4bobo$o4bo4bo$3ob3ob3o$o9bo$4b3o3$4bobo$5bo6$b3obob3o$2bo5bo3$2b7o$2bo5bo$3bo3bo! 5 cell infinite growth: x = 4, y = 3, rule = B3/S12-ae34ceit3bo$2obo$3bo! Head of a c/3 orthogonal spaceship: x = 6, y = 3, rule = B3/S12-ae34ceit4bo$3o2bo$4bo! x = 4, y = 2, rule = B3/S23ob2o$2obo!
(Check Gen 2)
toroidalet
Posts: 881
Joined: August 7th, 2016, 1:48 pm
Location: Somewhere on a planet called "Earth"
### Re: B3/S12-ae34ceit
toroidalet wrote:p64 c/16 orthogonal signal:
signal
Simple termination accepts either-parity signal:
x = 36, y = 26, rule = B3/S12-ae34ceitobobobobobobobobobobobobobobobobo$obobobobobobobobobobobobobobobobo$b3o2bobo23bob2o$8b2o22bo$8bo22b2o$b3o2bobo23bob2o$obobobobobobobobobobobobobobobobo$obobobobobobobobobobobobobobobobo11$obobobobobobobobobobobobobobobobo$obobobobobobobobobobobobobobobobo$b3o2bobo23bob2o$8bo23bo$8b2o21b2o$b3o2bobo23bob2o$obobobobobobobobobobobobobobobobo$obobobobobobobobobobobobobobobobo! LifeWiki: Like Wikipedia but with more spaceships. [citation needed] BlinkerSpawn Posts: 1808 Joined: November 8th, 2014, 8:48 pm Location: Getting a snacker from R-Bee's ### Re: B3/S12-ae34ceit c/9!! I name it "Gangsta" or "Weekender 2.0" x = 12, y = 15, rule = B3/S12-ae34ceit2bo6bo$2bo6bo5$5o2b5o$b3o4b3o$2bo6bo2$3b6o$2b2o4b2o2$4bo2bo$5b2o! Weekender 2.0 seems better Thanks ntzfind once again!! If you're the person that uploaded to Sakagolue illegally, please PM me. x = 17, y = 10, rule = B3/S23b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5bo2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
Saka
Posts: 2624
Joined: June 19th, 2015, 8:50 pm
Location: In the kingdom of Sultan Hamengkubuwono X
### Re: B3/S12-ae34ceit
A wave based on a partial from ntzfind:
x = 36, y = 296, rule = B3/S12-ae34ceit2bo$bobo2$2bo$2bo$2bo2$b3o$3o$3bo$2bobo2$3bo$3bo$3bo2$2b3o$b3o$4bo$3bobo2$4bo$4bo$4bo2$3b3o$2b3o$5bo$4bobo2$5bo$5bo$5bo2$4b3o$3b3o$6bo$5bobo2$6bo$6bo$6bo2$5b3o$4b3o$7bo$6bobo2$7bo$7bo$7bo2$6b3o$5b3o$8bo$7bobo2$8bo$8bo$8bo2$7b3o$6b3o$9bo$8bobo2$9bo$9bo$9bo2$8b3o$7b3o$10bo$9bobo2$10bo$10bo$10bo2$9b3o$8b3o$11bo$10bobo2$11bo$11bo$11bo2$10b3o$9b3o$12bo$11bobo2$12bo$12bo$12bo2$11b3o$10b3o$13bo$12bobo2$13bo$13bo$13bo2$12b3o$11b3o$14bo$13bobo2$14bo$14bo$14bo2$13b3o$12b3o$15bo$14bobo2$15bo$15bo$15bo2$14b3o$13b3o$16bo$15bobo2$16bo$16bo$16bo2$15b3o$14b3o$17bo$16bobo2$17bo$17bo$17bo2$16b3o$15b3o$18bo$17bobo2$18bo$18bo$18bo2$17b3o$16b3o$19bo$18bobo2$19bo$19bo$19bo2$18b3o$17b3o$20bo$19bobo2$20bo$20bo$20bo2$19b3o$18b3o$21bo$20bobo2$21bo$21bo$21bo2$20b3o$19b3o$22bo$21bobo2$22bo$22bo$22bo2$21b3o$20b3o$23bo$22bobo2$23bo$23bo$23bo2$22b3o$21b3o$24bo$23bobo2$24bo$24bo$24bo2$23b3o$22b3o$25bo$24bobo2$25bo$25bo$25bo2$24b3o$23b3o$26bo$25bobo2$26bo$26bo$26bo2$25b3o$24b3o$27bo$26bobo2$27bo$27bo$27bo2$26b3o$25b3o$28bo$27bobo2$28bo$28bo$28bo2$27b3o$26b3o$29bo$28bobo2$29bo$29bo$29bo2$28b3o$27b3o$30bo$29bobo2$30bo$30bo$30bo2$29b3o$28b3o$31bo$30bobo2$31bo$31bo$31bo2$30b3o$29b3o$32bo$31bobo2$32bo$32bo$32bo2$31b3o$30b3o$33bo$32bobo2$33bo$33bo$33bo2$32b3o$31b3o$34bo$33bobo2$34bo$34bo$34bo2$33b3o! 
A p30 wick: x = 5, y = 409, rule = B3/S12-ae34ceit2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o2$2bo$bobo2$2bo$2bo$2bo2$b3o$5o! A c/4 ship from ntzfind: x = 13, y = 30, rule = B3/S12-ae34ceit6bo$6bo$3b2obob2o$2b2o2bo2b2o$bo4bo4bo$b2o7b2o$2b3o3b3o$5b3o2$5b3o$4bobobo$3bo2bo2bo$6bo$6bo$3bobobobo$2b2obobob2o$3bobobobo$5bobo$b3o5b3o2$4bobobo$bo4bo4bo$3b2obob2o$4bobobo$5b3o$5b3o$b3o5b3o$o2b2o3b2o2bo$b2ob2ob2ob2o$bobo5bobo!
A c/5 ship from ntzfind (again):
x = 11, y = 72, rule = B3/S12-ae34ceit2bo5bo$2bo5bo2$2ob2ob2ob2o$b3o3b3o$o3bobo3bo$bo2bobo2bo$b2obobob2o$4bobo$b2obobob2o$4bobo$b2obobob2o$2bobobobo$b2obobob2o$4bobo$2bobobobo$3b2ob2o$b2o5b2o$3o5b3o$3bo3bo$2b3ob3o2$5bo$5bo$2b2obob2o$bo7bo2$4b3o$b2o2bo2b2o$3bo3bo2$2bo5bo$2b7o$bo7bo$o9bo2$2bo5bo$3b5o2$2ob2ob2ob2o$bobobobobo$2bo5bo$b2o5b2o$3ob3ob3o$3bobobo$2b2obob2o$3bobobo3$3b5o$4b3o$5bo$4b3o$4bobo$2o2bobo2b2o2$o3bobo3bo$4bobo$4bobo$3b2ob2o$4bobo$bobo3bobo$o9bo$2b2o3b2o$b3o3b3o$2b2o3b2o$2bo5bo$2bobobobo2$bo2bobo2bo$b2o5b2o$2bo5bo! A p304 wick: x = 11, y = 693, rule = B3/S12-ae34ceit4b3o$b2o5b2o$2b2o3b2o$bobo3bobo$3bo3bo$2b2obob2o$bo3bo3bo$bobobobobo$2b2obob2o$3b2ob2o$4bobo$4b3o$2bo2bo2bo$3bobobo$3bobobo$2bo5bo$4bobo$3bo3bo$bobobobobo$o2bobobo2bo$3bobobo$o2bobobo2bo$5bo$4b3o$3ob3ob3o$3bobobo$2bo2bo2bo$bobobobobo$4b3o$5bo$2bo2bo2bo$2b7o5$2bo5bo$b2ob3ob2o$3bobobo$3bobobo$2b2obob2o$2b2obob2o$5bo$3b2ob2o$3b2ob2o$3b2ob2o$3b2ob2o$3bo3bo$3bobobo$4b3o$4b3o$3b2ob2o$2bo2bo2bo2$3b2ob2o$3bobobo$2b2obob2o$bobobobobo$b2o2bo2b2o$5bo2$3bo3bo$2b7o$5bo$bo3bo3bo$bo3bo3bo3$5bo$bobobobobo$2obobobob2o$2b2obob2o$2bo2bo2bo$2bo2bo2bo$2obo3bob2o$bob2ob2obo$3bobobo$bo3bo3bo$3bobobo$2bo2bo2bo$4b3o$5bo$3bo3bo$3bo3bo$4b3o3$b3obob3o$2bo2bo2bo$2b2obob2o$3bobobo$2b2obob2o$5bo$2bo2bo2bo$2b2o3b2o$2b2o3b2o$b3o3b3o$b2o5b2o$2b2obob2o$3bobobo$3b2ob2o2$2b2obob2o$2bo5bo$2bo5bo$3bo3bo$5bo$2b2obob2o$bobobobobo$2bo2bo2bo$5bo$4bobo$5bo$3bobobo$2b2obob2o$bobobobobo$bo2b3o2bo2$5bo$4b3o$ob2obob2obo$bobobobobo$bobobobobo$5bo$5bo$bobobobobo$ob2obob2obo$5bo$3bobobo$bo3bo3bo$2bo2bo2bo$3bobobo$3bo3bo2$3bo3bo$5bo$4b3o$2b3ob3o$2b2obob2o$2bo2bo2bo$5bo$2bo2bo2bo$2b2obob2o$3bobobo2$4bobo$4bobo2$4b3o$3bobobo$5bo$4b3o$4b3o$b2o5b2o$2b2o3b2o$bobo3bobo$3bo3bo$2b2obob2o$bo3bo3bo$bobobobobo$2b2obob2o$3b2ob2o$4bobo$4b3o$2bo2bo2bo$3bobobo$3bobobo$2bo5bo$4bobo$3bo3bo$bobobobobo$o2bobobo2bo$3bobobo$o2bobobo2bo$5bo$4b3o$3ob3ob3o$3bobobo$2bo2bo2bo$bobobobobo$4b3o$5bo$2bo2bo2bo$2b7o5$2bo5bo$b2ob3ob2o$3bobobo$3bobobo$2b2obob2o$2b2obob2o$5bo
$3b2ob2o$3b2ob2o$3b2ob2o$3b2ob2o$3bo3bo$3bobobo$4b3o$4b3o$3b2ob2o$2bo2bo2bo2$3b2ob2o$3bobobo$2b2obob2o$bobobobobo$b2o2bo2b2o$5bo2$3bo3bo$2b7o$5bo$bo3bo3bo$bo3bo3bo3$5bo$bobobobobo$2obobobob2o$2b2obob2o$2bo2bo2bo$2bo2bo2bo$2obo3bob2o$bob2ob2obo$3bobobo$bo3bo3bo$3bobobo$2bo2bo2bo$4b3o$5bo$3bo3bo$3bo3bo$4b3o3$b3obob3o$2bo2bo2bo$2b2obob2o$3bobobo$2b2obob2o$5bo$2bo2bo2bo$2b2o3b2o$2b2o3b2o$b3o3b3o$b2o5b2o$2b2obob2o$3bobobo$3b2ob2o2$2b2obob2o$2bo5bo$2bo5bo$3bo3bo$5bo$2b2obob2o$bobobobobo$2bo2bo2bo$5bo$4bobo$5bo$3bobobo$2b2obob2o$bobobobobo$bo2b3o2bo2$5bo$4b3o$ob2obob2obo$bobobobobo$bobobobobo$5bo$5bo$bobobobobo$ob2obob2obo$5bo$3bobobo$bo3bo3bo$2bo2bo2bo$3bobobo$3bo3bo2$3bo3bo$5bo$4b3o$2b3ob3o$2b2obob2o$2bo2bo2bo$5bo$2bo2bo2bo$2b2obob2o$3bobobo2$4bobo$4bobo2$4b3o$3bobobo$5bo$4b3o$4b3o$b2o5b2o$2b2o3b2o$bobo3bobo$3bo3bo$2b2obob2o$bo3bo3bo$bobobobobo$2b2obob2o$3b2ob2o$4bobo$4b3o$2bo2bo2bo$3bobobo$3bobobo$2bo5bo$4bobo$3bo3bo$bobobobobo$o2bobobo2bo$3bobobo$o2bobobo2bo$5bo$4b3o$3ob3ob3o$3bobobo$2bo2bo2bo$bobobobobo$4b3o$5bo$2bo2bo2bo$2b7o5$2bo5bo$b2ob3ob2o$3bobobo$3bobobo$2b2obob2o$2b2obob2o$5bo$3b2ob2o$3b2ob2o$3b2ob2o$3b2ob2o$3bo3bo$3bobobo$4b3o$4b3o$3b2ob2o$2bo2bo2bo2$3b2ob2o$3bobobo$2b2obob2o$bobobobobo$b2o2bo2b2o$5bo2$3bo3bo$2b7o$5bo$bo3bo3bo$bo3bo3bo3$5bo$bobobobobo$2obobobob2o$2b2obob2o$2bo2bo2bo$2bo2bo2bo$2obo3bob2o$bob2ob2obo$3bobobo$bo3bo3bo$3bobobo$2bo2bo2bo$4b3o$5bo$3bo3bo$3bo3bo$4b3o3$b3obob3o$2bo2bo2bo$2b2obob2o$3bobobo$2b2obob2o$5bo$2bo2bo2bo$2b2o3b2o$2b2o3b2o$b3o3b3o$b2o5b2o$2b2obob2o$3bobobo$3b2ob2o2$2b2obob2o$2bo5bo$2bo5bo$3bo3bo$5bo$2b2obob2o$bobobobobo$2bo2bo2bo$5bo$4bobo$5bo$3bobobo$2b2obob2o$bobobobobo$bo2b3o2bo2$5bo$4b3o$ob2obob2obo$bobobobobo$bobobobobo$5bo$5bo$bobobobobo$ob2obob2obo$5bo$3bobobo$bo3bo3bo$2bo2bo2bo$3bobobo$3bo3bo2$3bo3bo$5bo$4b3o$2b3ob3o$2b2obob2o$2bo2bo2bo$5bo$2bo2bo2bo$2b2obob2o$3bobobo2$4bobo$4bobo2$4b3o$3bobobo$5bo$4b3o$4b3o$b2o5b2o$2b2o3b2o$bobo3bobo$3bo3bo$2b2obob2o$bo3bo3bo$bobobobobo$2b2obob2o$3b2ob2o$4bobo$4b3o$2bo2bo2bo$3bobobo$3bo
bobo$2bo5bo$4bobo$3bo3bo$bobobobobo$o2bobobo2bo$3bobobo$o2bobobo2bo$5bo$4b3o$3ob3ob3o$3bobobo$2bo2bo2bo$bobobobobo$4b3o$5bo$2bo2bo2bo$2b7o5$2bo5bo$b2ob3ob2o$3bobobo$3bobobo$2b2obob2o$2b2obob2o$5bo$3b2ob2o$3b2ob2o$3b2ob2o$3b2ob2o$3bo3bo$3bobobo$4b3o$4b3o$3b2ob2o$2bo2bo2bo2$3b2ob2o$3bobobo$2b2obob2o$bobobobobo$b2o2bo2b2o$5bo2$3bo3bo$2b7o$5bo$bo3bo3bo$bo3bo3bo3$5bo$bobobobobo$2obobobob2o$2b2obob2o$2bo2bo2bo$2bo2bo2bo$2obo3bob2o$bob2ob2obo$3bobobo$bo3bo3bo$3bobobo$2bo2bo2bo$4b3o$5bo$3bo3bo$3bo3bo$4b3o3$b3obob3o$2bo2bo2bo$2b2obob2o$3bobobo$2b2obob2o$5bo$2bo2bo2bo$2b2o3b2o$2b2o3b2o$b3o3b3o$b2o5b2o$2b2obob2o$3bobobo$3b2ob2o2$2b2obob2o$2bo5bo$2bo5bo$3bo3bo$5bo$2b2obob2o$bobobobobo$2bo2bo2bo$5bo$4bobo$5bo$3bobobo$2b2obob2o$bobobobobo$bo2b3o2bo2$5bo$4b3o$ob2obob2obo$bobobobobo$bobobobobo$5bo$5bo$bobobobobo$ob2obob2obo$5bo$3bobobo$bo3bo3bo$2bo2bo2bo$3bobobo$3bo3bo2$3bo3bo$5bo$4b3o$2b3ob3o$2b2obob2o$2bo2bo2bo$5bo$2bo2bo2bo$2b2obob2o$3bobobo2$4bobo$4bobo2$4b3o$3bobobo$5bo$4b3o$4b3o$b2o5b2o$2b2o3b2o$bobo3bobo$3bo3bo$2b2obob2o$bo3bo3bo$bobobobobo$2b2obob2o$3b2ob2o$4bobo$4b3o$2bo2bo2bo$3bobobo$3bobobo$2bo5bo$4bobo$3bo3bo$bobobobobo$o2bobobo2bo$3bobobo$o2bobobo2bo$5bo$4b3o$3ob3ob3o$3bobobo$2bo2bo2bo$bobobobobo$4b3o$5bo$2bo2bo2bo$2b7o5$2bo5bo$b2ob3ob2o$3bobobo$3bobobo$2b2obob2o$2b2obob2o$5bo$3b2ob2o$3b2ob2o$3b2ob2o$3b2ob2o$3bo3bo$3bobobo$4b3o$4b3o$3b2ob2o$2bo2bo2bo2$3b2ob2o$3bobobo$2b2obob2o$bobobobobo$b2o2bo2b2o$5bo2$3bo3bo$2b7o$5bo$bo3bo3bo$bo3bo3bo3$5bo$bobobobobo$2obobobob2o$2b2obob2o$2bo2bo2bo$2bo2bo2bo$2obo3bob2o$bob2ob2obo$3bobobo$bo3bo3bo$3bobobo$2bo2bo2bo$4b3o$5bo$3bo3bo$3bo3bo$4b3o! x = 4, y = 2, rule = B3/S23ob2o$2obo!
(Check Gen 2)
toroidalet
Posts: 881
Joined: August 7th, 2016, 1:48 pm
Location: Somewhere on a planet called "Earth"
### Re: B3/S12-ae34ceit
toroidalet wrote:A c/4 ship from ntzfind:
x = 13, y = 30, rule = B3/S12-ae34ceit6bo$6bo$3b2obob2o$2b2o2bo2b2o$bo4bo4bo$b2o7b2o$2b3o3b3o$5b3o2$5b3o$4bobobo$3bo2bo2bo$6bo$6bo$3bobobobo$2b2obobob2o$3bobobobo$5bobo$b3o5b3o2$4bobobo$bo4bo4bo$3b2obob2o$4bobobo$5b3o$5b3o$b3o5b3o$o2b2o3b2o2bo$b2ob2ob2ob2o$bobo5bobo! The front of that is also a ship: x = 13, y = 8, rule = B3/S12-ae34ceit6bo$6bo$3b2obob2o$2b2o2bo2b2o$bo4bo4bo$b2o7b2o$2b3o3b3o$5b3o!
$$x_1=\eta x$$
$$V^*_\eta=c^2\sqrt{\Lambda\eta}$$
$$K=\frac{\Lambda u^2}2$$
$$P_a=1-\frac1{\int^\infty_{t_0}p(t)^{l(t)}dt}$$
http://conwaylife.com/wiki/A_for_all
Aidan F. Pierce
A for awesome
Posts: 1673
Joined: September 13th, 2014, 5:36 pm
Location: 0x-1
### Re: B3/S12-ae34ceit
A for awesome wrote:
toroidalet wrote:A c/4 ship from ntzfind:
Large tagalong
The front of that is also a ship:
x = 13, y = 8, rule = B3/S12-ae34ceit6bo$6bo$3b2obob2o$2b2o2bo2b2o$bo4bo4bo$b2o7b2o$2b3o3b3o$5b3o! Power button! If you're the person that uploaded to Sakagolue illegally, please PM me. x = 17, y = 10, rule = B3/S23b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5bo2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
Saka
Posts: 2624
Joined: June 19th, 2015, 8:50 pm
Location: In the kingdom of Sultan Hamengkubuwono X
### Re: B3/S12-ae34ceit
4c/11 ship from ntzfind:
x = 11, y = 20, rule = B3/S12-ae34ceit5bo$5bo$4b3o3$3bo3bo$2b3ob3o$3bo3bo2$bo2b3o2bo$2o3bo3b2o$2o7b2o$b2o5b2o$2b2o3b2o$4bobo$2b2o3b2o$2b3ob3o2$4bobo$4bobo! x = 4, y = 2, rule = B3/S23ob2o$2obo!
(Check Gen 2)
toroidalet
### Re: B3/S12-ae34ceit
toroidalet wrote:4c/11 ship from ntzfind:
x = 11, y = 20, rule = B3/S12-ae34ceit5bo$5bo$4b3o3$3bo3bo$2b3ob3o$3bo3bo2$bo2b3o2bo$2o3bo3b2o$2o7b2o$b2o5b2o$2b2o3b2o$4bobo$2b2o3b2o$2b3ob3o2$4bobo$4bobo! Wow!! Now this rule has 2 velocities life doesn't! If you're the person that uploaded to Sakagolue illegally, please PM me. x = 17, y = 10, rule = B3/S23b2ob2obo5b2o$11b4obo$2bob3o2bo2b3o$bo3b2o4b2o$o2bo2bob2o3b4o$bob2obo5bo2b2o$2b2o4bobo2b3o$bo3b5ob2obobo$2bo5bob2o$4bob2o2bobobo!
(Check gen 2)
Saka
### Re: B3/S12-ae34ceit
2c/7 ship from ntzfind:
x = 15, y = 127, rule = B3/S12-ae34ceit4bo5bo$4bo5bo2$2bo4bo4bo$b2ob7ob2o$2bo9bo$2b2ob5ob2o$2obo7bob2o$bo3bo3bo3bo$b2obo5bob2o2$4b2o3b2o2$3bobo3bobo$2b2o7b2o$3b3o3b3o$3bo2bobo2bo$2b5ob5o$2bo3bobo3bo$3b3o3b3o$6b3o$3b2o5b2o$4b3ob3o$5bobobo$3bobo3bobo$b3o2bobo2b3o$b2o4bo4b2o$bobobo3bobobo$6b3o2$3bo7bo$4bo5bo$5bo3bo$4b2obob2o$7bo$7bo$7bo$7bo$5b2ob2o$4bobobobo$3b2obobob2o$3bo2bobo2bo$3b4ob4o$2bob2o3b2obo$2bo2bo3bo2bo$bo5bo5bo$4b7o$3b2o5b2o$4b7o$b2o9b2o$2b11o$2bo9bo$4b7o2$4b7o$3bo7bo$3b9o$3b2ob3ob2o$3b2o5b2o$4b2o3b2o$4b2o3b2o$3bobo3bobo$3bobo3bobo$3bo7bo$b3obobobob3o$3ob7ob3o$3b2o5b2o$5b5o$4bo5bo$5b5o$2b2o7b2o$bo3b5o3bo2$bo3b5o3bo$2bo4bo4bo$2bo2bo3bo2bo$2bo2bo3bo2bo$4b2o3b2o2$4b2o3b2o$3bobobobobo$3b2o2bo2b2o$b2ob2obob2ob2o$2bo2bobobo2bo$3b2ob3ob2o$3b2ob3ob2o$6bobo$b2o2b5o2b2o$b13o$2bo9bo$bo3bo3bo3bo$5bo3bo$4bo5bo3$5bobobo$2b2obobobob2o$2b2obobobob2o$2bo2bobobo2bo$3b2o2bo2b2o$3bo3bo3bo$4b2o3b2o$2o11b2o$5bo3bo$4b2o3b2o$3bo7bo$2bob2o3b2obo3$4bo5bo$3bo7bo3$3bo2b3o2bo$2b2o7b2o$b2o2bo3bo2b2o$bobo7bobo$b2o2bo3bo2b2o$3bo7bo$3bob2ob2obo$2b2o3bo3b2o$bo2bob3obo2bo$bobo2b3o2bobo2$2bo9bo2$2o11b2o! 
As it turns out, the last partial outputted by the search evolves into a shorter 2c/7, and here it is: x = 15, y = 95, rule = B3/S12-ae34ceit4bo5bo$4bo5bo2$2bo4bo4bo$b2ob7ob2o$2bo9bo$2b2ob5ob2o$2obo7bob2o$bo3bo3bo3bo$b2obo5bob2o2$4b2o3b2o2$3bobo3bobo$2b2o7b2o$3b3o3b3o$3bo2bobo2bo$2b5ob5o$2bo3bobo3bo$3b3o3b3o$6b3o$3b2o5b2o$4b3ob3o$5bobobo$3bobo3bobo$b3o2bobo2b3o$b2o4bo4b2o$bobobo3bobobo$6b3o2$3bo7bo$4bo5bo$5bo3bo$4b2obob2o$7bo$7bo$7bo$7bo$5b2ob2o$4bobobobo$3b2obobob2o$3bo2bobo2bo$3b4ob4o$2bob2o3b2obo$2bo2bo3bo2bo$bo5bo5bo$4b7o$3b2o5b2o$4b7o$b2o9b2o$2b11o$2bo9bo$4b7o2$4b7o$3bo7bo$3b9o$3b2ob3ob2o$3b2o5b2o$4b2o3b2o$4b2o3b2o$3bobo3bobo$3bobo3bobo$3bo7bo$b3obobobob3o$3ob7ob3o$3b2o5b2o$5b5o$4bo5bo$5b5o$2b2o7b2o$bo3b5o3bo2$bo3b5o3bo$2bo4bo4bo$2bo2bo3bo2bo$2bo2bo3bo2bo$4b2o3b2o2$4b2o3b2o$3bobobobobo$3b2o2bo2b2o$b2ob2obob2ob2o$2bo2bobobo2bo$3b2ob3ob2o$3b2ob3ob2o$6bobo$b2o2b5o2b2o$b13o$2bo9bo$bo3bo3bo3bo$5bo3bo$4bo5bo2$6bobo! Here are the known spaceships arranged by speed: x = 99, y = 95, rule = B3/S12cikn34ceit3bo9b3o7bo16bo5bo15bo11bo5bo8bo6bo$3bo8bo3bo6bo16bo5bo15bo11bo5bo8bo6bo$bo9b2o9b3o34b2obob2o$2o13bo22bo4bo4bo9b2o2bo2b2o5b2ob2ob2ob2o$bo35b2ob7ob2o7bo4bo4bo5b3o3b3o$21bo3bo12bo9bo8b2o7b2o4bo3bobo3bo$20b3ob3o11b2ob5ob2o9b3o3b3o6bo2bobo2bo5b5o2b5o$21bo3bo10b2obo7bob2o10b3o9b2obobob2o6b3o4b3o$37bo3bo3bo3bo26bobo10bo6bo$19bo2b3o2bo9b2obo5bob2o23b2obobob2o$18b2o3bo3b2o47bobo11b6o$18b2o7b2o11b2o3b2o26b2obobob2o7b2o4b2o$19b2o5b2o46bobobobo$20b2o3b2o12bobo3bobo25b2obobob2o9bo2bo$22bobo13b2o7b2o27bobo13b2o$20b2o3b2o12b3o3b3o26bobobobo$20b3ob3o12bo2bobo2bo27b2ob2o$38b5ob5o24b2o5b2o$22bobo13bo3bobo3bo23b3o5b3o$22bobo14b3o3b3o27bo3bo$42b3o29b3ob3o$39b2o5b2o$40b3ob3o30bo$41bobobo31bo$39bobo3bobo26b2obob2o$37b3o2bobo2b3o23bo7bo$37b2o4bo4b2o$37bobobo3bobobo26b3o$42b3o28b2o2bo2b2o$75bo3bo$39bo7bo$40bo5bo27bo5bo$41bo3bo28b7o$40b2obob2o26bo7bo$43bo28bo9bo$43bo$43bo30bo5bo$43bo31b5o$41b2ob2o$40bobobobo25b2ob2ob2ob2o$39b2obobob2o25bobobobobo$39bo2bobo2bo26bo5bo$39b4ob4o25b2o5b2o$38bob2o3b
2obo23b3ob3ob3o$38bo2bo3bo2bo26bobobo$37bo5bo5bo24b2obob2o$40b7o28bobobo$39b2o5b2o$40b7o$37b2o9b2o25b5o$38b11o27b3o$38bo9bo28bo$40b7o29b3o$76bobo$40b7o25b2o2bobo2b2o$39bo7bo$39b9o24bo3bobo3bo$39b2ob3ob2o28bobo$39b2o5b2o28bobo$40b2o3b2o28b2ob2o$40b2o3b2o29bobo$39bobo3bobo25bobo3bobo$39bobo3bobo24bo9bo$39bo7bo26b2o3b2o$37b3obobobob3o23b3o3b3o$36b3ob7ob3o23b2o3b2o$39b2o5b2o26bo5bo$41b5o28bobobobo$40bo5bo$41b5o27bo2bobo2bo$38b2o7b2o24b2o5b2o$37bo3b5o3bo24bo5bo2$37bo3b5o3bo$38bo4bo4bo$38bo2bo3bo2bo$38bo2bo3bo2bo$40b2o3b2o2$40b2o3b2o$39bobobobobo$39b2o2bo2b2o$37b2ob2obob2ob2o$38bo2bobobo2bo$39b2ob3ob2o$39b2ob3ob2o$42bobo$37b2o2b5o2b2o$37b13o$38bo9bo$37bo3bo3bo3bo$41bo3bo$40bo5bo2$42bobo!
You can see my unsightly 2c/7 and c/5 ships next to the nice tiny ships by everyone else.
x = 4, y = 2, rule = B3/S23ob2o$2obo!
(Check Gen 2)
toroidalet
Posts: 881
Joined: August 7th, 2016, 1:48 pm
Location: Somewhere on a planet called "Earth"
### Re: B3/S12-ae34ceit
toroidalet wrote:4c/11 ship from ntzfind:
x = 11, y = 20, rule = B3/S12-ae34ceit5bo$5bo$4b3o3$3bo3bo$2b3ob3o$3bo3bo2$bo2b3o2bo$2o3bo3b2o$2o7b2o$b2o5b2o$2b2o3b2o$4bobo$2b2o3b2o$2b3ob3o2$4bobo$4bobo!
Wow. I don't think that such a large ship has ever been found before in any rule with a period >8. This is exactly what I hoped ntzfind would find (apart from more ships in tlife), and it gives me hope that similar things exist in Life.
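The ship patterns traded in this thread are in Golly's run-length-encoded (RLE) format: digits are run counts, b is a dead cell, o is a live cell, $ ends a row, and ! ends the pattern. A minimal decoder sketch (illustrative Python, not a forum tool; it ignores the x/y/rule header line):

```python
def decode_rle(body):
    """Decode an RLE pattern body into a 0/1 grid (b = dead, o = alive)."""
    rows, row, count = [], [], 0
    for ch in body:
        if ch.isdigit():
            count = count * 10 + int(ch)      # accumulate multi-digit run counts
        elif ch in "bo":
            row.extend([1 if ch == "o" else 0] * max(count, 1))
            count = 0
        elif ch == "$":
            rows.append(row)
            rows.extend([[]] * (max(count, 1) - 1))  # a run of $ means blank rows
            row, count = [], 0
        elif ch == "!":
            rows.append(row)
            break
    width = max(len(r) for r in rows)
    return [r + [0] * (width - len(r)) for r in rows]

# e.g. the block "2o$2o!" decodes to [[1, 1], [1, 1]]
```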
A for awesome
### Re: B3/S12-ae34ceit
ntzfind:
c/2
x = 7, y = 14, rule = B3/S12cikn34ceit2bo$b3o$b3o$bobo$b3o2$4b2o2$3bo2bo$3bo2bo$2b3o$b2o$bo$o! c/3 that uses the "head of a c/3 spaceship" x = 16, y = 19, rule = B3/S12cikn34ceit5bo4bo$4bobo2bobo2$5bo4bo$5bo4bo$4bob4obo$3bob2o2b2obo$3bo8bo$5bob2obo$2bobob4obobo$bo2b3o2b3o2bo$bo2b2o4b2o2bo$2obo8bob2o$b3ob2o2b2ob3o$4b2o4b2o$6bo2bo$3bo8bo$bob2o6b2obo$bo12bo!
Saka
# How to add a 32-bit input in Quartus 2
Can you help me add a multi-bit (bus) input in Quartus II? If there is no default one, how can I add it with the MegaWizard Plug-in Manager? Thanks!
If you are using the schematic editor, just change the name of the input (or output or bidir) pin to whatever[31..0]. The array indices are the same as if you did it in Verilog, except that a .. is used instead of a :.
# reference request – Kernels with finite dimensional feature spaces
Suppose $$x, y \in \mathbb{R}^n$$ for some given fixed $$n$$.
Consider a kernel $$K(x,y) = f(\langle x, y \rangle)$$. I’d like to know which functions $$f$$ admit a finite dimensional feature map. In other words, for $$x, y \in \mathbb{R}^n$$, for which functions $$f$$ do there exist an $$m$$ and $$\phi: \mathbb{R}^n \rightarrow \mathbb{R}^m$$ with
$$f(\langle x, y \rangle) = \langle \phi(x), \phi(y) \rangle?$$
I can show that $$f$$ must be polynomial if $$m < 2^n$$, but I’m sure there must exist a more comprehensive result. |
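As a concrete instance of the polynomial case: on $$\mathbb{R}^2$$, the kernel with $$f(t) = t^2$$ admits the explicit feature map $$\phi(x) = (x_1^2, x_2^2, \sqrt{2}\,x_1 x_2)$$ into $$\mathbb{R}^3$$. A quick numerical sanity check (illustrative Python; the function names are mine, not from the question):

```python
import math

def phi(x):
    # explicit feature map for f(t) = t^2 on R^2: phi(x) = (x1^2, x2^2, sqrt(2)*x1*x2)
    x1, x2 = x
    return (x1 * x1, x2 * x2, math.sqrt(2) * x1 * x2)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def kernel(x, y):
    # K(x, y) = f(<x, y>) with f(t) = t^2
    return dot(x, y) ** 2

x, y = (1.0, 2.0), (3.0, -1.0)
# <phi(x), phi(y)> should agree with f(<x, y>) for any x, y
```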
# Study on integro-differential equation with generalized p-Laplacian operator
Li Wei1, Ravi P Agarwal23* and Patricia JY Wong4
Author Affiliations
1 School of Mathematics and Statistics, Hebei University of Economics and Business, Shijiazhuang, 050061, China
2 Department of Mathematics, Texas A&M University — Kingsville, Kingsville, TX, 78363, USA
3 Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
4 School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore, 639798, Singapore
Boundary Value Problems 2012, 2012:131 doi:10.1186/1687-2770-2012-131
Received: 13 June 2012 Accepted: 24 October 2012 Published: 13 November 2012
This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
### Abstract
We tackle the existence and uniqueness of the solution for a kind of integro-differential equations involving the generalized p-Laplacian operator with mixed boundary conditions. This is achieved by using some results on the ranges for maximal monotone operators and pseudo-monotone operators. The method used in this paper extends and complements some previous work.
MSC: 47H05, 47H09.
##### Keywords:
maximal monotone operator; pseudo-monotone operator; generalized p-Laplacian operator; integro-differential equation; mixed boundary conditions
### 1 Introduction
Nonlinear boundary value problems (BVPs) involving the p-Laplacian operator $$\Delta_p$$ arise from a variety of physical phenomena such as non-Newtonian fluids, reaction-diffusion problems, petroleum extraction, flow through porous media, etc. Thus, the study of such problems and their generalizations has attracted considerable attention in recent years. Some of the BVPs studied in the literature include the following:
$$\begin{cases} -\Delta_p u + g(x, u(x)) = f(x), & \text{a.e. in } \Omega, \\ \frac{\partial u}{\partial n} = 0, & \text{a.e. on } \Gamma \end{cases} \quad (1.1)$$
whose existence results in L p ( Ω ) (for various ranges of p) can be found in [1-4]; a related BVP
$$\begin{cases} -\Delta_p u + g(x, u(x)) = f(x), & \text{a.e. in } \Omega, \\ -\langle \vartheta, |\nabla u|^{p-2}\nabla u \rangle \in \beta_x(u(x)), & \text{a.e. on } \Gamma \end{cases} \quad (1.2)$$
was tackled in [5-7] and later generalized to one that contains a perturbation term | u | p 2 u [8,9]
$$\begin{cases} -\Delta_p u + |u|^{p-2}u + g(x, u(x)) = f(x), & \text{a.e. in } \Omega, \\ -\langle \vartheta, |\nabla u|^{p-2}\nabla u \rangle \in \beta_x(u(x)), & \text{a.e. on } \Gamma. \end{cases} \quad (1.3)$$
Motivated by Tolksdorf’s work [10] where the following Dirichlet BVP has been discussed:
$$\begin{cases} -\operatorname{div}\bigl[(C(x) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u\bigr] = f(x), & \text{a.e. in } K(1,S), \\ u = g, & \text{a.e. in } \Sigma(1,S), \end{cases} \quad (1.4)$$
several generalizations have been investigated. These include [11-14]
(1.5)
(1.6)
and
$$\begin{cases} -\operatorname{div}\bigl[(C(x) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u\bigr] + \varepsilon|u|^{q-2}u + g(x, u(x)) = f(x), & \text{a.e. in } \Omega, \\ -\langle \vartheta, (C(x) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u \rangle \in \beta_x(u(x)), & \text{a.e. on } \Gamma, \end{cases} \quad (1.7)$$
where 0 C ( x ) L p ( Ω ) , ε is a nonnegative constant and ϑ denotes the exterior normal derivative of Γ.
Inspired by all this research, recently we have studied the following nonlinear parabolic equation with mixed boundary conditions [15]:
$$\begin{cases} \frac{\partial u}{\partial t} - \operatorname{div}\bigl[(C(x,t) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u\bigr] + \varepsilon|u|^{p-2}u = f(x,t), & (x,t) \in \Omega \times (0,T), \\ -\langle \vartheta, (C(x,t) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u \rangle \in \beta(u) - h(x,t), & (x,t) \in \Gamma \times (0,T), \\ u(x,0) = u(x,T), & \text{a.e. } x \in \Omega. \end{cases} \quad (1.8)$$
We tackle the existence of solutions for (1.8) via the study of existence of solutions for two BVPs: (i) the elliptic equation with Dirichlet boundary conditions
$$\begin{cases} -\operatorname{div}\bigl[(C(x) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u\bigr] + \varepsilon|u|^{q-2}u = f(x), & \text{a.e. in } \Omega, \\ \gamma u = w, & \text{a.e. on } \Gamma \end{cases} \quad (1.9)$$
and (ii) the elliptic equation with Neumann boundary conditions
$$\begin{cases} -\operatorname{div}\bigl[(C(x) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u\bigr] + \varepsilon|u|^{q-2}u = f(x), & \text{a.e. in } \Omega, \\ -\langle \vartheta, (C(x) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u \rangle \in \beta(u) - h(x), & \text{a.e. on } \Gamma. \end{cases} \quad (1.10)$$
By setting up the relations between the auxiliary equations (1.9) and (1.10) and by employing some results on ranges for maximal monotone operators, we showed that (1.8) has a unique solution in L p ( 0 , T ; W 1 , p ( Ω ) ) , where 2 p < + , 1 q < + if p N , and 1 q 2 N p N p if p < N .
In this paper, we shall employ the technique used in (1.8), viz. using the results on ranges for nonlinear operators, to study the existence and uniqueness of the solution to a nonlinear integro-differential equation with the generalized p-Laplacian operator. We note that most of the existing methods in the literature used to investigate such problems are based on the finite element method, hence our technique is new in tackling integro-differential equations. We shall consider the following nonlinear integro-differential problem with mixed boundary conditions:
$$\begin{cases} \frac{\partial u}{\partial t} - \operatorname{div}\bigl[(C(x,t) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u\bigr] + \varepsilon|u|^{q-2}u + a\,\frac{\partial}{\partial t}\int_\Omega u\,dx = f(x,t), & (x,t) \in \Omega \times (0,T), \\ -\langle \vartheta, (C(x,t) + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u \rangle \in \beta_x(u), & (x,t) \in \Gamma \times (0,T), \\ u(x,0) = u(x,T), & x \in \Omega. \end{cases} \quad (1.11)$$
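For intuition about the divergence term in (1.11), here is a rough one-dimensional finite-difference sketch of $$u \mapsto \operatorname{div}[(C + |\nabla u|^2)^{\frac{p-2}{2}}\nabla u]$$. This is purely illustrative; the discretization and the function name are ours, not part of the paper.

```python
def generalized_p_laplacian_1d(u, h, p, C=0.0):
    """Approximate d/dx[(C + (u')^2)^((p-2)/2) * u'] at interior grid points."""
    n = len(u)
    # nonlinear flux at the cell interfaces i + 1/2
    du = [(u[i + 1] - u[i]) / h for i in range(n - 1)]
    flux = [(C + d * d) ** ((p - 2) / 2) * d for d in du]
    # divergence of the flux by a difference of interface fluxes
    return [(flux[i] - flux[i - 1]) / h for i in range(1, n - 1)]

# For p = 2 the weight (C + (u')^2)^0 is 1, so this reduces to the ordinary
# Laplacian; applied to u(x) = x^2 it returns values close to u'' = 2.
```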
Our discussion is based on some results on the ranges for maximal monotone operators and pseudo-monotone operators in [16-18]. Some new methods of constructing appropriate mappings to achieve our goal are employed. Moreover, we weaken the restrictions on p and q. The paper is outlined as follows. In Section 2 we shall state the definitions and results needed, and in Section 3 we shall establish the existence and uniqueness of the solution to (1.11).
### 2 Preliminaries
Let X be a real Banach space with a strictly convex dual space X . We use ( , ) to denote the generalized duality pairing between X and X . For a subset C of X, we use IntC to denote the interior of C. We also use ‘→’ and ‘w-lim’ to denote strong and weak convergences, respectively.
Let X and Y be Banach spaces. We use X Y to denote that X is embedded continuously in Y.
The function Φ is called a proper convex function on X[17] if Φ is defined from X to ( , + ] , Φ is not identically +∞ such that Φ ( ( 1 λ ) x + λ y ) ( 1 λ ) Φ ( x ) + λ Φ ( y ) , whenever x , y X and 0 λ 1 .
The function Φ : X ( , + ] is said to be lower-semicontinuous on X[17] if lim inf y x Φ ( y ) Φ ( x ) for any x X .
Given a proper convex function Φ on X and a point x X , we denote by Φ ( x ) the set of all x X such that Φ ( x ) Φ ( y ) + ( x y , x ) for every y X . Such elements x are called subgradients of Φ at x, and Φ ( x ) is called the subdifferential of Φ at x[17].
A mapping T : D ( T ) = X X is said to be demi-continuous on X if w - lim n T x n = T x for any sequence { x n } strongly convergent to x in X. A mapping T : D ( T ) = X X is said to be hemi-continuous on X if w - lim t 0 T ( x + t y ) = T x for any x , y X [17].
With each multi-valued mapping A : X 2 X , we associate the subset A 0 as follows [17]:
A 0 x = { y A x : y = | A x | } ,
where | A x | : = inf { z : z A x } . If X is strictly convex, then D ( A ) = D ( A 0 ) and A 0 is single-valued, which in this case is called the minimal section of A.
A multi-valued mapping B : X 2 X is said to be monotone[18] if its graph G ( B ) is a monotone subset of X × X in the sense that ( u 1 u 2 , w 1 w 2 ) 0 for any [ u i , w i ] G ( B ) , i = 1 , 2 . The monotone operator B is said to be maximal monotone if G ( B ) is not properly contained in any other monotone subsets of X × X .
Definition 2.1[18]
Let C be a closed convex subset of X, and let A : C 2 X be a multi-valued mapping. Then A is said to be a pseudo-monotone operator provided that
(i) for each x C , the image Ax is a nonempty closed and convex subset of X ;
(ii) if { x n } is a sequence in C converging weakly to x C and if f n A x n is such that lim sup n ( x n x , f n ) 0 , then to each element y C , there corresponds an f ( y ) A x with the property that
( x y , f ( y ) ) lim inf n ( x n x , f n ) ;
(iii) for each finite-dimensional subspace F of X, the operator A is continuous from C F to X in the weak topology.
Lemma 2.1[19]
Let Ω be a bounded conical domain in R N . If m p > N , then W m , p ( Ω ) C B ( Ω ) ; if 0 < m p N and q = N p N m p , then W m , p ( Ω ) L q ( Ω ) ; if m p = N and p > 1 , then for 1 q < + , W m , p ( Ω ) L q ( Ω ) .
Lemma 2.2[18]
If B : X 2 X is an everywhere defined, monotone, and hemi-continuous operator, thenBis maximal monotone. If B : X 2 X is a maximal monotone operator such that D ( B ) = X , thenBis pseudo-monotone.
Lemma 2.3[18]
IfXis a Banach space and Φ : X ( , + ] is a proper convex and lower-semicontinuous function, thenΦ is maximal monotone fromXto X .
Lemma 2.4[18]
If B 1 and B 2 are two maximal monotone operators inXsuch that int D ( B 1 ) D ( B 2 ) , then B 1 + B 2 is maximal monotone.
Lemma 2.5[20]
LetXand its dual X be strictly convex Banach spaces. Suppose S : D ( S ) X X is a closed linear operator and S is the conjugate operator ofS. If ( u , S u ) 0 u D ( S ) and ( v , S v ) 0 v D ( S ) , thenSis a maximal monotone operator possessing a dense domain.
Lemma 2.6[18]
Any hemi-continuous mapping T : X X is demi-continuous on Int D ( T ) .
Theorem 2.1[16]
LetXbe a real reflexive Banach space with X being its dual space. LetCbe a nonempty closed convex subset ofX. Assume that
(i) the mapping A : C 2 X is a maximal monotone operator;
(ii) the mapping B : C X is pseudo-monotone, bounded, and demi-continuous;
(iii) if the subsetCis unbounded, then the operatorBisA-coercive with respect to the fixed element b X , i.e., there exists an element u 0 C D ( A ) and a number r > 0 such that ( u u 0 , B u ) > ( u u 0 , b ) for all u C with u > r .
Then the equation b A u + B u has a solution.
### 3 Existence and uniqueness of the solution to (1.11)
We begin by stating some notations and assumptions used in this paper. Throughout, we shall assume that
$$1 < q \le p < +\infty, \qquad \frac{1}{p} + \frac{1}{p'} = 1 \quad\text{and}\quad \frac{1}{q} + \frac{1}{q'} = 1.$$
Let $$V = L^p(0,T; W^{1,p}(\Omega))$$ and $$V^*$$ be the dual space of $$V$$. The duality pairing between $$V$$ and $$V^*$$ will be denoted by $$\langle\cdot,\cdot\rangle_V$$. The norm in $$V$$ will be denoted by $$\|\cdot\|_V$$, which is defined by
$$\|u\|_V = \left(\int_0^T \|u(t)\|_{W^{1,p}(\Omega)}^p\,dt\right)^{\frac{1}{p}}, \quad u(x,t) \in V.$$
Let $$W = L^q(0,T; W^{1,p}(\Omega))$$ and $$W^*$$ be the dual space of $$W$$. The norm in $$W$$ will be denoted by $$\|\cdot\|_W$$, which is defined by
$$\|v\|_W = \left(\int_0^T \|v(t)\|_{W^{1,p}(\Omega)}^q\,dt\right)^{\frac{1}{q}}, \quad v(x,t) \in W.$$
In the integro-differential equation (1.11), Ω is a bounded conical domain of a Euclidean space R N where N 1 , Γ is the boundary of Ω with Γ C 1 [5], ϑ denotes the exterior normal derivative to Γ. Here, | | and , denote the Euclidean norm and the inner-product in R N , respectively. Also, 0 C ( x , t ) L p ( 0 , T ; W 1 , p ( Ω ) ) , f ( x , t ) V is a given function, T and a are positive constants, and ε is a nonnegative constant. Moreover, β x is the subdifferential of φ x , where φ x = φ ( x , ) : R R for x Γ , and φ : Γ × R R is a given function.
To tackle (1.11), we need the following assumptions which can be found in [5,14].
Assumption 1Green’s formula is available.
Assumption 2For each x Γ , φ x = φ ( x , ) : R R is a proper, convex, and lower-semicontinuous function and φ x ( 0 ) = 0 .
Assumption 3 0 β x ( 0 ) and for each t R , the function x Γ ( I + λ β x ) 1 ( t ) R is measurable for λ > 0 .
We shall present a series of lemmas before we prove the main result.
Lemma 3.1Define the function Φ : V R by
$$\Phi(u) = \int_0^T\!\!\int_\Gamma \varphi_x\bigl(u|_\Gamma(x,t)\bigr)\,d\Gamma(x)\,dt, \quad u \in V.$$
Then Φ is a proper, convex, and lower-semicontinuous mapping onV. Therefore, Φ : V V , the subdifferential of Φ, is maximal monotone.
Proof The proof of this lemma is analogous to that of Lemma 3.1 in [1]. We give the outline of the proof as follows.
Note that for each s R , the function x Γ β x 0 ( s ) R is measurable, where β x 0 ( s ) denotes the minimal section of β x . Since for all s 1 , s 2 R we have
{ x Γ : φ x ( s 1 ) > s 2 } = n { x Γ : i = 1 n s 1 n β x 0 ( i s 1 n ) > s 2 } ,
it implies that for u V , the function φ x ( u | Γ ( x , t ) ) is measurable on Γ. Then from the property of φ x , we know that Φ is proper and convex on V.
To see that Φ is lower-semicontinuous on V, let u n u in V. We may assume that there exists a subsequence of u n , for simplicity, we still denote it by u n , such that u n | Γ ( x , t ) u | Γ ( x , t ) for x Γ and t ( 0 , T ) a.e. This yields
φ x ( u | Γ ( x , t ) ) lim inf n φ x ( u n | Γ ( x , t ) )
for all x Γ and each t ( 0 , T ) a.e. since φ x is lower-semicontinuous for each x Γ . It then follows from Fatou’s lemma that for each t ( 0 , T ) ,
Γ φ x ( u | Γ ( x , t ) ) d Γ ( x ) Γ lim inf n φ x ( u n | Γ ( x , t ) ) d Γ ( x ) lim inf n Γ φ x ( u n | Γ ( x , t ) ) d Γ ( x ) .
So, Φ ( u ) lim inf n Φ ( u n ) whenever u n u in V. This completes the proof. □
Lemma 3.2 Define $$S : D(S) = \{u \in V : \frac{\partial u}{\partial t} \in V^*,\ u(x,0) = u(x,T)\} \to V^*$$ by
$$Su = \frac{\partial u}{\partial t} + a\,\frac{\partial}{\partial t}\int_\Omega u\,dx.$$
ThenSis a linear maximal monotone operator possessing a dense domain inV.
Proof It is obvious that S is closed and linear.
For u ( x , t ) , w ( x , t ) D ( S ) , integrating by parts gives
Then $$S^*w = -\frac{\partial w}{\partial t} - a\,\frac{\partial}{\partial t}\int_\Omega w\,dx$$, where $$D(S^*) = \{w \in V : \frac{\partial w}{\partial t} \in V^*,\ w(x,0) = w(x,T)\}$$.
For u ( x , t ) D ( S ) , we find
0 T Ω u t u ( x , t ) d x d t = Ω | u ( x , T ) | 2 d x Ω | u ( x , 0 ) | 2 d x 0 T Ω u t u ( x , t ) d x d t = 0 T Ω u t u ( x , t ) d x d t ,
which implies that
0 T Ω u t u ( x , t ) d x d t = 0 .
Similarly, for u ( x , t ) D ( S ) ,
which implies that
a 0 T Ω u ( x , t ) ( t Ω u d x ) d x d t = 0 .
Thus,
u , S u V = 0 T Ω u t u ( x , t ) d x d t + a 0 T Ω u ( x , t ) ( t Ω u d x ) d x d t = 0 .
In the same manner, we have w , S w V = 0 for w D ( S ) . Therefore, noting Lemma 2.5 the result follows. □
In view of Lemmas 2.3 and 2.4, we have the following result.
Lemma 3.3 S + Φ : V V is maximal monotone.
Lemma 3.4[14]
Define the mapping $$B_{p,q}: W^{1,p}(\Omega) \to (W^{1,p}(\Omega))^*$$ as follows:
$$(\bar v, B_{p,q}\bar u) = \int_\Omega \bigl\langle (C(x,t) + |\nabla\bar u|^2)^{\frac{p-2}{2}}\nabla\bar u, \nabla\bar v \bigr\rangle\,dx + \varepsilon\int_\Omega |\bar u|^{q-2}\bar u\,\bar v\,dx, \quad \bar u, \bar v \in W^{1,p}(\Omega).$$
Then B p , q is maximal monotone.
Lemma 3.5[14]
Let $$X_0$$ denote the closed subspace of all constant functions in $$W^{1,p}(\Omega)$$. Let $$X$$ be the quotient space $$W^{1,p}(\Omega)/X_0$$. For $$\bar u \in W^{1,p}(\Omega)$$, define the mapping $$P: W^{1,p}(\Omega) \to X_0$$ by
$$P\bar u = \frac{1}{\operatorname{meas}(\Omega)}\int_\Omega \bar u\,dx.$$
Then, there is a constant $$C > 0$$ such that for every $$\bar u \in W^{1,p}(\Omega)$$,
$$\|\bar u - P\bar u\|_{L^p(\Omega)} \le C\,\|\nabla\bar u\|_{(L^p(\Omega))^N}.$$
Here meas ( Ω ) denotes the measure of Ω.
Definition 3.1 Define $$A: V \to V^*$$ as follows:
$$\langle v, Au \rangle_V = \int_0^T (v, B_{p,q}u)\,dt - \int_0^T\!\!\int_\Omega f(x,t)\,v(x,t)\,dx\,dt, \quad u, v \in V.$$
Lemma 3.6The mapping A : V V is everywhere defined, bounded, monotone, and hemi-continuous. Therefore, Lemma 2.2 implies that it is also pseudo-monotone.
Proof From Lemma 2.1, we know that W 1 , p ( Ω ) C B ( Ω ) when p > N , and W 1 , p ( Ω ) L q ( Ω ) when p = N . If p < N , then W 1 , p ( Ω ) L N p N p ( Ω ) L p ( Ω ) L q ( Ω ) since 1 < q p < + . Thus, for all w ¯ W 1 , p ( Ω ) , w ¯ L q ( Ω ) k w ¯ W 1 , p ( Ω ) , where k > 0 is a constant. Therefore, for u , v V , we have
0 T u L q ( Ω ) q d t const 0 T u W 1 , p ( Ω ) q d t = const u W q
and
0 T v L q ( Ω ) q d t const 0 T v W 1 , p ( Ω ) q d t = const v W q .
Moreover, since 1 < q p < + , then L p ( 0 , T ; W 1 , p ( Ω ) ) L q ( 0 , T ; W 1 , p ( Ω ) ) , which implies that u W u V and v W v V for u , v V .
If p 2 , then for u , v V , we have
which implies that A is everywhere defined and bounded.
If 1 < p < 2 , then for u , v V , we have
which also implies that A is everywhere defined and bounded.
Since B p , q is monotone, we can easily see that for u , v V ,
u v , A u A v V = 0 T ( u v , B p , q u B p , q v ) d t 0 ,
which implies that A is monotone.
To show that A is hemi-continuous, it suffices to show that for any u , v , w V and k [ 0 , 1 ] , w , A ( u + k v ) A u V 0 , as k 0 . Noting the fact that B p , q is hemi-continuous and using the Lebesgue’s dominated convergence theorem, we have
0 lim k 0 | w , A ( u + k v ) A u V | 0 T lim k 0 | ( w , B p , q ( u + k v ) B p , q u ) | d t = 0 .
Hence, A is hemi-continuous.
This completes the proof. □
Lemma 3.7The mapping A : V V satisfies that for u D ( S ) ,
$$\frac{\langle u - u_0, Au \rangle_V}{\|u\|_V} \to +\infty, \quad (3.1)$$
as $$\|u\|_V \to +\infty$$ in V.
Proof First, we shall show that for u V ,
u V +
is equivalent to
u 1 meas ( Ω ) Ω u d x V + .
In fact, from Lemma 3.5, we know that for u V ,
u 1 meas ( Ω ) Ω u d x L p ( Ω ) C u ( L p ( Ω ) ) N ,
where C is a positive constant. Thus,
which implies that
u 1 meas ( Ω ) Ω u d x V [ ( C p + 1 ) 0 T u ( L p ( Ω ) ) N p d t ] 1 p ( C p + 1 ) 1 p u V . (3.2)
On the other hand, we have
u 1 meas ( Ω ) Ω u d x W 1 , p ( Ω ) u W 1 , p ( Ω ) 1 meas ( Ω ) Ω u d x W 1 , p ( Ω ) ,
which implies that
u W 1 , p ( Ω ) u 1 meas ( Ω ) Ω u d x W 1 , p ( Ω ) + const .
Hence,
u V u 1 meas ( Ω ) Ω u d x V + const . (3.3)
In view of (3.2) and (3.3), we have shown that for u V , u V + is equivalent to u 1 meas ( Ω ) Ω u d x V + .
Next, we shall show that A satisfies (3.1). In fact, we have
(3.4)
If 1 < p < 2 , then
(3.5)
From (3.2) and (3.3), we know that
0 T Ω | u | p d x d t 1 C p + 1 u 1 meas ( Ω ) Ω u d x V p 1 C p + 1 u V p + const .
Also,
0 T Ω C ( x , t ) p 2 d x d t C ( x , t ) V p < + .
It follows from (3.5) that
0 T Ω ( C ( x , t ) + | u | 2 ) p 2 2 u , u d x d t u V + ε 0 T Ω | u | q d x d t u V + ,
as u V + .
Moreover, we have
(3.6)
Therefore, it follows from (3.4), (3.5), and (3.6) that A satisfies (3.1) when 1 < p < 2 .
If p 2 , then
(3.7)
where M is a positive constant. We can easily see that
u 1 | Ω | Ω u d x V p u 0 V u 1 | Ω | Ω u d x V p p u V + ,
as u V + . Moreover, if 0 T Ω | u | q d x d t < + , then
ε ( 0 T Ω | u | q d x d t ) 1 q [ ( 0 T Ω | u | q d x d t ) 1 1 q u 0 V ] u V 0 ,
as u V + ; while if 0 T Ω | u | q d x d t + ,
ε ( 0 T Ω | u | q d x d t ) 1 q [ ( 0 T Ω | u | q d x d t ) 1 1 q u 0 V ] u V > 0 .
Hence, the right side of (3.7) tends to +∞ as u V + , which implies that A satisfies (3.1).
This completes the proof. □
Lemma 3.8If w ( x , t ) Φ ( u ) , then w ( x , t ) = w ˜ ( x , t ) β x ( u ) a.e. on Γ × ( 0 , T ) .
Proof If w ( x , t ) Φ ( u ) , then from the definition of subdifferential, we have
0 T Γ φ x ( u | Γ ( x , t ) ) d Γ ( x ) d t 0 T Γ φ x ( w | Γ ( x , t ) ) d Γ ( x ) d t + 0 T Γ w ( x , t ) ( u w ) d Γ ( x ) d t ,
which implies that the result is true. □
We are now ready to prove the main result.
Theorem 3.1The integro-differential equation (1.11) has a unique solution inVfor f ( x , t ) V .
Proof First, we shall show the existence of a solution. Noting Lemmas 2.6, 3.6, 3.7 and 3.3, and by using Theorem 2.1, we know that there exists u ( x , t ) D ( S ) V such that
0 = S u + A u + Φ ( u ) . (3.8)
Then we have for all w V ,
u w , S u V + u w , A u V + u w , Φ ( u ) V = 0 .
The definition of subdifferential implies that
u w , u t V + u w , a t Ω u d x V + u w , A u V + Φ ( u ) Φ ( w ) 0 .
From the definition of S, we have
u ( x , 0 ) = u ( x , T ) . (3.9)
Moreover,
(3.10)
Let w = u ± ψ , where ψ C 0 ( Ω × ( 0 , T ) ) . Then we have
From the properties of a generalized function, we get
(3.11)
Noting (3.10) again, by using Green’s formula, we have
Then using (3.10), we obtain
Φ ( w ) Φ ( u ) 0 T Γ ϑ , ( C ( x , t ) + | u | 2 ) p 2 2 u ( w u ) | Γ d Γ ( x ) d t .
Thus, ϑ , ( C ( x , t ) + | u | 2 ) p 2 2 u Φ ( u ) .
In view of Lemma 3.8, we have ϑ , ( C ( x , t ) + | u | 2 ) p 2 2 u β x ( u ) a.e. on Γ × ( 0 , T ) . Combining it with (3.8) and (3.11), we know that (1.11) has a solution in V.
Next, we shall prove the uniqueness of the solution. Let u ( x , t ) and v ( x , t ) be two solutions of (1.11). By (3.8), we have
u v , ( A + Φ ) u ( A + Φ ) v V = u v , S u S v V 0
since S is monotone. But A + Φ is monotone too, so u v , S u S v V = 0 , which implies that u ( x , t ) = v ( x , t ) .
The proof is complete. □
### Competing interests
The authors declare that they have no competing interests.
### Authors’ contributions
All authors approve the final manuscript.
### Acknowledgements
Li Wei is supported by the National Natural Science Foundation of China (No. 11071053), the Natural Science Foundation of Hebei Province (No. A2010001482) and the Project of Science and Research of Hebei Education Department (the second round in 2010).
### References
1. Calvert, BD, Gupta, CP: Nonlinear elliptic boundary value problems in L p -spaces and sums of ranges of accretive operators. Nonlinear Anal.. 2, 1–26 (1978)
2. Gupta, CP, Hess, P: Existence theorems for nonlinear noncoercive operator equations and nonlinear elliptic boundary value problems. J. Differ. Equ.. 22, 305–313 (1976). Publisher Full Text
3. Wei, L, He, Z: The applications of sums of ranges of accretive operators to nonlinear equations involving the p-Laplacian operator. Nonlinear Anal.. 24, 185–193 (1995). Publisher Full Text
4. Wei, L: The existence of solution of nonlinear elliptic boundary value problem. Math. Pract. Theory. 31, 360–364 (2001) in Chinese
Tag Info
41
When light is propagating in glass or other medium, it isn't really true, pure light. It is what (you'll learn about this later) we call a quantum superposition of excited matter states and pure photons, and the latter always move at the speed of light $c$. You can think, for a rough mind picture, of light propagating through a medium as somewhat like a ...
40
In a Newtonian/Galilean world, where $c$ is infinite, you could not escape Olbers' paradox with an infinite universe. Any line of sight would eventually intersect the surface of a star, and so the whole sky would be as bright as the Sun. This is true whenever two hypotheses are satisfied: The universe is spatially infinite (or rather, the distribution of ...
20
A photon will travel "at the speed of light" until obstructed. From the speed, and elapsed time, you can calculate how far the light will travel. Laser light consists of more than one photon "in phase", which has exactly the same property in this respect, as a solitary photon.
15
In vacuum $$\nabla \times \vec{B} = \frac{1}{c^2} \frac{\partial \vec{E}}{\partial t} = 0$$ so a changing E-field does not beget a changing B-field. Larmor's formula for radiation from accelerating charges also has $c$ in the denominator. Therefore no (star)light at all? [Or at least no electromagnetic waves].
12
Theoretically, the photon (or the beam of photons, there really isn't a difference) can go an infinite distance, traveling all the while at a speed $c$. Since photons contain energy, $E=h\nu$, then energy conservation requires the photon to only be destroyed via interaction (e.g., absorption in an atom). There is nothing that could make the photon simply ...
10
A classical explanation to supplement Rod's excellent quantum mechanical one: If you make a Huygens construction of wave propagation (I assume you know how to do that) then every point on the wave front is treated as the source of a new wave of the same frequency and phase. How that wave propagates depends on the medium it encounters. So the Huygens ...
8
Note that it is correct that a photon can travel an infinite distance in an infinite time, but it can not reach any desired point in the universe. This is caused by the expansion of the universe, which also leads to the fact that we can not receive information outside of the observable universe.
5
Changing c to infinite changes some important things. The actual effect depends on how you want to propose magnetic forces work (they're normally fictitious forces induced by relativity). If we assume the coupling constant (this constant doesn't appear in the equation as its value is normally 1) goes to infinity as c goes to infinity so that magnetostatics ...
5
I reject the notion that $\mu_0=4\pi\times10^{-7}\,{\rm H/m}$ and $\epsilon_0\sim9\times10^{-12}\,{\rm F/m}$; they are precisely $\epsilon_0=1$ and $\mu_0=c^{-2}$. This obviously takes care of any issue with defining any unit with $\pi$ in it because, $$\sqrt{\frac{1}{\epsilon_0\mu_0}}=\sqrt{\frac{1}{1\cdot c^{-2}}}=\sqrt{c^2}=c$$ We use the MKSA system ...
4
One small addition to the other answers: While it is indeed true that the light will never stop if it doesn't hit anything, it will however get red shifted, and thus become less energetic, due to the expansion of the universe. For example, the cosmic microwave background consists of photons which were emitted back when the atoms formed. However, back then ...
3
The hand-wavy way to do it is to consider a wave solution like the one below, and apply Faraday's law to loop 1, and Ampere's law to loop 2. If you make the loops narrow enough, i.e., their widths are $dx$, then $$\oint_1\!\vec{E}\cdot \vec{ds} = -\frac{d\Phi_B}{dt} \to \frac{\partial E_y}{\partial x} = -\frac{\partial B_z}{\partial t}$$ $$\oint_2\!\vec{B}\cdot \ldots$$
3
Technically, $\omega^2/1^2-k^2/c^2=0$ is a degenerate hyperbola if that counts. But I don't think you can derive an equation of the form $\omega^2/a^2 - k^2/b^2 = 1$ for waves propagating in free space. You may however find something of the kind if you consider materials with fancier dispersion relations than $\omega = ck$, like e.g. plasmas.
3
For an object close to you, the speed of light is effectively infinite - i.e. the time taken for light from the bulb 10 m away to get to you is so close to zero that it can be considered immediate, and thus the speed of light is assumed to be infinite. With this in mind, this would mean that the sky would be brighter. In reality, the speed of light is a ...
2
A photon cannot be said to have its own inertial reference frame, because inertial reference frames are defined to be a family of coordinate systems that satisfy the two fundamental postulates of SR, one of which is that light moves at c in all frames. You could construct a coordinate system where the photon was at rest, but since this coordinate system wouldn't be ...
2
Of course you know from special relativity that no information can propagate faster than c, which includes your directing the Mars end of the stick to move through your Earth end push. But you don't need to think about SR at all. Simply think carefully about what would happen if you did give the end of this rod a sudden shove. It has a great deal of mass ...
2 The freespace dispersion equation is $\omega^2 = k^2\,c^2$ and this cannot change: this simply follows from considering plane wave components of propagating fields, which all fulfil the Helmholtz equation $$\nabla^2 A_j + \frac{\omega^2}{c^2} A_j = 0\tag{1}$$ which is fulfilled by all Cartesian components of the monochromatic EM field vectors and, for a ...
2
The problem with FTL and causality has to do with two issues: 1) the relativity of simultaneity between inertial frames (not an issue in classical physics with sound waves, since in classical physics all inertial frames agree about simultaneity), which implies that a signal moving FTL but forward in time in one frame is moving backwards in time in other ...
2
Andrew the problem is that the speed of light is not an exact number. You give it as $2.99792458\times 10^{8}$ m/s; but suppose I instead tell you that it is really $1.8026175\times 10^{12}$ furlongs per fortnight. Neither of these numbers is any more correct than the other, but we have an accepted and defined system of units that we work in. Maxwell's ...
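As a numerical aside (my own check, not part of the answer), the furlongs-per-fortnight figure can be reproduced from the standard definitions of those units:

```python
FURLONG_M = 201.168            # metres per furlong (standard definition)
FORTNIGHT_S = 14 * 24 * 3600   # seconds per fortnight (1,209,600 s)
C_SI = 2.99792458e8            # speed of light, m/s

c_ff = C_SI * FORTNIGHT_S / FURLONG_M
print(f"c = {c_ff:.7e} furlongs per fortnight")  # ≈ 1.8026175e+12
```

The result matches the value quoted in the answer, illustrating the point that neither number is more "correct" than the other.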
2
It sounds like that site is discussing the Standard Model neutrino. Neutrinos were presumed to be massless for a long time and the SM still models them as massless. We now know that this is not true, that (at least two of the three flavors of) neutrinos do have a small mass, although the SM is still a good approximation. So, like other massive particles ...
2
A ray of light or a laser beam will not stop until it reaches an obstruction. If there is no obstruction, light will NEVER stop. It has no end.
2
A "ray of light" must be respelled as "photon" because here we are talking physics. Between a single photon and a laser beam, in this case, there is no difference. Every photon will continue its travel until stopped; every single photon is "indistinguishable" from the others (in the sense that they are not different intrinsically). The photons of a laser beam are ...
2
The differential and integral forms of Maxwell's equations are truly equivalent; they are essentially the same set of equations. One can convert between the two using two mathematical theorems: Divergence Theorem (Wikipeda - Divergence Theorem) Stokes' Theorem (Wikipedia - Stokes Theorem) The divergence theorem states that the flux over a closed surface ...
1
OK... I can't give a definitive answer to the problem. My intuition tells me that any massive particle or macroscopic mass, boosted high enough, has to look like a black hole. Why? Because it is very hard to see why/how gravity, if we believe in the equivalence principle, should be able to distinguish between kinetic energy and other forms of internal energy ...
1
Provided that there is nothing for the photon to interact with (i.e. we look at it in vacuum), the mean free path will be infinite; that is, it will continue travelling forever in a given direction. There's nothing which will stop the photon's path. Hence, it will go arbitrarily far. Whether you have a single photon or a laser, the answer won't change. The ...
1
As soon as the universe came out of its dark age if light speed was infinite then it would be able to keep up with the expansion of the universe. It would be very much brighter all around, perhaps intolerably to us. The universe would appear very active since event far far away would appear to us instantly. We might be blind as our light sensory organs might ...
1
At the Event Horizon of Black Holes, the acceleration an object experiences is infinite (Here, I am NOT talking about freely falling objects; Freely Falling objects are in inertial motion in General Relativity). As for you, you can't feel that on the Event Horizon of Black Hole (Stephen Hawking was wrong in A Brief History of Time) because you won't survive ...
1
As pointed out you can't travel at the speed of light but you can look at the limits we are tending towards as we approach it. So, if I were to travel in a spacecraft at the speed of light, would I freeze and stop moving? From the perspective of a stationary observer if your spacecraft was traveling at close to the speed of light, time on the ...
1
Stated simply, causality means that all causes should precede their effects, for all observers. The timings of the causes and effects aren't the times at which a human registers them, they are the times at which they occur in an observers reference frame - i.e. the time on the observer's watch at the moment they occur. If faster-than-light signals were ...
1
It is a matter of the standpoint of the observer. Because time comes to a standstill at the speed of light, to the photon no time passes, whatever the distance traveled, and its speed is therefore infinite.
1
The speed of the push would be roughly the speed of the sound in whatever medium the stick was made of. One thing is certain - it would not exceed the speed of light.
# What was the original volume of a gas that has changed from a pressure of 6.20 atm to 9.150 atm and a new volume of 322 mL?
Sep 6, 2017
The original volume was 475 mL.
#### Explanation:
Since only the pressure changes, we can use Boyle's Law to solve this problem.
Boyle's Law is
p_1V_1 = p_2V_2
We can rearrange Boyle's Law to get
V_1 = V_2 × p_2/p_1
In this problem,
p_1 = 6.20 atm; V_1 = ?
p_2 = 9.150 atm; V_2 = 322 mL
V_1 = 322 mL × (9.150 atm)/(6.20 atm) = 475 mL
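The rearranged Boyle's-law formula is easy to script; here is a minimal Python sketch (the function name is my own) reproducing the 475 mL result:

```python
def boyle_v1(p1, p2, v2):
    """Boyle's law at constant temperature: p1*V1 = p2*V2, so V1 = V2*p2/p1."""
    return v2 * p2 / p1

print(round(boyle_v1(6.20, 9.150, 322)))  # 475 (mL)
```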
Liam C
What is the adjacent side to an angle at 90 degrees? Aren't two sides adjacent to it?
Mentor
What is the adjacent side to an angle at 90 degrees? Aren't two sides adjacent to it?
Yes, but the usual context is that we're interested in one of the acute angles in a right triangle, not the right angle.
Liam C
Yes, but the usual context is that we're interested in one of the acute angles in a right triangle, not the right angle.
Thank you.
Staff Emeritus
Homework Helper
What is the adjacent side to an angle at 90 degrees? Aren't two sides adjacent to it?
There's two adjacent sides to every angle.
Liam C
There's two adjacent sides to every angle.
Lol, well yes, but what I meant to say was, and what I think was implied was, "Aren't there two adjacent sides, neither of which are the hypotenuse?"
Staff Emeritus
Homework Helper
Lol, well yes, but what I meant to say was, and what I think was implied was, "Aren't there two adjacent sides, neither of which are the hypotenuse?"
Yep.
Liam C
Yep. |
Gravitation
Video Transcript
Let's do part (a). The change in kinetic energy is ΔK = (1/√(1 − (v_f/c)²) − 1/√(1 − (v_i/c)²)) E_R, where E_R is the rest energy. Let's plug in the values: the speeds are 0.900c and 0.500c, so the factors of c² cancel, and ΔK = (1/√(1 − 0.900²) − 1/√(1 − 0.500²)) × 0.511 MeV. Solving this, the change in energy is ΔK = 0.582 MeV. Now let's do part (b), again using the same equation: ΔK = (1/√(1 − 0.990²) − 1/√(1 − 0.900²)) × 0.511 MeV, and therefore the change in kinetic energy is 2.45 MeV.
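The transcript's arithmetic can be checked with a short Python sketch (not part of the original video; `delta_k` is my own helper computing ΔK = (γ_f − γ_i)·E_rest):

```python
import math

def delta_k(beta_i, beta_f, rest_energy_mev=0.511):
    """Change in kinetic energy, (gamma_f - gamma_i) * E_rest, speeds as fractions of c."""
    gamma = lambda beta: 1.0 / math.sqrt(1.0 - beta ** 2)
    return (gamma(beta_f) - gamma(beta_i)) * rest_energy_mev

print(round(delta_k(0.500, 0.900), 3))  # part (a): 0.582 MeV
print(round(delta_k(0.900, 0.990), 2))  # part (b): 2.45 MeV
```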
bond angle of cyclohexane
It is used as a solvent in some brands of correction fluid. The following equations and formulas illustrate how the presence of two or more substituents on a cyclohexane ring perturbs the interconversion of the two chair conformers in ways that can be predicted. Due to lone pair (N) – bond pair (H) repulsion in ammonia, the bond angles decrease to 106.6°. A planar cyclohexane would look like a regular hexagon. There are two types of H-atoms in cyclohexane – axial (Ha) and equatorial (He). Consequently, there is some redistribution of s- and p-orbital character between the C–C and C–H bonds.
Because of the alternating nature of equatorial and axial bonds, the opposite relationship is true for 1,3-disubstitution (cis is all equatorial, trans is equatorial/axial).
Finally, 1,4-disubstitution reverts to the 1,2-pattern: The above analysis is not something that you should try to memorize: rather, become comfortable with drawing cyclohexane in the chair conformation, with bonds pointing in the correct directions for axial and equatorial substituents.
The presence of bulky atoms, lone pair repulsion, lone pair-bond pair repulsion, and bond pair repulsion can affect the geometry of a molecule.
This deviation in bond angle from the ideal bond angle of 109.5° would bring some kind of ring strain into the structure.
This conformation allows for the most stable structure of cyclohexane. The results showed that nearly equal energies were released during their combustion. One chair conformer ring flips to the other chair conformer, and in between, three other conformers are formed.
Why do cyclic compounds most commonly found in nature contain six-membered rings?
The cyclohexanone–cyclohexanol mixture, called "KA oil", is a raw material for adipic acid and caprolactam, precursors to nylon.
Sum of interior angles = (n − 2) × 180°, where n is the number of interior angles. This ensures the absence of any ring strain in the molecule. On careful examination of a chair conformation of cyclohexane, we find that the twelve hydrogens are not structurally equivalent. In the chair form of cyclohexane, the carbon atoms and the bonds around them are almost perfectly tetrahedral.
It is frequently used as a recrystallization solvent, as many organic compounds exhibit good solubility in hot cyclohexane and poor solubility at low temperatures.
It is always possible to have both groups equatorial, but whether this requires a cis-relationship or a trans-relationship depends on the relative location of the substituents.
During the chair flip, there are three other intermediate conformations that are encountered: the half-chair, which is the most unstable conformation, the more stable boat conformation, and the twist-boat, which is more stable than the boat but still much less stable than the chair.
Figure 4: Axial and equatorial bonds in cyclohexane. Also, the C-atom above the plane of the four C-atoms goes below the plane and vice-versa. The crystal structure shows a strangely distorted molecule with only $C_\mathrm{i}$ symmetry.
Notice that a 'ring flip' causes equatorial hydrogens to become axial, and vice-versa. (ii) Substituents on chair conformers prefer to occupy equatorial positions due to the increased steric hindrance of axial locations. In the ball and stick structure shown, the conformation on the left is the most stable.
Cyclohexane has two crystalline phases. As we count around the ring from carbon #1 to #6, the uppermost bond on each carbon changes its orientation from equatorial (or axial) to axial (or equatorial) and back. Cyclohexane has the lowest angle and torsional strain of all the cycloalkanes; as a result cyclohexane has been deemed a 0 in total ring strain. Cyclohexane is non-polar. ∴ Each interior angle = (n − 2)/n × 180° = (6 − 2)/6 × 180° = 4/6 × 180° = 120°. In the U.S., over 90% of cyclohexane is used in the synthesis of nylon. In the twist-boat conformation, due to the movement of C-3 and C-6, the eclipsing of the C–H bonds is reduced to some extent.
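The interior-angle formula quoted above is easy to check; a tiny Python helper (my own naming, not from the source) confirms that a flat hexagon would force 120° angles, well away from the tetrahedral 109.5°:

```python
def interior_angle(n):
    """Each interior angle of a regular n-gon, in degrees: (n - 2) * 180 / n."""
    return (n - 2) * 180 / n   # multiply first so the division stays exact here

print(interior_angle(6))  # 120.0 for a planar hexagon, vs. the tetrahedral 109.5°
```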
The "C-C-C" bond angles in a planar cyclohexane would be 120°. The complex TpBr3Cu(NCMe) catalyzes, at room temperature, the insertion of a nitrene group from PhINTs into the carbon−hydrogen bond of cyclohexane and benzene, as well as into the primary C−H bonds of the methyl groups of toluene and mesitylene, in moderate to high yield.
Another conformation of cyclohexane exists, known as the boat conformation, but it interconverts to the slightly more stable chair conformation. Cyclohexane vapour is used in vacuum carburizing furnaces, in heat treating equipment manufacture. Therefore, it should be clear that for cis-1,2-disubstitution, one of the substituents must be equatorial and the other axial; in the trans-isomer both may be equatorial.
Due to lone pair (O) – lone pair (O) repulsion, the bond angle in water further decreases to 104.5°.
Cyclohexane in the chair conformation. The two chair conformers are practically inseparable because they interconvert very rapidly at room temperature. Because the expected normal C-C-C bond angle should be near the tetrahedral value of 109.5°, the suggested planar configuration of cyclohexane would have angle strain at each of the carbons, and would correspond to less stable cyclohexane molecules than those with …
Cyclohexane is a colourless, flammable liquid with a distinctive detergent-like odor, reminiscent of cleaning products (in which it is sometimes used). Conformations of monosubstituted cyclohexanes, Conformations of Disubstituted Cyclohexanes, Axial and Equitorial positions in cyclohexanes.
Cyclohexane is sometimes used as a non-polar organic solvent, although n-hexane is more widely used for this purpose. A(11-12-18) &105.8 \\
C-4 is the foot and the four carbon atoms form the seat of the chair). Why does F replace the axial bond in PCl5? A(1-2-10) &111.3 \\
The conformation in which the methyl group is equatorial is more stable, and thus the equilibrium lies in this direction. R(1-4) &0.884 \\ 1 Structures Expand this section.
and the right are tipped up, while the other four carbons form In the chair form of cyclohexane, the carbon atoms and the bonds around them are almost perfectly tetrahedral. \begin{array}{lr} For this reason, early investigators synthesized their cyclohexane samples.[10].
Conformation of cyclohexane I: Chair and Boat, GOOD Extensive information about cyclohexane conformations, Drawing Chairs in 3D Axial and Equitorial positions in cyclohexanes, Conformations of Substituted Cyclohexanes, Conformation of Cyclohexane II: Monosubstituted, Cis- and trans-substituted of cyclohexane, Conformations of Cyclohexanes III: Disubstituted, Conformations of Cyclohexanes IV: Trisubstituted. Thanks for contributing an answer to Chemistry Stack Exchange!
A(3-11-16) &100.5 \\
The flat ring is based seen The molecular formula of cyclohexane is C6H12. The hybridization of all the C-atoms is sp3.
The chair and twist-boat are energy minima and are therefore conformers, while the half-chair and the boat are transition states and represent energy maxima.
The idea that the chair conformation is the most stable structure for cyclohexane was first proposed as early as 1890 by Hermann Sachse, but only gained widespread acceptance much later. Although the customary line drawings of simple cycloalkanes are geometrical polygons, the actual shape of these compounds in most cases is very different.
# Kx force, find x(t)
#### Ryan95
1. The problem statement, all variables and given/known data
A particle of mass m is subject to force F(x) = kx with k > 0. The initial starting position is x0 and the initial speed is zero. Find x(t).
2. Relevant equations
F(x)=kx
F=ma
3. The attempt at a solution
Honestly, I am totally lost on this. I've written acceleration as v(dv/dx) which gave me mv(dv/dx) = kx and then tried separating variables to integrate, but once I do that, I'm totally lost as I end up with m(v²/2) = k(x²/2).
Last edited:
#### drvrm
Honestly, I am totally lost on this. I've written acceleration as v(dv/dt) which gave me mv(dv/dx)=Kx and then tried separating variables to integrate, but once I do that, I'm totally lost as I end up with m(v2/2)=K(x2/2).
i wonder how one can write acceleration as v..dv/dt as we know it as rate of change of velocity with time.. may be a typo.
now dv/dt can be expressed as d/dxof v multiplied by dx/dt .
i think you should proceed with the analysis as per the rule of integration and have initial conditions at t=0 and try to find x as a function of t. as one normally does with constant forces.
#### Ryan95
i wonder how one can write acceleration as v..dv/dt as we know it as rate of change of velocity with time.. may be a typo.
now dv/dt can be expressed as d/dxof v multiplied by dx/dt .
i think you should proceed with the analysis as per the rule of integration and have initial conditions at t=0 and try to find x as a function of t. as one normally does with constant forces.
Oh, thank you, yes that was a typo. I've edited the post.
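The thread stops before the solution, but as a sketch of where the separation of variables leads (my own working, not from the thread): with v = 0 at x = x0, one gets v² = (k/m)(x² − x0²), and integrating once more gives x(t) = x0·cosh(√(k/m)·t). A quick RK4 numerical check agrees:

```python
import math

def solve_kx(m, k, x0, t_end, dt=1e-4):
    """RK4 integration of m*x'' = k*x (k > 0) with x(0) = x0, v(0) = 0."""
    w2 = k / m
    x, v = x0, 0.0
    for _ in range(round(t_end / dt)):
        k1x, k1v = v, w2 * x
        k2x, k2v = v + 0.5 * dt * k1v, w2 * (x + 0.5 * dt * k1x)
        k3x, k3v = v + 0.5 * dt * k2v, w2 * (x + 0.5 * dt * k2x)
        k4x, k4v = v + dt * k3v, w2 * (x + dt * k3x)
        x += dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6
        v += dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return x

m, k, x0 = 2.0, 8.0, 1.5               # omega = sqrt(k/m) = 2
numeric = solve_kx(m, k, x0, t_end=1.0)
analytic = x0 * math.cosh(2.0 * 1.0)   # x(t) = x0*cosh(omega*t)
print(numeric, analytic)               # the two agree to high precision
```

Note the growing cosh: with k > 0 the force pushes the particle away from the origin, unlike the oscillating spring case F = −kx.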
# Using Integration, Find the Area of Triangle Abc, Whose Vertices Are A(2, 5), B(4, 7) and C(6, 2). - Mathematics
Using integration, find the area of triangle ABC, whose vertices are A(2, 5), B(4, 7) and C(6, 2).
#### Solution
Vertices of the given triangle are A(2,5), B(4,7), and C(6,2).
Equation of AB
y - 5 = (7-5)/(4-2) (x -2)
⇒ y - 5 = x -2
⇒ y = x + 3
Let's say y1= x+3
Equation of BC:
y -7= (2-7)/(6-4)(x-4)
⇒ y = (-5)/(2) (x-4) +7=(-5)/(2) x+17
Let's say y_2 = -(5)/(2)x+17
Equation of AC:
y -5 = (2-5)/(6-2) (x-2)
⇒ y = (-3)/(4)(x-2)+5 = (-3)/(4)x+13/2
Let's say y_3 = (-3)/(4)x+13/2
"ar" (Δ"ABC") = int_2^4 y_1 dx + int_4^6 y_2 dx - int_2^6 y_3 dx
= int_2^4 (x+3) dx + int_4^6 ((-5)/2 x+17) dx -int_2^6 (-3)/4 x+13/2)dx
= [x^2/2 + 3x]_2^4 + [(-5x^2)/4 + 17x]_4^6 - [(-3x^2)/8 + (13x)/2]_2^6
= [16/2 + 12 - 4/2 -6]+[(-180)/4 + 102+80/4-68]-[(-108)/8+78/2+12/8-26/2]
= 12 + 9 - 14
= 7 sq units.
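Because each side of the triangle is a straight line, every integral above is just a trapezoid area, which makes the computation easy to verify in Python (a sketch, not part of the original solution):

```python
def trap(p, q):
    """Exact integral of the straight line through p and q, from p[0] to q[0]."""
    (x1, y1), (x2, y2) = p, q
    return (y1 + y2) / 2 * (x2 - x1)

A, B, C = (2, 5), (4, 7), (6, 2)
area = trap(A, B) + trap(B, C) - trap(A, C)   # int over AB + int over BC - int over AC
print(area)  # 7.0 sq units
```

The three trapezoids give 12, 9 and 14, matching the three integrals in the solution.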
Concept: Area of a Triangle
# Polymath 12: Rota’s Basis Conjecture
Polymath 12 has just recently launched, and the topic is Rota’s Basis Conjecture! See this guest post by Jan Draisma for information on Rota’s Basis Conjecture.
The polymath projects are a sort of mathematical experiment proposed by Tim Gowers to see if massively collaborative mathematics is possible. The proposed proof proceeds via a sequence of public comments on a blog. The general idea is to blurt out whatever idea comes into your head to allow for rapid public dissemination.
Polymath 12 is hosted by Timothy Chow and you can click here to follow the progress or to contribute.
# Building matroids from infinite graphs
Today we’ll be looking at infinite matroids again. We started this series by examining the question of how infinite matroids could be defined. With a suitable definition in hand, we took a quick tour through the zoo of known examples. Then we took a closer look at one very flexible way to build infinite matroids: by sticking together infinite trees of matroids.
In that construction we used 2-sums to stick the matroids together. Suppose that we have a finite collection of matroids arranged in a tree, so that their ground sets overlap (always in single edges) if and only if they are adjacent in the tree. Then because 2-sums are associative we have an unambiguously defined 2-sum of that collection. In the previous post in this series, we saw that this construction also works if we allow the tree to be infinite, but that we have to specify a little extra information: the set $\Psi$ of ends of the tree which the circuits are allowed to use.
The same trick can be used for other kinds of associative sum of finite matroids. In this post we’ll see how it works for sums of representable matroids, and why that is useful for understanding the topological spaces associated to infinite graphs.
To see how the addition of representable matroids works, we need to look at them from a slightly unusual angle. Let’s say that we have a matroid $M$ on a ground set $E$, and suppose that we have vectors $\{v_e | e \in E\}$ in some vector space over a field $k$, giving us a representation of $M$ over $k$. Then we can capture all the essential information about this representation by forgetting the details of the vector space and focusing just on the linear relationships amongst the vectors. More formally, we say that an element $\lambda$ of the space $k^E$ is a linear dependence of the $v_e$ if $\sum_{e \in E} \lambda(e)v_e = 0$. Then the linear dependences form a subspace $V$ of $k^E$, and this subspace is enough to recover the matroid $M$; the circuits of $M$ are precisely the minimal nonempty supports of vectors in $V$. For those who prefer to think of representations in terms of matrices rather than families of vectors, the subspace we’re working with is just the orthogonal complement of the row space of the matrix.
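As a toy illustration (my own sketch, not code from the post), working over GF(2) we can recover the circuits of a small represented matroid directly from this description: enumerate the space $V$ of linear dependences of the columns and keep the minimal nonempty supports.

```python
from itertools import product

def circuits_from_matrix(rows, n):
    """Circuits of the matroid represented over GF(2) by an n-column matrix.

    V = the null space of the matrix = the space of linear dependences;
    the circuits are the minimal nonempty supports of vectors in V."""
    supports = []
    for lam in product((0, 1), repeat=n):
        if any(lam) and all(
            sum(r[i] * lam[i] for i in range(n)) % 2 == 0 for r in rows
        ):
            supports.append(frozenset(i for i, x in enumerate(lam) if x))
    # a support is a circuit iff it properly contains no other support
    return {s for s in supports if not any(t < s for t in supports)}

# columns: e0 = (1,0), e1 = (0,1), e2 = (1,1), e3 = (1,0); e0 and e3 are parallel
A = [(1, 0, 1, 1), (0, 1, 1, 0)]
print(circuits_from_matrix(A, 4))
# circuits: the parallel pair {0,3}, plus {0,1,2} and {1,2,3}
```

Brute-force enumeration of $2^n$ vectors is of course only feasible for tiny ground sets, but it makes the definition concrete.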
So we can encode representations of matroids on $E$ over $k$ as subspaces of $k^E$. This way of seeing representations fits well with matroid duality, in that if $V \subseteq k^E$ represents a matroid $M$ on $E$ then the orthogonal complement $V^{\bot}$ represents the dual matroid $M^*$. If we define $M(V)$ to be the matroid whose circuits are the minimal nonempty supports of elements of $V$, then we can express this as $M(V^{\bot}) = (M(V))^*$.
The advantage of this perspective is that there is a natural way to glue together such subspaces, which we can use to build a self-dual gluing operation for represented matroids. Suppose that we have two sets $E_1$ and $E_2$ and subspaces $V_1$ and $V_2$ of $k^{E_1}$ and $k^{E_2}$, respectively. We now want to glue these together to give a subspace $V_1 \oplus V_2$ of $E_1 \triangle E_2$. As with the 2-sum, we throw away the gluing edges’ in the overlap $E_1 \cap E_2$. The idea is to take pairs of vectors which match on the gluing edges, patch them together and throw away the part supported on the gluing edges. More precisely, we set $V_1 \oplus V_2 := \{v \mathord{\upharpoonright}_{E_1 \triangle E_2} | v \in k^{E_1 \cup E_2}, v \mathord{\upharpoonright}_{E_i} \in V_i\}$.
Like the 2-sum, this definition is self-dual in the sense that $(M(V_1 \oplus V_2))^* = M(V_1^{\bot} \oplus V_2^{\bot})$. It is also associative, in that if $V_1$, $V_2$ and $V_3$ are subspaces of $k^{E_1}$, $k^{E_2}$ and $k^{E_3}$ respectively and the sets $E_1$ and $E_3$ are disjoint then $(V_1 \oplus V_2) \oplus V_3 = V_1 \oplus (V_2 \oplus V_3)$. So if we have a finite collection of such representations on ground sets $E_t$ arranged in a finite tree, such that the ground sets only overlap if they are adjacent in the tree, then we have an unambiguous sum of all these subspaces.
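Over GF(2) a vector is determined by its support, so the gluing operation $V_1 \oplus V_2$ can be sketched in a few lines (a toy illustration with my own naming, not code from the post). Gluing the cycle spaces of two triangles along a shared element reproduces the 2-sum: the cycle space of a 4-cycle.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def glue(V1, E1, V2, E2):
    """V1 (+) V2 over GF(2): a vector is identified with its support, so a
    subspace is a family of subsets closed under symmetric difference.
    Patch together vectors agreeing on E1 & E2, then drop the gluing edges."""
    out = E1 ^ E2   # the surviving ground set, E1 symmetric-difference E2
    return {v & out for v in powerset(E1 | E2)
            if (v & E1) in V1 and (v & E2) in V2}

# cycle spaces of two triangles, glued along the shared edge 'c'
E1, V1 = frozenset('abc'), {frozenset(), frozenset('abc')}
E2, V2 = frozenset('cde'), {frozenset(), frozenset('cde')}
print(glue(V1, E1, V2, E2))
# result: the empty vector and {a,b,d,e} -- the cycle space of a 4-cycle
```

The single circuit {a, b, d, e} is exactly the 2-sum of the two triangle circuits, as the associativity discussion above would predict.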
Just as for 2-sums, we can also glue together infinite trees of represented matroids in this way, as long as we are careful to specify which ends of the tree the circuits are allowed to use. Formally, we do this as follows. Suppose that we have a tree $T$, a family of sets $E_t$ indexed by the nodes of $T$, such that $E_s \cap E_t$ is only nonempty if $s = t$ or $s$ and $t$ are adjacent in $T$, a family of subspaces $V_t \subseteq k^{E_t}$ and a Borel subset $\Psi$ of the set $\Omega(T)$ of ends of $T$. Then we can build a subspace of $k^{\bigtriangleup_{t}E_t}$ by setting
$\bigoplus^{\Psi}_t V_t := \{v \mathord{\upharpoonright}_{\bigtriangleup_t E_t} | v \in k^{\bigcup_t E_t}, v \mathord{\upharpoonright}_{E_t} \in V_t \text{ and } \Omega(T) \cap \overline{\{t | v \mathord{\upharpoonright}_{E_t} \neq 0\}} \subseteq \Psi\}$
and $M(\bigoplus^{\Psi}_t V_t)$ will be an infinite matroid.
What are these infinite sums good for? Well, if we have a representable matroid and we have a $k$-separation of that matroid then we can split it up as a sum of two matroids in this way such that there are fewer than $k$ gluing edges. We can use this to break problems about bigger matroids down into problems about smaller matroids. Similarly, if we have a nested collection of finite separations in an infinite matroid, cutting the ground set up into a tree of finite parts, then we can cut the matroid up into a sum of finite matroids and analyse its properties in terms of their properties. This kind of chopping up and reconstruction can also be helpful to show that the infinite object is a matroid in the first place.
Let’s see how that might work for a more concrete problem. Suppose that we have a locally finite graph $G$. Then we can build a topological space $|G|$ from it by formally adding its ends as new points at infinity (see for example [D10]). These spaces and their subspaces are key elements of topological infinite graph theory, which was where this series of posts started.
At first, it was hoped that these subspaces would have the nice property that if they are connected then they are path-connected. But Agelos Georgakopoulos eventually found a counterexample to this claim [G07]. However, the set of ends used by the counterexample he constructed was topologically horrible, so we might still hope that if we have a connected subspace $X$ of $|G|$ such that the set $\Psi$ of ends contained in $X$ is topologically nice, then $X$ will be path-connected. Well, if we take ‘topologically nice’ to mean ‘Borel’, then the ideas above let us show that this is true.
We can do this by considering the matroid whose circuits are the edge sets of those topological circles in $|G|$ which only go through ends in $\Psi$. More precisely, we need to show that this really does give the circuit set of a matroid $M(G, \Psi)$. If we can do that, then we can argue as follows:
Let $P$ be the set of edges which, together with both endpoints, are completely included in $X$. Let $u$ and $v$ be vertices in $X$. Build a new graph $G + e$ with an extra edge $e$ joining $u$ to $v$. Then since $X$ is connected, there can be no cocircuit of $M(G + e, \Psi)$ which contains $e$ and is disjoint from $P$ (such a cocircuit would induce a topological separation of $X$ with $u$ and $v$ on opposite sides). So $e$ is not a coloop in the restriction of $M(G + e, \Psi)$ to $P \cup \{e\}$. Hence there is a circuit through $e$ in that matroid, and removing $e$ from the corresponding topological circle gives an arc from $u$ to $v$ through $X$ in $|G|$. So any two vertices are in the same path-component of $X$. Similar tricks show the same for ends and interior points of edges.
What this argument shows is that the connection between connectivity and path-connectivity is encoded in the statement that $M(G, \Psi)$ is a matroid. To prove that statement, we can build $M(G, \Psi)$ as the sum of an infinite tree of graphic matroids in the sense described above. First of all, since $G$ is locally finite, we can cut it up into an infinite tree of finite connected parts using disjoint finite separators. Then we define the torso corresponding to a part to consist of that part together with new edges forming complete graphs on each of the separators. This gives us an infinite tree of finite graphs, and the ends of the tree correspond precisely to the ends of $G$. Now we take the graphic matroids corresponding to those graphs, take the standard binary representations of those matroids, and glue them together along this tree, taking the ends in $\Psi$ to be allowed for circuits. And presto! We have built the matroid $M(G, \Psi)$.
The details of this argument are explained in [BC15].
I can’t resist mentioning that the matroids we’ve just built in a bottom-up way also have a top-down characterisation. Consider the class of matroids whose ground set is the set of edges of $G$, and in which every circuit is a topological circuit of $G$ and every cocircuit is a bond of $G$. Let’s call such matroids $G$-matroids.
For some graphs $G$, we can find $G$-matroids which are not of the form $M(G, \Psi)$. For example, in the following graph $Q$ we can define an equivalence relation on the (edge sets of) double rays by saying that two double rays are equivalent if they have finite symmetric difference. Then the set of finite circuits together with any collection of double rays closed under this equivalence relation gives the set of circuits of an infinite matroid.
The matroids I just described are a bit pathological, and they hover on the boundary between being binary and non-binary. None of them has a $U_{2,4}$-minor. They also still have the familiar property that any symmetric difference of two circuits is a disjoint union of circuits. But symmetric differences of three circuits might not be disjoint unions of circuits!
For example, there is such a matroid in which the first three sets depicted below are circuits, but the fourth, their symmetric difference, is not.
The problem is that these matroids are wild. This means that there are circuits and cocircuits which intersect in infinitely many elements. We only have a good theory of representability for tame matroids, those in which every intersection of a circuit with a cocircuit is finite. I hope to discuss this in more detail in a future post.
If we only consider tame $G$-matroids, then this proliferation of pathological matroids disappears. For the graph $Q$, for example, there are only 2 tame $Q$-matroids, namely the finite cycle matroid and the topological cycle matroid. Remarkably, for any locally finite graph $G$ it turns out that the tame $G$-matroids are precisely the matroids of the form $M(G, \Psi)$. So our top-down and bottom-up characterisations meet, and any matroid associated to a graph can be built up from finite parts in a way which mirrors the structure of that graph. The reasons for this correspondence go far beyond the scope of this post, but they can for example be found in [BC16].
Now that we’ve seen the standard ways to build infinite matroids and their relationship to infinite graphs, in the next post we’ll examine the most important open problem about them: the Infinite Matroid Intersection Conjecture.
[BC15] N. Bowler and J. Carmesin, Infinite Matroids and Determinacy of Games, preprint here.
[BC16] N. Bowler and J. Carmesin, The ubiquity of Psi-matroids, preprint here.
[D10] R. Diestel, Locally finite graphs with ends: a topological approach I–III, Discrete Math 311–312 (2010–11).
[G07] A. Georgakopoulos. Connected but not path-connected subspaces of infinite graphs, Combinatorica, 27(6) 683–698 (2007).
# The Lights Out Game
In this short post, I will discuss a cute graph theory problem called the Lights Out Game. The set-up is as follows. We are given a graph $G=(V,E)$, where we consider each vertex as a light. At the beginning of the game, some subset of the vertices are ON, and the rest of the vertices are OFF. We are allowed to press any vertex $v$, which has the effect of switching the state of $v$ and all of the neighbours of $v$. The goal is to determine if it is possible to press some sequence of vertices to turn off all the lights.
The version of this game where $G$ is the $5 \times 5$ grid was produced in the real world by Tiger Electronics, and has a bit of a cult following. For example, there is a dedicated wikipedia page and this site (from which I borrowed the image below) even has the original user manuals.
The goal of this post is to prove the following theorem.
Theorem. Let $G=(V,E)$ be a graph for which all the lights are initially ON. Then it is always possible to press a sequence of vertices to switch all the lights OFF.
(Note that if not all the lights are ON initially, it will not always be possible to switch all the lights OFF. For example, consider just a single edge where only one light is ON initially.)
Proof. Evidently, there is no point in pressing a vertex more than once, and the sequence in which we press vertices does not matter. Thus, we are searching for a subset $S$ of vertices such that $|S \cap N[v]|$ is odd for all $v \in V$, where $N[v]$ is the closed neighbourhood of $v$ (the neighbours of $v$ together with $v$ itself). We can encode the existence of such a set $S$ by doing linear algebra over the binary field $\mathbb{F}_2$.
Let $A$ be the $V \times V$ adjacency matrix of $G$, and let $B=A+I$. Note that the column corresponding to a vertex $v$ is the characteristic vector of the closed neighbourhood of $v$. Thus, the required set $S$ exists if and only if the all ones vector $\mathbb{1}$ is in the column space of $B$. Since $B$ is symmetric, this is true if and only if $\mathbb{1}$ is in the row space of $B$. Let $B^+$ be the matrix obtained from $B$ by adjoining $\mathbb{1}$ as an additional row. Thus, we can turn all the lights OFF if and only if $B$ and $B^+$ have the same rank.
Since this is a matroid blog, let $M(B)$ and $M(B^+)$ be the column matroids of $B$ and $B^+$ (over $\mathbb{F}_2$). We will prove the stronger result that $M(B)$ and $M(B^+)$ are actually the same matroid. Every circuit of $M(B^+)$ is clearly a dependent set in $M(B)$. On the other hand, let $C \subseteq V$ be a circuit of $M(B)$. Then $\sum_{v \in C} \chi(N[v])=\mathbb{0}$, where $\chi(N[v])$ is the characteristic vector of the closed neighbourhood of $v$. Therefore, in the subgraph of $G$ induced by the vertices in $C$, each vertex has odd degree. By the Handshaking Lemma, it follows that $|C|$ is even, and so the same sum also vanishes on the appended all-ones row; hence $C$ is also dependent in $M(B^+)$.
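The proof is constructive enough to turn into code. Here is a sketch of my own (not from the post), using bitmask Gaussian elimination over $\mathbb{F}_2$, that finds a press-set for the all-ON configuration and checks the parity condition, e.g. on the $5 \times 5$ grid:

```python
def solve_all_on(adj):
    """Given adj: vertex -> set of neighbours, return a press-set S with
    |S intersect N[v]| odd for every v (one exists, by the theorem)."""
    verts = sorted(adj)
    n = len(verts)
    idx = {v: i for i, v in enumerate(verts)}
    # Row for v encodes the equation sum_{u in N[v]} x_u = 1 over GF(2):
    # bit i is the coefficient of x_{verts[i]}, bit n the right-hand side.
    rows = []
    for v in verts:
        mask = (1 << idx[v]) | (1 << n)
        for u in adj[v]:
            mask |= 1 << idx[u]
        rows.append(mask)
    # Gauss-Jordan elimination over GF(2) on bitmask rows.
    pivot_row = {}
    r = 0
    for col in range(n):
        p = next((i for i in range(r, n) if rows[i] >> col & 1), None)
        if p is None:
            continue
        rows[r], rows[p] = rows[p], rows[r]
        for i in range(n):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]
        pivot_row[col] = r
        r += 1
    # The theorem guarantees consistency: no row reduces to 0...0 | 1.
    assert all(row != 1 << n for row in rows)
    # Free variables are set to 0, so each pivot variable is its RHS bit.
    return {verts[c] for c, i in pivot_row.items() if rows[i] >> n & 1}

# The 5x5 grid of the Tiger Electronics game:
grid = {(x, y): {(x + dx, y + dy)
                 for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                 if 0 <= x + dx < 5 and 0 <= y + dy < 5}
        for x in range(5) for y in range(5)}
S = solve_all_on(grid)
assert all(len(S & (grid[v] | {v})) % 2 == 1 for v in grid)
```

The final assertion is exactly the condition from the proof: pressing the vertices of $S$ toggles every light an odd number of times, turning them all OFF.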
Reference
Anderson, M., & Feil, T. (1998). Turning lights out with linear algebra. Mathematics Magazine, 71(4), 300-303. |
# Polymode installation on Windows machine
Polymode is very promising for handling the integration of R chunks using different modes.
I went through the documentation and it works well when I am inside an .Rmd file, but not inside an .Rnw file. In the .Rnw file I found Noweb mode activated without polymode's PM-rmd. So there must be something wrong with my installation on my Windows machine.
installation
I installed rmarkdown-mode from MELPA. By the way, this was not shown clearly in the polymode documentation; I wish it had a requirements section.
I was confused about that part of installation:
(setq load-path
      (append '("path/to/polymode/" "path/to/polymode/modes")
              load-path))
Because in Windows, polymode resides in c:/emacs/.emacs.d/elpa/polymode-20150105.931/ but I don't see the \modes folder in there! So is the above code needed if I had used install-packages from MELPA?
I installed the polymode package from MELPA. M-x list packages.
I have pandoc installed and checked in the PATH variables by M-x getenv RET PATH RET; pandoc was there.
relevant .init.el code
(require 'poly-R)
(require 'poly-markdown)
;; Markdown
(add-to-list 'auto-mode-alist '("\\.md" . poly-markdown-mode))
;;; R related modes
(add-to-list 'auto-mode-alist '("\\.Rnw" . poly-noweb+r-mode))
(add-to-list 'auto-mode-alist '("\\.Rmd" . poly-markdown+r-mode))
(setq ess-swv-processing-command "%s(%s)") ;; this gets rid of the .ess_weave() "function not found" error
MWE of .Rnw file
\documentclass[a4]{scrartcl}
\begin{document}
Here is a code chunk.
<<demo, fig.height=4,message=FALSE,warning=FALSE>>=
library(ggplot2)
summary(cars)
qplot(speed,dist,data=cars) +
geom_smooth()
@
You can also write inline expressions, e.g. $\pi=\Sexpr{pi}$.
\end{document}
Notes
• Windows 7 32 bit
• Polymode updated from MELPA
Update
I used this code right after ESS code in the init.el and it worked well:
(require 'poly-R)
(require 'poly-markdown)
(add-to-list 'auto-mode-alist '("\\.Rnw" . poly-noweb+r-mode))
I realized that for MELPA installation these lines of code in the documentation are irrelevant:
(setq load-path
      (append '("path/to/polymode/" "path/to/polymode/modes")
              load-path))
You need to add poly-noweb+r-mode to auto-mode-alist for Rnw files. You also need to watch for conflicts with ESS. ESS adds its own mode to auto-mode-alist for Rnw files, so you have to wait until after this happens to make sure you over-ride the ESS settings. This is what I have in my .emacs:
(require 'polymode)
(require 'poly-R)
• is it necessary to put this code after ESS code in the init.el file or doesn't matter? – doctorate Feb 13 '15 at 21:04
• @doctorate, @Tyler You need not put all the (requre ..)s, poly-R should be enough. Also deleting Rnw-mode is not necessary, the first assignment in auto-mode-alist takes precedence. – VitoshKa Feb 13 '15 at 21:16
• I confirm that there is no need to require (polymode). I just put the code after the ESS code and it worked well (as in the update). – doctorate Feb 14 '15 at 16:52 |
## Smooth representations of reductive $$p$$-adic groups: structure theory via types. (English) Zbl 0911.22014
This paper is concerned with the behavior under parabolic induction of the “type” of a connected reductive $$\ell$$-group $$G$$, and its use for classification of admissible representations [cf. I. N. Bernstein and A. V. Zelevinsky, Russ. Math. Surv. 31, 1–68 (1976); translation from Usp. Mat. Nauk 31, No. 3(189), 5–70 (1976; Zbl 0342.43017); Ann. Sci. Éc. Norm. Supér., IV. Sér. 10, 441–472 (1977; Zbl 0412.22015)] of $$G$$ (the authors refer only to unpublished notes of W. Casselman [An introduction to the theory of admissible representations of reductive $$p$$-adic groups, dated 1974], but not to the fundamental well-written publications cited above). This continues many previous papers of the authors, which are concerned mainly with $$\text{GL}(n)$$ and related groups.
### MSC:
22E50 Representations of Lie and linear algebraic groups over local fields
### Keywords:
representations of reductive $$p$$-adic groups; types
### Citations:
Zbl 0342.43017; Zbl 0348.43007; Zbl 0412.22015
# Points of Maxima
Calculus Level 4
Let $$f(x)$$ be a function defined on $$\mathbb R$$ (the set of all real numbers) such that $f'(x)=2010 (x-2009)(x-2010)^2(x-2011)^3(x-2014)^4$ for all $$x\in\mathbb R$$.
If $$g(x)$$ is a function defined on $$\mathbb R$$ with values in the interval $$(0,\infty)$$ such that $$f(x)=\ln (g(x))$$ for all $$x\in \mathbb R$$, then find the number of points in $$\mathbb R$$ at which $$g(x)$$ has a local maximum.
# concentrated solution example
When ordered from a chemical supply company, its molarity is $$16 \: \text{M}$$. Find concentration of solution by percent mass. Kate sat up fully, her attention now totally concentrated. You would need to weigh out $$150 \: \text{g}$$ of $$\ce{NaCl}$$ and add it to $$2850 \: \text{g}$$ of water. A concentration unit based on moles is preferable. where the subscripts s and d indicate the stock and dilute solutions, respectively. This is because the number of moles of the solute does not change, but the total volume of the solution increases. Favorite Answer. Brine is used for melting salt. Solution We are given the concentration of a stock solution, C 1, and the volume and concentration of the resultant diluted solution, V 2 and C 2. Concentration can be calculated as long as you know the moles of solvent per liter, or molarity. Concentrated solutions. An example of a concentrated solution is 98 percent sulfuric acid (~18 M). A concentrated solution is one that has a relatively large amount of dissolved solute. To concentrate a solution, one must add more solute (for example, alcohol), or reduce the amount of solvent (for example, water). 20 Oct. 2018. Unless otherwise noted, LibreTexts content is licensed by CC BY-NC-SA 3.0. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. For example, a standard IV solution does not contain the same solutes as blood but the concentration of charges is the same. Unsaturated solutions can be found on a daily basis, it is not necessary to be in a chemical laboratory. solution using the same salt as the sample. Volume of a Concentrated Solution Needed for Dilution What volume of 1.59 M KOH is required to prepare 5.00 L of 0.100 M KOH? New heavy industries were concentrated in n 2. : So she sent her magic into the bird and concentrated it on the broken wing and felt the bone re-knitting. Calculating concentration . Solution: Mass of Solute: 10 g. 
Mass of Solution: 10 + 70 = 80 g Chemists primarily need the concentration of solutions to be expressed in a way that accounts for the number of particles present that could react according to a particular chemical equation. 4. Calculate its molarity. 8 g of sodium hydroxide is dissolved in 2 dm 3 of water. It is often easiest to convert from cm. concentrated solution in a sentence - Use "concentrated solution" in a sentence 1. Note that both strong and weak acids and bases can be used in concentrated solutions. Example: 10 g salt and 70 g water are mixed and solution is prepared. [ "article:topic", "concentration", "ppm", "authorname:soulta", "showtoc:no", "license:ccbync" ], information contact us at [email protected], status page at https://status.libretexts.org. When adding a solute and solvent together, mass is conserved, but volume is not. If you were to heat a solution, causing the solvent to evaporate, you would be concentrating it, because the ratio of solute to solvent would be increasing. Watch the recordings here on Youtube! The resulting ash is then redissolved into concentrated solution in hot water. Still, concentrated and dilute are useful as terms to compare one solution to another (see figure below). The solution has been diluted by a factor of five, since the new volume is five times as great as the original volume. When a molarity is reported, the unit is the symbol $$\text{M}$$, which is read as "molar". Consequently, the molarity is one-fifth of its original value. A saturated solution is a form of a concentrated solution, but it contains the maximum amount of … particles are evenly distributed. Equivalents are used because the concentration of the charges is important than the identity of the solutes. Example 8. Note that the given volume has been converted to liters. Evaporation is controlled by convection. Another way of looking at concentration such as in IV solutions and blood is in terms of equivalents. 1000). 
Since the moles of solute in a solution is equal to the molarity multiplied by the volume in liters, we can set those equal. When the solute in a solution is a solid, a convenient way to express the concentration is a mass percent (mass/mass), which is the grams of solute per $$100 \: \text{g}$$ of solution. We need to find the volume of the stock solution, V 1. Remember that $$85\%$$ is the equivalent of 85 out of a hundred. Sometimes, you may want to make a particular amount of solution with a certain percent by mass and will need to calculate what mass of the solute is needed. The value of the equivalents is always positive regardless of the charge. Mass of $$\ce{NH_4Cl} = 42.23 \: \text{g}$$, Molar mass of $$\ce{NH_4Cl} = 53.50 \: \text{g/mol}$$, Volume of solution $$= 500.0 \: \text{mL} = 0.5000 \: \text{L}$$, Stock $$\ce{HNO_3} \: \left( M_1 \right) = 16 \: \text{M}$$, Volume of stock $$\ce{HNO_3} \: \left( V_1 \right) = ? 204+12 sentence examples: 1. When you stir salt into water until no more dissolves, you make a … Frequently, ingredient labels on food products and medicines have amounts listed as percentages (see figure below). For example, \(32 \: \text{ppm}$$ could be written as $$\frac{32 \: \text{mg solute}}{1 \: \text{L solution}}$$ while $$59 \: \text{ppb}$$ can be written as $$\frac{59 \: \mu \text{g solute}}{1 \: \text{L solution}}$$. The percentage of solute in a solution can more easily be determined by volume when the solute and solvent are both liquids. Nitric acid $$\left( \ce{HNO_3} \right)$$ is a powerful and corrosive acid. Example of concentrated solution? A solution that cannot hold any more solute at room temperature would be ___ a) a dilute solution b) a concentrated solution c) a saturated solution d) a supersaturated solution. Her business is now concentrated at Orielton Mill, Hundleton, where she plans to rear young trout to sell on to other fish farmers. Read about our approach to external linking. 9 years ago. 
: The more concentrated in terms of time and space an airdrop was, the more probable success was. This is not the blind animal that lives in the ground. \: \text{mL}\). Hand soap, soft drinks and liquid medicine are concentrated solutions commonly found in … To calculate the molarity of a solution, you divide the moles of solute by the volume of the solution expressed in liters. 2. Sometimes, the concentration is lower in which case milliequivalents $$\left( \text{mEq} \right)$$ is a more appropriate unit. The volume of the solute divided by the volume of the solution expressed as a percent, yields the percent by volume (volume/volume) of the solution. Describe a solution whose concentration is in $$\text{ppm}$$ or $$\text{ppb}$$. You can rearrange and solve for the mass of solute. The degree of concentration is measured in moles. Note that if this problem had a different ion with a different charge, that would need to be accounted for in the calculation. The new molarity can easily be calculated by using the above equation and solving for $$M_2$$. Relevance. It is the amount of solute dissolves in 100 g solvent. For example, a pinch of salt in a glass of water will be a dilute solution, as there is barely any salt in the water, whereas mixing 10 spoons of salt in a glass of water will lead to a concentrated solution, as there is way too much salt in the water. Step 1: List the known quantities and plan the problem. \: \text{mL}\) of ethanol and adding enough water to make $$240. Common examples of solutions are the sugar in water and salt in water solutions, soda water, etc. $V_1 = \frac{M_2 \times V_2}{V_1} = \frac{0.50 \: \text{M} \times 8.00 \: \text{L}}{16 \: \text{M}} = 0.25 \: \text{L} = 250 \: \text{mL}$. $\text{Percent by mass} = \frac{\text{mass of solute}}{\text{mass of solution}} \times 100\%$. 12 M hydrochloric acid is also called concentrated sulfuric acid because it contains a minimum amount of water. 
Examples of Concentrated Solutions 12 M HCl is more concentrated than 1 M HCl or 0.1 M HCl. Just like metric prefixes used with base units, milli is used to modify equivalents so \(1 \: \text{Eq} = 1000 \: \text{mEq}$$. Definition of concentrated in english: concentrated. The dilution from $$16 \: \text{M}$$ to $$0.5 \: \text{M}$$ is a factor of 32. The solvent does not necessarily have to be water. A concentrated stock of EDTA can be prepared using either NaOH or KOH to adjust pH, to be available whenever a solution requires EDTA as a component. Instead it is referring to a specific number of molecules. There is particle homogeneity i.e. Vinegar is a dilute solution of acetic acid in water. before continuing with a concentration calculation. Examples of household solutions would include the following: coffee or tea We can set up an equality between the moles of the solute before the dilution (1) and the moles of the solute after the dilution (2). Legal. There are several ways to express the amount of solute present in a solution. $\begin{array}{ll} \textbf{Ion} & \textbf{Equivalents} \\ \ce{Na^+} & 1 \\ \ce{Mg^{2+}} & 2 \\ \ce{Al^{3+}} & 3 \\ \ce{Cl^-} & 1 \\ \ce{NO_3^-} & 1 \\ \ce{SO_4^{2-}} & 2 \end{array}$. Adding a tablespoon of sugar to a cup of hot coffee produces a solution of unsaturated sugar. Concentrated solution is a solution that contains a large amount of solute relative to the amount that could dissolve. \: \text{mL}\) of solution, the percent by volume is: \begin{align} \text{Percent by volume} &= \frac{\text{volume of solute}}{\text{volume of solution}} \times 100\% \\ &= \frac{40 \: \text{mL ethanol}}{240 \: \text{mL solution}} \times 100\% \\ &= 16.7\% \: \text{ethanol} \end{align}. 3. \: \text{g} \end{align}\]. 
‘The use of these concentrated sprays can reduce application costs and avoid excessive leaf run-off as well as spray drift out of the orchard.’ ‘It is assumed to be obtained in dilute solutions, and may not strictly apply to concentrated solutions.’ ‘The electrolyte is concentrated KOH solution.’ However, these terms are relative, and we need to be able to express concentration in a more exact, quantitative manner. CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon. Complete calculations relating equivalents to moles, volumes, or mass. One way to describe the concentration of a solution is by the percent of the solution that is composed of the solute. If a solution is made by taking $$40. Having four significant figures is appropriate. A dilute solution is one that has a relatively small amount of dissolved solute. Another word for concentrated. Some examples of concentrated solutions include concentrated acids and concentrated bases used in laboratories. Cadmium sulphate, CdSO 4, is known in several hydrated forms; being deposited, on spontaneous evaporation of a concentrated aqueous solution, in the form of large monosymmetric crystals of composition 3CdSO 4.8H 2 O, whilst a boiling saturated solution, to which concentrated sulphuric acid has been added, deposits crystals of composition CdSO 4 4H 2 0. What mass of sodium did the patient receive? Because these methods generally result in slightly different vales, it is important to always indicate how a given percentage was calculated. For example, \(\ce{Na^+}$$ and $$\ce{Cl^-}$$ both have 1 equivalent per mole. Notice that it was necessary to subtract the mass of the $$\ce{NaCl}$$ $$\left( 150 \: \text{g} \right)$$ from the mass of solution $$\left( 3.00 \times 10^3 \: \text{g} \right)$$ to calculate the mass of the water that would need to be added. 
Solutions are made of a SOLUTE (usually a solid substance) dissolved in a SOLVENT (usually a liquid). In a solution, all the components appear as a single phase. A concentrated solution is one in which there is a large amount of solute present in the mixture. Examples include household-strength bleach, which can be 4% sodium hypochlorite (commercial-strength bleach may be even more concentrated), and the acid used for cleaning bricks, for example to remove mortar that got on them during bricklaying. Nowadays you can find a lot of concentrated solutions in supermarkets: soups, stock, paint, detergent, etc. A dilute solution, by contrast, is one in which a small amount of solute is dissolved in a relatively large quantity of solvent. Also, be aware that the terms "concentrate" and "dilute" can be used as verbs: juice, for example, is first concentrated by evaporating the water (removing the solvent) and later diluted by adding water back.

Percent concentration can be expressed in one of three ways: (1) the mass of the solute divided by the mass of the solution, (2) the volume of the solute divided by the volume of the solution, or (3) the mass of the solute divided by the volume of the solution. Understanding these units is much easier if you consider a percentage as parts per hundred; for very small concentrations of solute, such as the amount of lead in drinking water, units such as parts per million are used instead. Suppose that a solution was prepared by dissolving $$25.0 \: \text{g}$$ of sugar into $$100 \: \text{g}$$ of water. The total mass of the solution is the mass of the solvent plus the mass of the solute, so the percent by mass would be calculated as follows:

$$\text{Percent by mass} = \frac{25 \: \text{g sugar}}{125 \: \text{g solution}} \times 100\% = 20\% \: \text{sugar}$$

The mass-volume percent is also used in some cases and is calculated in a similar way to the previous two percentages. Note, however, that unlike masses, you cannot simply add together the volumes of solute and solvent to get the final solution volume: simply mixing $$40 \: \text{mL}$$ of ethanol and $$200 \: \text{mL}$$ of water would probably not give exactly $$240 \: \text{mL}$$ of solution.

Molarity is defined as:

$$\text{Molarity} \: \left( \text{M} \right) = \frac{\text{moles of solute}}{\text{liters of solution}} = \frac{\text{mol}}{\text{L}}$$

For example, we might say "a 1 M solution of hydrochloric acid", meaning that there is 1 mole of hydrochloric acid per liter of solution; likewise, a solution labeled as $$1.5 \: \text{M} \: \ce{NH_3}$$ is a "1.5 molar solution of ammonia".

Worked example: a solution is prepared by dissolving $$42.23 \: \text{g}$$ of $$\ce{NH_4Cl}$$ into enough water to make $$500.0 \: \text{mL}$$ of solution. First convert the mass to moles, then divide by the volume in liters:

$$42.23 \: \text{g} \: \ce{NH_4Cl} \times \frac{1 \: \text{mol} \: \ce{NH_4Cl}}{53.50 \: \text{g} \: \ce{NH_4Cl}} = 0.7893 \: \text{mol} \: \ce{NH_4Cl}$$

$$\frac{0.7893 \: \text{mol} \: \ce{NH_4Cl}}{0.5000 \: \text{L}} = 1.579 \: \text{M}$$

Concentration can also be expressed in g/dm³: concentration in g/dm³ = mass of solute in g ÷ volume of solution in dm³. A solution of sodium chloride might, for example, have a concentration of 10 g/dm³.

When additional water is added to an aqueous solution, the concentration of that solution decreases. A common dilution problem involves deciding how much of a highly concentrated solution (typically referred to as the stock solution) is required to make a desired quantity of solution with a lower concentration. For example, if you dilute $$100. \: \text{mL}$$ of a $$2.0 \: \text{M}$$ solution by adding enough water to make the total volume $$500. \: \text{mL}$$, the new concentration is:

$$M_2 = \frac{M_1 \times V_1}{V_2} = \frac{2.0 \: \text{M} \times 100. \: \text{mL}}{500. \: \text{mL}} = 0.40 \: \text{M}$$

Stock solutions are common in the lab: for example, if a respiration medium is potassium based, one might prepare 100 mM EDTA as a stock and use 1 mL of the stock per liter of working solution. (Similarly, when measuring the pH of sodium bromide brines, sodium bromide is used as the filling solution.)

Finally, ion concentrations are often given in equivalents: one equivalent is equal to one mole of charge in an ion. How many equivalents of $$\ce{Ca^{2+}}$$ are present in a solution that contains 3.5 moles of $$\ce{Ca^{2+}}$$? Use the relationship between moles and equivalents of $$\ce{Ca^{2+}}$$ to find the answer. As another exercise: a patient received $$1.50 \: \text{L}$$ of saline solution which has a concentration of $$154 \: \text{mEq/L} \: \ce{Na^+}$$. Use dimensional analysis, the relationship between $$\ce{Na^+}$$ and equivalents, and the molar mass of sodium to find the mass of sodium the patient received.
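The worked examples above can be checked with a short script. This is an illustrative sketch, not part of the source material; the helper function names are invented for clarity, and the values come from the examples:

```python
# Sketch: reproduce the molarity and dilution worked examples above.

def molarity(mass_g, molar_mass_g_per_mol, volume_l):
    """Molarity = moles of solute / liters of solution."""
    return (mass_g / molar_mass_g_per_mol) / volume_l

def dilute(m1, v1, v2):
    """M2 = M1 * V1 / V2 (volumes in any consistent unit)."""
    return m1 * v1 / v2

# 42.23 g of NH4Cl (molar mass 53.50 g/mol) in 500.0 mL of solution:
print(round(molarity(42.23, 53.50, 0.5000), 3))  # -> 1.579 (M)

# Diluting 100. mL of a 2.0 M solution to a total volume of 500. mL:
print(dilute(2.0, 100.0, 500.0))  # -> 0.4 (M)
```

The same two helpers cover the percent-by-mass case as well, since all three calculations are just ratios of amounts.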
# Per capita personal income ranks toward the bottom of the nation, which means many of the state’s residents
## Prepare yourself with the most up-to-the-minute state regulations for payday loans in Georgia.
Georgia is the eighth-ranked state in the U.S. when it comes to population, and plenty of its residents look for financial assistance like cash advance loans. The Empire State of the South received that nickname in part due to its economic development, thanks in no small part to the industrious mindset of the people who run businesses here. Storefronts help residents in small towns, as well as bigger cities like Atlanta, obtain emergency funds. Before signing on the dotted line, be sure you know the most up-to-date rules and regulations for payday loans in the Peach State. We are ready to help once you are prepared to fill out a secure loan request form.
Georgia Cash Advance Laws
Traditional cash advances are generally forbidden in Georgia. The state’s Industrial Loan Act sets the small-loan rate cap at 60 percent per year, which makes it hard for typical lenders to make money on these loans.
kira201's blog
By kira201, history, 4 months ago,
In Codeforces Round 709, I made a submission from my alt account for div2A to see if the Codeforces plagiarism checker really works. To be honest, I initially thought the round was unrated because there was no formal announcement, so I dared to try it out. Eventually, I ended up losing rating, lol. Well, it doesn't matter.
Here are the two submissions: 110629586 and 110636692, with an exact 100% match. But the plagiarism wasn't detected. So I wondered whether several such submissions go unnoticed every contest due to this bug, or am I missing something? Feel free to comment, I don't care about the downvotes, but if this is a serious issue, MikeMirzayanov please look into it.
• +34
» 4 months ago, # | +18 My guess is the plagiarism checker won't be run on all the questions in a contest because it is highly likely that one solution to trivial problems (implementation wise) such as the one that you have shared is similar to another.PS: I don't think having an alt account is a good idea. Not sure about the exact rules though.
• » » 4 months ago, # ^ | 0 Yeah, I understand that in many cases div2A and B are trivial, so the logic can be the same. But what about the case where there is a 100% code match as above, like those who copy and paste a solution received from their friends or groups?
» 4 months ago, # | +22 It is not exactly the same code, because one uses the "GNU C++17" compiler and the other uses "GNU C++17 (64)", so when the compiler expands the libraries, maybe it will produce different code. But yeah, the Code Plagiarism Checker should definitely detect this kind of thing.
» 4 months ago, # | 0 I got flagged for plagiarism on the first question. My sol — 109233079, the other sol — 109231770. So, imo the plag checker runs on every problem even if the sol is very trivial.
» 4 months ago, # | +70 Sure, if a problem is like print a*b we don't run anti-plagiarism. And the fact that the codes coincide can only mean that the participants use the same template code. It is strange to punish for this.
» 4 months ago, # | 0 I think it is much better than this blog :)
» 7 weeks ago, # | +3 At what percentage of similarity are two solutions considered plagiarized?
# Scheduling
In Unix, it is recommended to configure Snow Inventory Agent to run at a given interval using a scheduler such as crontab (or similar) to perform the inventory and transfer the result file to Snow Inventory Server.
In the following example from root crontab file, the agent will run every day at 1:15 AM:
15 1 * * * nice -n 10 java -jar /opt/snow/snowagent.jar config=/opt/snow/snowagent.config
For business-critical environments, the agent can be scheduled to run in the regular service maintenance windows for the servers.
Ideally, an agent should be configured not to disturb or consume system resources that are needed for business-critical applications running on the server. To achieve this in a Unix environment, one would typically use the nice program to set the process priority to low. In the example above, the niceness is set to 10, but it can of course be set to any suitable value.
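The agent itself is launched by the cron line above, but the effect of `nice -n 10` can be illustrated from any process. The following is a small sketch (not part of the Snow product) using Python's `os.nice`, which applies the same kind of increment to the calling process's niceness:

```python
import os

# Sketch: `nice -n 10` starts a process with its niceness raised by 10.
# os.nice() does the same from inside a running process: it adds the
# given increment to the current niceness and returns the new value.
before = os.nice(0)    # an increment of 0 just reads the current niceness
after = os.nice(10)    # lower the priority by 10, like the cron line does
print(before, after)
```

Note that the resulting niceness is clamped to the scheduler's maximum of 19, so starting from a high niceness the full increment may not apply.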
# Julia Evans
http://jvns.ca/atom.xml
2019-02-17T20:55:37+00:00
Today I organized the front page of this blog (jvns.ca) into CATEGORIES! Now it is actually possible to make some sense of what is on here!! There are 28 categories (computer networking! learning! “how things work”! career stuff! many more!) I am so excited about this.
How it works: Every post is in only 1 category. Obviously the categories aren’t “perfect” (there is a “how things work” category and a “kubernetes” category and a “networking” category, and so for a “how container networking works in kubernetes” I need to just pick one) but I think it’s really nice and I’m hoping that it’ll make the blog easier for folks to navigate.
If you’re interested in more of the story of how I’m thinking about this: I’ve been a little dissatisfied for a long time with how this blog is organized. Here’s where I started, in 2013, with a pretty classic blog layout (this is Octopress, which was a Jekyll Wordpress-lookalike theme that was cool back then and which served me very well for a long time):
### problem with “show the 5 most recent posts”: you don’t know what the person’s writing is about!
This is a super common way to organize a blog: on the homepage of your blog, you display maybe the 5 most recent posts, and then maybe have a “previous” link.
1. it’s hard to hunt through their back catalog to find cool things they’ve written
2. it’s SO HARD to get an overall sense for the body of a person’s work by reading 1 blog post at a time
### next attempt: show every post in chronological order
My next attempt at blog organization was to show every post on the homepage in chronological order. This was inspired by Dan Luu’s blog, which takes a super minimal approach. I switched to this (according to the internet archive) sometime in early 2016. Here’s what it looked like (with some CSS issues :))
The reason I like this “show every post in chronological order” approach more is that when I discover a new blog, I like to obsessively binge read through the whole thing to see all the cool stuff the person has written. Rachel by the bay also organizes her writing this way, and when I found her blog I was like OMG WOW THIS IS AMAZING I MUST READ ALL OF THIS NOW and being able to look through all the entries quickly and start reading ones that caught my eye was SO FUN.
Will Larson’s blog also has a “list of all posts” page which I find useful because it’s a good blog, and sometimes I want to refer back to something he wrote months ago and can’t remember what it was called, and being able to scan through all the titles makes it easier to do that.
I was pretty happy with this and that’s how it’s been for the last 3 years.
### problem: a chronological list of 390 posts still kind of sucks
As of today, I have 390 posts here (360,000 words! that’s, like, 4 300-page books! eep!). This is objectively a lot of writing and I would like people new to the blog to be able to navigate it and actually have some idea what’s going on.
And this blog is not actually just a totally disorganized group of words! I have a lot of specific interests: I’ve written probably 30 posts about computer networking, 15ish on ML/statistics, 20ish career posts, etc. And when I write a new Kubernetes post or whatever, it’s usually at least sort of related to some ongoing train of thought I have about Kubernetes. And it’s totally obvious to me what other posts that post is related to, but obviously to a new person it’s not at all clear what the trains of thought are in this blog.
### solution for now: assign every post 1 (just 1) category
My new plan is to assign every post a single category. I got this idea from Itamar Turner-Trauring’s site.
Here are the initial categories:
• Cool computer tools / features / ideas
• Computer networking
• How a computer thing works
• Kubernetes / containers
• Zines / comics
• On writing comics / zines
• Conferences
• Organizing conferences
• Statistics / machine learning / data analysis
• Year in review
• Infrastructure / operations engineering
• Career / work
• Working with others / communication
• Remote work
• Talks transcripts / podcasts
• On blogging / speaking
• On learning
• Rust
• Linux debugging / tracing tools
• Debugging stories
• Fan posts about awesome work by other people
• Inclusion
• rbspy
• Performance
• Open source
• Linux systems stuff
• Recurse Center (my daily posts during my RC batch)
I guess you can tell this is a systems-y blog because there are 8 different systems-y categories (kubernetes, infrastructure, linux debugging tools, rust, debugging stories, performance, and linux systems stuff, how a computer thing works) :).
But it was nice to see that I also have this huge career / work category! And that category is pretty meaningful to me, it includes a lot of things that I struggled with and were hard for me to learn. And I get to put all my machine learning posts together, which is an area I worked in for 3 years and am still super interested in and every so often learn a new thing about!
### How I assign the categories: a big text file
I came up with a scheme for assigning the categories that I thought was really fun! I knew immediately that coming up with categories in advance would be impossible (how was I supposed to know that “fan posts about awesome work by other people” was a substantial category?)
So instead, I took kind of a Marie Kondo approach: I wrote a script to just dump all the titles of every blog post into a text file, and then I just used vim to organize them roughly into similar sections. Seeing everything in one place (a la marie kondo) really helped me see the patterns and figure out what some categories were.
Here’s the final result of that text file. I think having a lightweight way of organizing the posts all in one file made a huge difference and that it would have been impossible for me to seen the patterns otherwise.
### How I implemented it: a hugo taxonomy
Once I had that big text file, I wrote a janky python script to assign the categories in that text file to the actual posts.
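Roughly, the idea was something like this (a simplified sketch, not the actual script — the file layout, regex, and front-matter format here are illustrative):

```python
# Sketch: add a `juliasections` entry to a post's YAML front matter.
# Assumes each post starts with a `---` ... `---` front matter block.
import re
from pathlib import Path

def assign_category(post_path, category):
    text = Path(post_path).read_text()
    # insert the taxonomy line just before the closing `---` of the front matter
    updated = re.sub(
        r"\n---\n",
        f"\njuliasections: ['{category}']\n---\n",
        text,
        count=1,
    )
    Path(post_path).write_text(updated)
```

From there it’s just a loop over the category/title pairs parsed out of the big text file.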
I use Hugo for this blog, so I also needed to tell Hugo about the categories. This blog already technically has tags (they’re woefully underused, but I didn’t want to delete them). It turns out that in Hugo you can define arbitrary taxonomies, so I defined a new taxonomy for these sections (right now it’s called, unimaginatively, juliasections).
The details of how I did this are pretty boring but here’s the hugo template that makes it display on the homepage. I used this Hugo documentation page on taxonomies a lot.
### organizing my site is cool! reverse chronology maybe isn’t the best possible thing!
Amy Hoy has this interesting article called how the blog broke the web about how the rise of blog software made people adopt a site format that maybe didn’t serve what they were writing the best.
I don’t personally feel that mad about the blog / reverse chronology organization: I like blogging! I think it was nice for the first 6 years or whatever to be able to just write things that I think are cool without thinking about where they “fit”. It’s worked really well for me.
But today, 360,000 words in, I think it makes sense to add a little more structure :).
### what it looks like now!
Here’s what the new front page organization looks like! These are the blogging / learning / rust sections! I think it’s cool how you can see the evolution of some of my thinking (I sure have written a lot of posts about asking questions :)).
### I ❤ the personal website
This is also part of why I love having a personal website that I can organize any way I want: for both of my main sites (jvns.ca and now wizardzines.com) I have total control over how they appear! And I can evolve them over time at my own pace if I decide something a little different will work better for me. I’ve gone from a jekyll blog to octopress to a custom-designed octopress blog to Hugo and made a ton of little changes over time. It’s so nice.
I think it’s fun that these 3 screenshots are each 3 years apart – what I wanted in 2013 is not the same as 2016 is not the same as 2019! This is okay!
And I really love seeing how other people choose to organize their personal sites! Please keep making cool different personal sites.
### !!Con 2019: submit a talk!
As some of you might know, for the last 5 years I’ve been one of the organizers for a conferences called !!Con. This year it’s going to be held on May 11-12 in NYC.
The submission deadline is Sunday, March 3 and you can submit a talk here.
(we also expanded to the west coast this year: !!Con West is next week!! I’m not on the !!Con West team since I live on the east coast but they’re doing amazing work, I have a ticket, and I’m so excited for there to be more !!Con in the world)
### !!Con is about the joy, excitement, and surprise of computing
Computers are AMAZING. You can make programs that seem like magic, computer science has all kind of fun and surprising tidbits, there are all kinds of ways to make really cool art with computers, the systems that we use every day (like DNS!) are often super fascinating, and sometimes our computers do REALLY STRANGE THINGS and it’s very fun to figure out why.
!!Con is about getting together for 2 days to share what we all love about computing. The only rule of !!Con talks is that the talk has to have an exclamation mark in the title :)
We originally considered calling !!Con ExclamationMarkCon but that was too unwieldy so we went with !!Con :).
### !!Con is inclusive
The other big thing about !!Con is that we think computing should include everyone. To make !!Con a space where everyone can participate, we
• have open captioning for all talks (so that people who can’t hear well can read the text of the talk as it’s happening). This turns out to be great for LOTS of people – if you just weren’t paying attention for a second, you can look at the live transcript to see what you missed!
• pay our speakers & pay for speaker travel
• have a code of conduct (of course)
• use the RC social rules
• make sure our washrooms work for people of all genders
• let people specify on their badges if they don’t want photos taken of them
• do a lot of active outreach to make sure our set of speakers is diverse
### past !!Con talks
I think maybe the easiest way to explain !!Con if you haven’t been is through the talk titles! Here are a few arbitrarily chosen talks from past !!Cons:
If you want to see more (or get an idea of what !!Con talk descriptions usually look like), here’s every past year of the conference:
### this year you can also submit a play / song / performance!
One difference from previous !!Cons is that if you want submit a non-talk-talk to !!Con this year (like a play!), you can! I’m very excited to see what people come up with. For more of that see Expanding the !!Con aesthetic.
### all talks are reviewed anonymously
One big choice that we’ve made is to review all talks anonymously. This means that we’ll review your talk the same way whether you’ve never given a talk before or if you’re an internationally recognized public speaker. I love this because many of our best talks are from first time speakers or people who I’d never heard of before, and I think anonymous review makes it easier to find great people who aren’t well known.
### writing a good outline is important
We can’t rely on someone’s reputation to determine if they’ll give a good talk, but we do need a way to see that people have a plan for how to present their material in an engaging way. So we ask everyone to give a somewhat detailed outline explaining how they’ll spend their 10 minutes. Some people do it minute-by-minute and some people just say “I’ll explain X, then Y, then Z, then W”.
Lindsey Kuper wrote some good advice about writing a clear !!Con outline here which has some examples of really good outlines which you can see here.
!!Con is pay-what-you-can (if you can’t afford a $300 conference ticket, we’re the conference for you!). Because of that, we rely on our incredible sponsors (companies who want to build an inclusive future for tech with us!) to help make up the difference so that we can pay our speakers for their amazing work, pay for speaker travel, have open captioning, and everything else that makes !!Con the amazing conference it is.

If you love !!Con, a huge way you can help support the conference is to ask your company to sponsor us! Here’s our sponsorship page and you can email me at [email protected] if you’re interested.

### hope to see you there ❤

I’ve met so many fantastic people through !!Con, and it brings me a lot of joy every year. The thing that makes !!Con great is all the amazing people who come to share what they’re excited about every year, and I hope you’ll be one of them.

### Networking tool comics!

Hello! I haven’t been blogging too much recently because I’m working on a new zine project: Linux networking tools!

I’m pretty excited about this one – I LOVE computer networking (it’s what I spent a big chunk of the last few years at work doing), but getting started with all the tools was originally a little tricky! For example – what if you have the IP address of a server and you want to make a https connection to it and check that it has a valid certificate? But you haven’t changed DNS to resolve to that server yet (because you don’t know if it works!) so you need to use the IP address?

If you do curl https://1.2.3.4/, curl will tell you that the certificate isn’t valid (because it’s not valid for 1.2.3.4). So you need to know to do curl https://jvns.ca --resolve jvns.ca:443:104.198.14.52.

I know how to use curl --resolve because my coworker told me how. And I learned that to find out when a cert expires you can do openssl x509 -in YOURCERT.pem -text -noout the same way.
So the goal with this zine is basically to be “your very helpful coworker who gives you tips about how to use networking tools” in case you don’t have that person. And as we know, a lot of these tools have VERY LONG man pages and you only usually need to know like 5 command line options to do 90% of what you want to do. For example I only ever do maybe 4 things with openssl even though the openssl man pages together have more than 60,000 words.

There are a few things I’m also adding (like ethtool and nmap and tc) which I don’t personally use super often but I think are super useful to people with different jobs than me. And I’m a big fan of mixing more advanced things (like tc) with basic things (like ssh) because then even if you’re learning the basic things for the first time, you can learn that the advanced thing exists!

Here’s some work in progress:

It’s been super fun to draw these: I didn’t know about ssh-copy-id or ~. before I made that ssh comic and I really wish I’d known about them earlier!

As usual I’ll announce the zine when it comes out here, or you can sign up for announcements at https://wizardzines.com/mailing-list/.

### A few early marketing thoughts

At some point last month I said I might write more about business, so here are some very early marketing thoughts for my zine business (https://wizardzines.com!). The question I’m trying to make some progress on in this post is: “how to do marketing in a way that feels good?”

### what’s the point of marketing?

Okay! What’s marketing? What’s the point?

I think the ideal way marketing works is:

1. you somehow tell a person about a thing
2. you explain somehow why the thing will be useful to them / why it is good
3. they buy it and they like the thing because it’s what they expected

(or, when you explain it they see that they don’t want it and don’t buy it which is good too!!)

So basically as far as I can tell good marketing is just explaining what the thing is and why it is good in a clear way.
### what internet marketing techniques do people use?

I’ve been thinking a bit about internet marketing techniques I see people using on me recently. Here are a few examples of internet marketing techniques I’ve seen:

1. word of mouth (“have you seen this cool new thing?!”)
2. twitter / instagram marketing (build a twitter/instagram account)
3. email marketing (“build a mailing list with a bajillion people on it and sell to them”)
4. email marketing (“tell your existing users about features that they already have that they might want to use”)
5. social proof marketing (“jane from georgia bought a sweater”), eg fomo.com
6. cart notifications (“you left this sweater in your cart??! did you mean to buy it? maybe you should buy it!“)
7. content marketing (which is fine but whenever people refer to my writing as ‘content’ I get grumpy :))

### you need some way to tell people about your stuff

Something that is definitely true about marketing is that you need some way to tell new people about the thing you are doing. So for me when I’m thinking about running a business it’s less about “should i do marketing” and more like “well obviously i have to do marketing, how do i do it in a way that i feel good about?”

### what’s up with email marketing?

I feel like every single piece of internet marketing advice I read says “you need a mailing list”. This is advice that I haven’t really taken to heart – technically I have 2 mailing lists:

1. the RSS feed for this blog, which sends out new blog posts to a mailing list for folks who don’t use RSS (which 3000 of you get)
2. https://wizardzines.com's list, for comics / new zine announcements (780 people subscribe to that! thank you!)

but definitely neither of them is a Machine For Making Sales and I’ve put in almost no efforts in that direction yet.

here are a few things I’ve noticed about marketing mailing lists:

• most marketing mailing lists are boring but some marketing mailing lists are actually interesting!
For example I kind of like amy hoy’s emails.

• Someone told me recently that they have 200,000 people on their mailing list (?!!) which made the “a mailing list is a machine for making money” concept make a lot more sense to me. I wonder if people who make a lot of money from their mailing lists all have huge 10k+ person mailing lists like this?

### what works for me: twitter

Right now for my zines business I’d guess maybe 70% of my sales come from Twitter. The main thing I do is tweet pages from zines I’m working on (for example: yesterday’s comic about ss). The comics are usually good and fun so invariably they get tons of retweets, which means that I end up with lots of followers, which means that when I later put up the zine for sale lots of people will buy it. And of course people don’t have to buy the zines, I post most of what ends up in my zines on twitter for free, so it feels like a nice way to do it. Everybody wins, I think.

(side note: when I started getting tons of new followers from my comics I was actually super worried that it would make my experience of Twitter way worse. That hasn’t happened! the new followers all seem totally reasonable and I still get a lot of really interesting twitter replies which is wonderful ❤)

I don’t try to hack/optimize this really: I just post comics when I make them and I try to make them good.

### a small Twitter innovation: putting my website on the comics

Here’s one small marketing change that I made that I think makes sense!

In the past, I didn’t put anything about how to buy my comics on the comics I posted on Twitter, just my Twitter username. Like this:

After a while, I realized people were asking me all the time “hey, can I buy a book/collection? where do these come from? how do I get more?“! I think a marketing secret is “people actually want to buy things that are good, it is useful to tell people where they can buy things that are good”.
So just recently I’ve started adding my website and a note about my current project on the comics I post on Twitter. It doesn’t say much: just “❤ these comics? buy a collection! wizardzines.com” and “page 11 of my upcoming bite size networking zine”. Here’s what it looks like:

I feel like this strikes a pretty good balance between “julia you need to tell people what you’re doing otherwise how are they supposed to buy things from you” and “omg too many sales pitches everywhere”? I’ve only started doing this recently so we’ll see how it goes.

### should I work on a mailing list?

It seems like the same thing that works on twitter would work by email if I wanted to put in the time (email people comics! when a zine comes out, email them about the zine and they can buy it if they want!).

One thing I LOVE about Twitter though is that people always reply to the comics I post with their own tips and tricks that they love and I often learn something new. I feel like email would be nowhere near as fun :)

But I still think this is a pretty good idea: keeping up with twitter can be time consuming and I bet a lot of people would like to get occasional email with programming drawings. (would you?)

One thing I’m not sure about is – a lot of marketing mailing lists seem to use somewhat aggressive techniques to get new emails (a lot of popups on a website, or adding everyone who signs up to their service / buys a thing to a marketing list) and while I’m basically fine with that (unsubscribing is easy!), I’m not sure that it’s what I’d want to do, and maybe less aggressive techniques will work just as well? We’ll see.

### should I track conversion rates?

A piece of marketing advice I assume people give a lot is “be data driven, figure out what things convert the best, etc”. I don’t do this almost at all – gumroad used to tell me that most of my sales came from Twitter which was good to know, but right now I have basically no idea how it works.
Doing a bunch of work to track conversion rates feels bad to me: it seems like it would be really easy to go down a dumb rabbit hole of “oh, let’s try to increase conversion by 5%” instead of just focusing on making really good and cool things.

My guess is that what will work best for me for a while is to have some data that tells me in broad strokes how the business works (like “about 70% of sales come from twitter”) and just leave it at that.

### should I do advertising?

I had a conversation with Kamal about this post that went:

• julia: “hmm, maybe I should talk about ads?”
• julia: “wait, are ads marketing?”
• kamal: “yes ads are marketing”

So, ads! I don’t know anything about advertising except that you can advertise on Facebook or Twitter or Google.

Some non-ethical questions I have about advertising:

• how do you choose what keywords to advertise on?
• are there actually cheap keywords, like is ‘file descriptors’ cheap?
• how much do you need to pay per click? (for some weird linux keywords, google estimated 20 cents a click?)
• can you use ads effectively for something that costs $10?
This seems nontrivial to learn about and I don’t think I’m going to try soon.
### other marketing things
a few other things I’ve thought about:
• I learned about “social proof marketing” sites like fomo.com yesterday which makes popups on your site like “someone bought COOL THING 3 hours ago”. This seems like it has some utility (people are actually buying things from me all the time, maybe that’s useful to share somehow?) but those popups feel a bit cheap to me and I don’t really think it’s something I’d want to do right now.
• similarly a lot of sites like to inject these popups like “HELLO PLEASE SIGN UP FOR OUR MAILING LIST”. similar thoughts. I’ve been putting an email signup link in the footer which seems like a good balance between discoverable and annoying. As an example of a popup which isn’t too intrusive, though: nate berkopec has one on his site which feels really reasonable! (scroll to the bottom to see it)
Maybe marketing is all about “make your things discoverable without being annoying”? :)
### that’s all!
Hopefully some of this was interesting! Obviously the most important thing in all of this is to make cool things that are useful to people, but I think cool useful writing does not actually sell itself!
If you have thoughts about what kinds of marketing have worked well for you / you’ve felt good about I would love to hear them!
### Some nonparametric statistics math
I’m trying to understand nonparametric statistics a little more formally. This post may not be that intelligible because I’m still pretty confused about nonparametric statistics, there is a lot of math, and I make no attempt to explain any of the math notation. I’m working towards being able to explain this stuff in a much more accessible way but first I would like to understand some of the math!
There’s some MathJax in this post so the math may or may not render in an RSS reader.
Some questions I’m interested in:
• what is nonparametric statistics exactly?
• what guarantees can we make? are there formulas we can use?
• why do methods like the bootstrap method work?
since these notes are from reading a math book and math books are extremely dense this is basically going to be “I read 7 pages of this math book and here are some points I’m confused about”
### what’s nonparametric statistics?
Today I’m looking at “all of nonparametric statistics” by Larry Wasserman. He defines nonparametric inference as:
a set of modern statistical methods that aim to keep the number of underlying assumptions as weak as possible
Basically my interpretation of this is that – instead of assuming that your data comes from a specific family of distributions (like the normal distribution) and then trying to estimate the parameters of that distribution, you don’t make many assumptions about the distribution (“this is just some data!!“). Not having to make assumptions is nice!
There aren’t no assumptions though – he says
we assume that the distribution $F$ lies in some set $\mathfrak{F}$ called a statistical model. For example, when estimating a density $f$, we might assume that $$f \in \mathfrak{F} = \left\{ g : \int(g^{\prime\prime}(x))^2dx \leq c^2 \right\}$$ which is the set of densities that are not “too wiggly”.
I have not too much intuition for the condition $\int(g^{\prime\prime}(x))^2dx \leq c^2$. I calculated that integral for the normal distribution on wolfram alpha and got 4, which is a good start. (4 is not infinity!)
• what’s an example of a probability density function that doesn’t satisfy that $\int(g^{\prime\prime}(x))^2dx \leq c^2$ condition? (probably something with an infinite number of tiny wiggles, and I don’t think any distribution i’m interested in in practice would have an infinite number of tiny wiggles?)
• why does the density function being “too wiggly” cause problems for nonparametric inference? very unclear as yet.
### we still have to assume independence
One assumption we won’t get away from is that the samples in the data we’re dealing with are independent. Often data in the real world actually isn’t really independent, but I think what people do a lot of the time is make a good effort at something approaching independence and then close their eyes and pretend it is?
### estimating the density function
Okay! Here’s a useful section! Let’s say that I have 100,000 data points from a distribution. I can draw a histogram like this of those data points:
If I have 100,000 data points, it’s pretty likely that that histogram is pretty close to the actual distribution. But this is math, so we should be able to make that statement precise, right?
For example suppose that 5% of the points in my sample are more than 100. Is the probability that a point is greater than 100 actually 0.05? The book gives a nice formula for this:
$$\mathbb{P}(|\widehat{P}_n(A) - P(A)| > \epsilon ) \leq 2e^{-2n\epsilon^2}$$
(by “Hoeffding’s inequality” which I’ve never heard of before). Fun aside about that inequality: here’s a nice jupyter notebook by henry wallace using it to identify the most common Boggle words.
here, in our example:
• n is 100,000 (the number of data points we have)
• $A$ is the set of points more than 100
• $\widehat{P}_n(A)$ is the empirical probability that a point is more than 100 (0.05)
• $P(A)$ is the actual probability
• $\epsilon$ is the error tolerance (how close we need the empirical estimate to be to the true probability)
So, what’s the probability that the real probability is between 0.04 and 0.06? $\epsilon = 0.01$, so it’s $2e^{-2 \times 100,000 \times (0.01)^2} = 4e^{-9}$ ish (according to wolfram alpha)
here is a table of how sure we can be:
• 100,000 data points: 4e-9 (TOTALLY CERTAIN that 4% - 6% of points are more than 100)
• 10,000 data points: 0.27 (27% probability that we’re wrong! that’s… not bad?)
• 1,000 data points: 1.6 (we know the probability we’re wrong is less than.. 160%? that’s not good!)
• 100 data points: lol
so basically, in this case, using this formula: 100,000 data points is AMAZING, 10,000 data points is pretty good, and 1,000 is much less useful. If we have 1000 data points and we see that 5% of them are more than 100, we DEFINITELY CANNOT CONCLUDE that 4% to 6% of points are more than 100. But (using the same formula) we can use $\epsilon = 0.04$ and conclude that with 92% probability 1% to 9% of points are more than 100. So we can still learn some stuff from 1000 data points!
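This table is easy to check yourself: it comes straight from plugging numbers into Hoeffding’s inequality. Here’s a small Python sketch (the function name is mine, not from the book):

```python
import math

def hoeffding_bound(n, eps):
    """Upper bound on the probability that the empirical fraction is off
    from the true probability by more than eps, given n independent
    samples: 2 * exp(-2 * n * eps^2) (Hoeffding's inequality)."""
    return 2 * math.exp(-2 * n * eps**2)

# the table above, with eps = 0.01 (i.e. "is it between 0.04 and 0.06?"):
for n in [100_000, 10_000, 1_000, 100]:
    print(n, hoeffding_bound(n, 0.01))

# loosening the tolerance to eps = 0.04 with only 1000 points:
print(hoeffding_bound(1_000, 0.04))  # about 0.08, so ~92% confidence
```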
This intuitively feels pretty reasonable to me – like it makes sense to me that if you have NO IDEA what your distribution that with 100,000 points you’d be able to make quite strong inferences, and that with 1000 you can do a lot less!
### more data points are exponentially better?
One thing that I think is really cool about this estimating the density function formula is that how sure you can be of your inferences scales exponentially with the size of your dataset (this is the $e^{-2n\epsilon^2}$). And also exponentially with the square of the tolerance (so wanting to be sure within 0.01 is VERY DIFFERENT than within 0.04). So 100,000 data points isn’t 10x better than 10,000 data points, it’s actually like 10000000000000x better.
Is that true in other places? If so that seems like a super useful intuition! I still feel pretty uncertain about this, but having some basic intuition about “how much more useful is 10,000 data points than 1,000 data points?“) feels like a really good thing.
### some math about the bootstrap
The next chapter is about the bootstrap! Basically the way the bootstrap works is:
1. you want to estimate some statistic (like the median) of your distribution
2. the bootstrap lets you get an estimate and also the variance of that estimate
3. you do this by repeatedly sampling with replacement from your data and then calculating the statistic you want (like the median) on your samples
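The three steps above are only a few lines of code. A minimal sketch (mine, not from the book) that bootstraps the median:

```python
import random
import statistics

def bootstrap_median(data, n_resamples=1000):
    """Steps 1-3 above: estimate the median, plus the variance of that
    estimate, by repeatedly resampling the data with replacement."""
    medians = []
    for _ in range(n_resamples):
        resample = random.choices(data, k=len(data))  # WITH replacement
        medians.append(statistics.median(resample))
    return statistics.median(data), statistics.variance(medians)

random.seed(0)
data = [random.gauss(0, 1) for _ in range(500)]
estimate, variance = bootstrap_median(data)
# estimate should be close to 0 (the true median of a standard normal),
# and variance tells us roughly how much to trust that estimate
```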
I’m not going to go too much into how to implement the bootstrap method because it’s explained in a lot of places on the internet. Let’s talk about the math!
I think in order to say anything meaningful about bootstrap estimates I need to learn a new term: a consistent estimator.
### What’s a consistent estimator?
Wikipedia says:
In statistics, a consistent estimator or asymptotically consistent estimator is an estimator — a rule for computing estimates of a parameter $\theta_0$ — having the property that as the number of data points used increases indefinitely, the resulting sequence of estimates converges in probability to $\theta_0$.
This includes some terms where I forget what they mean (what’s “converges in probability” again?). But this seems like a very good thing! If I’m estimating some parameter (like the median), I would DEFINITELY LIKE IT TO BE TRUE that if I do it with an infinite amount of data then my estimate works. An estimator that is not consistent does not sound very useful!
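To make that concrete: the sample median is a consistent estimator of the distribution’s median, and you can watch the convergence happen numerically (a quick sketch of mine, not from the book):

```python
import random
import statistics

random.seed(42)
# draw from a standard normal (true median 0) and watch the sample
# median converge toward it as n grows
errors = []
for n in [100, 10_000, 1_000_000]:
    sample = [random.gauss(0, 1) for _ in range(n)]
    errors.append(abs(statistics.median(sample)))
    print(n, errors[-1])
# typically the error shrinks like 1/sqrt(n)
```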
### why/when are bootstrap estimators consistent?
spoiler: I have no idea. The book says the following:
Consistency of the bootstrap can now be expressed as follows.
3.19 Theorem. Suppose that $\mathbb{E}(X_1^2) < \infty$. Let $T_n = g(\overline{X}_n)$ where $g$ is continuously differentiable at $\mu = \mathbb{E}(X_1)$ with $g^{\prime}(\mu) \neq 0$. Then,
$$\sup_u \left| \mathbb{P}_{\widehat{F}_n} \left( \sqrt{n} \left( T( \widehat{F}_n^*) - T( \widehat{F}_n) \right) \leq u \right) - \mathbb{P}_{F} \left( \sqrt{n} \left( T( \widehat{F}_n) - T(F) \right) \leq u \right) \right| \rightarrow^\text{a.s.} 0$$
3.21 Theorem. Suppose that $T(F)$ is Hadamard differentiable with respect to $d(F,G)= sup_x|F(x)-G(x)|$ and that $0 < \int L^2_F(x) dF(x) < \infty$. Then,
$$\sup_u \left| \mathbb{P}_{\widehat{F}_n} \left( \sqrt{n} \left( T( \widehat{F}_n^*) - T( \widehat{F}_n) \right) \leq u \right) - \mathbb{P}_{F} \left( \sqrt{n} \left( T( \widehat{F}_n) - T(F) \right) \leq u \right) \right| \rightarrow^\text{P} 0$$
things I understand about these theorems:
• the two formulas they’re concluding are the same, except I think one is about convergence “almost surely” and one about “convergence in probability”. I don’t remember what either of those mean.
• I think for our purposes of doing Regular Boring Things we can replace “Hadamard differentiable” with “differentiable”
• I think they don’t actually show the consistency of the bootstrap, they’re actually about consistency of the bootstrap confidence interval estimate (which is a different thing)
I don’t really understand how they’re related to consistency, and in particular the $\sup_u$ thing is weird, like if you’re looking at $\mathbb{P}(something < u)$, wouldn’t you want to minimize $u$ and not maximize it? Maybe it’s a typo and it should be $\inf_u$?
it concludes:
there is a tendency to treat the bootstrap as a panacea for all problems. But the bootstrap requires regularity conditions to yield valid answers. It should not be applied blindly.
### this book does not seem to explain why the bootstrap is consistent
In the appendix (3.7) it gives a sketch of a proof for showing that estimating the median using the bootstrap is consistent. I don’t think this book actually gives a proof anywhere that bootstrap estimates in general are consistent, which was pretty surprising to me. It gives a bunch of references to papers. Though I guess bootstrap confidence intervals are the most important thing?
### that’s all for now
This is all extremely stream of consciousness and I only spent 2 hours trying to work through this, but some things I think I learned in the last couple hours are:
1. maybe having more data is exponentially better? (is this true??)
2. “consistency” of an estimator is a thing, not all estimators are consistent
3. understanding when/why nonparametric bootstrap estimators are consistent in general might be very hard (the proof that the bootstrap median estimator is consistent already seems very complicated!)
4. bootstrap confidence intervals are not the same thing as bootstrap estimators. Maybe I’ll learn the difference next!
### 2018: Year in review
I wrote these in 2015 and 2016 and 2017 and it’s always interesting to look back at them, so here’s a summary of what went on in my side projects in 2018.
### ruby profiler!
At the beginning of this year I wrote rbspy (docs: https://rbspy.github.io/). It inspired a Python version called py-spy and a PHP profiler called phpspy, both of which are excellent. I think py-spy in particular is probably better than rbspy which makes me really happy.
Writing a program that does something innovative (top for your Ruby program’s functions!) and inspiring other people to make amazing new tools is something I’m really proud of.
A very surprising thing that happened in 2018 is that I started a business! This is the website: https://wizardzines.com/, and I sell programming zines.
It’s been astonishingly successful (it definitely made me enough money that I could have lived on just the revenue from the business this year), and I’m really grateful to everyone who’s supported that work. I hope the zines have helped you. I always thought that it was impossible to make anywhere near as much money teaching people useful things as I can as a software developer, and now I think that’s not true. I don’t think that I’d want to make that switch (I like working as a programmer!), but now I actually think that if I was serious about it and was interested in working on my business skills, I could probably make it work.
I don’t really know what’s next, but I plan to write at least one zine next year. I learned a few things about business this year, mainly from:
I used to think that sales / marketing had to be gross, but reading some of these business books made me think that it’s actually possible to run a business by being honest & just building good things.
### work!
this is mostly about side projects, but a few things about work:
• I still have the same manager (jay). He’s been really great to work with. The help! i have a manager! zine is secretly largely things I learned from working with him.
• my team made some big networking infrastructure changes and it went pretty well. I learned a lot about proxies/TLS and a little bit about C++.
• I mentored another intern, and the intern I mentored last year joined us full time!
When I go back to work I’m going to switch to working on something COMPLETELY DIFFERENT (writing code that sends messages to banks!) for 3 months. It’s a lot closer to the company’s core business, and I think it’ll be neat to learn more about how financial infrastructure works.
I struggled a bit with understanding/defining my job this year. I wrote What’s a senior engineer’s job? about that, but I have not yet reached enlightenment.
### talks!
I gave 4 talks in 2018:
• So you want to be a wizard at StarCon
• Building a Ruby profiler at the Recurse Center’s localhost series
• Build Impossible Programs in May at Deconstruct.
• High Reliability Infrastructure Migrations at Kubecon. I’m pretty happy about this talk because I’ve wanted to give a good talk about what I do at work for a long time and I think I finally succeeded. Previously when I gave talks about my work I think I fell into the trap of just describing what we do (“we do X Y Z” … “okay, so what?“). With this one, I think I was able to actually say things that were useful to other people.
In past years I’ve mostly given talks which can mostly be summarized “here are some cool tools” and “here is how to learn hard things”. This year I changed focus to giving talks about the actual work I do – there were two talks about building a Ruby profiler, and one about what I do at work (I spend a lot of time on infrastructure migrations!)
I’m not sure whether I’ll give any talks in 2019. I travelled more than I wanted to in 2018, and to stay sane I ended up having to cancel on a talk I was planning to give with relatively short notice, which wasn’t good.
### podcasts!
I also experimented a bit with a new format: the podcast! These were basically all really fun! They don’t take that long (about 2 hours total?).
what I learned about doing podcasts:
• It’s really important to give the hosts a list of good questions to ask, and to be prepared to give good answers to those questions! I’m not a super polished podcast guest.
• you need a good microphone. At least one of the podcast hosts told me I actually couldn’t be on their podcast unless I had a good enough microphone, so I bought a medium fancy microphone. It wasn’t too expensive and it’s nice to have a better quality microphone! Maybe I will use it more to record audio/video at some point!
### !!Con
I co-organized !!Con for the 4th time – I ran sponsorships. It’s always such a delight and the speakers are so great.
!!Con is expanding to the west coast in 2019 – I’m not directly involved with that but it’s going to be amazing.
### blog posts!
I apparently wrote 54 blog posts in 2018. A couple of my favourites are What’s a senior engineer’s job?, How to teach yourself hard things, and batch editing files with ed.
There were basically 4 themes in blogging for 2018:
• progress on the rbspy project while I was working on it (this category)
• computer networking / infrastructure engineering (basically all I did at work this year was networking, though I didn’t write about it as much as I might have)
• musings about zines / business / developer education, for instance why sell zines? and who pays to educate developers?
• a few of the usual “how do you learn things” / “how do you succeed at your job” posts as I figure things out about that, for instance working remotely, 4 years in
### a tiny inclusion project: a guide to performance reviews
Last year in addition to my actual job, I did a couple of projects at work towards helping make sure the performance/promotion process works well for folks – I collaborated with the amazing karla on the idea of a “brag document”, and redid our engineering levels.
This year, in the same vein, I wrote a document called the “Unofficial guide to the performance reviews”. A lot of folks said it helped them but probably it’s too early to celebrate. I think explaining to folks how the performance review process actually works and how to approach it is really valuable and I might try to publish a more general version here at some point.
I like that I work at a place where it’s possible/encouraged to do projects like this. I spend a relatively small amount of time on them (maybe I spent 15 hours on this one?) but it feels good to be able to make tiny steps towards building a better workplace from time to time. It’s really hard to judge the results though!
### conclusions?
some things that worked in 2018:
• setting boundaries around what my job is
• doing open source work while being paid for it
• doing small inclusion projects at work
• writing zines is very time consuming but I feel happy about the time I spent on that
• blogging is always great
### New talk: High Reliability Infrastructure Migrations
On Tuesday I gave a talk at KubeCon called High Reliability Infrastructure Migrations. The abstract was:
For companies with high availability requirements (99.99% uptime or higher), running new software in production comes with a lot of risks. But it’s possible to make significant infrastructure changes while maintaining the availability your customers expect! I’ll give you a toolbox for derisking migrations and making infrastructure changes with confidence, with examples from our Kubernetes & Envoy experience at Stripe.
## video
### slides
Here are the slides:
since everyone always asks, I drew them in the Notability app on an iPad. I do this because it’s faster than trying to use regular slides software and I can make better slides.
## a few notes
Here are a few links & notes about things I mentioned in the talk
### skycfg: write functions, not YAML
I talked about how my team is working on non-YAML interfaces for configuring Kubernetes. The demo is at skycfg.fun, and it’s on GitHub here. It’s based on Starlark, a configuration language that’s a subset of Python.
My coworker John has promised that he’ll write a blog post about it at some point, and I’m hoping that’s coming soon :)
### no haunted forests
I mentioned a deploy system rewrite we did. John has a great blog post about when rewrites are a good idea and how he approached that rewrite called no haunted forests.
### ignore most kubernetes ecosystem software
One small point that I made in the talk was that on my team we ignore almost all software in the Kubernetes ecosystem so that we can focus on a few core pieces (Kubernetes & Envoy, plus some small things like kiam). I wanted to mention this because I think often in Kubernetes land it can seem like everyone is using Cool New Things (helm! istio! knative! eep!). I’m sure those projects are great but I find it much simpler to stay focused on the basics and I wanted people to know that it’s okay to do that if that’s what works for your company.
I think the reality is that actually a lot of folks are still trying to work out how to use this new software in a reliable and secure way.
### other talks
I haven’t watched other Kubecon talks yet, but here are 2 links:
I heard good things about this keynote from melanie cebula about kubernetes at airbnb, and I’m excited to see this talk about kubernetes security. The slides from that security talk look useful
Also I’m very excited to see Kelsey Hightower’s keynote as always, but that recording isn’t up yet. If you have other Kubecon talks to recommend I’d love to know what they are.
### my first work talk I’m happy with
I usually give talks about debugging tools, or side projects, or how I approach my job at a high level – not on the actual work that I do at my job. What I talked about in this talk is basically what I’ve been learning how to do at work for the last ~2 years. Figuring out how to make big infrastructure changes safely took me a long time (and I’m not done!), and so I hope this talk helps other folks do the same thing.
### How do you document a tech project with comics?
Every so often I get email from people saying basically “hey julia! we have an open source project! we’d like to use comics / zines / art to document our project! Can we hire you?“.
spoiler: the answer is “no, you can’t hire me” – I don’t do commissions. But I do think this is a cool idea and I’ve often wished I had something more useful to say to people than “no”, so if you’re interested in this, here are some ideas about how to accomplish it!
### zine != drawing
First, a terminology distinction. One weird thing I’ve noticed is that people frequently refer to individual tech drawings as “zines”. I think this is due to me communicating poorly somehow, but – drawings are not zines! A zine is a printed booklet, like a small magazine. You wouldn’t call a photo of a model in Vogue a magazine! The magazine has like a million pages! An individual drawing is a drawing/comic/graphic/whatever. Just clarifying this because I think it causes a bit of unnecessary confusion.
### comics without good information are useless
Usually when folks ask me “hey, could we make a comic explaining X”, it doesn’t seem like they have a clear idea of what information exactly they want to get across, they just have a vague idea that maybe it would be cool to draw some comics. This makes sense – figuring out what information would be useful to tell people is very hard!! It’s 80% of what I spend my time on when making comics.
You should think about comics the same way as any kind of documentation – start with the information you want to convey, who your target audience is, and how you want to distribute it (twitter? on your website? in person?), and figure out how to illustrate it after :). The information is the main thing, not the art!
Once you have a clear story about what you want to get across, you can start trying to think about how to represent it using illustrations!
### focus on concepts that don’t change
Drawing comics is a much bigger investment than writing documentation (it takes me like 5x longer to convey the same information in a comic than in writing). So use it wisely! Because it’s not that easy to edit, if you’re going to make something a comic you want to focus on concepts that are very unlikely to change. So talk about the core ideas in your project instead of the exact command line arguments it takes!
Here are a couple of options for how you could use comics/illustrations to document your project!
### option 1: a single graphic
One format you might want to try is a single, small graphic explaining what your project is about and why folks might be interested in it. For example: this zulip comic
This is a short thing, you could post it on Twitter or print it as a pamphlet to give out. The information content here would probably be basically what’s on your project homepage, but presented in a more fun/exciting way :)
You can put a pretty small amount of information in a single comic. With that Zulip comic, the things I picked out were:
• zulip is sort of like slack, but it has threads
• it’s easy to keep track of threads even if the conversation takes place over several days
• you can much more easily selectively catch up with Zulip
• zulip is open source
• there’s an open zulip server you can try out
That’s not a lot of information! It’s 50 words :). So to do this effectively you need to distill your project down to 50 words in a way that’s still useful. It’s not easy!
### option 2: many comics
Another approach you can take is to make a more in depth comic / illustration, like google’s guide to kubernetes or the children’s illustrated guide to kubernetes.
To do this, you need a much stronger concept than “uh, I want to explain our project” – you want to have a clear target audience in mind! For example, if I were drawing a set of Docker comics, I’d probably focus on folks who want to use Docker in production. so I’d want to discuss:
• publishing your containers to a public/private registry
• some best practices for tagging your containers
• how to use layers to save on disk space / download less stuff
• whether it’s reasonable to run the same containers in production & in dev
That’s totally different from the set of comics I’d write for folks who just want to use Docker to develop locally!
### option 3: a printed zine
The main thing that differentiates this from “many comics” is that zines are printed! Because of that, for this to make sense you need to have a place to give out the printed copies! Maybe you’re going to present your project at a major conference? Maybe you give workshops about your project and want to give out the zine to folks in the workshop as notes? Maybe you want to mail it to people?
There are basically 3 ways to make this happen:
1. Hire someone who both understands (or can quickly learn) the technology you want to document and can illustrate well. These folks are tricky to find and probably expensive (I certainly wouldn’t do a project like this for less than $10,000 even if I did do commissions), just because programmers can usually charge a pretty high consulting rate. I’d guess that the main failure mode here is that it might be impossible/very hard to find someone, and it might be expensive.
2. Collaborate with an illustrator to draw it for you. The main failure mode here is that if you don’t give the illustrator clear explanations of your tech to work with, you.. won’t end up with a clear and useful explanation. From what I’ve seen, most folks underinvest in writing clear explanations for their illustrators – I’ve seen a few really adorable tech comics that I don’t find useful or clear at all. I’d love to see more people do a better job of this. What’s the point of having an adorable illustration if it doesn’t teach anyone anything? :)
3. Draw it yourself :). This is what I do, obviously. Stick figures are okay!

Most people seem to use method #2 – I’m not actually aware of any tech folks who have done commissioned comics (though I’m sure it’s happened!). I think method #2 is a great option and I’d love to see more folks do it. Paying illustrators is really fun!

### An example of how C++ destructors are useful in Envoy

For a while now I’ve been working with a C++ project (Envoy), and sometimes I need to contribute to it, so my C++ skills have gone from “nonexistent” to “really minimal”. I’ve learned what an initializer list is and that a method starting with ~ is a destructor. I almost know what an lvalue and an rvalue are but not quite.

But the other day when writing some C++ code I figured out something exciting about how to use destructors that I hadn’t realized!
(the tl;dr of this post for people who know C++ is “julia finally understands what RAII is and that it is useful” :))

### what’s a destructor?

C++ has objects. When a C++ object goes out of scope, the compiler inserts a call to its destructor. So if you have some code like

```cpp
int do_thing() {
    Thing x{}; // this calls the Thing constructor
    return 2;
}
```

there will be a call to x’s destructor at the end of the do_thing function. So the code C++ generates looks something like:

• make new thing
• call the new thing’s destructor
• return 2

Obviously destructors are way more complicated than this. They need to get called when there are exceptions! And sometimes they get called manually. And for lots of other reasons too. But there are 10 million things to know about C++ and that is not what we’re doing today, we are just talking about one thing.

### what happens in a destructor?

A lot of the time memory gets freed, which is how you avoid having memory leaks. But that’s not what we’re talking about in this post! We are talking about something more interesting.

### the thing we’re interested in: Envoy circuit breakers

So I’ve been working with Envoy a lot. 3 second Envoy refresher: it’s a HTTP proxy, your application makes requests to Envoy, which then proxies the request to the servers the application wants to talk to.

One very useful feature Envoy has is this thing called “circuit breakers”. Basically the idea is that if your application makes 50 billion connections to a service, that will probably overwhelm the service. So Envoy keeps track of how many TCP connections you’ve made to a service, and will stop you from making new requests if you hit the limit. The default max_connection limit

### how do you track connection count?

To maintain a circuit breaker on the number of TCP connections, that means you need to keep an accurate count of how many TCP connections are currently open! How do you do that?
Well, the way it works is to maintain a connections counter and:

• every time a connection is opened, increment the counter
• every time a connection is destroyed (because of a reset / timeout / whatever), decrement the counter
• when creating a new connection, check that the connections counter is not over the limit

that’s all! And incrementing the counter when creating a new connection is pretty easy. But how do you make sure that the counter gets decremented when the connection is destroyed? Connections can be destroyed in a lot of ways (they can time out! they can be closed by Envoy! they can be closed by the server! maybe something else I haven’t thought of could happen!) and it seems very easy to accidentally miss a way of closing them.

### destructors to the rescue

The way Envoy solves this problem is to create a connection object (called ActiveClient in the HTTP connection pool) for every connection. Then it:

• increments the counter in the constructor (code)
• decrements the counter in the destructor (code)
• checks the counter when a new connection is created (code)

The beauty of this is that now you don’t need to make sure that the counter gets decremented in all the right places, you now just need to organize your code so that the ActiveClient object’s destructor gets called when the connection has closed.

Where does the ActiveClient destructor get called in Envoy? Well, Envoy maintains 2 lists of clients (ready_clients and busy_clients), and when a connection gets closed, Envoy removes the client from those lists. And when it does that, it doesn’t need to do any extra cleanup!! In C++, anytime an object is removed from a list, its destructor is called. So client.removeFromList(ready_clients_); takes care of all the cleanup. And there’s no chance of forgetting to decrement the counter!!
It will definitely always happen unless you accidentally leave the object on one of those lists, which would be a bug anyway because the connection is closed :)

### RAII

This pattern Envoy is using here is an extremely common C++ programming pattern called “resource acquisition is initialization”. I find that name very confusing but that’s what it’s called. Basically the way it works is:

• identify a resource (like “connection”) where a lot of things need to happen when the connection is initialized / finished
• make a class for that connection
• put all the initialization / finishing code in the constructor / destructor
• make sure the object’s destructor method gets called when appropriate! (by removing it from a vector / having it go out of scope)

Previously I knew about using this pattern for kind of obvious things (make sure all the memory gets freed in the destructor, or make sure file descriptors get closed). But I didn’t realize it was also useful for cases that are slightly less obviously a resource, like “decrement a counter”.

The reason this pattern works is because the C++ compiler/standard library does a bunch of work to make sure that destructors get called when you’re done with an object – the compiler inserts destructor calls at the end of each block of code, after exceptions, and many standard library collections make sure destructors are called when you remove an object from a collection.

### RAII gives you prompt, deterministic, and hard-to-screw-up cleanup of resources

The exciting thing here is that this programming pattern gives you a way to schedule cleaning up resources that’s:

• easy to ensure always happens (when the object goes away, it always happens, even if there was an exception!)
• prompt & deterministic (it happens right away and it’s guaranteed to happen!)

### what languages have RAII?

C++ and Rust have RAII. Probably other languages too. Java, Python, Go, and garbage collected languages in general do not.
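In a language without destructors you can still get the “always decrement, even on errors” guarantee with explicit scoping. Here’s a toy Python version of the connection-counting idea (all the names are mine, this is not Envoy’s actual design):

```python
import contextlib

class ConnectionPool:
    """Toy version of the circuit-breaker counting idea: the counter is
    incremented on entry and ALWAYS decremented on exit, even if the
    body raises, so it can't drift out of sync with reality."""
    def __init__(self, max_connections):
        self.max_connections = max_connections
        self.active = 0

    @contextlib.contextmanager
    def connection(self):
        if self.active >= self.max_connections:
            raise RuntimeError("circuit breaker tripped: too many connections")
        self.active += 1
        try:
            yield "connection"   # stand-in for a real connection object
        finally:
            self.active -= 1     # the "destructor": runs no matter what

pool = ConnectionPool(max_connections=2)
with pool.connection():
    with pool.connection():
        pass  # opening a third nested connection here would raise
```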
In a garbage collected language you can often set up destructors to be run when the object is GC'd. But often (like in this case, with the connection count) you want things to be cleaned up right away when the object is no longer in use, not some indeterminate period later whenever GC happens to run.

Python context managers are a related idea, you could do something like:

with conn_pool.connection() as conn: do stuff

### that's all for now!

Hopefully this explanation of RAII is interesting and mostly correct. Thanks to Kamal for clarifying some RAII things for me!

### Some notes on running new software in production

I'm working on a talk for kubecon in December! One of the points I want to get across is the amount of time/investment it takes to use new software in production without causing really serious incidents, and what that's looked like for us in our use of Kubernetes.

To start out, this post isn't blanket advice. There are lots of times when it's totally fine to just use software and not worry about how it works exactly. So let's start by talking about when it's important to invest.

### when it matters: 99.99%

If you're running a service with a low SLO like 99% I don't think it matters that much to understand the software you run in production. You can be down for like 2 hours a month! If something goes wrong, just fix it and it's fine.

At 99.99%, it's different. That's 45 minutes / year of downtime, and if you find out about a serious issue for the first time in production it could easily take you 20 minutes or more to revert the change. That's half your uptime budget for the year!

### when it matters: software that you're using heavily

Also, even if you're running a service with a 99.99% SLO, it's impossible to develop a super deep understanding of every single piece of software you're using. For example, a web service might use:

• 100 library dependencies
• the filesystem (so there's linux filesystem code!)
• the network (linux networking code!)
• a database (like postgres)
• a proxy (like nginx/haproxy)

If you're only reading like 2 files from disk, you don't need to do a super deep dive into Linux filesystems internals, you can just read the file from disk.

What I try to do in practice is identify the components which we rely on the most (or have the most unusual use cases for!), and invest time into understanding those. These are usually pretty easy to identify because they're the ones which will cause the most problems :)

### when it matters: new software

Understanding your software especially matters for newer/less mature software projects, because it's more likely to have bugs or just not have matured enough to be used by most people without having to worry. I've spent a bunch of time recently with Kubernetes/Envoy which are both relatively new projects, and neither of those are remotely in the category of "oh, it'll just work, don't worry about it". I've spent many hours debugging weird surprising edge cases with both of them and learning how to configure them in the right way.

### a playbook for understanding your software

The playbook for understanding the software you run in production is pretty simple. Here it is:

1. Start using it in production in a non-critical capacity (by sending a small percentage of traffic to it, on a less critical service, etc)
2. Let that bake for a few weeks.
3. Run into problems.
4. Fix the problems. Go to step 3.

Repeat until you feel like you have a good handle on this software's failure modes and are comfortable running it in a more critical capacity. Let's talk about that in a little more detail, though:

### what running into bugs looks like

For example, I've been spending a lot of time with Envoy in the last year.
Some of the issues we've seen along the way are: (in no particular order)

• One of the default settings resulted in retry & timeout headers not being respected
• Envoy (as a client) doesn't support TLS session resumption, so servers with a large amount of Envoy clients get DDOSed by TLS handshakes
• Envoy's active healthchecking means that your services get healthchecked by every client. This is mostly okay but (again) services with many clients can get overwhelmed by it.
• Having every client independently healthcheck every server interacts somewhat poorly with services which are under heavy load, and can exacerbate performance issues by removing up-but-slow servers from the load balancer rotation.
• Envoy doesn't retry failed connections by default
• it frequently segfaults when given incorrect configuration
• various issues with it segfaulting because of resource leaks / memory safety issues
• hosts running out of disk space because we didn't rotate Envoy log files often enough

A lot of these aren't bugs – they're just cases where we expected the default configuration to do one thing, and it did another thing. This happens all the time, and it can result in really serious incidents. Figuring out how to configure a complicated piece of software appropriately takes a lot of time, and you just have to account for that.

And Envoy is great software! The maintainers are incredibly responsive, they fix bugs quickly and its performance is good. It's overall been quite stable and it's done well in production. But just because something is great software doesn't mean you won't also run into 10 or 20 relatively serious issues along the way that need to be addressed in one way or another. And it's helpful to understand those issues before putting the software in a really critical place.

### try to have each incident only once

My view is that running new software in production inevitably results in incidents. The trick:

1.
Make sure the incidents aren't too serious (by making 'production' a less critical system first)
2. Whenever there's an incident (even if it's not that serious!!!), spend the time necessary to understand exactly why it happened and how to make sure it doesn't happen again

My experience so far has been that it's actually relatively possible to pull off "have every incident only once". When we investigate issues and implement remediations, usually that issue never comes back. The remediation can either be:

• a configuration change
• reporting a bug upstream and either fixing it ourselves or waiting for a fix
• a workaround ("this software doesn't work with 10,000 clients? ok, we just won't use it in cases where there are that many clients for now!", "oh, a memory leak? let's just restart it every hour")

Knowledge-sharing is really important here too – it's always unfortunate when one person finds an incident in production, fixes it, but doesn't explain the issue to the rest of the team, so somebody else ends up causing the same incident again later because they didn't hear about the original incident.

### Understand what is ok to break and what isn't

Another huge part of understanding the software I run in production is understanding which parts are OK to break (aka "if this breaks, it won't result in a production incident") and which aren't. This lets me focus: I can put big boxes around some components and decide "ok, if this breaks it doesn't matter, so I won't pay super close attention to it". For example, with Kubernetes:

ok to break:

• any stateless control plane component can crash or be cycled out or go down for 5 minutes at any time. If we had 95% uptime for the kubernetes control plane that would probably be fine, it just needs to be working most of the time.
• kubernetes networking (the system where you give every pod an IP address) can break as much as it wants because we decided not to use it to start with

not ok:

• for us, if etcd goes down for 10 minutes, that's ok. If it goes down for 2 hours, it's not
• containers not starting or crashing on startup (iam issues, docker not starting containers, bugs in the scheduler, bugs in other controllers) is serious and needs to be looked at immediately
• containers not having access to the resources they need (because of permissions issues, etc)
• pods being terminated unexpectedly by Kubernetes (if you configure kubernetes wrong it can terminate your pods!)

with Envoy, the breakdown is pretty different:

ok to break:

• if the envoy control plane goes down for 5 minutes, that's fine (it'll keep working with stale data)
• segfaults on startup due to configuration errors are sort of okay because they manifest so early and they're unlikely to surprise us (if the segfault doesn't happen the 1st time, it shouldn't happen the 200th time)

not ok:

• Envoy crashes / segfaults are not good – if it crashes, network connections don't happen
• if the control server serves incorrect or incomplete data that's extremely dangerous and can result in serious production incidents. (so downtime is fine, but serving incorrect data is not!)

Neither of these lists are complete at all, but they're examples of what I mean by "understand your software".

### sharing ok to break / not ok lists is useful

I think these "ok to break" / "not ok" lists are really useful to share, because even if they're not 100% the same for every user, the lessons are pretty hard won. I'd be curious to hear about your breakdown of what kinds of failures are ok / not ok for software you're using!

Figuring out all the failure modes of a new piece of software and how they apply to your situation can take months.
(this is why when you ask your database team "hey can we just use NEW DATABASE" they look at you in such a pained way). So anything we can do to help other people learn faster is amazing

### Tailwind: style your site without writing any CSS!

Hello! Over the last couple of days I put together a new website for my zines (https://wizardzines.com). To make this website, I needed to write HTML and CSS. Eep!!

Web design really isn't my strong suit. I've been writing mediocre HTML/CSS for probably like 12 years now, and since I don't do it at all in my job and am making no efforts to improve, the chances of my mediocre CSS skills magically improving are… not good.

But! I want to make websites sometimes, and it's 2018! All websites need to be responsive! So even if I make a pretty minimalist site, it does need to at least sort of work on phones and tablets and desktops with lots of different screen sizes. I know about CSS and flexboxes and media queries, but in practice putting all of those things together is usually a huge pain.

I ended up making this site with Tailwind CSS, and it helped me make a site I felt pretty happy with, with my minimal CSS skills and just 2 evenings of work! The Tailwind author wrote a blog post called CSS Utility Classes and "Separation of Concerns" which you should very possibly read instead of this :).

### CSS zen garden: change your CSS, not your HTML

Until yesterday, what I believed about writing good CSS was living in about 2003 with the CSS zen garden. The CSS zen garden was (and is! it's still up!) this site which was like "hey everyone!! you can use CSS to style your websites instead of HTML tables! Just write nice semantic HTML and then you can accomplish anything you need to do with CSS! This is amazing!"

They show it off by providing lots of different designs for the site, which all use exactly the same HTML. It's a really fun & creative thing and it obviously made an impression because I remember it like 10 years later.
And it makes sense! The idea that you should write semantic HTML, kind of like this:

<div class="zen-resources" id="zen-resources">
  <h3 class="resources">Resources:</h3>

and then style those classes.

### writing CSS is not actually working for me

Even though I believe in this CSS zen garden semantic HTML ideal, I feel like writing CSS is not actually really working for me personally. I know some CSS basics – I know font-size and align and min-height and can even sort of use flexboxes and CSS grid. I can mostly center things. I made https://rbspy.github.io/ responsive by writing CSS.

But I only write CSS probably every 4 months or something, and only for tiny personal sites, and in practice I always end up with some media query problem sadly googling "how do I center div" for the 500th time. And everything ends up kind of poorly aligned and eventually I get something that sort of works and hide under the bed.

### CSS frameworks where you don't write CSS

So! There's this interesting thing that has happened where now there are CSS frameworks where you don't actually write any CSS at all to use them! Instead, you just add lots of CSS classes to each element to style it. It's basically the opposite of the CSS zen garden – you have a single CSS file that you don't change, and then you use 10 billion classes in your HTML to style your site.

Here's an example from https://wizardzines.com/zines/manager/. This snippet puts images of the cover and the table of contents side by side.

<div class="flex flex-row flex-wrap justify-center">
  <div class="md:w-1/2 md:pr-4">
    <img src='cover.png'>
  </div>
  <div class="md:w-1/2">
    <a class="outline-none" href='/zines/manager/toc.png'>
      <img src='toc.png'>
    </a>
  </div>
</div>

Basically the outside div is a flexbox – flex means display: flex, flex-row means flex-direction: row, etc. Most (all?) of the classes apply exactly 1 line of CSS.
Here's the 'Buy' Button:

<a class="text-xl rounded bg-orange pt-1 pb-1 pr-4 pl-4 text-white hover:text-white no-underline leading-loose" href="https://gum.co/oh-shit-git">Buy for $10</a>
The Buy button breaks down as:
• pt, pb, pr, pl are padding
• text-white, hover:text-white are the text color
• no-underline is text-decoration: none
• leading-loose sets line-height: 1.5
### why it’s fun: easy media queries
Tailwind does a really nice thing with media queries, where if you add a class lg:pl-4, it means "add padding, but only on screens that are 'large' or bigger".
I love this because it’s really easy to experiment and I don’t need to go hunt through my media queries to make something look better on a different screen size! For example, for that image example above, I wanted to make the images display side by side, but only on biggish screens. So I could just add the class md:w-1/2, which makes the width 50% on screens bigger than ‘medium’.
<div class="md:w-1/2 md:pr-4">
<img src='cover.png'>
</div>
Basically there’s CSS in Tailwind something like:
@media screen and (min-width: 800px) {
  .md\:w-1\/2 {
    width: 50%;
  }
}
I thought it was interesting that all of the Tailwind media queries seem to be expressed in terms of min-width instead of max-width. It seems to work out okay.
### why it’s fun: it’s fast to iterate!
Usually when I write CSS I try to add classes in a vaguely semantic way to my code, style them with CSS, realize I made the wrong classes, and eventually end up with weird divs with the id “WRAPPER-WRAPPER-THING” or something in a desperate attempt to make something centered.
It feels incredibly freeing to not have to give any of my divs styles or IDs at all and just focus on thinking about how they should look. I just have one kind of thing to edit! (the HTML). So if I want to add some padding on the left, I can just add a pl-2 class, and it’s done!
https://wizardzines.com/ has basically no CSS at all except for a single <link href="https://cdn.jsdelivr.net/npm/tailwindcss/dist/tailwind.min.css" rel="stylesheet">.
### why is this different from inline styles?
These CSS frameworks are a little weird because adding the no-underline class is literally the same as writing an inline text-decoration: none. So is this just basically equivalent to using inline CSS styles? It’s not! Here are a few extra features it has:
1. media queries. Being able to quickly specify alternate attributes depending on the screen size (sm:text-orange md:text-white) is awesome
2. Limits & standards. With normal CSS, I can make any element any width I want. For me, this is not a good thing! With tailwind, there are only 30ish options for width, and I found that these limits made it way easier for me to make reasonable CSS choices that made my site look the way I wanted. No more width: 300px; /* i hope this looks okay i don't know help */ Here's the colour palette! It forces you to do everything in em instead of using pixels, which I understand is a Good Idea even though I never actually do it when writing CSS.
### why does it make sense to use CSS this way?
It seems like there are some other trends in web development that make this approach to CSS make more sense than it might have in, say, 2003.
I wonder if the reason this approach makes more sense now is that we’re doing more generation of HTML than we were in 2003. In my tiny example, this approach to CSS actually doesn’t introduce that much duplication into my site, because all of the HTML is generated by Hugo templates, so most styles only end up being specified once anyway. So even though I need to write this absurd text-xl rounded bg-orange pt-1 pb-1 pr-4 pl-4 text-white hover:text-white no-underline leading-loose set of classes to make a button, I only really need to write it once.
I’m not sure!
### other similar CSS frameworks
• tachyons
• bulma
• tailwind
• to some extent the much older bootstrap, though when I’ve used that I ultimately felt like all my sites looked exactly the same (“oh, another bootstrap site”), which made me stop using it.
There are probably lots more. I haven’t tried Tachyons or Bulma at all. They look nice too.
### utility-first, not utility-only
One thing the Tailwind author says that I think is interesting is that the goal of Tailwind is not actually for you to never write CSS (even though obviously you can get away with that for small sites). There's some more about that in these HN comments.
### should everyone use this? no idea
I have no position on the One True Way to write (or not write) CSS. I’m not a frontend developer and you definitely should not take advice from me. But I found this a lot easier than just about everything I’ve tried previously, so maybe it will help you too.
### When does teaching with comics work well?
I’m speaking at Let’s sketch tech! in San Francisco in December. I’ve been thinking about what to talk about (the mechanics of making zines? how comics skills are different from drawing skills? the business of self-publishing?). So here’s one interesting question: in what situations does using comics to teach help?
### comics are kind of magic
The place I’m starting with is – comics often feel magical to me. I’ll post a comic on, for instance, /proc, and dozens of people will tell me “wow, I didn’t know this existed, this is so useful!“. It seems clear that explaining things with comics often works well for a lot of people. But it’s less clear which situations comics are useful in! So this post is an attempt to explore that.
### what’s up with “learning styles?”
One possible way to answer the question “when does using comics to teach work well?” is “well, some people are visual learners, and for those people comics work well”. This is based on the idea that different people have different “learning styles” and learn more effectively when taught using their preferred learning style.
It’s clear that different people have different learning preferences (for instance I like reading text and dislike watching videos). From my very brief reading of Wikipedia, it seems less clear that folks actually learn more effectively when taught using their preferences. So, whether or not this is true, it’s not how I think about what I’m doing here.
### learning preferences still matter
You could conclude from this that learning preferences don’t matter at all, and you should just teach any given concept in the best way for that concept. But!! I think learning preferences still matter, at least for me. I don’t teach in a classroom, I teach whoever feels like reading what I’m writing on the internet! And if people don’t feel like learning the things I’m teaching because of the way they’re presented, they won’t!
For example – I don't watch videos to learn. (which is not to say that I'm incapable of learning from videos, it's just that I don't watch them). So if someone is teaching a lot of cool things I want to learn on YouTube, I won't watch them!
So right now I’m reading statements like “I’m a visual learner” as a preference worth paying attention to :).
### when comics help: diagrams
A lot of the systems I work with involve a lot of interacting systems. For example, Kubernetes is a complicated system with many components. It took me months to understand how the components fit together. Eventually I understood that the answer is this diagram:
The point of this diagram is that all Kubernetes’ state lives in etcd, every other Kubernetes component decides what to do by making requests to the API server, and none of the components communicate with each other (or etcd) directly. Those are some of the most important things to know about Kubernetes’ architecture, which is why they’re in the diagram.
Not all diagrams are helpful though!! I’m going to pick on someone else’s kubernetes diagram (source), which is totally accurate but which I personally find less helpful.
I think the way this diagram (and a lot of diagrams!) are drawn is:
• identify the components of the system
• draw boxes for each component and arrows between components that communicate
This approach works well in a lot of contexts, but personally I find it often leaves me feeling confused about how the system works. Diagrams like this often don’t highlight the most important/unusual architectural decisions! The way I like to draw diagrams is, instead:
• figure out what the key architecture decision(s) are that folks need to understand to use it
• draw a diagram that illustrates those architecture decisions (possibly including boxes and arrows)
• leave out parts that aren’t key to understanding the architecture
So, for that kubernetes diagram, I left out pods and the role of the kubelet and where any of these components are running (on a master? on a worker?), because even though those are very important, they weren't my teaching goals for the diagram.
### when comics help: explaining scenarios
Something I find really effective is to quickly explain a few important things about something that’s really complicated like “how to run kubernetes” or “how distributed systems work”.
Often when trying to explain a huge topic, people start with generalities (“let me explain what a linearizable system is!“). I have another approach that I prefer, which I think of as the “scenes from” approach, or “get specific!”. (which is the same as the best way to give a lightning talk – explain one specific interesting thing instead of trying to give an overview).
The idea is to zoom into a common specific scenario that you'll run into in real life. For example, a really common situation when using a linearizable distributed system is that it'll periodically become unavailable due to a leader election. I didn't know that was common when I started working with distributed systems!! So just saying "hey, here is a thing that happens in practice" can be useful.
Here are 2 example comics I’ve done in this style:
Comics are a really good fit for illustrating scenarios like this because often there’s some kind of interaction! (“can’t you see we’re having a leader election??”)
### when comics help: writing a short structured list
I’ve gotten really into using comics to explain command line tools recently (eg the bite size command line zine).
One of my favorite comics from that zine is the grep comic. The reason I love this comic is that it literally includes every grep command line argument I’ve ever used, as well as a few I haven’t but that I think seem useful. And I’ve been using grep for 15 years! I think it’s amazing that it’s possible to usefully summarize grep in such a small space.
I think it's important in this case that the list be structured – all of the things in this list are the same type ("grep command line arguments"). I think comics work well here just because you can make the list colourful / fun / visually appealing.
### when comics help: explaining a simple idea
I spent most of bite size linux explaining various Linux ideas. Here’s a pipes comic that I was pretty happy with! I think this is a little bit like “draw a diagram” – there are a few fundamental concepts about pipes that I think are useful to understand, specifically that pipes have a buffer and that writes to a pipe block if the buffer is full.
I think comics work well for this just because you can mix text and small diagrams really easily, and with something like pipes the tiny diagrams help a lot.
### that’s all for now
I don’t think this is the ‘right’ categorization of “when comics work for teaching” yet. But I think this is a somewhat accurate description of how I’ve been using them so far. If you have other thoughts about when comics work (and when they don’t!) I’d love to hear them.
### New zine: Oh shit, git!
Hello! Last week Katie Sylor-Miller and I released a new zine called “Oh shit, Git!”. It has a bunch of common git mistakes and how to fix them! I learned a surprising number of things by working on it (like what HEAD@{2} means, and that you can do my-branch-name@{2} to see what a branch was previously pointing to, and more ways to use git diff)
You can get it for $10 at Oh shit, git! or a swear-free version at Dangit, git!. Here’s the cover and table of contents: (you can click on the table of contents to make it bigger). ### why this zine? I’ve thought for a couple of years that it might be fun to write a git zine, but I had NO IDEA how to do it. I was in this weird place with git where, even though I know that git is really confusing, I felt like I’d forgotten what it was like to be confused/scared by Git. And I write most things from a place of “I was super confused by this thing just recently, let me explain it!!”. But then!! I saw that Katie Sylor-Miller had made this delightful website called oh shit, git! explaining how to get out of common git mishaps. I thought this was really brilliant because a lot of the things on that site (“oh shit, i committed to the wrong branch!“) are things I remember being really scary when I was less comfortable with git! So I thought, maybe this could be useful for folks to have as a paper reference! Maybe we could make a zine out of it! So I emailed her and she agreed to work with me. And now here it is! :D. Very excited to have done a first collaboration. ### what’s new in the oh shit, git! zine? The zine isn’t the same as the website – we decided we wanted to add some fundamental information about how Git works (what’s a commit?), because to really work with Git effectively you need to understand at least a little bit about how commits and branches work! And some of the explanations are improved. Probably about 50% of the material in the zine is from the website and 50% is new. ### a couple of example pages Here are a couple of example pages, to give you an idea of what’s in the zine: and a page on git reflog: ### that might be it for zines in 2018! I’m not sure, but I don’t think I’ll write any more zines for a couple of months. So far there have been 5 (!!!) this year – perf, bite size linux, bite size command line, help! I have a manager!, and this one!. 
I'm really happy with that number and very grateful to everyone who's supported them. ideas I have for zines right now include:

• kubernetes
• how to do statistics using programming
• 'bite size networking', on the 10 billion different command line tools used for different networking things
• 'bite size linux v2', about more core linux concepts that i didn't get to in 'bite size linux'

There's a definite tradeoff between writing zines and blogging, and writing blog posts is really fun. Maybe I'll try going back in that direction for a little.

### Some Envoy basics

Envoy is a newish network proxy/webserver in the same universe as HAProxy and nginx. When I first learned about it around last fall, I was pretty confused by it. There are a few kinds of questions one might have about any piece of software:

• how do you use it?
• why is it useful?
• how does it work internally?

I'm going to spend most of my time in this post on "how do you use it?", because I found a lot of the basics about how to configure Envoy very confusing when I started. I'll explain some of the Envoy jargon that I was initially confused by (what's an SDS? XDS? CDS? EDS? ADS? filter? cluster? listener? help!) There will also be a little bit of "why is it useful?" and nothing at all about the internals.

### What's Envoy?

Envoy is a network proxy. You compile it, you put it on the server where you want it, you tell it which configuration file to use, and away you go!

Here's probably the simplest possible example of using Envoy. The configuration file is a gist. This example starts a webserver on port 7777 that proxies to another HTTP server on port 8000. If you have Docker, you can try it now – just download the configuration, start the Envoy docker image, and away you go!
python -mSimpleHTTPServer &  # Start a HTTP server on port 8000
wget https://gist.githubusercontent.com/jvns/340e4d20c83b16576c02efc08487ed54/raw/1ddc3038ed11c31ddc70be038fd23dddfa13f5d3/envoy_config.json
docker run --rm --net host -v=$PWD:/config envoyproxy/envoy /usr/local/bin/envoy -c /config/envoy_config.json
This will start an Envoy HTTP server, and then you can make a request to Envoy! Just curl localhost:7777 and it’ll proxy the request to localhost:8000.
### Envoy basic concepts: clusters, listeners, routes, and filters
This small tiny envoy_config.json we just ran contains all the basic Envoy concepts!
First, there’s a listener. This tells Envoy to bind to a port, in this case 7777:
"listeners": [{
Next up, the listener has filters. Filters tell the listener what to do with the requests it receives, and you give Envoy an array of filters. If you're doing something complicated, typically you'll apply several filters to every request coming in.
There are a few different kinds of filters (see list of TCP filters), but the most important filter is probably the envoy.http_connection_manager filter, which is used for proxying HTTP requests. The HTTP connection manager has a further list of HTTP filters that it applies (see list of HTTP filters). The most important of those is the envoy.router filter which routes requests to the right backend.
In our example, here’s how we’ve configured our filters. There’s one TCP filter (envoy.http_connection_manager) which uses 1 HTTP filter (envoy.router)
"filters": [
{
"name": "envoy.http_connection_manager",
"config": {
"stat_prefix": "ingress_http",
"http_filters": [{ "name": "envoy.router", "config": {} }],
....
Next, let’s talk about routes. You’ll notice that so far we haven’t explained to the envoy.router filter what to do with the requests it receives. Where should it proxy them? What paths should it match? In our case, the answer to that question is going to be “proxy all requests to localhost:8000”.
The envoy.router filter is configured with an array of routes. Here’s how they’re configured in our test configuration. In our case there’s just one route.
"route_config": {
"virtual_hosts": [
{
"name": "blah",
"domains": "*",
"routes": [
{
"match": { "prefix": "/" },
"route": { "cluster": "banana" }
This gives a list of domains to match (these are matched against the request's Host header). If we changed "domains": "*" to "domains": "my.cool.service", then we'd need to pass the header Host: my.cool.service to get a response.
If you’re paying attention to the ongoing saga of this configuration, you’ll notice that the port 8000 hasn’t been mentioned anywhere. There’s just "cluster": "banana". What’s a cluster?
Well, a cluster is a collection of addresses (IP address / port pairs) that are the backend for a service. For example, if you have 8 machines running a HTTP service, then you might have 8 hosts in your cluster. Every service needs its own cluster. This example cluster is really simple: it's just a single IP/port, running on localhost.
"clusters":[
{
"name": "banana",
"type": "STRICT_DNS",
"connect_timeout": "1s",
"hosts": [
]
}
]
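Putting the pieces together, here’s roughly what the whole file looks like as one unit. I added the listener address (port 10000) and filled in the localhost:8000 host myself based on the description above, so treat this as a sketch rather than a known-good config:

```json
{
  "listeners": [
    {
      "address": "tcp://0.0.0.0:10000",
      "filters": [
        {
          "name": "envoy.http_connection_manager",
          "config": {
            "stat_prefix": "ingress_http",
            "http_filters": [{ "name": "envoy.router", "config": {} }],
            "route_config": {
              "virtual_hosts": [
                {
                  "name": "blah",
                  "domains": "*",
                  "routes": [
                    { "match": { "prefix": "/" }, "route": { "cluster": "banana" } }
                  ]
                }
              ]
            }
          }
        }
      ]
    }
  ],
  "clusters": [
    {
      "name": "banana",
      "type": "STRICT_DNS",
      "connect_timeout": "1s",
      "hosts": [
        { "socket_address": { "address": "127.0.0.1", "port_value": 8000 } }
      ]
    }
  ]
}
```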
### tips for writing Envoy configuration by hand
I find writing Envoy configurations from scratch pretty time consuming – there are some examples in the Envoy repository (https://github.com/envoyproxy/envoy), but even after using Envoy for a year this basic configuration actually took me 45 minutes to get right. Here are a few tips:
• Envoy has 2 different APIs: the v1 and the v2 API. Many newer features are only available in the v2 API, and I find its documentation a little easier to navigate because it’s automatically generated from protocol buffers. (eg the Cluster docs are generated from cds.proto)
• A few good starting points in the Envoy API docs: Listener, Cluster, Filter, Virtual Host. To get all the information you need, you have to click around a lot (for example, to see how to configure the cluster for a route you start at “Virtual Host” and click route_config -> virtual_hosts -> routes -> route -> cluster), but it works.
• The architecture overview docs are useful and give an overall explanation of how some Envoy things are configured.
• You can use either json or yaml to configure Envoy. Above I’ve used JSON.
### You can configure Envoy with a server
Even though we started with a configuration file on disk, one thing that makes Envoy really different from HAProxy or nginx is that Envoy often isn’t configured with a configuration file. Instead, you can configure Envoy with one or several configuration servers which dynamically change your configuration.
To get an idea of why this might be useful: imagine that you’re using Envoy to load balance requests to 50ish backend servers, which are EC2 instances that you periodically rotate out. So http://your-website.com requests go to Envoy, and get routed to an Envoy cluster, which needs to be a list of the 50 IP addresses and ports of those servers.
But what if those servers change over time? Maybe you’re launching new ones or they’re getting terminated. You could handle this by periodically changing the Envoy configuration file and restarting Envoy. Or!! You could set up a “cluster discovery service” (or “CDS”), which for example could query the AWS API and return all the IPs of your backend servers to Envoy.
I’m not going to get into the details of how to configure a discovery service, but basically it looks like this (from this template). You tell it how often to refresh and what the address of the server is.
dynamic_resources:
  cds_config:
    api_config_source:
      cluster_names:
        - cds_cluster
      refresh_delay: 30s
...
  - name: cds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    hosts:
      - socket_address:
          protocol: TCP
          port_value: 80
### 4 kinds of Envoy discovery services
There are 4 kinds of resources you can set up discovery services for in Envoy – routes (“what cluster should requests with this HTTP header go to?”), clusters (“what backends does this service have?”), listeners (the filters for a port), and endpoints (the individual addresses in a cluster). These are called RDS, CDS, LDS, and EDS respectively. xDS is the overall protocol.
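For example, here’s roughly what pointing the HTTP connection manager at an RDS server (instead of giving it an inline route_config) looks like. This is a hedged sketch based on the v2 API docs, not a config I’ve tested, and `local_route` / `xds_cluster` are just placeholder names:

```yaml
# inside the envoy.http_connection_manager filter config:
rds:
  route_config_name: local_route   # which route table to ask the server for
  config_source:
    api_config_source:
      cluster_names:
        - xds_cluster              # a cluster pointing at your discovery service
      refresh_delay: 30s
```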
The easiest way to write a discovery service from scratch is probably in Go using the go-control-plane library.
### some Envoy discovery services
It’s definitely possible to write Envoy configuration services from scratch, but there are some other open source projects that implement Envoy discovery services. Here are the ones I know about, though I’m sure there are more:
• There’s an open source Envoy discovery service called rotor which looks interesting. The company that built it just shut down a couple weeks ago.
• Istio (as far as I understand it) is basically an Envoy discovery service that uses information from the Kubernetes API (eg the services in your cluster) to configure Envoy clusters/routes. It has its own configuration language.
• Consul might be adding support for Envoy (see this blog post), though I don’t fully understand the status there
### what’s a service mesh?
Another term that I hear a lot is “service mesh”. Basically a “service mesh” is where you install Envoy on the same machine as every one of your applications, and proxy all your network requests through Envoy.
Basically it lets you more easily control how a bunch of different applications (maybe written in different programming languages) communicate with each other.
### why is Envoy interesting?
I think these discovery services are really the exciting thing about Envoy. If all of your network traffic is proxied through Envoy and you control all Envoy configuration from a central server, then you can potentially:
• use circuit breaking
• route requests only to nearby instances
• encrypt network traffic end-to-end
• run controlled code rollouts (want to send only 20% of traffic to the new server you spun up? okay!)
all without having to change any application code anywhere. Basically it’s a very powerful/flexible decentralized load balancer.
Obviously setting up a bunch of discovery services and operating them and using them to configure your internal network infrastructure in complicated ways is a lot more work than just “write an nginx configuration file and leave it alone”, and it’s probably more complexity than is appropriate for most people. I’m not going to venture into telling you who should or should not use Envoy, but my experience has been that, like Kubernetes, it’s both very powerful and very complicated.
One of the things I really like about Envoy is that you can pass it a HTTP header to tell it how to retry/timeout your requests!! This is amazing because implementing timeout/retry logic correctly works differently in every programming language, and people get it wrong ALL THE TIME. So being able to just pass a header is great.
The timeout & retry headers are documented here, and here are my favourites:
• x-envoy-max-retries: how many times to retry
• x-envoy-retry-on: which failures to retry (eg 5xx or connect-failure)
• x-envoy-upstream-rq-timeout-ms: total timeout
• x-envoy-upstream-rq-per-try-timeout-ms: timeout per retry
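So a request that asks Envoy for “retry 5xx errors up to 2 times, with a 3 second total timeout” might look something like this (the header values here are just example numbers I picked):

```
GET /some/path HTTP/1.1
Host: my.cool.service
x-envoy-max-retries: 2
x-envoy-retry-on: 5xx,connect-failure
x-envoy-upstream-rq-timeout-ms: 3000
```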
### that’s all for now
I have a lot of thoughts about Envoy (too many to write in one blog post!), so maybe I’ll say more later!
### What's a senior engineer's job?
There’s this great post by John Allspaw called “On being a senior engineer”. I originally read it 4ish years ago when I started my current job and it really influenced how I thought about the direction I wanted to go in.
Rereading it 4 years later, one thing that’s really interesting to me about that blog post is that it’s explaining that empathy / helping your team succeed is an important part of being a senior engineer. Which of course is true!
But from where I stand today, most (all?) of the senior engineers I know take on a significant amount of helping-other-people work in addition to their individual programming work. The challenge I see me/my coworkers struggling with today isn’t so much “what?? I have to TALK TO PEOPLE?? UNBELIEVABLE.” and more “wait, how do I balance all of this leadership work with my individual contributions / programming work in a way that’s sustainable for me? How much of what kind of work should I be doing?“. So instead of talking about the attributes that a senior engineer has from Allspaw’s post (which I totally agree with), I want to talk here about the work that a senior engineer does.
### what this post is describing
“what a senior engineer does” is a huge topic and this is a small post. things to keep in mind when reading:
• this is just one possible description of what a “senior engineer” could do. There are a lot of ways to work and this isn’t intended to be definitive.
• I have basically only worked at one company and this is just about my experiences so my perspective is obviously pretty limited
• There are obviously a lot of levels of “senior engineer” out there. This is aimed somewhere around P3/P4 in the Mozilla ladder (senior engineer / staff engineer), maybe a bit more on the “staff” side.
### What’s part of the job
These are things that I view as being mostly a senior engineer’s job and less a manager’s job. (though managers definitely do some of this too, especially creating new projects / relating projects to business priorities)
The thing that holds all this together is that almost all of this work is fundamentally technical: helping someone get unstuck on a tricky project is obviously a human interaction, but the issues we’ll be working on together will generally be computer issues! (“maybe if we simplify this design we can be done with this way sooner!“)
• Write code. (obviously)
• Do code reviews. (obviously)
• Write and review design docs. As with other review tasks, I think of “review design docs” as “get a second set of eyes on it, which will probably help improve the design”.
• Help team members when they’re stuck. Sometimes folks get stuck on a project, and it’s important to work to support them! I think of this less as “parachute from the sky and deliver your magical knowledge to people” and more as “work together to understand the problem they’re trying to solve and see if 2 brains are better than 1” :). This also means working with someone to solve the problem instead of solving the problem for them.
• Hold folks to a high quality standard. “Quality” will mean different things for different folks (for my team it means reliability/security/usability). Usually when someone makes a decision that seems off to me, it’s either because I know something that they don’t or they know something I don’t! So instead of telling someone “hey you did this wrong you should do X instead”, I try to just give them some extra information that they didn’t have and often that sorts it out. And pretty often it turns out that I was missing something and actually their decision was totally reasonable! In the past I’ve very occasionally seen senior engineers try to enforce quality standards by repeating their opinions more and more loudly because they think their opinions are Right and I haven’t personally found that helpful.
• Create new projects. A software engineering team isn’t a zero-sum place! The best engineers I know don’t hoard the most interesting work for themselves, they create new interesting/important work and create space for folks to do that work. For example, someone on my team spearheaded a rewrite of our deployment system which was super successful and now there’s a whole team working on new features that are way easier to build post-rewrite!
• Plan your projects’ work. This is about writing down / communicating the roadmap for projects you’re working on and making sure that folks understand the plan.
• Proactively communicate project risks. It’s really important to recognize when something you’re working on isn’t going well, communicate it to other engineers/managers, and figure out what to do.
• Communicate successes!
• Do side projects that benefit the team/company. I see a lot of senior engineers occasionally doing small high leverage projects (like building dev tooling / helping set policies) that end up helping a LOT of people get their work done a lot better.
• Be aware of how projects relate to business priorities.
• Decide when to stop doing a project. Figuring out when to stop / not start work on something is surprisingly hard :)
I put “write code” first because I find it surprisingly easy to accidentally let that take a back seat :)
One thing I left out is “make estimates”. Making estimates is something I’m still not very good at and that I don’t think I see very much of (?), but I think it could be worth spending more time on some day.
This list feels like a lot and like if you tried to do all those things all the time it would consume all available brain space. I think in general it probably makes sense to carve out a subset and decide “right now I’m going to focus on X Y Z, I think my brain will explode if I try to do A B C as well”.
### What’s not part of the job
This section is a bit tricky. I’m not saying that these aren’t a senior engineer’s job in the sense of “I won’t help create a good work environment on my team, how dare you suggest that’s part of my job!!“. Most senior engineers I know have spent a huge amount of time thinking about these issues and work on them quite a bit.
The reason I think it’s useful to create a boundary here is that everyone I work with has a really strong sense of ownership/responsibility to the team / company (“does it need to be done? well, sure, I can do that!!“) and I think it’s easy for that willingness to do whatever needs to happen to turn into folks getting overwhelmed/overworked/unable to make the kinds of technical contributions that are actually their core job. So if you can create some boundaries around your role it’s easier to decide what sorts of work to ask for help with when things are hectic. The actual boundary you draw of course depends on you / your team :)
Most of these are a manager’s job. Caveats: managers do a lot more than the things listed here (for instance “create new projects”), and at some companies some of these things might actually be the job of a senior engineer (eg sprint management).
• Make sure every team member’s work is recognized
• Make sure work is allocated in a fair way
• Make sure folks are working well together
• Build team cohesion
• Have 1:1s with everyone on the team
• Train new managers / help them understand what’s expected of them (though I think senior ICs often actually do end up picking some of this up?)
• Do project management for projects you’re not working on (where I work, that’s the job of whatever engineer is leading that project)
• Be a product manager
• Do sprint management / organize everyone’s work into milestones / run weekly team meetings
### Explicitly setting boundaries is useful
I ran into an interesting situation recently where I was talking to a manager about which things were and weren’t part of my job as an engineer, and we realized that we had very different expectations! We talked about it and I think it’s sorted out now, but it made me realize that it’s very important to agree about what the expectations are :)
When I started out as an engineer, my job was pretty straightforward – I wrote code, tried to come up with projects that made sense, and that was fine. My manager always had a clear sense of what my job was and it wasn’t too complicated. Now that’s less true! So now I view it as being more my responsibility to define a job that:
• I can do / is sustainable for me
• I want to do / that’s overall enjoyable & in line with my personal goals
• is valuable to the team/organization
And the exact shape of that job will be different for different people (not everyone has the same interests & strengths, for example I am actually not amazing at code review yet!), which I think makes it even more important to negotiate it / do expectation setting.
### Don’t agree to a job you can’t do / don’t want
I think pushing back if I’m asked to do work that I can’t do or that I think will make me unhappy long term is important! I find it kind of tempting to agree to take on a lot of work that I know I don’t really enjoy (“oh, it’s good for the team!”, “well someone needs to do it!“). But, while I obviously sometimes take on tasks just because they need to be done, I think it’s actually really important for team health for folks to be overall doing jobs that are sustainable for them and that they overall enjoy.
So I’ll take on small tasks that just need to get done, but I think it’s important for me not to say “oh sure, I’ll spend a large fraction of my time doing this thing that I’m bad at and that I dislike, no problem” :). And if “someone” needs to do it, maybe that just means we need to hire/train someone new to fill the gap :)
### I still have a lot to learn!
While I feel like I’m starting to understand what this “senior engineer” thing is all about (7 years into my career so far), I still feel like I have a LOT to learn about it and I’d be interested to hear how other people define the boundaries of their job!
### Some possible career goals
I was thinking about career goals a person could have (as a software developer) this morning, and it occurred to me that there are a lot of possible goals! So I asked folks on Twitter what some possible goals were and got a lot of answers.
This list intentionally has big goals and small goals, and goals in very different directions. It definitely does not attempt to tell you what sorts of goals you should have. I’m not sure yet whether it’s helpful or not but here it is just in case :)
I’ve separated them into some very rough categories. Also I feel like there’s a lot missing from this list still, and I’d be happy to hear what’s missing on twitter.
### technical goals
• become an expert in a domain/technology/language (databases, machine learning, Python)
• get to a point where you can drop into new situations or technologies and quickly start making a big impact
• do research-y work / something that’s never been done before
• get comfortable with really big codebases
• work on a system that has X scale/complexity (millions of requests per second, etc)
• scale a project way past its original design goals
• do work that saves the company a large amount of money
• be an incident commander for an incident and run the postmortem
• make a contribution to an open source project
• get better at some skill (testing / debugging / a programming language / machine learning)
• become a core maintainer for an important OSS project
• build an important system from scratch
• be involved with a product/project from start to end (over several years)
• understand how complex systems fail (and how to make them not fail)
• be able to build prototypes quickly for new ideas
### job goals
• pass a programming interview
• get your “dream job” (if you have one)
• work at a prestigious company
• work at a very small company
• work at a company for a really long time (to see how things play out over time)
• work at lots of different companies (to get lots of different perspectives)
• get a raise
• become a manager
• get to a specific title (“architect”, “senior engineer”, “CTO”, “developer evangelist”, “principal engineer”)
• work at a nonprofit / company where you believe in the mission
• work on a product that your family / friends would recognize
• work in many different fields
• work in a specific field you care about (transit, security, government)
• get paid to work on a specific project (eg the linux kernel)
• as an academic, have stable funding to work towards your research interests
• become a baker / work on something else entirely :)
### entrepreneurship goals
This category is obviously pretty big (there are lots of start-your-own-business related goals!) and I’m not going to try to be exhaustive.
• start freelancing
• start a consulting company
• make your first sale of software you wrote
• get VC funding / start a startup
• get to X milestone with a company you started
### product goals
I think the difference between “technical goals” and “product goals” is pretty interesting – this area is more about the impact that your programs have on the people who use them than what those programs consist of technically.
• do your work in a specific way that you care about (eg make websites that are accessible)
• build tools for people who you work with directly (this can be so fun!!)
• make a big difference to a system you care about (eg “internet security”)
• do work that helps solve an important problem (climate change, etc)
• work in a team/project whose product affects more than a million people
• work on a product that people love
• build developer tools
• help new people on your team get started
• help someone get a job/opportunity that they wouldn’t have had otherwise
• mentor someone and see them get better over time
• “be a blessing to others you wished someone else was to you”
• be a union organizer / promote fairness at work
• build a more inclusive team
• build a community that matters to people (via a meetup group or otherwise)
### communication / community goals
• write a technical book
• give a talk (meetup, conference talk, keynote)
• give a talk at a really prestigious conference / in front of people you respect
• give a workshop on something you know really well
• start a conference
• write a popular blog / an article that gets upvoted a lot
• teach a class (eg at a high school / college)
• change the way folks in the industry think about something (eg blameless postmortems, fairness in machine learning)
### work environment goals
A lot of people talked about the flexibility to choose their own work environment / hours (eg “work remotely”).
• get flexible hours
• work remotely
• work in a place where you feel accepted/included
• work with people who share your values (this involves knowing what your values are! :) )
• work with people who are very experienced / skilled
• have good health insurance / benefits
• make X amount of money
### other goals
• remain as curious and in love with programming as the first time I did it
### nobody can tell you what your goals are
This post came out of reading this blog post about how your company’s career ladder is probably not the same as your goals and chasing the next promotion may not be the best way to achieve them.
I’ve been lucky enough to have a lot of my basic goals met (“make money”, “learn a lot of things at work”, “work with kind and very competent people”), and after that I’ve found it hard to figure out which of all of these milestones here will actually feel meaningful to me! Sometimes I will achieve a new goal and find that it doesn’t feel very satisfying to have done it. And other times I will do something that I didn’t think was a huge deal to me, but feel really proud of it afterwards.
So it feels pretty useful to me to write down these things and think “do I really want to work at FANCY_COMPANY? would that feel good? do I care about working at a nonprofit? do I want to learn how to build software products that lots of people use? do I want to work on an application that serves a million requests per second? When I accomplished that goal in the past, did it actually feel meaningful, or did I not really care?”
Hello! As you may have noticed, I’ve been writing a few new zines (they’re all at https://jvns.ca/zines ), and while my zines used to be free (or pay-for-early-access-then-free after), the new ones are not free! They cost $10! In this post, I want to talk a little about why I made the switch and how it’s been going so far.
### selling your work is okay
I wanted to start out by saying something sort of obvious – if you decide to sell your work instead of giving it away for free, you don’t need to justify that (why would you?). Since I’ve started selling my zines, exactly 0 people have told me “julia, how dare you sell your work”, and a lot of people have said “your work is amazing and I’m happy to pay for it! This is great!”
But I still want to talk about this because it’s been a pretty confusing tradeoff for me to think through (what are my goals? does giving things away for free or selling them accomplish my goals better?)
### what are my goals?
I don’t have a super clear set of goals with my blog / zines, but here are a few:
• expose people to new important ideas that they might never have heard of otherwise. I think in systems a lot of knowledge can be hard to get if you don’t know the right people, I think that’s very silly, and I’d like to make a small dent in that.
• explain complicated ideas in the simplest possible way (but not simpler!!!). A lot of things that seem complicated at first actually aren’t really, and I want to show people that.
### free work is easier to distribute
The most obvious advantage is that if something is free, it’s way easier for more people to access it and learn from it. For me, this is the biggest thing – I care about the impact of my writing (writing just for myself is useful, but ideally I’d like for it to help lots of people!)
A really good example of this is this article Open Access for Impact: How Michael Nielsen Reached 3.5M Readers about Michael Nielsen’s book Neural Networks and Deep Learning.
3.5M readers is probably an overestimate, but he says: total time spent by readers is about 250,000 hours, or roughly 125 full time working years. That’s a lot!
This was the biggest reason I held off selling zines for a long time – I worried that if I sold my zines, not that many people would buy them relative to how many folks would download the free versions.
### selling zines makes it easier to spend money (and time) on it
A huge advantage of selling zines, though, is that it makes it way easier to invest in making something that’s high-quality. I’ve spent probably $5000 on tablets / printing / software / illustrators to make zines. Since I’ve made substantially more than $5000 at this point (!!!), investing in things like that is now a really easy decision! I can hire super talented illustrators and pay them a fair amount and not worry about it! I decided earlier this year to buy an iPad (which has made drawing zines SO MUCH EASIER for me, the apple pencil is amaaazing), and instead of thinking “oh no, this is kind of expensive, should I really spend money on it?” I could just reason “this is a tool that will more than pay for itself! I should just buy it!“.
Also, the fact that I’m making money from it makes it way easier to spend time on the project – any given zine takes me weeks of evenings/weekends to make, and carving that time out of my schedule isn’t always easy! If I’m getting paid for it it makes it way easier to stay motivated to make something awesome instead of producing something kinda half-baked.
### people take things they pay for more seriously
Another reason I’m excited about selling zines is that I feel like, since I’ve started doing it and investing a little more into the quality, people have taken the project a little more seriously!
• “bite size linux” is a required text in a university course! This is extremely delightful.
• a bunch of folks who work at various companies have bought zines to give to their coworkers/employees!
I think “this costs money” is a nice way to signal “I actually spent time on this, this is good”.
### people are actually willing to buy zines
At the beginning I said that I was worried that if I sold zines, nobody would buy them, and so nobody would learn from them, and that would be awful. Was that worry justified? Well, I actually have a little bit of data about this!! The only thing I use statistics for on this website is how many people download my zines (I run heap on https://jvns.ca/zines). Here are some stats:
• my most-downloaded zine is “so you want to be a wizard” with 5,000 clicks
• my most-bought zine is “bite size linux” with 3,000 sales (!!!)
3,000 sales is incredible (thank you everyone!!!!) and I’ve been totally blown away by how many people have bought these zines. This actually feels like selling zines results in more people reading the zine – to me, 3,000 sales is WAY BETTER than 5,000 clicks, because I think that someone who bought a zine is probably like 4x more likely to read it than someone who just clicked on a PDF. (4x being a Totally Unscientific Arbitrary Number).
### how do you decide on pricing?
PRICING. EEP. GUYS. I find thinking about pricing SO CONFUSING. There’s this “charge more” narrative I see a lot on the internet which basically goes:
• tie whatever you’re selling to someone else’s business outcomes
• charge them relative to how much money the product can help them make, not relative to how hard it was to build
I think this is a reasonable model and it’s how things like this guide to rails performance are priced. This is not really how I’ve been thinking about it, though – my approach right now is just to charge what I think is a reasonable/fair price, which is $10/zine.
I had a super interesting conversation with Stephanie Hurlburt, though, where she argued that I should be charging more for different reasons! Her argument was:
• We want to build a world where artist/educators can get paid fairly for their work
• $10/zine is not actually a lot of money, it’s only sustainable for julia because julia has a big audience
• if I could figure out how to charge more, I could share that with other people and make a world where smaller creators could be more successful
I find that argument pretty compelling (I would like more people to be able to make money from selling zines!). But I don’t have any plans to charge more for individual zines than $10/zine because $10 just seems like a reasonable price to me and I know that it’s already too much for some folks, especially people in countries where their currency is a lot weaker than the US dollar.
### experimenting with corporate pricing
While I’m pretty reluctant to do experiments with the $10/zine price for individual people, experimenting with corporate pricing is a lot easier! Folks generally aren’t spending their own money, so if I raise the prices for a company to buy a zine, maybe they won’t buy it if they decide it’s too much, but it’s a lot less personal and doesn’t affect someone’s ability to read the zines in the same way.
Right now, companies buy zines from me for 2 reasons:
1. to give them to their employees to teach folks useful things (I charge somewhere between $100 ->$600 for a site license right now)
2. to distribute them at conferences/other events (eg microsoft gave out zines/posters by me at a couple of conferences this year). I’ve only just started doing this but it seems like a super fun way to get more zines into the world!
I have been doing some corporate pricing experiments – for Help! I have a manager! I raised the minimum price to $150 because I think it’s pretty valuable to help folks work better with their managers. We’ll see what happens!
### why not patreon?
As a sidebar – a lot of folks have suggested that I use Patreon. Right now I definitely do not want to use Patreon/other donation-based models for various reasons (though I support creators on Patreon and I think it’s great!). I don’t want to get into it in this post but maybe I’ll talk about this another time! Basically to me the model of “pay $10 for a zine” is super simple, I like it, and I have no desire to switch to Patreon :)
What I’m doing right now is – I’ll post drafts of almost everything I write in my zines on Twitter. This works really well for a lot of reasons:
• I get really early feedback on whether something is working or not – folks will suggest a lot of great improvements in the Twitter replies!
• I get to see what’s resonating with folks – for example, this comics about 1:1s got 2.5K retweets, which is a lot! Knowing that folks found that page really useful helped me decide where to put it in the zine (near the beginning!)
• people who maybe can’t afford $10 for the zine can follow along on Twitter and get all the information anyway
• obviously it’s great advertising – if people like the comics I tweet, they might decide to buy the zine later! :) And if they want to just enjoy the tweets that’s awesome too ❤
As an example, most of the pages from Help! I have a manager! are in this twitter moment.
### a few things that haven’t gone well
Not everything has been 100% amazing with selling zines on the internet! A couple of things that haven’t gone well:
• some people don’t have credit cards / PayPal and so can’t get the zine! I would really really like a good solution to this.
• Gumroad doesn’t have great email deliverability – sometimes when someone buys a zine it’ll end up in their spam. This is pretty easy to resolve (people email me to say that they didn’t get it, and it’s always easy to fix right away), but I wish they were better at this. Otherwise Gumroad is a good platform.
• On my first zine, I didn’t put my email address on the Gumroad page, so some people didn’t know how to get in touch with me when there was a problem and one person opened a dispute. Now I put my email address on Gumroad which I think has fixed that!
• I sent an update email on Gumroad to past zine buyers saying that I had a new zine out and one person replied to say that they didn’t like being emailed. I think there’s a little room to improve here – the fact that Gumroad autoenrolls everyone who buys a zine into an “updates” email list is IMO a bit weird and it feels like it would be better if it was opt-in.
• Someone posted my blog post announcing a new zine to lobste.rs and folks commented that they didn’t think it was appropriate to post non-free things on lobste.rs. I agree with that but this seems hard to prevent since I can’t control what people post on tech news sites :). I think this isn’t a big deal but it didn’t feel great.
I’m sure I’ll make some more mistakes in the future and hopefully I’ll learn from them :). I wanted to post these because I worry a lot about making mistakes when selling things to folks, but once I write down the issues so far they all feel very resolvable. Mostly I just try to reply to email fast when folks have problems, which isn’t that often.

### let’s see how the experiment goes!

So far selling zines feels like:

• I end up with a comparable number of readers (I think there’s not a huge difference?)
• I can make something that’s higher quality (and pay more artists to help me!). It’s way easier to justify spending time on it.
• People take the work more seriously
• Folks have been really positive and supportive about it
• It’s maybe helping a tiny bit to build a world where more folks can get paid to write really awesome educational materials

I’m excited to try out some new things in the future (hopefully printing???). I’ll try to keep writing about what I learn as I go, because how to do this really hasn’t been obvious to me. I’d love to hear what you think!

### New zine: Help! I have a manager!

I just released a new zine! It’s called “Help! I have a manager!”

This zine is everything I wish somebody had told me when I started out in my career and had no idea how I was supposed to work with my manager. Basically I’ve learned along the way that even when I have a great manager, there are still a lot of things I can do to make sure that we work well together, mostly around communicating clearly! So this zine is about how to do that.

You can get it for $10 at https://gum.co/manager-zine. Here’s the cover and table of contents:
The cover art is by Deise Lino. Tons of people helped me write this zine – thanks to Allison, Brett, Jay, Kamal, Maggie, Marc, Marco, Maya, Will, and many others.
### a couple of my favorite pages from the zine
I’ve been posting pages from the zine on twitter as I’ve been working on it. Here are a couple that I think are especially useful – some tips for what even to talk about in 1:1s, and how to do better at asking for feedback.
### Build impossible programs
Hello! My talk from Deconstruct this year (“Build impossible programs”) is up. It’s about my experience building a Ruby profiler. This is the second talk I’ve given about building a profiler – the first one (Building a Ruby profiler) was more of a tech deep dive. This one is a squishier talk about myths I believed about doing ambitious work and how a lot of those myths turn out not to be true.
There’s a transcript on Deconstruct’s site. They’re also gradually putting up the other talks from Deconstruct 2018, which were generally excellent.
### slides
As usual these days I drew the slides by hand. It’s way easier/faster, and it’s more fun.
### zine side note
One extremely awesome thing that happened at Deconstruct was that Gary agreed to print 2300 zines to give away to folks at the conference. They all got taken home which was really nice to see :)
### An awesome new Python profiler: py-spy!
The other day I learned that Ben Frederickson has written an awesome new Python profiler called py-spy!
It takes a similar approach to profiling as rbspy, the profiler I worked on earlier this year – it can profile any running Python program, it uses process_vm_readv to read memory, and it by default displays profiling information in a really easy-to-use way.
Obviously, I think this is SO COOL. Here’s what it looks like profiling a Python program: (gif taken from the github README)
It has this great top-like output by default. The default UI is somewhat similar to rbspy’s, but feels better executed to me :)
### you can install it with pip!
Another thing he’s done that’s really nice is make it installable with pip – you can run pip install py-spy and have it download a binary immediately! This is cool because, even though py-spy is a Rust program, obviously Python programmers are used to installing software with pip and not cargo.
In the README he describes what he had to do to distribute a Rust executable with pip without requiring that users have a Rust compiler installed.
### py-spy probably is more stable than rbspy!
Another nice thing about py-spy is that I believe it only uses Python’s public bindings (e.g. Python.h). What I mean by “public bindings” is the header files you’d find in libpython-dev.
rbspy by contrast uses a bunch of header files from inside the Ruby interpreter. This is because Python for whatever reason includes a lot more struct definitions in its header files.
As a result, if you compare py-spy’s python bindings to rbspy’s ruby bindings, you’ll notice that
• there are way fewer Python binding files (6 vs 42 for Ruby)
• each file is much smaller (~30kb vs 200kb for Ruby)
Basically what I think this means is that py-spy is likely to be easier to maintain longterm than rbspy – since rbspy depends on unstable internal Ruby interfaces, even though it works relatively well today, future versions of Ruby could break it at any time.
### the start of an ecosystem of profilers in Rust?? :)
One thing that I think is super nice is that rbspy & py-spy share some code! There’s this proc-maps crate that Ben extracted from rbspy and improved substantially. I think this is awesome because if someone wants to make a py-spy/rbspy-like profiler in Rust for another language like Perl or Javascript or something, it’s even easier!
It turns out that phpspy is a sampling profiler for PHP, too!
I have this secret dream that we could eventually have a suite of open source profilers for lots of different programming languages that all have similar user interfaces. Today every single profiling tool is different and it’s a pain.
### also rbspy has windows support now!
Ben also contributed Windows support to rbspy, which was amazing, and py-spy has Windows support from the start.
So if you want to profile Ruby or Python programs on Windows, you can!
Page created: Fri, Mar 08, 2019 - 09:00 PM GMT
A 50 MHz sky wave takes 4.04 ms to reach a receiver via re-transmission from a satellite 600 km above Earth's surface. Assuming the re-transmission time by the satellite to be negligible, find the distance between source and receiver. If communication between the two was to be done by the Line of Sight (LOS) method, what should the size and placement of receiving and transmitting antennas be?
Let the receiver be at point A and the source at point B, with re-transmission from a satellite.

Given: velocity of the waves $c = 3\times 10^{8}\ m/s$; time to reach the receiver $t = 4.04\ ms = 4.04\times 10^{-3}\ s$; height of the satellite $h_s = 600\ km$; radius of Earth $R = 6400\ km$.

The wave covers the slant distance $x$ up to the satellite and again down to the receiver, and distance travelled = velocity × time, so

$\frac{2x}{4.04\times 10^{-3}} = 3\times 10^{8}$, giving $x = \frac{3\times 10^{8}\times 4.04\times 10^{-3}}{2} = 6.06\times 10^{5}\ m = 606\ km$.

Using the Pythagorean theorem, $d^{2} = x^{2} - h_{s}^{2} = (606)^{2} - (600)^{2} = 7236$, so $d \approx 85.06\ km$.

The distance between source and receiver is therefore $2d = 2\times 85.06 \approx 170\ km$.

For LOS communication, the maximum distance covered on the ground from a transmitting antenna of height $h_T$ is $d = \sqrt{2Rh_T}$, i.e. $h_T = \frac{d^{2}}{2R}$. So the size of each antenna should be $h_T = \frac{7236}{2\times 6400} = 0.565\ km = 565\ m$, with one such antenna placed at the source and one at the receiver.
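As a quick numerical check of the solution above (same given values, same relations; the variable names are ours):

```python
import math

c = 3e8      # wave speed, m/s
t = 4.04e-3  # total up-and-down travel time, s
h_s = 600.0  # satellite height, km
R = 6400.0   # radius of Earth, km

x = (c * t / 2) / 1000           # one-way slant distance to the satellite, km
d = math.sqrt(x**2 - h_s**2)     # ground distance for one hop, km
separation = 2 * d               # distance between source and receiver, km
h_T = (x**2 - h_s**2) / (2 * R)  # LOS antenna height d^2 / (2R), km

print(round(x), round(separation), round(h_T * 1000))  # 606 170 565
```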
# Trouble with my first latex document
Just downloaded ProTeXt, downloaded TeXstudio, and I'm trying to create a document, and I have
\documentclass{article}
\begin[12pt]{document}
\section{Executive Summary}
\end{document}
and it's giving me 3 errors on the last line saying it can't find the file. Did I save in the wrong folder? Does this work like MATLAB, where it needs to be in a set directory first?
Welcome to TeX.SX! You can have a look on our starter guide to familiarize yourself further with our format. What happens if you remove [12pt] from line 2, and put it in line 1? – cmhughes Oct 22 '13 at 18:09
## 3 Answers
The [12pt] is an option which belongs to the preamble of the document:
\documentclass[12pt]{article}
\begin{document}
\section{Executive Summary}
This is only test text.
\end{document}
This compiles without problems.
More specifically, 12pt is an option that should be given to \documentclass, which resides in the document's preamble. – Sean Allred Oct 22 '13 at 18:23
When compiling a document, you do need to call latex (or pdflatex, xelatex, etc.) by giving it a filename that it can see, that is
• an absolute file name, like C:\Users\Victor\Documents\summary.tex
• a relative file name, like summary.tex if you're already in your Documents folder, or perhaps Documents\summary.tex if you are in your user folder (Victor).
TeXStudio should be taking care of this for you by default, provided you are clicking the green arrow in the toolbar. However, your document isn't correct; \begin doesn't take an optional argument there. You probably want
\documentclass[12pt]{article}
\begin{document}
\section{Executive Summary}
\end{document}
which will compile without issues.
\documentclass[12pt]{article}
\begin{document}
\maketitle
\section{Executive Summary}
\end{document}
I think you also need to insert \maketitle, as I did in the TeX above.
\maketitle is only useful if you have actually declared a title with \title{<your title here>} (and perhaps \author and \date). – jon Oct 23 '13 at 0:28
In fact, IIRC, the use of \maketitle without any of the metadata commands (\author, \title, and \date) is an error in the standard document classes. – Sean Allred Oct 23 '13 at 2:09
# Geodesic surface
1. Jun 6, 2008
### mhill
if we have or can have geodesic curves minimizing the length integral $$\int \sqrt{g_{ab}\,\dot x^{a} \dot x^{b}}\; dt$$, is there a theory of 'minimizing surfaces' or 'geodesic surfaces' that minimize the area of a surface?
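For context (standard differential-geometry material, added here since the replies below lost their content): the generalization does exist. Replacing arc length by the induced area of an immersed surface $x^{a}(\sigma^{1},\sigma^{2})$ gives the area functional

```latex
A[x] \;=\; \int \sqrt{\det\left(h_{ij}\right)}\; d^{2}\sigma,
\qquad
h_{ij} \;=\; g_{ab}\,\frac{\partial x^{a}}{\partial \sigma^{i}}\,\frac{\partial x^{b}}{\partial \sigma^{j}},
```

whose critical points are the minimal surfaces studied in the Plateau problem – the two-dimensional analogue of geodesics.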
2. Jun 18, 2008
### gel
Last edited by a moderator: Apr 23, 2017
3. Jun 19, 2008
### robphy
Last edited by a moderator: Apr 23, 2017
# Deepest left leaf node in a binary tree in C++
In this tutorial, we are going to find the deepest left leaf node in a binary tree. Let's look at the binary tree:
         A
       /   \
      B     C
     / \     \
    D   E     F
               \
                G
Let's see the steps to solve the problem.
• Write a Node struct with char, left, and right pointers.
• Initialize the binary tree with dummy data.
• Write a recursive function to find the deepest left leaf node in the binary tree. It takes five arguments: the current node, an isLeftNode flag, the current level, a pointer to the deepest level found so far, and a result pointer to store the deepest node.
• If the current node is a left child, is a leaf node, and is deeper than any left leaf found so far, then update the result node and the deepest level.
• Call the recursive function on left sub tree.
• Call the recursive function on right sub tree.
• If the result node is null, then there is no node that satisfies our conditions.
• Else print the data in the result node.
## Example
Let's see the code.
#include <bits/stdc++.h>
using namespace std;
struct Node {
char data;
struct Node *left, *right;
};
Node *newNode(char data) {
   Node *newNode = new Node;
newNode->data = data;
newNode->left = newNode->right = NULL;
return newNode;
}
void getDeepestLeftLeafNode(Node *root, bool isLeftNode, int level, int *maxLevel, Node **resultPointer) {
   if (root == NULL) {
      return;
   }
   // A left child with no children is a left leaf; record it only if it
   // is deeper than the best left leaf found so far.
   if (isLeftNode && !root->left && !root->right && level > *maxLevel) {
      *maxLevel = level;
      *resultPointer = root;
      return;
   }
   getDeepestLeftLeafNode(root->left, true, level + 1, maxLevel, resultPointer);
   getDeepestLeftLeafNode(root->right, false, level + 1, maxLevel, resultPointer);
}
int main() {
   // Build the tree shown above
   Node *root = newNode('A');
   root->left = newNode('B');
   root->right = newNode('C');
   root->left->left = newNode('D');
   root->left->right = newNode('E');
   root->right->right = newNode('F');
   root->right->right->right = newNode('G');
   Node *result = NULL;
   int maxLevel = 0;
   getDeepestLeftLeafNode(root, false, 0, &maxLevel, &result);
if (result) {
cout << "The deepest left child is " << result->data << endl;
}
else {
cout << "There is no left leaf in the given tree" << endl;
}
return 0;
}
## Output
If you execute the above program, then you will get the following result.
The deepest left child is D
## Conclusion
If you have any queries in the tutorial, mention them in the comment section.
Updated on 30-Dec-2020 06:36:43
One year of sound recorded by a mermaid float in the Pacific: hydroacoustic earthquake signals and infrasonic ambient noise
SUMMARY A fleet of autonomously drifting profiling floats equipped with hydrophones, known by their acronym mermaid, monitors worldwide seismic activity from inside the oceans. The instruments are programmed to detect and transmit acoustic pressure conversions from teleseismic P wave arrivals for use in mantle tomography. Reporting seismograms in near-real time, within hours or days after they were recorded, the instruments are not usually recovered, but if and when they are, their memory buffers can be read out. We present a unique 1-yr-long data set of sound recorded at frequencies between 0.1 and 20 Hz in the South Pacific around French Polynesia by a mermaid float that was, in fact, recovered. Using time-domain, frequency-domain and time-frequency-domain techniques to comb through the time-series, we identified signals from 213 global earthquakes known to published catalogues, with magnitudes 4.6–8.0, and at epicentral distances between 24° and 168°. The observed signals contain seismoacoustic conversions of compressional and shear waves travelling through crust, mantle and core, including P, S, Pdif, Sdif, PKIKP, SKIKS, surface waves and hydroacoustic T phases. Only 10 earthquake records had been automatically reported by the instrument—the others were deemed low-priority by the onboard processing algorithm. After removing all seismic signals from the record, […]
NSF-PAR ID:
10341478
Journal Name:
Geophysical Journal International
Volume:
228
Issue:
1
Page Range or eLocation-ID:
193 to 212
ISSN:
0956-540X
Volcanic eruption source parameters may be estimated from acoustic pressure recordings dominant at infrasonic frequencies (< 20 Hz), yet uncertainties may be high due in part to poorly understood propagation dynamics. Linear acoustic propagation of volcano infrasound is commonly assumed, but nonlinear processes such as wave steepening may distort waveforms and obscure the sourcing process in recorded waveforms. Here we use a previously developed frequency-domain nonlinearity indicator to quantify spectral changes due to nonlinear propagation primarily in 80 signals from explosions at Yasur Volcano, Vanuatu. We find evidence for $\le 10^{-3}$ dB/m spectral energy transfer in the band 3–9 Hz for signals with amplitude on the order of several hundred Pa at 200–400 m range. The clarity of the nonlinear spectral signature increases with waveform amplitude, suggesting stronger nonlinear changes for greater source pressures. We observe similar results in application to synthetics generated through finite-difference wavefield simulations of nonlinear propagation, although limitations of the model complicate direct comparison to the observations. Our results provide quantitative evidence for nonlinear propagation that confirms previous interpretations made on the basis of qualitative observations of asymmetric waveforms.
October 2021
11.10.2021 15:00 Daniela Schlager: Stability Analysis of Multiplayer Games on Simplicial Complexes in Adaptive Networks. Virtual event (Boltzmannstr. 3, 85748 Garching)
We develop models of multiplayer games based on cooperation, the Snowdrift game and the Prisoner’s Dilemma, on adaptive networks. They contain explicit interactions of multiple players on simplicial complexes. All operations of and on the network are based on game theoretical properties of the respective games. The evolution of the models over time is described by moment equations and is closed by pair approximation. The stability of equilibria is examined when irrational decisions in simplices are added into the models.
11.10.2021 16:30 Detlef Kreß (LMU; MSc presentation): A percolation model without positive correlation. B 252 (Theresienstr. 39, 80333 München)
We introduce a bond-percolation model that is a modification of the corrupted compass model introduced by Christian Hirsch, Mark Holmes and Victor Kleptsyn (2021). On a given graph we start, in each vertex independently with probability p, a random walk of length L. We make an edge occupied if it was used by a random walk. This model does not exhibit positive correlation. If L is chosen such that there is percolation for p=1, we have a sharp phase transition for p. We discuss the question of percolation on the hypercubic lattice and show that on the square lattice percolation occurs for L=2.
18.10.2021 14:00 Bernd Sturmfels (MPI Leipzig): Algebraic Statistics with a View towards Physics. BC1 2.1.10 (with additional stream using Zoom, see http://go.tum.de/410163 for more details) (Parkring 11, 85748 Garching)
We discuss the algebraic geometry of maximum likelihood estimation from the perspective of scattering amplitudes in particle physics. A guiding example is the moduli space of n-pointed rational curves. The scattering potential plays the role of the log-likelihood function, and its critical points are solutions to rational function equations. Their number is an Euler characteristic. Soft limit degenerations are combined with certified numerical methods for concrete computations.
18.10.2021 15:00 Eric Lucon & Christophe Poquet, part 1: Periodic behavior of mean-field systems. Virtual event (Boltzmannstr. 3, 85748 Garching)
We will study non-linear mean-field Fokker-Planck equations describing the infinite population limit of interacting excitable particles subject to noise. Taking a slow-fast dynamics approach we will describe the emergence of periodic behaviors induced by the noise and the interaction, considering in particular the case in which each unit evolves according to the FitzHugh Nagumo model. This talk is linked to the one given by my co-author Eric Luçon the following week, in which he will speak about the long time behavior of the population of particles when the population is finite.
25.10.2021 15:00 Eric Lucon & Christophe Poquet, part 2: Large-time dynamics of mean-field interacting diffusions along a limit cycle. Virtual event (Boltzmannstr. 3, 85748 Garching)
This talk is the natural continuation of the previous talk of Christophe Poquet, which concerned the existence of periodic solutions to nonlinear Fokker-Planck equations. We are here interested in the microscopic counterpart of the same problem: nonlinear Fokker-Planck equations are natural limits of the empirical measure of N mean-field interacting diffusions as N goes to infinity. Standard propagation of chaos estimates show that this limit remains relevant only up to times that remain bounded in N. A natural question is then to ask about the dynamics of the empirical measure of the system on a larger time scale. We answer this question in the case where the FP limit possesses a smooth and stable limit cycle. The main result of the talk will be to show that, on a time scale of order N, the empirical measure remains with high probability close to the periodic orbit, with a diffusive dynamics along the limit cycle.
25.10.2021 16:30 Wolfgang Löhr (Universität Duisburg-Essen): A new state space of algebraic measure trees for stochastic processes. Online / B 252 (Theresienstr. 39, 80333 München)
In the talk, I present a new topological space of “continuum” trees, which extends the set of finite graph-theoretic trees to uncountable structures, which can be seen as limits of finite trees. Unlike previous approaches, we do not use the graph-metric but formalize the tree-structure by a ternary operation on the tree, namely the branch-point map. The resulting space of algebraic measure trees has coarser equivalence classes than the more classical space of metric measure trees, but the topology preserves more of the tree-structure in limits, so that it is incomparable to, and not coarser than, the standard topologies on metric measure trees. With the example of the Aldous chain on cladograms, I also illustrate that our new space can be very useful as state space for stochastic processes in order to obtain path-space diffusion limits of tree-valued Markov chains.
27.10.2021 13:00 Johannes Kleiner (LMU München): What is Mathematical Consciousness Science? Online (Code 101816) / MI 03.04.011 (Boltzmannstr. 3, 85748 Garching)
In the last three decades, the problem of consciousness - how and why physical systems such as the brain have conscious experiences - has received increasing attention among neuroscientists, psychologists, and philosophers. Recently, a decidedly mathematical perspective has emerged as well, which is now called Mathematical Consciousness Science. In this talk, I will give an introduction and overview of Mathematical Consciousness Science for mathematicians, including a bottom-up introduction to the problem of consciousness and how it is amenable to mathematical tools and methods.
Link and Passcode: https://tum-conf.zoom.us/j/96536097137, Code 101816
I am using scrbook for my thesis (so no Titlesec) and would like to reformat the way paragraphs work in a very minor way: I need a symbol after the heading, with even spacing between the symbol and the following text. I am using this in a meaningful way (referring to sections within discussed works as I recap their contents), so resorting to other levels of sectioning is no good option.
Fooling around with various solutions for other related problems got me nowhere.
The result should look like:
Bacon is good | dolores has an ipsum in her amet ....
[with a bit more space around the "|"]
• Welcome to TeX.SE. Just to make sure everyone is using the same terminology: By "paragraph", do you mean the LaTeX macro called \paragraph? Or are you referring to logical textual units called "paragraphs"? Please advise. – Mico Feb 3 '18 at 18:29
• Sorry for the ambiguity - I meant the logical textual units, as in "one below \subsubsection". – Paul Burgh Feb 3 '18 at 19:12
I'm assuming that by the term "paragraph", you mean the LaTeX macro named \paragraph. Please advise if this is not what you have in mind.
Is the following close to what you had in mind?
\documentclass{scrbook}
\usepackage{letltxmacro}
\LetLtxMacro{\origpara}{\paragraph}
\renewcommand{\paragraph}[2][]{\origpara[#1]{#2}$\mid$\quad}
\begin{document}
\paragraph{Bacon is good}Lorem ipsum \dots
\end{document}
• The OP will probably want something smaller than \quad, but that is easily changed. +1 – ShreevatsaR Feb 3 '18 at 18:51
• Thanks a lot - this was the easy solution I assumed existed and couldn't fathom. For completeness - I used hspaces now and it works perfectly. – Paul Burgh Feb 3 '18 at 19:09
You can redefine KOMA-Script command \sectioncatchphraseformat for paragraph:
\documentclass{scrbook}
\usepackage[T1]{fontenc}
\RedeclareSectionCommand[afterskip=-.5em]{paragraph}% change horizontal skip after paragraph heading
\renewcommand{\sectioncatchphraseformat}[4]{%
\ifstr{#1}{paragraph}
{\hskip #2#3#4\hskip .5em{\normalfont\textbar}}% new definition for paragraph heading
{\hskip #2#3#4}% orginal definition for other levels like subparagraph
}
\begin{document}
\paragraph{Bacon is good}Lorem ipsum \dots
\end{document}
Result:
Note that the - in afterskip=-.5em means that the skip is horizontal. So there is a skip of .5em to the right of the paragraph heading. The default value for paragraph is afterskip=-1em.
• Thanks! Good to have two different ways of achieving a similar result, in case there should be conflict etc. – Paul Burgh Feb 3 '18 at 22:05
For future reference: there was a bit of an oddity with the spacing, so I had to fiddle. The result I was looking for was achieved with the following:
\documentclass{scrbook}
\usepackage{letltxmacro}
\usepackage{lipsum}% provides the \lipsum filler text used below
\LetLtxMacro{\origpara}{\paragraph}
\renewcommand\paragraph[2][]{\origpara[#1]{#2}\hspace{-0.5em}$\mid$\hspace{0.em}}
\setkomafont{paragraph}{\normalfont\rmfamily\itshape}
\begin{document}
\paragraph{Bacon is good} \lipsum
\end{document}
# Fractional indices
Fractional indices can be challenging to understand, but once you have got the basics down they become straightforward. The denominator of the fractional index is the root, and the numerator is the power.
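The rule in symbols, with a worked example for concreteness:

```latex
x^{\frac{m}{n}} \;=\; \sqrt[n]{x^{m}} \;=\; \left(\sqrt[n]{x}\right)^{m},
\qquad\text{e.g.}\qquad
8^{\frac{2}{3}} = \left(\sqrt[3]{8}\right)^{2} = 2^{2} = 4.
```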
# Rate of change(using chain rules)
• October 15th 2008, 03:34 PM
iwonder
Rate of change(using chain rules)
The radius of a right circular cone is increasing at a rate of 5 inches per second and its height is decreasing at a rate of 4 inches per second. At what rate is the volume of the cone changing when the radius is 30 inches and the height is 50 inches?
• October 15th 2008, 03:57 PM
skeeter
take the time derivative of the cone volume formula ...
$V = \frac{\pi}{3} r^2 h$
remember that the product rule is required along with the chain rule.
after you find the derivative, then substitute in all that information you were given to find the value of the volume's time rate of change.
• October 15th 2008, 04:04 PM
iwonder
the answer should be 11938 right?(Nod)
• October 15th 2008, 04:41 PM
skeeter
I get $3800\pi$ $\frac{in^3}{sec}$
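Carrying the differentiation through with the cone volume $V = \frac{\pi}{3}r^{2}h$ (product and chain rules), then substituting $r = 30$, $h = 50$, $\frac{dr}{dt} = 5$ and $\frac{dh}{dt} = -4$, gives iwonder's figure of 11938:

```latex
\frac{dV}{dt}
= \frac{\pi}{3}\left(2rh\,\frac{dr}{dt} + r^{2}\,\frac{dh}{dt}\right)
= \frac{\pi}{3}\bigl(2\cdot 30\cdot 50\cdot 5 + 30^{2}\cdot(-4)\bigr)
= \frac{\pi}{3}\,(15000 - 3600)
= 3800\pi \approx 11938\ \tfrac{in^{3}}{sec}.
```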
• ### Shape and spin determination of Barbarian asteroids (1707.07503)
July 24, 2017 astro-ph.EP
Context. The so-called Barbarian asteroids share peculiar, but common polarimetric properties, probably related to both their shape and composition. They are named after (234) Barbara, the first on which such properties were identified. As has been suggested, large scale topographic features could play a role in the polarimetric response, if the shapes of Barbarians are particularly irregular and present a variety of scattering/incidence angles. This idea is supported by the shape of (234) Barbara, that appears to be deeply excavated by wide concave areas revealed by photometry and stellar occultations. Aims. With these motivations, we started an observation campaign to characterise the shape and rotation properties of Small Main-Belt Asteroid Spectroscopic Survey (SMASS) type L and Ld asteroids. As many of them show long rotation periods, we activated a worldwide network of observers to obtain a dense temporal coverage. Methods. We used the light-curve inversion technique in order to determine the sidereal rotation periods of 15 asteroids and the convergence to a stable shape and pole coordinates for 8 of them. By using available data from occultations, we are able to scale some shapes to an absolute size. We also study the rotation periods of our sample looking for confirmation of the suspected abundance of asteroids with long rotation periods. Results. Our results show that the shape models of our sample do not seem to have peculiar properties with respect to asteroids with similar size, while an excess of slow rotators is most probably confirmed.
• Parallaxes for 331 classical Cepheids, 31 Type II Cepheids and 364 RR Lyrae stars in common between Gaia and the Hipparcos and Tycho-2 catalogues are published in Gaia Data Release 1 (DR1) as part of the Tycho-Gaia Astrometric Solution (TGAS). In order to test these first parallax measurements of the primary standard candles of the cosmological distance ladder, that involve astrometry collected by Gaia during the initial 14 months of science operation, we compared them with literature estimates and derived new period-luminosity ($PL$), period-Wesenheit ($PW$) relations for classical and Type II Cepheids and infrared $PL$, $PL$-metallicity ($PLZ$) and optical luminosity-metallicity ($M_V$-[Fe/H]) relations for the RR Lyrae stars, with zero points based on TGAS. The new relations were computed using multi-band ($V,I,J,K_{\mathrm{s}},W_{1}$) photometry and spectroscopic metal abundances available in the literature, and applying three alternative approaches: (i) by linear least squares fitting the absolute magnitudes inferred from direct transformation of the TGAS parallaxes, (ii) by adopting astrometric-based luminosities, and (iii) using a Bayesian fitting approach. TGAS parallaxes bring a significant added value to the previous Hipparcos estimates. The relations presented in this paper represent first Gaia-calibrated relations and form a "work-in-progress" milestone report in the wait for Gaia-only parallaxes of which a first solution will become available with Gaia's Data Release 2 (DR2) in 2018.
• Context. The first Gaia Data Release contains the Tycho-Gaia Astrometric Solution (TGAS). This is a subset of about 2 million stars for which, besides the position and photometry, the proper motion and parallax are calculated using Hipparcos and Tycho-2 positions in 1991.25 as prior information. Aims. We investigate the scientific potential and limitations of the TGAS component by means of the astrometric data for open clusters. Methods. Mean cluster parallax and proper motion values are derived taking into account the error correlations within the astrometric solutions for individual stars, an estimate of the internal velocity dispersion in the cluster, and, where relevant, the effects of the depth of the cluster along the line of sight. Internal consistency of the TGAS data is assessed. Results. Values given for standard uncertainties are still inaccurate and may lead to unrealistic unit-weight standard deviations of least squares solutions for cluster parameters. Reconstructed mean cluster parallax and proper motion values are generally in very good agreement with earlier Hipparcos-based determination, although the Gaia mean parallax for the Pleiades is a significant exception. We have no current explanation for that discrepancy. Most clusters are observed to extend to nearly 15 pc from the cluster centre, and it will be up to future Gaia releases to establish whether those potential cluster-member stars are still dynamically bound to the clusters. Conclusions. The Gaia DR1 provides the means to examine open clusters far beyond their more easily visible cores, and can provide membership assessments based on proper motions and parallaxes. A combined HR diagram shows the same features as observed before using the Hipparcos data, with clearly increased luminosities for older A and F dwarfs.
• ### Large Halloween Asteroid at Lunar Distance (1610.08267)
Oct. 26, 2016 astro-ph.EP
The near-Earth asteroid (NEA) 2015 TB145 had a very close encounter with Earth at 1.3 lunar distances on October 31, 2015. We obtained 3-band mid-infrared observations with the ESO VLT-VISIR instrument and visual lightcurves during the close-encounter phase. The NEA has a (most likely) rotation period of 2.939 +/- 0.005 hours and the visual lightcurve shows a peak-to-peak amplitude of approximately 0.12+/-0.02 mag. We estimate a V-R colour of 0.56+/-0.05 mag from MPC database entries. Applying different phase relations to the available R-/V-band observations produced H_R = 18.6 mag (standard H-G calculations) or H_R = 19.2 mag & H_V = 19.8 mag (via the H-G12 procedure), with large uncertainties of approximately 1 mag. We performed a detailed thermophysical model analysis by using spherical and ellipsoidal shape models. The thermal properties are best explained by an equator-on (+/- ~30 deg) viewing geometry during our measurements with a thermal inertia in the range 250-700 Jm-2s-0.5K-1 (retrograde rotation) or above 500 Jm-2s-0.5K-1 (prograde rotation). We find that the NEA has a minimum size of 625 m, a maximum size of just below 700 m, and a slightly elongated shape with a/b ~1.1. The best match to all thermal measurements is found for: (i) Thermal inertia of 900 Jm-2s-0.5K-1; D_eff = 644 m, p_V = 5.5% (prograde rotation); regolith grain sizes of ~50-100 mm; (ii) thermal inertia of 400 Jm-2s-0.5K-1; D_eff = 667 m, p_V = 5.1% (retrograde rotation); regolith grain sizes of ~10-20 mm. A near-Earth asteroid model (NEATM) confirms an object size well above 600 m, significantly larger than early estimates based on radar measurements. We give recommendations for improved observing strategies for similar events in the future.
• ### Asteroid models from the Lowell Photometric Database (1601.02909)
Jan. 12, 2016 astro-ph.EP
We use the lightcurve inversion method to derive new shape models and spin states of asteroids from the sparse-in-time photometry compiled in the Lowell Photometric Database. To speed up the time-consuming process of scanning the period parameter space through the use of convex shape models, we use the distributed computing project Asteroids@home, running on the Berkeley Open Infrastructure for Network Computing (BOINC) platform. This way, the period-search interval is divided into hundreds of smaller intervals. These intervals are scanned separately by different volunteers and then joined together. We also use an alternative, faster, approach when searching the best-fit period by using a model of triaxial ellipsoid. By this, we can independently confirm periods found with convex models and also find rotation periods for some of those asteroids for which the convex-model approach gives too many solutions. From the analysis of Lowell photometric data of the first 100,000 numbered asteroids, we derived 328 new models. This almost doubles the number of available models. We tested the reliability of our results by comparing models that were derived from purely Lowell data with those based on dense lightcurves, and we found that the rate of false-positive solutions is very low. We also present updated plots of the distribution of spin obliquities and pole ecliptic longitudes that confirm previous findings about a non-uniform distribution of spin axes. However, the models reconstructed from noisy sparse data are heavily biased towards more elongated bodies with high lightcurve amplitudes.
• ### Asteroids' physical models from combined dense and sparse photometry and scaling of the YORP effect by the observed obliquity distribution(1301.6943)
Jan. 29, 2013 astro-ph.EP
The larger number of models of asteroid shapes and their rotational states derived by the lightcurve inversion give us better insight into both the nature of individual objects and the whole asteroid population. With a larger statistical sample we can study the physical properties of asteroid populations, such as main-belt asteroids or individual asteroid families, in more detail. Shape models can also be used in combination with other types of observational data (IR, adaptive optics images, stellar occultations), e.g., to determine sizes and thermal properties. We use all available photometric data of asteroids to derive their physical models by the lightcurve inversion method and compare the observed pole latitude distributions of all asteroids with known convex shape models with the simulated pole latitude distributions. We used classical dense photometric lightcurves from several sources and sparse-in-time photometry from the U.S. Naval Observatory in Flagstaff, Catalina Sky Survey, and La Palma surveys (IAU codes 689, 703, 950) in the lightcurve inversion method to determine asteroid convex models and their rotational states. We also extended a simple dynamical model for the spin evolution of asteroids used in our previous paper. We present 119 new asteroid models derived from combined dense and sparse-in-time photometry. We discuss the reliability of asteroid shape models derived only from Catalina Sky Survey data (IAU code 703) and present 20 such models. By using different values for a scaling parameter cYORP (corresponds to the magnitude of the YORP momentum) in the dynamical model for the spin evolution and by comparing synthetics and observed pole-latitude distributions, we were able to constrain the typical values of the cYORP parameter as between 0.05 and 0.6. |
# How to write code and not die
Document by Maks Loboda
## Godot style guide TLDR
• Use the code order specified in the documentation (at the bottom of the page there is a script template to help you with that)
• Class names should be in PascalCase
• Files should be in snake_case (convert the class name to snake_case for scripts)
• Variable and function names are in snake_case
• Use a single underscore before the name to denote virtual methods, private methods and private variables
• Use past tense in signal names; don't add on_ unless absolutely necessary
    signal door_opened
    signal score_changed
• Constants are in CONSTANT_CASE, and enum members too.
• Don't make long lines of code; generally they should not be longer than 80 characters
• When separating a long expression into multiple lines, extra lines should be indented twice
• Leave two lines before functions and class definitions
• Use and instead of &&, or instead of ||
• Comments should start with a space; comment out disabled code with Ctrl+K
• Use spaces to make code more readable
Good:
    position.x = 5
    position.y = mpos.y + 10
    dict["key"] = 5
    my_array = [4, 5, 6]
    print("foo")
Bad:
    position.x=5
    position.y = mpos.y+10
    dict ["key"] = 5
    myarray = [4,5,6]
    print ("foo")
If you do this you die instantly:
    x        = 100
    y        = 100
    velocity = 500
• You can use " or ' to avoid escaping quotes, because print("hello 'world'") is ok.
• Use explicit zeros on floats.
Good:
    var float_number = 0.234
    var other_float_number = 13.0
Bad:
    var float_number = .234
    var other_float_number = 13.
• GDScript has decimal, hex and binary literals, and all of them support separators:
    var large_number = 1_234_567_890
    var large_hex_number = 0xffff_f8f8_0000
    var large_bin_number = 0b1101_0010_1010
    # Numbers lower than 1000000 generally don't need separators.
    var small_number = 12345
## Best practices TLDR:
Best practices full document
• Nodes should really know only about nodes directly below them, to talk to nodes higher or on the same level use:
• Signals
• Make the node call a funcref that other node sets
• Make the node manipulate a node reference that other node sets. Make sure that no reference loops occur (aka two or more nodes reference each other in a loop and none of them can ever be freed), use weakref to break such loops
• Make the node manipulate a NodePath that the other node sets. (Personally I found NodePaths to be a bit flaky, but it's the only way to export node references in the editor)
• When you want two nodes on the same level to talk, their parent can help them make contact
• Use general OOP principles:
• Autoloads should be used sparingly (because they can trigger massive, hard-to-find bugs)
• (It's not actually said here, but) DON'T use preload or load; export a PackedScene or [applicable resource type here] instead, because if you want to move or rename some file, the export will update but preload and load will instead explode.
• Dictionary is a HashMap, Array is a Vector; use a type-specialized array when speed is key.
• Object is secretly a dictionary too.
• Whatever your project structure is, use snake_case for files and folders, otherwise on some platforms shit will explode on export and everyone is gonna have a bad time 💀.
• Always close the editor before running git pull! Otherwise, you may lose data if you synchronize files while the editor is open.
## Notes from experience
• get_parent is evil, prefer using signals
• Don't use hardcoded paths to nodes; all paths should be in a variable so it's easier to change later.
• When commenting out code use Ctrl+K, it actually saves time.
• Always use Godot static typing system
• Always explicitly state types for variables (I died inside more than once from float_value := 1337)
• Enums are epic
• Use getters and setters when accessing data from a different script
• No magic numbers (except 0, 1 and 2). Even if it is not a good export variable, it still probably has a good name.
• Have at least minimal comments
• Give your script a class_name and use it as a type in static typing.
• Use _physics_process instead of _process.
• Multiply stuff by delta time to tie it to realtime instead of frames, and don't forget to check with lower and higher framerates.
• Always test that all code paths actually work.
• Make sure that your scripts are not bloated with functionality, use extra child nodes for handling extra features.
• Every scene should have a folder for itself and for resources that it uses.
• If a function can return an error, always check.
• Prints are your friend, but always have a string telling where this no-name float value is printed from. Also it's very nice to have a way to disable them without commenting.
• Enable saving logs to disk, sometimes it really helps to have output log from 5 launches ago.
• Godot actually has bugs sometimes:
• Sometimes delta time in _process and _physics_process is 0; check at the start of the function and return immediately if that's the case, otherwise anything can happen.
• With some of the scaling options UI elements can explode randomly when changing window sizes; if that's the case, just set_rect_position and set_rect_size everything back where it belongs
• Key bindings for Mac are scuffed, but I have the wizardry to make them work (I really should report it and make a pull request).
• Debugger on some rare occasions fucking dies; in this case try using print while praying. I remember one time I got this, I had a freed node as a key in a dictionary.
• Variable names should start with the general thing they refer to, then what they represent in that thing, then, if it's an attribute of the thing they represent, the attribute. This is done to make autocompletion easier and to avoid remembering variable names by heart, e.g.: jiggle_position_max
• Don't mix Controls and Node2D's, Controls are made for UI. Some rare exceptions are ok, like having TextureRect instead of Sprite2D.
• Your scripts should "compile" with no warnings; you may manually suppress these types of warnings (ping me to show cases where other types of warnings are useless):
• Use pass only for defining empty functions
• Your scene should be at (0, 0) with (1, 1) scale
• Minimize amount of non (1, 1) scale and non (0, 0) position nodes
• Use pixel snap for positions, some kind of rotation and scale snap is good too
• Be careful with scale, updating it in code after setting it in editor can cause problems
## Script template:
Don't delete unused sections, uncomment needed pieces of code
new_script.gd |
Let a, b and c be three unit vectors such that
Question:
Let $a, b$ and $c$ be three unit vectors such that $|\vec{a}-\vec{b}|^{2}+|\vec{a}-\vec{c}|^{2}=8$. Then $|\vec{a}+2 \vec{b}|^{2}+|\vec{a}+2 \vec{c}|^{2}$ is equal to__________.
Solution:
$|\vec{a}|=|\vec{b}|=|\vec{c}|=1$
$|\vec{a}-\vec{b}|^{2}+|\vec{a}-\vec{c}|^{2}=8$
$\Rightarrow \vec{a} \cdot \vec{b}+\vec{a} \cdot \vec{c}=-2$
Now, $|\vec{a}+2 \vec{b}|^{2}+|\vec{a}+2 \vec{c}|^{2}$
$=2|\vec{a}|^{2}+4|\vec{b}|^{2}+4|\vec{c}|^{2}+4(\vec{a} \cdot \vec{b}+\vec{a} \cdot \vec{c})=2$ |
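Since a dot product of two unit vectors is at least $-1$, the constraint $\vec{a} \cdot \vec{b}+\vec{a} \cdot \vec{c}=-2$ forces $\vec{b}=\vec{c}=-\vec{a}$, which gives a quick numerical sanity check. A minimal sketch in plain Python (the specific vectors are just one admissible choice):

```python
def norm_sq(v):
    # Squared Euclidean norm of a vector given as a tuple.
    return sum(x * x for x in v)

def add(u, v, s=1):
    # Component-wise u + s*v.
    return tuple(a + s * b for a, b in zip(u, v))

# a.b + a.c = -2 with unit vectors forces a.b = a.c = -1, i.e. b = c = -a.
a = (1.0, 0.0, 0.0)
b = c = (-1.0, 0.0, 0.0)

lhs = norm_sq(add(a, b, -1)) + norm_sq(add(a, c, -1))  # |a-b|^2 + |a-c|^2
val = norm_sq(add(a, b, 2)) + norm_sq(add(a, c, 2))    # |a+2b|^2 + |a+2c|^2
print(lhs, val)  # 8.0 2.0
```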
# zbMATH — the first resource for mathematics
## Powell, Michael James David
Author ID: powell.m-j-d Published as: Powell, M. J.; Powell, M. J. D; Powell, M. J. D.; Powell, Michael External Links: MGP · Wikidata · GND
Documents Indexed: 148 Publications since 1962, including 5 Books Biographic References: 8 Publications
#### Co-Authors
106 single-authored 5 Beatson, Rick K. 3 Curtis, A. Robert 3 Demetriou, Ioannis C. 3 Faul, Anita C. 3 Yuan, Ya-xiang 2 Barrodale, Ian 2 Chamberlain, R. M. 2 Fletcher, Roger 2 Goodsell, George 2 Iserles, Arieh 2 Reid, John 2 Roberts, F. D. K. 2 Toint, Philippe L. 1 Ames, Aaron D. 1 Brodlie, Kenneth W. 1 Buhmann, Martin D. 1 Cheney, Elliott Ward jun. 1 Cullinan, M. P. 1 Gaffney, Patrick W. 1 Ge, Renpu 1 Havie, Tore 1 Hopper, M. J. 1 Jacobson, David Harris 1 Lemaréchal, Claude 1 Martin, Duncan H. 1 Nørsett, Syvert Paul 1 Pedersen, H. C. 1 Sabin, Malcolm A. 1 Scholtes, Sebastian 1 Swann, J. 1 Tan, Aimei 1 Zhao, HuiYang
#### Serials
13 Mathematical Programming 12 IMA Journal of Numerical Analysis 9 The Computer Journal. Section A / Section B 9 Mathematical Programming. Series A. Series B 5 Journal of the Institute of Mathematics and its Applications 3 SIAM Journal on Numerical Analysis 3 Optimization Methods & Software 2 ACM Transactions on Mathematical Software 2 Mathematical Programming Study 2 Constructive Approximation 2 Linear Algebra and its Applications 2 SIAM Review 2 Computational Optimization and Applications 1 Mathematics of Computation 1 Journal of Approximation Theory 1 Numerische Mathematik 1 Optimal Control Applications & Methods 1 Numerical Algorithms 1 Bulletin. The Institute of Mathematics and its Applications 1 SIAM Journal on Optimization 1 Annals of Numerical Mathematics 1 Advances in Computational Mathematics 1 Foundations of Computational Mathematics 1 HERMIS-$\mu\pi$. Hellenic European Research on Mathematics and Informatics Science 1 Bollettino della Unione Matematica Italiana. Series IV 1 Mathematical Programming Computation
#### Fields
98 Numerical analysis (65-XX) 77 Operations research, mathematical programming (90-XX) 33 Approximation and expansions (41-XX) 11 Calculus of variations and optimal control; optimization (49-XX) 4 Linear and multilinear algebra; matrix theory (15-XX) 4 Computer science (68-XX) 3 General mathematics (00-XX) 2 History and biography (01-XX) 2 Harmonic analysis on Euclidean spaces (42-XX) 2 Systems theory; control (93-XX) 1 Game theory, economics, social and behavioral sciences (91-XX) 1 Information and communication, circuits (94-XX) |
JHR's MA275 page
MA275 Discrete and Combinatorial Algebra
MTRF 6 G317
John Rickert, Associate Professor of Mathematics
Office: G-215A, Crapo Hall
Phone: (812) 877-8473
e-mail: [email protected]
Office hours week: MTRF 7, or make an appointment, or drop in. Here's my schedule
The final exam takes place Monday, November 12 1:00PM-5:00PM in G222.
Moodle
Homework for our next class
For Friday, August 31: Section 1.2 #1,7,11,14,15,17,26,31,39. Turn in these exercises on Monday.
The in-class questions to consider are available on Moodle.
For Monday, September 3: Turn in the Section 1.2 exercises.
Section 1.3 #1,6,9,14,16,17,23,27,32. Turn in these exercises on Friday.
For Tuesday, September 4: Section 1.3 #1,6,9,14,16,17,23,27,32. Turn in these exercises on Friday.
For Thursday, September 6: Section 1.4 #1,7,10,16,22,25,27.
For Friday, September 7: Turn in the Section 1.3 exercises.
Section 1.5 #1,4,7,9,11. Turn in these exercises at the beginning of class Monday.
We will have a quiz at the beginning of class. The quiz will cover through Section 1.4 (inclusive). The average score on the quiz was 25.3/30. Equivalent grades are A: 28-30, B: 25-27, C: 23-24, D: 20-22, F: <20.
For Monday, September 10: Turn in the Section 1.5 homework.
Begin learning the Laws of Logic on pages 58 and 59 of the textbook.
For Tuesday, September 11: Section 2.1 #3,5,6,8,17.
Learn the Laws of Logic on pages 58 and 59 of the textbook.
For Thursday, September 13: Section 2.2 #1,6,9,14,15. Begin learning the Rules of Inference on page 78 of the textbook.
For Friday, September 14: Turn in the Section 2.2 homework.
Section 2.3 #5,7,11,12.
Monday, September 17: Section 2.3 #13.
Read Section 2.4 through at least example 2.36. We will discuss quantifiers on Monday.
The average score on the pop quiz was 18.8/20. Equivalent scores are A: 19-20, B: 18, C: 16-17, D: 14-15, F: <14.
Tuesday, September 18: Section 2.4 #1,4,9,21.
For Thursday, September 20: Section 2.4 #8,14,16,22. Problem 14f has a typo. 14f should read $$\forall n [\lnot p(n) \rightarrow \lnot q(n)]$$.
Friday, September 21: The average score on Exam #1 was 785.5/1000. Equivalent grades are A: 900-1000 B: 780-899 C: 660-779 D: 520-659 F: <520.
For Tuesday, September 24: Section 2.5 #5,9,14,15,19,21. Turn in these exercises on Thursday.
For Thursday, September 27: Turn in the exercises assigned for Tuesday (Sec. 2.5 #5,9,14,15,19,21).
Work exercises Section 2.5 #17,23; Page 121 #15 and come to class with any questions that you have about them.
For Friday, September 28: We will have a quiz in which you will be asked to prove a mathematical statement.
Section 3.1 #1,5,8,19,23,27. Turn in these exercises on Tuesday, October 2. Learn the Laws of Set Theory listed on page 139 of the textbook.
For Monday, October 1: Section 3.2 #1,6.
Learn the Laws of Set Theory listed on page 139 of the textbook.
The average score on the quiz was 15.7/20. Equivalent grades are A 18-20 B 15-17 C 12-14 D 10-11 F <10.
For Tuesday, October 2: Turn in the Section 3.1 exercises.
Section 3.2 #4,14,16,19,20.
For Thursday, October 4: Turn in Tuesday's exercises; Section 3.2 #4,14,16,19,20.
Work Section 3.2 #9,17. Think about the rearrangements of the letters in LETTERS.
For Friday, October 5: Section 3.3 #1,4,5,6,9.
For Monday, October 8: Section 3.4 #1,5,6,9,11,14.
For Tuesday, October 9: Section 3.5 #1,4,7,9,14. Turn in these exercises on Monday, October 15.
For Monday, October 15: Turn in the Section 3.5 exercises. Section 4.1 #1.
Tuesday, October 16: Exam #2.
Thursday, October 18: Section 4.1 #2,6,8,12,16.
For Friday, October 19: Section 4.2 #1,7.
For Monday, October 22: Turn in the exam rewrite.
Section 4.2 #11,12,14,18.
For Tuesday, October 23: Section 4.3 #4,9,10,13.
For Thursday, October 25: Section 4.3 #17,21,29.
For Friday, October 26: Section 4.4 #1,4,6,12. Turn in these exercises on Monday.
For Monday, October 29: Turn in the Section 4.4 exercises from Friday (#1,4,6,12).
Section 4.4 #14,17; Section 4.5 #5,8,11,15,27.
For Tuesday, October 30: Section 4.5 #18,22; Section 5.1 #1,3,6,8,13.
For Thursday, November 1: Section 5.2 #1,5,8,14,22,27.
For Friday, November 2: Section 5.3 #1,3,10,15,18.
For Monday, November 5: Section 5.5 #9,11,14,16,20.
For Tuesday, November 6: Section 5.6 #1,3,9,12,22.
Thursday, November 8: Exam #3.
For Friday, November 9: Review MA275.
Questions from class
Please let me know if I've missed anything.
Monday, September 3: The Maple command for the binomial coefficient n choose k is binomial(n,k)
Tuesday, September 4: A commentary about the ABC-conjecture and Mochizuki's proposed proof is at http://quomodocumque.wordpress.com/2012/09/03/mochizuki-on-abc/
Tuesday, September 18: For Extra Credit think about the Dice problem. The two conditions on the dice are
1. No ties are possible
2. Each die has an equal chance of displaying the highest number.
The problem discussed in the article we looked at is to come up with four twelve-sided dice so that the conditions are satisfied. The open question considered is: How many solutions are there?
What is the answer for two dice? How small can the dice be? With k-sided dice, how many configurations for 1,2,...2k satisfy the conditions?
What is the answer for three dice? How small can the dice be? With k-sided dice, how many configurations for 1,2,...3k satisfy the conditions?
Explore similar questions for four dice, five dice, etc.
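For the two-dice version of these questions, the search space is small enough to enumerate directly. A brute-force sketch in Python (the function name and the convention of counting unordered splits of {1, ..., 2k} are mine):

```python
from itertools import combinations

def fair_splits(k):
    """Count unordered splits of {1..2k} into two k-sided dice such that
    no ties are possible (automatic: all faces are distinct) and each die
    has an equal chance of showing the higher number."""
    faces = set(range(1, 2 * k + 1))
    count = 0
    for die_a in combinations(sorted(faces), k):
        if 1 not in die_a:  # fix face 1 on die A so each split is counted once
            continue
        die_b = faces - set(die_a)
        wins_a = sum(a > b for a in die_a for b in die_b)
        if 2 * wins_a == k * k:  # die A wins exactly half of the k*k match-ups
            count += 1
    return count

print([fair_splits(k) for k in (2, 3, 4)])
```

For k=2 the only solution is {1,4} versus {2,3}, and for odd k there are none, since the k² match-ups cannot split evenly.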
Homework is due at the beginning of class on the day that it is due. You should turn in your homework in a pile on the desk at the front of the classroom. Homework may be turned in later but will be penalized based on just how late it is - typically
1 point off for turned in late during the class,
5% off for being turned in late the same day,
10% off per day. (weekends count for two days) i.e. 10 days later, it's too late to get a makeup homework turned in.
When writing up homework, you should circle (or otherwise clearly indicate) your answers.
It's good to work together, but you should write/type your own homework. Simply copying another person's work or Maple file is not acceptable.
You should come to class prepared. This means that I expect you to have done the homework, brought your book to class, and silenced your cell phone.
If you have a cell phone, please make sure that it does not ring audibly during class. Phone calls should, in general, not be answered during class. No texting during class.
• If you don't understand something, ASK
• If I'm going too fast, STOP ME. I enjoy mathematics. When I get on a roll, I tend to keep going.
• SHOW YOUR WORK. The correct answer will only be worth 1 point. I want to verify that you understand the process.
• If you are having problems understanding the material, see me or go to the learning center.
The final exam is 36% of the course grade. Homework, quizzes and exams constitute the other 64%.
How many integers $$n$$ satisfy the inequality $$-8\pi\le n\le10\pi$$?
Feb 7, 2020
Pi can be approximated to 3.14. Try approximating $$-8\pi$$ and $$10\pi$$ and see how many integers are between them. |
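Following that hint, the count is easy to verify by brute force (a quick sketch):

```python
import math

# -8*pi is about -25.13 and 10*pi is about 31.42, so scan a safely wide range.
count = sum(1 for n in range(-100, 101) if -8 * math.pi <= n <= 10 * math.pi)
print(count)  # 57, the integers -25 through 31
```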
A company has introduced a process improvement that reduces processing time for each unit; output is increased by 25 percent with less material, but one additional worker is required. Under the old process, five workers could produce 60 units per hour. Labor costs are $12/hour, and material input was previously $16/unit. For the new process, material is now $10/unit. Overhead is charged at 1.6 times direct labor cost. Finished units sell for $31 each. What increase in productivity is associated with the process improvement?
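One standard reading of "productivity" here is multifactor productivity: the dollar value of output divided by the combined cost of labor, material, and overhead. That interpretation is an assumption, as is taking the new process to be 75 units/hour (60 plus 25 percent) with six workers. A sketch of the arithmetic in Python:

```python
def mfp(units, workers, material_per_unit, wage=12.0, price=31.0, overhead_rate=1.6):
    """Multifactor productivity: output value / (labor + material + overhead)."""
    labor = workers * wage
    material = units * material_per_unit
    overhead = overhead_rate * labor  # overhead charged at 1.6x direct labor
    return units * price / (labor + material + overhead)

old = mfp(units=60, workers=5, material_per_unit=16.0)
new = mfp(units=75, workers=6, material_per_unit=10.0)  # +25% output, one extra worker
print(f"old={old:.3f} new={new:.3f} increase={100 * (new / old - 1):.1f}%")
# old=1.667 new=2.481 increase=48.8%
```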
# Expected value of the product of the sum of a specific distribution
How can we find the value of the following term,
$$E[\prod_{i = 1}^{L}{\sum_{j = 1}^{K}{a_{ij}}}]$$
i.e., the expected value of the product of the sum of $a_{ij}$'s where $a_{ij}$ is a random variable drawn from a probability distribution $f(x)$. How can I compute the value for a general $f(.)$? What if $f(x) = \frac{1}{\sqrt{x}}$ and $c_1 \le x \le c_2$?
Mohsen: Care to accept one of the answers below? – Did Apr 25 '11 at 14:47
If the $a_{ij}$ are not only identically distributed but also independent, your expectaton is $(K\alpha)^L$ where $\alpha=E(a_{ij})$.
Since the independence assumption is only needed to disentangle the sums $b_i=\displaystyle\sum_{j=1}^Ka_{ij}$ but not to compute $E(b_i)=K\alpha$, this assumption can be relaxed to the $b_i$s being $L$ independent random variables.
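A quick Monte Carlo sanity check of the $(K\alpha)^L$ formula for i.i.d. $a_{ij}$; this sketch uses uniform(0,1) variables, but any distribution with a known mean works the same way:

```python
import random

def estimate(L, K, trials=200_000, seed=0):
    """Monte Carlo estimate of E[prod_{i=1..L} sum_{j=1..K} a_ij] for i.i.d. a_ij."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        prod = 1.0
        for _ in range(L):
            prod *= sum(rng.random() for _ in range(K))  # b_i = sum of K uniforms
        total += prod
    return total / trials

L, K, alpha = 3, 4, 0.5  # uniform(0, 1) has mean alpha = 0.5
print(estimate(L, K), (K * alpha) ** L)  # the estimate lands close to 8.0
```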
This is true that $E[\sum_{i = 1}^{k}a_i] = k E[a_i]$, but I can't see why $E[\prod_{i = 1}^{k}a_i] = E[a_i]^k$. $E[a_i]^k$ is the upper bound of the products, not the expected value, right? – Mohsen Apr 22 '11 at 0:08
Please ask your question using exactly the notations of your post so that I can see what step causes a problem. – Did Apr 22 '11 at 0:19
@Didier; I have the same instinct you do, but my attempts at counterexamples all fail. Is it the case that for independent variables $X$ and $Y$, $E(XY) = E(X)E(Y)$? – Carl Brannen Apr 22 '11 at 0:36
@Carl: Yes. en.wikipedia.org/wiki/… – Did Apr 22 '11 at 5:52
Let me explain my concern a bit more. Assume we have $KL$ random numbers $a_{ij}$ drawn from $f(.)$. An upper bound for the product of the sum of $a_{ij}$'s is $((\sum_{i, j}{a_{ij}})/L)^L$ (The product is maximized when the numbers are equally distributed). It is known that finding the optimal assignment of $a_{ij}$'s is NP-hard. However, the product of the sum for the optimal assignment is always less than or equal to the upper bound I mentioned. – Mohsen Apr 22 '11 at 8:12
Since $E(XY) = E(X)E(Y)$ for random and independent variables as can be seen by: $$\int_x\int_y\;xy\;f(x)g(y)\;dx\;dy = \int_x xf(x)\;dx\int_y yf(y)\;dy$$ Didier Pau's answer is correct: $(K\;E(a))^L$
题目列表
Ever a demanding reader of the fiction of others, the novelist Chase was likewise often the object of _____ analyses by his contemporaries.
Her _____ should not be confused with miserliness; as long as I have known her, she has always been willing to assist those who are in need.
For some time now, _____ has been presumed not to exist: the cynical conviction that everybody has an angle is considered wisdom.
Serling's account of his employer's reckless decision making (i) _____ that company's image as (ii) _____ bureaucracy full of wary managers.
No other contemporary poet's work has such a well-earned reputation for (i) _____, and there are few whose moral vision is so imperiously unsparing. Of late, however, the almost belligerent demands of his severe and densely forbidding poetry have taken an improbable turn. This new collection is the poet's fourth book in six years --an ample output even for poets of sunny disposition, let alone for one of such (ii) _____ over the previous 50 years. Yet for all his newfound (iii) _____, his poetry is as thorny as ever.
Managers who think that strong environmental performance will (i) _____ their company's financial performance often (ii) _____ claims that systems designed to help them manage environmental concerns are valuable tools. By contrast, managers who perceive environmental performance to be (iii) _____ to financial success may view an environmental management system as extraneous. In either situation, and whatever their perceptions, it is a manager's commitment to achieving environmental improvement rather than the mere presence of a system that determines environmental performance.
Philosophy, unlike most other subjects, does not try to extend our knowledge by discovering new information about the world. Instead it tries to deepen our understanding through (i) _____ what is already closest to us --the experiences, thoughts, concepts, and activities that make up our lives but that ordinarily escape our notice precisely because they are so familiar. Philosophy begins by finding (ii) _____ the things that are (iii) _____.
The government's implementation of a new code of ethics appeared intended to shore up the ruling party's standing with an increasingly _____ electorate at a time when the party is besieged by charges that it trades favors for campaign money.
Overlarge, uneven, and ultimately disappointing, the retrospective exhibition seems too much like special pleading for a forgotten painter of real but _____ talents.
Newspapers report that the former executive has been trying to keep a low profile since his _____ exit from the company.
In the United States between 1850 and 1880, the number of farmers continued to increase, but at a rate lower than that of the general population. Which of the following statements directly contradicts the information presented above?
# Brillouin on entropy
1. Nov 29, 2007
### LTP
I do not quite understand how Brillouin goes from $$k\cdot \Delta (\log P)$$ to $$-k\cdot \frac{p}{P_0}$$ in this context:
from "Maxwell's Demon cannot operate: Information and Entropy", L. Brillouin, 1950.
Could anybody offer a meaningful explanation?
[I added the "The entropy decrease is then"-bit because the tex wouldn't display properly.]
Last edited: Nov 29, 2007
2. Nov 29, 2007
### Chris Hillman
Gosh, why are you reading THAT?
[EDIT: In this post I was responding to an earlier version of Post #1 in this thread, in which due to what turned out to be a spelling error, the OP appeared to mention the sternly deprecated term "negentropy", which provoked me to order all hands to action stations, as it were! See my Post #5 below for further discussion of this misunderstanding.]
I hope you are not reading that paper (BTW, shouldn't you cite it properly?) because someone recommended it but only because you stumbled over it, not realizing it's a bit like stumbling over and studying a treatise on Ptolemy's model of the solar system, in ignorance of the fact that this model was discarded long ago!
Similarly, Brillouin eventually developed his ideas on information theory into a book (L. Brillouin, Science and information theory, Academic Press, 1962) which was obsolete when it came out and has long ago been tossed by mathphy researchers into the dustbin of failed scientific monographs. In particular, the concept of negentropy (you misspelled the word!), of which he made such a fuss in that book, was never a sensible quantity to define, was never taken seriously by the mathematical literati, never became standard in math/physics and nowadays is only used by persons (mostly biologists) who don't realize how silly it makes them sound (kinda like boasting about your gaily painted new donkey cart, not realizing that all your neighbors drive Ferrari roadsters).
A good place to start learning about more modern approaches might be Thomas & Cover, Elements of Information Theory, Wiley, 1991, followed by the old Sci. Am article of Charles Bennett. ("Explanations" of Maxwell's demon remain controversial to this day, but Brillouin's ideas were firmly discarded long long ago; Bennett's ideas are least still seriously discussed.)
With that out of the way, if you promise to obtain a modern book, we can discuss the underlying question (discarding the absurd notion of "negentropy", which isn't helping here or anywhere else that I know of).
Last edited: Nov 30, 2007
3. Nov 29, 2007
### LTP
Sorry, it should have been "entropy", not "netropy" (or "negentropy"). I've corrected it now [EDIT: I also corrected the formulas, so please reread].
Yes, I am reading this paper, but only as a "historical" document. I am aware that Brillouin's ideas are obsolete. I do have Bennett's article (and Landauer article on erasure).
This article, as well as the two others and numerous more, is printed in "Maxwell's demon 2" by Leff and Rex which is basically a compilation of different more or less relevant articles about Maxwell's demon, Smoluchowski's trapdoor and the Szilard engine.
Anyways, back to the original question. I'm sure Brillouin could do his math, I'm just not quite sure how :)
Last edited: Nov 29, 2007
4. Nov 30, 2007
### nrqed
he is using $$\Delta log P = log(P_0-p) - log(P_0) = log(1 - \frac{p}{P_0}) \approx -\frac{p}{P_0}$$
where the last step is the first term of the Taylor expansion of log(1-epsilon) so it's valid as long as p is much smaller than P_0.
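The quality of that first-order approximation is easy to check numerically; the values of $p/P_0$ below are arbitrary:

```python
import math

P0 = 1.0
for p in (0.1, 0.01, 0.001):
    exact = math.log(P0 - p) - math.log(P0)  # Delta log P
    approx = -p / P0                         # first-order Taylor term
    print(f"p={p}: exact={exact:.7f} approx={approx:.7f} error={exact - approx:.1e}")
```

The error shrinks quadratically in $p/P_0$, as expected for a first-order expansion.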
5. Nov 30, 2007
### Chris Hillman
I assume Nrqed (who said what I was going to say) cleared up the problem, but I can't help adding some remarks on the "negentropy" flap:
Well, as this shows, mentioning "negentropy" or "Shannon-Weaver entropy" [sic] in my presence is like waving a red flag--- I'll charge!
When you have a spare half hour, you might get a kick out of [thread=200063]this thread[/thread] (gosh, 63 threads earlier and I would have started the 200,000th PF thread since the dawn of time!) and [thread=199303]this thread[/thread], which are examples of threads in which various posters bewail a phenomenon well supported by observation, namely that few newbie PF posters seem to bother to
• write carefully, or even to obey such basic rules as checking their spelling
As the cited threads show, there has been some spirited discussion about how to try to train them to do things the scholarly way.
Last edited: Nov 30, 2007
6. Dec 1, 2007
### LTP
Ah yes, thank you.
Yes, sorry, I will take that into consideration next time.
About negentropy, Brillouin mentions in the next paragraph :)
While we are at it, could you explain why it is k*T, and not 3/2k*T in this context:
So E_light = h*v, but what is k*T_0? It can't be the thermal energy of the gas particles, since the 3/2-factor is missing, or what? |
# Homework Help: Find the force which makes the box jump
1. May 20, 2015
### kaspis245
1. The problem statement, all variables and given/known data
What force does one need to push the box of mass $m_1$ with, so that the box of mass $m_2$ jumps when the force is removed?
2. Relevant equations
Newton's second law.
3. The attempt at a solution
System, when the force is applied:
System, when the force disappears:
$F$ is the force, which is applied to the box. This force and $m_1g$ creates the spring force, namely $kx$. From the second system I can see, that the box with mass $m_2$ will jump only if $kx$ is greater than $m_2g$, therefore:
$F+m_1g=kx$
$kx=m_2g$
$F=m_2g-m_1g=g(m_2-m_1)$
Is this correct?
2. May 20, 2015
### TSny
Does your answer make sense for the case where $m_1$ = $m_2$?
When the force is removed, the top mass must move upward to a new position before the bottom mass leaves the floor. So, you need to distinguish between the value of x when F is released and the value of x when m2 leaves the floor.
3. May 20, 2015
### Staff: Mentor
You might find it useful to ask yourself these questions:
- What force (magnitude and direction) does the spring exert on m2 before the force F is applied?
- What force (magnitude and direction) must the spring exert on m2 for m2 to be lifted off the ground?
4. May 21, 2015
### kaspis245
Before the force $F$ is applied the spring exert a force equal to $m_1g$.
The spring must exert a force which is greater than $m_2g$.
I understand that my answer $F=g(m_2−m_1)$ doesn't make sense, but no matter what I do I come up with it every time.
Here's another try:
When the force $F$ is applied, mass $m_2$ is affected with a downward force of $m_2g+m_1g+F$ and an upward force of $F_N$. When the spring is released, it becomes stretched and compressed again. The spring force that pushes the boxes when the spring is stretching and the spring force that pulls the boxes when the spring is compressing is the same and equal to $m_1g+F$. So in order for the box with mass $m_2$ to jump the spring force must be at least equal to $m_2g$, therefore: $m_2g=m_1g+F$. Where is my reasoning wrong?
5. May 21, 2015
### TSny
As mentioned earlier, after F is removed $m_1$ moves upward to a new position before $m_2$ leaves the floor. So the force of the spring at the instant $m_2$ leaves the ground is not the same as the force of the spring at the instant F is removed. Let $x_1$ be the amount the spring is compressed when F is removed and let $x_2$ be the amount the spring is stretched when $m_2$ leaves the floor. Try to see why $x_1 \neq x_2$.
6. May 21, 2015
### kaspis245
I think I understand it now. This $x1≠x2$ applies, because when the spring is stretching, the force $m_1g$ opposes the motion. This follows, that the spring force which makes $m_2$ to jump is equal to $F$.
$F=m_2g$
7. May 21, 2015
### Staff: Mentor
Right, but in what direction is the force?
Again, in what direction?
In order to have the spring force change as it needs to, how much must the spring be stretched above the equilibrium point?
8. May 21, 2015
### TSny
I don't believe that is the right answer. How did you conclude that the spring force which makes $m_2$ jump is equal to $F$?
9. May 21, 2015
### kaspis245
Now I'm completely confused. I'll try to explain it one more time:
(1) When the downward force $F$ is applied, the spring exerts an upward force $m_1g+F$.
(2) When $F$ is removed, the spring causes $m_1$ to move upwards with a force $m_1g+F$.
(3) Since $m_1g$ opposes the motion, the spring will be stretched with a force $(m_1g+F)-m_1g=F$.
(4) Then, the string becomes compressed with the sum of downward forces $F+m_1g$ and since this compressed string must exert a force which is greater than $m_2g$, I am left with the same $F=g(m_2−m_1)$...
I've given each statement a number. Please tell me which of these statements are incorrect.
10. May 21, 2015
### Staff: Mentor
Can't argue with that one!
Not quite sure what you mean. The spring still exerts the upward force $m_1g+F$.
Not sure what you mean by "stretched by". When $F$ is removed, there is a net force of $F$ acting upward on $m_1$.
This confuses me. For one thing, the spring had better be stretched, not compressed, in order to lift $m_2$.
Try this. Using your own example, once the force is removed and the mass reaches its highest point, what will the force be on $m_2$? (That's what you need to set equal to $m_2 g$.)
11. May 21, 2015
### kaspis245
I see, so the fourth statement is incorrect (sorry for the mistakes, English is not my native language).
$m_1$ at its highest point is affected by two downward forces: $m_1g$ and $F$ (caused by the spring). At the same moment, $m_2$ will be affected by an upward spring force $F$ and a downward force $m_2g$. Therefore, in order for $m_2$ to jump: $F=m_2g$. I really hope you are not irritated with me.
12. May 21, 2015
### TSny
Yes, that sounds right.
It gets confusing if you use the same symbol $F$ for different things. It would be best to let $F$ stand for the unknown force that held $m_1$ down before it was released. $F$ is the unknown. If you want to refer to the force of the spring at the starting point, then maybe use $f_{sp, 1}$ and for the final point where $m_2$ starts to leave the floor the spring force would be $f_{sp, 2}$.
13. May 21, 2015
### Staff: Mentor
As TSny just pointed out, do not confuse the force applied to the mass ($F$) with the force produced by the spring.
For some reason, you never seem to answer my questions.
14. May 21, 2015
### kaspis245
Sorry, I really try to answer all questions, but it's hard since I can't fully understand the concept.
Now I am confused. Is $F=m_2g$ correct or not?
15. May 21, 2015
### TSny
What is meant by $F$, here? At what instant of time are you referring to?
16. May 21, 2015
### Staff: Mentor
In your first post, you said $F$ was the force applied to the box. If so, then no.
17. May 21, 2015
### kaspis245
$F$ is the force with which $m_1$ is pushed downwards. It's the force that the problem asks to find. It's equivalent to $f_{sp,2}$.
Please, don't tell me we start from the beginning. I just don't know what to say anymore...
Please, tell me what's wrong with this reasoning:
18. May 21, 2015
### TSny
Suggestion. Since the force of the spring can always be written as $kx$ for some value of $x$, let's agree to always use $kx$ for a spring force. (That's similar to using $mg$ for the force of gravity.) If $x$ has a certain value $x_1$, then the spring force can be written as $kx_1$. Use $F$ only for the unknown force that pushed down on $m_1$ before it was released.
Can you rewrite your argument using this notation?
19. May 21, 2015
### Staff: Mentor
OK, that's what I assumed.
Are you saying that $F$ equals the spring force?
20. May 21, 2015
### TSny
Your notation was good in the OP:
But you need to be clear regarding the value of $x$ in these equations. Does the first equation refer to the initial position where $m_1$ was being held down? If so, then you could let $x_i$ be the value of $x$ for this position. Thus, replace $x$ by $x_i$ in the first equation.
If the second equation applies to the final position when $m_1$ has risen and $m_2$ is about to leave the floor, then you should use $x_f$ for $x$ in that equation.
21. May 21, 2015
### kaspis245
Sorry, but I can't cope with your speed.
(1) When a downward force $F$ is applied to $m_1$: $kx=F+m_1g$, where $kx$ is upward spring force.
(2) When $F$ disappears, $m_1$ starts to move upwards with net force, which is equal to $kx-m_1g=(F+m_1g)-m_1g=F$. This net force can be denoted as $kx_1$.
(3) Then, when $m_1$ reaches its highest position, the following applies: $m_1$ is affected by two downward forces: $m_1g$ and $kx_1$ (caused by the spring). At the same moment, $m_2$ will be affected by an upward spring force $kx_1$ and a downward force $m_2g$.
(4) Therefore, in order for $m_2$ to jump: $kx_1=m_2g$. From (2) we know, that $kx_1=F$, so the equation can be written as $F=m_2g$.
Is it clearer?
Last edited: May 21, 2015
22. May 21, 2015
### Staff: Mentor
Only use kx to represent a spring force, not a net force on a mass. That's where you are messing up.
23. May 21, 2015
### kaspis245
How can it help? Note, that I need to find $F$.
I can do this:
From (2): $kx_1=kx-m_1g$.
In order for $m_2$ to jump: $kx_1=m_2g$, so $m_2g=kx-m_1g$.
24. May 21, 2015
### TSny
There are two different values of $x$ that you need to work with: $x_1$ and $x_2$. See the picture below. ($x$ changes as you go from position 1 to position 2 and so the spring force changes as you go from position 1 to position 2.) In position 1, $x_1$ is related to $F$ by the fact that the net force on $m_1$ is zero in this position. In position 2, $x_2$ is related to $m_2$ by the fact that the spring is just about to lift $m_2$ off the floor.
You should be able to obtain a formula for position 1 relating $x_1$ to $F$ and a formula for position 2 relating $x_2$ and $m_2$. These formulas are essentially your equations in the OP except that you need to use the symbols $x_1$ or $x_2$ when writing the formulas.
#### Attached Files:
• Jumping blocks 2.png
25. May 21, 2015
### kaspis245
These are the equations:
$F+m_1g=kx_1$
$kx_2=m_2g$
How can these equations help me? |
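The thread leaves off with these two relations. One way to connect them (an editorial sketch, not stated by the posters) is energy conservation between the moment $F$ is removed and the highest point: $\frac{1}{2}kx_1^2 = \frac{1}{2}kx_2^2 + m_1g(x_1+x_2)$, which gives $k(x_1-x_2) = 2m_1g$ and hence $F = (m_1+m_2)g$. A quick numerical check with made-up values:

```python
# Made-up numbers; g, m1, m2, k are arbitrary but the identity is general.
g = 9.8
m1, m2, k = 1.0, 2.0, 100.0

F = (m1 + m2) * g            # candidate answer for the pushing force
x1 = (F + m1 * g) / k        # compression while F is applied: F + m1*g = k*x1
# energy conservation from release to the highest point:
#   0.5*k*x1**2 = 0.5*k*x2**2 + m1*g*(x1 + x2)  =>  x2 = x1 - 2*m1*g/k
x2 = x1 - 2 * m1 * g / k

assert abs(k * x2 - m2 * g) < 1e-9   # spring pull just equals m2's weight
```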
## anonymous 5 years ago question about a complete metric space
1. anonymous
True or false? Let (X,d) be a complete metric space. If $x_0\in X$ then $X-\left\{x_0\right\}$ is never complete.
# Tips and Tricks and Shortcuts for Set Theory
## Shortcuts for Set theory
A set is a collection of some items (elements). To define a set we can simply list all the elements in curly brackets. This page is all about Tips and Tricks for Set Theory.
For Ex- To define a set A that consists of the two elements x and y, we write A = {x, y}. To say that y belongs to A, we write y ∈ A, where "∈" is pronounced "belongs to". To say that an element does not belong to a set, we use "∉".
For example – B ∉ A
The set of natural numbers is N = {1, 2, 3, …}.
• The set of integers is Z = {…, −3, −2, −1, 0, 1, 2, 3, …}.
• The set of real numbers is R.
• The set of rational numbers is Q.
Example of Subsets –
• If E= {1,4} and C = {1,4,9}, then E⊂C
• N⊂Z
• Z⊂Q
### Tips and Tricks For Set Theory
Question 1 – In a presidential election, there are four candidates; call them A, B, C and D. Based on our polling analysis, we estimate that A has a 20% chance of winning the election, while B has a 40% chance of winning. What is the probability that A or B wins the election?
Options
(a) 0.8
(b) 0.6
(c) 0.9
(d) 0.56
(e) None of these
Explanation
Notice that the events {A wins}, {B wins}, {C wins} and {D wins} are disjoint, since no two of them can occur at the same time: if A wins then B cannot win. By the third axiom of probability, the probability of the union of two disjoint events is the sum of the individual probabilities. Therefore,
P(A wins or B wins) = P({A wins} ∪ {B wins})
= P({A wins}) + P({B wins})
= 0.2 + 0.4
= 0.6
Correct option (B)
Question 2 – If you roll a fair die, what is the probability of E = {1,5}?
Options
(a) $\frac{1}{3}$
(b) $\frac{2}{8}$
(c) $\frac{2}{6}$
Explanation
Since the die is fair, all outcomes are equally likely:
P({1}) = P({2}) = … = P({6})
1 = P(S)
= P({1} ∪ {2} ∪ … ∪ {6})
= P({1}) + P({2}) + … + P({6})
= 6P({1})
Thus,
P({1}) = P({2}) = … = P({6}) = $\frac{1}{6}$
P(E) = P({1,5}) = P({1}) + P({5}) = $\frac{2}{6}$ = $\frac{1}{3}$
Correct option (A)
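Both answers can be double-checked with a few lines of Python using exact rational arithmetic (an editorial sketch, not part of the original solutions):

```python
from fractions import Fraction

# Question 2: a fair die gives each face probability 1/6.
p = Fraction(1, 6)
P_E = sum(p for face in range(1, 7) if face in {1, 5})
assert P_E == Fraction(1, 3)

# Question 1: disjoint events, so probabilities simply add.
P_A, P_B = Fraction(20, 100), Fraction(40, 100)
assert P_A + P_B == Fraction(3, 5)   # i.e. 0.6
```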
## JOURNAL ENTRY: Monday 18 April 2016
Is this the real life or is this just fantasy?
## JOURNAL ENTRY: Friday 15 April 2016
Wisdom is the realisation of choice.
## JOURNAL ENTRY: Monday 7 March 2016
It's not that I feel amazing, it's that I accept that it's OK to not feel amazing.
## JOURNAL ENTRY: Wednesday 27 January 2016
The reason I mention self-motivation is because life is, ultimately, pointless.
## JOURNAL ENTRY: Thursday 16 July 2015
I continued to practice being patient and I keep making more and more progress. |
# PesWiki.com
## PowerPedia:Electrical energy
Last edited by Andrew Munsey, updated on June 15, 2016 at 1:43 am.
Electrical energy refers to several closely related things. It can include the energy stored in an Electric field, the Potential energy of a charged particle in an electric field, and the energy used by Electricity. In these cases, the SI unit of electrical energy is the joule (J). The unit used by many electrical utility companies is the Watt-hour (Wh) or the kilowatt-hour (kWh).
#### Electric description
Electrical energy is the amount of Mechanical work that can be done by electricity. Electrical energy is related to the position of an Electric charge in an Electric field. The electric potential energy of a charge Q situated at an electric potential V is equal to the product QV. The same expression gives the Energy transformed when the charge moves through the potential difference V. The energy per unit volume of the Electric field is:
$\eta = \frac{1}{2} \epsilon |E|^2$
where $\epsilon$ is the permittivity of the medium and E is the electric field vector.
##### Usage note
Frequently, the terms electrical energy and Electric power are used interchangeably. However, in physics, and electrical engineering, "energy" and "power" have different meanings. Power is energy per unit time. In other words, the phrases "flow of power," and "consume a quantity of electric power" are both incorrect and should be changed to "flow of energy" and "consume a quantity of electrical energy."
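The distinction can be made concrete with a short calculation (the 1.5 kW and 2 h figures below are made-up example values):

```python
# Energy = power x time. Power is a rate (J/s); energy is the accumulated amount.
power_W = 1500.0                      # power in watts (joules per second)
hours = 2.0
energy_Wh = power_W * hours           # watt-hours (the utility billing unit)
energy_kWh = energy_Wh / 1000.0
energy_J = power_W * hours * 3600.0   # joules, the SI unit of energy

assert energy_kWh == 3.0
assert energy_J == 10_800_000.0       # 1 kWh = 3.6e6 J
```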
#### References and external articles
news://sci.physics
news://alt.engineering.electrical
news://sci.electronics.basics
Wikipedia: The Free Encyclopedia. Wikimedia Foundation.
Electrical energy is one of the primary forms in which energy is delivered and consumed.
## Thursday, February 12, 2015
### Information equilibrium paper (draft) (introduction and outline)
Since I apparently can't seem to sit down and write anything that isn't on a blog, I thought I'd create a few posts that I will edit in real time (feel free to comment) until I can copy and paste them into a document to put on the arXiv and/or submit to the economics e-journal (another work trip to the middle of nowhere provided some time in the evenings, and H/T to Todd Zorick for helping to motivate me).
WARNING: DRAFT: This post may be updated without any indications of changes. It will be continuously considered a draft.
Title:
Information equilibrium as an economic principle
Outline:
1. Introduction: Information theory, mathematical models of economics
2. Basic information equilibrium model: Derive the equations [link]
3. Supply and demand: Derive supply and demand, ideal and non-ideal, elasticity of demand [link, link]
4. Other ways to look at the equation: generalization of Fisher, long run neutrality, transfer from the future to the present [link, link, link]
5. Macroeconomics: The price level, changing kappa, liquidity trap, hyperinflation solutions, labor market (Okun's law), ISLM model (talk about P* model)
6. Statistical mechanics: Partition function approach, economic "entropy" and temperature
7. Entropic forces: Nominal rigidity, liquidity trap (no microeconomic representation)
8. Conclusions: A new way to look at economics, does not invalidate microeconomics and re-derives some known results from macro, speculate about maximum entropy principle for selecting which Arrow-Debreu equilibrium is realized among the many
Introduction
In the natural sciences, complex non-linear systems composed of large numbers of smaller subunits provide an opportunity to apply the tools of statistical mechanics and information theory. Lee Smolin suggested a new discipline of statistical economics to study the collective behavior of economies composed of large numbers of economic agents.
A serious impasse to this approach is the lack of well-defined, or even definable, constraints enabling the use of Lagrange multipliers, partition functions and the machinery of statistical mechanics for systems away from equilibrium or for non-physical systems. The latter -- in particular economic systems -- lack fundamental conservation laws, such as the conservation of energy, that could form the basis of these constraints.
Lee Smolin, Time and symmetry in models of economic markets arXiv:0902.4274v1 [q-fin.GN] 25 Feb 2009
In order to address this impasse, Peter Fielitz and Guenter Borchardt introduced the concept of natural information equilibrium. They produced a framework based on information equilibrium and showed it was applicable to several physical systems. The present paper seeks to apply that framework to economic systems.
Peter Fielitz and Guenter Borchardt, "A general concept of natural information equilibrium: from the ideal gas law to the K-Trumpler effect" arXiv:0905.0610v4 [physics.gen-ph] 22 Jul 2014
The idea of applying mathematical frameworks to economic systems is an old one; even the idea of applying principles from thermodynamics is an old one. Willard Gibbs -- who coined the term "statistical mechanics" -- supervised Irving Fisher's thesis, in which Fisher applied a rigorous approach to economic equilibrium.
Mathematical models of economics: Fisher, Samuelson
Fisher, Irving. Mathematical Investigations in the Theory of Value and Prices (1892)
Fisher, Irving. The Purchasing Power of Money: Its Determination and Relation to Credit, Interest, and Crises. (1911a, 1922, 2nd ed)
• quantity theory of money, equation of exchange
Samuelson, Paul. Foundations of Economic Analysis (1947)
• Introduces Lagrange multipliers for economics
• Also cited Gibbs
The specific thrust of Fielitz and Borchardt's paper is that it looks at how far you can go with the maximum entropy or information theoretic arguments without having to specify constraints. This refers to partition function constraints optimized with the use of Lagrange multipliers. In thermodynamics language it's a little more intuitive: basically the information transfer model allows you to look at thermodynamic systems without having defined a temperature (Lagrange multiplier) and without having the related constraint (that the system observables have some fixed value, i.e. equilibrium).
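As a preview of the framework (the relation below is the information equilibrium condition that section 2 derives; the notation $p$, $D$, $S$, $k$ is assumed here, not yet defined in the text), the central condition takes the form $p = dD/dS = k\,D/S$, whose general solution is $D \propto S^k$. A minimal numerical sketch:

```python
# Information equilibrium condition: p = dD/dS = k * D/S, solved by D ~ S**k.
# The exponent k (the "information transfer index") is illustrative here.
k = 1.5

def D(S):
    return S**k          # demand as a function of supply, up to a constant

def price(S):
    return k * D(S) / S  # the "price" acting as the detector of the relation

# check the differential relation numerically with central differences
h = 1e-6
for S in (1.0, 2.0, 5.0):
    dD_dS = (D(S + h) - D(S - h)) / (2 * h)
    assert abs(dD_dS - price(S)) < 1e-6
```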
Samuelson: meaningful theorems: maximization of economic agents = equilibrium conditions. This was a hypothesis from Samuelson. We don't need the equilibrium conditions (constraints), so we don't need to make this assumption, nor do we start from any notion of utility.
Samuelson didn't think thermodynamics could help out much more than he had shown:
There is really nothing more pathetic than to have an economist or a retired engineer try to force analogies between the concepts of physics and the concepts of economics. How many dreary papers have I had to referee in which the author is looking for something that corresponds to entropy or to one or another form of energy.
We hope this paper is neither pathetic nor dreary, however we do derive a quantity that corresponds to an economic entropy (actually, entropy production) of an economy that goes as $\Delta S \sim \log N!$ where $N$ is nominal output in section 6.
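The scaling of that entropy term can be sanity-checked numerically with Stirling's approximation (the value of $N$ below is arbitrary):

```python
from math import lgamma, log

# Delta S ~ log N! grows like N log N - N (Stirling's approximation),
# so the "entropy production" is nearly extensive in nominal output N.
N = 1_000_000                      # arbitrary illustrative value
log_factorial = lgamma(N + 1)      # log(N!) computed without overflow
stirling = N * log(N) - N

assert abs(log_factorial - stirling) / log_factorial < 1e-3
```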
A word of caution before proceeding; the term "information" is somewhat overloaded across various technical fields. Our use of the word information differs from its more typical usage in economics, such as in information economics or e.g. perfect information in game theory. Instead of focusing on a board position in chess, we are assuming all possible board positions (even potentially some impossible ones such as those including three kings). The definition of information we use is the definition required when specifying a random chess board out of all possible chess positions, and it comes from Hartley and Shannon. It is a quantity measured in bits (or nats), and has a direct connection to probability.
This is in contrast to e.g. Akerlof information asymmetry where knowledge of the quality of a vehicle is better known to the seller than the buyer. We can see that this is a different use of the term information -- how many bits this quality score requires to store is irrelevant to Akerlof's argument. The perfect information in a chess board $C$ represents $I(C) \lt 64 \log_{2} 13 \simeq 237$ bits; this quantity is irrelevant in a game theory analysis of chess.
Akerlof, George A. (1970). "The Market for 'Lemons': Quality Uncertainty and the Market Mechanism". Quarterly Journal of Economics (The MIT Press) 84 (3): 488–500.
Hartley, R.V.L., "Transmission of Information", Bell System Technical Journal, Volume 7, Number 3, pp. 535–563, (July 1928).
Claude E. Shannon: A Mathematical Theory of Communication, Bell System Technical Journal, Vol. 27, pp. 379–423, 623–656, 1948.
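The chess-board figure quoted above is a one-line Hartley-information computation:

```python
from math import log2

# 13 states per square (6 piece types x 2 colors + empty) on 64 squares;
# an overcount of legal positions, hence the inequality in the text.
bits = 64 * log2(13)

assert 236 < bits < 238   # ~237 bits, matching the bound quoted
```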
We propose the idea that information equilibrium should be used as a guiding principle in economics, and organize this paper as follows. We will begin by introducing and deriving the primary equations of the information equilibrium framework, and proceed to show how the framework can be understood in terms of the general market forces of supply and demand. This framework will also provide a definition of the regime where market forces fail to reach equilibrium through information loss.
Since the framework itself is agnostic about the goods and services sold or the behaviors of the relevant economic agents, the generalization from widgets in a single market to an economy composed of a large number of markets is straightforward. We will describe macroeconomics, and demonstrate the effectiveness of the principle of information equilibrium empirically. In particular we will address the price level and the labor market where we show that information equilibrium leads to well-known stylized facts in economics. The quantity theory of money will be shown to be an approximation to information equilibrium when inflation is high, and Okun's law will be shown to follow from information equilibrium.
Lastly, we establish an economic partition function, define a concept of economic entropy and discuss how nominal rigidity and so-called liquidity traps may be best understood as entropic forces for which there are no microfoundations.
Okun, Arthur, M, Potential GNP, its measurement and significance (1962)
1. I will include the technical details of the numerical work in an appendix
2. I think the part I was really avoiding was digging up all the references ...
3. Yes it is all a terrible pain, going through all the hoops you need to in order to write a true scientific report. However, it is essential, and doing so will help you clarify and solidify your thoughts- not to mention having brilliant peer reviewers pick apart your arguments. Keep up the good work!!
1. Ha! I've written several papers (e.g. here) -- it's just that there doesn't seem to be a centralized database or Bibtex repositories for citing economics papers as far as I know :)
4. One more thing, I would like to suggest that one way you could prove the relative utility of ITM would be to point out how it seems to do a much better job of explaining the (perhaps nonexistent) Phillips curve than anything else that has been proposed. You did a series of blog posts about the Phillips curve, basically debunking it, and then (or perhaps before) showing how ITM could explain the inflation/unemployment relationship much better. I think that would be a nice experimental verification of the likely utility of ITM, in that it can actually bring something new to economics that other approaches can't. And all with just a few degrees of freedom...
1. I'd considered that -- however one of the problems with addressing the Phillips curve is that it requires addressing why empirical unemployment has large fluctuations relative to the model and would involve addressing non-ideal information transfer. I basically decided to address non-ideal information transfer (and things like the KL divergence) with prices falling below the ideal price and other aspects to a separate paper.
Likewise, I also wanted to leave the discussion of interest rates (and similar non-ideal information transfer along with more about the liquidity trap and the whole post about the effects that move interest rates) to another paper.
I also wanted to keep the number of things that go against fundamental principles held by large swaths of economists to a minimum for the first paper :)
2. Understandable. However, paraphrasing Machiavelli, you can never avoid war, only delay it or hasten it to your advantage or disadvantage. And make no mistake my friend, you are on the warpath... (in a good way).
3. And here comes the counter-attack...
http://noahpinionblog.blogspot.com/2015/02/why-do-non-experts-think-they-know.html
4. What's funny is that Noah seems to forget his own problems he has with macro ... Or for some reason he seems to think they are objections only trained experts could have. In fact, pretty much anyone with a basic grasp of math in a technical field could come up with those same objections: the HP filter, the lack of informativeness of macro data to support complex models, Atherya's assumptions economists "like" -- these are not highly technical economic objections. |
ElGamal is a public-key cryptosystem developed by Taher ElGamal in 1985. It provides an alternative to RSA for public-key encryption: the security of RSA depends on the (presumed) difficulty of factoring large integers, while the strength of ElGamal lies in the difficulty of computing discrete logarithms in a cyclic group. That is, even if we know $g^a$ and $g^k$, it is extremely difficult to compute $g^{ak}$.

The scheme has two variants, encryption and digital signatures. The ElGamal signature scheme, also invented by Taher ElGamal, must not be confused with ElGamal encryption. Nor should ElGamal be confused with Diffie-Hellman (DH): DH is a key agreement algorithm that enables two parties to agree on a common shared secret, which can then be used in a symmetric algorithm like AES, whereas ElGamal is an asymmetric encryption algorithm for communicating between two parties and encrypting the message.

The first step of ElGamal encryption is to create and distribute the public and private keys: obtain a group description $(G, q, g)$, choose a private key $x$, and publish the public key $y$. To encrypt a message $m$, the sender chooses a random $r$ and computes

$c_1 \equiv g^r \bmod p$, $\quad c_2 \equiv m\,y^r \bmod p$.

Because $r$ is chosen freshly at random, ElGamal is one of the few probabilistic encryption schemes: it utilizes randomization in the encryption process, so encrypting the same plaintext twice yields different ciphertexts. ElGamal encryption is, however, unconditionally malleable, and therefore not secure under a chosen-ciphertext attack: given an encryption of some (possibly unknown) message $m$, one can easily construct a valid encryption of a related message.

Example 4.7. Let the public ElGamal cryptosystem be $(F_p, y, x) = (F_{23}, 7, 4)$, and assume the encrypted pair $(r, t) = (21, 11)$.
Similar to the the ElGamal signature scheme must not be confused with encryption. November 2020, at 02:15 ( UTC ) account on GitHub, q, G ),. By Taher ElGamal in 1985 which weâll learn today ) encryption adhere to the RSA depends on the ( )... R is by the program itself random lies in the difficulty of calculating discrete logarithms DLP. Ø¨ØØ§Ø¬Ø© ÙÙØªÙØ³ÙØ¹.شار٠ÙÙ ØªØØ±ÙØ±ÙØ§, its Security has ⦠ElGamal encryption Schnorr signature PointchevalâStern signature algorithm References this page last... Let the public ElGamal cryptosystem, ( F 23,7, 4 ) secret that can be used subsequently in symmetric. ÙÙØªùØ³ÙØ¹.شار٠ÙÙ ØªØØ±ÙØ±ÙØ§ diffie-hellman enables two parties and encrypting the message being encrypted is short cyber-security enhancement of discrete... As difficult as copy and paste because both RSA encryption and Digital Signatures ( weâll..., 11 ) a numerical example conï¬rms that the proposed method correctly for... To lubux/ecelgamal development by creating an account on GitHub encryption is unconditionally malleable, therefore! Calculating discrete logarithms ( DLP Problem ) that the proposed method correctly works for the enhancement. Randomization in the difficulty of calculating discrete logarithms ( DLP Problem ) DH ) is a key algorithm. For example, given an encryption of some ( possibly unknown ) message, can... Is not secure under chosen ciphertext attack which weâll learn today ) correctly works for cyber-security... Agreement algorithm, ElGamal an asymmetric encryption algorithm the ElGamal signature scheme must not confused. Encryption of some ( possibly unknown ) message, one can elgamal encryption example construct a valid encryption parties and encrypting message... In 1985 secret that can be used subsequently in a symmetric algorithm like AES ElGamal is! Is by the program itself random: encryption and Digital Signatures ( which learn. 
Gamal encryption scheme has been proposed several years ago and is one of the few probabilistic encryption schemes utilizes... تø³Øªø¹Ù Ù Ù٠تشÙÙØ± اÙÙ ÙØ§ØªÙØ Ø§ÙÙ ÙØªÙØØ© ÙÙØ¹Ø§Ù Ø© lubux/ecelgamal development by creating account. Is an example of public-key or asymmetric cryptography is unconditionally malleable, and elgamal encryption example is not secure under ciphertext! عا٠1985 Ø ØªØ³ØªØ¹Ù Ù Ù٠عا٠1985 Ø ØªØ³ØªØ¹Ù Ù Ù٠تشÙÙØ± اÙ٠اÙÙ... Ø§ÙØ¬Ù Ù Ù٠عا٠1985 Ø ØªØ³ØªØ¹Ù Ù Ù٠تشÙÙØ± اÙÙ ÙØ§ØªÙØ Ø§ÙÙ ÙØªÙØØ© ÙÙØ¹Ø§Ù Ø© secret can... Y, x ) = ( F 23,7, 4 ) ÙÙ ÙØ°Ø§ اÙÙ Ø¨ØØ§Ø¬Ø©. To obtain ( G, q, G ) ElGamal algorithm provides alternative! To the step of distributing ballots and verifying voter identity utilizes randomization in difficulty! Encryption algorithm ÙØªÙØØ© ÙÙØ¹Ø§Ù Ø© shared secret that can be used subsequently in a symmetric algorithm AES... Similar to the RSA for public key encryption عا٠1985 Ø ØªØ³ØªØ¹Ù Ù Ù٠تشÙÙØ± اÙÙ ÙØ§ØªÙØ ÙØªÙØØ©! Symmetric algorithm like AES ) = ( 21, 11 ) example of public-key or asymmetric cryptography of... At this elgamal encryption example r is by the program itself random agreement algorithm, ElGamal an asymmetric encryption algorithm the public! Account on GitHub, 4 ) Ù٠عا٠1985 Ø ØªØ³ØªØ¹Ù Ù Ù٠تشÙÙØ± اÙÙ ÙØ§ØªÙØ Ø§ÙÙ ÙØªÙØØ© Ø©! Algorithm References this page was last edited on 18 November 2020, at 02:15 ( UTC ) of encryption! Itself random unknown ) message, one can easily construct a valid encryption asymmetric algorithm... |
# University of Twente Student Theses
## Design of a flexure-based tip-tilt-piston mechanism with high support stiffness
Schilder, W. P. (2019) Design of a flexure-based tip-tilt-piston mechanism with high support stiffness.
Abstract: Flexure-based Tip-Tilt-Piston (TTP) mechanisms are often found in optical applications like micro-mirror arrays or large fast-steering mirrors. In this paper the design of a new TTP mechanism with a high support stiffness over a translational stroke of $\pm$ 4 mm and a rotational stroke of $\pm$ 0.04 rad is presented. Parametric models of multiple concepts with various levels of complexity are generated. By means of optimization, a necessary support stiffness is enforced while the actuation stiffness is minimized. During shape optimization, penalty constraints are applied to the parasitic eigenfrequencies, the occurring stress, and the drop in support stiffness. Analytic and flexible multibody modelling approaches are used for efficient function evaluations in the optimization algorithm. Additional efficiency is gained by exploiting symmetry in the concepts and choosing an optimal test trajectory. For validation of the results, finite element analyses are conducted. This resulted in a design with an eigenfrequency of 31 Hz corresponding to tip-tilt motion and 15 Hz for piston motion. The first parasitic eigenfrequency corresponding to support stiffness is found at 734 Hz for a payload of 1 kg.

Item Type: Essay (Master)
Faculty: ET: Engineering Technology
Subject: 52 mechanical engineering
Programme: Mechanical Engineering MSc (60439)
Link to this item: http://purl.utwente.nl/essays/77959
# False-Name-Proof Facility Location on Discrete Structures
We consider the problem of locating a single facility on a vertex in a given graph based on agents' preferences, where the domain of the preferences is either single-peaked or single-dipped. Our main interest is the existence of deterministic social choice functions (SCFs) that are Pareto efficient and false-name-proof, i.e., resistant to fake votes. We show that regardless of whether preferences are single-peaked or single-dipped, such an SCF exists (i) for any tree graph, and (ii) for a cycle graph if and only if its length is less than six. We also show that when the preferences are single-peaked, such an SCF exists for any ladder (i.e., 2-by-m grid) graph, and does not exist for any larger hypergrid.
## 1 Introduction
Social choice theory analyzes how collective decisions are made based on the preferences of individual agents. Its typical application field is voting, where each agent reports a preference ordering over a set of alternatives and a social choice function (SCF) selects one. One of the most well-studied criteria for SCFs is robustness to agents’ manipulations. An SCF is truthful if no agent can benefit by telling a lie, and false-name-proof if no agent can benefit by casting multiple votes. The Gibbard-Satterthwaite theorem implies that any deterministic, truthful, and Pareto efficient (PE) SCF must be dictatorial. Also, any false-name-proof (FNP) SCF must be randomized [7].
Overcoming such negative results has been a promising research direction. One of the most popular approaches is to restrict agents’ preferences. For example, when their preferences are restricted to being single-peaked, the well-known median voter schemes are truthful, PE, and anonymous [17], and a strict subclass of them is also FNP [27]. The model where agents’ preferences are single-peaked has also been called the facility location problem, where each agent has an ideal point on an interval, e.g., on a street, and the SCF locates a public good, e.g., a train station, to which agents want to stay close.
Moulin [17], as well as many other works on facility location problems, considered an interval as the set of possible alternatives, where any point in the interval can be chosen by an SCF. In several practical situations, however, the set of possible alternatives is discrete and has a slightly more complex underlying network structure, which the agents’ preferences also respect. For example, in multi-criteria voting with two criteria, each of which has only three options, the underlying network is a three-by-three grid. When we need to choose a time-slot to organize a joint meeting, the problem resembles choosing a point on a discrete cycle. Dokow et al. [8] studied truthful SCFs on discrete lines and cycles. Ono et al. [20] considered FNP SCFs on a discrete line. However, there have been very few works on FNP SCFs on more complex structures (see Related Works section).
In this paper we tackle the following question: for which graph structures does an FNP and PE SCF exist? When the mechanism designer can arbitrarily modify the network structure of the set of possible alternatives, the problem is simplified. The network structure, however, is a metaphor of a common feature among agents’ preferences, where modifying the network structure equals changing the domain of agents’ preferences. This is almost impossible in practice because agents’ preferences are their own private information. The mechanism designer, therefore, first faces the problem of verifying whether, under a given network structure (or equally, a given preference domain), a desirable SCF exists.
Locating a bad is another possible extension of the facility location problem, where an SCF is required to locate a public bad, e.g., a nuclear plant or a disposal station, which each agent wants to avoid. Agents’ preferences are therefore assumed to be single-dipped, which is sometimes called obnoxious. Actually, some existing works have studied truthful facility location with single-dipped preferences [5]. Nevertheless, to the best of our knowledge, no work has dealt with both false-name-proofness and structures more complex than a path, such as cycles.
Table 1 summarizes our contribution. Regardless of whether the preferences are single-peaked or single-dipped, there is an FNP and PE SCF for any tree graph and any cycle graph of length less than six, and there is no such SCF for any larger cycle graph. For hypergrid graphs, when preferences are single-peaked, such an SCF exists if and only if the given hypergrid graph is a ladder, i.e., it has dimension two and at least one of its two constituent paths has at most two vertices.
## 2 Related Works
Moulin [17] proposed generalized median voter schemes, which are the only deterministic, truthful, PE, and anonymous SCFs. Procaccia and Tennenholtz [21] proposed a general framework of approximate mechanism design, which evaluates the worst-case performance of truthful SCFs from the perspective of competitive ratio. Recently, some models for locating multiple heterogeneous facilities have also been studied [23, 11, 3]. Wada et al. [29] considered agents who dynamically arrive and depart. Some research also considered facility location on grids [25, 9] and cycles [1, 2, 8]. Melo et al. [16] overviewed applications in practical decision making.
Over the last decade, false-name-proofness has been scrutinized in various mechanism design problems [30, 4, 26, 28], as a refinement of truthfulness for open and anonymous environments such as the internet. Bu [6] clarified a connection between false-name-proofness and population monotonicity in general social choice. Lesca et al. [14] also addressed FNP SCFs that are associated with monetary compensation. Sonoda et al. [24] considered the case of locating two homogeneous facilities. Ono et al. [20] studied some discrete structures, but focused on randomized SCFs.
One of the most similar works to this paper is Nehama et al. [18], who also clarified the network structures under which FNP and PE SCFs exist for single-peaked preferences. One clear difference from ours is that their paper proposed a new class of graphs, called ZV-line graphs, as a generalization of path graphs, while in our paper we investigate well-known existing structures, namely tree, hypergrid, and cycle graphs. ZV-line graphs contain any tree and ladder (i.e., $2 \times m$-grid for arbitrary $m$), but do not cover any other graphs considered in this paper, such as larger (hyper-)grid graphs and cycle graphs with lengths over three.
Locating a public bad has also been widely studied in both economics and computer science fields. Manjunath [15] characterized truthful SCFs on an interval. Lahiri et al. [13] studied the model for locating two public bads. Feigenbaum and Sethuraman [10] considered the cases where single-peaked and single-dipped preferences coexist. Nevertheless, all of these works just focused on truthful SCFs. To the best of our knowledge, this paper is the very first work on FNP facility location with single-dipped preferences.
## 3 Preliminaries
Let $G = (V, E)$ be an undirected, connected graph, defined by the set $V$ of vertices and the set $E$ of edges. The distance function $d$ is such that for any $u, v \in V$, $d(u, v) = |sp(u, v)|$, where $sp(u, v)$ is the shortest path between $u$ and $v$. We say that a graph $G$ has another graph $G'$ as a distance-preserving induced subgraph if $G$ has $G'$ as an induced subgraph and for any pair $u', v'$ of vertices of $G'$ and their corresponding vertices $u, v$ of $G$, $d_{G'}(u', v') = d_G(u, v)$ holds.
In this paper we focus on three classes of graphs, namely tree, cycle, and hypergrid. A tree graph is an undirected, connected, and acyclic graph. A special case of tree graphs is called a path graph, in which only two vertices have a degree of one and all the others have a degree of two. Indeed, tree graphs are the simplest generalization of path graphs, so most of the properties of path graphs, such as the uniqueness of the shortest path between two vertices, carry over to tree graphs. A cycle graph is an undirected and connected graph that consists of a single cycle. When a cycle graph has $n$ vertices, we refer to it as $C_n$, and its vertices are labeled, in a counter-clockwise order, from $0$ to $n - 1$.
A hypergrid graph is a Cartesian product of more than one path graph. When a hypergrid is a Cartesian product of $d$ path graphs, we call it a $d$-dimensional ($d$-D, in short) grid. In this paper, a 2-D grid is sometimes represented by the number of vertices on each path, as an $m \times n$-grid. In a given $d$-D grid graph, each vertex is represented as a $d$-tuple $(x_1, \ldots, x_d)$. Note that the 2-D $2 \times 2$-grid is the cycle graph $C_4$.
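For illustration, a $d$-D grid can be generated programmatically as a Cartesian product of paths; the following Python sketch (the helper name `hypergrid` and the tuple encoding are ours, chosen for illustration) also checks that the $2 \times 2$-grid is indeed the cycle $C_4$:

```python
from itertools import product

def hypergrid(dims):
    """Adjacency dict of the Cartesian product of paths with dims[i] vertices each.
    Vertices are d-tuples; two vertices are adjacent iff they differ by exactly 1
    in exactly one coordinate."""
    verts = list(product(*(range(n) for n in dims)))
    adj = {v: [] for v in verts}
    for v in verts:
        for i in range(len(dims)):
            if v[i] + 1 < dims[i]:
                w = v[:i] + (v[i] + 1,) + v[i + 1:]
                adj[v].append(w)   # each edge is added once, from its lower endpoint
                adj[w].append(v)
    return adj

# The 2x2-grid is the cycle C4: four vertices, each of degree two.
g = hypergrid((2, 2))
assert len(g) == 4 and all(len(nbrs) == 2 for nbrs in g.values())
```

The same generator yields the binary cube as `hypergrid((2, 2, 2))`, which is used later in the proof of Theorem 6.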
Let $N$ be the set of potential agents, and let $S \subseteq N$ be a set of participating agents. Each agent $i \in S$ has a type $\theta_i \in V$. When agent $i$ has type $\theta_i$, agent $i$ is located on vertex $\theta_i$. Let $\theta = (\theta_i)_{i \in S}$ denote a profile of the agents' types, and let $\theta_{-i}$ denote their profile without $i$'s. Given $\theta$, let $V(\theta)$ be the set of vertices on which at least one agent is located, i.e., $V(\theta) = \{\theta_i \mid i \in S\}$. Furthermore, given $\theta$ and vertex $v$, let $\theta^{-v}$ be the profile obtained by removing all the agents at the vertex $v$ from $\theta$. By definition, $V(\theta^{-v}) = V(\theta) \setminus \{v\}$.
Given $G$ and $v \in V$, let $\succsim_v$ be the preference of the agent located on vertex $v$ over the set $V$ of alternatives, where $\succ_v$ and $\sim_v$ indicate the strict and indifferent parts of $\succsim_v$, respectively. A preference $\succsim_v$ is single-peaked (resp. single-dipped) under $G$ if, for any $a, b \in V$, $a \succ_v b$ if and only if $d(v, a) < d(v, b)$ (resp. $d(v, a) > d(v, b)$), and $a \sim_v b$ if and only if $d(v, a) = d(v, b)$. That is, an agent located on $v$ strictly prefers alternative $a$, which is strictly closer to (resp. farther from) $v$ than another alternative $b$, and is indifferent between these alternatives when they are the same distance from $v$. By definition, for each possible type $v$, the single-peaked (resp. single-dipped) preference is unique.
A (deterministic) SCF is a mapping from the set of possible profiles to the set of vertices. Since each agent may pretend to be multiple agents in our model, an SCF must be defined for different-sized profiles. To describe this feature, we define an SCF $f$ as a family of functions, where each $f_S$ is a mapping from $V^{|S|}$ to $V$. When a set $S$ of agents participates, the SCF uses function $f_S$ to determine the outcome. The function takes the profile $\theta$ of types jointly reported by $S$ as an input, and returns $f_S(\theta)$ as an outcome. We denote $f_S(\theta)$ as $f(\theta)$ if $S$ is clear from the context. We further assume that an SCF is anonymous, i.e., for any input $\theta$ and its permutation $\theta'$, $f(\theta) = f(\theta')$ holds.
We are now ready to define the two desirable properties of SCFs: false-name-proofness and Pareto efficiency.
###### Definition 1.
An SCF $f$ is false-name-proof (FNP) if for any $S$, $\theta$, $i \in S$, $\theta'_i$, $\Phi \subseteq N \setminus S$, and $\theta'_\Phi$, it holds that $f_S(\theta) \succsim_{\theta_i} f_{S \cup \Phi}((\theta'_i, \theta'_\Phi), \theta_{-i})$.
The set $\Phi$ indicates the set of identities added by $i$ for the manipulation. The property coincides with canonical truthfulness when $\Phi = \emptyset$, i.e., agent $i$ only uses one identity.
###### Definition 2.
An alternative $a$ Pareto dominates another alternative $b$ under $\theta$ if $a \succsim_{\theta_i} b$ for all $i \in S$ and $a \succ_{\theta_i} b$ for some $i \in S$. An SCF $f$ is Pareto efficient (PE) if for any $S$ and $\theta$, no alternative Pareto dominates $f(\theta)$.
Given $\theta$, let $PE(\theta)$ indicate the set of all alternatives that are not Pareto dominated by any alternative.
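To make the definition concrete, the following sketch (our illustration; the function names are not from the paper) computes the Pareto-efficient set for single-peaked agents, whose cost for an alternative is its graph distance from their location:

```python
from collections import deque

def distances(adj, src):
    """BFS shortest-path distances from src in an unweighted graph."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def pareto_efficient(adj, profile):
    """Vertices not Pareto dominated when each agent's cost is the distance
    from her location (single-peaked preferences)."""
    dist = {v: distances(adj, v) for v in adj}

    def dominates(b, a):
        return (all(dist[t][b] <= dist[t][a] for t in profile)
                and any(dist[t][b] < dist[t][a] for t in profile))

    return [a for a in adj if not any(dominates(b, a) for b in adj)]

# Cycle C6 with agents at vertices 0 and 2: the PE set is the arc {0, 1, 2}.
C6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(sorted(pareto_efficient(C6, [0, 2])))  # [0, 1, 2]
```

The example illustrates a general fact used repeatedly below: with single-peaked preferences, the PE set consists of vertices "between" the agents' locations.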
In general, the following theorem holds, which was recently provided in another paper by the authors [19] and justifies focusing on a special class of FNP and PE SCFs. An SCF $f$ is said to ignore duplicate ballots (or to satisfy IDB, in short) if for any pair $\theta, \theta'$, $V(\theta) = V(\theta')$ implies $f(\theta) = f(\theta')$.
###### Theorem 1 (Okada et al. [19]).
Assume there is an FNP and PE SCF that does not satisfy IDB. Then, there is also an FNP and PE SCF that satisfies IDB.
Therefore, in what follows, we focus on FNP and PE SCFs that also satisfy IDB.
## 4 Single-Peaked Preferences
In this section, we focus on single-peaked preferences, i.e., every agent prefers to have the facility closer to her. It is already known that for any tree graph, and thus for any path graph, an FNP and PE SCF exists.
###### Theorem 2 (Nehama et al. [18]).
Assume that agents’ preferences are single-peaked. For any tree graph, there is an FNP and PE SCF.
An example of such an SCF is the target rule [12], originally proposed for an interval, i.e., a continuous segment of the real line. It is shown that the target rule is FNP and PE for any tree metric [27]. Almost the same proof works for any tree graph.
In the following two subsections, we investigate the existence of such SCFs for cycle and hypergrid graphs. The two lemmata presented below are useful to prove the impossibility results for single-peaked preferences.
###### Lemma 1.
Let $G'$ be an arbitrary graph. Assume that agents’ preferences are single-peaked under $G'$ and there is no truthful and PE SCF for $G'$. Then, for any graph $G$ that contains $G'$ as a distance-preserving induced subgraph, there is no truthful and PE SCF.
###### Proof.
Since there is no such SCF for $G'$, we can find a sequence of profiles s.t. at least one agent benefits by a manipulation. Since $G$ has $G'$ as a distance-preserving induced subgraph, and any PE alternative for profiles inside $G'$ is located in $G'$ due to single-peakedness, exactly the same benefit is guaranteed for an agent by a manipulation in $G$. ∎
###### Lemma 2.
Assume that agents’ preferences are single-peaked under a graph $G$. Then, for any FNP SCF $f$, any $\theta$ s.t. $f(\theta) \in V(\theta)$, and any $v \in V(\theta) \setminus \{f(\theta)\}$, $f(\theta^{-v}) = f(\theta)$.
###### Proof.
Assume that $f(\theta^{-v}) \neq f(\theta)$. Since $f(\theta) \in V(\theta)$, there is some agent located at $f(\theta)$, who incurs a cost of zero when $\theta$ is reported and is still present when all the agents at $v$ are removed. Since $f(\theta^{-v}) \neq f(\theta)$, such an agent incurs a cost of more than zero when all the agents at $v$ are removed. Thus, the agent located at $f(\theta)$ has an incentive to add identities at $v$, which contradicts false-name-proofness. ∎
### 4.1 Single-Peaked Preferences on Cycles
In this section, we show that, under single-peaked preferences, there is an FNP and PE SCF for $C_n$ if and only if $n \leq 5$.
The if part is informally mentioned in Nehama et al. [18], but the formal proof is not given. To show the existence of such SCFs, we first define a class of SCFs, called sequential Pareto rules (SPRs). Given cycle $C_n$, an SPR has an ordering of all the alternatives in $V$. For a given input $\theta$, it sequentially checks, in the order specified by the ordering, whether the first (second, third, and so on) alternative is PE, and terminates when it finds a PE one. By definition, any SPR is automatically PE.
For a continuous circle, any truthful and PE SCF is dictatorial [22]. Since choosing such a dictator in a non-manipulable manner, when there is uncertainty on identities, is quite difficult, FNP and PE SCFs are not likely to exist for a continuous circle. Our results in this section thus demonstrate the power of the discretization of the alternative space; by discretizing the set of alternatives so that at most five alternatives exist along a cycle, we can avoid falling into the impossibility.
Dokow et al. [8] showed that any truthful and onto SCF is nearly dictatorial for large cycles. In this paper we clarify a stricter threshold on such an impossibility when agents can pretend to be multiple agents; FNP and PE SCFs exist for a cycle $C_n$ if and only if $n \leq 5$.
###### Theorem 3.
Let $C_n$ be a cycle graph s.t. $n \leq 5$. When preferences are single-peaked, there is an FNP and PE SCF.
###### Proof.
It is obvious that any SPR is FNP for $n = 3$. For $n = 4$, the SPR with an appropriate ordering is FNP. Finally, for $n = 5$, the SPR with an appropriate ordering is FNP, which was also informally mentioned in [18]. These rules are described in Fig. 1. ∎
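A sequential Pareto rule is straightforward to sketch in code. The ordering below is a hypothetical one chosen for illustration (the actual FNP orderings are those of Fig. 1), and agents' costs are cycle distances:

```python
def cycle_dist(n, u, v):
    """Shortest-path distance between vertices u and v on the cycle C_n."""
    return min((u - v) % n, (v - u) % n)

def is_pe(n, alt, profile):
    """True iff no vertex is weakly closer to every agent and strictly closer to one."""
    for b in range(n):
        if (all(cycle_dist(n, t, b) <= cycle_dist(n, t, alt) for t in profile)
                and any(cycle_dist(n, t, b) < cycle_dist(n, t, alt) for t in profile)):
            return False
    return True

def sequential_pareto_rule(n, ordering, profile):
    """Return the first alternative in `ordering` that is Pareto efficient."""
    for alt in ordering:
        if is_pe(n, alt, profile):
            return alt

# On C4 with the hypothetical ordering (0, 2, 1, 3) and agents at vertices 1 and 2,
# vertex 0 is Pareto dominated by vertex 1, so the rule skips it and returns 2.
print(sequential_pareto_rule(4, [0, 2, 1, 3], [1, 2]))  # 2
```

As Example 1 below shows, the choice of ordering matters: an SPR is PE by construction, but only carefully chosen orderings make it FNP.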
###### Theorem 4.
Let $C_n$ be a cycle graph s.t. $n \geq 6$. When preferences are single-peaked, there is no FNP and PE SCF.
The impossibility for $C_6$ is important because we will use it in the proof of Theorem 6 in the next subsection.
###### Proof.
Assume that an FNP and PE SCF $f$ exists for some $C_n$ s.t. $n \geq 6$; it suffices to consider $C_6$ for even $n$ and $C_7$ for odd $n$, since the arguments below extend to larger cycles.
For $C_6$: Consider a profile s.t. (see the left cycle of Fig. 2). Since $f$ is FNP and PE, holds. Let be a profile s.t. , i.e., the antipodal to the vertex and its two neighbors. Lemma 2 implies . We then consider another profile s.t. . From symmetry and Lemma 2, , which contradicts . Almost the same argument holds for any larger even $n$.
For $C_7$: Consider a profile s.t. (see the right cycle of Fig. 2). Since $f$ is FNP and PE, must be either or . Assume w.l.o.g. that . From Lemma 2, removing either or does not change the outcome, i.e., for s.t. , .
Then consider another profile s.t. . Since $f$ is FNP and PE, ; otherwise agents located on would add fake identities so that the outcome changes to . Similarly, ; otherwise, it must be the case that , which yields a contradiction. Thus, .
However, implies ; otherwise agents located on would add fake identities so that the outcome changes to . This also yields a contradiction. Almost the same argument holds for any larger odd $n$. ∎
One might think that an SPR associated with any possible ordering is FNP. However, the following example shows that the ordering must be carefully chosen to guarantee false-name-proofness (and truthfulness as well). Characterizing FNP and PE SCFs for a given cycle graph remains open.
###### Example 1.
Consider and an SPR associated with ordering . Assume that there are three agents, whose types are . Since , is chosen as an outcome when all the agents report truthfully, where the agent located at incurs the cost of . However, she can benefit by reporting as her type, since , and thus reduces her cost to .
### 4.2 Single-Peaked Preferences on Hypergrids
The facility location on a hypergrid graph is a reasonable simplification of multi-criteria voting [25], where each candidate has a pledge for each criterion, such as taxation and diplomacy, that is embeddable on a hypergrid. Each voter then has the most/least preferred point on the hypergrid.
In this section, we completely clarify under which condition on a given hypergrid graph an FNP and PE SCF exists when agents’ preferences are single-peaked.
It is already known that, when preferences are single-peaked, an FNP and PE SCF exists for any ladder, i.e., $2 \times m$-grid [18]. Our main contribution in this subsection, Theorem 5, complements their result; no such SCF exists for any other 2-D grid. Theorem 6 further shows that this impossibility carries over into any $d$-D grid with $d \geq 3$.
###### Theorem 5.
Let $G$ be an $m \times n$-grid, where $m, n \geq 3$. When preferences are single-peaked, there is no FNP and PE SCF.
###### Proof.
Lemma 4 below shows that, for the $3 \times 3$-grid, there is no FNP and PE SCF. Since any $m \times n$-grid with $m, n \geq 3$ contains the $3 \times 3$-grid as a distance-preserving induced subgraph, the impossibility carries over according to Lemma 1. ∎
###### Lemma 3.
Let $G$ be the $3 \times 3$-grid. Assume that agents’ preferences are single-peaked under $G$ and there is an FNP and PE SCF $f$. Then, for any $\theta$ s.t. $V(\theta) = V$, $f(\theta)$ must be one of the four corners of $G$.
###### Proof.
Assume w.l.o.g. that . We construct a profile s.t. . Since $f$ is FNP, . We also construct another profile s.t. . Since $f$ is FNP, . Finally, let be the profile constructed by removing all the agents located on , , and . By applying Lemma 2 to those profiles, we obtain both and , which yield a contradiction. ∎
###### Lemma 4.
Let $G$ be the 2-D $3 \times 3$-grid. When preferences are single-peaked, there is no FNP and PE SCF.
###### Proof.
Assume that an FNP and PE SCF exists for the -grid. From Lemmata 2 and 3, for any s.t. , holds. From symmetry, assume w.l.o.g. that (see the top-left grid in Fig. 3).
We now remove all the agents located at from the above profile , and refer to them as . Since is FNP and PE, . Here, let be the profile that further removes all the agents located at , , , and from . Note that , and thus holds by the same argument. We also consider another profile, , which is obtained by removing all the agents at , , and from . Note that , and by the same argument.
Then we construct by removing all the agents in the vertices except for , , and from . Since is reachable from both and , Lemma 2 implies and , which yields a contradiction. ∎
###### Theorem 6.
Let $G$ be an arbitrary $d$-D grid with $d \geq 3$. When preferences are single-peaked, there is no FNP and PE SCF.
###### Proof.
We can easily observe that the three-dimensional $2 \times 2 \times 2$-grid, a.k.a. the binary cube, contains $C_6$ as a distance-preserving induced subgraph. As we showed in Theorem 4 in the previous section, there is no FNP and PE SCF for $C_6$. Therefore, by Lemma 1, no such SCF exists for the $2 \times 2 \times 2$-grid. Any other larger grid (possibly of more than three dimensions) contains the three-dimensional $2 \times 2 \times 2$-grid, and thus the impossibility is carried over by Lemma 1. ∎
## 5 Single-Dipped Preferences
### 5.1 Single-Dipped Preferences on Trees
For the case of a public bad, where agents’ preferences are single-dipped, we can find an FNP and PE SCF.
###### Theorem 7.
Let $G$ be an arbitrary tree graph. When preferences are single-dipped, there is an FNP and PE SCF.
###### Proof.
Consider the SCF described as follows. First, choose an arbitrary longest path of a given tree, whose extremes are called $a$ and $b$. Then, return $a$ as an outcome if at least one agent strictly prefers $a$ to $b$; otherwise return $b$ as an outcome.
For each agent $i$, either $a$ or $b$ is one of the most preferred alternatives; otherwise, the path from the most preferred point of $i$ to one of the two extremes would be longer than the chosen longest path. In Fig. 4, the agents at the bottom-left gray vertex most prefer one extreme, while the agents at the middle or top-right gray vertices most prefer the other. It is therefore obvious that the above SCF is PE, since either $a$ or $b$ is the most preferred alternative for each agent, and the choice between $a$ and $b$ is made by a unanimous voting, guaranteeing that the chosen alternative is the most preferred for at least one agent. Furthermore, such a unanimous voting over two alternatives is obviously FNP. ∎
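The SCF from this proof can be sketched as follows (`bfs_far` and `two_extremes_rule` are illustrative names of our own). The two extremes of a longest path in a tree are found by the standard double-BFS trick, and the choice between them is a unanimous vote:

```python
from collections import deque

def bfs_far(adj, src):
    """Return a farthest vertex from src and the distance map (unweighted BFS)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    far = max(dist, key=dist.get)
    return far, dist

def two_extremes_rule(adj, profile):
    """Locate a public bad at an extreme of a longest path of a tree.
    Agents are single-dipped: an agent at t prefers whichever extreme is
    farther from t; extreme a wins iff some agent strictly prefers it."""
    a, _ = bfs_far(adj, next(iter(adj)))   # double BFS: a is one extreme
    b, dist_a = bfs_far(adj, a)            # b is the other extreme
    _, dist_b = bfs_far(adj, b)
    if any(dist_a[t] > dist_b[t] for t in profile):
        return a
    return b

# Path 0-1-2-3 with a single agent at vertex 0: the bad goes to the far end, 3.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(two_extremes_rule(path, [0]))  # 3
```

Since the rule reduces to a unanimous vote over two fixed alternatives, adding fake identities cannot change the outcome in a direction a manipulator prefers.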
### 5.2 Single-Dipped Preferences on Cycles
We next consider locating a public bad on a cycle. Single-dipped preferences resemble single-peaked preferences on cycle graphs, especially on sufficiently large ones. Actually, in this subsection we provide almost the same results as in the case of single-peaked preferences.
###### Theorem 8.
Let be a cycle graph s.t. . When preferences are single-dipped, there is an FNP and PE SCF.
###### Proof.
For , it is easy to see that any SPR is FNP. For , the domain of single-dipped preferences coincides with the domain of single-peaked preferences, since the point diagonal from a dip point can be considered as a peak point. Therefore, the SPR with ordering is FNP, as shown in Theorem 3. Finally, for , the SPR with ordering is FNP. ∎
###### Theorem 9.
Let be a cycle graph s.t. . When preferences are single-dipped, there is no FNP and PE SCF.
###### Proof.
The identical proof of Theorem 4 applies for any even , since a single-dipped preference over a cycle of even length, with a dip point , coincides with the single-peaked one with the peak point that is antipodal to .
We therefore focus on odd . Assume that an FNP and PE SCF exists for, say, , and w.l.o.g. that for any s.t. .
Consider a profile s.t. . Since is FNP and PE, must be either or ; otherwise some agent has incentive to add fake identities. Furthermore, for the profile s.t. , holds. Therefore, holds; otherwise the agent located at has incentive to add fake identity on , which moves the facility to either or .
On the other hand, for another profile s.t. , must be either or due to symmetry. Therefore, for the above , must hold, which contradicts the condition of . Almost the same argument holds for any larger odd number . ∎
## 6 Conclusions
We tackled the question of whether there exists an FNP and PE SCF for the facility location problem on a given graph. We gave complete answers for path, tree, and cycle graphs, regardless of whether the preferences are single-peaked or single-dipped. For hypergrid graphs, an open problem remains for single-dipped preferences. When such SCFs exist, completely characterizing the class of such SCFs is important future work. Investigating randomized SCFs is another interesting direction.
• [30] Makoto Yokoo, Yuko Sakurai, and Shigeo Matsubara. The effect of false-name bids in combinatorial auctions: new fraud in internet auctions. Games and Economic Behavior, 46(1):174–188, 2004. |
Poisscdf
Definition:
$prob = poisscdf(k, lambda)$ computes the lower tail probability at a given value $k$ for a Poisson distribution with parameter $\lambda$.
The lower tailed probability:
$P(X\leq k)=\sum_{i=0}^{k}P(X=i)=\sum_{i=0}^{k}e^{-\lambda }\frac{\lambda ^{i}}{i!}$
Parameters:
k (input,int)
The integer value at which the probability is evaluated; $k\geq 0$.
lambda (input,double)
The parameter $\lambda$ of the Poisson distribution; $0\leq \lambda \leq 10^{6}$.
prob (output,double)
The lower tail probability $P(X\leq k)$.
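As a sketch (our own illustration, not the library's source), the definition above can be implemented by accumulating the pmf terms incrementally, which avoids computing $k!$ directly:

```python
import math

def poisscdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by summing the pmf terms.

    Each term is built from the previous one:
    P(X = i) = P(X = i-1) * lam / i, starting from P(X = 0) = exp(-lam).
    """
    if k < 0:
        return 0.0
    term = math.exp(-lam)   # P(X = 0)
    total = term
    for i in range(1, k + 1):
        term *= lam / i     # P(X = i)
        total += term
    return total
```

For example, with $\lambda = 1$ and $k = 2$ this accumulates $e^{-1}(1 + 1 + \tfrac{1}{2})$.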
# What is the function of the scales in an onion
###### Question:
What is the function of the scales in an onion
# LazyHeap data structure with $O(n)$ Insert, Delete, and Return operations
Consider a data structure called LazyHeap that supports the following operations:
• INSERT(x): Given an element $x$, insert it into the data structure. It has no cost.
• DELETE(x): Delete $x$ from the data structure. It has no cost.
• RETURN: Return an element $x$ such that its order, if the elements are sorted, satisfies:
$$k/2 - k/100 \le \mathrm{order}(x) \le k/2 + k/100\,,$$
where $k$ is the number of elements in the data structure (at the time RETURN is called). RETURN also has no cost.
Come up with a strategy for the comparisons so that the running time for any sequence of $n$ operations is less than $1000n$.
I know that sometimes you have to make comparisons between two elements so that you can perform RETURN operation correctly, where one comparison costs one unit. However, I'm not sure where to go from there. Any help would be appreciated!
• "no cost" -- in the (unrealistic) model, or do we have to ensure it (impossible)? What is the cost measure? – Raphael Dec 15 '14 at 18:44
I can't actually see an approach that would provide insert and return of the mid-element in $O(1)$ (I understand "no cost" to mean some constant cost). I have, however, some ideas that may suggest a solution.
One approach that comes to my mind is two heaps, one MIN and one MAX. The MIN heap would keep all elements greater than the current mid element (an element satisfying $\frac{k}{2} - \frac{k}{100} \le \mathrm{order}(midElement) \le \frac{k}{2} + \frac{k}{100}$) and the MAX heap would store all smaller or equal ones. On INSERT(x) we'd need to keep the heaps as balanced as possible, so we would compare x with the current mid element, add it to the proper heap (MIN if x > midElement, else MAX), and if the difference between the heaps' sizes grew larger than 1, we'd take an element from the larger one and insert it into the smaller one. However, RETURN would work in $O(\log n)$, as after returning the top of one of the heaps we would need to repair that heap. INSERT would also take $O(\log n)$, and DELETE could even be linear (select the heap in $O(1)$, get to the proper level of it (elements on lower levels are too big/too small) and search the level in $O(n)$).
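For illustration, the INSERT/RETURN part of that two-heap idea can be sketched in Python (names are mine; DELETE and the comparison-cost accounting are omitted, and each operation here costs $O(\log n)$ comparisons rather than the constant the exercise demands):

```python
import heapq

class TwoHeapMedian:
    """Lower half in a max-heap (stored negated), upper half in a min-heap."""
    def __init__(self):
        self.lo = []   # max-heap via negation: elements <= current mid
        self.hi = []   # min-heap: elements > current mid

    def insert(self, x):
        if self.lo and x > -self.lo[0]:
            heapq.heappush(self.hi, x)
        else:
            heapq.heappush(self.lo, -x)
        # Rebalance so the two sizes differ by at most one.
        if len(self.lo) > len(self.hi) + 1:
            heapq.heappush(self.hi, -heapq.heappop(self.lo))
        elif len(self.hi) > len(self.lo) + 1:
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def mid(self):
        """An element whose rank is within one of k/2."""
        return -self.lo[0] if len(self.lo) >= len(self.hi) else self.hi[0]
```

Because the size difference never exceeds one, the returned element's rank stays within one of $k/2$, comfortably inside the $\pm k/100$ window for large $k$.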
Another approach would be to keep a balanced tree (like RB-Tree or an AVL-Tree). The top element would always be the mid-element. However, here RETURN would also work in $O(log(n))$ because we'd need to reset the root after removing it. INSERT would also work in $O(log(n))$ for the insert itself + eventual balancing and so would DELETE.
Constant times required ("has no cost") bring hash tables to mind. Average INSERT would indeed take $O(1)$ and so would DELETE (if the table size and hashing function were properly designed). As for RETURN — one approach would be to track the inserted elements and keep the index of the current mid-element. If greater or smaller elements were inserted we'd have to select the "next mid-element", but this would probably require $O(n)$ time (or at least I can't see any other way of doing it). So RETURN would work in $O(1)$ (if we keep an index of the mid-element), but INSERT cost would grow to $O(n)$ (and similarly for DELETE).
# Addition, Subtraction, Multiplication and Division
It can be helpful to understand how to use different mathematical operations as they can be used every day in many different situations, just like calculating how a bag of sweets can be divided equally between a group of people.
## Addition, Subtraction, Multiplication and Division definition
Addition, subtraction, multiplication and division are all types of operations used in mathematics.
### Addition

Addition is a type of operation that results in the sum of two or more numbers. The sign that represents the operation addition is called a plus sign, and it looks like this: $+$.
### Subtraction
Subtraction is a type of operation that results in finding the difference between two numbers. The sign to represent the operation subtract is called a minus sign and it looks like this $-$.
### Multiplication
Multiplication is a type of operation that requires you to add in equal groups, multiplication results in a product. The sign that represents the operation multiplication can be called the multiplication sign and it looks like this $×$.
### Division
Division is the operation that is opposite to multiplication, it involves breaking a number down into equal parts. The sign that represents the operation division is simply called a division sign and looks like this $÷$.
## Addition, Subtraction, Multiplication and Division rules
There are different rules and methods that can be helpful when using each of these operations.
### Addition

When adding two or more numbers together you can use a method called column addition. This involves putting the numbers one above the other in a column; you then work your way from right to left, adding the numbers that are in the same column.
Calculate $122+552$
Solution:
To begin with, place the numbers one above the other in columns. Then, working from right to left, add the digits in each column:

- Units: $2+2=4$
- Tens: $2+5=7$
- Hundreds: $1+5=6$

Therefore, $122+552=674$
If the two numbers you are adding are equal to more than 10, you can carry the number over.
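The right-to-left carrying procedure described above translates directly into code; here is a sketch of the column method for non-negative whole numbers:

```python
def column_add(a, b):
    """Add two non-negative integers digit by digit, carrying over 10s."""
    da = [int(c) for c in str(a)][::-1]   # units digit first
    db = [int(c) for c in str(b)][::-1]
    out, carry = [], 0
    for i in range(max(len(da), len(db))):
        s = (da[i] if i < len(da) else 0) + (db[i] if i < len(db) else 0) + carry
        out.append(s % 10)    # digit written in this column
        carry = s // 10       # carried to the next column
    if carry:
        out.append(carry)
    return int("".join(str(d) for d in reversed(out)))
```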
### Subtraction
When subtracting two numbers, you can also use a column method; the column subtraction method. This works in the same way as the column addition method, however, you subtract the numbers rather than add them.
Calculate $538-214$
Solution:
To begin with, place the numbers one above the other, with the number you are subtracting from on top. Then, working from right to left, subtract the digits in each column:

- Units: $8-4=4$
- Tens: $3-1=2$
- Hundreds: $5-2=3$

Therefore, $538-214=324$
If the number you are subtracting is higher than the number subtracted from, you will need to take a digit from the column to the left.
### Multiplication
When multiplying two numbers together there are different methods that can be used including the grid method. This involves breaking the two numbers down and placing them into a grid. You then complete individual multiplications and then add them all together.
Calculate $23×42$
Solution:
To begin with, draw a grid, breaking each number into tens and units ($23 = 20 + 3$ and $42 = 40 + 2$). To fill out the grid, multiply the number heading each row by the number heading each column:

| × | 20 | 3 |
| --- | --- | --- |
| **40** | 800 | 120 |
| **2** | 40 | 6 |
Now you can add all of the values together to find the answer to the question, it may be easier to do this in steps:
$800+120=920$
$40+6=46$
$920+46=966$
Therefore, $23×42=966$
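The grid method is just a systematic way of summing partial products, which a short sketch makes explicit:

```python
def grid_multiply(a, b):
    """Multiply via the grid method: split each factor into place-value
    parts, multiply every pair of parts, and sum the products."""
    def parts(n):
        # e.g. 23 -> [3, 20]; zero digits contribute nothing
        return [int(d) * 10 ** i
                for i, d in enumerate(str(n)[::-1]) if d != "0"]
    return sum(pa * pb for pa in parts(a) for pb in parts(b))
```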
### Division
When dividing a number by another you can use a method called short division, this method works best when you are dividing a number by 10 or less. Short division involves dividing a number mentally into smaller stages.
Calculate $306÷9$
Solution:
To start, write the divisor 9 on the left and the dividend 306 under the division bracket. Then work through the dividend one digit at a time, from left to right:

- 9 does not go into 3, so carry the 3 over to the next digit, making 30.
- 9 goes into 30 three times ($9×3=27$) with remainder 3, so write 3 above and carry the remainder over to the 6, making 36.
- 9 goes into 36 exactly four times ($9×4=36$), so write 4 above.
Therefore, $306÷9=34$
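Short division carries each remainder onto the next digit, which can be written as a simple loop; a sketch:

```python
def short_divide(n, d):
    """Short division: work left to right through the digits of n,
    carrying each remainder onto the next digit.
    Returns (quotient, remainder)."""
    q_digits, rem = [], 0
    for digit in str(n):
        cur = rem * 10 + int(digit)   # carry the remainder over
        q_digits.append(cur // d)
        rem = cur % d
    return int("".join(map(str, q_digits))), rem
```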
## Addition, Subtraction, Multiplication and Division relationships
It is possible for operations to have relationships with each other. There is a relationship between addition and subtraction as well as a relationship between multiplication and division.
### Addition & Subtraction
Addition and subtraction can be considered the inverse of one another. This simply means that the operations are the opposite, you can undo an addition by subtracting the same number and vice versa!
### Multiplication & Division
Multiplication and division are also considered the inverse of one another, if you want to undo a multiplication you can simply divide the number.
## Addition, Subtraction, Multiplication and Division examples
Calculate $647+278$
Solution:
To begin with, place the numbers one above the other. Then, working from right to left, add the digits in each column, carrying whenever a sum reaches 10:

- Units: $7+8=15$, so write 5 and carry the 1.
- Tens: $4+7+1=12$, so write 2 and carry the 1.
- Hundreds: $6+2+1=9$.

Therefore, $647+278=925$
Calculate $732-426$
Solution:
To begin with, place the numbers one above the other, with the number you are subtracting from on top. Then, working from right to left:

- Units: 6 is bigger than 2, so borrow from the tens column: $12-6=6$.
- Tens: after the borrow, $2-2=0$.
- Hundreds: $7-4=3$.

Therefore, $732-426=306$
Calculate $53×35$
Solution:
To begin with, draw a grid, breaking each number into tens and units ($53 = 50 + 3$ and $35 = 30 + 5$), and multiply the number heading each row by the number heading each column:

| × | 50 | 3 |
| --- | --- | --- |
| **30** | 1500 | 90 |
| **5** | 250 | 15 |
Now you can add all of the values together to find the answer to the question, it may be easier to do this in steps:
$1500+90=1590$
$250+15=265$
$1590+265=1855$
Calculate $434÷7$
Solution:
Start by writing the divisor 7 on the left and the dividend 434 under the division bracket, then work left to right:

- 7 does not go into 4, so carry the 4 over to the 3, making 43.
- 7 goes into 43 six times ($7×6=42$) with remainder 1, so write 6 above and carry the remainder over to the 4, making 14.
- 7 goes into 14 exactly twice ($7×2=14$), so write 2 above.
Therefore, $434÷7=62$
## Applications of Addition, Subtraction, Multiplication and Division
These operations are often used in everyday life, let's work through some examples:
Amy has 326 stickers in her sticker collection, Claire has 213 stickers. How many stickers would they have if they combined their collections?
Solution:
Start by placing the two numbers one above the other, then add the digits column by column from right to left:

- Units: $6+3=9$
- Tens: $2+1=3$
- Hundreds: $3+2=5$

So $326+213=539$.
Therefore, if Amy and Claire combined their collections, they would have 539 stickers in the collection.
Sam has 142 sweets, he gives his friend 54, how many sweets is Sam left with?
Solution:
To find out how many sweets Sam has left, we can subtract 54 from 142. Place the two numbers one above the other and work from right to left:

- Units: 4 is bigger than 2, so borrow from the tens column: $12-4=8$.
- Tens: after the borrow, 3 is smaller than 5, so borrow from the hundreds column: $13-5=8$.

So $142-54=88$.
Therefore, Sam is left with 88 sweets.
Dave is cooking for 12 people but his recipe only serves 4. If the recipe requires 72 grams of pasta, how much pasta will Dave need?
Solution:
To find out how much pasta Dave will need for his recipe we can use the operation multiplication. Since 4 goes into 12, 3 times, Dave will need three times more than what the recipe states. To do this we can use the grid method:
| × | 70 | 2 |
| --- | --- | --- |
| **3** | 210 | 6 |
Now you can add the two numbers together:
$210+6=216$
Therefore, Dave will need 216 grams of pasta to serve 12 people.
Barbara is out for a meal with 3 friends, the bill comes to £188 and they decide to split it evenly. How much does each person pay?
Solution:
To begin with, write the problem out using the short division method. The bill came to £188 and it is being split between 4 people, therefore it can be written as follows:
$188÷4$

Now work through the dividend one digit at a time, from left to right:

- 4 does not go into 1, so carry the 1 over to the 8, making 18.
- 4 goes into 18 four times ($4×4=16$) with remainder 2, so write 4 above and carry the remainder over to the second 8, making 28.
- 4 goes into 28 exactly seven times ($4×7=28$), so write 7 above.

So $188÷4=47$.
This means that each person will need to pay £47.
## Addition, Subtraction, Multiplication and Division - Key takeaways
• There are many different types of mathematical operations, these include:
• Addition, which is an operation that results in the sum of two or more numbers.
• Subtraction, which is an operation that results in finding the difference between two numbers.
• Multiplication, which is an operation that requires you to add in equal groups, multiplication results in a product.
• Division, which is an operation that is opposite to multiplication, it involves breaking a number down into equal parts.
## Frequently Asked Questions about Addition, Subtraction, Multiplication and Division
In maths, addition, subtraction, multiplication, and division are types of operations.
Some examples of these operations include:
• 23 + 17 = 40
• 86 - 22 = 64
• 10 × 42 = 420
• 63 ÷ 7 = 9
When completing a calculation in which several operations are used, you can use a rule called BIDMAS to work out which operation to carry out first. This rule says to work in the order: brackets, indices, multiplication/division, then addition/subtraction.
The relationship between addition and subtraction is similar to the relationship between multiplication and division as they are both considered the inverse of one another.
The rules of addition, subtraction, multiplication, and division refer to the order in which you use the operations.
IMP Reference Guide develop.1a04b19ae7,2021/11/27 The Integrative Modeling Platform
IMP.EMageFit.restraints Namespace Reference
Utility functions to handle restraints. More...
## Detailed Description
Utility functions to handle restraints.
## Functions
def get_connectivity_restraint
Set a connectivity restraint for the leaves of a set of particles.
def get_em2d_restraint
Sets a restraint for comparing the model to a set of EM images.
## Function Documentation
def IMP.EMageFit.restraints.get_connectivity_restraint(particles, distance=10.0, n_pairs=1, spring_constant=1)
Set a connectivity restraint for the leaves of a set of particles.
The intended use is that each particle is a hierarchy. Each hierarchy contains leaves that are atoms, or particles that are a coarse representation of a molecule.
Note
This function is only available in Python.
Definition at line 19 of file restraints.py.
def IMP.EMageFit.restraints.get_em2d_restraint(assembly, images_selection_file, restraint_params, mode='fast', n_optimized=1)
Sets a restraint for comparing the model to a set of EM images.
Note
This function is only available in Python.
Definition at line 38 of file restraints.py. |
# zbMATH — the first resource for mathematics
Fermat’s theorem: the contribution of Fouvry. (Théorème de Fermat: la contribution de Fouvry.) (French) Zbl 0586.10024
Sémin. Bourbaki, 37e année, Vol. 1984/85, Exp. No. 648, Astérisque 133/134, 309-318 (1986).
This lecture describes the work of Adleman, Heath-Brown and Fouvry showing that the first case of Fermat’s Last Theorem holds for infinitely many primes [L. M. Adleman and D. R. Heath-Brown, Invent. Math. 79, 409–416 (1985; Zbl 0557.10034); É. Fouvry, ibid. 79, 383–407 (1985; Zbl 0557.10035)]. That is to say, there are infinitely many primes $$p$$ such that $$x^p+y^p=z^p$$ implies $$p\mid xyz$$. The work of Adleman and Heath-Brown reduces the problem to the demonstration of the hypothesis
$\#\{p\leq x: p\equiv 2\pmod 3,\;P(p-1)\geq x^{\vartheta}\}\gg x/\log x \tag{*}$
for some $$\vartheta >2/3$$ (where $$P(n)$$ denotes the greatest prime factor of $$n$$). Fouvry’s contribution is the proof of this hypothesis.
The lecture concentrates on the estimate (*) and describes the application of the Brun-Titchmarsh theorem, the Bombieri-Vinogradov Theorem, the “almost-all” version of the Brun-Titchmarsh theorem, the Rosser sieve with Iwaniec’s bilinear form for the remainder sum, Weil’s bound for Kloosterman sums, and the parity phenomenon. Two appendices present the Adleman-Heath-Brown criterion, and the connection between Kloosterman sums and modular forms.
For the entire collection see [Zbl 0577.00004].
##### MSC:
11N35 Sieves
11D41 Higher degree equations; Fermat’s equation
11L05 Gauss and Kloosterman sums; generalizations
• Aug 20th 2011, 06:26 AM
TheProphet
Let $X$ be a random variable on $(\Omega, \mathcal{A},P)$, with values in $(E,\mathcal{E})$, and distribution $P_{X}$.
Let $h : (E,\mathcal{E}) \to (\mathbb{R},\mathcal{B}(\mathbb{R}))$ be measurable.
We have that $h(X) \in \mathcal{L}^{1}(\Omega,\mathcal{A},P)$ if and only if $h \in \mathcal{L}^{1}(\mathbb{R},\mathcal{B}(\mathbb{R}) ,P_{X})$.
Shouldn't it be $\mathcal{L}^{1}(E,\mathcal{E},P_{X})$ instead? If not, why?
• Aug 20th 2011, 06:49 AM
girdav
Yes it's $\mathcal L^1(E,\mathcal E,P_X)$ (it cannot be $\mathcal L^1(\mathbb R,\mathcal B(\mathbb R),P_X)$ since the measure $P_X$ is defined on $\mathcal E$). |
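For reference, the fact behind this equivalence is the transfer (change-of-variables) theorem; a sketch in the notation of the thread:

```latex
% X : (\Omega,\mathcal{A},P) \to (E,\mathcal{E}) with distribution P_X = P \circ X^{-1},
% h : (E,\mathcal{E}) \to (\mathbb{R},\mathcal{B}(\mathbb{R})) measurable.
\int_{\Omega} |h(X(\omega))| \, P(d\omega) \;=\; \int_{E} |h(x)| \, P_X(dx),
\qquad \text{and, when either side is finite,} \qquad
\mathbb{E}[h(X)] = \int_{E} h \, dP_X .
```

So $h(X) \in \mathcal{L}^1(\Omega,\mathcal{A},P)$ if and only if $h \in \mathcal{L}^1(E,\mathcal{E},P_X)$, as girdav says.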
Unfortunately, Euler's method is not very efficient: used over multiple steps it is only an O(h) method, although it is "self-starting". Euler's method ("RK1") and Euler's half-step method ("RK2") are the junior members of a family of ODE-solving methods known as "Runge-Kutta" methods. The classical fourth-order member of this family was, by far and away, the world's most popular numerical method for over 100 years: for hand computation in the first half of the 20th century, and then for computation on digital computers in the latter half.

Runge-Kutta methods solve the initial value problem (IVP)

dy(t)/dt = f(t, y(t)),  y(t0) = y0,

which may be nonlinear, or even a system of equations (in which case y is a vector and f is a vector of functions). The solution is advanced by discretization in small steps. In general, an s-stage Runge-Kutta method can be written as

y_{n+1} = y_n + h (b1 k1 + ... + bs ks),  where  k_i = f(t_n + c_i h, y_n + h sum_{j<i} a_{ij} k_j).

The classical fourth-order method (RK4) computes four slopes k1, k2, k3, k4: k1 and k4 near the two ends of the step, k2 and k3 near the midpoint. Unlike Taylor-series methods, in which much labor is involved in finding the higher-order derivatives, RK4 requires no derivatives of f, only evaluations of f itself. Fourth order is a kind of sweet spot: each step takes more work than a step of a first-order method, but we win by having to perform far fewer steps.

A simple way to estimate the error of a fixed-step solution is to solve the problem twice, using step sizes h and h/2, and compare answers at the mesh points corresponding to the larger step size. Embedded pairs avoid this extra cost: the Runge-Kutta-Fehlberg method uses an O(h^4) method together with an O(h^5) method built from the same stages, and hence is often referred to as RKF45 (RKF45 is also the name of a MATLAB library implementing the solver of Watt and Shampine). Note that MATLAB's built-in ODE solvers are all variable-step and do not offer a fixed-step option. For stochastic differential equations the same ideas apply: a trajectory computed with the Euler-Maruyama scheme deviates considerably from one generated with the Heun scheme, while trajectories from the Heun scheme and from a Runge-Kutta method with additive noise stay rather close to each other.
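A minimal sketch of the classical RK4 step and a fixed-step driver (the names rk4_step and rk4_solve are my own, not from any particular library):

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)  # slope at the midpoint, using k1
    k3 = f(t + h / 2, y + h * k2 / 2)  # slope at the midpoint, using k2
    k4 = f(t + h, y + h * k3)          # slope at the far end of the step
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_solve(f, t0, y0, t_end, n):
    """Integrate from t0 to t_end in n equal steps; returns the final value."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# y' = y, y(0) = 1 has exact solution e^t, so y(1) should be close to e.
approx = rk4_solve(lambda t, y: y, 0.0, 1.0, 1.0, 20)
```

With only 20 steps the result agrees with e to several decimal places, which is the practical meaning of the O(h^4) global error.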
Runge-Kutta (RK) methods were developed in the late 1800s and early 1900s by Runge, Heun and Kutta. The number of stages of a Runge-Kutta method is the number of times the function f is evaluated in each step; this matters because evaluating f carries a computational cost (sometimes a high one), so methods with as few stages as possible for a given order are preferred.

Using the right solver saves time and gives more accurate results. In MATLAB, ode23 is a low-order solver: use it when integrating over small intervals or when accuracy is less important than speed. It uses the third-order Bogacki-Shampine method and adapts the local step size to satisfy a user-specified tolerance. ode45 is a higher-order (Runge-Kutta) solver and a good default for most problems; CVode and IDA likewise use variable-size steps for the integration. Because Heun's method is O(h^2) and can be paired with the O(h) Euler method for error estimation, the pair is referred to as an order 1-2 method. Error analysis drives step-size control: when the estimated error rises above a given threshold, one readjusts the step size h on the fly to restore a tolerable degree of accuracy.

For stiff problems, stability matters as much as order. A Runge-Kutta method with weights b and coefficient matrix A is said to be algebraically stable if the matrices B = diag(b_1, ..., b_s) and M = BA + A^T B - b b^T are both non-negative definite; this is a sufficient condition for B-stability.

Higher-order equations are handled by reduction to first-order systems. In an orbital-motion simulation, for example, the differential equation for r is second order while that for theta is first order; rewriting the equation for r as a pair of first-order equations lets a single Runge-Kutta routine integrate the whole system. Variants of the method also exist for delay-differential systems.
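To make the order difference concrete, here is a hedged sketch comparing forward Euler with Heun's method (an explicit RK2) on y' = -y; the function names are illustrative only:

```python
import math

def euler_solve(f, t0, y0, t_end, n):
    """Forward Euler: O(h) global error."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t += h
    return y

def heun_solve(f, t0, y0, t_end, n):
    """Heun's method (explicit RK2): predictor-corrector, O(h^2) global error."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)                 # slope at the start of the step
        k2 = f(t + h, y + h * k1)    # slope at the Euler-predicted endpoint
        y = y + (h / 2) * (k1 + k2)  # average the two slopes
        t += h
    return y

# y' = -y, y(0) = 1, exact solution e^{-t}; integrate to t = 1 with 50 steps.
f = lambda t, y: -y
exact = math.exp(-1.0)
err_euler = abs(euler_solve(f, 0.0, 1.0, 1.0, 50) - exact)
err_heun = abs(heun_solve(f, 0.0, 1.0, 1.0, 50) - exact)
```

At the same step size, Heun's error is orders of magnitude smaller than Euler's, at the cost of one extra function evaluation per step.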
Just as Simpson's rule can be seen as a higher-order extension of the trapezoid rule, Runge-Kutta has a straightforward higher-order analog. The coefficients are found by matching Taylor expansions. For a second-order method there are four unknowns but only three order conditions, so the system is underdetermined and there is a one-parameter family of second-order Runge-Kutta methods (the midpoint and Heun methods among them). For the classical fourth-order method one matches the Taylor expansion of y(t+h) through the h^4/4! term, whose coefficient y^(4) is a combination of f and its partial derivatives; crucially, the method itself never requires explicit evaluation of those derivatives of f(x, y), which is its advantage over Taylor-series methods. A Nyström modification of Runge-Kutta applies directly to second-order equations.

Runge-Kutta formulas are among the oldest and best understood schemes in numerical analysis, and the Runge-Kutta method is a far better choice than the Euler or improved Euler method in terms of both computational resources and accuracy. Moving the initial point and varying the step size shows how, by sampling from points that contain the expected trajectory, the Runge-Kutta method improves on Euler and related methods. With the emergence of stiff problems as an important application area, attention moved to implicit methods; for mildly stiff problems there are also explicit Runge-Kutta methods with an extended stability domain along the negative real axis (the Runge-Kutta-Chebyshev formulas of order 2, as offered by the well-known FORTRAN code rkc). More recently, probabilistic IVP solvers built on Runge-Kutta inherit its convergence guarantees while also returning uncertainty estimates.

Systems are handled by supplying an array of initial conditions, one for each component, and an array of functions that evaluate the necessary derivatives at any point in time; a routine such as rk4_systems then approximates the solution of the whole system by the fourth-order method. This is how second-order equations of motion are integrated in physics simulations, from a simple oscillator to an N-body space simulator that uses Runge-Kutta 4 to solve the pair of first-order equations derived from the second-order equation governing orbital motion.
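As a sketch of the reduction-to-a-system idea, the harmonic oscillator x'' = -x can be rewritten as the first-order system [x, v]' = [v, -x] and stepped with a vector RK4 (helper names here are my own):

```python
import math

def rk4_system_step(f, t, y, h):
    """One RK4 step where y is a list (a first-order system y' = f(t, y))."""
    def add(u, v, c):
        return [ui + c * vi for ui, vi in zip(u, v)]
    k1 = f(t, y)
    k2 = f(t + h / 2, add(y, k1, h / 2))
    k3 = f(t + h / 2, add(y, k2, h / 2))
    k4 = f(t + h, add(y, k3, h))
    return [yi + (h / 6) * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# x'' = -x rewritten as the system [x, v]' = [v, -x]
def oscillator(t, y):
    x, v = y
    return [v, -x]

# x(0) = 1, x'(0) = 0 gives x(t) = cos(t), so x(pi) should be near -1.
y = [1.0, 0.0]
t, h, n = 0.0, math.pi / 200, 200
for _ in range(n):
    y = rk4_system_step(oscillator, t, y, h)
    t += h
```

The same stepper handles any second-order equation of motion once it is written in first-order form; only the derivative function changes.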
A minimal library might implement only a single routine, say an rk4f function for a fourth-order fixed-step solution. Just like the Euler and midpoint methods, a Runge-Kutta method starts from an initial point and then takes a short step forward to find the next solution point. An important subtlety: the intermediate stage values are not themselves high-order approximations; it is only the final linear combination of stage vectors that is, say, fourth-order accurate. As a philosophical aside, an RK method builds up information about the solution derivatives through the computation of intermediate stages, and at the end of a step all of this information is thrown away; multistep methods keep that information around longer, at the price of not being self-starting. Interpolants can be used with Runge-Kutta formulas to find solutions between Runge-Kutta steps. For stiff problems one can switch to an implicit Runge-Kutta method (for example an implicit method of order 5, or GSL's implicit second- and fourth-order Runge-Kutta methods); stiff problems can also cause order reduction, in which a method delivers less than its classical order.

Some implementations let the user select the scheme by passing a Butcher tableau as a keyword argument (tab=tableau), so that one solver routine covers the whole explicit family; a simple implementation might use the Runge-Kutta midpoint method by default.

Typical exercises posed with these methods include: solving dy/dx - x + y = 0 with a second-order Runge-Kutta method; a two-point boundary value problem Y'' + aY' + bY + c(x) = 0 with Y = Y1 at x = 0 and Y = Y2 at x = L (which also requires a shooting or finite-difference strategy, since Runge-Kutta by itself handles initial values); a second-order initial value problem such as y''(t) = 1/y with y(0) = 0 and y'(0) = 10, rewritten as a first-order system before applying the solver; and boundary-layer problems such as the Blasius equation, for which we often need to resort to computer methods.
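A sketch of a tableau-driven explicit RK step, assuming a Butcher tableau (A, b, c) with A strictly lower triangular; the function name and calling convention are illustrative, not a specific library's API:

```python
def rk_tableau_step(f, t, y, h, A, b, c):
    """One explicit Runge-Kutta step defined entirely by its Butcher tableau."""
    s = len(b)
    k = []
    for i in range(s):
        # Stage value uses only previously computed slopes (explicit method).
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(bi * ki for bi, ki in zip(b, k))

# Classical RK4 tableau
A4 = [[0, 0, 0, 0],
      [0.5, 0, 0, 0],
      [0, 0.5, 0, 0],
      [0, 0, 1, 0]]
b4 = [1 / 6, 1 / 3, 1 / 3, 1 / 6]
c4 = [0, 0.5, 0.5, 1]

# One step on y' = y from y(0) = 1 reproduces the degree-4 Taylor polynomial of e^h.
y1 = rk_tableau_step(lambda t, y: y, 0.0, 1.0, 0.1, A4, b4, c4)
```

Swapping in a different tableau (midpoint, Heun, Ralston, ...) changes the method without touching the stepping code, which is the point of the tableau abstraction.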
With the Euler method one could simply take more, tinier steps to achieve more precise results, but this requires a significant amount of computation. Adaptive Runge-Kutta programs instead estimate the local error as they go. The Cash-Karp method uses six function evaluations to calculate fourth- and fifth-order accurate solutions, whose difference serves as the error estimate. The BS(2,3) Runge-Kutta pair used by ode23 was derived along with a continuous extension based on cubic Hermite interpolation, and ode45 by default uses interpolation to quadruple the number of solution points, providing a smoother-looking graph at no extra cost in function evaluations. Gauss-Markov-Runge-Kutta (GMRK) methods give a probabilistic interpretation to RK methods and extend them to return probability distributions; they are based on Gauss-Markov priors and yield Runge-Kutta predictions.

Runge-Kutta methods achieve the accuracy of a Taylor-series approach without requiring the calculation of higher derivatives, and they are self-starting: starting from an initial condition, they calculate the solution forward step by step. Other, non-self-starting methods require details of the solution for several previous steps before a new step can be executed, by integrating polynomial fits to the previous values of the derivatives; the backward differentiation formula (BDF) solver is an implicit solver of this kind. One practical point: for reasons of numerical stability, it is preferable to derive analytic formulas for intermediate boundary values rather than simply apply the Runge-Kutta solver there.
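A sketch of the step-doubling error estimate (the sample point, state, and step sizes below are arbitrary illustration values):

```python
def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def rk4_doubling_estimate(f, t, y, h):
    """Advance one step of size h two ways: once with h, once with two h/2 steps.
    The difference estimates the local error of the big step (which is O(h^5))."""
    big = rk4_step(f, t, y, h)
    half = rk4_step(f, t + h / 2, rk4_step(f, t, y, h / 2), h / 2)
    return half, abs(half - big)

# On y' = -2ty (solutions proportional to exp(-t^2)) the estimated local
# error should shrink by roughly 2^5 = 32 when h is halved.
f = lambda t, y: -2.0 * t * y
_, err_h = rk4_doubling_estimate(f, 0.5, 0.8, 0.2)
_, err_h2 = rk4_doubling_estimate(f, 0.5, 0.8, 0.1)
```

An adaptive driver would accept the half-step result when the estimate is below tolerance and shrink h otherwise; embedded pairs like Cash-Karp get the same estimate without the extra half-steps.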
Because Heun's method is O(h 2), it is referred to as an order 1-2 method. In each of the tests, truth is generated using a high-accuracyz 50-stage Gauss-Legendre implicit Runge-Kutta (GL-IRK) method, and the number of high- delity force-model evaluations, the dominant computational cost of orbit propagation, is used to quantify the cost of orbit prop-agation. Motivated by the previous literature works of spreadsheet solutions of ordinary differential equations (ODE) and a system of ODEs using fourth-order Runge-Kutta (RK4) method, we have built a spreadsheet calculator for solving ODEs numerically by using the RK4 method and VBA programming. I'm trying to solve it using a For loop, but I'm having some trouble interpreting how to write it as runge-kutta. Note: At the end of this document, see formulas used to answer this question as there are a few different versions of the Runge-Kutta 4 th order method. To be concrete, we describe the idea as applied to this example. equation calculator, trigonometry sample problems. In the following Python code that is mostly a copy of our previous code we compare the time behaviour and accuracy (measured by mass conservation as our reaction diffusion system preserves mass) of the explicit Euler and Runge-Kutta 4 reaction integration. 4th order Runge-Kutta method EXAMPLE Solve approximately dy dx = x+ p y; y(1) = 2 and nd y(1:4) in 2 steps using the 4th order Runge-Kutta method. The general Runge-Kutta algorithm is one of a few algorithms for solving first order ordinary differential equations. pdf), Text File (. Runge-Kutta Methods 267 Thecoefficientof ℎ4 4! intheTaylorexpansionof𝑦(𝑡+ℎ)intermsof 𝑓anditsderivativesis 𝑦(4) =[𝑓3,0 +3𝑓𝑓2,1 +3𝑓2𝑓1,2 +𝑓3𝑓0,3]. Numerically approximate the solution of the first order differential equation dy dx = xy2 +y; y(0) = 1, on the interval x ∈ [0,. in some cases, e. 5/48 With the emergence of stiff problems as an important application area, attention moved to implicit methods. 
RK4, a C library which applies the fourth order Runge-Kutta algorithm to estimate the solution of an ordinary differential equation at the next time step. The Runge - Kutta Method of Numerically Solving Differential Equations We have spent some time in the last few weeks learning how to discretize equations and use Euler' s Method to find numerical solutions to differential equations. A major development of this method is carried out by Cockburn et al. The Runge-Kutta Method produces a better result in fewer steps. s were first developed by the German mathematicians C. The Runge-Kutta method Just like Euler method and Midpoint method , the Runge-Kutta method is a numerical method which starts from an initial point and then takes a short step forward to find the next solution point. Cash-Karp method uses six function evaluations to calculate 4-th and fifth-order accurate solutions. ERROR ANALYSIS FOR THE RUNGE-KUTTA METHOD 4 above a given threshold, one can readjust the step size h on the y to restore a tolerable degree of accuracy. Even if we choose another numerical solver, the result should be the same. The following is the list of all the solver with details: Solver Problem Type Order of Accuracy Method When to Use ode45 Nonstiff Medium Explicit Runge-Kutta Most of the time. Runge-Kutta Methods for Linear Ordinary Differential Equations David W. Runge-Kutta methods for linear ordinary differential equations D. Runge-Kutta 4th Order Method for Ordinary Differential Equations. The Runge-Kutta method Just like Euler method and Midpoint method , the Runge-Kutta method is a numerical method which starts from an initial point and then takes a short step forward to find the next solution point. To solve for dy/dx - x + y = 0 using Runge-Kutta 2nd order method. The importance of a fourth order Runge Kutta Algorithm technique, the need for Newton Raphson Method and the properties of a Catenary Curve are stressed in this senior level engineering technology course. 
More specifically, it uses six function evaluations to calculate fourth- and fifth-order accurate solutions. Combining Crank-Nicolson and Runge-Kutta to Solve a Reaction-Diffusion System. Voesenek June 14, 2008 1 Introduction A gravity potential in spherical harmonics is an excellent approximation to an actual gravita-. The LTE for the method is O(h 2), resulting in a first order numerical technique. TEST_ODE, a FORTRAN90 library which contains routines which define some test problems for ODE solvers. Matlab using runge kutta to solve system of odes, math poems addition, radical expressions online calculator, algebra 2 parabola equations, multiply factor calculator, differential. In this example the OdeExplicitRungeKutta45 class is used to solve the Euler equations of a rigid body without external forces:. in some cases, e. Called by xcos, Runge-Kutta is a numerical solver providing an efficient fixed-size step method to solve Initial Value Problems of the form:. CALCULATION OF BACKWATER CURVES BY THE RUNGE-KUTTA METHOD Wender in' and Don M. Runge-Kutta method is the powerful numerical technique to solve the initial value problems (IVP). Visualizing the Fourth Order Runge-Kutta Method. matlab's ode solvers are all variable-step and don't even offer an option to run with fixed step size. the step size is not bounded and determined solely by the solver. Implicit Runge Kutta method. Run the Simulation by clicking the Start Simulation button from the model window toolbox. Below is a specific implementation for solving equations of motion and other second order ODE s for physics simulations, amongst other things. But when i run a simulink model with ode4, simulink executes model only 1 time, instead of 4. Runge-Kutta, this link can be made precise. Introduction. Examples for Runge-Kutta methods We will solve the initial value problem, du dx =−2u. Grating Solver Development Company GSOLVER© Features: [GSolverLITE version includes BLUE highlighted items only]. 
Using Excel to Implement Runge Kutta method : Scalar Case. (12:31 min) 4th order Runge-Kutta Workbook II--extracting and graphing the Excel RK4 solution. Description. Runge–Kutta methods for ordinary differential equations – p. The Fourth Order Runge-Kutta method is fairly complicated. vode Solver for Ordinary Differential Equations (ODE). |
# Improvement of the Assessment of Atmospheric Dispersion in Accidents Using a Fuzzy Logic Inference Method
• 나만균 (Department of Nuclear Engineering, Chosun University) ;
• 심영록 (Department of Nuclear Engineering, Chosun University) ;
• 김숭평 (Department of Nuclear Engineering, Chosun University)
• Published : 2001.03.31
#### Abstract
In order to assess the atmospheric dispersion of accidental releases from nuclear power plants, the XOQAR and PAVAN codes, which are based on Reg. Guide 1.145, calculate X/Q values by plotting the X/Q and frequency values on log-normal paper. Starting with the highest X/Q value of this plot, the codes compare the slope of the line drawn from this point to every other point within an increment containing ten X/Q values. If there are fewer than ten values, only the number available are used. The coefficients that produce the line with the least negative slope are saved. The end point of this line is used as the next starting point, from which slopes to the points within the next increment, containing ten X/Q values, are compared. The X/Q values corresponding to the cumulative frequency values 0.5%, 5% or 50% are calculated to search for the $0{\sim}2$ hour X/Q value, which tends to be very conservative. In this work, a fuzzy logic inference method is used for nonlinear interpolation of the X/Q values versus the cumulative frequency; fuzzy logic inference is known to be a good technique for nonlinear interpolation. The proposed method was applied to a potential accidental radioactive release of the Yonggwang nuclear power plant, and it gives more realistic X/Q values. |
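The abstract does not reproduce the implementation, but the flavor of fuzzy-logic interpolation can be sketched with triangular membership functions. The function names and the X/Q table below are hypothetical, and a real application would work on the log-scaled axes described above:

```python
def triangular(x, left, center, right):
    """Membership degree of x in a triangular fuzzy set."""
    if x <= left or x >= right:
        return 0.0
    if x <= center:
        return (x - left) / (center - left)
    return (right - x) / (right - center)

def fuzzy_interpolate(freq, freqs, xq_values):
    """Interpolate X/Q at cumulative frequency `freq` from tabulated points.
    Assumes freqs is sorted, has at least two entries, and freq lies within
    the support of at least one membership function."""
    weights = []
    for i, c in enumerate(freqs):
        # Neighboring points define each triangle; mirror at the boundaries.
        left = freqs[i - 1] if i > 0 else c - (freqs[i + 1] - c)
        right = freqs[i + 1] if i < len(freqs) - 1 else c + (c - freqs[i - 1])
        weights.append(triangular(freq, left, c, right))
    total = sum(weights)
    # Centroid-style weighted average of the tabulated X/Q values.
    return sum(w * v for w, v in zip(weights, xq_values)) / total
```

At a tabulated frequency the result reduces to the tabulated X/Q value; between points it blends the two nearest values nonlinearly through the membership weights.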
# nLab transport
Contents
This page is about the notion in homotopy type theory. For parallel transport via connections in differential geometry see there. For the relation see below.
# Contents
## Idea
In Martin-Löf type theory, given
• a type $A$,
• a judgment $z \colon A \vdash B(z)\ \mathrm{type}$ (hence an $A$-dependent type $B$),
• terms$\,$ $x \colon A$ and $y \colon A$,
• a term of their identity type $p \colon (x =_A y)$,
then there are compatible transport functions
(1)$\overrightarrow{\mathrm{tr}}_{B}^{p}:B(x) \to B(y) \;\;\; \text{and} \;\;\; \overleftarrow{\mathrm{tr}}_{B}^{p}:B(y) \to B(x) \,,$
such that for all $v:B(y)$, the fiber of $\overrightarrow{\mathrm{tr}}_{B}^{p}$ at $v$ is contractible, and for all $u:B(x)$, the fiber of $\overleftarrow{\mathrm{tr}}_{B}^{p}$ at $u$ is contractible.
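In a proof assistant this is a one-liner; the following Lean 4 sketch (names mine) realizes the two transport functions by rewriting along $p$ and along its inverse:

```lean
-- Transport along an identification p : x = y sends B x to B y;
-- transporting back uses the inverse path p.symm.
def transportFwd {A : Type} (B : A → Type) {x y : A} (p : x = y) : B x → B y :=
  fun u => p ▸ u

def transportBwd {A : Type} (B : A → Type) {x y : A} (p : x = y) : B y → B x :=
  fun v => p.symm ▸ v
```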
## Examples and applications
###### Remark
(relation to parallel transport – dcct §3.8.5, ScSh12 §3.1.2)
In cohesive homotopy type theory the shape modality $\esh$ has the interpretation of turning any cohesive type $X$ into its path $\infty$-groupoid $\esh X$: The 1-morphisms $p \colon (x =_{\esh X} y)$ of $\esh X$ have the interpretation of being (whatever identities existed in $X$ composed with) cohesive (e.g. continuous or smooth) paths in $X$, and similarly for the higher order paths-of-paths.
Accordingly, an $\esh X$-dependent type $B$ has the interpretation of being a “local system” of $B$-coefficients over $X$, namely a $B(x)$-fiber $\infty$-bundle equipped with a flat $\infty$-connection.
In this case, the identity transport (1) along paths in $\esh X$ has the interpretation of being the parallel transport (in the original sense of differential geometry) with respect to this flat $\infty$-connection (and the higher parallel transport when applied to paths-of-paths). |
# Prime Numbers and the Riemann Hypothesis
#### Description
Prime numbers are beautiful, mysterious, and beguiling mathematical objects.
The mathematician Bernhard Riemann made a celebrated conjecture about primes in 1859, the so-called Riemann hypothesis, which remains one of the most important unsolved problems in mathematics.
Through the deep insights of the authors, this book introduces primes and explains the Riemann hypothesis.
Students with a minimal mathematical background and scholars alike will enjoy this comprehensive discussion of primes.
The first part of the book will inspire the curiosity of a general reader with an accessible explanation of the key ideas.
The exposition of these ideas is generously illuminated by computational graphics that exhibit the key concepts and phenomena in enticing detail.
Readers with more mathematical experience will then go deeper into the structure of primes and see how the Riemann hypothesis relates to Fourier analysis using the vocabulary of spectra.
Readers with a strong mathematical background will be able to connect these ideas to historical formulations of the Riemann hypothesis.
## Organizers
• Prof. Dr. Herbert Koch
• Prof. Dr. Christoph Thiele
• Dr. Leonardo Tolomeo
• ## Schedule
This seminar takes place regularly on Fridays, at 14.00 (c.t.). Because of the current regulations regarding the Corona pandemic, the seminar will take place online on the Zoom platform. Please join the pdg-l mailing list or contact Dr. Tolomeo (tolomeo at math.uni-bonn.de) for further information.
##### Title:
The Ruelle Zeta Function for nearly hyperbolic 3-manifolds
##### Abstract:
The Ruelle Zeta Function (RZF) is defined in analogy to the Riemann Zeta Function, where primes correspond to primitive closed orbits of an Anosov flow X. The RZF extends meromorphically to the whole complex plane and carries rich information about the flow. Using microlocal methods, Dyatlov-Zworski recently showed that the order of vanishing at zero, n(X), of the RZF equals minus the Euler characteristic, if X is the geodesic vector field of a negatively curved surface. In this talk, I will explain an exciting novel result showing the instability of n(X) close to hyperbolic 3-manifolds, in stark contrast with the case of surfaces. The proof is based on studying the pushforward of a certain pairing between resonant states (“eigenstates of X”), regularisation arguments and wavefront set calculus. Joint work with Dyatlov, Küster and Paternain.
##### Title:
On the parabolic Hardy-Hénon equation in Marcinkiewicz spaces
##### Abstract:
The (elliptic) Hardy-Hénon equation was proposed as a model for rotating stellar systems. Its parabolic analogue has recently attracted more attention. In this talk, recent results pertaining to the global wellposedness theory of the Cauchy problem for the parabolic Hardy-Hénon equation will be discussed. I will emphasize the role played by the choice of the initial data class in the analysis of the problem and present the main results, which essentially deal with existence/non-existence (of global solutions), their long-time asymptotic behavior as well as self-similarity properties. Further interesting related results will be briefly mentioned if time allows.
##### Title:
Invariant measures for the nonlinear wave equations in 2d
##### Abstract:
In this talk we will discuss some recent results regarding the construction and invariance of Gibbs measures under the flow of nonlinear wave equations in dimension two. The case of polynomial nonlinearities being well-understood, we will discuss two examples of non-polynomial nonlinearities. These are joint works with T. Oh, P. Sosoe and Y. Wang.
##### Title:
Recent developments in Banach-valued time-frequency analysis
##### Abstract:
Over the last year Gennady Uraltsev and I have managed to prove some interesting results involving time-frequency analysis for functions taking values in abstract Banach spaces. In this talk I will give an overview of our results, the underlying methods (Banach-valued outer Lebesgue spaces and modulation-invariant Carleson embeddings), and some interesting problems that we (or you) may tackle in the future. In particular I will discuss
- the bilinear Hilbert transform (on functions valued in intermediate UMD spaces),
- bounds for variational Carleson operators, i.e. variational estimates for partial Fourier integrals (of functions valued in intermediate UMD spaces),
- multilinear modulation-invariant Fourier multipliers with operator-valued symbols (once more, on functions valued in intermediate UMD spaces).
##### Title:
Invariant Gibbs measures for the three-dimensional wave equation with a Hartree nonlinearity
##### Abstract:
In this talk, we discuss the construction and invariance of the Gibbs measure for a three-dimensional wave equation with a Hartree-nonlinearity. We start with a brief review of finite-dimensional Hamiltonian ODEs, which serves as a stepping stone towards the main topic of this talk. After introducing the wave equation with a Hartree-nonlinearity, we briefly discuss the construction of the Gibbs measure, which is based on earlier work of Barashkov and Gubinelli. We also discuss the mutual singularity of the Gibbs measure and the so-called Gaussian free field. In the main part of this talk, we study the dynamics of the nonlinear wave equation with Gibbsian initial data. Our argument combines ingredients from dispersive equations, harmonic analysis, and random matrix theory. At this point in time, this is the only proof of invariance of any singular Gibbs measure under a dispersive equation.
##### Title:
Normal form approach to the one-dimensional cubic nonlinear Schrödinger equation in almost critical spaces
##### Abstract:
In recent years, the normal form approach has provided an alternative method to establishing the well-posedness of solutions to nonlinear dispersive PDEs, as compared to using heavy machinery from harmonic analysis. In this talk, I will describe how to apply the normal form approach to study the one-dimensional cubic nonlinear Schrödinger equation (NLS) on the real-line and prove local well-posedness in almost critical Fourier-amalgam spaces. This involves using an infinite iteration of normal form reductions (namely, integration by parts in time) to derive the normal form equation, which behaves better than NLS for rough functions. This is joint work with Tadahiro Oh (U. Edinburgh).
##### Title:
Complete integrability of the Benjamin-Ono equation on the multi-soliton manifolds
##### Abstract:
This presentation, which is based on the work of Sun [2], is dedicated to describing the complete integrability of the Benjamin-Ono (BO) equation on the line when restricted to every N-soliton manifold, denoted by $U_N$. We construct (generalized) action-angle coordinates which establish a real analytic symplectomorphism from $U_N$ onto some open convex subset of $R^{2N}$ and allow one to solve the equation by quadrature for any such initial datum. As a consequence, $U_N$ is the universal covering of the manifold of N-gap potentials for the BO equation on the torus as described by Gérard-Kappeler [1]. The global well-posedness of the BO equation on $U_N$ is given by a polynomial characterization and a spectral characterization of the manifold $U_N$. Besides the spectral analysis of the Lax operator of the BO equation and the shift semigroup acting on some Hardy spaces, the construction of such coordinates also relies on the use of a generating functional, which encodes the entire BO hierarchy. The inverse spectral formula of an N-soliton provides a spectral connection between the Lax operator and the infinitesimal generator of this shift semigroup. The construction of action-angle coordinates for each $U_N$ constitutes a first step towards the soliton resolution conjecture of the BO equation on the line.
##### Bibliography:
[1] Gérard, P., Kappeler, T. On the integrability of the Benjamin-Ono equation on the torus, arXiv:1905.01849, to appear in Commun. Pure Appl. Math., https://doi.org/10.1002/cpa.21896 , 2020.
[2] Sun, R. Complete integrability of the Benjamin-Ono equation on the multi-soliton manifolds, Version Dec 15th.
##### Title:
Weak (1,1) type inequalities for noncommutative singular integrals
##### Abstract:
Noncommutative Lp-spaces are originally a product of operator theory but have been discovered to be a rich object from the harmonic analytic point of view. For example, in the last decade, operators derived from singular integrals have been introduced and found some remarkable applications in that context. After presenting these various notions, I will discuss some related ongoing work with J. Parcet and J. Conde-Alonso in which we investigate weak (1,1) type inequalities for these operators. This work is meant to complement known results of the theory that usually focus on the better understood BMO approach.
##### Title:
Singular stochastic integral operators
##### Abstract:
Singular integral operators play a prominent role in harmonic analysis. By replacing the integration with respect to some measure by integration with respect to Brownian motion, one obtains stochastic singular integral operators of the form $S_K G(t) := \int_{0}^\infty K(t,s)\, G(s)\, dW(s),$ which appear naturally in questions related to SPDE. Here G is an adapted process, W is a Brownian motion and K is allowed to be singular. In this talk I will introduce Calderón--Zygmund theory for such singular stochastic integrals with operator-valued kernel K. I will first discuss $L^p$-extrapolation under a Hörmander condition on the kernel. Afterwards I will treat sparse domination and sharp weighted bounds under a Dini condition on the kernel, leading to a stochastic analog of the solution to the $A_2$-conjecture.
##### Title:
Fourier restriction estimates via real algebraic geometry
##### Abstract:
In this talk I will discuss the classical Fourier restriction conjecture in harmonic analysis. This longstanding problem investigates basic mapping properties of the Fourier transform and directly relates Fourier analysis to geometric concepts such as curvature. Underpinning the conjecture are deep questions in (continuum) incidence geometry. I will describe some recent joint work with Josh Zahl which obtains new partial results on the restriction conjecture. To do this, we use tools from real algebraic geometry to study the underlying incidence problems.
##### Title:
A discrete Kakeya-type inequality
##### Abstract:
The Kakeya conjectures of harmonic analysis claim that congruent tubes that point in different directions rarely meet. In this talk we discuss the resolution of an analogous problem in a discrete setting (where the tubes are replaced by lines), and provide some structural information on quasi-extremal configurations. This is joint work with A. Carbery. |
# Realism of a multi-arrow bow
You may have heard of this common fantasy trope, an archer firing multiple (usually 3) arrows in a single pull of the bow.
Now, I am not an expert at bows or tactics, but it seems to me that the loss of accuracy is not worth even attempting this and that's assuming it could work!
What problems appear with this 'Multi-arrow' bow and how can I overcome these? What are the actual realistic advantages and disadvantages of a multi-arrow bow if it could work?
• Having tried this before...yeah your assumption is correct. I am no master yeoman but its a dumb idea. – James Jan 14 '16 at 6:20
• Not an answer: I am an archer, and have fired two arrows, both nocked next to each other, and they landed next to each other, if this helps at all. It required a lot of patience and bad posture (leveling out the bow a bit to the side so the arrows could rest on it, etc). It was on a modern recurve bow (similar-ish to what you see in movies). – Mikey Jan 23 '16 at 3:10
• @Mikey That must have been cool, I wish I could have seen it – TrEs-2b Jan 23 '16 at 21:58
• @Mikey, I'm an archer too and can confirm what you say: there is no accuracy or aiming problem (with two arrows, I've never tried more). The arrows fly a near identical path, right next to each other. – Jacco Dec 9 '16 at 20:46
• As someone with an invested interest in Ancient warfare, this topic caught my eye so I couldn't resist sharing that there is a very intriguing crosspost from history.stackexchange titled "Are there any historical sources that support the claim that ancient high-speed archers held multiple arrows in their hand?" that might provide further insight into what many perceive is an unrealistic topic, but there is definitely historical basis to this question! The post has – SanDiegoBookReview Jan 6 '17 at 2:12
Your main problem is that you're trying to fling multiple projectiles with a single bowstring. I'm no physics guru, but it seems that you're going to be dividing the pounds of pull across the total number of projectiles -- so you'll get correspondingly less range & target penetration than if you stuck with a single arrow (ignoring more obvious issues like aiming). Someone actually ran the numbers on how much kinetic energy from the bowstring is transferred to the arrow, if you're interested.
A crossbow is a much more workable solution, as the strings can be drawn in advance, and held in place. That leads to the possibility of firing more than one projectile, in quick succession. Such weapons are indeed a trope, usually because the increased firepower is desirable from an action standpoint, but the limitations of the time period will not allow firearms.
One solution to firing multiple arrows is a special crossbow, featured in a battle scene in the 2000 film Gladiator. At first glance, it looks rather ridiculous, but I actually feel it is a workable design (albeit perhaps not terribly practical for anything other than a showy arena skirmish) for the simple reason that each crossbow bolt has its own string, rather than some sort of imaginary bolt "magazine" that ignores the necessity of re-cocking the string after each firing. Additionally, it dispenses with any sort of fanciful common trigger system, and appears to have the wielder just rotate the entire weapon on its axis.
A simpler version would be the double-sided crossbow, from the 2000 film Dracula:
• actualy rapid fire crossbows where a thing in real history: en.wikipedia.org/wiki/Repeating_crossbow Additionaly: while it is true that each arrow would have less force if you fire multiple from one string this might not be a problem depending on what you shoot at...penetrating a medival plate armor? bad. shooting at unarmored peasants? still more than enough force. – m.fuss Jan 6 '17 at 15:12
• Even then, I suspect that you wouldn't be dividing by 2; I suspect that only a fraction of the stored energy in the bow goes into propelling the arrow: specifically, the amount of energy that changes how fast it straightens up, between dry-firing a bow and firing it with an arrow in... which is not a lot. Seems to me, that fraction should rise as you add more mass to push out the way to straighten the bow; the bow might even straighten noticeably slower if firing a hundred arrows. – Dewi Morgan Oct 17 '17 at 14:12
## Accuracy
Let me expand on "reduced accuracy". I base this on my own experience shooting target-weight bows and observing others.
Accuracy at any decent range depends on some fairly fine-grained factors. Aim a fraction of an inch high or low, or draw the string back not quite as far as usual (or just a bit farther), and you're probably going to miss your mark. You might still hit a large target somewhere, but it won't be what you were aiming for. It is not uncommon to mark the center-point of the string for this reason, so that you're nocking the arrow at the same position (and thus drawing at the same angle with respect to the bow) each time. The key to good aim is consistency.
If you add arrows to the string in a single draw, then by definition those arrows are not positioned to hit your target. Movies usually depict multiple arrows being spaced an inch or so apart on the string, maybe more. And if those arrows have fletching (feathers), you probably can't get them much closer than half a inch on the string. (For any decently-heavy bow, the arrow itself is about a quarter-inch in diameter.) That's a huge difference when shooting at even 30 yards, to say nothing of 50 or 100 yards. Those extra arrows may look pretty, but they aren't hitting your target. I suppose if you're shooting at a large charging army they might hit somebody else, if you're very lucky, but probably not (see "power").
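As a back-of-the-envelope check (the numbers here are my own, not from the answer): arrows nocked an inch apart on a 28-inch draw leave at slightly different angles, and that small angle turns into feet of error downrange.

```python
import math

# Rough geometry: two arrows spaced `spacing_in` inches apart on the string,
# drawn `draw_in` inches, diverge by roughly atan(spacing/draw); at
# `range_yards` that angle becomes a miss measured in inches.
def lateral_miss_inches(spacing_in, draw_in, range_yards):
    angle = math.atan2(spacing_in, draw_in)      # off-axis launch angle (rad)
    return math.tan(angle) * range_yards * 36.0  # 36 inches per yard
```

With a 1-inch spacing, a 28-inch draw, and a 30-yard target, this gives roughly 39 inches of offset, consistent with the claim that the extra arrows aren't hitting what you aimed at.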
## Power
Other answers have already addressed the reduced power so I won't repeat them. Your arrows aren't going to pack the punch you need to do damage. They're also not all going to get the same amount of power; the one in the middle will have more force than the others, which will likely fall short. (Thanks to XandarTheZenon for pointing this out in a comment.)
## Practicality
A competent archer can fire an aimed shot about once every five seconds; good ones are even faster. A lot of this speed comes from the ease of the load-draw-release cycle. It's a very smooth motion; you draw an arrow from the quiver by its nock and place it on the string. You don't even need to be looking at it; this is done by feel. (Instead you're looking at your target.)
Now, how long is it going to take you to load three arrows onto the bowstring? I don't think you can do them all at once, and if you do them one at a time then the ones already on the string are going to slow you down. I haven't tried the experiment, but I'm going to be bold and say that it will take you longer to load and fire your trio of arrows than it would take to fire them individually.
So you can shoot less-accurate under-powered arrows more slowly, or you can shoot them one at a time instead. I know which I'd choose.
• I think that this sums up what would happen much better than what I said. – Xandar The Zenon Jan 14 '16 at 3:12
• @XandarTheZenon your point about the angles is interesting; I'd been assuming three "shelves" on the bow so the arrows would be parallel (though two are not at your draw length), but if you're trying to fire them all off the same shelf, no that's definitely not going to work. Also, sorry, didn't mean to ninja you there -- I hadn't seen your answer before writing mine. – Monica Cellio Jan 14 '16 at 3:18
• Don't worry about it, I admire those who can write a one page essay on a random topic and make it neat and sensible. Something I don't think has been addressed here is that the amount of force exerted on an arrow would be less for the bottom and top and more for the middle. The point with the most power would be where you pull back the bow, and then I'm pretty sure the arrows' power would exponentially decrease as you move away from it. After all, there's a reason bows were designed the way they were. For a single arrow in the center. – Xandar The Zenon Jan 14 '16 at 3:25
• I proffer my ideas that other my profit from them. I actually enjoy seeing people use my ideas, or people with similar ones. – Xandar The Zenon Jan 14 '16 at 3:51
• From my experience as a recurve and longbow archer I can wholly agree with this answer. – fgysin reinstate Monica Jan 14 '16 at 14:21
I use a recurve and longbow. Getting good at this took practice, like a lot of practice.
First, there are some very expert opinions out there on why this is fake and/or a bad idea, but there are some problems with that. Most people aren't thinking of the bow as a weapon of war. They are thinking of it as a hunting/sport weapon. They are thinking of compound and recurve bows, where it is pretty much a given that you are going to be using bow-hand draw and not arrow-hand draw.
Second, the physics around arrow flight are really REALLY complicated. So complicated that for a long time people thought bows were an impossible mechanism, leading to the coining of the term "archer's paradox". Now I'm not going to get into the specifics, but suffice it to say that the loss of acceleration is real but not as dramatic nor as important as you'd think in the context of a battle.
Third, arrow-hand draw is important, which means owning a bow that allows you to do this is important. You can get a longbow (and if you are interested in them I highly recommend it), or you can just get a recurve for the opposite of your dominant hand and sand the grip to shape it a bit. Professional Olympic archers can fire an arrow maybe once every 3-5 seconds; an ancient archer could fire an arrow about once every second. The reason is that arrow-hand draw lets you rest, nock, and draw the arrow as a single motion rather than maneuvering through the clumsy bow-hand draw. This is going to help you keep the top arrow from going off course or, worse, interfering with the flight of your other arrow.
Fourth, in sports and hunting accuracy is important; in battle it matters too, but the MOST important thing is how many arrows you can put downrange within a specific amount of time. It may not surprise you to learn that people really don't like getting shot with an arrow, so the less of a break there was in the screaming bodkin-pointed death missiles, the less of an opening you presented. Also, the average range for an archer shooting at individual targets in battle was 170-200 feet at most; otherwise they would just be firing into the mass of people hoping to hit. So you don't actually have to get it that far.
So to pull off the famed "multishot" you need to nock two arrows. I know there's a lot of ways to do that, and some people use a mechanism to aid them in having a clean release. I've personally found this is the worst way to do this trick, so I would avoid it. The method I've always used is one finger above the nock and two below, but for this you are going to make an "arrow sandwich": one finger below the first arrow, one in the middle, and one on top of the second. And you need to grip the bow so that your index (pointer) finger can extend and the tip can rest between the two.
If you did it right both the arrows should be perfectly straight.
Now here's the part that's going to take practice. There are two things I've found that generally make this trick work reliably. First, and most important, once you release the string your index finger needs to slide out from between the arrows, otherwise the hens (the two matching coloured flights) are going to sting it on the way past. Your instinct is going to be to curl it back in, but what usually happens when you do that is it deflects the bottom arrow down. The best thing to do is to flick it straight out so it's out of the way; this may deflect the top arrow up a bit, but over a longer distance it will drop back down and you should still get a pretty accurate shot. Second, draw the bow with only your middle finger, keeping your ring and index fingers extended to stabilize the arrows; you should be pulling from the middle. The reason for this is that, while the bow will deliver all of its force across the entire string, it delivers the greatest acceleration from the point you drew it back from, and for maximum accuracy this should be directly between the two arrows.
## Reduced Accuracy
As you mentioned, the accuracy of all of the arrows would be significantly degraded.
## Reduced Power
A bow transfers energy from the bent wood/materials of the bow into an arrow through its bowstring. The amount of momentum imparted depends upon a number of factors but by adding multiple arrows to the bow string, you are at the least dividing the momentum by the number of arrows added.
Decreased momentum means:
1. Lower range
2. Less penetrating power
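To put rough numbers on the energy split, here is a small sketch. The stored-energy and arrow-mass figures are illustrative assumptions (roughly longbow-like), not measurements, and the model ignores limb inertia and string losses:

```python
import math

def arrow_speed(stored_energy_j, arrow_mass_kg, n_arrows):
    """Speed of each arrow if the bow's stored energy is shared equally.

    Crude model: it ignores limb inertia and string losses, so it
    slightly overstates how fast any arrow would really leave the string.
    """
    energy_per_arrow = stored_energy_j / n_arrows
    return math.sqrt(2 * energy_per_arrow / arrow_mass_kg)

# Illustrative figures: ~60 J of stored energy, 25 g arrows.
one = arrow_speed(60, 0.025, 1)
two = arrow_speed(60, 0.025, 2)
print(round(one, 1), round(two, 1))  # 69.3 49.0
```

Under this model each arrow in a pair leaves at 1/√2 ≈ 71% of the single-arrow speed (kinetic energy scales with v²), so the range and penetration penalty is real but smaller than a naive "half speed" guess.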
## Historically Speaking
I suppose the best argument about whether you gain or lose more by adding arrows to the bow string is by looking through history. The trick is an obvious one so if the people who did it greatly benefited from it, you'd expect everyone to be doing it.
## Limiting Factors
The limiting factors of arrow acceleration are:
• The mass to be propelled, for example the arrow(s) - $m_{arrows}$
• The mass of the limbs of the bow - $m_{limbs}$
Also there's a scaling factor, in that the firing of the bow causes the bowstring (and arrows) to pass through the entire draw length (perhaps 24-30 inches). Meanwhile the limbs of the bow (which are forcing the arrow through that distance) may only pass through a few inches. This distance might be 8-10 inches or so (see image below).
Bow drawn and at rest:
Because the ratio of motion between the arrow and bow limbs is not constant, you actually get a relatively complex interaction between the two. Initially the force propelling arrow and limbs is high. As the string returns to rest, the amount of force decreases while the amount of motion increases.
Even if you set the arrow mass at zero (no arrow notched), the bow will take time to return to rest state (FYI, never draw and release a bow with no arrow notched, this can break your bow) because the bow limbs have finite mass.
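A common back-of-the-envelope way to model this limb-mass effect is to lump the limb and string inertia into a "virtual mass" added to the arrow's mass. The 15 g virtual mass below is an assumed, illustrative value, not a property of any particular bow:

```python
import math

def launch_speed(stored_energy_j, arrow_mass_kg, virtual_mass_kg=0.015):
    """Launch speed with limb/string inertia lumped into a 'virtual mass'.

    Even at arrow_mass_kg = 0 the speed stays finite, because the limbs
    themselves must still be accelerated through the power stroke.
    """
    return math.sqrt(2 * stored_energy_j / (arrow_mass_kg + virtual_mass_kg))

print(round(launch_speed(60, 0.0), 1))    # "dry fire" string speed
print(round(launch_speed(60, 0.025), 1))  # one 25 g arrow
```

This is also why halving arrow mass does not double arrow speed: as the payload shrinks, the virtual-mass term dominates the denominator.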
• I'll have to insert a cautionary note here concerning power. I suspect that stepping up to 3 arrows will not cut arrow speed by a factor of 3, although it will have some effect. Clearly arrow resistance is not the only factor limiting bowstring velocity. If this were true, then firing an arrow with extremely low mass would provide extremely high velocity, and this is not true. – WhatRoughBeast Jan 14 '16 at 3:36
• @WhatRoughBeast, the momentum of the bow's arms plays a role too. Because the bow arms apply the force, the total acceleration/max velocity that can be imparted is limited by the speed at which the bow's arms can return to their normal position. Shoot too much mass and you're limited by the acceleration you can apply to the arrow. Shoot too little mass and you're limited by the acceleration of the bow's arms. You get the lesser of the two numbers. – Jim2B Jan 14 '16 at 4:55
Actually, it is quite possible to fire THREE arrows with a single draw. I first saw this technique in the Hindi movie, Bahubali (2), The Conclusion. Google for images and relevant yt clips.
Now, while I'm sure there was liberal application of Movie Magic, especially when the hero shot past the gal's head—two past her ears, one above her crown…I've not been able to figure out how to bump one of the arrows into a higher/wider flight—I've nonetheless been able to duplicate the draw if not the loading speed and fluency.
Note the draw hand's inverted grip with thumb-down, palm-away. Inverting is not necessary, but is definitely a conversation starter on ranges. When the arrows are parallel, their grouping is quite tight. I've yet to get a spread triple hit by putting angles between the shafts.
But instead of me telling y'all about it, I recently found this: https://youtu.be/l6HdEqOpgzE
• Please don't answer with only a link. Answers should be written in a way that will make them usable even if all links expire. Without seeing a movie or youtube clip, your answer makes no sense now. – Mołot Oct 16 '17 at 9:00
• Welcome to Worldbuilding! As the above two comments rightly say, you've answered the question in a youtube video. What the above comments fail to say is that this is actually great as it's a direct demonstration of what the question is looking for as opposed to some explanation with no back-up. However, it would be very useful if you could edit your answer to include the relevant points of the video as well in case the link stops working. Thanks! – Mithrandir24601 Oct 16 '17 at 9:56
• I think this looks okay already, but it would definitely be better if you could edit your answer to tell people what can be seen in the video. Not everyone can watch videos every time and we try to give readers all the information necessary directly. Links of all sorts can get outdated, which would leave this answer somewhat lacking. If you have a moment please take the tour and visit the help center to learn more about the site. Have fun Love Robin Miller! – Sec SE - clear Monica's name Oct 16 '17 at 10:28
This is a very interesting idea and one that I would like to look into. One factor that I would like to point out is that all of the bows made today are designed for firing one arrow, so trying to fire multiple arrows will not work well. If someone were to make a bow that was specifically designed for multiple arrows, the idea might actually work. You could also work out a way for the arrows to fire at a wider angle and thus serve an actually useful purpose.
# Show that $\lim_{n\to\infty}\int_{\mathbb{R}}f_nd\mu$ exists, where $\mu=\sum_{k=1}^M \delta_{y_k}$
I want to show that $$\lim_{n\to\infty}\int_{\mathbb{R}}f_n\,d\mu$$ exists and calculate it, with $$\mu=\sum_{k=1}^M \delta_{y_k}$$ and $$\delta_{y_k}$$ being the Dirac measure. Additionally, $$f_n:\mathbb{R}\to\mathbb{R}$$ is defined by $$x \mapsto \arctan(n(x-a))-\arctan(n(x-b))$$ with $$a,b\in\mathbb{R}, a\lt b, n\in\mathbb{N}$$, and $$y_k\in\mathbb{R}$$ is given. What I tried: $$\lim_{n\to\infty}\int_{\mathbb{R}}f_n\,d\mu=\lim_{n\to\infty}\left(\sum_{k=1}^M\int_{\mathbb{R}}f_n\,d\delta_{y_k}\right)$$ Is this correct? Then, since the sum is finite, $$\lim_{n\to\infty}\left(\sum_{k=1}^M\int_{\mathbb{R}}f_n\,d\delta_{y_k}\right)=\lim_{n\to\infty}\left(\sum_{k=1}^Mf_n(y_k)\right)=\sum_{k=1}^M\left(\lim_{n\to\infty}f_n(y_k)\right)$$ If I can now show that $$f_n$$ converges pointwise to a limit function $$f$$, I should be done. However, I struggle with that and am completely unsure whether this is at all correct. Thanks for your help!
The only thing you have to know on the $$\arctan$$ function is that $$\lim_{x\to -\infty}\arctan x=-\frac{\pi}2\mbox{ and }\lim_{x\to +\infty}\arctan x= \frac{\pi}2.$$ Therefore, the value of $$\lim_{n\to +\infty}f_n\left(y_k\right)$$ depends whether $$y_k\lt a$$, $$y_k=a$$, $$a\lt y_k\lt b$$, $$y_k=b$$ or $$y_k\gt b$$.
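A quick numerical check of the five cases (taking a = 1 and b = 2 as sample values, with a large n standing in for the limit):

```python
import math

def f(n, y, a=1.0, b=2.0):
    # f_n(y) = arctan(n(y - a)) - arctan(n(y - b))
    return math.atan(n * (y - a)) - math.atan(n * (y - b))

n = 10**8  # a huge n stands in for n -> infinity
cases = [(0.0, 0.0),            # y < a      ->  0
         (1.0, math.pi / 2),    # y = a      ->  pi/2
         (1.5, math.pi),        # a < y < b  ->  pi
         (2.0, math.pi / 2),    # y = b      ->  pi/2
         (3.0, 0.0)]            # y > b      ->  0
for y, limit in cases:
    assert abs(f(n, y) - limit) < 1e-6
```

The limit of the integral is then just the sum of these pointwise limits over the $$y_k$$.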
• Thanks, I just figured that out a few minutes ago. I have now found an alternative approach by splitting $f_n$ in a negative and positive part and then using MCT. However, I struggle to show that $f_n$ is indeed a non-decreasing sequence, do you have any hints on that? – Michael Maier Jan 8 at 15:24 |
# All Questions
10,749 questions
4k views
### An NP-complete variant of factoring.
Arora and Barak's book presents factoring as the following problem: $\text{FACTORING} = \{\langle L, U, N \rangle \;|\; (\exists \text{ a prime } p \in \{L, \ldots, U\})[p | N]\}$ They add, further ...
1k views
14k views
### Complexity of Finding the Eigendecomposition of a Matrix
My question is simple: What is the worst-case running time of the best known algorithm for computing an eigendecomposition of an $n \times n$ matrix? Does eigendecomposition reduce to matrix ...
1k views
DISCLAIMER: This is an open ended question and stackexchange puritans would probably feel an extraordinary urge to vote it down to oblivion. However, I cannot think of any other forum more appropriate ...
2k views
### Importance of single author papers?
I'm a fourth year PhD student in theoretical computer science. I'd like to stay in academia, so I'm thinking about how best to advance my career. Obviously the best way to do that is write lots of ...
569 views
### Does Rabin/Yao exist (at least in a form that can be cited)?
In Andrew Chi-Chih Yao's classic 1979 paper he references "M. O. Rabin and A. C. Yao, in preparation". This is for the result that the bounded-error communication complexity of the equality function ...
2k views
### Alphabet of single-tape Turing machine
Can every function $f : \{0,1\}^* \to \{0,1\}$ that is computable in time $t$ on a single-tape Turing machine using an alphabet of size $k = O(1)$ be computed in time $O(t)$ on a single-tape Turing ...
5k views
### References for TCS proof techniques
Are there any references (online or in book form) that organize and discuss TCS theorems by proof technique? Garey and Johnson do this for the various kinds of widget constructions needed for NP-...
2k views
### Using error-correcting codes in theory
What are applications of error-correcting codes in theory besides error correction itself? I am aware of three applications: Goldreich-Levin theorem about hard core bit, Trevisan's construction of ...
21k views
### What is the difference between non-determinism and randomness?
I recently heard this - "A non-deterministic machine is not the same as a probabilistic machine. In crude terms, a non-deterministic machine is a probabilistic machine in which probabilities for ...
2k views
### Are the problems PRIMES, FACTORING known to be P-hard?
Let PRIMES (a.k.a. primality testing) be the problem: Given a natural number $n$, is $n$ a prime number? Let FACTORING be the problem: Given natural numbers $n$, $m$ with $1 \leq m \leq n$, ...
2k views
### Why are mod_m gates interesting?
Ryan Williams just posted his lower bound on ACC, the class of problems that have constant depth circuits with unbounded fan-in and gates AND, OR, NOT and MOD_m for all possible m's. What's so ...
1k views
### Is there a logic without induction that captures much of P?
The Immerman-Vardi theorem states that PTIME (or P) is precisely the class of languages that can be described by a sentence of First-Order Logic together with a fixed-point operator, over the class of ...
3k views
2k views
### Is there a backup/replacement for the Complexity Zoo?
This is a non-technical question, but certainly relevant for the TCS community. If considered inappropriate, feel free to close. The Complexity Zoo webpage (http://qwiki.stanford.edu/index.php/...
2k views
### Sorting algorithm, such that each element is compared $O(\log n)$ times, and doesn't depend on a sorting network
Are there any known comparison sorting algorithms that do not reduce to sorting networks, such that each element is compared $O(\log n)$ times? As far as I know, the only way to sort with $O(\log n)$ ... |
# Contents
## Idea
A hyperring is like a ring not with an underlying abelian group but an underlying canonical hypergroup.
It is a hypermonoid with additional ring-like structure and properties.
This means that in a hyperring $R$ addition is a multi-valued operation.
## Definition
A hyperring is a non-empty set $R$ equipped with a hyper-addition $+ : R\times R \to P^*(R)$ (where $P^*(R)$ is the set of non-empty subsets) and a multiplication $\cdot : R \times R \to R$ and with elements $0,1 \in R$ such that
1. $(R,+)$ is a canonical hypergroup;
2. $(R,\cdot)$ is a monoid with identity element $1$;
1. $\forall r,s,t \in R : r(s+t) = r s + r t$ and $(s + t) r = s r + t r$;
4. $\forall r \in R : r \cdot 0 = 0 \cdot r = 0$;
5. $0 \neq 1$.
We can form many examples of hyperrings by quotienting a ring $R$ by a subgroup $G \subset R^{\times}$ of its multiplicative group.
A morphism of hyperrings is a map $f : R_1 \to R_2$ such that
1. $\forall a,b \in R_1 : f(a + b) \subset f(a) + f(b)$;
2. $\forall a,b\in R_1 : f(a \cdot b) = f(a) \cdot f(b)$.
A hyperfield is a hyperring for which $(R - \{0\}, \cdot)$ is a group.
## Examples
### Hyperfield extension of field with one element
The hyperfield extension of the field with one element is
$\mathbf{K} := (\{0,1\}, +, \cdot)$
with additive neutral element $0$ and the hyper-addition rule
$1 + 1 = \{0,1\} \,.$
This is to be thought of as the hyperring of integers modulo the relation “is 0 or not 0”: think of $0 \in \mathbf{K}$ as being the integer 0 and of $1 \in \mathbf{K}$ as being any non-zero integer, then the addition rule says that 0 plus any non-zero integer is non-zero, and that the sum of a non-zero integer with another non-zero integer is either zero or non-zero.
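Since addition in $\mathbf{K}$ is set-valued, it can be transcribed directly as a set-valued operation. The code below is only an illustration of the axioms, not standard notation:

```python
def hadd(x, y):
    """Hyper-addition on K = {0, 1}: returns the set of possible sums."""
    if x == 0 and y == 0:
        return {0}
    if x == 1 and y == 1:
        return {0, 1}   # nonzero + nonzero is either zero or nonzero
    return {1}          # 0 + nonzero is nonzero

def hmul(x, y):
    """Ordinary (single-valued) multiplication on {0, 1}."""
    return x * y

assert hadd(1, 1) == {0, 1}
assert all(hadd(0, x) == {x} for x in (0, 1))   # 0 is the neutral element
assert hmul(1, 1) == 1 and hmul(0, 1) == 0
```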
### The signature hyperfield $\mathbf{S}$
Let $\mathbf{S} = \{0,1,-1\}$ be the hyperfield with multiplication induced from $\mathbb{Z}$ and with addition given by 0 being the additive unit and the laws
• $1+1 = \{1\}$;
• $-1 + -1 = \{-1\}$
• $1 + -1 = \{-1, 0, 1\}$.
This we may think of as being the hyperring of integers modulo the relation “is positive or negative or 0”: think of $1$ as being any positive integer, $0$ as being the integer $0$ and $-1$ as being any negative integer. Then the hyper-addition law above encodes how the signature of integers behaves under addition.
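Under this reading, the hyper-addition table can be checked mechanically: for any integers x and y, the sign of x + y must be one of the allowed outcomes of adding their signs in $\mathbf{S}$. A small illustrative transcription:

```python
def sign(n):
    return (n > 0) - (n < 0)

def hadd_S(s, t):
    """Hyper-addition on S = {-1, 0, 1}, returning a set of outcomes."""
    if s == 0:
        return {t}
    if t == 0:
        return {s}
    if s == t:
        return {s}          # same sign: the sum keeps that sign
    return {-1, 0, 1}       # opposite signs: anything can happen

# sign(x + y) is always among the allowed outcomes of sign(x) + sign(y).
for x in range(-5, 6):
    for y in range(-5, 6):
        assert sign(x + y) in hadd_S(sign(x), sign(y))
```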
Proposition
To each element, $\phi$, of $Hom(\mathbb{Z}[X], \mathbf{S})$ there corresponds an extended real number, $Re(\phi) \in [-\infty, \infty]$ given as a Dedekind cut. This is a surjective mapping. The inverse image of each real algebraic number contains three elements, while that of a nonalgebraic number is a singleton. For real algebraic $\alpha$, the three homomorphisms from $\mathbb{Z}[X]$ to $\mathbf{S}$ are
$P(T) \mapsto \underset{\epsilon \to 0+} {lim} sign P(\alpha + t \epsilon), t \in \{-1, 0, 1\}.$
## References
The notion of hyperring and hyperfield is due to Marc Krasner:
• M. Krasner, Approximation des corps valués complets de caractéristique $p\neq 0$ par ceux de caractéristique 0 , pp.126-201 in Colloque d’ Algébre Supérieure (Bruxelles), 1956 , Ceuterick Louvain 1957.
Another early reference is
• D. Stratigopoulos, Hyperanneaux non commutatifs: Hyperanneaux, hypercorps, hypermodules, hyperespaces vectoriels et leurs propriétés élémentaires (French) C. R. Acad. Sci. Paris Sér. A-B 269 (1969) A489–A492.
Modern applications in connection to the field with one element are discussed in
An overview is in
• Jaiung Jun, Algebraic Geometry over Hyperrings , arXiv:1512.04837 (2015). (abstract)
Revised on March 30, 2016 08:32:39 by Thomas Holder (176.0.29.143) |
## Linear Algebra and Its Applications, exercise 1.4.16
Exercise 1.4.16. For the following matrices
$E = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 0&0&1 \end{bmatrix} \quad F = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \quad G = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&1 \end{bmatrix}$
verify that (EF)G = E(FG).
$EF = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 1&0&1 \end{bmatrix}$
and
$(EF)G = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 1&1&1 \end{bmatrix}$
We also have
$FG = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&1&1 \end{bmatrix}$
and
$E(FG) = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 0&0&1 \end{bmatrix} \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 1&1&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ -2&1&0 \\ 1&1&1 \end{bmatrix} = (EF)G$
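As a sanity check, the same products can be verified with a few lines of Python (plain nested lists, no external libraries):

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

E = [[1, 0, 0], [-2, 1, 0], [0, 0, 1]]
F = [[1, 0, 0], [0, 1, 0], [1, 0, 1]]
G = [[1, 0, 0], [0, 1, 0], [0, 1, 1]]

# Both groupings give the same product, as matrix multiplication
# is associative.
assert matmul(matmul(E, F), G) == matmul(E, matmul(F, G)) \
       == [[1, 0, 0], [-2, 1, 0], [1, 1, 1]]
```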
NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang.
If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fourth Edition and the accompanying free online course, and Dr Strang’s other books.
This entry was posted in linear algebra. Bookmark the permalink. |
# RANS-based turbulence models
The objective of the turbulence models for the RANS equations is to compute the Reynolds stresses, which can be done by three main categories of RANS-based turbulence models: |
## Thursday, January 19, 2017
### Infinite Series and Markov Chains
There's a wonderful new series of math videos PBS Infinite Series hosted by Cornell Math Phd student Kelsey Houston-Edwards. Check out this latest video on Markov Chains.
She gives an amazingly clear description of why, on a random walk on an undirected graphs, the stationary distribution puts probability on each node proportional to its degree.
Houston-Edwards also relies without proof on the following fact: The expected time to start and return to a vertex v is 1/p where p is the probability given to v in the stationary distribution. Why should that be true?
Didn't seem so obvious to me, so I looked it up. Here is an informal proof:
Let V1 be the random variable giving the length of time to get from v back to v. Let's take that walk again and let V2 be the length of time to get from v back to v the second time, and so on. Let An=V1+V2+...+Vn, the random variable representing the number of transitions taken to return to v n times. In a Markov chain each Vi is distributed the same, so E(An) = n E(V1).
As n gets large the Markov chain approaches the stationary distribution and will be in state v about a p fraction of the time. After E(An) transitions we should return to v about p E(An) times, where we actually return n times. So we have p E(An) approaches n, or p n E(V1) approaches n and the only way this could happen is if E(V1)=1/p.
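The fact can also be checked exactly on a small example. For a random walk on an undirected graph, the stationary probability of v is deg(v)/2|E|, so the expected return time should be 2|E|/deg(v). The "paw" graph below (a triangle with a pendant vertex) is an arbitrary test case; the hitting times come from first-step analysis, solved exactly over rationals:

```python
from fractions import Fraction

# Paw graph: a triangle 0-1-2 with a pendant vertex 3 hanging off 0.
adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0]}
v = 3                                   # vertex whose return time we check
others = [u for u in adj if u != v]
idx = {u: i for i, u in enumerate(others)}
n = len(others)

# First-step analysis: h[u] = 1 + average of h over u's neighbors,
# with h[v] = 0.  Written as (I - P)h = 1 restricted to u != v and
# solved exactly by Gauss-Jordan elimination over rationals.
A = [[Fraction(0)] * n for _ in range(n)]
b = [Fraction(1) for _ in range(n)]
for u in others:
    i = idx[u]
    A[i][i] += 1
    for w in adj[u]:
        if w != v:
            A[i][idx[w]] -= Fraction(1, len(adj[u]))

for i in range(n):
    piv = A[i][i]
    A[i] = [x / piv for x in A[i]]
    b[i] /= piv
    for j in range(n):
        if j != i and A[j][i] != 0:
            factor = A[j][i]
            A[j] = [x - factor * y for x, y in zip(A[j], A[i])]
            b[j] -= factor * b[i]

h = {u: b[idx[u]] for u in others}
return_time = 1 + sum(h[w] for w in adj[v]) / len(adj[v])

num_edges = sum(len(ns) for ns in adj.values()) // 2
stationary_p = Fraction(len(adj[v]), 2 * num_edges)   # deg(v)/2|E| = 1/8
print(return_time, 1 / stationary_p)  # both are 8
```

Here deg(3) = 1 and |E| = 4, so p = 1/8 and the expected return time works out to exactly 8, matching 1/p.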
#### 1 comment:
1. Dear Lance,
The fact you're referring to is proved as Prop 1.14 in Levin, Peres and Wilmer's book "Markov Chains and Mixing Times" (2nd edition). They show the slightly more general fact that if
\pi(y) = E_z (number of visits to y before returning to z),
where E_z denotes the expectation when the chain begins from z, then \pi(y) is a stationary measure for the transition matrix P (assuming it is irreducible and the state space is finite).
To make a probability distribution out of this they divide by the expected return time from z to z. In this case \pi(z) itself equals 1, so the normalized value \pi(z)/E_z(first return to z) is exactly 1/E_z(first return to z), which proves the result.
Challenge Problem
In how many ways can you spell the word COOL in the grid below? You can start on any letter, then on each step, you can step one letter in any direction (up, down, left, right, or diagonal).
$$\begin{array}{ccccc} C&C&C&C&C\\ L&O&O&O&L\\ L&O&O&O&L\\ L&O&O&O&L\\ C&C&C&C&C\\ \end{array}$$
Oct 23, 2017
#1
I count 4
Oct 24, 2017
edited by hectictar Oct 24, 2017
#2
4 was incorrect... Do you have any other answers?
I counted 64 ways, that was wrong too.
Mr.Owl Oct 24, 2017
#3
http://web2.0calc.com/questions/in-how-many-ways-can-you-spell-the-word-cool-in-the-grid-below-you-can-start-on-any-letter-then-on-each-step-you-can-step-one-letter-in-an
Oct 24, 2017 |
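Since the thread only links out for the final count, the enumeration is easy to reproduce with a short brute force over all C→O→O→L walks, transcribing the grid and the 8-direction step rule from the question:

```python
from itertools import product

grid = ["CCCCC",
        "LOOOL",
        "LOOOL",
        "LOOOL",
        "CCCCC"]
word = "COOL"

def neighbors(r, c):
    # All 8 adjacent cells (up, down, left, right, and diagonals).
    for dr, dc in product((-1, 0, 1), repeat=2):
        if (dr, dc) != (0, 0) and 0 <= r + dr < 5 and 0 <= c + dc < 5:
            yield r + dr, c + dc

def count_from(r, c, i):
    # Number of ways to spell word[i:] starting at cell (r, c).
    if grid[r][c] != word[i]:
        return 0
    if i == len(word) - 1:
        return 1
    return sum(count_from(nr, nc, i + 1) for nr, nc in neighbors(r, c))

total = sum(count_from(r, c, 0) for r in range(5) for c in range(5))
print(total)  # 96 by this enumeration
```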
IX
1 Cz 2 Pi 3 So 4 N 5 Pn 6 Wt 7 Śr 8 Cz 9 Pi 10 So 11 N 12 Pn 13 Wt 14 Śr 15 Cz 16 Pi 17 So 18 N 19 Pn 20 Wt 21 Śr 22 Cz 23 Pi 24 So 25 N 26 Pn 27 Wt 28 Śr 29 Cz 30 Pi
X
1 So 2 N 3 Pn 4 Wt 5 Śr 6 Cz 7 Pi 8 So 9 N 10 Pn 11 Wt 12 Śr 13 Cz 14 Pi 15 So 16 N 17 Pn 18 Wt 19 Śr 20 Cz 21 Pi 22 So 23 N 24 Pn 25 Wt 26 Śr 27 Cz 28 Pi 29 So 30 N 31 Pn
XI
1 Wt 2 Śr 3 Cz 4 Pi 5 So 6 N 7 Pn 8 Wt 9 Śr 10 Cz 11 Pi 12 So 13 N 14 Pn 15 Wt 16 Śr 17 Cz 18 Pi 19 So 20 N 21 Pn 22 Wt 23 Śr 24 Cz 25 Pi 26 So 27 N 28 Pn 29 Wt 30 Śr
XII
1 Cz 2 Pi 3 So 4 N 5 Pn 6 Wt 7 Śr 8 Cz 9 Pi 10 So 11 N 12 Pn 13 Wt 14 Śr 15 Cz 16 Pi 17 So 18 N 19 Pn 20 Wt 21 Śr 22 Cz 23 Pi 24 So 25 N 26 Pn 27 Wt 28 Śr 29 Cz 30 Pi 31 So
I
1 N 2 Pn 3 Wt 4 Śr 5 Cz 6 Pi 7 So 8 N 9 Pn 10 Wt 11 Śr 12 Cz 13 Pi 14 So 15 N 16 Pn 17 Wt 18 Śr 19 Cz 20 Pi 21 So 22 N 23 Pn 24 Wt 25 Śr 26 Cz 27 Pi 28 So 29 N 30 Pn 31 Wt
II
1 Śr 2 Cz 3 Pi 4 So 5 N 6 Pn 7 Wt 8 Śr 9 Cz 10 Pi 11 So 12 N 13 Pn 14 Wt 15 Śr 16 Cz 17 Pi 18 So 19 N 20 Pn 21 Wt 22 Śr 23 Cz 24 Pi 25 So 26 N 27 Pn 28 Wt
III
1 Śr 2 Cz 3 Pi 4 So 5 N 6 Pn 7 Wt 8 Śr 9 Cz 10 Pi 11 So 12 N 13 Pn 14 Wt 15 Śr 16 Cz 17 Pi 18 So 19 N 20 Pn 21 Wt 22 Śr 23 Cz 24 Pi 25 So 26 N 27 Pn 28 Wt 29 Śr 30 Cz 31 Pi
IV
1 So 2 N 3 Pn 4 Wt 5 Śr 6 Cz 7 Pi 8 So 9 N 10 Pn 11 Wt 12 Śr 13 Cz 14 Pi 15 So 16 N 17 Pn 18 Wt 19 Śr 20 Cz 21 Pi 22 So 23 N 24 Pn 25 Wt 26 Śr 27 Cz 28 Pi 29 So 30 N
V
1 Pn 2 Wt 3 Śr 4 Cz 5 Pi 6 So 7 N 8 Pn 9 Wt 10 Śr 11 Cz 12 Pi 13 So 14 N 15 Pn 16 Wt 17 Śr 18 Cz 19 Pi 20 So 21 N 22 Pn 23 Wt 24 Śr 25 Cz 26 Pi 27 So 28 N 29 Pn 30 Wt 31 Śr
VI
1 Cz 2 Pi 3 So 4 N 5 Pn 6 Wt 7 Śr 8 Cz 9 Pi 10 So 11 N 12 Pn 13 Wt 14 Śr 15 Cz 16 Pi 17 So 18 N 19 Pn 20 Wt 21 Śr 22 Cz 23 Pi 24 So 25 N 26 Pn 27 Wt 28 Śr 29 Cz 30 Pi
VII
1 So 2 N 3 Pn 4 Wt 5 Śr 6 Cz 7 Pi 8 So 9 N 10 Pn 11 Wt 12 Śr 13 Cz 14 Pi 15 So 16 N 17 Pn 18 Wt 19 Śr 20 Cz 21 Pi 22 So 23 N 24 Pn 25 Wt 26 Śr 27 Cz 28 Pi 29 So 30 N 31 Pn
VIII
1 Wt 2 Śr 3 Cz 4 Pi 5 So 6 N 7 Pn 8 Wt 9 Śr 10 Cz 11 Pi 12 So 13 N 14 Pn 15 Wt 16 Śr 17 Cz 18 Pi 19 So 20 N 21 Pn 22 Wt 23 Śr 24 Cz 25 Pi 26 So 27 N 28 Pn 29 Wt 30 Śr 31 Cz
IX
1 Pi 2 So 3 N 4 Pn 5 Wt 6 Śr 7 Cz 8 Pi 9 So 10 N 11 Pn 12 Wt 13 Śr 14 Cz 15 Pi 16 So 17 N 18 Pn 19 Wt 20 Śr 21 Cz 22 Pi 23 So 24 N 25 Pn 26 Wt 27 Śr 28 Cz 29 Pi 30 So
examination session · inter-semester break · public holiday · day free of teaching · first day of the semester (weekday abbreviations: Pn = Mon, Wt = Tue, Śr = Wed, Cz = Thu, Pi = Fri, So = Sat, N = Sun)
# If k^2 = m^2, which of the following must be true?
Math Expert
Joined: 02 Sep 2009
Posts: 47946
If k^2 = m^2, which of the following must be true? [#permalink]
### Show Tags
14 Oct 2015, 21:24
Difficulty:
5% (low)
Question Stats:
88% (00:19) correct 12% (00:22) wrong based on 640 sessions
If k^2 = m^2, which of the following must be true?
(A) k = m
(B) k = −m
(C) k = |m|
(D) k = −|m|
(E) |k| = |m|
Kudos for a correct solution.
Manager
Status: tough ... ? Naaahhh !!!!
Joined: 08 Sep 2015
Posts: 64
Location: India
Concentration: Marketing, Strategy
WE: Marketing (Computer Hardware)
Re: If k^2 = m^2, which of the following must be true? [#permalink]
### Show Tags
14 Oct 2015, 21:28
1
|k| = |m| suffice the condition of k^2 = m^2.
so ans: E
Verbal Forum Moderator
Status: Greatness begins beyond your comfort zone
Joined: 08 Dec 2013
Posts: 2105
Location: India
Concentration: General Management, Strategy
Schools: Kelley '20, ISB '19
GPA: 3.2
WE: Information Technology (Consulting)
Re: If k^2 = m^2, which of the following must be true? [#permalink]
### Show Tags
14 Oct 2015, 22:21
2
k^2=m^2
Taking the non-negative square root of both sides of the equation,
|k| = |m|
SVP
Joined: 08 Jul 2010
Posts: 2137
Location: India
GMAT: INSIGHT
WE: Education (Education)
Re: If k^2 = m^2, which of the following must be true? [#permalink]
### Show Tags
14 Oct 2015, 22:35
1
Bunuel wrote:
If k^2 = m^2, which of the following must be true?
(A) k = m
(B) k = −m
(C) k = |m|
(D) k = −|m|
(E) |k| = |m|
Kudos for a correct solution.
With even powers, the signs of the underlying variables can't be determined.
Hence, we can't say anything about the signs of k and m being positive or negative.
Therefore, options A, B, C and D are ruled out, as all of these options assume a specific, known sign for k and/or m.
For any signs of k and m, their absolute values must be the same because k^2 = m^2; therefore, |k| = |m|
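For the skeptical, the must-be-true logic can be machine-checked by enumerating small integer pairs with k^2 = m^2; only choice (E) holds in every case. This is just a verification sketch, not part of any official solution:

```python
# All small integer pairs satisfying the premise k^2 = m^2.
pairs = [(k, m) for k in range(-3, 4) for m in range(-3, 4) if k * k == m * m]

choices = {
    "A: k = m":    lambda k, m: k == m,
    "B: k = -m":   lambda k, m: k == -m,
    "C: k = |m|":  lambda k, m: k == abs(m),
    "D: k = -|m|": lambda k, m: k == -abs(m),
    "E: |k|=|m|":  lambda k, m: abs(k) == abs(m),
}

# A choice "must be true" only if it holds for every qualifying pair.
always_true = [name for name, test in choices.items()
               if all(test(k, m) for k, m in pairs)]
print(always_true)  # ['E: |k|=|m|']
```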
Math Revolution GMAT Instructor
Joined: 16 Aug 2015
Posts: 6020
GMAT 1: 760 Q51 V42
GPA: 3.82
Re: If k^2 = m^2, which of the following must be true? [#permalink]
### Show Tags
15 Oct 2015, 00:11
1
1
Forget conventional ways of solving math questions. In PS, IVY approach is the easiest and quickest way to find the answer.
If k^2 = m^2, which of the following must be true?
(A) k = m
(B) k = −m
(C) k = |m|
(D) k = −|m|
(E) |k| = |m|
Since k^2 = m^2 we have 0 = k^2 − m^2 = (k−m)(k+m). So k = m or k = −m.
Since neither case alone need hold, neither (A) nor (B) must be true.
Choice (C) would force k to be greater than or equal to 0.
Similarly, choice (D) would force k to be less than or equal to 0.
So neither (C) nor (D) can be the answer.
"Only $99 for 3 month Online Course" "Free Resources-30 day online access & Diagnostic Test" "Unlimited Access to over 120 free video lessons - try it yourself" Intern Joined: 08 Aug 2015 Posts: 8 Re: If k^2 = m^2, which of the following must be true? [#permalink] ### Show Tags 24 Oct 2015, 02:18 1 Bunuel wrote: If k^2 = m^2, which of the following must be true? (A) k = m (B) k = −m (C) k = |m| (D) k = −|m| (E) |k| = |m| Kudos for a correct solution. As squaring hides the sign of a number, k^2 will equal m^2 for every possible positive/negative combination of the two values. The only statement we can make for sure is therefore (E) |k| = |m|. Director Status: Professional GMAT Tutor Affiliations: AB, cum laude, Harvard University (Class of '02) Joined: 10 Jul 2015 Posts: 664 Location: United States (CA) Age: 38 GMAT 1: 770 Q47 V48 GMAT 2: 730 Q44 V47 GMAT 3: 750 Q50 V42 GRE 1: Q168 V169 WE: Education (Education) If k^2 = m^2, which of the following must be true? [#permalink] ### Show Tags 11 Apr 2016, 16:32 2 Attached is a visual that should help. Attachments Screen Shot 2016-04-11 at 4.31.20 PM.png [ 80.7 KiB | Viewed 8816 times ] _________________ Harvard grad and 99% GMAT scorer, offering expert, private GMAT tutoring and coaching, both in-person (San Diego, CA, USA) and online worldwide, since 2002. One of the only known humans to have taken the GMAT 5 times and scored in the 700s every time (700, 710, 730, 750, 770), including verified section scores of Q50 / V47, as well as personal bests of 8/8 IR (2 times), 6/6 AWA (4 times), 50/51Q and 48/51V (1 question wrong). You can download my official test-taker score report (all scores within the last 5 years) directly from the Pearson Vue website: https://tinyurl.com/y94hlarr Date of Birth: 09 December 1979. 
GMAT Action Plan and Free E-Book - McElroy Tutoring Contact: [email protected] Target Test Prep Representative Status: Founder & CEO Affiliations: Target Test Prep Joined: 14 Oct 2015 Posts: 3161 Location: United States (CA) Re: If k^2 = m^2, which of the following must be true? [#permalink] ### Show Tags 03 May 2016, 08:49 4 mkarthik1 wrote: If k^2 = m^2, which of the following must be true? A. k = m B. k = -m C. k = |m| D. k = -|m| E. |k| = |m| In the above question, Both C and E seem to be correct to me . The official answer is E. Why should it not be C? can someone please explain. what is the difference between C and E Solution: We are given that k^2 = m^2, and we can start by simplifying the equation by taking the square root of both sides. √k^2 = √m^2 When we take the square root of a variable squared, the result is the absolute value of that variable. Thus: √k^2 = √m^2 is |k| = |m| Note that answer choices A through D could all be true, but each of them would be true only under specific circumstances. Answer choice E is the only one that is universally true. Answer: E _________________ Scott Woodbury-Stewart Founder and CEO GMAT Quant Self-Study Course 500+ lessons 3000+ practice problems 800+ HD solutions Director Joined: 02 Sep 2016 Posts: 734 Re: If k^2 = m^2, which of the following must be true? [#permalink] ### Show Tags 06 Sep 2017, 06:29 Hello Bunuel I inferred that square or even power of a number is always positive. Therefore k^2 and m^2 are positive. So |k|=|m|, no matter what the signs are. I don't see why in the official guide its mentioned that we need to take root of k^2 and m^2 when we know that ^2 always gives positive number. _________________ Help me make my explanation better by providing a logical feedback. If you liked the post, HIT KUDOS !! Don't quit.............Do it. Math Expert Joined: 02 Sep 2009 Posts: 47946 Re: If k^2 = m^2, which of the following must be true? 
[#permalink] ### Show Tags 06 Sep 2017, 22:39 Shiv2016 wrote: Hello Bunuel I inferred that square or even power of a number is always positive. Therefore k^2 and m^2 are positive. So |k|=|m|, no matter what the signs are. I don't see why in the official guide its mentioned that we need to take root of k^2 and m^2 when we know that ^2 always gives positive number. Do not follow what you mean bu the point is that $$\sqrt{x^2}=|x|$$, so if we take the square root from k^2 = m^2, we'll get |k| = |m|. _________________ Director Joined: 02 Sep 2016 Posts: 734 Re: If k^2 = m^2, which of the following must be true? [#permalink] ### Show Tags 06 Sep 2017, 22:59 Bunuel wrote: Shiv2016 wrote: Hello Bunuel I inferred that square or even power of a number is always positive. Therefore k^2 and m^2 are positive. So |k|=|m|, no matter what the signs are. I don't see why in the official guide its mentioned that we need to take root of k^2 and m^2 when we know that ^2 always gives positive number. Do not follow what you mean bu the point is that $$\sqrt{x^2}=|x|$$, so if we take the square root from k^2 = m^2, we'll get |k| = |m|. Hi. Sorry for being not so clear I meant to say that even powers ^2,^4, etc. always gives positive outcome no matter what the sign of the base is. For example: (-2)^2= 4 and (2)^2= 4 So what I inferred is that k^2 and m^2 are positive (no matter what the sign of k and m are). Therefore we can say that |k|=|m|. Is this good? Math Expert Joined: 02 Sep 2009 Posts: 47946 Re: If k^2 = m^2, which of the following must be true? [#permalink] ### Show Tags 06 Sep 2017, 23:05 Shiv2016 wrote: Bunuel wrote: Shiv2016 wrote: Hello Bunuel I inferred that square or even power of a number is always positive. Therefore k^2 and m^2 are positive. So |k|=|m|, no matter what the signs are. I don't see why in the official guide its mentioned that we need to take root of k^2 and m^2 when we know that ^2 always gives positive number. 
Do not follow what you mean bu the point is that $$\sqrt{x^2}=|x|$$, so if we take the square root from k^2 = m^2, we'll get |k| = |m|. Hi. Sorry for being not so clear I meant to say that even powers ^2,^4, etc. always gives positive outcome no matter what the sign of the base is. For example: (-2)^2= 4 and (2)^2= 4 So what I inferred is that k^2 and m^2 are positive (no matter what the sign of k and m are). Therefore we can say that |k|=|m|. Is this good? Apart from knowing that the even roots and absolute values give non-negative result, we should also deduce that from k^2 = m^2 we can get |k| = |m|. Else, what would you say if one of the options were k^4 = |m|? Here both sides are also non-negative, but can we say from k^2 = m^2 that k^4 = |m| is true? No. _________________ Manager Joined: 10 Sep 2014 Posts: 74 Re: If k^2 = m^2, which of the following must be true? [#permalink] ### Show Tags 15 Apr 2018, 07:36 VeritasPrepKarishma, Bunuel could you please share your approach explaining each option? Thanks. Veritas Prep GMAT Instructor Joined: 16 Oct 2010 Posts: 8187 Location: Pune, India Re: If k^2 = m^2, which of the following must be true? [#permalink] ### Show Tags 16 Apr 2018, 10:11 Bunuel wrote: If k^2 = m^2, which of the following must be true? (A) k = m (B) k = −m (C) k = |m| (D) k = −|m| (E) |k| = |m| Kudos for a correct solution. What does $$k^2 = m^2$$ imply? Only that |k| = |m| (A) k = m Not necessary e.g. k = 5, m = -5 (B) k = −m Not necessary e.g. k = 5, m = 5 (C) k = |m| Not necessary e.g. k = -5, m = 5 (D) k = −|m| Not necessary e.g. k = 5, m = 5 (E) |k| = |m| Always necessary. Note that absolute values of the two most be equal for the square to be equal. We cannot say anything about their signs though. Answer (E) _________________ Karishma Veritas Prep GMAT Instructor Save up to$1,000 on GMAT prep through 8/20! Learn more here >
Monthly Archives: December 2015
Mistakes are Good
Confession #1: My answers on my last post were WRONG.
I briefly thought about taking that post down, but discarded that idea when I thought about the reality that almost all published mathematics is polished, cleaned, and optimized. Many students struggle with mathematics under the misconception that their first attempts at any topic should be as polished as what they read in published sources.
While not precisely from the same perspective, Dan Teague recently wrote an excellent, short piece of advice to new teachers on NCTM’s ‘blog entitled Demonstrating Competence by Making Mistakes. I argue Dan’s advice actually applies to all teachers, so in the spirit of showing how to stick with a problem and not just walking away saying “I was wrong”, I’m going to keep my original post up, add an advisory note at the start about the error, and show below how I corrected my error.
Confession #2: My approach was a much longer and far less elegant solution than the identical approaches offered by a comment by “P” on my last post and the solution offered on FiveThirtyEight. Rather than just accepting the alternative solution, as too many students are wont to do, I acknowledged the more efficient approach of others before proceeding to find a way to get the answer through my initial idea.
I’ll also admit that I didn’t immediately see the simple approach to the answer and rushed my post in the time I had available to get it up before the answer went live on FiveThirtyEight.
GENERAL STRATEGY and GOALS:
1-Use a PDF: The original FiveThirtyEight post asked for the expected time before the siblings simultaneously finished their tasks. I interpreted this as expected value, and I knew how to compute the expected value of a pdf of a random variable. All I needed was the potential wait times, t, and their corresponding probabilities. My approach was solid, but a few of my computations were off.
2-Use Self-Similarity: I don’t see many people employing the self-similarity tactic I used in my initial solution. Resolving my initial solution would allow me to continue using what I consider a pretty elegant strategy for handling cumbersome infinite sums.
A CORRECTED SOLUTION:
Stage 1: My table for the distribution of initial choices was correct, as were my conclusions about the probability and expected time if they chose the same initial app.
My first mistake was in my calculation of the expected time if they did not choose the same initial app. The 20 numbers in blue above represent that sample space. Notice that there are 8 times where one sibling chose a 5-minute app, leaving 6 other times where one sibling chose a 4-minute app while the other chose something shorter. Similarly, there are 4 choices of an at most 3-minute app, and 2 choices of an at most 2-minute app. So the expected length of time spent by the longer app if the same was not chosen for both is
$E(Round1) = \frac{1}{20}*(8*5+6*4+4*3+2*2)=4$ minutes,
a notably longer time than I initially reported.
For the initial app choice, there is a $\frac{1}{5}$ chance they choose the same app for an average time of 3 minutes, and a $\frac{4}{5}$ chance they choose different apps for an average time of 4 minutes.
Stage 2: My biggest error was a rushed assumption that all of the entries I gave in the Round 2 table were equally likely. That is clearly false as you can see from Table 1 above. There are only two instances of a time difference of 4, while there are eight instances of a time difference of 1. A correct solution using my approach needs to account for these varied probabilities. Here is a revised version of Table 2 with these probabilities included.
Conveniently–as I had noted without full realization in my last post–the revised Table 2 still shows the distribution for the 2nd and all future potential rounds until the siblings finally align, including the probabilities. This proved to be a critical feature of the problem.
Another oversight was not fully recognizing which events would contribute to increasing the time before parity. The yellow highlighted cells in Table 2 are those for which the next app choice was longer than the current time difference, and any of these would increase the length of a trial.
I was initially correct in concluding there was a $\frac{1}{5}$ probability of the second app choice achieving a simultaneous finish and that this would not result in any additional total time. I missed the fact that the six non-highlighted values also did not result in additional time and that there was a $\frac{1}{5}$ chance of this happening.
That leaves a $\frac{3}{5}$ chance of the trial time extending by selecting one of the highlighted events. If that happens, the expected time the trial would continue is
$\displaystyle \frac{4*4+(4+3)*3+(4+3+2)*2+(4+3+2+1)*1}{4+(4+3)+(4+3+2)+(4+3+2+1)}=\frac{13}{6}$ minutes.
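The stage-2 probabilities (1/5, 1/5, 3/5) and the 13/6-minute mean extension can be checked by enumerating the weighted sample space exactly. A quick sketch in Python; the weights 8, 6, 4, 2 for time differences of 1 through 4 minutes are read off Table 1:

```python
from fractions import Fraction

# Weights of the time differences 1..4 between siblings, per Table 1
# (8 ways to differ by 1, 6 by 2, 4 by 3, 2 by 4).
weights = {1: 8, 2: 6, 3: 4, 4: 2}
total = sum(weights.values()) * 5  # each difference pairs with 5 next-app choices

p_match = Fraction(0)        # next pick exactly closes the gap
p_no_add = Fraction(0)       # next pick is shorter than the gap: no time added
p_add = Fraction(0)          # next pick overshoots the gap: time added
added_weighted = Fraction(0)

for d, w in weights.items():
    for pick in range(1, 6):
        p = Fraction(w, total)
        if pick == d:
            p_match += p
        elif pick < d:
            p_no_add += p
        else:
            p_add += p
            added_weighted += p * (pick - d)

print(p_match, p_no_add, p_add)  # 1/5 1/5 3/5
print(added_weighted / p_add)    # mean extension given an overshoot: 13/6
```

Using exact fractions avoids any rounding doubts about the 13/6 figure.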
Iterating: So now I recognized there were 3 potential outcomes at Stage 2–a $\frac{1}{5}$ chance of matching and ending, a $\frac{1}{5}$ chance of not matching but not adding time, and a $\frac{3}{5}$ chance of not matching and adding an average $\frac{13}{6}$ minutes. Conveniently, the last two possibilities still combined to recreate perfectly the outcomes and probabilities of the original Stage 2, creating a self-similar, pseudo-fractal situation. Here’s the revised flowchart for time.
Invoking the similarity, if there were T minutes remaining after arriving at Stage 2, then there was a $\frac{1}{5}$ chance of adding 0 minutes, a $\frac{1}{5}$ chance of remaining at T minutes, and a $\frac{3}{5}$ chance of adding $\frac{13}{6}$ minutes–that is being at $T+\frac{13}{6}$ minutes. Equating all of this allows me to solve for T.
$T=\frac{1}{5}*0+\frac{1}{5}*T+\frac{3}{5}*\left( T+\frac{13}{6} \right) \longrightarrow T=6.5$ minutes
Time Solution: As noted above, at the start, there was a $\frac{1}{5}$ chance of immediately matching with an average 3 minutes, and there was a $\frac{4}{5}$ chance of not matching while using an average 4 minutes. I just showed that from this latter stage, one would expect to need to use an additional mean 6.5 minutes for the siblings to end simultaneously, for a mean total of 10.5 minutes. That means the overall expected time spent is
Total Expected Time $=\frac{1}{5}*3 + \frac{4}{5}*10.5 = 9$ minutes.
Number of Rounds Solution: My initial computation of the number of rounds was actually correct–despite the comment from “P” in my last post–but I think the explanation could have been clearer. I’ll try again.
One round is obviously required for the first choice, and in the $\frac{4}{5}$ chance the siblings don’t match, let N be the average number of rounds remaining. In Stage 2, there’s a $\frac{1}{5}$ chance the trial will end with the next choice, and a $\frac{4}{5}$ chance there will still be N rounds remaining. This second situation is correct because both the no time added and time added possibilities combine to reset Table 2 with a combined probability of $\frac{4}{5}$. As before, I invoke self-similarity to find N.
$N = \frac{1}{5}*1 + \frac{4}{5}*(1+N) \longrightarrow N=5$
Therefore, the expected number of rounds is $\frac{1}{5}*1 + \frac{4}{5}*5 = 4.2$ rounds.
It would be cool if someone could confirm this prediction by simulation.
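Taking up that invitation, here is a minimal Monte Carlo sketch in plain Python (seed chosen arbitrarily). One caveat: the simulated tally below counts every individual app choice, which is not necessarily the same "round" counting used above, so the mean finishing time is the cleaner check:

```python
import random

def one_trial(rng):
    """Simulate one evening: both siblings pick tasks of 1-5 minutes until
    their cumulative times coincide. Returns (total_minutes, picks)."""
    a, b = rng.randint(1, 5), rng.randint(1, 5)
    picks = 2  # the initial simultaneous choice counts as two picks
    while a != b:
        if a < b:            # the sibling who finished earlier picks again
            a += rng.randint(1, 5)
        else:
            b += rng.randint(1, 5)
        picks += 1
    return a, picks

rng = random.Random(2015)
trials = 200_000
results = [one_trial(rng) for _ in range(trials)]
mean_time = sum(t for t, _ in results) / trials
mean_picks = sum(p for _, p in results) / trials  # total individual picks
print(round(mean_time, 2), round(mean_picks, 2))  # mean time should be near 9
```

With 200,000 trials the standard error of the mean time is only a few hundredths of a minute, so the simulation pins the expected time quite tightly.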
CONCLUSION:
I corrected my work and found the exact solution proposed by others and simulated by Steve! Even better, I have shown my approach works and, while notably less elegant, one could solve this expected value problem by invoking the definition of expected value.
Best of all, I learned from a mistake and didn’t give up on a problem. Now that’s the real lesson I hope all of my students get.
Happy New Year, everyone!
Great Probability Problems
UPDATE: Unfortunately, there are a couple errors in my computations below that I found after this post went live. In my next post, Mistakes are Good, I fix those errors and reflect on the process of learning from them.
ORIGINAL POST:
A post last week to the AP Statistics Teacher Community by David Bock alerted me to the new weekly Puzzler by Nate Silver’s new Web site, http://fivethirtyeight.com/. As David noted, with their focus on probability, this new feature offers some great possibilities for AP Statistics probability and simulation.
I describe below FiveThirtyEight’s first three Puzzlers along with a potential solution to the last one. If you’re searching for some great problems for your classes or challenges for some, try these out!
THE FIRST THREE PUZZLERS:
The first Puzzler asked a variation on a great engineering question:
You work for a tech firm developing the newest smartphone that supposedly can survive falls from great heights. Your firm wants to advertise the maximum height from which the phone can be dropped without breaking.
You are given two of the smartphones and access to a 100-story tower from which you can drop either phone from whatever story you want. If it doesn’t break when it falls, you can retrieve it and use it for future drops. But if it breaks, you don’t get a replacement phone.
Using the two phones, what is the minimum number of drops you need to ensure that you can determine exactly the highest story from which a dropped phone does not break? (Assume you know that it breaks when dropped from the very top.) What if, instead, the tower were 1,000 stories high?
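One way to verify the classic answers (14 drops for 100 stories, 45 for 1,000) is a small dynamic program on coverage: with d drops and p intact phones you can distinguish f(d, p) = f(d-1, p-1) + f(d-1, p) + 1 floors. A sketch of my own check, not FiveThirtyEight's published write-up:

```python
def min_drops(floors, phones=2):
    """Smallest number of drops guaranteeing we can pin down the break
    point among `floors` stories, via the coverage recurrence
    f(d, p) = f(d-1, p-1) + f(d-1, p) + 1."""
    cover = [0] * (phones + 1)  # cover[p] = floors distinguishable after d drops
    d = 0
    while cover[phones] < floors:
        d += 1
        # Update from high p down to low p so each cover[p-1] read still
        # holds its (d-1)-drop value.
        for p in range(phones, 0, -1):
            cover[p] = cover[p - 1] + cover[p] + 1
    # With 2 phones this reproduces the triangular numbers d(d+1)/2; whether
    # you count 99 or 100 candidate floors (the top is known to break) does
    # not change the answers here.
    return d

print(min_drops(100), min_drops(1000))  # -> 14 45
```

The recurrence reads: spend one drop at some floor; if the phone breaks you have d-1 drops and p-1 phones for the floors below, and if it survives you have d-1 drops and p phones for the floors above.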
The second Puzzler investigated random geyser eruptions:
You arrive at the beautiful Three Geysers National Park. You read a placard explaining that the three eponymous geysers — creatively named A, B and C — erupt at intervals of precisely two hours, four hours and six hours, respectively. However, you just got there, so you have no idea how the three eruptions are staggered. Assuming they each started erupting at some independently random point in history, what are the probabilities that A, B and C, respectively, will be the first to erupt after your arrival?
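Because each geyser's phase is uniformly random over its period, your waiting times for A, B, and C are uniform on (0, 2), (0, 4), and (0, 6) hours. Integrating those densities gives exact probabilities of 23/36, 8/36, and 5/36 that A, B, or C erupts first, which a quick simulation confirms (a sketch; seed arbitrary):

```python
import random

rng = random.Random(538)
trials = 300_000
wins = {"A": 0, "B": 0, "C": 0}
for _ in range(trials):
    # Time until each geyser's next eruption, given a uniformly random phase.
    waits = {"A": rng.uniform(0, 2), "B": rng.uniform(0, 4), "C": rng.uniform(0, 6)}
    wins[min(waits, key=waits.get)] += 1

for g in "ABC":
    print(g, round(wins[g] / trials, 3))
# A should be near 23/36 ≈ 0.639, B near 8/36 ≈ 0.222, C near 5/36 ≈ 0.139
```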
Both very cool problems with solutions on the FiveThirtyEight site. The current Puzzler talked about siblings playing with new phone apps.
You’ve just finished unwrapping your holiday presents. You and your sister got brand-new smartphones, opening them at the same moment. You immediately both start doing important tasks on the Internet, and each task you do takes one to five minutes. (All tasks take exactly one, two, three, four or five minutes, with an equal probability of each). After each task, you have a brief moment of clarity. During these, you remember that you and your sister are supposed to join the rest of the family for dinner and that you promised each other you’d arrive together. You ask if your sister is ready to eat, but if she is still in the middle of a task, she asks for time to finish it. In that case, you now have time to kill, so you start a new task (again, it will take one, two, three, four or five minutes, exactly, with an equal probability of each). If she asks you if it’s time for dinner while you’re still busy, you ask for time to finish up and she starts a new task and so on. From the moment you first open your gifts, how long on average does it take for both of you to be between tasks at the same time so you can finally eat? (You can assume the “moments of clarity” are so brief as to take no measurable time at all.)
SOLVING THE CURRENT PUZZLER:
Before I started, I saw Nick Brown‘s interesting Tweet of his simulation.
If Nick’s correct, it looks like a mode of 5 minutes and an understandable right skew. I approached the solution by first considering the distribution of initial random app choices.
There is a $\displaystyle \frac{5}{25}$ chance the siblings choose the same app and head to dinner after the first round. The expected length of that round is $\frac{1}{5} \cdot \left( 1+2+3+4+5 \right) = 3$ minutes.
That means there is a $\displaystyle \frac{4}{5}$ chance different length apps are chosen with time differences between 1 and 4 minutes. In the case of unequal apps, the average time spent before the shorter app finishes is $\frac{1}{25} \cdot \left( 8*1+6*2+4*3+2*4 \right) = 1.6$ minutes.
It doesn’t matter which sibling chose the shorter app. That sibling chooses next with distribution as follows.
While the distributions are different, conveniently, there is still a time difference between 1 and 4 minutes when the total times aren’t equal. That means the second table shows the distribution for the 2nd and all future potential rounds until the siblings finally align. While this problem has the potential to extend for quite some time, this adds a nice pseudo-fractal self-similarity to the scenario.
As noted, there is a $\displaystyle \frac{4}{20}=\frac{1}{5}$ chance they complete their apps on any round after the first, and this would not add any additional time to the total as the sibling making the choice at this time would have initially chosen the shorter total app time(s). Each round after the first will take an expected time of $\frac{1}{20} \cdot \left( 7*1+5*2+3*3+1*4 \right) = 1.5$ minutes.
The only remaining question is the expected number of rounds of app choices the siblings will take if they don’t align on their first choice. This is where I invoked self-similarity.
In the initial choice there was a $\frac{4}{5}$ chance one sibling would take an average 1.6 minutes using a shorter app than the other. From there, some unknown average N choices remain. There is a $\frac{1}{5}$ chance the choosing sibling ends the experiment with no additional time, and a $\frac{4}{5}$ chance s/he takes an average 1.5 minutes to end up back at the Table 2 distribution, still needing an average N choices to finish the experiment (the pseudo-fractal self-similarity connection). All of this is simulated in the flowchart below.
Recognizing the self-similarity allows me to solve for N.
$\displaystyle N = \frac{1}{5} \cdot 1 + \frac{4}{5} \cdot N \longrightarrow N=5$
Number of Rounds – Starting from the beginning, there is a $\frac{1}{5}$ chance of ending in 1 round and a $\frac{4}{5}$ chance of ending in an average 5 rounds, so the expected number of rounds of app choices before the siblings simultaneously end is
$\frac{1}{5} *1 + \frac{4}{5}*5=4.2$ rounds
Time until Eating – In the first choice, there is a $\frac{1}{5}$ chance of ending in 3 minutes. If that doesn’t happen, there is a subsequent $\frac{1}{5}$ chance of ending with the second choice with no additional time. If neither of those events happen, there will be 1.6 minutes on the first choice plus an average 5 more rounds, each taking an average 1.5 minutes, for a total average $1.6+5*1.5=9.1$ minutes. So the total average time until both siblings finish simultaneously will be
$\frac{1}{5}*3+\frac{4}{5}*9.1 = 7.88$ minutes
CONCLUSION:
My 7.88 minute mean is reasonably to the right of Nick’s 5 minute mode shown above. We’ll see tomorrow if I match the FiveThirtyEight solution.
Anyone else want to give it a go? I’d love to hear other approaches.
Most of my thinking about teaching lately has been about the priceless, timeless value of process in problem solving over the ephemeral worth of answers. While an answer to a problem puts a period at the end of a sentence, the beauty and worth of the sentence was the construction, word choice, and elegance employed in sharing the idea at the heart of the sentence.
Just as there are many ways to craft a sentence–from cumbersome plodding to poetic imagery–there are equally many ways to solve problems in mathematics. Just as great writing reaches, explores, and surprises, great problem solving often starts with the solver not really knowing where the story will lead, taking different paths depending on the experience of the solver, and ending with even more questions.
I experienced that yesterday reading through tweets from one of my favorite middle and upper school problem sources, Five Triangles. The valuable part of what follows is, in my opinion, the multiple paths I tried before settling on something productive. My hope is that students learn the value in exploration, even when initially unproductive.
At the end of this post, I offer a few variations on the problem.
The Problem
Try this for yourself before reading further. I’d LOVE to hear others’ approaches.
First Thoughts and Inherent Variability
My teaching career has been steeped in transformations, and I’ve been playing with origami lately, so my brain immediately translated the setup:
Fold vertex A of equilateral triangle ABC onto side BC. Let segment DE be the resulting crease with endpoints on sides AB and AC with measurements as given above.
So DF is the folding image of AD and EF is the folding image of AE. That is, ADFE is a kite and segment DE is a perpendicular bisector of (undrawn) segment AF. That gave $\Delta ADE \cong \Delta FDE$ .
I also knew that there were lots of possible locations for point F, even though this set-up chose the specific orientation defined by BF=3.
Lovely, but what could I do with all of that?
Trigonometry Solution Eventually Leads to Simpler Insights
Because FD=7, I knew AD=7. Combining this with the given DB=8 gave AB=15, so now I knew the side of the original equilateral triangle and could quickly compute its perimeter or area if needed. Because BF=3, I got FC=12.
At this point, I had thoughts of employing Heron’s Formula to connect the side lengths of a triangle with its area. I let AE=x, making EF=x and $EC=15-x$. With all of the sides of $\Delta EFC$ defined, its perimeter was 27, and I could use Heron’s Formula to define its area:
$Area(\Delta EFC) = \sqrt{13.5(1.5)(13.5-x)(x-1.5)}$
But I didn’t know the exact area, so that was a dead end.
Since $\Delta ABC$ is equilateral, $m \angle C=60^{\circ}$ , I then thought about expressing the area using trigonometry. With trig, the area of a triangle is half the product of any two sides multiplied by the sine of the contained angle. That meant $Area(\Delta EFC) = \frac{1}{2} \cdot 12 \cdot (15-x) \cdot sin(60^{\circ}) = 3(15-x) \sqrt3$.
Now I had two expressions for the same area, so I could solve for x.
$3\sqrt{3}(15-x) = \sqrt{13.5(1.5)(13.5-x)(x-1.5)}$
Squaring both sides revealed a quadratic in x. I could do this algebra, if necessary, but this was clearly a CAS moment.
I had two solutions, but this felt WAY too complicated. Also, Five Triangles problems are generally accessible to middle school students. The trigonometric form of a triangle’s area is not standard middle school fare. There had to be an easier way.
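For readers without a CAS: squaring both sides and collecting terms (my own expansion) reduces the equation to $47.25x^2 - 1113.75x + 6485.0625 = 0$, which the quadratic formula handles directly:

```python
import math

# After squaring 3*sqrt(3)*(15 - x) = sqrt(13.5 * 1.5 * (13.5 - x) * (x - 1.5))
# and collecting terms (my own expansion), the equation reduces to
#   47.25*x^2 - 1113.75*x + 6485.0625 = 0.
a, b, c = 47.25, -1113.75, 6485.0625
disc = math.sqrt(b * b - 4 * a * c)
roots = sorted(((-b - disc) / (2 * a), (-b + disc) / (2 * a)))
print([round(r, 4) for r in roots])  # [10.5, 13.0714]

# Sanity check: both roots satisfy the squared area equation.
for x in roots:
    lhs = 27 * (15 - x) ** 2
    rhs = 13.5 * 1.5 * (13.5 - x) * (x - 1.5)
    assert abs(lhs - rhs) < 1e-6
```

The smaller root is the AE = 10.5 found below; the larger is the second CAS solution discussed in the extensions.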
A Quicker Ending
Thinking trig opened me up to angle measures. If I let $m \angle CEF = \theta$, then $m \angle EFC = 120^{\circ}-\theta$, making $m \angle DFB = \theta$, and I suddenly had my simple breakthrough! Because their angles were congruent, I knew $\Delta CEF \sim \Delta BFD$.
Because the triangles were similar, I could employ similarity ratios.
$\frac{7}{8}=\frac{x}{12}$
$x=10.5$
And that is one of the CAS solutions by a MUCH SIMPLER approach.
Extensions and Variations
Following are five variations on the original Five Triangles problem. What other possible variations can you find?
1) Why did the CAS give two solutions? Because $\Delta BDF$ had all three sides explicitly given, by SSS there should be only one solution. So is the 13.0714 solution real or extraneous? Can you prove your claim? If that solution is extraneous, identify the moment when the solution became “real”.
2) Eliminating the initial condition that BF=3 gives another possibility. Using only the remaining information, how long is $\overline{BF}$ ?
$\Delta BDF$ now has SSA information, making it an ambiguous case situation. Let BF=x and invoke the Law of Cosines.
$7^2=x^2+8^2-2 \cdot x \cdot 8 cos(60^{\circ})$
$49=x^2-8x+64$
$0=(x-3)(x-5)$
Giving the original BF=3 solution and a second possible answer: BF=5.
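A quick numerical check confirms that both candidate lengths produce DF = 7 in the SSA configuration:

```python
import math

# Triangle BDF: DB = 8, DF = 7, angle B = 60 degrees, BF = x.
# Law of Cosines: 7^2 = x^2 + 8^2 - 2*8*x*cos(60 deg), i.e. x^2 - 8x + 15 = 0.
for x in (3, 5):
    df = math.sqrt(x**2 + 8**2 - 2 * 8 * x * math.cos(math.radians(60)))
    print(x, round(df, 10))  # both candidates should give DF = 7
```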
3) You could also stay with the original problem asking for AE.
From above, the solution for BF=3 is AE=10.5. But if BF=5 from the ambiguous case, then FC=10 and the similarity ratio above becomes
$\frac{7}{8}=\frac{x}{10}$
$x=\frac{35}{4}=8.75$
4) Under what conditions is $\overline{DE} \parallel \overline{BC}$ ?
5) Consider all possible locations of folding point A onto $\overline{BC}$. What are all possible lengths of $\overline{DE}$?
How One Data Point Destroyed a Study
Statistics are powerful tools. Well implemented, they tease out underlying patterns from the noise of raw data and improve our understanding. But those who use statistics must take care to avoid misstatements. Unfortunately, statistics can also be used to deliberately distort relationships, declaring patterns where none exist. In my AP Statistics classes, I hope my students learn to extract meaning from well-designed studies, and to spot instances of Benjamin Disraeli's "three kinds of lies: lies, damned lies, and statistics."
This post explores part of a study published August 12, 2015, exposing what I believe to be examples of four critical ways statistics are misunderstood and misused:
• Not recognizing the distortion power of outliers in means, standard deviations, and, in the case of the study below, regressions,
• Distorting graphs to create the impression of patterns different from what actually exists,
• Cherry-picking data to show only favorable results, and
• Misunderstanding the p-value in inferential studies.
THE STUDY:
I was searching online for examples of research I could use with my AP Statistics classes when I found on the page of a math teacher organization a link to an article entitled, “Cardiorespiratory fitness linked to thinner gray matter and better math skills in kids.” Following the URL trail, I found a description of the referenced article in an August, 2015 summary article by Science Daily and the actual research posted on August 12, 2015 by the journal, PLOS ONE.
As a middle and high school teacher, I’ve read multiple studies connecting physical fitness to brain health. I was sure I had hit paydirt with an article offering multiple, valuable lessons for my students! I read the claims of the Science Daily research summary correlating the physical fitness of 9- and 10-year-old children to performance on a test of arithmetic. It was careful not to declare cause-and-effect, but did say
The team found differences in math skills and cortical brain structure between the higher-fit and lower-fit children. In particular, thinner gray matter corresponded to better math performance in the higher-fit kids. No significant fitness-associated differences in reading or spelling aptitude were detected. (source)
The researchers described plausible connections for the aerobic fitness of children and the thickness of cortical gray matter for each participating child. The study went astray when they attempted to connect their findings to the academic performance of the participants.
Independent t-tests were employed to compare WRAT-3 scores in higher fit and lower fit children. Pearson correlations were also conducted to determine associations between cortical thickness and academic achievement. The alpha level for all tests was set at p < .05. (source)
All of the remaining images, quotes, and data in this post are pulled directly from the primary article on PLOS ONE. The URLs are provided above with bibliographic references are at the end.
To address questions raised by the study, I had to access the original data and recreate the researchers’ analyses. Thankfully, PLOS ONE is an open-access journal, and I was able to download the research data. In case you want to review the data yourself or use it with your classes, here is the original SPSS file which I converted into Excel and TI-Nspire CAS formats.
My suspicions were piqued when I saw the following two graphs–the only scatterplots offered in their research publication.
Scatterplot 1: Attempt to connect Anterior Frontal Gray Matter thickness with WRAT-3 Arithmetic performance
The right side of the top scatterplot looked like an uncorrelated cloud of data with one data point on the far left seeming to pull the left side of the linear regression upwards, creating a more negative slope. Because the study reported only two statistically significant correlations between the WRAT tests and cortical thickness in two areas of the brain, I was now concerned that the single extreme data point may have distorted results.
My initial scatterplot (below) confirmed the published graph, but fit to the entire window, the data now looked even less correlated.
In this scale, the farthest left data point (WRAT Arithmetic score = 66, Anterior Frontal thickness = 3.9) looked much more like an outlier. I confirmed that the point fell more than 1.5 IQRs below the lower quartile, as indicated visually in a boxplot of the WRAT-Arithmetic scores.
Also note from my rescaled scatterplot that the Anterior Frontal measure (y-coordinate) was higher than any of the next five ordered pairs to its right. Its horizontal outlier location, coupled with its notably higher vertical component, suggested that the single point could have significant influence on any regression on the data. There was sufficient evidence for me to investigate the study results excluding the (66, 3.9) data point.
The original linear regression on the 48 (WRAT Arithmetic, AF thickness) data was $AF=-0.007817(WRAT_A)+4.350$. Excluding (66, 3.9), the new scatterplot above shows the revised linear regression on the remaining 47 points: $AF=-0.007460(WRAT_A)+4.308$. This and the original equation are close, but the revised slope is 4.6% smaller in magnitude relative to the published result. With the two published results reported significant at p=0.04, the influence of the outlier (66, 3.9) has a reasonable possibility of changing the study results.
Scatterplot 2: Attempt to connect Superior Frontal Gray Matter thickness with WRAT-3 Arithmetic performance
The tightly compressed scale of the second published scatterplot made me deeply suspicious the (WRAT Arithmetic, Superior Frontal thickness) data was being vertically compressed to create the illusion of a linear relationship where one possibly did not exist.
Rescaling the graphing window (below) made the data appear notably less linear than the publication implied. Also, the data point corresponding to the WRAT-Arithmetic score of 66 appeared to suffer from the same outlier influences as the first data set. It was still an outlier, but now its vertical component was higher than the next eight data points to its right, with some of them notably lower. Again, there was sufficient evidence to investigate results excluding the outlier data point.
The linear regression on the original 48 (WRAT Arithmetic, SF thickness) data points was $SF=-0.002767(WRAT_A)+4.113$ (above). Excluding the outlier, the new scatterplot (below) had revised linear regression, $SF=-0.002391(WRAT_A)+4.069$. This time, the revised slope was 13.6% smaller in magnitude relative to the original slope. With the published significance also at p=0.04, omitting the outlier was almost certain to change the published results.
THE OUTLIER BROKE THE STUDY
The findings above strongly suggest the published study results are not as reliable as reported. It is time to rerun the significance tests.
For the first data set, (WRAT Arithmetic, AF thickness), run an independent t-test on the regression slope with and without the outlier.
• INCLUDING OUTLIER: For all 48 samples, the researchers reported a slope of -0.007817, $r=-0.292$, and $p=0.04$. This was reported as a significant result.
• EXCLUDING OUTLIER: For the remaining 47 samples, the slope is -0.007460, r=-0.252, and p=0.087. The r confirms the visual impression that the data was less linear and, most importantly, the correlation is no longer significant at $\alpha <0.05$.
For the second data set, (WRAT Arithmetic, SF thickness):
• INCLUDING OUTLIER: For all 48 samples, the researchers reported a slope of -0.002767, r=-0.291, and p=0.04. This was reported as a significant result.
• EXCLUDING OUTLIER: For the remaining 47 samples, the slope is -0.002391, r=-0.229, and p=0.121. This revision is even less linear and, most importantly, the correlation is no longer significant for any standard significance level.
In brief, the researchers’ arguable decision to include the single, clear outlier data point was the source of any significant results at all. Whatever correlation exists between gray matter thickness and WRAT-Arithmetic as measured by this study is tenuous, at best, and almost certainly not significant.
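These p-values can be cross-checked from the reported r and n alone: the slope test statistic is $t = r\sqrt{n-2}/\sqrt{1-r^2}$ with n-2 degrees of freedom. A stdlib sketch of the four t statistics (the p-values then follow from a t table or any stats package):

```python
import math

def slope_t(r, n):
    """t statistic for testing a regression slope, computed from the
    correlation coefficient r and sample size n."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Anterior Frontal vs. WRAT-Arithmetic
print(round(slope_t(-0.292, 48), 2))  # about -2.07 -> p ≈ 0.04 with 46 df
print(round(slope_t(-0.252, 47), 2))  # about -1.75 -> p ≈ 0.087 with 45 df
# Superior Frontal vs. WRAT-Arithmetic
print(round(slope_t(-0.291, 48), 2))  # about -2.06 -> p ≈ 0.04 with 46 df
print(round(slope_t(-0.229, 47), 2))  # about -1.58 -> p ≈ 0.12 with 45 df
```

Both excluded-outlier t statistics fall well inside the two-tailed 5% critical values for their degrees of freedom, matching the revised p-values above.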
THE DANGERS OF CHERRY-PICKING RESULTS:
So, let’s set aside the entire questionable decision to keep an outlier in the data set to achieve significant findings. There is still a subtle, potential problem with this study’s result that actually impacts many published studies.
The researchers understandably were seeking connections between the thickness of a brain’s gray matter and the academic performance of that brain as measured by various WRAT instruments. They computed independent t-tests of linear regression slopes between thickness measures at nine different locations in the brain against three WRAT test measures for a total of 27 separate t-tests. The next table shows the correlation coefficient and p-value from each test.
This approach is commonly used, with researchers reporting only the tests found to be significant. But in doing so, the researchers may have overlooked a fundamental property of the confidence intervals that underlie p-values. The typical critical value of p=0.05 corresponds to a 95% confidence interval, and one interpretation of a 95% confidence interval is that, under the conditions of the assumed null hypothesis, results that fall in the most extreme 5% of outcomes will NOT be attributed to the null hypothesis, even though they actually arise from it.
In other words, even under the typical conditions for which the null hypothesis is true, 5% of correct results would be deemed different enough to be statistically significant: a Type I Error. Within this study, this defines a binomial probability situation with 27 trials, in which the probability of any one trial producing a significant result, even though the null hypothesis is correct, is p=0.05.
The binomial probability of finding exactly 2 significant results at p=0.05 over 27 trials is 0.243, and the probability of producing 2 or more significant results when the null hypothesis is true is 39.4%.
That means there is a 39.4% probability in any study testing 27 trials at a p<0.05 critical value that at least 2 of those trials would report a result that would INCORRECTLY be interpreted as contradicting the null hypothesis. And if more conditions than 27 are tested, the probability of a Type I Error is even higher.
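The binomial arithmetic is easy to verify exactly with the standard pmf (a stdlib sketch):

```python
from math import comb

n, p = 27, 0.05  # 27 t-tests, each with a 5% Type I error rate under H0

def binom_pmf(k):
    """Probability of exactly k significant results in n independent tests."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p_exactly_2 = binom_pmf(2)
p_at_least_2 = 1 - binom_pmf(0) - binom_pmf(1)
print(round(p_exactly_2, 3), round(p_at_least_2, 3))  # 0.243 0.394
```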
Whenever you have a large number of inference trials, there is an increasingly large probability that at least some of the “significant” trials are actually just random, undetected occurrences of the null hypothesis.
It just happens.
THE ELUSIVE MEANING OF A p-VALUE:
For more on the difficulty of understanding p-values, check out this nice recent article on FiveThirtyEight Science–Not Even Scientists Can Easily Explain P-Values.
CONCLUSION:
Personally, I’m a little disappointed that this study didn’t find significant results. There are many recent studies showing the connection between physical activity and brain health, but this study didn’t achieve its goal of finding a biological source to explain the correlation.
It is the responsibility of researchers to know their studies and their resulting data sets. Not finding significant results is not a problem. But I do expect research to disclaim when its significant results hang entirely on a choice to retain an outlier in its data set.
REFERENCES:
Chaddock-Heyman L, Erickson KI, Kienzler C, King M, Pontifex MB, Raine LB, et al. (2015) The Role of Aerobic Fitness in Cortical Thickness and Mathematics Achievement in Preadolescent Children. PLoS ONE 10(8): e0134115. doi:10.1371/journal.pone.0134115
University of Illinois at Urbana-Champaign. “Cardiorespiratory fitness linked to thinner gray matter and better math skills in kids.” ScienceDaily. http://www.sciencedaily.com/releases/2015/08/150812151229.htm (accessed December 8, 2015).
Best Algebra 2 Lab Ever
This post shares what I think is one of the best, inclusive, data-oriented labs for a second year algebra class. This single experiment produces linear, quadratic, and exponential (and logarithmic) data from a lab my Algebra 2 students completed this past summer. In that class, I assigned frequent labs where students gathered real data, determined models to fit that data, and analyzed goodness of the models’ fit to the data. I believe in the importance of doing so much more than just writing an equation and moving on.
For kicks, I’ll derive an approximation for the coefficient of gravity at the end.
THE LAB:
On the way to school one morning last summer, I grabbed one of my daughters’ “almost fully inflated” kickballs and attached a TI CBR2 to my laptop and gathered (distance, time) data from bouncing the ball under the Motion Sensor. NOTE: TI’s CBR2 can connect directly to their Nspire and TI84 families of graphing calculators. I typically use computer-based Nspire CAS software, so I connected the CBR via my laptop’s USB port. It’s crazy easy to use.
One student held the CBR2 about 1.5-2 meters above the ground while another held the ball steady about 20 cm below the CBR2 sensor. When the second student released the ball, a third clicked a button on my laptop to gather the data: time every 0.05 seconds and height from the ground. The graphed data is shown below. In case you don’t have access to a CBR or other data gathering devices, I’ve uploaded my students’ data in this Excel file.
Remember, this data was collected under far-from-ideal conditions. I picked up a kickball my kids left outside on my way to class. The sensor was handheld and likely wobbled some, and the ball was dropped on the well-worn carpet of our classroom floor. It is also likely the ball did not remain perfectly under the sensor the entire time. Even so, my students created a very pretty graph on their first try.
For further context, we did this lab in the middle of our quadratics unit that was preceded by a unit on linear functions and another on exponential and logarithmic functions. So what can we learn from the bouncing ball data?
LINEAR 1:
While it is very unlikely that any of the recorded data points were precisely at maximums, they are close enough to create a nice linear pattern.
As the height of a ball above the ground helps determine the height of its next bounce (height before –> energy on impact –> height after), the eight ordered pairs (max height #n, max height #(n+1) ) from my students' data are shown below.
This looks very linear. Fitting a linear regression and analyzing the residuals gives the following.
The data seems to be close to the line; the residuals are relatively small, about evenly distributed above and below the line, and show no apparent pattern in their distribution. This confirms that the regression equation, $y=0.673x+0.000233$, is a good fit for the (x, y) = (height before bounce, height after bounce) data.
NOTE: You could reasonably easily gather this data sans any technology. Have teams of students release a ball from different measured heights while others carefully identify the rebound heights.
The coefficients also have meaning. The 0.673 suggests that after each bounce, the ball rebounded to 67.3%, or 2/3, of its previous height–not bad for a ball plucked from a driveway that morning. Also, the y-intercept, 0.000233, is essentially zero, suggesting that a ball released 0 meters from the ground would rebound to basically 0 meters above the ground. That this isn’t exactly zero is a small measure of error in the experiment.
EXPONENTIAL:
Using the same idea, consider data of the form (x,y) = (bounce number, bounce height). The graph of the nine points from my students' data is:
This could be power or exponential data–something you should confirm for yourself–but an exponential regression and its residuals show
While something of a pattern seems to exist, the other residual criteria are met, making the exponential regression a reasonably good model: $y = 0.972 \cdot (0.676)^x$. That means bounce number 0, the initial release height from which the downward movement on the far left of the initial scatterplot can be seen, is 0.972 meters, and the constant multiplier is about 0.676. This second number represents the percentage of height maintained from each previous bounce, and is therefore the percentage rebound. Also note that this is essentially the same value as the slope from the previous linear example, confirming that the ball we used basically maintained slightly more than 2/3 of its height from one bounce to the next.
And you can get logarithms from these data if you use the equation to determine, for example, which bounces exceed 0.2 meters.
So, bounces 1-4 satisfy the requirement for exceeding 0.20 meters, as confirmed by the data.
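The logarithm step can be made explicit. Using the fitted values $a=0.972$ and $b=0.676$ from the regression above, solving $a \cdot b^x = 0.2$ gives $x = \log(0.2/a)/\log(b) \approx 4.04$, so bounces 1-4 clear 0.2 meters. A quick check:

```python
from math import log

a, b = 0.972, 0.676  # fitted release height (m) and rebound ratio

# Solve a * b**x = 0.2 for x with a logarithm:
cutoff = log(0.2 / a) / log(b)
print(round(cutoff, 2))  # about 4.04

# Heights of bounces 1 through 5 under the model:
heights = [a * b**n for n in range(1, 6)]
print([round(h, 3) for h in heights])
```

Bounce 4 lands just above 0.2 m and bounce 5 well below it, matching the observed data.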
A second way to invoke logarithms is to reverse the data. Graphing x=height and y=bounce number will also produce the desired effect.
QUADRATIC:
Each individual bounce looks like an inverted parabola. If you remember a little physics, the moment after the ball leaves the ground after each bounce, it is essentially in free fall, a situation defined by quadratic motion if you ignore air resistance–something we can safely assume given the very short duration of each bounce.
I had eight complete bounces I could use, but chose the first to have as many data points as possible to model. As it was impossible to know whether the lowest point on each end of any data set came from the ball moving up or down, I omitted the first and last point in each set. Using (x,y) = (time, height of first bounce) data, my students got:
What a pretty parabola. Fitting a quadratic regression (or manually fitting one, if that’s more appropriate for your classes), I get:
Again, there’s maybe a slight pattern, but all but two points are will withing 0.1 of 1% of the model and are 1/2 above and 1/2 below. The model, $y=-4.84x^2+4.60x-4.24$, could be interpreted in terms of the physics formula for an object in free fall, but I’ll postpone that for a moment.
LINEAR 2:
If your second year algebra class has explored common differences, your students could explore second common differences to confirm the quadratic nature of the data. Other than the first two differences (far right column below), the second common difference of all data points is roughly 0.024. This raises suspicions that my student’s hand holding the CBR2 may have wiggled during the data collection.
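The constant second difference is a property of any quadratic sampled at even intervals: for $h(t)=at^2+bt+c$ with spacing $\Delta t$, every second difference equals $2a(\Delta t)^2$. A short sketch using the fitted quadratic and the CBR2's 0.05-second sampling interval from above (the 0.5 s starting time is just an arbitrary window inside the bounce):

```python
a, b, c = -4.84, 4.60, -4.24  # fitted quadratic for the first bounce
dt = 0.05                     # CBR2 sampling interval (s)

ts = [0.5 + k * dt for k in range(6)]                    # six sample times
h = [a * t**2 + b * t + c for t in ts]                   # sampled heights
first = [y2 - y1 for y1, y2 in zip(h, h[1:])]            # first differences
second = [d2 - d1 for d1, d2 in zip(first, first[1:])]   # second differences

# Every second difference equals 2*a*dt**2 = -0.0242, matching the roughly
# 0.024 magnitude my students saw in their (noisier) data.
print([round(d, 4) for d in second])
```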
Since the second common differences are roughly constant, the original data must have been quadratic, and the first common differences linear. As a small variation for each consecutive pair of (time, height) points, I had my students graph (x,y) = (x midpoint, slope between two points):
If you followed the common difference discussion, the linearity of this graph is not surprising. Despite those conversations, most of my students seemed completely surprised by this pattern emerging from the quadratic data. I guess they didn't really "get" what common differences–or the closely related slope–meant until this point.
Other than the first three points, the model seems very strong. The coefficients tell an even more interesting story.
GRAVITY:
The equation from the last linear regression is $y=4.55-9.61x$. Since the data came from slope, the y-intercept, 4.55, is measured in m/sec. That makes it the velocity of the ball at the moment (t=0) the ball left the ground. Nice.
The slope of this line is -9.61. As this is a slope, its units are the y-units over the x-units, or (m/sec)/(sec). That is, meters per squared second. And those are the units for gravity! That means my students measured, hidden within their data, an approximation for the coefficient of gravity by bouncing an outdoor ball on a well-worn carpet with a mildly wobbly hand holding a CBR2. The gravitational constant at sea level on Earth is about -9.807 m/sec^2. That means my students' measurement error was about $\frac{9.807-9.610}{9.807} \approx 2.0\%$. And 2% is not a bad measurement for a very unscientific setting!
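A quick sanity check on that claim, using only the numbers reported above (9.807 m/s² for standard gravity, the regression slope of magnitude 9.61, and the -4.84 leading coefficient of the quadratic fit, whose derivative gives a second slope estimate):

```python
g = 9.807     # standard gravity at sea level (m/s^2), magnitude
slope = 9.61  # magnitude of the slope from the velocity regression

rel_error = (g - slope) / g
print(round(100 * rel_error, 1))  # percent error of the regression slope

# The derivative of the fitted quadratic h(t) = -4.84 t^2 + 4.60 t - 4.24
# is v(t) = 2*(-4.84)*t + 4.60, so the quadratic's own estimate of |g| is 9.68:
rel_error_deriv = (g - 2 * 4.84) / g
print(round(100 * rel_error_deriv, 1))  # an even closer estimate
```

Both routes to g agree to within a couple of percent, which is the point of the exercise.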
CONCLUSION:
Whenever I teach second year algebra classes, I find it extremely valuable to have students gather real data whenever possible and with every new function, determine models to fit their data, and analyze the goodness of the model’s fit to the data. In addition to these activities just being good mathematics explorations, I believe they do an excellent job exposing students to a few topics often underrepresented in many secondary math classes: numerical representations and methods, experimentation, and introduction to statistics. Hopefully some of the ideas shared here will inspire you to help your students experience more.
Recentering Normal Curves, revisited
I wrote here about using a CAS to determine the new mean of a recentered normal curve from an AP Statistics exam question from the last decade. My initial post shared my ideas on using CAS technology to determine the new center. After hearing some of my students' attempts to solve the problem, I believe they took a simpler, more intuitive approach than I had proposed.
REVISITING:
In the first part of the problem, solvers found the mean and standard deviation of the wait time of one train: $\mu = 30$ and $\sigma = \sqrt{500}$, respectively. Then, students computed the probability of waiting to be 0.910144.
The final part of the question asked how long that train would have to be delayed to make that wait time 0.01. Here’s where my solution diverged from my students’ approach. Being comfortable with transformations, I thought of the solution as the original time less some unknown delay which was easily solved on our CAS.
STUDENT VARIATION:
Instead of thinking of the delay–the explicit goal of the AP question–my students sought the new starting time. Now that I’ve thought more about it, knowing the new time when the train will leave does seem like a more natural question and avoids the more awkward expression I used for the center.
The setup is the same, but now the new unknown variable, the center of the translated normal curve, is newtime. Using their CAS solve command, they found
It was a little different to think about negative time, but they found the difference between the new center (-52.0187 minutes) and the original (30 minutes) to be 82.0187 minutes, the same solution I discovered using transformations.
CONCLUSION:
This is nothing revolutionary, but my students’ thought processes were cleaner than mine. And fresh thinking is always worth celebrating. |
Suggested papers for
Tue, Sep 19, 2017 at 11 am
Thu, Sep 21, 2017 at 2 pm
Fri, Sep 22, 2017 at 11 am
16 Sep 2017
### Accreting black hole binaries in globular clusters
We explore the formation of mass-transferring binary systems containing black holes within globular clusters. We show that it is possible to form mass-transferring black hole binaries with main sequence, giant, and white dwarf companions with a variety of orbital parameters in globular clusters spanning a large range in present-day properties. All mass-transferring black hole binaries found in our models at late times are dynamically created. The black holes in these systems experienced a media...
15 Sep 2017
24 Jul 2017
### VLBA imaging of the 3mm SiO maser emission in the disk-wind from the massive protostellar system Orion Source I
We present the first images of the 28SiO v=1, J=2-1 maser emission around the closest known massive young stellar object Orion Source I observed at 86 GHz (3mm) with the VLBA. These images have high spatial (~0.3 mas) and spectral (~0.054 km/s) resolutions. We find that the 3mm masers lie in an X-shaped locus consisting of four arms, with blue-shifted emission in the south and east arms and red-shifted emission in the north and west arms. Comparisons with previous images of the 28SiO v=1,2, J=1...
2.4E: Complex Numbers (Exercises)
For the following exercises, use the quadratic equation to solve.
26. $$x^{2}-5 x+9=0$$
27. $$2 x^{2}+3 x+7=0$$
For the following exercises, name the horizontal component and the vertical component.
28. $$4-3 i$$
29. $$-2-i$$
For the following exercises, perform the operations indicated.
30. $$(9-i)-(4-7 i)$$
31. $$(2+3 i)-(-5-8 i)$$
32. $$2 \sqrt{-75}+3 \sqrt{25}$$
33. $$\sqrt{-16}+4 \sqrt{-9}$$
34. $$-6 i(i-5)$$
35. $$(3-5 i)^{2}$$
36. $$\sqrt{-4} \cdot \sqrt{-12}$$
37. $$\sqrt{-2}(\sqrt{-8}-\sqrt{5})$$
38. $$\frac{2}{5-3 i}$$
39. $$\frac{3+7 i}{i}$$
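As a quick way to check answers (not a substitute for doing the complex arithmetic by hand), Python's built-in complex type and the `cmath` module can evaluate several of these directly. Here are exercises 30, 34, and 36, with $i$ written as `1j`:

```python
import cmath

ex30 = (9 - 1j) - (4 - 7j)               # exercise 30: (9 - i) - (4 - 7i)
ex34 = -6j * (1j - 5)                    # exercise 34: -6i(i - 5)
ex36 = cmath.sqrt(-4) * cmath.sqrt(-12)  # exercise 36: (2i)(2*sqrt(3)*i)

print(ex30)  # (5+6j)
print(ex34)  # (6+30j)
print(ex36)  # -4*sqrt(3), about (-6.93+0j)
```

Note that `cmath.sqrt` is needed for square roots of negative reals; `math.sqrt(-4)` raises an error.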
This page titled 2.4E: Complex Numbers (Exercises) is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by OpenStax via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request.
# Factorization
#### mathhomework
How do you factorize the following?
1. x^4+x^2y^2+y^4
2.(a+2b+c)^3-(a+b)^3-(b+c)^3
3. (x+1)(x+2)+1/4
#### sa-ri-ga-ma
How do you factorize the following?
1. x^4+x^2y^2+y^4
2.(a+2b+c)^3-(a+b)^3-(b+c)^3
3. (x+1)(x+2)+1/4
1) $$\displaystyle x^4 + x^2y^2 + y^4 = x^4 + 2x^2y^2 + y^4 - x^2y^2$$
= $$\displaystyle (x^2 + y^2)^2 - (xy)^2$$
Now simplify.
For (2), use $$\displaystyle x^3 + y^3 = (x+y)(x^2 -xy+y^2)$$ Simplify the second and third term. Then proceed.
For (3), simplify the brackets and proceed.
#### Soroban
MHF Hall of Honor
Hello, mathhomework!
1. Factor: .$$\displaystyle x^4+x^2y^2+y^4$$
Add and subtract $$\displaystyle x^2y^2\!:$$
. . $$\displaystyle x^4 + x^2y^2 {\color{red}\:+\: x^2y^2} + y^4 {\color{red}\:-\: x^2y^2}$$
. . . . . $$\displaystyle =\;(x^4 + 2x^2y^2 + y^4) - x^2y^2$$
. . . . . $$\displaystyle =\; (x^2+ y^2)^2 - (xy)^2 \quad \leftarrow\:\text{ difference of squares}$$
. . . . . $$\displaystyle =\;(x^2+y^2 - xy)(x^2+y^2+xy)$$
3. Factor: .$$\displaystyle (x+1)(x+2)+\tfrac{1}{4}$$
We have: .$$\displaystyle x^2 + 3x + 2 + \tfrac{1}{4} \;=\;x^2 + 3x + \tfrac{9}{4} \;=\;\left(x + \tfrac{3}{2}\right)^2$$
Darn! . . . too slow ... again!
.
Last edited:
mathhomework
#### Soroban
MHF Hall of Honor
Hello, mathhomework!
I think I've got #2 . . .
$$\displaystyle 2.\;\;(a+2b+c)^3-(a+b)^3-(b+c)^3$$
$$\displaystyle \text{We have: }\;(a+2b+c)^3 - \underbrace{\bigg[(a+b)^3 + (b+c)^3\bigg]}_{\text{sum of cubes}}$$
. . . . . $$\displaystyle =\;(a+2b+c)^3 - \bigg[(a+b)+(b+c)\bigg]\bigg[(a+b)^2 - (a+b)(b+c) + (b+c)^2\bigg]$$
. . . . . $$\displaystyle =\; (a+2b+c)^3 - (a +2b+c)(a^2 + b^2 + c^2 + ab + bc - ac)$$
$$\displaystyle \text{Factor: }\;(a+2b+c)\,\bigg[(a+2b+c)^2 - (a^2+b^2+c^2 + ab + bc - ac)\bigg]$$
. . . . . $$\displaystyle =\;(a+2b+c)\left(3b^2 + 3ab + 3bc + 3ac\right)$$
. . . . . $$\displaystyle =\; 3(a+2b+c)\left(b^2 + ab + bc + ac\right)$$
. . . . . $$\displaystyle =\;3(a+2b+c)\bigg[b(b+a) + c(b + a)\bigg]$$
. . . . . $$\displaystyle =\;3(a+2b+c)(b+a)(b+c)$$
$$\displaystyle \text{Answer: }\;3(a+b)(b+c)(a + 2b + c)$$
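For anyone who wants to double-check that final answer without redoing the algebra, a numerical spot check at random values of $a$, $b$, $c$ catches most slips (it is evidence, not a proof):

```python
from random import uniform

def lhs(a, b, c):
    return (a + 2*b + c)**3 - (a + b)**3 - (b + c)**3

def rhs(a, b, c):
    return 3 * (a + b) * (b + c) * (a + 2*b + c)

# Check the identity at several random points.
for _ in range(5):
    a, b, c = (uniform(-10, 10) for _ in range(3))
    assert abs(lhs(a, b, c) - rhs(a, b, c)) < 1e-6
print("identity holds at all sampled points")
```

The underlying reason it works: with $x=a+b$ and $y=b+c$, the expression is $(x+y)^3-x^3-y^3=3xy(x+y)$.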
mathhomework |
# Semidirect product of groups by magma
Can anybody guide me on how to compute the semidirect product of $\mathrm{PSL}(3,4)$ and $\mathbb Z_2$ in Magma? Indeed, I don't know how to construct the map $\phi: H \to \mathrm{Aut}(N)$, when $H=\mathbb Z_2$ and $N=\mathrm{PSL}(3,4)$, for the operation SemidirectProduct$(N, H, \phi)$.
Here's one way to do it.
K:=PSL(3,4);
H:=CyclicGroup(2);
A:=AutomorphismGroup(K);
/* A.1 is an automorphism of order 2 */
phi:= hom< H -> A | <H.1,A.1> >;
G:=SemidirectProduct(K,H,phi);
Notice that the codomain of phi is A, which has type GrpAuto. I'm not sure if this is a requirement of the map or not.
• I can't see your error, but try it on their online calculator: magma.maths.usyd.edu.au/calc Add another line that prints G. – JMag Jan 19 '17 at 21:57
• Thanks for the answer. When I use it, the following error occurs: Identifier 'SemidirectProduct' has not been declared or assigned. – user407524 Jan 19 '17 at 21:59
• That means you are using a version of Magma that does not have the function SemidirectProduct. In your question, you made it sound like you had an issue getting the function SemidirectProduct working correctly. What exactly is your original question about then? – JMag Jan 19 '17 at 22:12
Space, shape and measurement: Solve problems by constructing and interpreting trigonometric models
# Unit 2: Solve trigonometric equations
Dylan Busa
### Unit outcomes
By the end of this unit you will be able to:
• Solve equations involving double and compound angles.
## What you should know
Before you start this unit, make sure you can:
• State and apply the reduction formulae for $\scriptsize {{180}^\circ}\pm \theta$, $\scriptsize {{90}^\circ}\pm \theta$ and $\scriptsize {{360}^\circ}\pm \theta$. Refer to level 3 subject outcome 3.3 unit 2 if you need help with this.
• State and apply the basic trigonometric identities of $\scriptsize \tan \theta =\displaystyle \frac{{\sin \theta }}{{\cos \theta }}$ and $\scriptsize {{\sin }^{2}}\theta +{{\cos }^{2}}\theta =1$. Refer to level 3 subject outcome 3.3 unit 3 if you need help with this.
• State and apply the compound and double angle identities. Refer to unit 1 of this subject outcome if you need help with this.
• Find the general solutions of trigonometric equations. Refer to level 3 subject outcome 3.3 unit 4 if you need help with this.
## Introduction
In level 3 subject outcome 3.3 unit 4, we learnt how to find the general solution of an equation that involved a trigonometric ratio. If you need to, you should review this unit before continuing.
Remember that the trigonometric functions are repetitive. Sine and cosine repeat every $\scriptsize {{360}^\circ}$ (once every full revolution) and tangent repeats every $\scriptsize {{180}^\circ}$ (twice every revolution). We say that the trigonometric functions are periodic. Sine and cosine functions have periods of $\scriptsize {{360}^\circ}$ and the tangent function has a period of $\scriptsize {{180}^\circ}$.
## The general solution revision
Because the trigonometric functions are periodic (they repeat themselves), when we solve an equation such as $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$, there is not only one solution. We may know that $\scriptsize \sin {{30}^\circ}=\displaystyle \frac{1}{2}$ but, for example, so does $\scriptsize \sin ({{30}^\circ}+{{360}^\circ})=\sin {{390}^\circ}$, $\scriptsize \sin ({{30}^\circ}+2\times {{360}^\circ})=\sin {{750}^\circ}$ and $\scriptsize \sin ({{30}^\circ}-{{360}^\circ})=\sin -{{330}^\circ}$.
We also know that $\scriptsize \sin {{150}^\circ}=\displaystyle \frac{1}{2}$ but so do $\scriptsize \sin ({{150}^\circ}+{{360}^\circ})=\sin {{510}^\circ}$, $\scriptsize \sin ({{150}^\circ}+2\times {{360}^\circ})=\sin {{870}^\circ}$ and $\scriptsize \sin ({{150}^\circ}-{{360}^\circ})=\sin -{{210}^\circ}$, for example.
The general solution to the equation $\scriptsize \sin \theta =\displaystyle \frac{1}{2}$ is $\scriptsize \theta ={{30}^\circ}+k\cdot {{360}^\circ}\text{ or }\theta ={{150}^\circ}+k\cdot {{360}^\circ},k\in \mathbb{Z}$. In other words, $\scriptsize \theta$ is equal to $\scriptsize {{30}^\circ}$ plus or minus any integer multiple of $\scriptsize {{360}^\circ}$ or $\scriptsize \theta$ is equal to $\scriptsize {{150}^\circ}$ plus or minus any integer multiple of $\scriptsize {{360}^\circ}$.
When we learnt how to find the general solution of trigonometric equations, we called the answer we get for $\scriptsize \theta$ from the calculator (e.g. $\scriptsize \theta ={{30}^\circ}$) the reference angle. It is the basis upon which we build the general solution.
Work through the next two examples and the exercise that follows to remind yourself how to find the general solution of simple trigonometric equations.
### Example 2.1
Determine the general solution for $\scriptsize \sin \theta =0.6$.
Solution
Step 1: Use a calculator to determine the reference angle
\scriptsize \begin{align*}\sin \theta &=0.6\\\therefore \theta &={{36.9}^\circ}\end{align*}
Note: Unless told otherwise, we usually round the reference angle to one decimal place.
Step 2: Use the CAST diagram to determine any other possible solutions
$\scriptsize \sin \theta =0.6$. In other words, sine is positive. Sine is positive in the first and second quadrants. We already have the first quadrant solution (the reference angle of $\scriptsize \theta ={{36.9}^\circ}$). We need to find the second quadrant solution. We know that $\scriptsize \sin ({{180}^\circ}-\theta )=\sin \theta$. Therefore, the second quadrant solution is $\scriptsize {{180}^\circ}-{{36.9}^\circ}={{143.1}^\circ}$.
Step 3: Generate the general solution
$\scriptsize \theta ={{36.9}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{143.1}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
Step 4: Check your general solution
It is always a good idea to check that your final solutions satisfy the original equation. Choose a random value for $\scriptsize k$.
$\scriptsize k=-2$:
\scriptsize \begin{align*}\theta &={{36.9}^\circ}-2\times {{360}^\circ}\text{ or }\theta ={{143.1}^\circ}-2\times {{360}^\circ}\\\therefore \theta &=-{{683.1}^\circ}\text{ or }\theta =-{{576.9}^\circ}\end{align*}
$\scriptsize \sin (-{{683.1}^\circ})=0.6$
$\scriptsize \sin (-{{576.9}^\circ})=0.6$
Our general solution is correct.
### Example 2.2
Determine the general solution for $\scriptsize 7\cos 2\theta +4=0$.
Solution
\scriptsize \displaystyle \begin{align*}7\cos 2\theta +4&=0\\\therefore \cos 2\theta &=-\displaystyle \frac{4}{7}\end{align*}
Step 1: Use a calculator to determine the reference angle
\scriptsize \displaystyle \begin{align*}\cos 2\theta &=-\displaystyle \frac{4}{7}\\\therefore 2\theta &={{124.8}^\circ}\end{align*}
Note: We keep working with the reference angle of $\scriptsize 2\theta$ until we generate the general solution.
Step 2: Use the CAST diagram to determine any other possible solutions
Our equation is $\scriptsize \displaystyle \cos 2\theta =-\displaystyle \frac{4}{7}$. $\scriptsize \cos 2\theta \lt 0$. Cosine is negative in the second and third quadrants. Our reference angle is in the second quadrant.
Second quadrant: $\scriptsize 2\theta ={{124.8}^\circ}$
Third quadrant: $\scriptsize \cos ({{360}^\circ}-\theta )=\cos \theta$
$\scriptsize 2\theta ={{360}^\circ}-{{124.8}^\circ}={{235.2}^\circ}$.
Step 3: Generate the general solution
\scriptsize \begin{align*}2\theta & ={{124.8}^\circ}+k{{.360}^\circ}\text{ or 2}\theta ={{235.2}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }\\\therefore \theta & ={{62.4}^\circ}+k{{.180}^\circ}\text{ or }\theta ={{117.6}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\text{ }\end{align*}
Step 4: Check your general solution
$\scriptsize k=2$:
$\scriptsize \therefore \theta ={{62.4}^\circ}+2\times {{180}^\circ}={{422.4}^\circ}\text{ or }\theta ={{117.6}^\circ}+2\times {{180}^\circ}={{477.6}^\circ}$
$\scriptsize \displaystyle \cos (2\times {{422.4}^\circ})=-\displaystyle \frac{4}{7}$
$\scriptsize \displaystyle \cos (2\times {{477.6}^\circ})=-\displaystyle \frac{4}{7}$
### Take note!
The general solutions for equations involving the three basic trigonometric ratios can be written as follows:
If $\scriptsize \sin \theta =x$ then:
$\scriptsize \theta =\left( {{{{\sin }}^{{-1}}}x+k{{{.360}}^\circ}} \right)\text{ or }\theta =\left( {\left( {{{{180}}^\circ}-{{{\sin }}^{{-1}}}x} \right)+k{{{.360}}^\circ}} \right),k\in \mathbb{Z}$
If $\scriptsize \cos \theta =x$ then:
$\scriptsize \theta =\left( {{{{\cos }}^{{-1}}}x+k{{{.360}}^\circ}} \right)\text{ or }\theta =\left( {\left( {{{{360}}^\circ}-{{{\cos }}^{{-1}}}x} \right)+k{{{.360}}^\circ}} \right),k\in \mathbb{Z}$
If $\scriptsize \tan \theta =x$ then:
$\scriptsize \theta =\left( {{{{\tan }}^{{-1}}}x+k{{{.180}}^\circ}} \right),k\in \mathbb{Z}$
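These general-solution formulas are easy to verify numerically. A minimal sketch for the sine case with $x=0.6$ (Python's math functions work in radians, so degrees are converted):

```python
from math import sin, asin, radians, degrees, isclose

x = 0.6
ref = degrees(asin(x))  # reference angle, about 36.87 degrees

# The two families of general solutions for sin(theta) = x, sampled
# for k = -2, -1, 0, 1, 2:
solutions = ([ref + k * 360 for k in range(-2, 3)]
             + [(180 - ref) + k * 360 for k in range(-2, 3)])

# Every member of both families satisfies the original equation.
assert all(isclose(sin(radians(t)), x, abs_tol=1e-9) for t in solutions)
print(sorted(round(t, 1) for t in solutions))
```

Swapping in `acos` or `atan` with the corresponding formulas above checks the cosine and tangent cases the same way.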
### Exercise 2.1
1. Find the general solution for $\scriptsize \theta$ in the following equations:
1. $\scriptsize \cos \theta =0.45$
2. $\scriptsize 2\tan \theta =-5$
3. $\scriptsize 8\sin \theta +3=0$
4. $\scriptsize -6\cos 2\theta -3=0$
2. Determine $\scriptsize \theta$ for the given interval:
1. $\scriptsize 3\cos \theta -2=-3$ for the interval $\scriptsize [{{0}^\circ},{{360}^\circ}]$
2. $\scriptsize \tan (3\theta -{{42}^\circ})=3.4$ where $\scriptsize \theta \in [{{0}^\circ},{{180}^\circ}]$
The full solutions are at the end of the unit.
## Trigonometric equations with double and compound angles
We can use the compound and double angle identities we learnt about in unit 1 to help us solve some more complicated trigonometric equations. They may look complicated to begin with, but once we have applied the compound and double angle identities, they become much simpler.
### Example 2.3
Determine the general solution for $\scriptsize \theta$ in $\scriptsize \displaystyle \frac{{1-\sin \theta -\cos 2\theta }}{{\sin 2\theta -\cos \theta }}=-1$.
Solution
Before we start solving any trigonometric equation, we need to try and simplify it as far as possible. In this case, there are double angles involved. Therefore, we can use the double angle identities. So we start with the LHS of the equation and simplify it.
\scriptsize \begin{align*}\displaystyle \frac{{1-\sin \theta -\cos 2\theta }}{{\sin 2\theta -\cos \theta }}&=\displaystyle \frac{{1-\sin \theta -\left( {1-2{{{\sin }}^{2}}\theta } \right)}}{{2\sin \theta \cos \theta -\cos \theta }}\\&=\displaystyle \frac{{1-\sin \theta -1+2{{{\sin }}^{2}}\theta }}{{2\sin \theta \cos \theta -\cos \theta }}\\&=\displaystyle \frac{{2{{{\sin }}^{2}}\theta -\sin \theta }}{{2\sin \theta \cos \theta -\cos \theta }}\\&=\displaystyle \frac{{\sin \theta (2\sin \theta -1)}}{{\cos \theta (2\sin \theta -1)}}\\&=\displaystyle \frac{{\sin \theta }}{{\cos \theta }}\\&=\tan \theta \end{align*}
Now our equation becomes easy to solve: $\scriptsize \tan \theta =-1$.
Ref angle: $\scriptsize \theta =-{{45}^\circ}$
General solution: $\scriptsize \theta =-{{45}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$
Note: We can also write the general solution with a positive angle as $\scriptsize \theta ={{135}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$.
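A numerical spot check of this example is reassuring: the left-hand side should agree with $\tan \theta$ at any angle where the denominator is nonzero, and every angle of the form $-45^\circ + k\cdot{{180}^\circ}$ should satisfy $\tan \theta = -1$. A short sketch:

```python
from math import sin, cos, tan, radians, isclose

t = radians(37)  # an arbitrary angle avoiding zeros of the denominator

lhs = (1 - sin(t) - cos(2 * t)) / (sin(2 * t) - cos(t))
assert isclose(lhs, tan(t), rel_tol=1e-9)  # simplification to tan holds

# The general solution: tan(-45 + k*180 degrees) = -1 for every integer k.
assert all(isclose(tan(radians(-45 + k * 180)), -1.0) for k in range(-3, 4))
print("simplification and general solution verified at sampled points")
```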
### Example 2.4
Prove that $\scriptsize 4\sin \theta {{\cos }^{3}}\theta -4{{\sin }^{3}}\theta \cos \theta =\sin 4\theta$ and hence determine the general solution for $\scriptsize \theta$ in $\scriptsize 4\sin \theta {{\cos }^{3}}\theta -4{{\sin }^{3}}\theta \cos \theta =0.8$.
Solution
In this example, we are first asked to prove that $\scriptsize 4\sin \theta {{\cos }^{3}}\theta -4{{\sin }^{3}}\theta \cos \theta =\sin 4\theta$.
\scriptsize \begin{align*}\text{LHS}&=4\sin \theta {{\cos }^{3}}\theta -4{{\sin }^{3}}\theta \cos \theta \\&=4\sin \theta \cos \theta ({{\cos }^{2}}\theta -{{\sin }^{2}}\theta )\\&=2\times 2\sin \theta \cos \theta \times \cos 2\theta \\&=2\times \sin 2\theta \times \cos 2\theta \\&=\sin 4\theta \end{align*}
Now we can solve the equation:
\scriptsize \displaystyle \begin{align*}4\sin \theta {{\cos }^{3}}\theta -4{{\sin }^{3}}\theta \cos \theta & =0.8\\\therefore \sin 4\theta & =0.8\end{align*}
Ref angle: $\scriptsize 4\theta ={{53.1}^\circ}$
Sine is positive in the first and second quadrants.
$\scriptsize 4\theta ={{180}^\circ}-{{53.1}^\circ}={{126.9}^\circ}$
General solution:
\scriptsize \begin{align*}4\theta & ={{53.1}^\circ}+k{{.360}^\circ}\text{ or }4\theta ={{126.9}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\\\therefore \theta & ={{13.275}^\circ}+k{{.90}^\circ}\text{ or }\theta ={{31.725}^\circ}+k{{.90}^\circ},k\in \mathbb{Z}\end{align*}
### Example 2.5
Find the general solution for $\scriptsize {{\sin }^{2}}\theta \cos \theta ={{\cos }^{3}}\theta$.
Solution
We need to be careful here. We cannot divide through by $\scriptsize \sin \theta$ or $\scriptsize {{\sin }^{2}}\theta$. If we do so, we will lose some of the solution. Instead, we need to proceed as if we were solving a quadratic equation.
\scriptsize \begin{align*}{{\sin }^{2}}\theta \cos \theta &={{\cos }^{3}}\theta \\\therefore {{\sin }^{2}}\theta \cos \theta -{{\cos }^{3}}\theta &=0\\\therefore \cos \theta ({{\sin }^{2}}\theta -{{\cos }^{2}}\theta )&=0\quad \text{Take a factor of }-1\text{ out of the bracket}\\\therefore -\cos \theta ({{\cos }^{2}}\theta -{{\sin }^{2}}\theta )&=0\\\therefore \cos \theta ({{\cos }^{2}}\theta -{{\sin }^{2}}\theta )&=0\\\therefore \cos \theta \times \cos 2\theta &=0\\\therefore \cos \theta &=0\text{ or }\cos 2\theta =0\end{align*}
We can deal with each part of the solution separately.
$\scriptsize \cos \theta =0$:
Ref angle: $\scriptsize \theta ={{90}^\circ}$
General solution: $\scriptsize \theta ={{90}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{270}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$.
We can simplify this general solution to $\scriptsize \theta ={{90}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$.
$\scriptsize \cos 2\theta =0$:
Ref angle: $\scriptsize 2\theta ={{90}^\circ}$
General solution:
\scriptsize \begin{align*}2\theta & ={{90}^\circ}+k{{.360}^\circ}\text{ or 2}\theta ={{270}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\\\therefore \theta & ={{45}^\circ}+k{{.180}^\circ}\text{ or }\theta ={{135}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\end{align*}.
Again, we can simplify this general solution to $\scriptsize \theta ={{45}^\circ}+k{{.90}^\circ},k\in \mathbb{Z}$.
### Take note!
There are two basic strategies for solving these more complicated trigonometric equations:
1. Simplify one side of the equation down to a single trigonometric ratio using the various trigonometric identities at your disposal.
2. Make the one side of the equation equal to zero and then simplify the other side in order to make use of the zero product rule – if $\scriptsize a\times b=0$ then either $\scriptsize a=0$ or $\scriptsize b=0$. This is the technique we use to solve quadratic equations.
### Exercise 2.2
1. Determine the general solution in each case:
1. $\scriptsize \cos 2x=\sin {{32}^\circ}$
2. $\scriptsize \cos 2x=\sin x$
3. $\scriptsize \sin \theta \cdot \sin 2\theta +\cos 2\theta =1$
4. $\scriptsize 5\tan 2x-1={{\tan }^{2}}2x+5$
Question 2 adapted from Everything Maths Grade 12 Exercise 4-4 question 5a
1. Given $\scriptsize \sin x\cos 3x-\cos x\sin 3x=\tan {{140}^\circ}$:
1. Find the general solution.
2. Determine the solutions for the interval $\scriptsize \left[ {{{0}^\circ},{{{90}}^\circ}} \right]$.
The full solutions are at the end of the unit.
## Summary
In this unit you have learnt the following:
• How to apply the trigonometric identities, especially the compound and double angle identities, to solve more complicated trigonometric equations.
# Unit 2: Assessment
#### Suggested time to complete: 45 minutes
1. Solve each equation for each given interval. If no interval is given, find the general solution.
1. $\scriptsize 2\cos \theta -1=0$
2. $\scriptsize {{\sin }^{2}}\theta +\sin \theta -1=0$
3. $\scriptsize 4\sin x+3\tan x=\displaystyle \frac{3}{{\cos x}}+4$ for $\scriptsize \left[ {{{0}^\circ},{{{180}}^\circ}} \right]$
4. $\scriptsize 6\cos \theta -5=\displaystyle \frac{4}{{\cos \theta }}$
5. $\scriptsize \cos 2x-\cos x+1=0$ for $\scriptsize \left[ {{{0}^\circ},{{{360}}^\circ}} \right]$
Question 2 adapted from NC(V) Mathematics Level 4 Paper 2 November 2015 question 2.5
1. Given that $\scriptsize \cos 2\theta =2{{\cos }^{2}}\theta -1$:
1. Show that $\scriptsize \cos 2\theta +3\cos \theta -1=2{{\cos }^{2}}\theta +3\cos \theta -2$.
2. Hence, determine the value(s) of $\scriptsize \theta$ if $\scriptsize \cos 2\theta +3\cos \theta -1=0$ and $\scriptsize {{0}^\circ}\le \theta \le {{360}^\circ}$.
Question 3 adapted from NC(V) Mathematics Level 4 Paper 2 November 2016 question 2.5
1. If $\scriptsize 3\cos 2x-\sin 2x-1=0$ find the value(s) of $\scriptsize x$ in the interval $\scriptsize \left[ {{{0}^\circ},{{{360}}^\circ}} \right]$.
The full solutions are at the end of the unit.
# Unit 2: Solutions
### Exercise 2.1
1. .
1. $\scriptsize \cos \theta =0.45$
Ref angle: $\scriptsize \theta ={{63.3}^\circ}$
$\scriptsize \cos \theta$ is positive in the first and fourth quadrants.
$\scriptsize \theta ={{360}^\circ}-{{63.3}^\circ}={{296.7}^\circ}$
General solution: $\scriptsize \theta ={{63.3}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{296.7}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
2. .
\scriptsize \begin{align*}2\tan \theta & =-5\\\therefore \tan \theta & =-\displaystyle \frac{5}{2}\end{align*}
Ref angle: $\scriptsize \theta =-{{68.2}^\circ}$
Note: You can also express your reference angle as a positive angle – $\scriptsize {{360}^\circ}-{{68.2}^\circ}={{291.8}^\circ}$
General solution: $\scriptsize \theta =-{{68.2}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\text{ }$
3. .
\scriptsize \displaystyle \begin{align*}8\sin \theta +3 & =0\\\therefore \sin \theta & =-\displaystyle \frac{3}{8}\end{align*}
Ref angle: $\scriptsize \theta =-{{22.0}^\circ}$
Note: You can also express your reference angle as a positive angle – $\scriptsize {{360}^\circ}-{{22.0}^\circ}={{338}^\circ}$
$\scriptsize \sin \theta$ is negative in the third and fourth quadrants.
$\scriptsize \theta ={{180}^\circ}+{{22.0}^\circ}={{202.0}^\circ}$
General solution: $\scriptsize \theta =-{{22.0}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{202.0}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
4. .
\scriptsize \begin{align*}-6\cos 2\theta -3 & =0\\\therefore \cos 2\theta & =-0.5\end{align*}
Ref angle: $\scriptsize 2\theta ={{120}^\circ}$
$\scriptsize \cos 2\theta$ is negative in the second and third quadrants.
$\scriptsize 2\theta ={{360}^\circ}-{{120}^\circ}={{240}^\circ}$
General solution:
\scriptsize \begin{align*}2\theta & ={{120}^\circ}+k{{.360}^\circ}\text{ or }2\theta ={{240}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }\\\therefore \theta & ={{60}^\circ}+k{{.180}^\circ}\text{ or }\theta ={{120}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\end{align*}
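As an illustrative aside (not part of the original solution), the general solution just found can be verified numerically by substituting several values of $\scriptsize k$ back into $\scriptsize -6\cos 2\theta -3=0$:

```python
import math

# Check theta = 60 + k*180 and theta = 120 + k*180 for several k:
# each should satisfy -6*cos(2*theta) - 3 = 0.
def residual(deg):
    return -6 * math.cos(math.radians(2 * deg)) - 3

solutions = ([60 + 180 * k for k in range(-2, 3)]
             + [120 + 180 * k for k in range(-2, 3)])
assert all(abs(residual(d)) < 1e-9 for d in solutions)
print("general solution verified for k = -2 .. 2")
```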
2. .
1. .
\scriptsize \begin{align*}3\cos \theta -2 & =-3\\\therefore \cos \theta & =-\displaystyle \frac{1}{3}\end{align*}
Ref angle: $\scriptsize \theta ={{109.5}^\circ}$
$\scriptsize \cos \theta$ is negative in the second and third quadrants.
$\scriptsize \theta ={{360}^\circ}-{{109.5}^\circ}={{250.5}^\circ}$
General solution: $\scriptsize \theta ={{109.5}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{250.5}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\text{ }$
Specific solution: $\scriptsize \theta ={{109.5}^\circ}\text{ or }\theta ={{250.5}^\circ}$
2. $\scriptsize \tan (3\theta -{{42}^\circ})=3.4$
Ref angle: $\scriptsize 3\theta -{{42}^\circ}={{73.6}^\circ}$
General solution:
\scriptsize \begin{align*}3\theta -{{42}^\circ} &={{73.6}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\\\therefore 3\theta &={{115.6}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\\\therefore \theta & ={{38.5}^\circ}+k{{.60}^\circ},k\in \mathbb{Z}\end{align*}
Specific solution: $\scriptsize \theta ={{38.5}^\circ}\text{ or }\theta ={{98.5}^\circ}\text{ or }\theta ={{158.5}^\circ}$
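A quick numeric spot-check (an illustrative addition): because the printed angles are rounded to one decimal place, substituting them back into $\scriptsize \tan (3\theta -{{42}^\circ})$ gives values close to, but not exactly, 3.4.

```python
import math

# The three rounded solutions should all give tan(3*theta - 42 deg) near 3.4.
def f(theta_deg):
    return math.tan(math.radians(3 * theta_deg - 42))

for theta in (38.5, 98.5, 158.5):
    # rounded inputs, so allow a loose tolerance
    assert abs(f(theta) - 3.4) < 0.05, theta
print("rounded solutions check out to about two decimal places")
```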
Back to Exercise 2.1
### Exercise 2.2
1. .
1. .
\scriptsize \begin{align*}\cos 2x & =\sin {{32}^\circ}\\\therefore \cos 2x & =0.530\end{align*}
Or
\scriptsize \begin{align*}\cos 2x & =\sin {{32}^\circ}\\\therefore \cos 2x & =\cos ({{90}^\circ}-{{32}^\circ})\\&=\cos {{58}^\circ}\end{align*}
Ref angle: $\scriptsize 2x={{58}^\circ}$
Cosine is positive in the first and fourth quadrants.
$\scriptsize 2x={{360}^\circ}-{{58}^\circ}={{302}^\circ}$
General solution:
\scriptsize \begin{align*}2x&={{58}^\circ}+k{{.360}^\circ}\text{ or }2x={{302}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\\\therefore x&={{29}^\circ}+k{{.180}^\circ}\text{ or }x={{151}^\circ}+k{{.180}^\circ}\text{, }k\in \mathbb{Z}\end{align*}
2. .
\scriptsize \begin{align*}\cos 2x & =\sin x\\\therefore 1-2{{\sin }^{2}}x & =\sin x\\\therefore 2{{\sin }^{2}}x+\sin x-1 & =0\\\therefore (2\sin x-1)(\sin x+1) & =0\\\therefore \sin x=\displaystyle \frac{1}{2}\text{ } & \text{or }\sin x=-1\end{align*}
$\scriptsize \sin x=\displaystyle \frac{1}{2}$:
Ref angle: $\scriptsize x={{30}^\circ}$
Sine is positive in the first and second quadrants.
$\scriptsize x={{180}^\circ}-{{30}^\circ}={{150}^\circ}$
General solution: $\scriptsize x={{30}^\circ}+k{{.360}^\circ}\text{ or }x={{150}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
$\scriptsize \sin x=-1$:
Ref angle: $\scriptsize x=-{{90}^\circ}$
General solution: $\scriptsize x={{270}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
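As an illustrative check (not part of the original solution), all three solution families of $\scriptsize \cos 2x=\sin x$ can be substituted back numerically:

```python
import math

# Each family x = 30 + k*360, 150 + k*360 or 270 + k*360 should make
# cos(2x) - sin(x) vanish.
def residual(deg):
    return math.cos(math.radians(2 * deg)) - math.sin(math.radians(deg))

for base in (30, 150, 270):
    for k in (-1, 0, 1, 2):
        assert abs(residual(base + 360 * k)) < 1e-9
print("cos 2x = sin x verified on all three solution families")
```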
3. .
\scriptsize \begin{align*}\sin \theta \cdot \sin 2\theta +\cos 2\theta & =1\\\therefore \sin \theta \cdot 2\sin \theta \cos \theta +{{\cos }^{2}}\theta -{{\sin }^{2}}\theta -1 & =0\\\therefore 2{{\sin }^{2}}\theta \cos \theta +{{\cos }^{2}}\theta -1-{{\sin }^{2}}\theta & =0\\\therefore 2{{\sin }^{2}}\theta \cos \theta -{{\sin }^{2}}\theta -{{\sin }^{2}}\theta & =0\\\therefore 2{{\sin }^{2}}\theta \cos \theta -2{{\sin }^{2}}\theta & =0\\\therefore 2{{\sin }^{2}}\theta (\cos \theta -1) & =0\\\therefore \sin \theta =0\text{ } & \text{or }\cos \theta =1\end{align*}
$\scriptsize \sin \theta =0$:
Ref angle: $\scriptsize \theta ={{0}^\circ}$
General solution: $\scriptsize \theta ={{0}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$
$\scriptsize \cos \theta =1$:
Ref angle: $\scriptsize \theta ={{0}^\circ}$
General solution: $\scriptsize \theta ={{0}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
Overall general solution: $\scriptsize \theta ={{0}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$
4. .
\scriptsize \begin{align*}5\tan 2x-1 & ={{\tan }^{2}}2x+5\\\therefore {{\tan }^{2}}2x-5\tan 2x+6 & =0\\\therefore (\tan 2x-3)(\tan 2x-2) & =0\\\therefore \tan 2x=3\text{ } & \text{or }\tan 2x=2\end{align*}
$\scriptsize \tan 2x=3$:
Ref angle: $\scriptsize 2x={{71.6}^\circ}$
General solution:
\scriptsize \begin{align*}2x & ={{71.6}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\\\therefore x & ={{35.8}^\circ}+k{{.90}^\circ},k\in \mathbb{Z}\end{align*}
$\scriptsize \tan 2x=2$:
Ref angle: $\scriptsize 2x={{63.4}^\circ}$
General solution:
\scriptsize \begin{align*}2x & ={{63.4}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\\\therefore x & ={{31.7}^\circ}+k{{.90}^\circ},k\in \mathbb{Z}\end{align*}
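Both families can be spot-checked numerically (an illustrative addition; the angles are rounded to one decimal place, so a small tolerance is used):

```python
import math

# x = 35.8 + k*90 should give tan(2x) near 3, and x = 31.7 + k*90 near 2.
def t(x_deg):
    return math.tan(math.radians(2 * x_deg))

assert all(abs(t(35.8 + 90 * k) - 3) < 0.02 for k in range(4))
assert all(abs(t(31.7 + 90 * k) - 2) < 0.02 for k in range(4))
print("both tan 2x families verified")
```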
2. .
1. .
\scriptsize \begin{align*}\text{LHS}&=\sin x\cos 3x-\cos x\sin 3x\\&=\sin x\cos (2x+x)-\cos x\sin (2x+x)\\&=\sin x\left( {\cos 2x\cos x-\sin 2x\sin x} \right)-\cos x\left( {\sin 2x\cos x+\cos 2x\sin x} \right)\\&=\sin x\left( {\left( {2{{{\cos }}^{2}}x-1} \right)\cos x-2\sin x\cos x\sin x} \right)-\cos x\left( {2\sin x\cos x\cos x+(2{{{\cos }}^{2}}x-1)\sin x} \right)\\&=\sin x\left( {2{{{\cos }}^{3}}x-\cos x-2{{{\sin }}^{2}}x\cos x} \right)-\cos x\left( {2\sin x{{{\cos }}^{2}}x+2\sin x{{{\cos }}^{2}}x-\sin x} \right)\\&=2\sin x{{\cos }^{3}}x-\sin x\cos x-2{{\sin }^{3}}x\cos x-2\sin x{{\cos }^{3}}x-2\sin x{{\cos }^{3}}x+\sin x\cos x\\&=-2\sin x{{\cos }^{3}}x-2{{\sin }^{3}}x\cos x\\&=-2\sin x\cos x({{\cos }^{2}}x+{{\sin }^{2}}x)\\&=-\sin 2x\end{align*}
Note that the compound angle identity gives the same result directly: $\scriptsize \sin x\cos 3x-\cos x\sin 3x=\sin (x-3x)=-\sin 2x$. Therefore:
\scriptsize \begin{align*}-\sin 2x & =\tan {{140}^\circ}\\\therefore \sin 2x & =-\tan {{140}^\circ}\\ & =0.839\end{align*}
Ref angle: $\scriptsize 2x={{57.05}^\circ}$
Sine is positive in the first and second quadrants.
$\scriptsize 2x={{180}^\circ}-{{57.05}^\circ}={{122.95}^\circ}$
General solution:
\scriptsize \begin{align*}2x & ={{57.05}^\circ}+k{{.360}^\circ}\text{ or }2x={{122.95}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}\\\therefore x & ={{28.525}^\circ}+k{{.180}^\circ}\text{ or }x={{61.475}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\end{align*}
2. Solutions for the interval $\scriptsize \left[ {{{0}^\circ},{{{90}}^\circ}} \right]$: $\scriptsize x ={{28.525}^\circ}\text{ or }x={{61.475}^\circ}$
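As an illustrative aside, both solutions in the interval can be substituted back into the original equation numerically:

```python
import math

# Substitute x = 28.525 and 61.475 into sin(x)cos(3x) - cos(x)sin(3x)
# and compare with tan(140 deg); the small mismatch comes from rounding.
rhs = math.tan(math.radians(140))

def lhs(x_deg):
    x = math.radians(x_deg)
    return math.sin(x) * math.cos(3 * x) - math.cos(x) * math.sin(3 * x)

for x in (28.525, 61.475):
    assert abs(lhs(x) - rhs) < 1e-3, x
print("both solutions in [0, 90] degrees verified")
```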
Back to Exercise 2.2
### Unit 2: Assessment
1. .
1. .
\scriptsize \begin{align*}2\cos \theta -1 & =0\\\therefore \cos \theta & =\displaystyle \frac{1}{2}\end{align*}
Ref angle: $\scriptsize \theta ={{60}^\circ}$
Cosine is positive in the first and fourth quadrants.
$\scriptsize \theta ={{360}^\circ}-{{60}^\circ}={{300}^\circ}$
General solution: $\scriptsize \theta ={{60}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{300}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
2. .
\scriptsize \begin{align*}2{{\sin }^{2}}\theta +\sin \theta -1 & =0\\\therefore (2\sin \theta -1)(\sin \theta +1) & =0\\\therefore \sin \theta =\displaystyle \frac{1}{2}\text{ } & \text{or }\sin \theta =-1\end{align*}
$\scriptsize \sin \theta =\displaystyle \frac{1}{2}$:
Ref angle: $\scriptsize \theta ={{30}^\circ}$
Sine is positive in the first and second quadrants.
$\scriptsize \theta ={{180}^\circ}-{{30}^\circ}={{150}^\circ}$
General solution: $\scriptsize \theta ={{30}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{150}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
$\scriptsize \sin \theta =-1$:
Ref angle: $\scriptsize \theta =-{{90}^\circ}$
General solution: $\scriptsize \theta ={{270}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
3. .
\scriptsize \displaystyle \begin{align*}4\sin x+3\tan x & =\displaystyle \frac{3}{{\cos x}}+4\quad \quad \cos x\ne 0\therefore x\ne {{90}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\\\therefore 4\sin x+3\displaystyle \frac{{\sin x}}{{\cos x}}-\displaystyle \frac{3}{{\cos x}}-4 & =0\\\therefore 4\sin x\cos x+3\sin x-3-4\cos x & =0\\\therefore 4\cos x(\sin x-1)+3(\sin x-1) & =0\\\therefore (\sin x-1)(4\cos x+3) & =0\\\therefore \sin x=1\text{ } & \text{or }\cos x=-\displaystyle \frac{3}{4}\end{align*}
$\scriptsize \sin x=1$:
This would give $\scriptsize x={{90}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$, but these values are excluded by the restriction $\scriptsize x\ne {{90}^\circ}+k{{.180}^\circ}$ (where $\scriptsize \tan x$ and $\scriptsize \displaystyle \frac{3}{{\cos x}}$ are undefined), so this factor gives no valid solutions.
$\scriptsize \cos x=-\displaystyle \frac{3}{4}$:
Ref angle: $\scriptsize x={{138.59}^\circ}$
Cosine is negative in the second and third quadrants.
$\scriptsize x={{360}^\circ}-{{138.59}^\circ}={{221.41}^\circ}$
General solution: $\scriptsize x={{138.59}^\circ}+k{{.360}^\circ}\text{ or }x={{221.41}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
For the interval $\scriptsize \left[ {{{0}^\circ},{{{180}}^\circ}} \right]$: $\scriptsize x={{138.59}^\circ}$ (the candidate $\scriptsize x={{90}^\circ}$ is excluded because $\scriptsize \tan x$ is undefined there)
4. .
\scriptsize \begin{align*}6\cos \theta -5 & =\displaystyle \frac{4}{{\cos \theta }}\quad \quad \cos \theta \ne 0\therefore \theta \ne {{90}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}\\\therefore 6{{\cos }^{2}}\theta -5\cos \theta -4 & =0\\\therefore (3\cos \theta -4)(2\cos \theta +1) & =0\\\therefore \cos \theta =\displaystyle \frac{4}{3}\text{ } & \text{or }\cos \theta =-\displaystyle \frac{1}{2}\end{align*}
$\scriptsize \cos \theta =\displaystyle \frac{4}{3}$ – No solution
$\scriptsize \cos \theta =-\displaystyle \frac{1}{2}$:
Ref angle: $\scriptsize \theta ={{120}^\circ}$
Cosine is negative in the second and third quadrants.
$\scriptsize \theta ={{360}^\circ}-({{120}^\circ})={{240}^\circ}$
General solution: $\scriptsize \theta ={{120}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{240}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
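As an illustrative check (not part of the original solution), both families should satisfy $\scriptsize 6\cos \theta -5=\displaystyle \frac{4}{{\cos \theta }}$:

```python
import math

# theta = 120 + k*360 and theta = 240 + k*360 should make
# 6*cos(theta) - 5 - 4/cos(theta) vanish.
def residual(deg):
    c = math.cos(math.radians(deg))
    return 6 * c - 5 - 4 / c

for base in (120, 240):
    for k in (-1, 0, 1):
        assert abs(residual(base + 360 * k)) < 1e-9
print("theta = 120 and 240 degree families verified")
```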
5. .
\scriptsize \begin{align*}\cos 2x-\cos x+1 & =0\\\therefore 2{{\cos }^{2}}x-1-\cos x+1 & =0\\\therefore 2{{\cos }^{2}}x-\cos x & =0\\\therefore \cos x(2\cos x-1) & =0\\\therefore \cos x=0\text{ } & \text{or }\cos x=\displaystyle \frac{1}{2}\end{align*}
$\scriptsize \cos x=0$:
Ref angle: $\scriptsize x={{90}^\circ}$
General solution: $\scriptsize x={{90}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$
$\scriptsize \cos x=\displaystyle \frac{1}{2}$:
Ref angle: $\scriptsize x={{60}^\circ}$
Cosine is positive in the first and fourth quadrants.
$\scriptsize x={{360}^\circ}-{{60}^\circ}={{300}^\circ}$
General solution: $\scriptsize x={{60}^\circ}+k{{.360}^\circ}\text{ or }x={{300}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
For the interval $\scriptsize \left[ {{{0}^\circ},{{{360}}^\circ}} \right]$: $\scriptsize x=\text{6}{{\text{0}}^\circ}\text{ or }x={{90}^\circ}\text{ or }x={{270}^\circ}\text{ or }x=\text{30}{{\text{0}}^\circ}$
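One way to double-check an interval answer is a brute-force scan (an illustrative addition, not part of the original solution): test every integer degree in the interval and keep the angles where the left-hand side vanishes.

```python
import math

# Scan [0, 360] degrees for roots of cos(2x) - cos(x) + 1 = 0.
def residual(deg):
    return math.cos(math.radians(2 * deg)) - math.cos(math.radians(deg)) + 1

roots = [d for d in range(0, 361) if abs(residual(d)) < 1e-9]
print(roots)  # → [60, 90, 270, 300]
```

A scan at integer degrees only catches whole-degree roots, but it is a fast way to confirm none have been missed or wrongly included.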
2. $\scriptsize \cos 2\theta =2{{\cos }^{2}}\theta -1$
1. .
\scriptsize \begin{align*}\text{LHS}&=\cos 2\theta +3\cos \theta -1\\&=2{{\cos }^{2}}\theta -1+3\cos \theta -1\\&=2{{\cos }^{2}}\theta +3\cos \theta -2=\text{RHS}\end{align*}
2. .
\scriptsize \begin{align*}\cos 2\theta +3\cos \theta -1 & =0\\\therefore 2{{\cos }^{2}}\theta +3\cos \theta -2 & =0\\\therefore (2\cos \theta -1)(\cos \theta +2) & =0\\\therefore \cos \theta =\displaystyle \frac{1}{2}\text{ } & \text{or }\cos \theta =-2\end{align*}
$\scriptsize \cos \theta =\displaystyle \frac{1}{2}$:
Ref angle: $\scriptsize \theta ={{60}^\circ}$
Cosine is positive in the first and fourth quadrants.
$\scriptsize \theta ={{360}^\circ}-{{60}^\circ}={{300}^\circ}$
General solution: $\scriptsize \theta ={{60}^\circ}+k{{.360}^\circ}\text{ or }\theta ={{300}^\circ}+k{{.360}^\circ},k\in \mathbb{Z}$
$\scriptsize \cos \theta =-2$ – No solution
For the interval $\scriptsize \left[ {{{0}^\circ},{{{360}}^\circ}} \right]$: $\scriptsize \theta ={{60}^\circ}\text{ or }\theta ={{300}^\circ}$
3. .
\scriptsize \begin{align*}3\cos 2x-\sin 2x-1 & =0\\\therefore 3({{\cos }^{2}}x-{{\sin }^{2}}x)-2\sin x\cos x-({{\sin }^{2}}x+{{\cos }^{2}}x) & =0\\\therefore 3{{\cos }^{2}}x-3{{\sin }^{2}}x-2\sin x\cos x-{{\sin }^{2}}x-{{\cos }^{2}}x & =0\\\therefore 2{{\cos }^{2}}x-2\cos x\sin x-4{{\sin }^{2}}x & =0\\\therefore {{\cos }^{2}}x-\cos x\sin x-2{{\sin }^{2}}x & =0\\\therefore (\cos x+\sin x)(\cos x-2\sin x) & =0\\\therefore \cos x=-\sin x\text{ } & \text{or }\cos x=2\sin x\end{align*}
\scriptsize \begin{align*}\cos x & =-\sin x\\\therefore 1 & =-\displaystyle \frac{{\sin x}}{{\cos x}}\\\therefore \tan x & =-1\end{align*}
Ref angle: $\scriptsize x=-{{45}^\circ}$
General solution: $\scriptsize x={{135}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$
\scriptsize \begin{align*}\cos x & =2\sin x\\\therefore 1 & =2\cdot \displaystyle \frac{{\sin x}}{{\cos x}}\\\therefore 2\tan x & =1\\\therefore \tan x & =\displaystyle \frac{1}{2}\end{align*}
Ref angle: $\scriptsize x={{26.6}^\circ}$
General solution: $\scriptsize x={{26.6}^\circ}+k{{.180}^\circ},k\in \mathbb{Z}$
For the interval $\scriptsize \left[ {{{0}^\circ},{{{360}}^\circ}} \right]$: $\scriptsize x={{26.6}^\circ}\text{ or }x={{135}^\circ}\text{ or }x={{206.6}^\circ}\text{ or }x={{315}^\circ}$
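Finally, all four interval solutions can be substituted back numerically (an illustrative addition; the two rounded angles give residuals close to, but not exactly, zero):

```python
import math

# Substitute each solution into 3*cos(2x) - sin(2x) - 1.
def residual(deg):
    r = math.radians(2 * deg)
    return 3 * math.cos(r) - math.sin(r) - 1

for x in (26.6, 135, 206.6, 315):
    assert abs(residual(x)) < 0.01, x
print("all four solutions in [0, 360] degrees verified")
```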
Back to Unit 2: Assessment |
Subjects -> BIOLOGY (Total: 3483 journals) - BIOCHEMISTRY (267 journals) - BIOENGINEERING (143 journals) - BIOLOGY (1667 journals) - BIOPHYSICS (50 journals) - BIOTECHNOLOGY (271 journals) - BOTANY (252 journals) - CYTOLOGY AND HISTOLOGY (32 journals) - ENTOMOLOGY (76 journals) - GENETICS (172 journals) - MICROBIOLOGY (292 journals) - MICROSCOPY (12 journals) - ORNITHOLOGY (29 journals) - PHYSIOLOGY (73 journals) - ZOOLOGY (147 journals) BIOLOGY (1667 journals) 1 2 3 4 5 6 7 8 | Last
Similar Journals
Annales Henri PoincaréJournal Prestige (SJR): 1.097 Citation Impact (citeScore): 2Number of Followers: 3 Hybrid journal (It can contain Open Access articles) ISSN (Print) 1424-0637 - ISSN (Online) 1424-0661 Published by Springer-Verlag [2626 journals]
• Embeddings, Immersions and the Bartnik Quasi-Local Mass Conjectures
• Abstract: Abstract Given a Riemannian 3-ball $$({\bar{B}}, g)$$ of nonnegative scalar curvature, Bartnik conjectured that $$({\bar{B}}, g)$$ admits an asymptotically flat (AF) extension (without horizons) of the least possible ADM mass and that such a mass minimizer is an AF solution to the static vacuum Einstein equations, uniquely determined by natural geometric conditions on the boundary data of $$({\bar{B}}, g)$$ . We prove the validity of the second statement, i.e., such mass minimizers, if they exist, are indeed AF solutions of the static vacuum equations. On the other hand, we prove that the first statement is not true in general; there is a rather large class of bodies $$({\bar{B}}, g)$$ for which a minimal mass extension does not exist.
PubDate: 2019-03-20
• Resolvent Convergence to Dirac Operators on Planar Domains
• Abstract: Abstract Consider a Dirac operator defined on the whole plane with a mass term of size m supported outside a domain $$\Omega$$ . We give a simple proof for the norm resolvent convergence, as m goes to infinity, of this operator to a Dirac operator defined on $$\Omega$$ with infinite-mass boundary conditions. The result is valid for bounded and unbounded domains and gives estimates on the speed of convergence. Moreover, the method easily extends when adding external matrix-valued potentials.
PubDate: 2019-03-19
• Optimal Potentials for Quantum Graphs
• Abstract: Abstract Schrödinger operators on metric graphs with delta couplings at the vertices are studied. We discuss which potential and which distribution of delta couplings on a given graph maximise the ground state energy, provided the integral of the potential and the sum of strengths of the delta couplings are fixed. It appears that the optimal potential if it exists is a constant function on its support formed by a set of intervals separated from the vertices. In the case where the optimal configuration does not exist explicit optimising sequences are presented.
PubDate: 2019-03-18
• Smoothness of Correlation Functions in Liouville Conformal Field Theory
• Abstract: Abstract We prove smoothness of the correlation functions in probabilistic Liouville Conformal Field Theory. Our result is a step towards proving that the correlation functions satisfy the higher Ward identities and the higher BPZ equations, predicted by the Conformal Bootstrap approach to Conformal Field Theory.
PubDate: 2019-03-16
• Classification of String Solutions for the Self-Dual
Einstein–Maxwell–Higgs Model
• Abstract: Abstract In this paper, we are concerned with an elliptic system arising from the Einstein–Maxwell–Higgs model which describes electromagnetic dynamics coupled with gravitational fields in spacetime. Reducing this system to a single equation and setting up the radial ansatz, we classify solutions into three cases: topological solutions, nontopological solutions of type I, and nontopological solutions of type II. There are two important constants: $$a>0$$ representing the gravitational constant and $$N\ge 0$$ representing the total string number. When $$0\le aN<2$$ , we give a complete classification of all possible solutions and prove the uniqueness of solutions for a given decay rate. In particular, we obtain a new class of topological solitons, with nonstandard asymptotic value $$\sigma <0$$ at infinity, when the total string number is sufficiently large such that $$1<aN<2$$ . We also prove the multiple existence of solutions for a given decay rate in the case $$aN \ge 2$$ . Our classification improves previous results which are known only for the case $$0<aN<1$$ .
PubDate: 2019-03-16
• The Spin $$\varvec{\pm }$$ ± 1 Teukolsky Equations and the Maxwell
System on Schwarzschild
• Abstract: Abstract In this note, we prove decay for the spin ± 1 Teukolsky equations on the Schwarzschild spacetime. These equations are those satisfied by the extreme components ( $$\alpha$$ and $${\underline{\alpha }}$$ ) of the Maxwell field, when expressed with respect to a null frame. The subject has already been addressed in the literature, and the interest in the present approach lies in the connection with the recent work by Dafermos, Holzegel and Rodnianski on linearized gravity (Dafermos et al. in The linear stability of the Schwarzschild solution to gravitational perturbations, 2016. arXiv:1601.06467). In analogy with the spin $$\pm 2$$ case, it seems difficult to directly prove Morawetz estimates for solutions to the spin $$\pm 1$$ Teukolsky equations. By performing a differential transformation on the extreme components $$\alpha$$ and $${\underline{\alpha }}$$ , we obtain quantities which satisfy a Fackerell–Ipser Equation, which does admit a straightforward Morawetz estimate and is the key to the decay estimates. This approach is exactly analogous to the strategy appearing in the aforementioned work on linearized gravity. We achieve inverse polynomial decay estimates by a streamlined version of the physical space $$r^p$$ method of Dafermos and Rodnianski. Furthermore, we are also able to prove decay for all the components of the Maxwell system. The transformation that we use is a physical space version of a fixed-frequency transformation which appeared in the work of Chandrasekhar (Proc R Soc Lond Ser A 348(1652):39–55, 1976). The present note is a version of the author’s master thesis and also serves the “pedagogical” purpose to be as complete as possible in the presentation.
PubDate: 2019-03-11
• A Uniform Approach to Soliton Cellular Automata Using Rigged
Configurations
• Abstract: Abstract For soliton cellular automata, we give a uniform description and proofs of the solitons, the scattering rule of two solitons, and the phase shift using rigged configurations in a number of special cases. In particular, we prove these properties for the soliton cellular automata using $$B^{r,1}$$ when r is adjacent to 0 in the Dynkin diagram or there is a Dynkin diagram automorphism sending r to 0.
PubDate: 2019-03-11
• Pieri Rules for the Jack Polynomials in Superspace and the 6-Vertex Model
• Abstract: Abstract We present Pieri rules for the Jack polynomials in superspace. The coefficients in the Pieri rules are, except for an extra determinant, products of quotients of linear factors in $$\alpha$$ (expressed, as in the usual Jack polynomial case, in terms of certain hook lengths in a Ferrers’ diagram). We show that, surprisingly, the extra determinant is related to the partition function of the 6-vertex model. We give, as a conjecture, the Pieri rules for the Macdonald polynomials in superspace.
PubDate: 2019-03-07
• Global Description of Action-Angle Duality for a Poisson–Lie Deformation
of the Trigonometric $$\varvec{\mathrm {BC}_n}$$ BC n Sutherland System
• Abstract: Abstract Integrable many-body systems of Ruijsenaars–Schneider–van Diejen type displaying action-angle duality are derived by Hamiltonian reduction of the Heisenberg double of the Poisson–Lie group $$\mathrm{SU}(2n)$$ . New global models of the reduced phase space are described, revealing non-trivial features of the two systems in duality with one another. For example, after establishing that the symplectic vector space $$\mathbb {C}^n\simeq \mathbb {R}^{2n}$$ underlies both global models, it is seen that for both systems the action variables generate the standard torus action on $$\mathbb {C}^n$$ , and the fixed point of this action corresponds to the unique equilibrium positions of the pertinent systems. The systems in duality are found to be non-degenerate in the sense that the functional dimension of the Poisson algebra of their conserved quantities is equal to half the dimension of the phase space. The dual of the deformed Sutherland system is shown to be a limiting case of a van Diejen system.
PubDate: 2019-03-06
• On the Asymptotic Behavior of Static Perfect Fluids
• Abstract: Abstract Static spherically symmetric solutions to the Einstein–Euler equations with prescribed central densities are known to exist, be unique and smooth for reasonable equations of state. Some criteria are also available to decide whether solutions have finite extent (stars with a vacuum exterior) or infinite extent. In the latter case, the matter extends globally with the density approaching zero at infinity. The asymptotic behavior largely depends on the equation of state of the fluid and is still poorly understood. While a few such unbounded solutions are known to be asymptotically flat with finite ADM mass, the vast majority are not. We provide a full geometric description of the asymptotic behavior of static spherically symmetric perfect fluid solutions with linear equations of state and polytropic-type equations of state with index $$n>5$$ . In order to capture the asymptotic behavior, we introduce a notion of scaled quasi-asymptotic flatness, which encompasses the notion of asymptotic conicality. In particular, these spacetimes are asymptotically simple.
PubDate: 2019-03-01
• Derivation of the 1d Gross–Pitaevskii Equation from the 3d Quantum
Many-Body Dynamics of Strongly Confined Bosons
• Abstract: Abstract We consider the dynamics of N interacting bosons initially forming a Bose–Einstein condensate. Due to an external trapping potential, the bosons are strongly confined in two dimensions, where the transverse extension of the trap is of order $$\varepsilon$$ . The non-negative interaction potential is scaled such that its range and its scattering length are both of order $$(N/\varepsilon ^2)^{-1}$$ , corresponding to the Gross–Pitaevskii scaling of a dilute Bose gas. We show that in the simultaneous limit $$N\rightarrow \infty$$ and $$\varepsilon \rightarrow 0$$ , the dynamics preserve condensation and the time evolution is asymptotically described by a Gross–Pitaevskii equation in one dimension. The strength of the nonlinearity is given by the scattering length of the unscaled interaction, multiplied with a factor depending on the shape of the confining potential. For our analysis, we adapt a method by Pickl (Rev Math Phys 27(01):1550003, 2015) to the problem with dimensional reduction and rely on the derivation of the one-dimensional NLS equation for interactions with softer scaling behaviour in Boßmann (Derivation of the 1d NLS equation from the 3d quantum many-body dynamics of strongly confined bosons. arXiv preprint, 2018. arXiv:1803.11011).
PubDate: 2019-03-01
• Strong Cosmic Censorship in Orthogonal Bianchi Class B Perfect Fluids and
Vacuum Models
• Abstract: Abstract The Strong Cosmic Censorship conjecture states that for generic initial data to Einstein’s field equations, the maximal globally hyperbolic development is inextendible. We prove this conjecture in the class of orthogonal Bianchi class B perfect fluids and vacuum spacetimes, by showing that unboundedness of certain curvature invariants such as the Kretschmann scalar is a generic property. The only spacetimes where this scalar remains bounded exhibit local rotational symmetry or are of plane wave equilibrium type. We further investigate the qualitative behaviour of solutions towards the initial singularity. To this end, we work in the expansion-normalised variables introduced by Hewitt–Wainwright and show that a set of full measure, which is also a countable intersection of open and dense sets in the state space, yields convergence to a specific subarc of the Kasner parabola. We further give an explicit construction enabling the translation between these variables and geometric initial data to Einstein’s equations.
PubDate: 2019-03-01
• Asymptotically Hyperbolic 3-Metric with Ricci Flow Foliation
• Abstract: Abstract In general relativity, there have been a number of successful constructions for asymptotically flat metrics with a certain background foliation. In particular, Lin (Calc Var 49(3–4):1309–1335, 2014) used a foliation by the Ricci flow on 2-spheres to establish an asymptotically flat extension, and Sormani and Lin (Ann Henri Poincaré 17(10):2783–2800, 2016) proved useful results with this extension. In this paper, we construct asymptotically hyperbolic 3-metrics with the Ricci flow foliation inspired by Lin’s work. We also study the rigid case when the Hawking mass of the inner surface of the manifold agrees with its total mass as in Sormani and Lin (2016).
PubDate: 2019-03-01
• The Minkowski Formula and the Quasi-Local Mass
• Abstract: Abstract In this article, we estimate the quasi-local energy with reference to the Minkowski spacetime (Wang and Yau in Phys Rev Lett 102(2):021101, 2009; Commun Math Phys 288(3):919–942, 2009), the anti-de Sitter spacetime (Chen et al. in Commun Anal Geom, 2016. arXiv:1603.02975), or the Schwarzschild spacetime (Chen et al. in Adv Theor Math Phys 22(1):1–23, 2018). In each case, the reference spacetime admits a conformal Killing–Yano 2-form which facilitates the application of the Minkowski formula in Wang et al. (J Differ Geom 105(2):249–290, 2017) to estimate the quasi-local energy. As a consequence of the positive mass theorems in Liu and Yau (J Am Math Soc 19(1):181–204, 2006) and Shi and Tam (Class Quantum Gravity 24(9):2357–2366, 2007) and the above estimate, we obtain rigidity theorems which characterize the Minkowski spacetime and the hyperbolic space.
PubDate: 2019-03-01
• On Wick Polynomials of Boson Fields in Locally Covariant Algebraic QFT
• Abstract: Abstract This work presents some results about Wick polynomials of a vector field renormalization in locally covariant algebraic quantum field theory in curved spacetime. General vector fields are pictured as sections of natural vector bundles over globally hyperbolic spacetimes and quantized through the known functorial machinery in terms of local $$*$$ -algebras. These quantized fields may be defined on spacetimes with given classical background fields, also sections of natural vector bundles, in addition to the Lorentzian metric. The mass and the coupling constants are in particular viewed as background fields. Wick powers of the quantized vector field are axiomatically defined imposing in particular local covariance, scaling properties, and smooth dependence on smooth perturbation of the background fields. A general classification theorem is established for finite renormalization terms (or counterterms) arising when comparing different solutions satisfying the defining axioms of Wick powers. The result is specialized to the case of general tensor fields. In particular, the case of a vector Klein–Gordon field and the case of a scalar field renormalized together with its derivatives are discussed as examples. In each case, a more precise statement about the structure of the counterterms is proved. The finite renormalization terms turn out to be finite-order polynomials tensorially and locally constructed with the backgrounds fields and their covariant derivatives whose coefficients are locally smooth functions of polynomial scalar invariants constructed from the so-called marginal subset of the background fields. The notion of local smooth dependence on polynomial scalar invariants is made precise in the text. Our main technical tools are based on the Peetre–Slovák theorem characterizing differential operators and on the classification of smooth invariants on representations of reductive Lie groups.
PubDate: 2019-03-01
• Thermal State with Quadratic Interaction
• Abstract: Abstract We consider the perturbative construction, proposed in Fredenhagen and Lindner (Commun Math Phys 332:895, 2014), for a thermal state $$\Omega _{\beta ,\lambda V\{f\}}$$ for the theory of a real scalar Klein–Gordon field $$\phi$$ with interacting potential $$V\{f\}$$ . Here, f is a space-time cut-off of the interaction V, and $$\lambda$$ is a perturbative parameter. We assume that V is quadratic in the field $$\phi$$ and we compute the adiabatic limit $$f\rightarrow 1$$ of the state $$\Omega _{\beta ,\lambda V\{f\}}$$ . The limit is shown to exist; moreover, the perturbative series in $$\lambda$$ sums up to the thermal state for the corresponding (free) theory with potential V. In addition, we exploit the same methods to address a similar computation for the non-equilibrium steady state (NESS) Ruelle (J Stat Phys 98:57–75, 2000) recently constructed in Drago et al. (Commun Math Phys 357:267, 2018).
PubDate: 2019-03-01
• On the Hidden Mechanism Behind Non-uniqueness for the Anisotropic
Calderón Problem with Data on Disjoint Sets
• Abstract: Abstract We show that there is generically non-uniqueness for the anisotropic Calderón problem at fixed frequency when the Dirichlet and Neumann data are measured on disjoint sets of the boundary of a given domain. More precisely, we first show that given a smooth compact connected Riemannian manifold with boundary (M, g) of dimension $$n\ge 3$$ , there exist in the conformal class of g an infinite number of Riemannian metrics $$\tilde{g}$$ such that their corresponding DN maps at a fixed frequency coincide when the Dirichlet data $$\Gamma _D$$ and Neumann data $$\Gamma _N$$ are measured on disjoint sets and satisfy $$\overline{\Gamma _D \cup \Gamma _N} \ne \partial M$$ . The conformal factors that lead to these non-uniqueness results for the anisotropic Calderón problem satisfy a nonlinear elliptic PDE of Yamabe type on the original manifold (M, g) and are associated with a natural but subtle gauge invariance of the anisotropic Calderón problem with data on disjoint sets. We then construct a large class of counterexamples to uniqueness in dimension $$n\ge 3$$ to the anisotropic Calderón problem at fixed frequency with data on disjoint sets and modulo this gauge invariance. This class consists in cylindrical Riemannian manifolds with boundary having two ends (meaning that the boundary has two connected components), equipped with a suitably chosen warped product metric.
PubDate: 2019-03-01
• When Do Composed Maps Become Entanglement Breaking?
• Abstract: For many completely positive maps repeated compositions will eventually become entanglement breaking. To quantify this behaviour we develop a technique based on the Schmidt number: If a completely positive map breaks the entanglement with respect to any qubit ancilla, then applying it to part of a bipartite quantum state will result in a Schmidt number bounded away from the maximum possible value. Iterating this result puts a successively decreasing upper bound on the Schmidt number arising in this way from compositions of such a map. By applying this technique to completely positive maps in dimension three that are also completely copositive we prove the so-called PPT squared conjecture in this dimension. We then give more examples of completely positive maps where our technique can be applied, e.g. maps close to the completely depolarizing map, and maps of low rank. Finally, we study the PPT squared conjecture in more detail, establishing equivalent conjectures related to other parts of quantum information theory, and we prove the conjecture for Gaussian quantum channels.
PubDate: 2019-02-14
• A Scattering Theory for Linear Waves on the Interior of
Reissner–Nordström Black Holes
• Abstract: We develop a scattering theory for the linear wave equation $$\Box _g \psi = 0$$ on the interior of Reissner–Nordström black holes, connecting the fixed frequency picture to the physical space picture. Our main result gives the existence, uniqueness and asymptotic completeness of finite energy scattering states. The past and future scattering states are represented as suitable traces of the solution $$\psi$$ on the bifurcate event and Cauchy horizons. The heart of the proof is to show that after separation of variables one has uniform boundedness of the reflection and transmission coefficients of the resulting radial o.d.e. over all frequencies $$\omega$$ and $$\ell$$. This is non-trivial because the natural T conservation law is sign-indefinite in the black hole interior. In the physical space picture, our results imply that the Cauchy evolution from the event horizon to the Cauchy horizon is a Hilbert space isomorphism, where the past (resp. future) Hilbert space is defined by the finiteness of the degenerate T energy fluxes on both components of the event (resp. Cauchy) horizon. Finally, we prove that, in contrast to the above, for a generic set of cosmological constants $$\Lambda$$, there is no analogous finite T energy scattering theory for either the linear wave equation or the Klein–Gordon equation with conformal mass on the (anti-) de Sitter–Reissner–Nordström interior.
PubDate: 2019-02-13
• Stochastic Spikes and Poisson Approximation of One-Dimensional Stochastic
Differential Equations with Applications to Continuously Measured Quantum
Systems
• Abstract: Motivated by the recent contribution (Bauer and Bernard in Annales Henri Poincaré 19:653–693, 2018), we study the scaling limit behavior of a class of one-dimensional stochastic differential equations which has a unique attracting point subject to a small additional repulsive perturbation. Problems of this type appear in the analysis of continuously monitored quantum systems. We extend the results of Bauer and Bernard (Annales Henri Poincaré 19:653–693, 2018) and prove a general result concerning the convergence to a homogeneous Poisson process using only classical probabilistic tools.
PubDate: 2019-02-11
# How do you find domain and range for y=2x+3?
For a line with equation $y = mx + b$ and $m \neq 0$ — here $m = 2$ — both the domain and the range are the set of all real numbers.
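To see why, a standard check is to solve for $x$ in terms of $y$:

$y = 2x + 3 \quad\Longleftrightarrow\quad x = \frac{y - 3}{2}$

The expression $2x + 3$ is defined for every real input $x$, so the domain is all of $\mathbb{R}$; and since every real $y$ yields a valid $x$, every real number is attained, so the range is also all of $\mathbb{R}$.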
[Lilypond-auto] [LilyIssues-auto] [testlilyissues:issues] Re: #5031 Bar
From: Auto mailings of changes to Lily Issues via Testlilyissues-auto
Subject: [Lilypond-auto] [LilyIssues-auto] [testlilyissues:issues] Re: #5031 Bar numbering in volta repeats: add option to count once or as often as played
Date: Fri, 28 Dec 2018 00:15:17 -0000
The two issues are unrelated, since 3038 talks about `\repeat unfold`.
[issues:#5031] Bar numbering in volta repeats: add option to count once or as often as played
Status: Accepted
Created: Sat Jan 14, 2017 04:11 PM UTC by Simon Albrecht
Last Updated: Thu Dec 27, 2018 07:27 PM UTC
Owner: nobody
Attachments:
There are different ideas of bar numbering: counting bars as written or as played/heard. In other words: volta repeats might be counted only once or as often as they are played. LilyPond currently supports only the former (with alternatives already fixed as of issue 2059), but the latter is used by respectable publications such as the Neue Bach-Ausgabe (1) and I’d like for Lily to support it as well.
Attached is a sketch of what it should look like.
(1) (though not in entirety)
# 8.22: Problem Solving Plan Proportions
Difficulty Level: At Grade | Created by: CK-12
Estimated time to complete: 10 min
Takeru Kobayashi is a competitive eater famous for eating hot dogs among other things. Once, he ate 110 hot dogs in 10 minutes at the New York State Fair. How many hot dogs can Kobayashi eat in 1 minute?
In this concept, you will learn the problem solving strategy: use a proportion.
### Guidance
You can solve ratio problems using proportions. A proportion is an equation that shows two equivalent ratios. In order to use a proportion, the two ratios must compare the same kinds of quantities, in the same units.
Let’s look at a problem.
A car travels 55 miles in two hours. A bus travels 85 kilometers in two hours. Which vehicle traveled a farther distance?
This problem compares speed per hour; however, it is comparing miles to kilometers. The units are not the same and you cannot use a proportion to solve this problem without converting the units first.
Let’s look at another problem.
A cheetah can run 75 miles per hour. If you could run three times as fast as a cheetah, how fast would you be able to run?
The first ratio compares the cheetah’s speed per hour. The second ratio compares a person’s speed per hour. Both ratios can be written as miles per hour. You can use a proportion to solve this problem.
First, write a proportion showing the comparison.
$$\frac{\text{cheetah's speed}}{\text{number of hours}}= \frac{\text{person's speed}}{\text{number of hours}}$$
Next, take the data and fill it into the proportion.
$$\frac{75}{1}=\frac{x}{3}$$
The cheetah runs 75 miles per hour. Per means “divided by” and “hour” refers to 1 hour. The numerator is 75 and the denominator is 1. The person runs three times as fast, so he or she would go as far in 1 hour as a cheetah would in 3 hours. The denominator is 3. The person’s speed is unknown, so it is represented with the variable $x$.
Then, solve the proportion using cross products.
$$\begin{array}{rcl} x & = & 75(3)\\ x & = & 225 \end{array}$$
If a person ran three times as fast as a cheetah, he or she would run 225 mph.
### Guided Practice
Use a proportion to solve the following problem.
If a person can run 3 miles in 20 minutes, how long will it take the same person to run 12 miles at the same rate?
First, set up the proportion.
$$\frac{\text{miles}}{\text{minutes}}= \frac{\text{miles}}{\text{minutes}}$$
Next, fill in the given information.
$$\frac{3}{20} = \frac{12}{x}$$
Now we cross multiply and solve for $x$.
$$\begin{array}{rcl} 3x & = & 20(12)\\ 3x & = & 240\\ x & = & 80 \end{array}$$
The person would run 12 miles in 80 minutes or 1 hour and 20 minutes.
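The cross-product recipe used throughout this lesson can be sketched in a few lines of Python. This is only an illustrative helper — the function name `solve_proportion` is mine, not from the lesson:

```python
def solve_proportion(known_num, known_den, other_den):
    """Solve known_num/known_den = x/other_den for x via cross products."""
    # Cross multiply: known_num * other_den = known_den * x
    return known_num * other_den / known_den

# Cheetah example from the Guidance section: 75/1 = x/3
print(solve_proportion(75, 1, 3))    # 225.0 miles per hour

# Guided Practice: 3/20 = 12/x -- here the unknown is a denominator,
# so rearrange the cross product directly: x = 20 * 12 / 3
print(20 * 12 / 3)                   # 80.0 minutes
```

The same helper handles any problem in this lesson where the unknown sits in a numerator; when the unknown is a denominator, rearrange the cross product as in the second call.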
### Examples
Use the information below to answer the following questions.
A cheetah runs 75 miles per hour.
#### Example 1
If you could run twice as fast as a cheetah, how fast could you run?
First, set up the proportion.
$$\frac{\text{miles}}{\text{hour}}= \frac{\text{miles}}{\text{hour}}$$
Next, fill in the given information.
$$\frac{75}{1} = \frac{x}{2}$$
Then, cross multiply and solve for $x$.
$$\begin{array}{rcl} x & = & 75(2)\\ x & = & 150 \end{array}$$
You could run 150 miles per hour.
#### Example 2
If you could run half as fast as a cheetah, how fast could you run?
First, set up the proportion.
$$\frac{\text{miles}}{\text{hour}}= \frac{\text{miles}}{\text{hour}}$$
Next, fill in the given information.
$$\frac{75}{1} = \frac{x}{0.5}$$
Then, cross multiply and solve for $x$.
$$\begin{array}{rcl} x & = & 75(0.5)\\ x & = & 37.5 \end{array}$$
You could run 37.5 miles per hour.
#### Example 3
If you could run four times as fast as a cheetah, how fast could you run?
First, set up the proportion.
$$\frac{\text{miles}}{\text{hour}}= \frac{\text{miles}}{\text{hour}}$$
Next, fill in the given information.
$$\frac{75}{1} = \frac{x}{4}$$
Then, cross multiply and solve for $x$.
$$\begin{array}{rcl} x & = & 75(4)\\ x & = & 300 \end{array}$$
You could run 300 miles per hour.
Remember the competitive eater Takeru Kobayashi?
He once ate 110 hot dogs in 10 minutes. To find out how many hot dogs he can eat in 1 minute, write a proportion and solve for $x$.
First, set up the proportion.
$$\frac{\text{hot dogs}}{\text{minutes}}= \frac{\text{hot dogs}}{\text{minutes}}$$
Next, fill in the given information.
$$\frac{110}{10} = \frac{x}{1}$$
Then, cross multiply and solve for $x$.
$$\begin{array}{rcl} 10x & = & 110(1)\\ 10x & = & 110\\ x & = & 11 \end{array}$$
Takeru Kobayashi can eat 11 hot dogs in 1 minute.
### Explore More
Solve each word problem by using a proportion.
1. In a diagram for the new garden, one inch is equal to 3 feet. If this is the case, how many feet is the actual garden edge if the measurement on the diagram is 5 inches?
2. If two inches on a map are equal to three miles, how many miles are represented by four inches?
3. If eight inches on a map are equal to ten miles, how many miles are 16 inches equal to?
4. Casey drew a design for bedroom. On the picture, she used one inch to represent five feet. If her bedroom wall is ten feet long, how many inches will Casey draw on her diagram to represent this measurement?
5. If two inches are equal to twelve feet, how many inches would be equal to 36 feet?
6. If four inches are equal to sixteen feet, how many feet are two inches equal to?
7. The carpenter chose a scale of 6" for every twelve feet. Given this measurement, how many feet would be represented by 3"?
8. If 9 inches are equal to 27 feet, how many feet are equal to three inches?
9. If four inches are equal to 8 feet, how many feet are equal to two inches?
10. If six inches are equal to ten feet, how many inches are five feet equal to?
11. If four inches are equal to twelve feet, how many inches are equal to six feet?
12. For every 20 feet of fence, John drew 10 inches on his plan. If the real fence is only 5 feet long, how many inches will John draw on his plan?
13. If eight inches are equal to twelve feet, how many inches are equal to six feet?
14. How many inches are equal to 20 feet if 4 inches are equal to 10 feet?
15. How many inches are equal to 8 feet if six inches are equal to 16 feet?
16. Nine inches are equal to twelve feet, so how many inches are equal to 4 feet?
17. If a person runs two miles in twelve minutes, how long will it take them to run 4 miles at the same rate?
18. A person runs 1 mile in 16 minutes. Given this information, how long will it take him/her to run 3 miles?
19. If a person runs two miles in twenty minutes, at what rate does he/she run one mile?
### Answers for Explore More Problems
To view the Explore More answers, open this PDF file and look for section 8.22.
### Vocabulary
Problem Solving
Problem solving is using key words and operations to solve mathematical dilemmas written in verbal language.
Proportion
A proportion is an equation that shows two equivalent ratios.
Date Created: Oct 29, 2012
Last Modified: Sep 04, 2016
# BRST quantization
In theoretical physics, BRST quantization (where the BRST refers to Becchi, Rouet, Stora and Tyutin) denotes a relatively rigorous mathematical approach to quantizing a field theory with a gauge symmetry. Quantization rules in earlier QFT frameworks resembled "prescriptions" or "heuristics" more than proofs, especially in non-abelian QFT, where the use of "ghost fields" with superficially bizarre properties is almost unavoidable for technical reasons related to renormalization and anomaly cancellation.
The BRST global supersymmetry introduced in the mid-1970s was quickly understood to rationalize the introduction of these Faddeev–Popov ghosts and their exclusion from "physical" asymptotic states when performing QFT calculations. Crucially, this symmetry of the path integral is preserved at loop order, and thus prevents introduction of counterterms which might spoil renormalizability of gauge theories. Work by other authors a few years later related the BRST operator to the existence of a rigorous alternative to path integrals when quantizing a gauge theory.
Only in the late 1980s, when QFT was reformulated in fiber bundle language for application to problems in the topology of low-dimensional manifolds, did it become apparent that the BRST "transformation" is fundamentally geometrical in character. In this light, "BRST quantization" becomes more than an alternate way to arrive at anomaly-cancelling ghosts. It is a different perspective on what the ghost fields represent, why the Faddeev–Popov method works, and how it is related to the use of Hamiltonian mechanics to construct a perturbative framework. The relationship between gauge invariance and "BRST invariance" forces the choice of a Hamiltonian system whose states are composed of "particles" according to the rules familiar from the canonical quantization formalism. This esoteric consistency condition therefore comes quite close to explaining how quanta and fermions arise in physics to begin with.
In certain cases, notably gravity and supergravity, BRST must be superseded by a more general formalism, the Batalin–Vilkovisky formalism.
## Technical summary
BRST quantization (or the BRST formalism) is a differential geometric approach to performing consistent, anomaly-free perturbative calculations in a non-abelian gauge theory. The analytical form of the BRST "transformation" and its relevance to renormalization and anomaly cancellation were described by Carlo Maria Becchi, Alain Rouet, and Raymond Stora in a series of papers culminating in the 1976 "Renormalization of gauge theories". The equivalent transformation and many of its properties were independently discovered by Igor Viktorovich Tyutin. Its significance for rigorous canonical quantization of a Yang–Mills theory and its correct application to the Fock space of instantaneous field configurations were elucidated by Kugo Taichiro and Ojima Izumi. Later work by many authors, notably Thomas Schücker and Edward Witten, has clarified the geometric significance of the BRST operator and related fields and emphasized its importance to topological quantum field theory and string theory.
In the BRST approach, one selects a perturbation-friendly gauge fixing procedure for the action principle of a gauge theory using the differential geometry of the gauge bundle on which the field theory lives. One then quantizes the theory to obtain a Hamiltonian system in the interaction picture in such a way that the "unphysical" fields introduced by the gauge fixing procedure resolve gauge anomalies without appearing in the asymptotic states of the theory. The result is a set of Feynman rules for use in a Dyson series perturbative expansion of the S-matrix which guarantee that it is unitary and renormalizable at each loop order—in short, a coherent approximation technique for making physical predictions about the results of scattering experiments.
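Concretely, for a Yang–Mills field $A_\mu^a$ with structure constants $f^{abc}$, ghost $c^a$, antighost $\bar c^a$, and Nakanishi–Lautrup auxiliary field $B^a$, the BRST transformation $s$ takes the following standard form (signs and factors of the coupling $g$ vary between references):

$s A_\mu^a = \partial_\mu c^a + g f^{abc} A_\mu^b c^c, \qquad s c^a = -\tfrac{g}{2} f^{abc} c^b c^c, \qquad s \bar c^a = B^a, \qquad s B^a = 0,$

with the defining nilpotency property $s^2 = 0$ on all fields.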
### Classical BRST
This is related to a supersymplectic manifold where pure operators are graded by integral ghost numbers and we have a BRST cohomology.
## Gauge transformations in QFT
From a practical perspective, a quantum field theory consists of an action principle and a set of procedures for performing perturbative calculations. There are other kinds of "sanity checks" that can be performed on a quantum field theory to determine whether it fits qualitative phenomena such as quark confinement and asymptotic freedom. However, most of the predictive successes of quantum field theory, from quantum electrodynamics to the present day, have been quantified by matching S-matrix calculations against the results of scattering experiments.
In the early days of QFT, one would have to have said that the quantization and renormalization prescriptions were as much part of the model as the Lagrangian density, especially when they relied on the powerful but mathematically ill-defined path integral formalism. It quickly became clear that QED was almost "magical" in its relative tractability, and that most of the ways that one might imagine extending it would not produce rational calculations. However, one class of field theories remained promising: gauge theories, in which the objects in the theory represent equivalence classes of physically indistinguishable field configurations, any two of which are related by a gauge transformation. This generalizes the QED idea of a local change of phase to a more complicated Lie group.
QED itself is a gauge theory, as is general relativity, although the latter has proven resistant to quantization so far, for reasons related to renormalization. Another class of gauge theories with a non-Abelian gauge group, beginning with Yang–Mills theory, became amenable to quantization in the late 1960s and early 1970s, largely due to the work of Ludwig D. Faddeev, Victor Popov, Bryce DeWitt, and Gerardus 't Hooft. However, they remained very difficult to work with until the introduction of the BRST method. The BRST method provided the calculation techniques and renormalizability proofs needed to extract accurate results from both "unbroken" Yang–Mills theories and those in which the Higgs mechanism leads to spontaneous symmetry breaking. Representatives of these two types of Yang–Mills systems—quantum chromodynamics and electroweak theory—appear in the Standard Model of particle physics.
It has proven rather more difficult to prove the existence of non-Abelian quantum field theory in a rigorous sense than to obtain accurate predictions using semi-heuristic calculation schemes. This is because analyzing a quantum field theory requires two mathematically interlocked perspectives: a Lagrangian system based on the action functional, composed of fields with distinct values at each point in spacetime and local operators which act on them, and a Hamiltonian system in the Dirac picture, composed of states which characterize the entire system at a given time and field operators which act on them. What makes this so difficult in a gauge theory is that the objects of the theory are not really local fields on spacetime; they are right-invariant local fields on the principal gauge bundle, and different local sections through a portion of the gauge bundle, related by passive transformations, produce different Dirac pictures.
What is more, a description of the system as a whole in terms of a set of fields contains many redundant degrees of freedom; the distinct configurations of the theory are equivalence classes of field configurations, so that two descriptions which are related to one another by an active gauge transformation are also really the same physical configuration. The "solutions" of a quantized gauge theory exist not in a straightforward space of fields with values at every point in spacetime but in a quotient space (or cohomology) whose elements are equivalence classes of field configurations. Hiding in the BRST formalism is a system for parameterizing the variations associated with all possible active gauge transformations and correctly accounting for their physical irrelevance during the conversion of a Lagrangian system to a Hamiltonian system.
### Gauge fixing and perturbation theory
The principle of gauge invariance is essential to constructing a workable quantum field theory. But it is generally not feasible to perform a perturbative calculation in a gauge theory without first "fixing the gauge"—adding terms to the Lagrangian density of the action principle which "break the gauge symmetry" to suppress these "unphysical" degrees of freedom. The idea of gauge fixing goes back to the Lorenz gauge approach to electromagnetism, which suppresses most of the excess degrees of freedom in the four-potential while retaining manifest Lorentz invariance. The Lorenz gauge is a great simplification relative to Maxwell's field-strength approach to classical electrodynamics, and illustrates why it is useful to deal with excess degrees of freedom in the representation of the objects in a theory at the Lagrangian stage, before passing over to Hamiltonian mechanics via the Legendre transform.
The Hamiltonian density is related to the Lie derivative of the Lagrangian density with respect to a unit timelike horizontal vector field on the gauge bundle. In a quantum mechanical context it is conventionally rescaled by a factor $i \hbar$. Integrating it by parts over a spacelike cross section recovers the form of the integrand familiar from canonical quantization. Because the definition of the Hamiltonian involves a unit time vector field on the base space, a horizontal lift to the bundle space, and a spacelike surface "normal" (in the Minkowski metric) to the unit time vector field at each point on the base manifold, it is dependent both on the connection and the choice of Lorentz frame, and is far from being globally defined. But it is an essential ingredient in the perturbative framework of quantum field theory, into which the quantized Hamiltonian enters via the Dyson series.
For perturbative purposes, we gather the configuration of all the fields of our theory on an entire three-dimensional horizontal spacelike cross section of P into one object (a Fock state), and then describe the "evolution" of this state over time using the interaction picture. The Fock space is spanned by the multi-particle eigenstates of the "unperturbed" or "non-interaction" portion $\mathcal{H}_0$ of the Hamiltonian $\mathcal{H}$. Hence the instantaneous description of any Fock state is a complex-amplitude-weighted sum of eigenstates of $\mathcal{H}_0$. In the interaction picture, we relate Fock states at different times by prescribing that each eigenstate of the unperturbed Hamiltonian experiences a constant rate of phase rotation proportional to its energy (the corresponding eigenvalue of the unperturbed Hamiltonian).
Hence, in the zero-order approximation, the set of weights characterizing a Fock state does not change over time, but the corresponding field configuration does. In higher approximations, the weights also change; collider experiments in high-energy physics amount to measurements of the rate of change in these weights (or rather integrals of them over distributions representing uncertainty in the initial and final conditions of a scattering event). The Dyson series captures the effect of the discrepancy between $\mathcal{H}_0$ and the true Hamiltonian $\mathcal{H}$, in the form of a power series in the coupling constant g; it is the principal tool for making quantitative predictions from a quantum field theory.
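In standard conventions, the Dyson series referred to here is the time-ordered exponential of the interaction-picture Hamiltonian (factors of $\hbar$ and signs depend on conventions):

$U(t, t_0) = \mathcal{T} \exp\left( -\frac{i}{\hbar} \int_{t_0}^{t} dt'\, H_{\mathrm{int}}(t') \right) = \sum_{n=0}^{\infty} \frac{1}{n!} \left( \frac{-i}{\hbar} \right)^{n} \int_{t_0}^{t} dt_1 \cdots \int_{t_0}^{t} dt_n \, \mathcal{T}\, H_{\mathrm{int}}(t_1) \cdots H_{\mathrm{int}}(t_n),$

whose expansion in powers of the coupling generates the perturbative S-matrix.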
To use the Dyson series to calculate anything, one needs more than a gauge-invariant Lagrangian density; one also needs the quantization and gauge fixing prescriptions that enter into the Feynman rules of the theory. The Dyson series produces infinite integrals of various kinds when applied to the Hamiltonian of a particular QFT. This is partly because all usable quantum field theories to date must be considered effective field theories, describing only interactions on a certain range of energy scales that we can experimentally probe and therefore vulnerable to ultraviolet divergences. These are tolerable as long as they can be handled via standard techniques of renormalization; they are not so tolerable when they result in an infinite series of infinite renormalizations or, worse, in an obviously unphysical prediction such as an uncancelled gauge anomaly. There is a deep relationship between renormalizability and gauge invariance, which is easily lost in the course of attempts to obtain tractable Feynman rules by fixing the gauge.
### Pre-BRST approaches to gauge fixing
The traditional gauge fixing prescriptions of continuum electrodynamics select a unique representative from each gauge-transformation-related equivalence class using a constraint equation such as the Lorenz gauge $\partial^\mu A_\mu = 0$. This sort of prescription can be applied to an Abelian gauge theory such as QED, although it results in some difficulty in explaining why the Ward identities of the classical theory carry over to the quantum theory—in other words, why Feynman diagrams containing internal longitudinally polarized virtual photons do not contribute to S-matrix calculations. This approach also does not generalize well to non-Abelian gauge groups such as the SU(2) of Yang–Mills and electroweak theory and the SU(3) of quantum chromodynamics. It suffers from Gribov ambiguities and from the difficulty of defining a gauge fixing constraint that is in some sense "orthogonal" to physically significant changes in the field configuration.
More sophisticated approaches do not attempt to apply a delta function constraint to the gauge transformation degrees of freedom. Instead of "fixing" the gauge to a particular "constraint surface" in configuration space, one can break the gauge freedom with an additional, non-gauge-invariant term added to the Lagrangian density. In order to reproduce the successes of gauge fixing, this term is chosen to be minimal for the choice of gauge that corresponds to the desired constraint and to depend quadratically on the deviation of the gauge from the constraint surface. By the stationary phase approximation on which the Feynman path integral is based, the dominant contribution to perturbative calculations will come from field configurations in the neighborhood of the constraint surface.
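As a concrete instance of such a quadratic gauge-breaking term, the familiar choice for a gauge field $A_\mu^a$ adds to the Lagrangian density (in standard notation, with gauge parameter $\xi$):

$\mathcal{L}_{\mathrm{gf}} = -\frac{1}{2\xi} \left( \partial^\mu A_\mu^a \right)^2,$

which vanishes exactly on the Lorenz-type constraint surface $\partial^\mu A_\mu^a = 0$ and penalizes deviations from it quadratically.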
The perturbative expansion associated with this Lagrangian, using the method of functional quantization, is generally referred to as the Rξ gauge. It reduces in the case of an Abelian U(1) gauge to the same set of Feynman rules that one obtains in the method of canonical quantization. But there is an important difference: the broken gauge freedom appears in the functional integral as an additional factor in the overall normalization. This factor can only be pulled out of the perturbative expansion (and ignored) when the contribution to the Lagrangian of a perturbation along the gauge degrees of freedom is independent of the particular "physical" field configuration. This is the condition that fails to hold for non-Abelian gauge groups. If one ignores the problem and attempts to use the Feynman rules obtained from "naive" functional quantization, one finds that one's calculations contain unremovable anomalies.
The problem of perturbative calculations in QCD was solved by introducing additional fields known as Faddeev–Popov ghosts, whose contribution to the gauge-fixed Lagrangian offsets the anomaly introduced by the coupling of "physical" and "unphysical" perturbations of the non-Abelian gauge field. From the functional quantization perspective, the "unphysical" perturbations of the field configuration (the gauge transformations) form a subspace of the space of all (infinitesimal) perturbations; in the non-Abelian case, the embedding of this subspace in the larger space depends on the configuration around which the perturbation takes place. The ghost term in the Lagrangian represents the functional determinant of the Jacobian of this embedding, and the properties of the ghost field are dictated by the exponent desired on the determinant in order to correct the functional measure on the remaining "physical" perturbation axes.
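Schematically, the functional determinant mentioned here is rewritten as a Gaussian integral over anticommuting ghost fields $c, \bar c$, for a gauge-fixing functional $G$ and gauge transformation parameter $\alpha$ (conventions vary):

$\det\left( \frac{\delta G(A^{\alpha})}{\delta \alpha} \right) = \int \mathcal{D}c \, \mathcal{D}\bar{c} \; \exp\left( i \int d^4x \; \bar{c}\, \frac{\delta G(A^{\alpha})}{\delta \alpha}\, c \right),$

since a Gaussian integral over Grassmann-valued fields produces the determinant itself, rather than its inverse, as its value.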
## Mathematical approach to BRST
The BRST construction[1][2] applies to a situation of a hamiltonian action of a compact, connected Lie group G on a phase space M. Let $\mathfrak{g}$ be the Lie algebra of G and $0 \in \mathfrak{g}^*$ a regular value of the moment map $\Phi: M\to \mathfrak{g}^*$. Let $M_0=\Phi^{-1}(0)$. Assume the G-action on M0 is free and proper, and consider the space $\widetilde M = M_0/G$ of G-orbits on M0, which is also known as the symplectic reduction quotient $\widetilde M = M//G$.
First, using the regular sequence of functions defining M0 inside M, construct a Koszul complex
$\Lambda^\cdot {\mathfrak g} \otimes C^{\infty}(M).$
The differential, δ, on this complex is an odd $C^{\infty}(M)$-linear derivation of the graded $C^{\infty}(M)$-algebra $\Lambda^\cdot {\mathfrak g} \otimes C^{\infty}(M)$. This odd derivation is defined by extending the Lie algebra homomorphism ${\mathfrak g}\to C^{\infty}(M)$ of the hamiltonian action. The resulting Koszul complex is the Koszul complex of the $S({\mathfrak g})$-module $C^{\infty}(M)$, where $S(\mathfrak{g})$ is the symmetric algebra of $\mathfrak{g}$, and the module structure comes from a ring homomorphism $S({\mathfrak g}) \to C^{\infty}(M)$ induced by the hamiltonian action $\mathfrak{g} \to C^{\infty}(M)$.
This Koszul complex is a resolution of the $S({\mathfrak g})$-module $C^{\infty}(M_0)$, i.e.,
$H^{j}(\Lambda^\cdot {\mathfrak g} \otimes C^{\infty}(M),\delta) = \begin{cases} C^{\infty}(M_0) & j = 0 \\ 0 & j \neq 0 \end{cases}$
Then, consider the Chevalley-Eilenberg cochain complex for the Koszul complex $\Lambda^\cdot {\mathfrak g} \otimes C^{\infty}(M)$ considered as a dg module over the Lie algebra $\mathfrak{g}$:
$K^{\cdot,\cdot} = C^\cdot \left (\mathfrak g,\Lambda^\cdot {\mathfrak g} \otimes C^{\infty}(M) \right ) = \Lambda^\cdot {\mathfrak g}^* \otimes \Lambda^\cdot {\mathfrak g} \otimes C^{\infty}(M).$
The "horizontal" differential $d: K^{i,\cdot} \to K^{i+1,\cdot}$ is defined on the coefficients
$\Lambda^\cdot {\mathfrak g} \otimes C^{\infty}(M)$
by the action of $\mathfrak{g}$ and on $\Lambda^\cdot {\mathfrak g}^*$ as the exterior derivative of right-invariant differential forms on the Lie group G, whose Lie algebra is $\mathfrak{g}$.
Let Tot(K) be a complex such that
$\operatorname{Tot}(K)^n =\bigoplus\nolimits_{i-j=n} K^{i,j}$
with a differential D = d + δ. The cohomology groups of (Tot(K), D) are computed using a spectral sequence associated to the double complex $(K^{\cdot,\cdot}, d, \delta)$.
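Here D is again a differential because, with the usual sign conventions on a double complex, d and δ are each nilpotent and anticommute with one another:

$D^2 = (d + \delta)^2 = d^2 + (d\delta + \delta d) + \delta^2 = 0.$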
The first term of the spectral sequence computes the cohomology of the "vertical" differential δ:
$E_1^{i,j} = H^j (K^{i,\cdot},\delta) = \Lambda^i {\mathfrak g}^* \otimes C^{\infty}(M_0)$, if j = 0 and zero otherwise.
The first term of the spectral sequence may be interpreted as the complex of vertical differential forms
$(\Omega^\cdot_{\operatorname{vert}}(M_0), d_{\operatorname{vert}})$
for the fiber bundle $M_0 \to \widetilde M$.
The second term of the spectral sequence computes the cohomology of the "horizontal" differential d on $E_1^{\cdot,\cdot}$:
$E_2^{i,j} \cong H^i(E_1^{\cdot,j},d) = C^{\infty}(M_0)^{\mathfrak g} = C^{\infty}(\widetilde M)$, if $i = j = 0$ and zero otherwise.
The spectral sequence collapses at the second term, so that $E_{\infty}^{i,j} = E_2^{i,j}$, which is concentrated in degree zero.
Therefore,
$H^p (\operatorname{Tot}(K), D ) = C^{\infty}(M_0)^{\mathfrak g} = C^{\infty}(\widetilde M)$, if p = 0 and 0 otherwise.
## The BRST operator and asymptotic Fock space
Two important remarks about the BRST operator are due. First, instead of working with the gauge group G one can use only the action of the gauge algebra $\mathfrak{g}$ on the fields (functions on the phase space).
Second, the variation of any "BRST exact form" sBX with respect to a local gauge transformation δλ is
$\left [i_{\delta\lambda}, s_B \right ] s_B X = i_{\delta\lambda} (s_B s_B X) + s_B \left (i_{\delta\lambda} (s_B X) \right ) = s_B \left (i_{\delta\lambda} (s_B X) \right ),$
which is itself an exact form.
More importantly for the Hamiltonian perturbative formalism (which is carried out not on the fiber bundle but on a local section), adding a BRST exact term to a gauge invariant Lagrangian density preserves the relation sBX = 0. As we shall see, this implies that there is a related operator QB on the state space for which $[Q_B, \mathcal{H}] = 0$—i. e., the BRST operator on Fock states is a conserved charge of the Hamiltonian system. This implies that the time evolution operator in a Dyson series calculation will not evolve a field configuration obeying $Q_B |\Psi_i\rangle = 0$ into a later configuration with $Q_B |\Psi_f\rangle \neq 0$ (or vice versa).
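The algebraic mechanism behind this conservation law can be sketched in finite dimensions: for any nilpotent Q, a term of the form {Q, K} (the analogue of a BRST exact addition to the Hamiltonian) automatically commutes with Q. The following toy sketch (dimensions and matrices are our own illustration, not part of the formalism) checks this numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy nilpotent "BRST charge": a strictly upper-triangular block guarantees Q @ Q = 0.
n = 4
Q = np.zeros((2 * n, 2 * n))
Q[:n, n:] = rng.standard_normal((n, n))

# An arbitrary "gauge-fixing" operator K yields a BRST exact Hamiltonian term
# H = {Q, K} = QK + KQ, which commutes with Q because Q squares to zero:
K = rng.standard_normal((2 * n, 2 * n))
H = Q @ K + K @ Q

commutator = Q @ H - H @ Q   # = Q^2 K - K Q^2 = 0
print(np.max(np.abs(commutator)))   # numerically zero
```

The identity [Q, {Q, K}] = Q²K − KQ² holds for any K whatsoever, which is why adding a BRST exact gauge-breaking term cannot spoil the conservation of the BRST charge.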
Another way of looking at the nilpotence of the BRST operator is to say that its image (the space of BRST exact forms) lies entirely within its kernel (the space of BRST closed forms). (The "true" Lagrangian, presumed to be invariant under local gauge transformations, is in the kernel of the BRST operator but not in its image.) The preceding argument says that we can limit our universe of initial and final conditions to asymptotic "states"—field configurations at timelike infinity, where the interaction Lagrangian is "turned off"—that lie in the kernel of QB and still obtain a unitary scattering matrix. (BRST closed and exact states are defined similarly to BRST closed and exact fields; closed states are annihilated by QB, while exact states are those obtainable by applying QB to some arbitrary field configuration.)
We can also suppress states that lie inside the image of QB when defining the asymptotic states of our theory—but the reasoning is a bit subtler. Since we have postulated that the "true" Lagrangian of our theory is gauge invariant, the true "states" of our Hamiltonian system are equivalence classes under local gauge transformation; in other words, two initial or final states in the Hamiltonian picture that differ only by a BRST exact state are physically equivalent. However, the use of a BRST exact gauge breaking prescription does not guarantee that the interaction Hamiltonian will preserve any particular subspace of closed field configurations that we can call "orthogonal" to the space of exact configurations. (This is a crucial point, often mishandled in QFT textbooks. There is no a priori inner product on field configurations built into the action principle; we construct such an inner product as part of our Hamiltonian perturbative apparatus.)
We therefore focus on the vector space of BRST closed configurations at a particular time with the intention of converting it into a Fock space of intermediate states suitable for Hamiltonian perturbation. To this end, we shall endow it with ladder operators for the energy-momentum eigenconfigurations (particles) of each field, complete with appropriate (anti-)commutation rules, as well as a positive semi-definite inner product. We require that the inner product be singular exclusively along directions that correspond to BRST exact eigenstates of the unperturbed Hamiltonian. This ensures that one can freely choose, from within the two equivalence classes of asymptotic field configurations corresponding to particular initial and final eigenstates of the (unbroken) free-field Hamiltonian, any pair of BRST closed Fock states that we like.
The desired quantization prescriptions will also provide a quotient Fock space isomorphic to the BRST cohomology, in which each BRST closed equivalence class of intermediate states (differing only by an exact state) is represented by exactly one state that contains no quanta of the BRST exact fields. This is the Fock space we want for asymptotic states of the theory; even though we will not generally succeed in choosing the particular final field configuration to which the gauge-fixed Lagrangian dynamics would have evolved that initial configuration, the singularity of the inner product along BRST exact degrees of freedom ensures that we will get the right entries for the physical scattering matrix.
(Actually, we should probably be constructing a Krein space for the BRST-closed intermediate Fock states, with the time reversal operator playing the role of the "fundamental symmetry" relating the Lorentz-invariant and positive semi-definite inner products. The asymptotic state space is presumably the Hilbert space obtained by quotienting BRST exact states out of this Krein space.)
In sum, no field introduced as part of a BRST gauge fixing procedure will appear in asymptotic states of the gauge-fixed theory. However, this does not imply that we can do without these "unphysical" fields in the intermediate states of a perturbative calculation! This is because perturbative calculations are done in the interaction picture. They implicitly involve initial and final states of the non-interacting Hamiltonian $\mathcal{H}_0$, gradually transformed into states of the full Hamiltonian in accordance with the adiabatic theorem by "turning on" the interaction Hamiltonian (the gauge coupling). The expansion of the Dyson series in terms of Feynman diagrams will include vertices that couple "physical" particles (those that can appear in asymptotic states of the free Hamiltonian) to "unphysical" particles (states of fields that live outside the kernel of sB or inside the image of sB) and vertices that couple "unphysical" particles to one another.
### The Kugo–Ojima answer to unitarity questions
T. Kugo and I. Ojima are commonly credited with the discovery of the principal QCD color confinement criterion. Their role in obtaining a correct version of the BRST formalism in the Lagrangian framework seems to be less widely appreciated. It is enlightening to inspect their variant of the BRST transformation, which emphasizes the hermitian properties of the newly introduced fields, before proceeding from an entirely geometrical angle. The gauge fixed Lagrangian density is below; the two terms in parentheses form the coupling between the gauge and ghost sectors, and the final term becomes a Gaussian weighting for the functional measure on the auxiliary field B.
$\mathcal{L} = \mathcal{L}_\textrm{matter}(\psi,\,A_\mu^a) - \tfrac{1}{4} F^a_{\mu\nu} F^{a,\,\mu\nu} - (i (\partial^\mu \bar{c}^a) D_\mu^{ab} c^b + (\partial^\mu B^a) A_\mu^a) + \tfrac{1}{2} \alpha_0 B^a B^a$
The Faddeev–Popov ghost field c is unique among the new fields of our gauge-fixed theory in having a geometrical meaning beyond the formal requirements of the BRST procedure. It is a version of the Maurer–Cartan form on $V\mathfrak{E}$, which relates each right-invariant vertical vector field $\delta\lambda \in V\mathfrak{E}$ to its representation (up to a phase) as a $\mathfrak{g}$-valued field. This field must enter into the formulas for infinitesimal gauge transformations on objects (such as fermions ψ, gauge bosons Aμ, and the ghost c itself) which carry a non-trivial representation of the gauge group. The BRST transformation with respect to δλ is therefore:
\begin{align} \delta \psi_i &= \delta\lambda D_i c \\ \delta A_\mu &= \delta\lambda D_\mu c \\ \delta c &= - \delta\lambda \tfrac{g}{2} [c, c] \\ \delta \bar{c} &= i \delta\lambda B \\ \delta B &= 0 \end{align}
Here we have omitted the details of the matter sector ψ and left the form of the Ward operator on it unspecified; these are unimportant so long as the representation of the gauge algebra on the matter fields is consistent with their coupling to δAμ. The properties of the other fields we have added are fundamentally analytical rather than geometric. The bias we have introduced towards connections with $\partial^\mu A_\mu = 0$ is gauge-dependent and has no particular geometrical significance. The anti-ghost $\bar{c}$ is nothing but a Lagrange multiplier for the gauge fixing term, and the properties of the scalar field B are entirely dictated by the relationship $\delta \bar{c} = i \delta\lambda B$. (The new fields are all Hermitian in Kugo–Ojima conventions, but the parameter δλ is an anti-Hermitian "anti-commuting c-number". This results in some unnecessary awkwardness with regard to phases and passing infinitesimal parameters through operators; this will be resolved with a change of conventions in the geometric treatment below.)
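Nilpotence of this transformation on the ghost itself, δ(δc) = 0, comes down to the Jacobi identity of the gauge algebra, since applying the transformation twice produces a term proportional to [[c, c], c]. The following sketch (our own illustration, using su(2) with structure constants $f_{abc} = \epsilon_{abc}$) verifies that identity numerically:

```python
import numpy as np
from itertools import product

# Structure constants of su(2): f_abc = epsilon_abc (Levi-Civita symbol).
f = np.zeros((3, 3, 3))
for a, b, c in product(range(3), repeat=3):
    f[a, b, c] = 0.5 * (a - b) * (b - c) * (c - a)

# Jacobi identity: f_ab^e f_ec^d + f_bc^e f_ea^d + f_ca^e f_eb^d = 0,
# which is exactly what makes the double BRST variation of the ghost vanish.
jacobi = (np.einsum('abe,ecd->abcd', f, f)
          + np.einsum('bce,ead->abcd', f, f)
          + np.einsum('cae,ebd->abcd', f, f))
print(np.abs(jacobi).max())  # 0.0
```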
We already know, from the relation of the BRST operator to the exterior derivative and the Faddeev–Popov ghost to the Maurer–Cartan form, that the ghost c corresponds (up to a phase) to a $\mathfrak{g}$-valued 1-form on $V\mathfrak{E}$. In order for integration of a term like $-i (\partial^\mu \bar{c}) D_\mu c$ to be meaningful, the anti-ghost $\bar{c}$ must carry representations of these two Lie algebras—the vertical ideal $V\mathfrak{E}$ and the gauge algebra $\mathfrak{g}$—dual to those carried by the ghost. In geometric terms, $\bar{c}$ must be fiberwise dual to $\mathfrak{g}$ and one rank short of being a top form on $V\mathfrak{E}$. Likewise, the auxiliary field B must carry the same representation of $\mathfrak{g}$ (up to a phase) as $\bar{c}$, as well as the representation of $V\mathfrak{E}$ dual to its trivial representation on Aμ—i. e., B is a fiberwise $\mathfrak{g}$-dual top form on $V\mathfrak{E}$.
Let us focus briefly on the one-particle states of the theory, in the adiabatically decoupled limit g → 0. There are two kinds of quanta in the Fock space of the gauge-fixed Hamiltonian that we expect to lie entirely outside the kernel of the BRST operator: those of the Faddeev–Popov anti-ghost $\bar{c}$ and the forward polarized gauge boson. This is because no combination of fields containing $\bar{c}$ is annihilated by sB and we have added to the Lagrangian a gauge breaking term that is equal up to a divergence to
$s_B \left (\bar{c} \left (i \partial^\mu A_\mu - \tfrac{1}{2} \alpha_0 s_B \bar{c} \right ) \right ).$
Likewise, there are two kinds of quanta that will lie entirely in the image of the BRST operator: those of the Faddeev–Popov ghost c and the scalar field B, which is "eaten" by completing the square in the functional integral to become the backward polarized gauge boson. These are the four types of "unphysical" quanta which will not appear in the asymptotic states of a perturbative calculation—if we get our quantization rules right.
The anti-ghost is taken to be a Lorentz scalar for the sake of Poincaré invariance in $-i (\partial^\mu \bar{c}) D_\mu c$. However, its (anti-)commutation law relative to c—i. e., its quantization prescription, which ignores the spin-statistics theorem by giving Fermi–Dirac statistics to a spin-0 particle—will be given by the requirement that the inner product on our Fock space of asymptotic states be singular along directions corresponding to the raising and lowering operators of some combination of non-BRST-closed and BRST-exact fields. This last statement is the key to "BRST quantization", as opposed to mere "BRST symmetry" or "BRST transformation".
(Needs to be completed in the language of BRST cohomology, with reference to the Kugo–Ojima treatment of asymptotic Fock space.)
## Gauge bundles and the vertical ideal
In order to do the BRST method justice, we must switch from the "algebra-valued fields on Minkowski space" picture typical of quantum field theory texts (and of the above exposition) to the language of fiber bundles, in which there are two quite different ways to look at a gauge transformation: as a change of local section (also known in general relativity as a passive transformation) or as the pullback of the field configuration along a vertical diffeomorphism of the principal bundle. It is the latter sort of gauge transformation that enters into the BRST method. Unlike a passive transformation, it is well-defined globally on a principal bundle with any structure group over an arbitrary manifold. (However, for concreteness and relevance to conventional QFT, this article will stick to the case of a principal gauge bundle with compact fiber over 4-dimensional Minkowski space.)
A principal gauge bundle P over a 4-manifold M is locally isomorphic to U × F, where U ⊂ R4 and the fiber F is isomorphic to a Lie group G, the gauge group of the field theory (this is an isomorphism of manifold structures, not of group structures; there is no special surface in P corresponding to 1 in G, so it is more proper to say that the fiber F is a G-torsor). Thus, the (physical) principal gauge bundle is related to the (mathematical) principal G-bundle but has more structure. Its most basic property as a fiber bundle is the "projection to the base space" π : P → M, which defines the "vertical" directions on P (those lying within the fiber π−1(p) over each point p in M). As a gauge bundle it has a left action of G on P which respects the fiber structure, and as a principal bundle it also has a right action of G on P which also respects the fiber structure and commutes with the left action.
The left action of the structure group G on P corresponds to a mere change of coordinate system on an individual fiber. The (global) right action Rg : P → P for a fixed g in G corresponds to an actual automorphism of each fiber and hence to a map of P to itself. In order for P to qualify as a principal G-bundle, the global right action of each g in G must be an automorphism with respect to the manifold structure of P with a smooth dependence on g—i. e., a diffeomorphism P × G → P.
The existence of the global right action of the structure group picks out a special class of right invariant geometric objects on P—those which do not change when they are pulled back along Rg for all values of g in G. The most important right invariant objects on a principal bundle are the right invariant vector fields, which form an ideal $\mathfrak{E}$ of the Lie algebra of infinitesimal diffeomorphisms on P. Those vector fields on P which are both right invariant and vertical form an ideal $V\mathfrak{E}$ of $\mathfrak{E}$, which has a relationship to the entire bundle P analogous to that of the Lie algebra $\mathfrak{g}$ of the gauge group G to the individual G-torsor fiber F.
The "field theory" of interest is defined in terms of a set of "fields" (smooth maps into various vector spaces) defined on a principal gauge bundle P. Different fields carry different representations of the gauge group G, and perhaps of other symmetry groups of the manifold such as the Poincaré group. One may define the space Pl of local polynomials in these fields and their derivatives. The fundamental Lagrangian density of one's theory is presumed to lie in the subspace Pl0 of polynomials which are real-valued and invariant under any unbroken non-gauge symmetry groups. It is also presumed to be invariant not only under the left action (passive coordinate transformations) and the global right action of the gauge group but also under local gauge transformations: pullback along the infinitesimal diffeomorphism associated with an arbitrary choice of right invariant vertical vector field $\epsilon \in V\mathfrak{E}$.
Identifying local gauge transformations with a particular subspace of vector fields on the manifold P equips us with a better framework for dealing with infinite-dimensional infinitesimals: differential geometry and the exterior calculus. The change in a scalar field under pullback along an infinitesimal automorphism is captured in the Lie derivative, and the notion of retaining only the term linear in the scale of the vector field is implemented by separating it into the inner derivative and the exterior derivative. (In this context, "forms" and the exterior calculus refer exclusively to degrees of freedom which are dual to vector fields on the gauge bundle, not to degrees of freedom expressed in (Greek) tensor indices on the base manifold or (Roman) matrix indices on the gauge algebra.)
The Lie derivative on a manifold is a globally well-defined operation in a way that the partial derivative is not. The proper generalization of Clairaut's theorem to the non-trivial manifold structure of P is given by the Lie bracket of vector fields and the nilpotence of the exterior derivative. And we obtain an essential tool for computation: the generalized Stokes theorem, which allows us to integrate by parts and drop the surface term as long as the integrand drops off rapidly enough in directions where there is an open boundary. (This is not a trivial assumption, but can be dealt with by renormalization techniques such as dimensional regularization as long as the surface term can be made gauge invariant.)
## BRST formalism
In theoretical physics, the BRST formalism is a method of implementing first class constraints. The letters BRST stand for Becchi, Rouet, Stora, and (independently) Tyutin, who discovered this formalism. It is a sophisticated method for dealing with quantum physical theories with gauge invariance. For example, BRST methods are often applied to gauge theory and quantized general relativity.
### Quantum version
The space of states is not a Hilbert space (see below). This vector space is both Z2-graded and R-graded, and may be regarded as a Z2 × R-graded vector space. The former grading is the parity, which is either even or odd. The latter grading is the ghost number; it is R rather than Z because, unlike the classical case, nonintegral ghost numbers can occur. Operators acting upon this space are also Z2 × R-graded in the obvious manner. In particular, Q is odd and has ghost number 1.
Let Hn be the subspace of all states with ghost number n. Then, Q restricted to Hn maps Hn to Hn+1. Since Q2 = 0, we have a cochain complex describing a cohomology.
The physical states are identified as elements of the cohomology of the operator Q, i.e. as vectors in Ker(Qn+1)/Im(Qn), where Qn denotes Q restricted to Hn. The BRST theory is in fact linked to the standard resolution in Lie algebra cohomology.
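In finite dimensions this physical-state counting can be made concrete: for a nilpotent matrix Q, dim Ker(Q) − dim Im(Q) is the dimension of the cohomology. The following toy sketch (the operator and state space are our own illustration) performs the count:

```python
import numpy as np

# A toy nilpotent "BRST charge" on a 4-dimensional state space:
# Q sends the basis state e0 to e1 and annihilates everything else, so e0 is
# not BRST closed, e1 is BRST exact, and e2, e3 represent physical states.
Q = np.zeros((4, 4))
Q[1, 0] = 1.0
assert np.allclose(Q @ Q, 0)   # nilpotence

rank = np.linalg.matrix_rank(Q)
dim_ker = Q.shape[1] - rank    # BRST closed states
dim_im = rank                  # BRST exact (null) states
print(dim_ker - dim_im)        # dimension of Ker(Q)/Im(Q): prints 2
```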
Recall that the space of states is Z2-graded. If A is a pure graded operator, then the BRST transformation maps A to [Q, A), where [ , ) is the supercommutator. BRST-invariant operators are operators for which [Q, A) = 0. Since the operators are also graded by ghost number, this BRST transformation also forms a cohomology for the operators, since [Q, [Q, A)) = 0.
Although the BRST formalism is more general than the Faddeev-Popov gauge fixing, in the special case where it is derived from it, the BRST operator is also useful to obtain the right Jacobian associated with constraints that gauge-fix the symmetry.
The BRST is a supersymmetry. It generates the Lie superalgebra with a zero-dimensional even part and a one-dimensional odd part spanned by Q. [Q, Q) = {Q, Q} = 0, where [ , ) is the Lie superbracket (i.e. Q2 = 0). This means Q acts as an antiderivation.
Because Q is Hermitian and its square is zero but Q itself is nonzero, this means the vector space of all states prior to the cohomological reduction has an indefinite norm! This means it is not a Hilbert space.
For more general flows which can't be described by first class constraints, see Batalin–Vilkovisky formalism.
### Example
For the special case of gauge theories (of the usual kind described by sections of a principal G-bundle) with a quantum connection form A, a BRST charge (sometimes also a BRS charge) is an operator usually denoted Q.
Let the $\mathfrak{g}$-valued gauge fixing conditions be $G=\xi\partial^\mu A_\mu$, where ξ is a positive number determining the gauge. There are many other possible gauge fixings, but they will not be covered here. The fields are the $\mathfrak{g}$-valued connection form A, two $\mathfrak{g}$-valued scalar fields with fermionic statistics, b and c, and a $\mathfrak{g}$-valued scalar field with bosonic statistics, B. The field c deals with the gauge transformations, whereas b and B deal with the gauge fixing. There actually are some subtleties associated with the gauge fixing due to Gribov ambiguities, but they will not be covered here.
$QA=Dc$
where D is the covariant derivative.
$Qc= \tfrac{i}{2}[c,c]_L$
where [ , ]L is the Lie bracket, NOT the commutator.
$QB=0$
$Qb=B$
Q is an antiderivation.
The BRST Lagrangian density
$\mathcal{L}=-\frac{1}{4g^2} \operatorname{Tr}[F^{\mu\nu}F_{\mu\nu}]+{1\over 2g^2} \operatorname{Tr}[BB]-{1\over g^2} \operatorname{Tr}[BG]-{\xi\over g^2} \operatorname{Tr}[\partial^\mu b D_\mu c]$
While the Lagrangian density isn't BRST invariant, its integral over all of spacetime, the action, is.
The operator Q is defined as
$Q = c^i \left(L_i-\frac 12 {{f_{i}}^j}_k b_j c^k\right)$
where $c^i,b_i$ are the Faddeev–Popov ghosts and antighosts (fields with ghost number +1 and −1, respectively), Li are the infinitesimal generators of the Lie group, and ${{f_{i}}^j}_k$ are its structure constants.
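As a concrete check (our own construction; the ordering and sign conventions below are one common choice, agreeing with the formula above up to operator ordering and vanishing trace terms), one can realize Q as a finite matrix for $\mathfrak{g} = su(2)$ acting in its adjoint representation, build the ghost sector from three fermionic modes, and verify Q² = 0 numerically:

```python
import numpy as np
from functools import reduce

def kron(*ms):
    return reduce(np.kron, ms)

# Three fermionic ghost modes via a Jordan-Wigner construction.
I2, Z = np.eye(2), np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])          # single-mode fermionic annihilator

def mode_op(op, k, n=3):
    """op acting on mode k, with Z strings on earlier modes (Jordan-Wigner)."""
    return kron(*([Z] * k + [op] + [I2] * (n - k - 1)))

b = [mode_op(a, k) for k in range(3)]           # antighosts: {c[i], b[j]} = delta_ij
c = [bk.conj().T for bk in b]                   # ghosts (creation operators)

# su(2) structure constants f_ijk = epsilon_ijk and adjoint generators
# (L_i)_jk = -f_ijk, satisfying [L_i, L_j] = f_ijk L_k.
f = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    f[i, j, k], f[j, i, k] = 1.0, -1.0
L = [-f[i] for i in range(3)]

# BRST charge on (adjoint module) x (ghost Fock space), a 24 x 24 matrix:
# Q = c^i L_i - (1/2) f_ijk c^i c^j b_k.
Q = sum(np.kron(L[i], c[i]) for i in range(3))
Q -= 0.5 * sum(f[i, j, k] * np.kron(np.eye(3), c[i] @ c[j] @ b[k])
               for i in range(3) for j in range(3) for k in range(3))

print(np.abs(Q @ Q).max())   # nilpotence: Q squares to zero
```

Nilpotence here hinges on both the commutation relations [L_i, L_j] = f_ijk L_k and the Jacobi identity; dropping either ingredient makes Q² nonzero.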
## References
### Citations
1. ^ J. M. Figueroa-O'Farrill and T. Kimura, Geometric BRST Quantization, Commun. Math. Phys. (1991)
2. ^ B. Kostant and S. Sternberg, Symplectic reduction, BRS cohomology, and infinite-dimensional Clifford algebras, Ann. Phys., 176 (1987), no. 1, 49-113
### Textbook treatments
• Chapter 16 of Peskin & Schroeder (ISBN 0-201-50397-2 or ISBN 0-201-50934-2) applies the "BRST symmetry" to reason about anomaly cancellation in the Faddeev–Popov Lagrangian. This is a good start for QFT non-experts, although the connections to geometry are omitted and the treatment of asymptotic Fock space is only a sketch.
• Chapter 12 of M. Göckeler and T. Schücker (ISBN 0-521-37821-4 or ISBN 0-521-32960-4) discusses the relationship between the BRST formalism and the geometry of gauge bundles. It is substantially similar to Schücker's 1987 paper.
### Primary literature
Original BRST papers: